Hacker News | wowamit's comments

Is finding the right stocks to invest in an LLM problem? Language models aren't the right fit, I would presume. It would also be insightful to compare this with traditional ML models.

> Regardless of which model you're using, you may notice that Claude frequently ignores your CLAUDE.md file's contents.

This is news to me. And at the same time it isn't. Without knowledge of how the models actually work, most prompting is guesswork at best. You have no real control over models via prompts.


It's really frustrating to see every innovation eventually crammed into advertisements. The brightest minds spend most of their energy figuring out effective ways of serving ads.

The brightest minds figuring out how to manipulate the beliefs of the masses is a time-honored tradition.

I was always afraid to switch from Gmail, knowing the impact it would have. But I switched to Fastmail this year and my experience has been comparatively frictionless. My fear was unfounded.

That's great info. I need some encouragement to make the switch.

With so many settings spread across multiple sections, especially in Workspace accounts, it's challenging to keep track of how existing settings are affected by each new addition. I generally review these regularly, yet find surprises now and then.

Knowing which setting does what in Gmail is becoming more difficult by the day.


The way I read it, beads steers agents to use the .beads/ folder to stay in sync across machines. So my understanding is that a dedicated branch for beads data would break the system.

But wouldn't that dedicated branch, pushed to origin, also work for staying synced across multiple machines?

Depends what you mean by “synced”—do you want your beads state to be coupled with commits (eg: checking out an old commit also shows you the beads state at that snapshot)? Using a separate branch would decouple this. I think the coupling is a nice feature, but it isn’t a feature that other bug trackers have, so using a separate branch would make beads more like other bugtrackers. If you see the coupling as noise, though, then it sounds like that is what you want.

The way I understand this, when the agent runs `bd onboard` at startup, it gets the instructions from beads, which might refer to data files in the beads directory. Keeping them in sync via a separate branch would be an unnecessary overhead. Right?

I don't see it as extra overhead - it just changes the git one-liner they use to push and pull their issue tracking content by a few characters.

I like the idea of keeping potentially noisy changes out of my main branch history, since I look at that all the time.
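The mechanics of that are just ordinary git. Here's a sketch of the dedicated-branch option; note the branch name `beads-data` and the JSONL file layout are my assumptions for illustration, not beads' documented format:

```shell
# Sketch: keep .beads/ on a dedicated branch so tracker churn stays out
# of main's history. Branch name and file contents here are assumptions.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git config user.email "demo@example.com"
git config user.name "demo"
git commit -q --allow-empty -m "initial commit on main"

# Dedicated branch holding only the tracker data.
git checkout -q -b beads-data
mkdir -p .beads
echo '{"id":"bd-1","status":"open"}' > .beads/issues.jsonl
git add .beads
git commit -q -m "sync beads state"

# Syncing across machines is then the usual one-liner, e.g.:
#   git push origin beads-data   (and `git pull origin beads-data` elsewhere)

git checkout -q main   # main's history never sees the .beads churn
```

The trade-off raised upthread still applies: with this layout, checking out an old commit on main no longer shows you the beads state at that snapshot.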


You are right. I dug through the document some more. The setup, as mentioned for protected branches [1], should ideally work without much overhead. It does suggest merging back to main, but the FAQ also mentions that the frequency can be decided individually.

[1] https://github.com/steveyegge/beads/blob/main/docs/PROTECTED...


I went through the whole readme first and kept wondering what problem the system aims to address. I understood that it is a distributed issue tracker. But how can that lead to a memory upgrade? It also hints at replacing markdown for plans.

So is the issue the format, or a lack of structure that a local database could bring?


LLMs famously don't have a memory - every time you start a new conversation with them you are effectively resetting them to a blank slate.

Giving them somewhere to jot down notes is a surprisingly effective way of working around this limitation.

The simplest version of this is to let them read and write files. I often tell my coding agents "append things you figure out to notes.md as you are working" - then in future sessions I can tell them to read or search that file.

Beads is a much more structured way of achieving the same thing. I expect it works well partly because LLM training data makes them familiar with the issue/bug tracker style of working already.


I’ve been using beads for a few projects and I find it superior to spec kit or any other form of structured workflow.

I also find it faster to use. I tell the agent the problem and ask it to write a set of tasks using beads; it creates the tasks along with the "depends on" tree structure. Then I tell it to work on one task at a time and require my review before continuing.

The added benefit is the agent doesn’t need to hold so much context in order to work on the tasks. I can start a new session and tell it to continue the tasks.

Most of this can work without beads but it’s so easy to use it’s the only spec tool I’ve found that has stuck.


Do you find that it interferes with coding agents’ built-in task management features? I tried beads a few weeks ago and Claude exhibited some strange behavior there. I’ll have to try it again, everything is changing so quickly.

Is there a good way to use beads without pushing your .beads dir upstream?

Add the .beads directory to .gitignore and always make edits on that same machine.

Is there a good way to put it in a separate repo?

Thanks! It is the structure that matters here, then. Just like you, I ask my agents to keep updating a markdown file locally and use it as a reference during working sessions. This mechanism has worked well for me.

I even occasionally ask agents to move some learnings back to my Claude.md or Agents.md file.

I'm curious whether complicating this behaviour with a database integration would further abstract the work in progress. Are we heading down a slippery slope?


Using Claude Code recently, I was quite impressed by the TODO tool. It seemed like such a banal solution to the problem of keeping agents on track. But it works so well, and it allows even much smaller models to do well on long-horizon tasks.

Even more impressive lately is how good the latest models are without anything keeping them on track!


I often have them append to notes, too, but also often ask them to deduplicate those notes, without which they can become quite redundant. Maybe redundancy doesn't matter to the AI because I've got tokens to burn, but it feels like the right thing to do. Particularly because sometimes I read the notes myself.
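For what it's worth, the deduplication step doesn't have to go through the model at all; a one-liner can drop exact repeats while keeping first-seen order. The `notes.md`-style file and its contents below are made up for illustration:

```shell
# Simulate an agent appending findings to a notes file, including a repeat.
notes=$(mktemp)
echo "integration tests need the API server running locally" >> "$notes"
echo "config lives in settings.toml, not env vars" >> "$notes"
echo "integration tests need the API server running locally" >> "$notes"

# Drop exact duplicate lines, preserving first-seen order.
awk '!seen[$0]++' "$notes" > "$notes.deduped"
mv "$notes.deduped" "$notes"

wc -l < "$notes"   # prints 2
```

This only catches verbatim duplicates, of course; near-duplicates phrased differently still need the model (or a human) to merge.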

A much-needed project. Making yourself invisible to such privacy-invasive devices will be the need of the day. Of the two approaches you mentioned, blocking/jamming the specific wireless traffic would be pretty interesting, if possible.

> blocking/jamming the specific wireless traffic would be pretty interesting, if possible.

And probably highly illegal.


At the end of the day, legality is what society settles on as an acceptable way of running itself, when all the stakeholders reluctantly agree or at least don't protest too much. Right now the 'costs' are sufficiently low that no one cares. As with most things, I suspect there is a threshold (though likely much higher than I had previously anticipated) past which a normal person would be unwilling to go on as if nothing had changed.

Nobody cares? Here (Sweden) it's illegal to possess outside of military use. The US seems to hand out fines between $20,000 and $200,000 for using one, with potential imprisonment for selling one. Even overboosting Wi-Fi routers for better range gets people in trouble.

It's among the most illegal things you could easily do with basic electronics equipment.

Why? Part of it is historical; these devices used to be complicated to build, so being in possession of one got you in trouble with the anti-terrorism squad.

These days, it's because they can block emergency services, police and military radio, and burglar alarms.

They may be lenient with a nerd playing with a router, but the law is not on your side when push comes to shove.

https://legalclarity.org/are-signal-jammers-illegal-in-the-u...


This is overstating the case. In the US you can buy such devices (usually from AliExpress, but Amazon has devices capable of some jamming/deauth). They are illegal to use for intentional jamming of other people's equipment. However, unless you go around jamming important safety equipment or making local hams angry, nothing will happen. The FCC has its hands full and can't even seem to address persistent issues on the ham bands until they get really bad.

Deauth attacks were common in the Google Glass days. Nobody got arrested, as far as I can remember.

Yeah, true. Implementing this would be tricky.

Implementing it is trivial. You can just unlock the radio's power limits from OpenWrt and drown out every 2.4-2.5 GHz device in a 100 m radius.

Doing it targeted is more difficult, since Bluetooth does frequency hopping, but you could probably reverse the frequency-hopping algorithm to target Bluetooth specifically and force high packet loss.

This is still illegal for radio-jamming reasons, and also patent infringement, since a misbehaving Bluetooth device has not gotten permission to use the Bluetooth patents held by the SIG.


I'll feel much safer when I'm visible only to every single ATM camera, traffic camera, random smartphone camera and doorbell camera, but not to people's glasses.

Same here. At first, I thought it was a single-file database, which would have been even more commendable.


That should have been possible, since it's based on SQLite.

I am curious to better understand the benchmarks against plain SQLite, especially under typical load. Any added latency would be unnecessary overhead.


How do you use simple SQLite without any code in a webapp where the database is on the server?


You can't. I am curious if this adds another layer of latency.

Can easily go either way.

If you're comparing in-process SQLite to talking to SQLite over HTTP, you'll probably see a small penalty in any language. When co-located on the same machine, you can probably expect something like ~5 ms for JS (just for the event loop to do the IO and get back to you).

However, if you have multiple processes reading and writing from the same DB it may actually aid latency due to congestion.

I ran some benchmarks (TrailBase author here and big fan of PocketBase): https://trailbase.io/reference/benchmarks#insertion-benchmar..., where you can see, e.g. the single-process drizzle (i.e. JS with in-process SQLite) performance vs over-HTTP for TrailBase. PocketBase should be similar when not fully loaded. There's also some concrete latency percentile numbers when at full-tilt: https://trailbase.io/reference/benchmarks#read-and-write-lat.... On my machine you can expect p50 to be around 15-20ms.
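For anyone who wants a rough local baseline before comparing against over-HTTP numbers, here is a quick sketch using the sqlite3 CLI (assuming it is installed; the schema and batch size are arbitrary, and this times one process doing batched inserts, not the per-request HTTP hop):

```shell
# Micro-benchmark sketch: 1000 batched inserts through in-process SQLite.
set -e
db=$(mktemp -d)/bench.db
sqlite3 "$db" 'CREATE TABLE kv(id INTEGER PRIMARY KEY, v TEXT);'

# Build one transaction so we time SQLite itself, not process startup.
sql='BEGIN;'
for i in $(seq 1 1000); do
  sql="$sql INSERT INTO kv(v) VALUES('row$i');"
done
sql="$sql COMMIT;"

start=$(date +%s%N)          # GNU date, nanosecond resolution
sqlite3 "$db" "$sql"
elapsed_ms=$(( ($(date +%s%N) - start) / 1000000 ))
echo "1000 inserts: ${elapsed_ms} ms"
```

Whatever number this prints, serving the same writes over HTTP adds at least the request round-trip and serialization on top.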

