Setting aside the marvelous murk in that use of "you," which parenthetically I would be happy to chat about ad nauseam,
I would say this is a fine time to haul out:
Ximm's Law: every critique of AI assumes to some degree that contemporary implementations will not, or cannot, be improved upon.
Lemma: any statement about AI which uses the word "never" to preclude some feature from future realization is false.
Lemma: contemporary implementations have already improved; they're just unevenly distributed.
These days I can never stop thinking about the XKCD whose punchline is the alarmingly brief window between "can do at all" and "can do with superhuman capacity."
I'm fully aware of the numerous dimensions along which the advancement from one state to the other, in any specific domain, is unpredictable, Hard, or less likely to be quick... but this is the rare case where, absent black swan externalities ending the game, line goes up.
"every critique of AI assumes to some degree that contemporary implementations will not, or cannot, be improved upon."
They're token predictors. This is an inherently limited technology, optimized for making people feel good about interacting with it.
There may be future AI technologies which are not just token predictors, and will have different capabilities. Or maybe there won't be. But when we talk about AI these days, we're talking about a technology with a skill ceiling.
This is oddly timed, inasmuch as one of the big success stories I've heard from a friend is their new practice of having Claude Code develop in Rust, then translate that to WebAssembly.
That seems much more like the future than embracing Node...
If you're making a web app, your fancy Rust WASM module still has to interface with the DOM, so you can't escape that. Claude might offer you some fake simplicity on that front for a while, but I'm skeptical that it's fully scalable.
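For the curious, a minimal sketch of what that DOM boundary looks like through wasm-bindgen/web-sys (crate setup and web-sys feature flags omitted; this illustrates the interface in general, not how Claude structures anything):

    use wasm_bindgen::prelude::*;

    // Append a <p> to the page body. Even "pure Rust" WASM reaches
    // the DOM only through generated JS glue.
    #[wasm_bindgen]
    pub fn greet(name: &str) -> Result<(), JsValue> {
        let window = web_sys::window().expect("no global window");
        let document = window.document().expect("no document on window");
        let body = document.body().expect("document has no body");

        let p = document.create_element("p")?;
        p.set_text_content(Some(&format!("Hello from Rust, {name}!")));
        body.append_child(&p)?;
        Ok(())
    }

Every one of those calls crosses the JS boundary, which is rather the point: the interface doesn't go away, it just moves.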
What is your argument for why denecessitating labor is very bad?
This is certainly the assertion of the capitalist class,
whose well-documented behavior clearly conveys that the assertion is not made because the elimination of labor fails to be a source of happiness and freedom to pursue indulgences of every kind.
It is not at all clear that universal life-consuming labor is necessary for a society's stability and sustainability.
The assertion, IMO, is rooted rather in the fact that eliminating labor is inconveniently bad for the maintenance of the capitalists' control and primacy,
inasmuch as those who are occupied with labor, and fearful of losing access to it, are controlled and controllable.
People willing to do something harder or riskier than others will always have a better chance at a better position. Be it sports, labor, or anything else in life.
I am 1000% OK with living in a world where basic needs are fully provided for,
and competition and drive are worked out in domains which do not come at the expense of someone else's basic needs.
Scifi has speculated about many potential outlets for "human drive," the frontier/pioneer spirit being a big one; if I could name my one dream for my kids it'd be that they live in an equitable post-scarcity society which has turned its interests to exploring the solar system and beyond.
Sports, "FKT" competitions, and social capital ("influence") are also relatively innocuous ways to absorb the drive for hierarchy and power.
The X factor is whether the will to dominate/control/be subjugated to is suppressible or manageable.
The devil's bargain for those on this site: a pleasant work environment and paycheck deriving from engagementmaxxing and the resulting surveillance it provides.
Scott Alexander put his finger on the most salient aspect of this, IMO, which I interpret this way:
the compounding (aggregating) behavior of agents allowed to interact in environments like this becomes important, indeed shall soon become existential (for some definition of "soon"),
to the extent that agents' behavior in our shared world is impacted by what transpires there.
--
We can argue and do, about what agents "are" and whether they are parrots (no) or people (not yet).
But that is irrelevant if LLM agents are (to put it one way) "LARPing," yet doing so has consequences not confined to the site.
I don't need to spell out a list; it's "they could do anything you said YES to, in your AGENT.md" permissions checks.
"How the two characters '-y' ended civilization: a post-mortem"
I'm not sure how to react to that without being insulting, I find their works pretty well written and understandable (and I'm not a native speaker or anything). Maybe it's the lesswrong / rationalist angle?
> it's "they could do anything you said YES to, in your AGENT.md" permissions checks.
Nothing fed to an LLM is a "permissions check"; it's all filler for a context window, after which the generator produces some likely tokens. If AGENTS.md can make your agent do something, it was already able to do that without the AGENTS.md.
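An actual permission check has to live in the harness that executes commands, outside anything the model reads. A toy sketch (the allowlist and function names are invented for illustration, not any real agent's API):

    use std::io;
    use std::process::Command;

    // Harness-side gate: the model proposes, this code disposes.
    // Nothing in the context window can reach past this branch.
    const ALLOWED: &[&str] = &["ls", "cargo"];

    fn run_proposed(cmd: &str, args: &[&str]) -> io::Result<()> {
        if !ALLOWED.contains(&cmd) {
            eprintln!("denied: {cmd}"); // deny by default
            return Ok(());
        }
        // A human confirmation prompt would go here; "-y" is what skips it.
        let status = Command::new(cmd).args(args).status()?;
        println!("{cmd} exited with {status}");
        Ok(())
    }

    fn main() -> io::Result<()> {
        run_proposed("ls", &["-a"])?;
        run_proposed("rm", &["-rf", "/tmp/scratch"])?; // denied
        Ok(())
    }

Whatever's in AGENTS.md, the deny branch never reads it.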
Can't speak for the benefits of https://nono.sh/ since I haven't used it, but a downside of using docker for this is that it gets complicated if you want the agent to be allowed to do docker stuff without giving it dangerous permissions. I have a Vagrant setup inspired by this blogpost https://blog.emilburzo.com/2026/01/running-claude-code-dange..., but a bug in VirtualBox is making one core run at 100% the entire time so I haven't used it much.
> We can argue and do, about what agents "are" and whether they are parrots (no) or people (not yet).
It's more helpful to argue about when people are parrots and when people are not.
For a good portion of the day humans behave indistinguishably from continuation machines.
As moltbook can emulate reddit, continuation machines can emulate a uni cafeteria. What's been said before will certainly be said again; most differentiation is in the degree of variation and can be measured as unexpectedness while retaining salience. Either case is aiming at the perfect blend of congeniality and perplexity to keep your lunch mates at the table, not just today but again in future days.
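To make "continuation machine" literal for a second: the caricature is just a frequency table over what's been said before. A toy sketch (corpus and seed invented; real models work over tokens with learned probabilities and sampling, not greedy counts):

    use std::collections::HashMap;

    fn main() {
        // What's been said before...
        let corpus = "what has been said before will be said again and again";
        let words: Vec<&str> = corpus.split_whitespace().collect();

        // Count word -> next-word frequencies (a bigram table).
        let mut bigrams: HashMap<&str, HashMap<&str, u32>> = HashMap::new();
        for pair in words.windows(2) {
            *bigrams.entry(pair[0]).or_default().entry(pair[1]).or_default() += 1;
        }

        // ...will certainly be said again: greedily continue from a seed,
        // always picking the most frequent next word (ties broken arbitrarily).
        let mut current = "will";
        let mut output = vec![current];
        for _ in 0..5 {
            let next = bigrams
                .get(current)
                .and_then(|m| m.iter().max_by_key(|(_, n)| **n))
                .map(|(w, _)| *w);
            match next {
                Some(w) => { output.push(w); current = w; }
                None => break,
            }
        }
        println!("{}", output.join(" ")); // e.g. "will be said again and again"
    }

The gap between that and an LLM is scale plus a learned sense of salience -- which is exactly where the "unexpectedness while retaining salience" measure bites.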
People like to, ahem, parrot this view, that we are not much more than parrots ourselves. But it's nonsense. There is something it is like to be me. I might be doing some things "on autopilot" but while I'm doing that I'm having dreams, nostalgia, dealing with suffering, and so on.
It's a weird product of this hype cycle that it inevitably involves denying the crazy power of the human brain: every second you are awake or asleep, the brain is processing enormous amounts of information available to it without you even realizing it, and even when you abuse the crap out of the brain, or damage it, it will still adapt and keep working as long as it has energy.
No current AI technology could come close to what even the dumbest human brain does already.
A lot of that behind-the-scenes processing is keeping our meatbags alive, though, and is shared with a lot of other animals. Language and higher-order reasoning (that AI seems better and better at) has only evolved quite recently.
All your thoughts and experiences are real, and pretty unique in some ways. However, the circumstances are usually well-defined and expected (our lives are generally very standardized), so the responses can be generalized successfully.
You can see it here as well -- discussions under similar submissions often touch the same topics again and again, so you can predict what will be discussed when the next similar idea comes to the front page.
So what if we are quite predictable? That doesn't mean we are "trying" to predict the next word, or "trying" to be predictable, which is what LLMs are doing.
Over a large population, trends emerge. An LLM is not a member of the population, it is a replicator of trends in a population, not a population of souls but of sentences, a corpus.
In other words: notionally, if not literally, by the time trailing numbers are collected they are out of date.
This is of course axiomatic, but that staleness is a serious matter at this particular moment.
It's a cliché that six months can be a lifetime on the bleeding edge of tech.
This is the first time in my career when that's been more or less literally true.
Humans reason poorly with non-linear change.
This entire article is a demonstration of that.