API Opus 4.6 will tell you it's still 2025, admit it's wrong, then revert back to being convinced it's 2025 as it nears its context limit.
I'll go so far as to say LLM agents are AGI-lite, but saying we "just need the orchestration layer" is like saying, ok, we have a couple of neurons, now we just need the rest of the human.
Manual orchestration is a brittle crutch IMO - you don't get to the moon by using longer and longer ladders. A powerful model should, in theory, be able to self-orchestrate with basic tools and an environment. The thing is, it might also be as expensive as a human to run - from a token AND a liability perspective.
And yet another way to look at it: maybe current LLM agents are AGI, but it turns out that AGI in this form isn't actually that useful because of its many limitations, and solving those limitations will be a slow and gradual process.
IMO if you haven't seen a (SOTA) agent veer off a plan and head towards a landmine, you haven't used them long enough. And now with Ralph loops, etc., it will just bury it. ClawdBot/MoltBot/OpenClaw is what, ~2 months old, so "hasn't happened yet" is a bit early to call.
That said, if model performance/accuracy continues to improve exponentially you will be right.
I've seen them veer off a plan, and I've seen the posts about an agent accidentally deleting ~, but neither of those meets the definition of the lethal trifecta. I'm also not saying it can't happen - I count myself among the ones that are waiting for it to happen. The "we" was meant literally.
That being said, I still think it's interesting that it hasn't happened yet. The longer this keeps being true, the lower my prior for this prediction will sink.
> emailed a hallucinated suicide note to all my coworkers and then formatted my drives problem ... most people are willing to accept
Are they though? I mean, I'm running all my agents in -yolo mode, but I would never trust them to stay on track for more than one session. There's no real solution to agent memory (yet), so it's incredibly lossy, and so are fast/cheap subagents, and so are agents near their context limits. It's easy to see how "clean up my desktop" ends with a sub-subagent at its context limit deciding to format your hard drive.
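For what it's worth, even in yolo mode you can put a dumb denylist between the agent and the shell, so a lost sub-subagent can't reach the truly destructive stuff. Very rough sketch (Python; `run_tool` and the patterns are made up for illustration, not any real agent API):

```python
import re
import subprocess

# Things no unattended subagent should ever run, however deep the chain goes.
# These patterns are illustrative, not exhaustive.
DENYLIST = [
    r"\brm\s+-rf\s+(/|~)",        # recursive delete of root or home
    r"\bmkfs(\.\w+)?\b",          # formatting a filesystem
    r"\bdd\b.*\bof=/dev/",        # raw writes to a block device
    r"\bgit\s+push\b.*--force",   # force-pushing over shared history
]

def run_tool(command: str, auto_approve: bool = False) -> str:
    """Run a shell command requested by an agent, refusing obviously destructive ones."""
    for pattern in DENYLIST:
        if re.search(pattern, command):
            # Return the refusal to the agent instead of silently executing.
            return f"REFUSED: {command!r} matches guarded pattern {pattern!r}"
    if not auto_approve:
        answer = input(f"Agent wants to run {command!r} -- allow? [y/N] ")
        if answer.strip().lower() != "y":
            return "REFUSED: user declined"
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.stdout + result.stderr

if __name__ == "__main__":
    print(run_tool("echo cleaning up my desktop", auto_approve=True))
    print(run_tool("mkfs.ext4 /dev/sda1", auto_approve=True))  # refused
```

It obviously doesn't make yolo mode safe, but it narrows "lossy" down to annoying rather than "formatted my drives".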
Maybe I'm just not working on complex or big enough projects, but I haven't encountered a feature that couldn't be implemented in one or two context windows. Or, using vanilla Claude Code, with a multi-phase plan doc, a couple of subagents, and a final verification pass with Codex.
I guess maybe I'm doing the orchestration manually, but I always find there are tons of decisions that need to be made in the middle of large plan implementations.
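To be concrete, "doing the orchestration manually" for me is basically a loop over a plan doc with one fresh context window per phase, then a separate verification pass. Rough sketch (Python; `run_agent` is a stand-in for however you actually invoke Claude Code/Codex, and the PLAN.md layout is just an example):

```python
from pathlib import Path

# Assumes a PLAN.md along the lines of:
#   ## Phase 1: extract the billing module
#   ## Phase 2: port the API routes
#   ## Phase 3: tests green, behaviour diffed against main
PLAN_PATH = Path("PLAN.md")

def run_agent(prompt: str) -> str:
    """Stand-in for a real agent call (Claude Code, Codex, a raw API request, ...)."""
    print(f"--- would send to agent ---\n{prompt}\n")
    return "(summary of what the agent changed)"

plan = PLAN_PATH.read_text()
phases = [line.removeprefix("## ").strip()
          for line in plan.splitlines() if line.startswith("## ")]

for i, phase in enumerate(phases, start=1):
    # One phase per fresh context window; the plan doc is the only shared memory.
    summary = run_agent(
        f"Read PLAN.md. Implement only phase {i}: {phase}. "
        f"When done, report a short summary of what changed."
    )
    plan += f"\n\n### Phase {i} notes\n{summary}"
    PLAN_PATH.write_text(plan)

# Final verification pass with a different model, as described above.
run_agent("Read PLAN.md and review the diff; flag anything that deviates from the plan.")
```

The decisions in the middle happen when I edit PLAN.md between phases, not inside any agent.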
Your refactor example terrifies me, because the best part of a refactor is cleaning out all the band-aid workarounds and obsolete business logic you didn't even know existed. I can't see how an agent swarm would figure that out unless you provide a giga-spec file containing all current business knowledge. And if you don't spec it, the agents will just eagerly bake those inefficiencies and problems into your migrated app.
On the other hand, the average human has a context window of 2.5 petabytes that's streaming inference 24/7 while consuming the energy equivalent of a couple of sandwiches per day. Oh, and it can actually remember things.
Citation desperately needed? Last I checked, humans could not hold the entirety of Wikipedia in working memory, and that's a mere 24 GB. Our GPU might handle "2.5 petabytes", but we're not writing all that to disk - in fact, most people have terrible memory of basically everything they see and do. A one-trick visual-processing pony is hardly proof of intelligence.
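Just to put those two numbers side by side (both are rough, hand-wavy estimates, not measurements):

```python
# Back-of-the-envelope: how many compressed English Wikipedias would fit in the
# claimed 2.5 PB of human memory? Both figures are rough estimates.
PB = 1024 ** 5
GB = 1024 ** 3

claimed_brain_capacity = 2.5 * PB   # the figure from the parent comment
wikipedia_text = 24 * GB            # compressed English Wikipedia text, roughly

print(round(claimed_brain_capacity / wikipedia_text))  # ~109,000
```

So the claim is roughly a hundred thousand Wikipedias' worth of capacity, which is exactly why I want a citation.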
I think the idea is that we may not store 2.5 petabytes of facts like Wikipedia.
But we do store a ton of “data” in the form of innate knowledge, memories, etc.
I don’t think human memory/intelligence maps cleanly to computer terms though.