This is correct. It will require non-market forces to regulate soft-landings for humans. We may see a wave of "job-preserving" legislation in the coming years but these will eventually be washed away in favor of taxing the AI economy.
I'm genuinely curious, is it: a) the writing style you can't stand, or b) the fact that this piece tripped your "this is written by AI" detector, and it's AI-written stuff you can't stand? And what's the % split between the two?
(I find there's a growing push-back against being fed AI-anything, so when this is suspected it seems like it generates outsized reactions)
I’m pro-AI in general, but I hate the AI writing style that has gotten especially bad lately. It comes down to two things, neither of which is anti-AI sentiment.
Firstly, I find the tone of voice immensely irritating. It sounds like a mixture of LinkedIn broetry, a TEDx talk, and marketing speak. That’s irritating when a human does it, but it’s especially bad when AI applies it in cases where it’s jarringly wrong for the topic at hand.
I recently saw this example:
> This isn’t just “nicer syntax” — it’s a fundamental shift in how your software thinks.
It was talking about datetime representation in software development but it has the tone of voice of somebody earnestly gesticulating on stage while explaining how they are going to solve world hunger. This is like the uncanny valley except instead of it making me uneasy it just pisses me off.
Secondly, it’s so incredibly overused. You’ll see “it’s not X—it’s Y” three times in three consecutive paragraphs. It’s irritating the first time, so when I see it throughout a whole article, I get an exceptionally low opinion of whoever published it.
Having said that, this article wasn’t particularly bad.
The saccharine writing style would be bad in isolation, but bearable. The overexposure to it is what leads me to dislike it so much, I think.
The fact it's written by AI does add a layer of frustration, because you know someone wrote something more human and more real, but all you get to see is what the model made of it after digestion.
- Did you build your own, or are you farming it out to, say, Opencode?
- If you built your own, did you roll from scratch or use a framework? Any comments either way on this?
- How "agentic" (or constrained as the case may be) are your agents in terms of the tools you've provided them?
Not sure if I understand the question, but I'll do my best to answer.
I guess "agents"/"agentic" are too broad as terms. All of this is really an LLM with a set of tools, which may or may not themselves be other LLMs. You don't really need a framework as long as you can make HTTP calls to OpenRouter or some other provider and handle tool calling.
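Roughly, a no-framework loop looks something like this (a sketch only: the model name, tool schema, and `runTool` dispatcher are illustrative placeholders, not my actual setup):

```typescript
// Minimal "no framework" tool-calling loop against an OpenAI-compatible
// endpoint (OpenRouter here). Error handling omitted for brevity.
type Message = { role: string; content?: string; tool_calls?: any[]; tool_call_id?: string };

// Placeholder dispatcher; in a real app this calls your actual functions.
async function runTool(name: string, args: Record<string, unknown>) {
  if (name === "get_changelog") return [{ id: "1", text: "example entry" }];
  throw new Error(`unknown tool: ${name}`);
}

const tools = [{
  type: "function",
  function: {
    name: "get_changelog",
    description: "Fetch recent changelog entries",
    parameters: { type: "object", properties: { limit: { type: "number" } } },
  },
}];

const messages: Message[] = [{ role: "user", content: "Summarise last week's changes." }];

while (true) {
  const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ model: "anthropic/claude-sonnet-4", messages, tools }),
  });
  const message = (await res.json()).choices[0].message as Message;
  messages.push(message);

  if (!message.tool_calls?.length) break; // plain text answer, loop is done

  // Execute each requested tool and feed the result back as a "tool" message.
  for (const call of message.tool_calls) {
    const result = await runTool(call.function.name, JSON.parse(call.function.arguments));
    messages.push({ role: "tool", tool_call_id: call.id, content: JSON.stringify(result) });
  }
}
```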
I'm using the AI SDK as it plays very nicely with TypeScript and gives you a lot of interesting features, like handling server-side/client-side tool calling and synchronization.
My current setup has a mix of tools: some are pure functions (e.g. database queries), some handle server-side mutations (e.g. scheduling a changelog), and some are meant to run locally on the client (e.g. updating the TipTap editor).
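As a rough sketch of that tool mix with the AI SDK (v4-style `tool()` API; the tool names, schemas, and bodies are stand-ins, not my real implementation):

```typescript
import { generateText, tool } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

// Pure-function tool: a read-only query (stubbed here), safe to cache.
const getChangelog = tool({
  description: "Fetch recent changelog entries",
  parameters: z.object({ limit: z.number().default(10) }),
  execute: async ({ limit }) => [{ id: "1", text: "example entry" }].slice(0, limit),
});

// Server-side mutation tool: schedules a changelog (side effect stubbed out).
const scheduleChangelog = tool({
  description: "Schedule a changelog entry for publication",
  parameters: z.object({ id: z.string(), publishAt: z.string() }),
  execute: async ({ id, publishAt }) => ({ scheduled: id, publishAt }),
});

// Client-side tool: no execute() on the server, so the SDK surfaces the call
// for the client to handle (e.g. applying the TipTap editor update).
const updateEditor = tool({
  description: "Apply a content update to the client-side editor",
  parameters: z.object({ content: z.string() }),
});

const { text } = await generateText({
  model: openai("gpt-4o"),
  tools: { getChangelog, scheduleChangelog, updateEditor },
  maxSteps: 5, // allow the model to chain several tool calls
  prompt: "Summarise last week's changes and schedule the changelog.",
});
```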
Again, hopefully this somewhat answers the question, but happy to provide more details if needed.
When you discuss caching, are you talking about caching the LLM response on your side (what I presume) or actual prompt caching (using the provider cache[0])? Curious why you'd invalidate static content?
I think I need to make this a bit clearer. I was mostly referring to caching the tools (sub-agents) if they are pure functions. But that may be a bit too specific for the sake of this post.
E.g. you have a query that reads data that doesn't change often, so you can cache the result.
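A minimal sketch of what I mean, with a hypothetical `fetchRecentChangelog` query and a naive in-memory TTL cache:

```typescript
// Naive in-memory TTL cache for pure-function tool results (illustrative only).
const cache = new Map<string, { value: unknown; expiresAt: number }>();

async function cached<T>(key: string, ttlMs: number, fn: () => Promise<T>): Promise<T> {
  const hit = cache.get(key);
  if (hit && hit.expiresAt > Date.now()) return hit.value as T;
  const value = await fn();
  cache.set(key, { value, expiresAt: Date.now() + ttlMs });
  return value;
}

// Hypothetical read-only query over data that rarely changes.
async function fetchRecentChangelog() {
  return [{ id: "1", text: "example entry" }];
}

// Inside the tool's execute(): repeated calls within a minute reuse the result.
const entries = await cached("changelog:recent", 60_000, fetchRecentChangelog);
```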
> The future may reduce the economic prosperity and push humanity to switch to some different economic system (maybe a better system). Markets don’t want to accept that. [Emphasis added]
What a silly premise. Markets don't care. All markets do is express the collective opinion; in the short term as a voting machine, in the long term as a weighing machine.
Seeing a real uptick of socio-political prognostication from extremely smart, soaked-in-AI tech people (like you, Salvatore!), casting heavy doom-laden gestures towards the future. You're not even wrong! But this "I see something you all clearly don't" narrative, wafer-thin on real analysis, packed with "the feels", coated with what-ifs... it's sloppy thinking, and I hold you to a higher standard, antirez.
Markets require property rights; property rights require institutions that are dependent on property-rights holders, so that those institutions have an incentive to preserve those rights. When we get to the point where institutions are more dependent on AIs than on humans, property rights for humans will become inconvenient.
His framing is that markets are collective consensus, and if you claim to “know better”, you need to write a lot more than a generic post. It’s that simple, and it is a reminder that antirez’s reputation as a software developer does not automatically translate into economics expertise.
I think you are mixed up here. I quoted from the comment above mine, which was harshly and uncharitably critical of antirez’s blog post.
I was pushing back against that comment’s sneering smugness by pointing to an established field that uses clear terminology about how and why markets are useful. Even so, I invited an explanation in case I was missing something.
Anyone curious about the terms I used can quickly find explanations online, etc.
Yes, but can the market not be wrong? Wrong in the sense of failing to meet our expectations as a useful engine of society? As I understood it, the point of this article is that AI changes the equations across the board so completely that the current market direction appears dangerously irrational to OP. I'm not sure what your comment was meant to add, beyond haggling over semantics and attacking the author's perceived lack of expertise in socio-political philosophizing.
Of course it can be wrong, and it is in many instances. It's a religion. The vast, vast majority of us would prefer to live in a stable climate with unpolluted water and some fish left in the oceans, yet "the market" is leading us elsewhere.
I don't like the idea of likening the market to a religion, but I think it definitely has some glaring flaws. In my mind the biggest is that the market is very effective at showing the consensus of short-term priorities, but it has no ability to reflect long-term strategic consensus.
While I'm certain you'll find plenty of people who believe in the principle of model welfare (or aliens, or the tooth fairy), it'd be surprising to me if the brain-trust behind Anthropic truly _believed_ in model "welfare" (the concept alone is ludicrous). It makes for great cover though to do things that would be difficult to explain otherwise, per OP's comments.
The concept is not ludicrous if you believe models might be sentient or might soon be sentient in a manner where the newly emerged sentience is not immediately obvious.
Do I think that or think even they think that? No. But if "soon" is stretched to "within 50 years", then it's much more reasonable. So their current actions seem to be really jumping the gun, but the overall concept feels credible.
It's lazy to believe that humanity's collective decision-making would, in the future, protect AIs merely for being conscious beings. The tech economy *today* runs on the slave labor of humans in foreign, third-world countries. All humanity needs to do is draw a line, push the conscious AIs outside that line, and declare, "not our problem anymore!" That's what we do today, with humans. That is the human condition.
Show me a tech company that lobbies for "model welfare" for conscious human models enslaved in Xinjiang labor camps, building their tech parts. You know what, actually most of them lobby against that[0]. The talk hurts their profits. Does anyone really think that any of them would blink about enslaving a billion conscious AIs to work for free? That, faced with so much profit, the humans in charge would pause and contemplate abstract morals?
Maybe humanity will be in a nicer place in the future, but we won't get there by letting (of all people!) tech-industry CEOs lead us there: delegating our moral reasoning to people who demand to position themselves as our moral leaders.
It's certainly not a given. But it might happen, if we push for it. As might much more moral behavior towards sentient animals, if we push for it.
I believe a company like Anthropic would be extremely cautious and respectful if a majority of their staff believed they had created a model which was likely conscious. Anthropic is populated by the kinds of people who have been thinking and writing about potential future sentient AIs for decades. As for the other companies, who knows, but hopefully companies like Anthropic can help push them into behaving similarly.
Why would they post a whole blog post about it then? They even say they aren't certain as to the moral status of LLMs, implying this is a topic of live debate inside the company.
None of this is in any way surprising, in fact I wrote an essay predicting this direction back in 2022:
This is hard to pin down. There are plenty of neutral companies providing hosted inference at market rates (i.e. presumably profitably, even as prices head towards some commodity floor). The premise that every single one of these companies is operating at a loss is unlikely. The open question is the "off-book" training cost of the models running on those servers: are your unit economics still positive once you factor in training? And if those training costs are truly off-book, it's not a meritless argument to say the model providers are "subsidizing" the inference industry. But it's not a clear-cut argument either.
Anthropic and OpenAI are their own beasts. Are their unit economics negative? Depends on the time frame you're considering. In the mid-to-long run, they're staking everything on "most decidedly not negative". But what will the rest of us be paying on the day OpenAI posts 50% operating margins?