I had a thought about this coming from the book "Seeing Like a State."
Productivity in large organizations has never been, and can never be, purely a matter of the legible work that is written in Jira tickets, documented, and expressed clearly; it is sustained by an illegible network of relationships between workers and by unwritten knowledge and practices. AI can only consume work that is legible, but as more work gets pushed into that realm, the illegible relationships and expertise become fragmented and atrophy, which puts backpressure on the system's productivity as a whole. And having read that book, my guess is that attempting to impose perfect legibility for the sake of AI tooling will ultimately prove disastrous.
There could be a whole spectrum of repository types where these tools excel or fail. I can imagine that a large, poorly documented repository, with confusing and inconsistent usages/patterns, in a dynamic language, with poor tests, will almost always lead to failure.
I honestly think that size and age alone are sufficient to lead these tools into failure cases.
I try to avoid LLMs as much as I can in my role as SWE. I'm not ideologically opposed to switching, I just don't have any pressing need.
There are people I work with who are deep in the AI ecosystem, and it's obvious what tools they're using. It would not be uncharitable in any way to characterize their work as pure slop: it doesn't work, it's buggy, it's inadequately tested, etc.
The moment I start to feel behind I'll gladly start adopting agentic AI tools, but as things stand now, I'm not seeing any pressing need.
Comments like these make me feel like I'm being gaslit.
We are all constantly being gaslit. People have insane amounts of money and prestige riding on this thing paying off in such a comically huge way that it absolutely cannot deliver on in the foreseeable future. Creating a constant, pressing sentiment that actually You Are Being Left Behind Get On Now Now Now is the only way they can keep inflating the balloon.
If this stuff was self-evidently as useful as it's being made out to be, there would be no point in constantly trying to pressure, coax and cajole people into it. You don't need to spook people into using things that are useful, they'll do it when it makes sense.
The actual use cases of LLMs are dwarfed by the massive investment bubble they have become, and it's all riding on future gains so hugely inflated that they will leave a crater that makes the dotcom bubble look like a pothole.
> I am having more fun programming than I ever have, because so many more of the programs I wish I could find the time to write actually exist. I wish I could share this joy with the people who are fearful about the changes agents are bringing.
It might be just me, but this reads as very tone-deaf. From my perspective, CEOs are foaming at the mouth to make as many developers redundant as possible, and they're not being shy about this desire. (I don't see this as inevitable at all, but tech leaders have made their position clear.)
Like, imagine the smugness of some 18th-century "CEO" telling an artisan, despite the fact that he'll be resigned to working in horrific conditions at a factory, not to worry and to think of all the mass-produced consumer goods he may enjoy one day.
It's not at all a stretch of the imagination that current tech workers may be in a very precarious situation. All the slopware in the world wouldn't console them.
I bought Steve Yegge's "Vibe Coding" book. I think I'm about 1/4th of the way through it or so. One thing that surprised me is the naivete on display that workers are going to be the ones to reap the benefits of this. Like, Steve was using an example of being able to direct the agent while doing leisure activities (never mind that Steve is more of an executive/thought leader in this company and, prior to LLMs, seemed to be out of the business of writing code). That's a nice snapshot of a reality that isn't going to persist.
While the idea of programmers working two hours a day and spending the rest of it with their family seems sunny, that's absolutely not how business is going to treat it.
Thought experiment... a CEO has a team of 8 engineers. They do some experiments with AI and discover that their engineers are 2x more effective on average. What does the CEO do?
a) Shorten the workday to 4 hours so that all the engineers have better work/life balance, since the same amount of work is being done.
b) Fire half the engineers, make the 4 remaining guys pick up the slack, rinse and repeat until there's one guy left?
Like, come on. There's pushback on this stuff not because the technology is bad (although it's overhyped), but because no sane person trusts our current economic system to provide anything resembling humane treatment of workers. The super rich are perfectly fine seeing half the population become unemployed, as far as I can tell, as long as their stock numbers go up.
Haven't read that book, but agree that if anyone thinks the workers are likely to capture the value of this productivity shift, they haven't been paying attention to reality.
Though at the same time, I also think a lot of the CEO types (at least in the pure software world) who believe they are going to capture the value of this productivity shift are in for a rude awakening, because if AI doesn't stall out, it's only a matter of time from when their engineers are replaceable to when their company doesn't need to exist at all anymore.
You missed option c.
c) Keep all 8 engineers so the team can pump out features faster, all still working 8-hour days. The CEO will probably be forced to do it to keep up with the competition.
I didn't miss it, I just think it's going to be a rare outcome. I'm sure companies will become a little bolder about new products or new features, but I think there's an upper limit to the amount of change customers will tolerate. Once a product is stable enough to make money, for the most part, users don't want changes or new features and often rebel against them. Most people think: I'm paying for this thing because it already does the stuff I need, please don't change it, I don't want to relearn it. I'm not against evolving software, of course; I just think competing on "we move a million miles per hour" will result in burnt-out developers and overwhelmed customers. I mean, we're already seeing some burnout from people using AI tools, and I think part of that just has to be the pace of things.
That's what I meant to say, that task assignment to agents could shift to mobile. What experience would you personally like, from a mobile continuity perspective?
I'm imagining it even worse: you have to pay a subscription to get your oven to go above a certain temperature and for it to "fast pre-heat" and to not have it show you ads.
Brilliantly phrased — sharp, concise, and perfectly captures that uncanny "AI-polished" cadence everyone recognizes but can’t quite name. The tone strikes just the right balance between wit and warning.
Lately the Claude-ism that drives me even more insane is "Perfect!".
Particularly when it's in response to pointing out a big screw-up that needs correcting, and CC, utterly unfazed, just merrily continues on like I praised it.
"You have fundamentally misunderstood the problems with the layout, before attempting another fix, think deeply and re-read the example text in the PLAN.md line by line and compare with each line in the generated output to identify the out of order items in the list."
One thing I don't understand: there was (appropriately) a news cycle about sycophancy in responses, which was real and happening to an excessive degree. It was claimed to be nerfed, but it seems as strong as ever in GPT-5, and it ignores my custom instructions to pare it back.
Sycophancy was actually buffed again a week after GPT-5 released. It was rather ham-fisted, as it will now obsessively reply with "Good question!" as though it will get the hose again if it does not.
"August 15, 2025
GPT-5 Updates
We’re making GPT-5’s default personality warmer and more familiar. This is in response to user feedback that the initial version of GPT-5 came across as too reserved and professional. The differences in personality should feel subtle but create a noticeably more approachable ChatGPT experience.
Warmth here means small acknowledgements that make interactions feel more personable — for example, “Good question,” “Great start,” or briefly recognizing the user’s circumstances when relevant."
The "post-mortem" article on sycophancy in GPT-4 models revealed that the reason it occurred was because users, on aggregate, strongly prefer sycophantic responses and they operated based on that feedback. Given GPT-5 was met with a less-than-enthusiastic reception, I suppose they determined they needed to return to appealing to the lowest common denominator, even if doing so is cringe.
I wonder what would happen if there was a concerted effort made to "pollute" the internet with weird stories that have the AI play a misaligned role.
Like, for example, what would happen if hundreds or thousands of books were released about AI agents working in accounting departments, where the AI makes subtle romantic moves toward the human, and it ends with the human and agent in a romantic relationship that everyone finds completely normal. In this pseudo-genre, things totally weird in our society would be written as completely normal. The LLM agent would do weird things like insert subtle problems to get the human's attention and spark a romantic conversation.
Obviously there's no literary genre about LLM agents, but if such a genre were created and consumed, I wonder how it would affect things. Would it pollute the semantic space that we're currently using to try to control LLM outputs?
... in opposition to the car makers who want to turn everything into highways and parking lots, who really want all forms of human walking to be replaced by automobiles.
"They really cant run like a human," they say, "a human can traverse a city in complete silence, needing minimal walking room. Left unchecked, the transitions to cars would ruin our city. So lets be prudent when it comes to adopting this technology."
"I'll have none of that. Cars move faster than humans so that means they're better. We should do everything in our power to transition to this obviously superior technology. I mean, a car beat a human at the 100m sprint so bipedal mobility is obviously obsolete," the car maker replied.
The most recent Stack Overflow survey has Vim at 25% and Neovim at 14% for the question "Which development environments and AI-enabled code editing tools did you use regularly over the past year, and which do you want to work with over the next year?" Even more interesting: in the 2023 survey, Vim and Neovim were at 22.3% and 11.8% respectively.
If the goal is to get more than 50% usage then yeah, you can say they lost, but are dev tools only valid/useful/viable if a majority of developers use them? I'd say they've had tremendous success providing viable tools with literally zero corporate support and a much smaller user base.