
Bit too soon to tell, no? Claude Code wasn't released until the latter half of Q2, leaving little time for it to show up in those figures, and Q3 data is only preliminary right now. Moreover, it seems to be the pairing with Opus 4.5 that lends the claims some credence, and that wasn't released until Q4; we won't have that data for quite a while. And like Claude Code, it came late in the quarter, so realistically we need to wait for Q1 2026 figures, which don't exist yet and won't start to appear until summer at the earliest.

That said, I expect you are right that we won't see it show up. Even if we assume the claim is true in every way for some people, it only works for exceptional visionaries who were previously constrained by typing speed, which is a vanishingly small segment of the developer population. Any gains that small group realizes will be an unrecognizable blip amid everything else (a rough sketch of the arithmetic follows). The vast majority of developers need all that typing time, and more, just to come up with their next steps. Reducing their typing time doesn't make them any more productive; they were never limited by typing speed in the first place.
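
A back-of-envelope sketch in Python, where every number is an illustrative assumption rather than data: even granting a full 10x speedup to a 1% typing-bound segment barely moves the aggregate.

    # All values are illustrative assumptions, not measurements.
    typing_bound_share = 0.01  # assumed share of devs actually limited by typing speed
    claimed_speedup = 10.0     # headline claim, applied only to that group

    aggregate = typing_bound_share * claimed_speedup + (1 - typing_bound_share) * 1.0
    print(f"aggregate multiplier: {aggregate:.2f}x")  # -> 1.09x

Under those assumptions the industry-wide effect is roughly 9%, easily lost in measurement noise.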

Productivity studies that measure software engineers directly don't show much of a gain, and certainly nowhere near the 10x the frontier labs would like to claim.

When rework on bugs in the AI-generated code is included, some studies find that AI has no positive impact on software developer productivity, and can even have a negative one.
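
A toy illustration in Python of that mechanism, with made-up numbers (none of these figures come from the studies themselves): a fast AI-assisted first draft can still be a net slowdown once rework is counted.

    # Made-up numbers to illustrate the rework effect, not study results.
    baseline_hours = 10.0  # assumed time to build the feature unaided
    ai_draft_hours = 4.0   # assumed time to an AI-assisted first version
    rework_hours = 7.0     # assumed time spent fixing bugs in the generated code

    net_speedup = baseline_hours / (ai_draft_hours + rework_hours)
    print(f"net speedup: {net_speedup:.2f}x")  # -> 0.91x, i.e. a net slowdown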

The main problem with these studies is that they are backward-looking, so frontier labs can always claim the next model will be the one that delivers the promised productivity gains and displaces human workers.


> Productivity studies that measure software engineers directly don't show much of a gain, and certainly nowhere near the 10x the frontier labs would like to claim.

Which studies are you talking about? The last major study that I saw (that gained a lot of attention) was published half a year ago, and the study itself was conducted on developers using AI tools in 2024.

The technology has improved so rapidly that this study is now close-to-meaningless.


A few studies over different time frames:

[1] https://www.youtube.com/watch?v=1OzxYK2-qsI
[2] https://www.coderabbit.ai/blog/state-of-ai-vs-human-code-gen...
[3] https://www.youtube.com/watch?v=JvosMkuNxF8
[4] https://www.faros.ai/blog/ai-software-engineering

"The technology has improved so rapidly that this study is now close-to-meaningless."

You could have said that at any time in the last three years, but the data has never shown it to be true. Is there data showing that the current generation of models is so much better than the last that the existing productivity data should be ignored? I don't think the coding benchmarks show a step change in capabilities; it's generally dev vibes rather than a large change in measurements.



