
While I agree with parts of this blog post, I also think the author is speaking from a specific vantage point. If you are working at a large company on a pre-existing codebase, you likely have to deal with complexity that has compounded over many product cycles, pull requests, and engineer turnover. In my experience, AI has increased my performance by roughly 20%, primarily because LLMs bypass much of the human slop that has accumulated over the years in Google search results.

For newer languages, packages, and hardware-specific code, I have yet to use a single frontier model that has not slowed me down by 50%. It is clear to me that LLMs are regurgitation machines, and no amount of "thinking" changes the fact that the transformer architecture (all of ML, really) extrapolates poorly beyond what is in the training canon.

However, on zero-to-one projects that are unconstrained by my mag-seven employer, I am absolutely 10x faster. I can churn through boilerplate code, iterate faster on system design, and generally move extremely fast. I don't use agentic coding tools, because I have had bad experiences with how their complexity scales, but it is clear to me that startups will be able to move at lightning pace relative to the large tech behemoths.
