
Pretty sure this "compute is the new oil" thesis fell flat when OAI failed to deliver on GPT-5 hype, and all the disappointments since.

It's still all about the (yet to be collected) data and advancements in architecture, and OAI doesn't have anything substantial there.



It's absolutely no longer about the data. We produce millions of new humans a year who wind up better at reasoning than these models, but who don't need to read the entire contents of the Internet to do it.

A relatively localized, limited lived experience apparently conveys a lot that LLM input does not - so there's an architecture problem (or a compute constraint).


AI having societally useful impact is 100% about the data and overall training process (and robotics...), of which raw compute is a relatively trivial and fungible part.

No amount of reddit posts and H200s will result in a model that can cure cancer or drive high-throughput waste filtering or precision agriculture.


I think GPT-5 is pretty good. My use case is VS Code Copilot, and the GPT-5 Codex model and the 5 mini model are a lot better than 4.1. o4-mini was pretty good too.

It's slow as balls as of late though. So I use a lot of Sonnet 4.5 just because it doesn't involve all this waiting, even though I find Sonnet to be kinda lazy.


Sure, GPT-5 is pretty good. So are a dozen other models. It's nowhere near the "scary good" proto-AGI that Altman was fundraising on prior to its inevitable release.


More to the point, where is the model that is beating GPT-5? A bar that supposedly fell flat should have been easy to clear if the scaling narratives were holding.



