
> Said person does not give a shit about whether things are correct or could even work, as long as they look "somewhat plausible".

This seems to be the fundamental guiding ideology of LLM boosterism; the output doesn't actually _really_ matter, as long as there's lots of it. It's a truly baffling attitude.



> It's a truly baffling attitude.

I wish, but no, it's not baffling. We live in a post-truth society, and this is the sort of fundamental nihilism that naturally results.


I agree that it fits in with a certain trope. But do people really believe that?

What I mean is:

Some people recognize that there are circumstances where the social aspects of agreement seem to be the dominant concern, e.g. when the goal is to rally votes. The cynical view of "good beliefs" in that scenario is group cohesion, rather than correspondence with objective reality.

But most everyone would agree that there are situations where correlation with objective reality is the main concern. E.g., when someone is designing and building the bridge they cross every day.


> We live in a post-truth society

Oversimplified to an awful degree. There is a lot of variation between people, cultures, even countries.


They always market the % of lines generated by AI. But if you are forced to use a tool that constantly inserts generations, that number is always going to be high even if the actual benefit is nil or negative.

If the AI tool generates a 30-line function that doesn't work, and you spend your time testing and fixing the 3 lines of broken logic, the vast majority of the code was still "AI generated" even though it didn't save you any time.
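The arithmetic behind that metric can be sketched with the hypothetical numbers above (30 generated lines, 3 rewritten by hand):

```python
# Hypothetical illustration of the "% of lines generated by AI" metric.
# Numbers are the example from the comment above, not real measurements.
ai_generated_lines = 30   # function emitted by the tool
human_fixed_lines = 3     # broken logic the human had to rewrite

# The marketed metric counts every surviving line the tool emitted,
# regardless of how much human debugging time the code actually cost.
pct_ai = 100 * (ai_generated_lines - human_fixed_lines) / ai_generated_lines
print(f"{pct_ai:.0f}% of the final code counts as AI-generated")
```

So the headline number reads as 90% even if the net time saved was zero or negative.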


> They always market the % of lines generated by AI

That's crazy; it should really be the opposite. If someone released weights that promised "X% fewer lines generated compared to Y", I'd jump on that in an instant. Most LLMs are way too verbose by default, and some are really hard to prompt into being more concise (looking at you, various Google models).


It is possible to take this too far, though - consider the OpenAI IMO proofs[1], for instance, and compare them to Gemini's.[2]

[1] https://github.com/aw31/openai-imo-2025-proofs

[2] https://arxiv.org/pdf/2507.15855 Appendix A


It's a perfectly understandable attitude: "Get me a fat paycheck with the least effort possible." The longer term problem is that those who see the most benefit in productivity from LLMs are also the ones weak enough to be most easily replaced by LLMs entirely.


And yet this fine example of a used car salesman is being rewarded by everyone and their dog hosting their stuff on GitHub, feeding the Copilot machinery with their work for free, so it can be sold back to them.


All of the crypto grifters have shifted to AI.

Fundamentals don't matter anymore, just say whatever you need to say to secure the next round of funding.


They never mattered, at least not for as long as you've been alive. The "Soviet statistics" discussion at the start of the article was an amazing example of Western capitalist propaganda, because the same nonsense with statistics is also out of control in the West, just not so mind-numbingly obvious. The USA is the king of propaganda, far ahead of all rivals.


All of the finance grifters have shifted to tech.


nah they are in government now


> the fundamental guiding ideology of LLM boosterism

It's the same as the ideology of FOMO Capitalism:

= The billionaire arseholes are saying it, it must be true

= Stock valuations are in the trillions, there must be enormous value

= The stock market is doing so well, your concerns about fundamental social-democratic principles are unpatriotic

= You need to climb aboard or you will lose even worse than you lost in the crypto-currency hornswoggle



