
I don't think the "meme" that LLMs follow instructions inconsistently will ever die, because they do. It's in the nature of how LLMs work under the hood.
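A toy sketch of why the inconsistency is baked in (made-up scores, not a real model or API): the model samples each next token from a probability distribution over its vocabulary, so the same prompt can come back different on any run with temperature above zero.

    import math
    import random

    # Hypothetical next-token scores for illustration only
    logits = {"refactor": 2.1, "rewrite": 1.9, "ignore_instructions": 0.4}

    def sample_next_token(logits, temperature=0.8):
        # Softmax with temperature: higher temperature flattens the
        # distribution, making the unwanted continuation more likely.
        scaled = {tok: math.exp(score / temperature) for tok, score in logits.items()}
        total = sum(scaled.values())
        r = random.uniform(0, total)
        cumulative = 0.0
        for tok, weight in scaled.items():
            cumulative += weight
            if r <= cumulative:
                return tok
        return tok

    # "Rerunning the same prompt": identical input, different outputs across runs.
    print([sample_next_token(logits) for _ in range(5)])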

>Also who sits around rerunning the same prompt over and over again to see if you get a different outcome like it's a slot machine?

Nobody. Plenty of people do like to tell the LLM that somebody might die if it doesn't do X properly, and other such faith-based interventions with their "magic box", though.

Boy do their eyes light up when they hit the "jackpot", too (the LLM writes what appears to be the correct code on the first shot).



They're so much more consistent now than they used to be. New LLM releases almost always boast about how much better they are at "instruction following", and it really shows: I find the Claude 4.5 and GPT-5.x models do exactly what I tell them most of the time.



