
The criticism feels harsh. Of course models don't know what they don't know; reporters can have the same biases. They could have worded it better, e.g. "lowers the probability of hallucinating", but it's correct that the instruction helps guard against it. It's just not a binary thing.


> we made sure to tell the model not to guess if it wasn’t sure

Fair enough, but it's kind of ridiculous that in 2025 this "hack" still helps produce more reliable results.
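For anyone who hasn't seen it, the "hack" is just an explicit escape hatch in the prompt so the model has a sanctioned alternative to guessing. A minimal sketch using the OpenAI Python client (the model name, wording, and ask() helper are illustrative, not from the article):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # The instruction that gives the model a way out instead of
    # forcing a confident-sounding guess.
    SYSTEM_PROMPT = (
        "Answer only from the provided context. "
        "If you are not sure of the answer, reply exactly: I don't know."
    )

    def ask(question: str, context: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o",  # any chat model; illustrative choice
            temperature=0,   # low temperature further discourages speculation
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user",
                 "content": f"Context:\n{context}\n\nQuestion: {question}"},
            ],
        )
        return response.choices[0].message.content

    print(ask("What year was the merger finalized?",
              "The companies announced merger talks in 2023."))
    # Without the system prompt the model may invent a year;
    # with it, the expected reply is "I don't know."

It doesn't eliminate hallucination, but in practice it measurably shifts the model toward abstaining rather than fabricating.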


Alas, current LLM prompting involves a lot of hacks. Half of them are useless, of course, while the other half are critical for success. The trick is figuring out which is which.



