
The first thing I tell the juniors under my supervision: no LLM is a fact machine, even though it sometimes pretends to be. Double-check everything!


The thing I always tell people who heavily trust its output is to ask it something they already know the answer to, or something in their area of expertise; the flaws become much more evident.

The old joke is that you can get away with anything with a hi-vis vest and enough confidence, and LLMs pretty much work on that principle.


The sheer overconfidence of any LLM is what confuses a lot of people.


My company went headfirst into integrating AI into everything.

I'm counting down the days until some important business decision is based on AI output that is wrong.


That has already happened.



