My biggest issue with LLMs right now is that they're such spineless yes-men. Even when you ask for their opinion on whether something is doable, or whether it should be done in the first place, more often than not they just go "Absolutely!" and shit out a broken answer or an anti-pattern just to please you. Not always, but way too often. You have to frame your questions far too carefully to prevent this.

Maybe some of those character.ai models are sassy enough to have stronger opinions on code?
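For what it's worth, the framing that seems to help is asking for a feasibility verdict before any solution, and telling the model up front that it's allowed to say no. A rough sketch of what I mean, assuming the OpenAI Python SDK (the model name and prompt wording are just placeholders, not a recipe):

    # Sketch: ask for a go/no-go judgement before any code, instead of a
    # leading "how do I do X?" that invites an eager "Absolutely!"
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a skeptical senior engineer. Before proposing any "
                    "solution, state whether the idea is feasible and whether it "
                    "should be done at all. Say 'no' plainly when warranted."
                ),
            },
            {
                "role": "user",
                "content": (
                    "I want to store session state in a global mutable singleton "
                    "shared across threads. Is this doable, and is it a good idea?"
                ),
            },
        ],
    )
    print(response.choices[0].message.content)

The system prompt does most of the work here; a leading user prompt on its own ("how do I do X?") still tends to get the eager yes.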


