
We all routinely talk about things we don't fully understand. We have to. That's life.

Whatever flawed analogy you're using, though, it can be more or less wrong. My claim is that, to a first approximation, LLMs behave more like people than like regular software, so anthropomorphising them gives you better high-level intuition than stubbornly refusing to.


