
Right, but typically LLMs are really poor at this. I can come up with some arbitrary system of equations for one to solve, and odds are the answer will be wrong. Maybe even very wrong.
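
To be concrete, here's roughly the kind of test I mean, sketched in Python with numpy (both just my choice for illustration; the system is a made-up example and model_answer stands in for whatever you parse out of the model's reply):

    import numpy as np

    # A small linear system of the kind I have in mind:
    #   x +  y + z = 6
    #  2x -  y + z = 3
    #   x + 2y - z = 2
    A = np.array([[1.0, 1.0, 1.0],
                  [2.0, -1.0, 1.0],
                  [1.0, 2.0, -1.0]])
    b = np.array([6.0, 3.0, 2.0])

    exact = np.linalg.solve(A, b)        # -> [1. 2. 3.]

    # Hypothetical: the model's answer, parsed from its reply.
    model_answer = np.array([1.0, 2.0, 3.0])

    print("exact:", exact)
    print("model matches?", np.allclose(model_answer, exact, atol=1e-6))

The point is that the ground truth is cheap to compute, so it's easy to show the model getting it wrong on freshly generated systems it can't have memorized.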


That says more about the quality of their reasoning than about their ability to reason in principle, though. And maybe even about the quality of their reasoning in this specific domain - e.g. it's no secret that most major models are notoriously bad at tasks like counting letters, but we also know that specifically training a model on that task drastically improves its performance.
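
(For the letter-counting case, the ground truth is trivial to compute outside the model, which is what makes the failure so easy to demonstrate - e.g. in Python:)

    >>> "strawberry".count("r")
    3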

On the whole, I don't think it should be surprising that even top-of-the-line LLMs today can't reason as well as a human - they aren't anywhere near as complex as our brains. But if it's a question of quality rather than a fundamental inability, then larger models and better NN designs should be able to gradually push the envelope.




