Hacker News

Yann LeCun's prediction has been empirically refuted. He argued that the longer an LLM generates, the less accurate it becomes. OpenAI showed the opposite.
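A rough sketch of the argument being attributed to LeCun here (my illustration, not from the thread): if each generated token independently has some fixed error probability eps, the chance an n-token output stays error-free decays exponentially in n.

```python
# Illustrative model only: assumes a fixed, independent per-token error
# rate eps, which is the premise of the compounding-error argument.
eps = 0.01  # hypothetical per-token error rate

def p_correct(n, eps=eps):
    """Probability an n-token autoregressive output contains no error."""
    return (1 - eps) ** n

for n in (10, 100, 1000):
    print(n, p_correct(n))
```

Under these assumptions accuracy collapses for long outputs (at n=1000 it is well under 0.1%), which is the behavior the next reply disputes was actually overturned.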


They didn't show this; they just pushed out the length at which accuracy breaks down.


Can you explain? In December 2024 OpenAI presented a new test-time scaling law: performance keeps increasing in proportion to ln(N reasoning tokens).
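For what it's worth, a scaling law of that shape looks like the following (a minimal sketch; the coefficients a and b are made up for illustration and are not from any OpenAI result):

```python
import math

# Hypothetical log-linear test-time scaling: accuracy grows roughly
# linearly in ln(reasoning tokens), saturating at 1.0. Constants are
# illustrative, not fitted to any published data.
def accuracy(n_tokens, a=0.1, b=0.2):
    return min(1.0, a * math.log(n_tokens) + b)

for n in (100, 1_000, 10_000):
    print(n, round(accuracy(n), 3))
```

Note the contrast with the exponential-decay picture above: here each 10x increase in reasoning tokens buys a constant additive accuracy gain rather than multiplying an error rate.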


link?



