In the n-dimensional solution space of all potential approaches (known and unknown) to building a true human-equivalent AGI, what are the odds that current LLMs are even directionally correct?

