Hacker News

> The moment somebody at OpenAI reads the QuietSTaR paper (GPT-2 scale network with GPT-3 quality!) and understands what it means, the timer to AGI begins.

I think parametric memory and accelerated grokking are equally promising. The convergence of these approaches could dramatically improve reasoning. The next few years will be interesting indeed.


