
I had the same thought as you while reading the GP comment, but I've recently come across some videos on LLM interpretability where they argue that LLMs store 'core ideas' in an abstract internal representation and produce concrete output on the fly from that representation. This is very different from viewing LLMs as pure next-token predictors, which would imply what you said in your comment.

https://www.youtube.com/watch?v=fGKNUvivvnc&t=2748s
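To make the claim a bit more concrete: one common way interpretability work probes for abstract concepts is to fit a linear classifier on a model's intermediate hidden states and check whether a concept (say, sentiment) is linearly decodable there. The snippet below is only a rough illustration of that general idea, not the method from the linked talk; the choice of GPT-2, layer 6, and the tiny hand-made dataset are all my own assumptions.

  import torch
  from transformers import AutoTokenizer, AutoModel
  from sklearn.linear_model import LogisticRegression

  # Illustrative linear probe: is a "sentiment" concept linearly readable
  # from a middle-layer hidden state? (GPT-2 chosen arbitrarily.)
  tokenizer = AutoTokenizer.from_pretrained("gpt2")
  model = AutoModel.from_pretrained("gpt2", output_hidden_states=True)
  model.eval()

  texts = ["The movie was wonderful", "I loved this book",
           "The food was terrible", "I hated every minute"]
  labels = [1, 1, 0, 0]  # 1 = positive, 0 = negative (toy data)

  feats = []
  with torch.no_grad():
      for t in texts:
          ids = tokenizer(t, return_tensors="pt")
          out = model(**ids)
          # Last-token hidden state of a middle layer as the "concept" vector.
          feats.append(out.hidden_states[6][0, -1].numpy())

  probe = LogisticRegression(max_iter=1000).fit(feats, labels)
  print(probe.score(feats, labels))  # High accuracy = concept is linearly decodable

If a simple linear probe can read the concept off an intermediate layer, that supports the view that the model carries an abstract representation rather than only surface-level next-token statistics.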


