Hacker News

An LLM trained on just Othello moves will reconstruct the board state from the sequence of moves to aid in predicting the next move. You have no idea what an LLM is or is not doing to predict. Prediction is just the objective; don't confuse that for the process.
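The Othello result referenced above was established with linear probes: small classifiers trained to read a board-state property out of the model's hidden activations. A minimal sketch of that probing idea, using entirely synthetic data (random "activations" with one hypothetical board cell's state linearly embedded along a fixed direction), just to show the mechanics rather than the actual Othello-GPT experiment:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 64       # hypothetical hidden-state width
n_samples = 2000

# Pretend the network encodes one cell's state along a fixed direction
# in activation space; real probing would use actual transformer activations.
true_direction = rng.normal(size=d_model)
cell_state = rng.integers(0, 2, size=n_samples)   # 0 = empty, 1 = occupied
hidden = rng.normal(size=(n_samples, d_model)) \
    + np.outer(cell_state * 2 - 1, true_direction)

# Train a logistic-regression probe with plain gradient descent.
w = np.zeros(d_model)
b = 0.0
lr = 0.1
for _ in range(200):
    logits = np.clip(hidden @ w + b, -30, 30)
    p = 1.0 / (1.0 + np.exp(-logits))
    grad = p - cell_state
    w -= lr * (hidden.T @ grad) / n_samples
    b -= lr * grad.mean()

# High probe accuracy means the state is linearly decodable from activations.
accuracy = ((hidden @ w + b > 0) == (cell_state == 1)).mean()
print(f"probe accuracy: {accuracy:.2f}")
```

If the probe recovers the cell state far above chance, the activations demonstrably carry that state, which is the sense in which the model "reconstructs the board" even though its training objective was only next-move prediction.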


> You have no idea what an LLM is or is not doing to predict.

Indeed, but I'm not trying to with that comment, which is just about how my own thinking seems to work on self-reflection. That said, I do have reason to doubt the accuracy of human introspection into our own thought processes, and therefore my judgment here may also be flawed.



