
> > If you present a simple chess board or a complex one to an LLM and ask it to generate the next move, it always responds in the same amount of time.

> Is that true, especially if you ask it to think step-by-step?

That's fair -- there's a lot of room to grow in this area. With step-by-step prompting, the model emits more reasoning tokens for harder positions, so the compute it spends does scale with difficulty rather than being fixed per query.

If the LLM has been trained to operate with a running internal monologue, then I believe it will perform better. This definitely needs more exploration -- from what little I understand of the research, the results are sporadically promising, but getting something like ReAct (or other, similar structures) to work consistently is something I haven't seen yet.
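For context, a minimal sketch of a ReAct-style loop, where the model interleaves free-form reasoning with tool calls. The call_llm() function and the tool table here are hypothetical placeholders, not any particular library's API; the real prompt format and stop conditions vary by model and implementation:

    import re

    def call_llm(prompt: str) -> str:
        """Hypothetical LLM call; returns generated text up to a stop token."""
        raise NotImplementedError

    # Toy tool table; a chess engine's move generator could slot in here.
    TOOLS = {
        "legal_moves": lambda position: "e2e4 d2d4 g1f3",  # placeholder output
    }

    def react(question: str, max_steps: int = 5) -> str:
        transcript = f"Question: {question}\n"
        for _ in range(max_steps):
            # The model alternates free-form reasoning ("Thought: ...")
            # with structured tool calls ("Action: tool[arg]").
            step = call_llm(transcript + "Thought:")
            transcript += "Thought:" + step + "\n"
            done = re.search(r"Final Answer: (.*)", step)
            if done:
                return done.group(1)
            act = re.search(r"Action: (\w+)\[(.*?)\]", step)
            if act:
                tool, arg = act.groups()
                # Feed the tool's result back so the next thought can use it.
                transcript += f"Observation: {TOOLS[tool](arg)}\n"
        return "no answer within step budget"

The point of the loop is that a harder board naturally produces more Thought/Action/Observation rounds, which is exactly where the fixed-response-time objection breaks down.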


