
Is it possible some of these LLMs actually have internal tools / calculators? I.e., black-boxing what ChatGPT has as explicit plugins?


Even if there were some mixture-of-experts shenanigans going on, there is no introspection or reasoning, so the model isn't able to comment on or understand its "inner experience", if you can call matrix multiplications an inner experience.


I was imagining system-prompt-based tool use, where the LLM "knows" it can call some calculator to get digits of pi.
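
Roughly this kind of thing, as a minimal sketch (the CALC(...) convention and helper names are made up for illustration; the real ChatGPT plugin protocol works differently):

    import math
    import re

    # Hypothetical convention told to the model in its system prompt:
    # it may write CALC(<expression>) and the host application will
    # replace that marker with the computed value.
    SYSTEM_PROMPT = (
        "You may request arithmetic by writing CALC(<expression>); "
        "the host will replace it with the computed value."
    )

    SAFE_NAMES = {"pi": math.pi, "sqrt": math.sqrt}

    def resolve_tool_calls(model_reply: str) -> str:
        """Replace every CALC(...) marker in the model's reply with its value."""
        def _run(match: re.Match) -> str:
            expr = match.group(1)
            # Evaluate in a restricted namespace so only arithmetic is possible.
            return str(eval(expr, {"__builtins__": {}}, SAFE_NAMES))
        return re.sub(r"CALC\((.+?)\)", _run, model_reply)

    # The model "calls" the calculator instead of reciting digits itself.
    print(resolve_tool_calls("The first digits of pi are CALC(pi)."))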


Even if it were, the model still wouldn't be able to comment on it.


Unless the existence of such a tool at its disposal is in its context. One strategy would be to provide a set of tools, instructions for how to invoke them internally, and a description of their apparent behavior in the LLM's initial context.
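
Something like the following sketch of building that initial context, assuming a home-grown marker convention rather than any particular vendor's API (the tool names and call syntax here are invented for illustration):

    # Hypothetical tool registry: each entry pairs an invocation syntax
    # with a short description the model sees in its initial context.
    TOOLS = {
        "CALC(<expression>)": "Evaluates arithmetic and returns the result.",
        "DATE()": "Returns today's date in ISO format.",
    }

    def build_initial_context() -> str:
        """Assemble a system prompt describing the tools and how to invoke them."""
        lines = [
            "You have access to the following tools.",
            "Invoke a tool by writing its call syntax verbatim in your reply;",
            "the host application will replace it with the tool's output.",
        ]
        for syntax, description in TOOLS.items():
            lines.append(f"- {syntax}: {description}")
        return "\n".join(lines)

    print(build_initial_context())

With that description in context, the model can at least report that such tools exist, even if it still can't introspect on how they are executed.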



