Even if there were some mixture-of-experts shenanigans going on, there is no introspection or reasoning: the model isn't able to comment on or understand its "inner experience", if you can call matrix multiplications an inner experience.
Unless the existence of such a tool at its disposal is described in its context. One strategy would be to put a set of tools, instructions for how to invoke them, and a description of their apparent behavior into the LLM's initial context.
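Something like the following minimal sketch, where the tool names, the `CALL` syntax, and the stub handlers are all hypothetical, just to illustrate the shape of the strategy: the initial context declares what exists and how to invoke it, and a dispatcher outside the model executes whatever calls show up in the output.

```python
import json
import re

# Hypothetical tool registry; names and handlers are invented for illustration.
TOOLS = {
    "introspect_logits": {
        "description": "Return summary statistics about the model's own output distribution.",
        "handler": lambda args: json.dumps({"entropy": 2.31}),  # stubbed result
    },
    "memory_lookup": {
        "description": "Fetch a previously stored note by key.",
        "handler": lambda args: json.dumps({"note": "example"}),  # stubbed result
    },
}

def build_initial_context() -> str:
    """Compose the system prompt: which tools exist, how to invoke them,
    and what they appear to do from the model's perspective."""
    lines = [
        "You have access to the following tools.",
        'Invoke one by emitting a line of the form: CALL tool_name({"arg": value})',
        "",
    ]
    for name, spec in TOOLS.items():
        lines.append(f"- {name}: {spec['description']}")
    return "\n".join(lines)

# Matches lines like: CALL introspect_logits({})
CALL_PATTERN = re.compile(r"^CALL\s+(\w+)\((.*)\)\s*$", re.MULTILINE)

def dispatch_tool_calls(model_output: str) -> list[str]:
    """Scan model output for tool invocations and run the matching handlers."""
    results = []
    for match in CALL_PATTERN.finditer(model_output):
        name, raw_args = match.group(1), match.group(2)
        tool = TOOLS.get(name)
        if tool is None:
            results.append(f"ERROR: unknown tool {name}")
            continue
        args = json.loads(raw_args) if raw_args.strip() else {}
        results.append(tool["handler"](args))
    return results

if __name__ == "__main__":
    print(build_initial_context())
    # Simulated model output containing one tool call.
    print(dispatch_tool_calls("CALL introspect_logits({})"))
```

The point is that the model's "awareness" of the tools is entirely a property of the text in its context window; the model itself never observes the dispatcher, only the descriptions and the results fed back in.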