Can you, though? I thought LLMs, just by virtue of how they work, are non-deterministic. Let alone if new data is added to the LLM, further retraining happens, etc.
Is it possible to get the same output, 1:1, from the same prompt, reliably?
They're assuming a lot of things: that the LLM doesn't change, and that you have full control over the randomness. That might be possible if you're running the LLM locally.
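For what it's worth, here's a minimal sketch of what "full control over the randomness" can look like with a local model via Hugging Face transformers (the model choice is arbitrary). With greedy decoding (`do_sample=False`) there's no sampling randomness at all, so the same prompt against the same pinned weights should reproduce the same output, modulo floating-point nondeterminism across hardware and kernels:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Pin the RNG seed; only matters if you later enable sampling.
torch.manual_seed(0)

# Pinning a specific model revision keeps the weights from changing under you.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

inputs = tokenizer("The capital of France is", return_tensors="pt")

# Greedy decoding: always pick the argmax token, so the output is fully
# determined by the weights and the prompt -- no sampling involved.
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=20, do_sample=False)

print(tokenizer.decode(out[0], skip_special_tokens=True))
```

The remaining caveat is floating-point: GPU kernels can be nondeterministic, so bit-identical runs may require CPU inference or something like `torch.use_deterministic_algorithms(True)`, which can error out on ops that lack deterministic implementations. None of this helps with a hosted API, where the provider can change the model at any time.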