
Can you, though? I thought LLMs, just by virtue of how they work, are non-deterministic. Let alone if new data is added to the LLM, further retraining happens, etc.

Is it possible to get the same output, 1:1, from the same prompt, reliably?



They are assuming a lot of things: that the LLM doesn't change, and that you have full control over the randomness. This might be possible if you are running the LLM locally.


Well, if we're assuming the LLM doesn't change, why not assume the index system doesn't change too?

And yeah, I guess if you control the seed (and the rest of the sampling parameters), an LLM would be deterministic.
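To sketch why a fixed seed makes sampling repeatable: token selection at inference time is just a random draw from a probability distribution over the vocabulary, so seeding the random number generator pins the draw down. This is a minimal illustration with a toy logits vector, not any particular LLM's actual sampling code:

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, seed=None):
    """Draw a token index from softmax(logits / temperature).

    With a fixed seed, the draw is repeatable; with seed=None,
    it varies from call to call (the usual "non-deterministic" behavior).
    """
    rng = np.random.default_rng(seed)
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()  # subtract max for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return int(rng.choice(len(probs), p=probs))

# Toy distribution over a 4-token vocabulary (values are made up).
logits = [2.0, 1.0, 0.5, -1.0]

a = sample_next_token(logits, seed=42)
b = sample_next_token(logits, seed=42)
assert a == b  # same seed, same token, every time
```

In practice there are extra caveats: floating-point non-associativity across different GPUs, batch sizes, or kernel implementations can still perturb the logits themselves, so "same seed" only guarantees identical output on an identical hardware/software stack.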




