
The model didn't change.


How do you know?

Oh, let me guess... because OpenAI told you so. OpenAI, the one institution with the strongest incentives to tell people the model is not getting worse.


Or I foresaw this and kept a collection of temperature-0.0 GPT-3.5 outputs around.

People are incredibly silly, myself included. Once you get old enough and see an _insane_ influx of new people, you can figure pretty much exactly this is going to happen. 95% of people don't know what temperature is. Of the 5% remaining, 4.9% think it's something you just tell ChatGPT to adjust.
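For anyone curious what keeping such a collection looks like, here's a minimal sketch. It assumes the openai Python package (1.x client), an OPENAI_API_KEY in the environment, and a pinned snapshot name like gpt-3.5-turbo-0301; the prompts and file name are purely illustrative:

  # Sketch: save temperature-0 baselines from a pinned model snapshot so later
  # runs can be diffed against them. Snapshot name, prompts, and output file
  # are assumptions for illustration, not from this thread.
  import json
  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment
  MODEL = "gpt-3.5-turbo-0301"  # pinned snapshot, not the floating "gpt-3.5-turbo"

  prompts = [
      "Write a Python function that reverses a singly linked list.",
      "Explain the difference between a mutex and a semaphore.",
  ]

  baseline = {}
  for prompt in prompts:
      resp = client.chat.completions.create(
          model=MODEL,
          temperature=0,  # greedy-ish decoding: outputs should be (near) deterministic
          messages=[{"role": "user", "content": prompt}],
      )
      baseline[prompt] = resp.choices[0].message.content

  with open("baseline_0301.json", "w") as f:
      json.dump(baseline, f, indent=2)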


Supposedly. But I have the same experience. I used it to write code for designing a complicated CRISPR experiment, but lately it can’t keep anything straight.


That should lead you to be less confident that it changed, not more.


How so? Seems counterintuitive


I’d expect it based on the absurdity of the difference (it was a master programmer, and now it can’t do anything!), combined with the self-knowledge that zero attempts were made at an objective comparison.

If my Tesla went 80 mph and started going 15, we wouldn’t attribute that to the nature of Teslas, or to a software update: there are vast incentives for anyone who knew such a change was made to share it publicly.

Instead, we know something is wrong with the individual car and we take it to the dealer.

Here, the missing piece is objectivity via a speedometer: I have one, and I know outputs are consistent on the 0315 models at temperature 0.
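Concretely, the "speedometer" is just replaying the saved prompts against the same pinned snapshot at temperature 0 and diffing the results. A rough sketch, reusing the hypothetical baseline_0301.json from the earlier comment (same assumed openai 1.x client and illustrative names):

  # Sketch: replay saved prompts at temperature 0 and report drift from the
  # stored baseline.
  import json
  from openai import OpenAI

  client = OpenAI()
  MODEL = "gpt-3.5-turbo-0301"

  with open("baseline_0301.json") as f:
      baseline = json.load(f)

  for prompt, old_output in baseline.items():
      resp = client.chat.completions.create(
          model=MODEL,
          temperature=0,
          messages=[{"role": "user", "content": prompt}],
      )
      new_output = resp.choices[0].message.content
      status = "MATCH" if new_output == old_output else "DRIFT"
      print(f"[{status}] {prompt[:60]}")

Even at temperature 0 the API isn't guaranteed to be bit-for-bit deterministic, so the occasional small diff is expected; what you're looking for is wholesale drift across the prompt set.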


If you spend less compute power on the same model, the answers can degrade.

It makes total sense that they would throttle due to the incredible global demand. They can't build supercomputers fast enough right now.



