Hacker News

Please try again with Codex on High or Extra High. The 5.1-Max update nerfed it a bit if you don't use higher thinking.
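For reference, the thinking level mentioned above can be set in the Codex CLI's config file; this is a sketch assuming the `~/.codex/config.toml` location and the `model_reasoning_effort` key from the Codex CLI docs:

```toml
# ~/.codex/config.toml — raise the default thinking level
# (key name and accepted values assumed from the Codex CLI documentation)
model_reasoning_effort = "high"   # "xhigh" for Extra High, where the model supports it
```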


This is overparameterisation


No


I guess you have not tried GPT-5 Pro.

GPT’s differentiator is that they focused on training for “thinking,” while Gemini prioritized instant response. Medium thinking is not the limit of its utility.

Re: overparameterization specifically, Medium and High are also identically parameterized.

Medium will also dynamically use even higher thinking than High. High is fixed at a higher level rather than being dynamic, though that level is somewhat below Medium’s upper limit.




