
Do we know parameter counts? The reasoning models have typically been cheaper per token, but use more tokens. Latency is annoying. I'll keep using gpt-4.1 for day-to-day.
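The trade-off above is basically price-per-token times tokens used: a lower rate can still cost more per request if the model spends several times as many tokens on reasoning. A minimal sketch with made-up numbers (neither the prices nor the token counts are real figures for any model):

    # Hypothetical per-token-price vs. token-count trade-off.
    # All prices and token counts are placeholders, not published figures.

    def request_cost(price_per_million_tokens: float, tokens_used: int) -> float:
        """Cost of one request in dollars."""
        return price_per_million_tokens * tokens_used / 1_000_000

    # Hypothetical: non-reasoning model at $8/M output tokens, ~500-token answer.
    baseline = request_cost(8.0, 500)

    # Hypothetical: reasoning model at $4/M output tokens, but ~3000 tokens
    # once the hidden chain of thought is counted alongside the answer.
    reasoning = request_cost(4.0, 3_000)

    print(f"baseline:  ${baseline:.4f}")   # baseline:  $0.0040
    print(f"reasoning: ${reasoning:.4f}")  # reasoning: $0.0120

So under these assumed numbers the "cheaper" reasoning model is about 3x the per-request cost, before even counting the extra latency.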

