
I've heard from insiders that AWS Nova and Google Gemini, both incredibly cheap, still charge more for inference than the server costs of running a query. Since those are among the cheapest models, I expect the same is true of OpenAI and Anthropic.

The subsidies are going toward training. I don't know whether any model is profitable once training and research costs are included.
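A back-of-envelope sketch of that distinction, with every number made up purely for illustration (the price, serving cost, training spend, and query volume below are my assumptions, not figures from any provider): inference can have a positive gross margin per query while the all-in picture, once training is amortized over queries, is still underwater.

    # Back-of-envelope unit economics for an LLM API. All numbers are
    # hypothetical placeholders chosen only to illustrate the argument.
    price_per_query = 0.002            # revenue per query, USD (assumed)
    serving_cost_per_query = 0.001     # GPU/server cost per query, USD (assumed)
    training_cost = 500_000_000        # one-time training + research spend, USD (assumed)
    queries_served = 100_000_000_000   # lifetime queries the model answers (assumed)

    gross_margin = price_per_query - serving_cost_per_query
    amortized_training = training_cost / queries_served
    net_margin = gross_margin - amortized_training

    print(f"Gross margin per query:       ${gross_margin:+.4f}")
    print(f"Amortized training per query: ${amortized_training:.4f}")
    print(f"Net margin per query:         ${net_margin:+.4f}")
    # With these assumed numbers, gross margin is positive (+$0.0010) but the
    # net is negative (-$0.0040) because amortized training dominates. Grow
    # queries_served or shrink training_cost enough and it flips positive,
    # which is exactly the uncertainty the comment is pointing at.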


