Hacker News

A lot of major providers offer their cutting-edge model for free in some form these days; that's merely a market penetration strategy. At the end of the day (if you look at cloud prices), TPUs are only about 30% cheaper. But Nvidia produces orders of magnitude more cards, so Google will certainly need more time to train and globally deploy inference for its frontier models. For example, I doubt they could do with TPUs what xAI did with Nvidia cards.

