
How many people are out there buying H100s for their personal use?


Ah, but part of the reason for CUDA's success is that the open source developer who wants to run unit tests or profile their kernel can pick up a $200 card. A PhD student with a $2,000 budget can pick up a card. An academic lab with $20,000 for a beefy server or a tiny cluster? Nvidia will take their money.

And that's all fixed capital expenditure - there's no risk a code bug or typo by an inexperienced student will lead to a huge bill.
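
To make that first point concrete: the scale of kernel a student writes, unit-tests, and profiles runs fine on any cheap CUDA-capable GeForce card. A minimal sketch using Numba's CUDA support (my choice of library and example here, not something from the thread; it assumes numba, numpy, and a CUDA-capable GPU are installed):

    # A minimal sketch, assuming numba, numpy, and any CUDA-capable card,
    # even a cheap consumer GeForce.
    import numpy as np
    from numba import cuda

    @cuda.jit
    def vector_add(a, b, out):
        i = cuda.grid(1)       # global thread index
        if i < out.size:       # guard: the grid may overshoot the array
            out[i] = a[i] + b[i]

    n = 1 << 20
    a = np.random.rand(n).astype(np.float32)
    b = np.random.rand(n).astype(np.float32)

    # Explicit host -> device transfers keep the data movement visible.
    d_a = cuda.to_device(a)
    d_b = cuda.to_device(b)
    d_out = cuda.device_array_like(a)

    threads = 256
    blocks = (n + threads - 1) // threads
    vector_add[blocks, threads](d_a, d_b, d_out)

    assert np.allclose(d_out.copy_to_host(), a + b)

You can point Nsight Compute at a script like this (e.g. ncu python vector_add.py) and iterate all day with zero marginal cost; nothing here needs data-center hardware.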

Also, if you're looking for an alternative to CUDA because you dislike vendor lock-in, switching to something only available in GCP would be an absurd choice.


I'm really shocked at how dependent companies have become on cloud offerings. Want a GPU? Those are expensive, so let's just rent one on Amazon and then complain about operational costs!

I've noticed this at companies too. Yes, the cloud is expensive, but you already have a data center, and a few servers with RTX 3090s aren't expensive. A lot of research workloads can run on simple, cheap hardware.

Even older Nvidia P40s are still useful.


Probably many orders of magnitude more than those buying TPUs for personal use...


Technically correct, but only because TPUs aren't for sale. An H100 costs around $30,000, if you can even get one.


So in other words, every AI company has at least 20.


Probably not many. However, 4090s would be a different situation. There are plenty of guides on running LLMs, stable diffusion, etc. on local hardware.

The H100s would be for businesses looking to get into this space.
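
And the local route really is that accessible on a single consumer card. A minimal sketch using Hugging Face transformers (assuming PyTorch with CUDA and the transformers library are installed; "gpt2" just stands in for whatever model fits your VRAM):

    # A minimal sketch, assuming PyTorch with CUDA and transformers.
    # "gpt2" is a placeholder for any model small enough for a 4090.
    import torch
    from transformers import pipeline

    device = 0 if torch.cuda.is_available() else -1  # GPU if present, else CPU
    pipe = pipeline("text-generation", model="gpt2", device=device)

    out = pipe("Running LLMs on local hardware is", max_new_tokens=30)
    print(out[0]["generated_text"])

That's the whole barrier to entry for local inference, which is why those guides are everywhere.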




