
To win, they just needed to show up, which is more than can be said of the competition. AMD's alternatives to CUDA have been fumbling in the dark for many years, and more open alternatives like OpenCL are too limited (by design?).

To me the situation looks quite clear: a GPU has vastly more compute than a CPU. As time goes on, we will need and use more and more compute. You just need a way (a general-purpose language or API) to use that GPU. For some reason, the other companies in this space did not see this.
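
To give a sense of what that "way to use the GPU" looks like in practice, here is a minimal CUDA C++ vector-add sketch. This is my own illustration of the standard pattern, not anything from the thread; names like vecAdd are made up for the example:

    #include <cstdio>
    #include <cuda_runtime.h>

    // Each thread handles one element: classic data-parallel style.
    __global__ void vecAdd(const float* a, const float* b, float* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);

        // Host buffers.
        float *ha = (float*)malloc(bytes);
        float *hb = (float*)malloc(bytes);
        float *hc = (float*)malloc(bytes);
        for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

        // Device buffers, plus host-to-device copies.
        float *da, *db, *dc;
        cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
        cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

        // Launch enough 256-thread blocks to cover all n elements.
        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        vecAdd<<<blocks, threads>>>(da, db, dc, n);

        cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
        printf("c[0] = %f\n", hc[0]); // expect 3.0

        cudaFree(da); cudaFree(db); cudaFree(dc);
        free(ha); free(hb); free(hc);
        return 0;
    }

The point is that this is ordinary C++ with a small extension (__global__, the <<<>>> launch syntax); no graphics API, no shader pipeline. That low barrier to entry is a big part of why CUDA won.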



CUDA and Close To Metal happened at the same time.

One is a framework, language extensions, etc.; the other is "hey, here's an assembler for this year's GPUs."

Also, by ~2008 NVIDIA was already making dedicated hardware for GPU compute, like this thing: https://www.nvidia.com/docs/io/43395/d870-systemspec-sp-0371...


They keep failing to see this. While CUDA is a polyglot programming model, at an OpenCL conference (IWOCL) a couple of years ago, someone asked the panel when Fortran support was going to happen.

Everyone on the panel seemed surprised that anyone would want to do that, and most of the answers were of the "talk to us later" kind.

Meanwhile, PGI was already shipping Fortran for CUDA - and this was before they were acquired by NVIDIA.


I agree with this. All they had to do was the bare minimum and actually keep it alive for a few years.

This pattern is pretty common in industry. Almost all the huge companies that are winners in technology are those that got to market and kept the thing alive - that's not sufficient, but it is necessary.




