Nvidia got lucky because at every quarterly all-hands meeting Jensen repeated that he kept investing in CUDA and adding more silicon to the GPUs than strictly needed because, one day, an application would come along that would make it all worth it.
TFA says that Nvidia started aggressively seeding CUDA and GPUs for research in the early 2010s. It was much earlier than that: it started pretty much immediately after CUDA was introduced in late 2006. And with every new generation, hardware features were added to make GPU programming and porting applications less painful. The first Nvision conference, precursor of GTC, was in 2008. That’s how you make your own luck.
I’ll never forget when, sometime around 2012, he answered the question: “Aren’t you afraid of Intel?”
His answer: “Not at all. Intel should be afraid of us. We will be bigger than them.” There was not a trace of doubt.
> "His answer: 'Not at all. Intel should be afraid of us. We will be bigger than them.' There was not a trace of doubt."
Given all the times that HN readers have derided grandiose executive pronouncements preceding flops, more people should recognize the above for what it is: not profundity but just puffery that happened to pan out. Not that skill and effort weren't involved in making it pan out but that any of a zillion things could have gone wrong to make that statement false and part of any manager's job is to project confidence and instill motivation despite knowing that.
I think he had a strategy - utilizing the massively parallel computation of GPUs for more general-purpose compute as Moore's law tailed off - and he noticed that Intel couldn't even see the headlights coming up in its rear-view mirror.
Everybody's known that Moore's law was on its way out, for speed increases at least, since the mid 2000s - the seminal article was by Herb Sutter [1]. So hardware needed to get more parallel. But multicore is a distinctly different paradigm from CUDA, which is closer to SIMD but on a completely different order of magnitude. So Intel was never going to get to where the puck was going.
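To make that paradigm difference concrete, here is a minimal sketch (my own illustration, not from any of the comments, assuming a CUDA-capable GPU and the standard CUDA runtime): instead of splitting a loop across a handful of CPU cores, you launch one lightweight thread per array element, by the hundreds of thousands.

    // Minimal CUDA sketch of the data-parallel model: one thread per element.
    #include <cstdio>
    #include <cuda_runtime.h>

    // Each thread computes exactly one element of y = a*x + y.
    __global__ void saxpy(int n, float a, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;  // one million elements
        float *x, *y;
        cudaMallocManaged(&x, n * sizeof(float));
        cudaMallocManaged(&y, n * sizeof(float));
        for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

        // Launch 4096 blocks of 256 threads: about a million threads in flight,
        // not the 4-16 worker threads a multicore CPU version would use.
        saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
        cudaDeviceSynchronize();

        printf("y[0] = %f\n", y[0]);  // expect 4.0
        cudaFree(x);
        cudaFree(y);
        return 0;
    }

The multicore analogue would be an OpenMP-style parallel loop over a dozen cores: same math, completely different granularity.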
That’s the point, though. This is no different than any other statement made by a CEO with good engineers behind them.
This time it worked out, but you can’t ignore the survivorship bias. I don’t personally mind CEOs being encouraging, but at least understand that they don’t really ever know.
IMO one big factor is that Nvidia is still fully engineering driven - it's engineers all the way to the top making the calls. Intel was like that as well, and then lost it (until Gelsinger). IMO you need domain experts in charge of companies, or they can't thrive in the long run, not unless there is an actual, almost unsurpassable moat.
It's called leadership. George Washington wasn't a brilliant general but he was able to convince people they were going to win against an empire. Whether he actually believed it himself we'll never know.
> Nvidia started aggressively seeding CUDA and GPUs for research in the early 2010s
I was at a niche graphics app startup circa 2000-2005 and even then NVidia invested enough to be helpful with info and new hardware, certainly better than other GPU companies. Post-2010 I was at a Fortune 500, industry-leading tech company, and an NVidia biz dev person came to meet with us every quarter, usually bearing info and sometimes access to new hardware.
It's also worth noting that NVidia has consistently invested more than their peers in their graphics drivers. While the results aren't always perfect, NVidia usually has the best drivers in their class.
Oh interesting. I remember that Folding@Home way back then (ca 2009) was already testing protein folding on GPUs and it took advantage of CUDA. I never really thought much of it other than how cool it was that my mid-tier Nvidia graphics card could be used for something else other than games, but this explains how this ended up happening.
(Bit of a tangent, but that project was very influential in getting me interested in computer science because, wow, how cool is it that we can use GPUs to do insane parallel computing. So I guess, very very indirectly, Nvidia had a part in me being a software engineer today.)
By 2007/2008 there was a trend in HPC research called GPGPU. This involved hacky techniques to get the shaders to do the computations you wanted.
CUDA appeared around 2007 with a framework (compiler, debugger) to do GPGPU in a proper way. It gained a de facto monopoly, and Nvidia has been benefiting from first-mover advantage ever since.
GPGPU was a thing well before that! In 2004 it was already covered in a few chapters of GPU Gems 1, increasing to 18 chapters in the 2005 GPU Gems 2, which included an FFT implementation.
Excellent point! Jensen is very focused and the company has worked incredibly hard on whatever they've put out there. The Shield is a testament to this focus. They find a budding niche and double down on building it from nothing. Most "self-driving" cars have Nvidia gear for a reason.
It's the best Android TV/gaming device around, even by today's standards. It's stable and does what it was designed to do. With zero marketing from Google or Nvidia, the mass market obviously didn't care for this type of product and category, but the device itself is great and works flawlessly. The bundled Nvidia Games app also pushed Nvidia's game-streaming concept around GeForce Now. Overall, Nvidia put their best foot forward with this device, providing standout support for both HW and SW.
The chip line they made for it powers the most popular console on the market and basically locked its manufacturer into Nvidia chips until they're willing to drop compatibility, so financially it probably worked out for them, even if the Shield line itself wasn't extremely financially successful.
I don’t know about its market success, but it’s a great product. We use it as the frontend for all of the streaming platforms and PLEX, as well as for running some stuff directly from a NAS and IPTV.
I use them everywhere and have a big pile of Chromecasts, satellite boxes, remote controls and Apple TVs now ready for eBay!
Some years before CUDA there was a lot of hype when the first GPGPU papers were published in 2003, showing significant performance gains from parallel computation on consumer graphics cards. At the time, it looked like competing on general-purpose computation was a solid strategy: multi-core CPUs from Intel were still years away, showing up in 2005, and starting from 2000 the rate of increase of clock speeds started slumping. We saw Intel release more variants of processors, but the clock speeds weren't advancing exponentially anymore. The new battle for core supremacy was on the horizon.