Crypto miners were never "Nvidia-only", probably because hashing algorithms were trivial to implement in OpenCL and later ROCm.

Nvidia's moat is AI.



Nvidia gpus were massively inferior for Bitcoin mining, in fact, because ATI/AMD had some integer operations that made SHA256 several times more efficient to compute.
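For the curious, here's roughly where those integer ops land. This is a plain-C sketch of SHA-256's round mixers, not an actual mining kernel, and the instruction names are from memory: the hot loop is almost entirely 32-bit rotates and bit-selects, which AMD's ISAs of that era exposed as single instructions (BIT_ALIGN_INT for the rotate, BFI_INT for the selects), while the NVIDIA parts of the time had to emulate each rotate with multiple shifts and ORs (IIRC a native funnel shift only arrived around Kepler).

    #include <stdint.h>

    /* Sketch only: SHA-256's round functions are dominated by rotates and
       bit-selects. A GPU with a single-instruction rotate / bit-field-insert
       does each of these in one op; without them, every rotr() below is
       several shifts and ORs, which is where the old AMD advantage came from. */

    static inline uint32_t rotr(uint32_t x, int n) {
        return (x >> n) | (x << (32 - n));
    }

    /* The two "big sigma" mixers: three rotates + two XORs each. */
    static inline uint32_t Sigma0(uint32_t x) { return rotr(x, 2) ^ rotr(x, 13) ^ rotr(x, 22); }
    static inline uint32_t Sigma1(uint32_t x) { return rotr(x, 6) ^ rotr(x, 11) ^ rotr(x, 25); }

    /* Ch and Maj are bit-selects that map well onto a bit-field-insert
       style instruction. */
    static inline uint32_t Ch (uint32_t x, uint32_t y, uint32_t z) { return (x & y) ^ (~x & z); }
    static inline uint32_t Maj(uint32_t x, uint32_t y, uint32_t z) { return (x & y) ^ (x & z) ^ (y & z); }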

Ironically, that's what ultimately made Nvidia the winner for gpu mining: ATI gpus had been massively deployed for Bitcoin mining prior to the dominance of mining asics. When people created new altcoins, they specifically designed their work functions so that the inventors would have an advantage over the general public, so they designed them for nvidia gpus rather than for what was already deployed. This let them buy up gpus before shortages came into effect and delayed competition from the installed base.

Sure, you can trivially port whatever to whatever, but outside of startup effects, mining is naturally perfectly competitive. Being 20% less cost-efficient means bankruptcy.
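To put rough, made-up-but-representative numbers on that: say electricity is 80% of revenue for the efficient marginal miner.

    efficient rig:           power bill = 0.80 x revenue  ->  20% gross margin
    20% fewer hashes/joule:  energy per hash = 1/0.8 = 1.25x  ->  power bill = 1.00 x revenue  ->  0% margin, before hardware

Difficulty keeps adjusting until the marginal miner is barely profitable, so anyone meaningfully less efficient is mining at a loss.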

> Nvidia's moat is AI.

Nvidia had a gpu computing moat before the current AI fad, due to the maturity of the CUDA ecosystem. At least AI codes are generally pretty easy to port to other architectures, similar to mining in that sense-- but the AI designers don't have a profit motive to make sure they choose algorithms that are more efficient on their hardware than on yours, and your AI hardware doesn't become useless if it does happen to be a few percent less efficient.


It totally depends on the era and the algorithm.

The first big mining wave was 2014-2015. This was done on 5850s and on GCN 1 and 2 GPUs. This was Bitcoin, so it was computation-focused, and I think in this era it was definitively AMD-dominated, due to VLIW allowing very dense execution resources plus early GCN having a large amount of raw integer processing power.

The next was 2017-2018. By this era bitcoin itself had moved off to FPGAs and ASICs, so this was around Ethereum, which primarily worked as proof of memory bandwidth. AMD GPUs were falling behind Maxwell/Pascal in terms of memory compression (although they did use it), so they were equipped with more memory bandwidth to compensate. So for a given AMD card (Polaris, Vega, etc) you got more memory bandwidth per $, but in terms of actual compute efficiency, NVIDIA had already pushed ahead even in Ethereum. NVIDIA was usually superior per-watt with cards like the 1060, 1070, 1070 Ti, and 1080 Ti, and AMD just let you burn more watts.
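For flavor, here's a heavily simplified sketch of what "proof of memory bandwidth" means in code (plain C, not real Ethash; the buffer size and the mixing arithmetic are placeholders I made up). The point is that each candidate nonce forces dozens of pseudorandom reads from a buffer far bigger than any cache, so the bottleneck is random-access memory bandwidth, not ALU throughput:

    #include <stdint.h>

    #define DAG_WORDS (1u << 28)   /* ~1 GiB of 32-bit words; placeholder, the real DAG grows over time */
    #define ROUNDS    64           /* fetch-and-mix rounds per candidate hash */

    /* Each round derives a pseudorandom index and loads from it. The loads
       effectively always miss cache, so throughput is roughly memory
       bandwidth divided by bytes touched per hash -- which is why wide-bus
       cards ruled this era. */
    uint32_t mix_one_nonce(const uint32_t *dag, uint32_t nonce) {
        uint32_t mix = nonce * 2654435761u;                 /* arbitrary seed mix */
        for (uint32_t i = 0; i < ROUNDS; i++) {
            uint32_t idx = (mix ^ (i * 0x9E3779B9u)) & (DAG_WORDS - 1);
            mix = (mix * 33u) ^ dag[idx];                   /* the expensive part is this load */
        }
        return mix;                                         /* would be compared against a target */
    }

IIRC the real thing pulls 128-byte pages from a multi-GB DAG and seeds the mix from a Keccak hash of the header and nonce, but the economics are the same: whoever has the cheapest random-access bandwidth wins.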

However when it came to altcoins, where it was not just raw bandwidth, NVIDIA's superior compute/GPGPU efficiency took over, and there were some coins on which NVIDIA was 2x or more efficient per watt and also the winner in absolute performance.

(the thing to remember is that compute is the part that ASICs can do efficiently, and I always questioned whether those altcoins were really ASIC-resistant. But the ProgPOW-style algorithms doing better on NVIDIA cards never bothered/confused me; the reality is that most GPGPU programs "favored NVIDIA" during this era, and Ethereum's proof-of-bandwidth model was the exception. Pascal was an efficiency beast, and Polaris and Vega were only ok at best, outside their enormous, dangling memory buses.)

This state of affairs persisted through the 5700 series, until AMD launched the 6000 series, where they shifted to a design with smaller memory buses and more cache. That put them in the inverse of the 2014-2015 situation: they were now getting more out of a weaker memory subsystem than NVIDIA. And I think they did this on purpose because they wanted to "opt out" of the mining boom/bust cycle. NVIDIA took a similar approach with Ada, which has been extremely unpopular (despite AMD leading the way on this a few years earlier).

Isn't it wonderfully coincidental that, out of all the amounts of throttling NVIDIA could have built into LHR for when it detected mining, they chose exactly the amount that dropped their cards to the same relative mining performance as AMD, and then moved to the same cache-based approach in the next generation as well? That has always been my take on LHR - it's not that they didn't like mining revenue, it's that they didn't want NVIDIA cards to be disproportionately pulled off shelves like what happened to AMD in the 2014 and 2017 mining booms. People already remember the "AMD was $1600, NVIDIA was $2400" situation; they didn't want that to persist and turn into an actual marketshare shift.


Nvidia's moat is a polyglot GPGPU programming environment, great tooling, and libraries.



