The article is simply wrong: dithering is still widely used, and no, we do not have enough color depth to avoid it. Go render a blue-sky gradient without dithering and you will see obvious bands.
Yep, even high-quality 24-bit uncompressed imagery often benefits from dithering, especially if it's synthetically generated. Even natural imagery that's been processed or manipulated, however mildly, will probably benefit from dithering. And if it's a digital photograph, it was probably already dithered during the de-Bayering (demosaicing) process.
You can do it with a static dither pattern (I've done it, and it works well). It's a bit of a trade-off between banding and noise, but at least static content stays static and is thus easily compressible.
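For concreteness, here's a minimal sketch of that kind of static pattern, assuming numpy and the classic 4x4 Bayer threshold matrix; the threshold depends only on pixel position, so the pattern never changes from frame to frame:

    import numpy as np

    # Classic 4x4 Bayer threshold matrix, scaled to [0, 1). The threshold
    # depends only on pixel position, so the pattern is completely static.
    BAYER_4X4 = np.array([[ 0,  8,  2, 10],
                          [12,  4, 14,  6],
                          [ 3, 11,  1,  9],
                          [15,  7, 13,  5]]) / 16.0

    def ordered_dither(img, levels=256):
        # img: grayscale float array in [0, 1]; returns values snapped to
        # `levels` steps, with the Bayer threshold breaking up the bands.
        h, w = img.shape
        tile = np.tile(BAYER_4X4, (h // 4 + 1, w // 4 + 1))[:h, :w]
        return np.floor(img * (levels - 1) + tile) / (levels - 1)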
A simple black-to-white gradient can be at most 256 pixels wide before it starts banding on the majority of computers, which use SDR displays. HDR only gives you a couple of extra bits, and each bit merely doubles how wide the gradient can get before it runs out of unique values. If the two endpoint colors of the gradient are closer together, you get banding sooner. Dithering completely solves gradient banding.
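If you want to see it for yourself, here's a rough sketch (assuming numpy and Pillow) that writes the same 1024-px ramp out twice, once quantized straight to 8 bits and once with roughly one LSB of noise added first; the first shows clear 4-pixel-wide bands, the second doesn't:

    import numpy as np
    from PIL import Image  # assumes Pillow is installed

    W, H = 1024, 128
    # Horizontal black-to-white ramp: 1024 columns but only 256 output codes,
    # so straight quantization produces 4-pixel-wide flat bands.
    ramp = np.tile(np.linspace(0.0, 1.0, W), (H, 1))

    banded = np.round(ramp * 255)                      # plain 8-bit quantization
    noise = np.random.uniform(-0.5, 0.5, ramp.shape)   # ~1 LSB of white noise
    dithered = np.clip(np.round(ramp * 255 + noise), 0, 255)

    Image.fromarray(banded.astype(np.uint8)).save("ramp_banded.png")
    Image.fromarray(dithered.astype(np.uint8)).save("ramp_dithered.png")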
The average desktop computer is running at 8 bits per channel the vast majority of the time, so find or generate basically any wide, simple gradient and you'll see it.
In terms of color spaces, sRGB (the default baseline RGB of desktop computing) is quite naive and inefficient. Pretty much its only upsides are conceptual and mathematical simplicity. There are much more efficient color spaces that use dynamic, non-linear curves based on how the human visual system actually perceives light and color.
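To make the "curve" part concrete, this is the standard sRGB transfer function: one fixed piecewise curve that all 256 code values per channel are spread along regardless of content (constants per the sRGB spec), whereas perceptual HDR curves like PQ are instead fitted to measured contrast sensitivity:

    def srgb_encode(linear):
        # Fixed piecewise sRGB curve: linear near black, ~gamma 2.4 elsewhere.
        if linear <= 0.0031308:
            return 12.92 * linear
        return 1.055 * linear ** (1 / 2.4) - 0.055

    def srgb_decode(encoded):
        # Inverse: sRGB code values (normalized to [0, 1]) back to linear light.
        if encoded <= 0.04045:
            return encoded / 12.92
        return ((encoded + 0.055) / 1.055) ** 2.4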
> We don't really need dithering anymore because we have high bit-depth colors, so it's largely just a retro aesthetic now.
By the way, dithering in video creates additional problems because you want some kind of stability between successive frames.
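A quick way to see that trade-off (illustrative sketch, assuming numpy): a threshold that depends only on pixel position is identical on every frame, while re-rolling the noise per frame kills banding but shows up as flicker and extra work for the video encoder:

    import numpy as np

    def dither_static(frame, pattern):
        # Same threshold pattern on every frame: temporally stable, and the
        # codec sees it as unchanging detail.
        return np.floor(frame * 255 + pattern) / 255

    def dither_per_frame(frame, frame_index):
        # Fresh noise each frame: no banding, but the noise itself changes
        # from frame to frame, which reads as flicker and costs bitrate.
        noise = np.random.default_rng(frame_index).uniform(0.0, 1.0, frame.shape)
        return np.floor(frame * 255 + noise) / 255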