I think Cook left easy money on the table by not competing against NVIDIA. They could've tested the waters by putting Apple Silicon on PCIe add-in cards, maturing the toolkit for AI workloads, and selling them at competitive prices. Yes, I know they're in the business of making the entire widget, but the hardware and software stacks are already there. Unlimited upside with nearly zero downside risk.
Apple seems to be avoiding building server hardware for some reason. It looks like a big opportunity: besides AI, the power efficiency of their chips would surely be attractive for datacentres. I think momentum is building for moving away from x86.
Most Macs (both Intel and Apple Silicon) refuse to thermal-throttle until they reach the junction temp.
Both you and the parent can be correct here: many Macs are quite cool at idle, but they also throttle much more slowly than equivalent Intel or AMD chips under load.
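If you want to check this yourself rather than argue about it, here's a minimal sketch that polls the thermal pressure macOS reports via Foundation's ProcessInfo.thermalState while you run a heavy workload. The API and its enum cases are real; the 5-second poll interval and the label strings are just illustrative choices.

```swift
import Foundation

// Sketch: watch macOS's reported thermal pressure while the machine is under
// load, to see roughly when (or whether) the OS signals throttling.

func describe(_ state: ProcessInfo.ThermalState) -> String {
    switch state {
    case .nominal:  return "nominal (no throttling)"
    case .fair:     return "fair (mild pressure)"
    case .serious:  return "serious (throttling likely)"
    case .critical: return "critical (heavy throttling)"
    @unknown default: return "unknown"
    }
}

// Print the state every 5 seconds; kick off a heavy workload in another
// terminal and watch when the state leaves "nominal".
while true {
    print(Date(), describe(ProcessInfo.processInfo.thermalState))
    Thread.sleep(forTimeInterval: 5)
}
```

On many Macs you'll sit at "nominal" for a surprisingly long time even with the fans spun up, which is consistent with the "doesn't throttle until junction temp" observation above.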
> left easy money on the table by not competing against NVIDIA
What!? Seems to me the timelines don't support this. Apple has already had its hands full with its own chip design effort (multiple generations of CPUs) - would they really have had the bandwidth to take on the GPU field as well? And Apple's successes here are more recent than NVIDIA's with GPUs: Apple Silicon simply wasn't there yet when NVIDIA created and then conquered the GPU compute market.
I don't know why this is getting downvoted. Apple could certainly build very capable hardware and software for cloud AI workloads - directly taking on NVIDIA.