Exactly. The inability of people to extrapolate into the future and foresee second-order effects is astounding. We saw this with climate change and we've just seen it with COVID. The ones with foresight are warning about the massive upheaval coming. It's time for people to shake off their preconceived notions, look at the situation with fresh eyes, and think deeply about what the technology diff from 5 years ago to today means for 5 years from now.
Things on an exponential trend tend to continue unless they hit a fundamental limit; at that point the curve passes through an inflection and flattens out into an S-curve.
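To make that concrete, here's a minimal sketch (made-up growth rate and ceiling, purely illustrative): an exponential and a logistic S-curve are nearly indistinguishable until the limit starts to bind, which is why you can't tell from the early data alone which one you're on.

```python
import numpy as np

# Illustrative parameters only: r is a growth rate, K a hypothetical ceiling.
t = np.linspace(0, 20, 200)
r, K = 0.5, 1000.0

exponential = np.exp(r * t)                    # no limit: grows forever
logistic = K / (1 + (K - 1) * np.exp(-r * t))  # limit K: inflects at K/2, then flattens

# Early on the two trajectories match to within a few percent; they only
# diverge as the logistic curve approaches its fundamental limit K.
print(np.allclose(exponential[:50], logistic[:50], rtol=0.05))  # True
```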
Moore's law continued on an exponential for decades. The fundamental limit on transistor density is the laws of physics (the uncertainty principle will eventually be a problem), but so many new paradigms in compute improvement have emerged (especially in GPUs and AI-specific compute) that progress has become super-exponential in some respects.
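The compounding is easy to underestimate. Back-of-the-envelope, assuming the classic ~2-year doubling cadence:

```python
# Assumed ~2-year doubling period over 50 years (the textbook Moore's law
# framing, not a precise historical fit).
doublings = 50 / 2
growth_factor = 2 ** doublings
print(f"{growth_factor:,.0f}x")  # ~33,554,432x over five decades
```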
So the question is whether there is a fundamental barrier that AI will hit. The main issues people bring up are a lack of high-quality human-generated data, falling returns per unit of compute spent, and the limits of autoregressive models. However, pretraining seems to be the only paradigm beginning to show diminishing returns; test-time compute and RL are still on the exponential curve.
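For a sense of what "falling returns per compute" means in practice, here's a hedged sketch: the power-law form loss ~ a * C^(-alpha) follows the published neural scaling-law literature, but the constants a and alpha below are made up for illustration.

```python
import numpy as np

# Illustrative constants only; real scaling-law fits differ by model and data.
a, alpha = 10.0, 0.05
compute = np.logspace(0, 10, 6)       # 1x to 10^10x compute budget
loss = a * compute ** (-alpha)

for c, l in zip(compute, loss):
    print(f"compute {c:>14,.0f}x -> loss {l:.2f}")
# Each 100x jump in compute buys a smaller absolute loss reduction:
# exponential spend for roughly linear gains, which is what "diminishing
# returns from pretraining" cashes out to.
```

Under this framing, test-time compute and RL matter because they are new axes of scaling rather than a steeper climb up the same flattening curve.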