
Things on an exponential trend tend to continue until they hit a fundamental limit, at which point there is an inflection and growth tapers off into an S-curve.
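A minimal sketch of the distinction (the growth rate r and the limit K below are illustrative assumptions, not empirical values): early on, a logistic S-curve is indistinguishable from a pure exponential, and only the limit K produces the inflection and flattening.

    # Illustrative only: compare pure exponential growth with a logistic
    # (S-curve) sharing the same early-stage growth rate r.
    # r, K, and x0 are assumed values chosen for the demo, not measurements.
    import math

    r = 0.5     # growth rate (assumed)
    K = 100.0   # fundamental limit / carrying capacity (assumed)
    x0 = 1.0    # starting value

    for t in range(0, 21, 2):
        exponential = x0 * math.exp(r * t)
        # closed-form logistic: x(t) = K / (1 + ((K - x0) / x0) * e^(-r t))
        logistic = K / (1 + ((K - x0) / x0) * math.exp(-r * t))
        print(f"t={t:2d}  exp={exponential:12.1f}  logistic={logistic:6.1f}")

The two curves track each other closely at first; the logistic inflects when it reaches K/2 and then saturates, which is exactly the "exponential until a fundamental limit" pattern.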

Moore's law continued on an exponential for decades. The fundamental limit on transistor density is the laws of physics (the uncertainty principle will eventually be a problem), but so many new paradigms of compute improvement have emerged (especially in GPUs and AI-specific accelerators) that progress has become super-exponential in some respects.

So the question is whether there is a fundamental barrier that AI will hit. The main candidates people bring up are a lack of high-quality human-generated data, falling returns per unit of compute spent, and inherent limits of autoregressive models. So far, though, pretraining seems to be the only paradigm showing diminishing returns; test-time compute and RL are still on the exponential curve.


