I keep hearing this, and then I try using Windsurf with the latest Anthropic models to perform simple refactorings. There are often goofy mistakes, like hallucinations that mistakenly remove imports. Will check back next year…
Well, these are nonlinear emergence engines. No two will come up with the same solution, and the gap widens the more ambiguous, dynamic, or complex the problem gets.
Just because your AI gives you a solution doesn't mean my AI will provide the same one. Now scale that fact up to different teams and different firms. How are things going to work? Why would it reduce the number of people? Just like in Jurassic Park (or working on Linux), once strange, unpredictable things start happening, you need more and more people running around to clean things up. They don't know what the fuck they are doing, or how to do it well, because that's the nature of complex problems. So things spiral.
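To make the divergence point concrete, here's a toy sketch in plain Python. Everything in it is made up for illustration (the token names, the logits, the sample() helper); it just shows that any sampling temperature above zero turns next-token choice into a weighted dice roll, so identical inputs can produce different outputs on every run:

    import math
    import random

    def sample(logits, temperature=1.0):
        # Scale scores by temperature, then softmax into probabilities.
        scaled = [l / temperature for l in logits]
        m = max(scaled)
        exps = [math.exp(s - m) for s in scaled]
        total = sum(exps)
        probs = [e / total for e in exps]
        # Draw a token index according to those probabilities.
        return random.choices(range(len(logits)), weights=probs)[0]

    tokens = ["refactor_v1", "refactor_v2", "refactor_v3"]
    logits = [2.0, 1.8, 1.5]  # hypothetical model scores, nearly tied

    for run in range(5):
        print(run, tokens[sample(logits, temperature=1.0)])

Run it a few times and the picks differ; as temperature goes toward zero it collapses to the argmax and every run agrees. A real model makes this kind of roll thousands of times per answer, so small divergences compound into entirely different solutions.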
Most people are just defaulting to: oh, AI will have one answer to everything, and we will all agree on that solution. This will never happen, and therefore the predictions will all break.