I have no idea what you mean by "AGI, however, is mathematically impossible."
Further, your point about political pushback is short-sighted. As AI becomes more lucrative there will be more impetus to "pay" locations to host data centers, and once that becomes too expensive, space is clearly the next answer.
The development of AGI assumes zero constraints, when constraints exist at every layer of the stack. That's why it's mathematically impossible.
In a system driven by capital, manufacturing can ramp to an extent, but it generally can't ramp exponentially because of the dependencies it has.
When you ramp one layer of the stack, the other layers get pressurized. We're seeing a small preview of that now with memory pricing. But these break points for AGI are everywhere: power capacity, power infrastructure, DC labor, cooling systems, memory, motherboards, GPUs. All of these things have dependencies that cannot be scaled exponentially, or quickly. As you pressure each of these dependencies, prices rise exponentially.
Let's take memory, for instance: it's merely one block in the Jenga tower, but it's a good example. Memory is already at close to 100% capacity utilization. Spinning up new capacity is highly constrained, and money can't really make it go faster. Lead times are 4+ years on new plants, which cost billions.
The same is true for other components, and in some cases the situation is worse.
"Won't happen for 4+ years" and "mathematically impossible" are quite different. Given that humans apparently exhibit the "GI" part of "AGI", I find "mathematically impossible" difficult to believe. "Extremely unlikely with current LLM architecture", sure, but that's a very different statement from "mathematically impossible".
If you are making a prediction about the viability of AGI assuming that an entirely new technology will make the efficiency problem of LLMs moot, then you're essentially engaging in mysticism, aren't you?
It is correct to say it is mathematically impossible, because all the people making AGI claims rely on advances that are not even theoretical: they have not been discovered yet, and their mere possibility is questioned by many scientists.
LLMs have hard and soft limits all over the place preventing AGI. You aren't gonna train and loop yourself to AGI because the compute does not exist, and will not exist.
My 4+ year point was for a single memory fab. Increasing capacity by merely 5% (generous assumption) takes 4 years and $10bn. It's starting to sound like the path to AGI in the current paradigm will cost infinite dollars and take infinite years of build-out.
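The fab arithmetic above can be sketched out. All the numbers here are the assumptions from my comment (5% capacity per fab, 4-year lead time, $10bn per plant), plus a made-up cap on how many fabs can be built in parallel; it's a back-of-envelope illustration, not industry data:

```python
import math

# Assumed numbers from the comment above, plus a hypothetical parallelism cap.
FAB_CAPACITY_GAIN = 0.05      # each new fab adds ~5% of then-current capacity
FAB_LEAD_TIME_YEARS = 4       # time to bring one fab online
FAB_COST_BN = 10              # $bn per fab
MAX_PARALLEL_BUILDS = 5       # assumed supply-chain limit on concurrent builds

target_multiple = 10.0        # suppose AGI-scale demand needs 10x today's memory

# Each "wave" of parallel builds multiplies capacity by (1 + 5 * 0.05) = 1.25.
per_wave_growth = 1 + MAX_PARALLEL_BUILDS * FAB_CAPACITY_GAIN
waves = math.ceil(math.log(target_multiple) / math.log(per_wave_growth))
years = waves * FAB_LEAD_TIME_YEARS
cost_bn = waves * MAX_PARALLEL_BUILDS * FAB_COST_BN

print(f"waves={waves}, years={years}, cost=${cost_bn}bn")
# Even with generous parallel building, 10x capacity takes decades and
# hundreds of billions of dollars under these assumptions.
```

Tweak the parallelism cap or the per-fab gain however you like; the point is that linear-ish build-out against exponential demand blows up in time and dollars.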
Even with a transformational efficiency breakthrough, you still have hard limits all over the place. Where are you going to store all the data? Memory constraints again.
Given that comparative advantage offers an offramp from this for a lot of what we currently understand as "economics", if the author is positing that we will be beyond this, then your response is missing the forest for the trees.
There is no indication that the surplus extracted by automated labour will be distributed to the advantage of the population. If we look at how things are going at the moment, there will be a further concentration of power and capital, and I don't see any reason why the billionaire class should give this up. You could, of course, give an argument for why things will be different this time.
If comparative advantage no longer holds, then that's really something: no one understands what happens in that future, and proposing some random solution at this point is unbelievably premature.
Seriously Guardian, this has to be the least interesting question possible: "what if AI makes human labor obsolete?" I mean FFS, talk about a lack of understanding.
Let's give it a couple of days since no one believes anything from benchmarks, especially from the Gemini team (or Meta).
If we see on HN that people are willingly switching their coding environment, we'll know "hot damn they cooked"; otherwise this is another whiff by Google.
Sundar "the manager" has presided over enormous growth of the businesses he was handed. He also presided over the complete collapse of the internal culture. OTOH he may have fired Diane Greene, so that's something. Overall, at best: meh.
Demis ran a startup that burnt cash on vanity projects, and it continues to burn cash on vanity projects. Gemini is barely open-source-quality AI, but Google makes it nearly free and has the best distribution on the planet.
Gemini has been a joke since 1.0. No release has hurt Google's brand more. 3.0 was SOTA for about 2 days, easily Gemini's best release.
Anthropic and OAI are moving at an amazing pace; Google cannot keep up at all.
Not a single person is using it for coding (outside of Google itself).
Maybe some people on a very generous free plan.
Their model is a fine mid-2025 model, backed by enormous compute resources and an army of GDM engineers to help the “researchers” keep the model on task as it traverses the “tree of thoughts”.
But that isn’t “the model” that’s an old model backed by massive money.
Are there market counterpoints that aren't really just a repackaging of:
1. "Google has the world's best distribution" and/or
2. "Google has a firehose of money that allows them to sell their 'AI product' at an enormous discount"?
But then you drifted.