HardCodedBias's comments

I upvoted you right after reading the first line.

But then you drifted.

I have no idea what you mean by "AGI, however, is mathematically impossible."

Further, your point about political pushback is short-sighted. As AI becomes more lucrative there will be more impetus to "pay" locations to host data centers, and as that becomes too expensive, space is clearly the next answer.


The case for AGI assumes zero constraints, when in fact constraints exist at every layer of the stack. That's why it's mathematically impossible.

In a system driven by capital, manufacturers can ramp to an extent, but they generally can't ramp exponentially because of the dependencies they have.

When you ramp one layer of the stack, the other layers come under pressure. We're seeing a small preview of that now with memory pricing. But these break points for AGI are everywhere: power capacity, power infrastructure, data-center labor, cooling systems, memory, motherboards, GPUs. All of these have dependencies that cannot be scaled exponentially, or quickly. As you pressure each of these dependencies, prices rise exponentially.

Let's take memory, for instance. It is merely one block in the Jenga tower, but it's a good example. Memory production is already at close to 100% capacity. Spinning up new capacity is highly constrained, and money can't really make it go faster: lead times are 4+ years on new plants, which cost billions.

The same is true for other components, and in some cases the situation is worse.


"Won't happen for 4+ years" and "mathematically impossible" are quite different. Given that humans apparently exhibit the "GI" part of "AGI", I find "mathematically impossible" difficult to believe. "Extremely unlikely with current LLM architecture", sure, but that's a very different statement from "mathematically impossible".

If you are making a prediction about the viability of AGI by assuming that an entirely new technology will make the efficiency problem of LLMs moot, then you're essentially engaging in mysticism, aren't you?

It is fair to call it mathematically impossible, because everyone making AGI claims relies on advances that are not even theoretical yet: they haven't been discovered, and the mere possibility of them is questioned by many scientists.

LLMs have hard and soft limits all over the place preventing AGI. You aren't going to train and loop your way to AGI, because the compute does not exist and will not exist.

My 4+ year point was for a single memory fab. Increasing global capacity by merely 5% (a generous assumption) takes 4 years and $10bn. It's starting to sound like the path to AGI in the current paradigm will cost infinite dollars and take infinite years of build-out.
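Taking the figures above at face value (they are the commenter's rough numbers, not sourced data), the back-of-envelope arithmetic looks like this:

```python
import math

# Toy model using the parent comment's assumed numbers: one new memory
# fab ~= +5% of today's global capacity, ~$10bn, ~4-year lead time.
FAB_CAPACITY_GAIN = 0.05  # fraction of current global capacity per fab
FAB_COST_BN = 10          # USD billions per fab
FAB_LEAD_YEARS = 4        # years before a new fab produces anything

def cost_to_multiply(target_multiple: float) -> tuple[int, int]:
    """Fabs and $bn needed to multiply capacity, assuming each new fab
    adds a fixed 5% of *original* capacity (linear, not compounding)."""
    extra = target_multiple - 1.0
    fabs = math.ceil(extra / FAB_CAPACITY_GAIN)
    return fabs, fabs * FAB_COST_BN

fabs, cost = cost_to_multiply(2.0)  # merely doubling capacity
print(fabs, cost)  # -> 20 200: twenty fabs, $200bn, none online for 4 years
```

Even if the per-fab numbers are off by 2x in either direction, the shape of the conclusion doesn't change: capital alone can't compress the lead time.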

Even with a transformational efficiency breakthrough, you still have hard limits all over the place. Where are you going to store all the data? Memory constraints again.


What a poor take.

"AI makes human labor obsolete"?

Given that comparative advantage gives an off-ramp from this for much of what we currently understand as "economics", if the author is positing that we will be beyond this, then your response is missing the forest for the trees.


There is no indication that the surplus extracted by automated labour will be distributed to the advantage of the population. If we look at how things are going at the moment, what we see is a further concentration of power and capital, and I don't see any reason why the billionaire class should give that up. You could, of course, give an argument for why things will be different this time.

I will repeat myself:

comparative advantage

[edit] I will further repeat myself:

If comparative advantage does not hold, then that's really something: no one understands what happens in that future, and proposing some random solution at this point is unbelievably premature.
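For readers who haven't seen it, the standard Ricardian illustration of comparative advantage, with made-up numbers: even if an AI is absolutely better at everything, the human still has a comparative advantage wherever their relative disadvantage is smallest.

```python
# Textbook two-good example with assumed (illustrative) productivities:
# output per hour for each producer. The AI beats the human at both
# goods in absolute terms.
output = {
    "AI":    {"code": 100, "essays": 50},
    "human": {"code": 1,   "essays": 2},
}

def opportunity_cost(producer: str, good: str, other: str) -> float:
    """Units of `other` forgone to produce one unit of `good`."""
    o = output[producer]
    return o[other] / o[good]

# Cost of one essay, measured in forgone code:
ai_cost = opportunity_cost("AI", "essays", "code")        # 2.0 code per essay
human_cost = opportunity_cost("human", "essays", "code")  # 0.5 code per essay
print(ai_cost, human_cost)
# The human's essays cost less forgone code than the AI's, so total
# output rises if the human writes essays and trades for code.
```

Whether this logic survives a world of effectively unbounded AI labor supply is exactly the open question the thread is arguing about.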


Seriously, Guardian, this has to be the least interesting question possible: "if AI makes human labor obsolete". I mean, FFS, talk about a lack of understanding.

If this had happened it would be scandalous. Unreal, really.

Luckily, it sounds like the reality was that his Gemini account was banned. Much more reasonable.


"They can't afford to fall behind on it."

They are very, very seriously far behind as of 3.0.

We'll see if 3.1 addresses the issue at all.


"These models are so powerful."

Careful.

Gemini simply, as of 3.0, isn't in the same class for work.

We'll see in a week or two if it really is any good.

Bravo to those who are willing to give up their time to test for Google to see if the model is really there.

(History says it won't be. Anthropic and OAI really are the only two in this race ATM.)


LOL come on man.

Let's give it a couple of days since no one believes anything from benchmarks, especially from the Gemini team (or Meta).

If we see on HN that people are willingly switching their coding environments, we'll know "hot damn, they cooked"; otherwise this is another whiff by Google.


You can’t put Gemini and Meta in the same sentence. Llama 4 was DOA, and Meta has given up on frontier models. Internally they’re using Claude.

After spending all that money and firing a bunch of people? Is the new group doing anything at this point?

They are busy demonstrating that Mark Zuckerberg has no sense at all.

DeepMind was their worst acquisition ever. It is a vanity project that burns cash.

Let's be real.

Google leadership is pathetic.

Sundar "the manager" has presided over enormous growth of the businesses he was handed. He also presided over the complete collapse of the internal culture. OTOH he may have fired Diane Greene, so that's something. Overall: at best, meh.

Demis ran a startup that burnt cash on vanity projects and continues to burn cash on vanity projects. Gemini is barely open-source-quality AI, but Google makes it nearly free and has the best distribution on the planet.

Gemini has been a joke since 1.0. No release has hurt Google's brand more. 3.0 was SOTA for about 2 days, easily Gemini's best release.

Anthropic and OAI are moving at an amazing pace; Google cannot keep up at all.


Their models are absolutely not impressive.

Not a single person is using it for coding (outside of Google itself).

Maybe some people on a very generous free plan.

Their model is a fine mid 2025 model, backed by enormous compute resources and an army of GDM engineers to help the “researchers” keep the model on task as it traverses the “tree of thoughts”.

But that isn’t “the model” that’s an old model backed by massive money.


Uhh, just false.

It's just poop tier.

Come on.

Worthless.

Do you have any market counterpoints?

Market counterpoints that aren't really just a repackaging of:

  1. "Google has the world's best distribution" and/or
  2. "Google has a firehose of money that allows them to sell their 'AI product' at an enormous discount"?
Good luck!
