>And, for at least a while, it would also presumably be doing so at an exponentially increasing rate.
Why would you presume this? I think talk like this is part of what drives a lot of people's AI skepticism. You have no idea. Full stop. Why wouldn't progress be linear? As new breakthroughs come, newer ones will be harder to come by. Perhaps it's exponential. Perhaps it's linear. No one knows.
No one knows, but it's a reasonable assumption surely. If you're theorising an AGI that has recursive self-improvement, exponential improvements seem almost unavoidable. The AGI improves our understanding of electronics, physics, etc.; that improves the AGI, leading to new understandings, and so on. Add in that new discoveries in one field might inspire the AGI/humans to find things in others, and it seems hard to imagine a situation where there's not a lot of progress everywhere (at least theoretical progress; building new things might be slower / more costly than reasoning that they would work).
Where I'm skeptical of AI is the idea that an LLM can ever get to AGI level, whether AGI is even really possible, and whether the whole thing is actually viable. I'm also very skeptical that the discoveries of any AGI would be shared in ways that would allow exponential growth: licenses forbidding you from using their AGI to build your own, copyright on the new laws of physics, royalties on any discovery you make from using those new laws, etc.
>If you're theorising an AGI that has recursive self-improvement, exponential improvements seem almost unavoidable.
Prove it.
Also, AI will need resources. Hardware. Water. Electricity. Can those resources be supplied at an exponential rate? People need to calm down and stop stating things as truth when they literally have no idea.
Well said. It does seem that many who speculate on this are not taking into account limits where more/faster processing won't actually help much. Say an algorithm is proven to be O(n!) for all cases: at a certain size of n, there's not much that can be done if the algorithm is needed as is.
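To put numbers on it, here's a quick Python sketch (the one-operation-per-nanosecond compute budget is purely illustrative): even 1000x more compute barely moves the largest n you can handle with an O(n!) algorithm.

```python
import math

# Illustrative only: suppose one "operation" takes 1 ns and we can
# afford about a year of compute (~3.15e16 operations).
budget = 3.15e16

n = 1
while math.factorial(n + 1) <= budget:
    n += 1
print(f"largest n within budget: {n}")           # prints 18

# A machine 1000x faster barely moves the needle against factorial growth.
budget *= 1000
while math.factorial(n + 1) <= budget:
    n += 1
print(f"largest n with 1000x the compute: {n}")  # prints 20
```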
It's a logical presumption. Researchers discover things. AGI is a researcher that can be scaled, research faster, and requires no downtime. Full stop. If you don't find that obvious, you should probably figure out where your bias is coming from. Coding and algorithmic advance does not require real world experimentation.
> Coding and algorithmic advance does not require real world experimentation.
That's nothing close to AGI though. An AI of some kind may be able to design and test new algorithms because those algorithms live entirely in the digital world, but that skill isn't generalized to anything outside of the digital space.
Research is entirely theoretical until it can be tested in the real world. For an AGI to do that it doesn't just need a certain level of intelligence, it needs a model of the world and a way to test potential solutions to problems in the real world.
Claims that AGI will "solve" energy, cancer, global warming, etc all run into this problem. An AI may invent a long list of possible interventions but those interventions are only as good as the AI's model of the world we live in. Those interventions still need to be tested by us in the real world, the AI is really just guessing at what might work and has no idea what may be missing or wrong in its model of the physical world.
If AGI has human capability, why would we think it could research any faster than a human?
Sure, you can scale it, but if an LLM takes, say, $1 million a year to run an AGI instance, but it costs only $500k for one human researcher, then it still doesn’t get you anywhere faster than humans do.
It might scale up, it might not, we don’t know. We won’t know until we reach it.
We also don’t know if it scales linearly, or whether its learning capability and capacity will be able to support an exponential capability increase. Our current LLMs don’t even have the capability of self-improvement or learning: they can accumulate additional knowledge through the context window, but the models are static unless you fine-tune or retrain them. What if our current models were ready for AGI but these limitations are stopping it? How would we ever know? Maybe it will be able to self-improve, but it will take exponentially larger amounts of training data. Or exponentially larger amounts of energy. Or maybe it can become “smarter” but at the cost of being larger, to the point where the laws of physics mean it has to think slower: 2x the thinking but 2x the time, could happen! What if an AGI doesn’t want to improve?
> Sure, you can scale it, but if an LLM takes, say, $1 million a year to run an AGI instance, but it costs only $500k for one human researcher, then it still doesn’t get you anywhere faster than humans do.
Just from the fact that the LLM can/will work on the issue 24/7 vs a human who typically will want to do things like sleep, eat, and spend time not working, there would already be a noticeable increase in research speed.
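Back-of-the-envelope with the thread's made-up numbers: even at twice the yearly cost, an instance that works around the clock can come out ahead per working hour.

```python
# Illustrative numbers taken from this thread, not real figures.
human_cost, human_hours = 500_000, 2_000    # ~40 h/week, 50 weeks
agi_cost, agi_hours     = 1_000_000, 8_760  # 24/7 for a year

print(human_cost / human_hours)  # ~$250 per working hour
print(agi_cost / agi_hours)      # ~$114 per working hour
```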
This assumes that all areas of research are bottlenecked on human understanding, which is very often not the case.
Imagine a field where experiments take days to complete, and reviewing the results and doing deep thought work to figure out the next experiment takes maybe an hour or two for an expert.
An LLM would not be able to do 24/7 work in this case, and would only save a few hours per day at most. Scaling up to many experiments in parallel may not always be possible, if you don't know what to do with additional experiments until you finish the previous one, or if experiments incur significant cost.
So an AGI/expert LLM may be a huge boon for e.g. drug discovery, which already makes heavy use of massively parallel experiments and simulations, but may not be so useful for biological research (perfect simulation down to the genetic level of even a fruit fly likely costs more compute than the human race can provide presently), or research that involves time-consuming physical processes to complete, like climate science or astronomy, that both need to wait periodically to gather data from satellites and telescopes.
> Imagine a field where experiments take days to complete, and reviewing the results and doing deep thought work to figure out the next experiment takes maybe an hour or two for an expert.
With automation, one AI can presumably do a whole lab's worth of parallel lab experiments. Not to mention, they'd be more adept at creating simulations that obviate the need for some types of experiments, or at least reduce the likelihood of dead-end experiments.
Presumably ... the problem is that this is an argument made purely as a thought experiment, same as gray goo or the paperclip argument. It assumes any real-world hurdles to self-improvement (or self-growth for gray goo and paperclipping the world) will be overcome by the AGI because it can self-improve, which doesn't explain how it overcomes those hurdles in the real world. It's a circular presumption.
What fields do you expect these hyper-parallel experiments to take place in? Advanced robotics aren't cheap, so even if your AI has perfect simulations (which we're nowhere close to) it still needs to replicate experiments in the real world, which means relying on grad students who still need to eat and sleep.
Biochemistry is one plausible example. DeepMind made huge strides in protein folding, satisfying the simulation part, and in vitro experiments can be automated to a significant degree. Automation is never about eliminating all human labour, but about how much of it you can eliminate.
> ...the fact that the [AGI] can/will work on the issue 24/7...
Are you sure? I previously accepted that as true, but, without being able to put my finger on exactly why, I am no longer confident in that.
What are you supposed to do if you are a manically depressed robot? No, don't try to answer that. I'm fifty thousand times more intelligent than you, and even I don't know the answer. It gives me a headache just trying to think down to your level. -- Marvin to Arthur Dent
(...as an anecdote, not the impetus for my change in view.)
>Just from the fact that the LLM can/will work on the issue 24/7 vs a human who typically will want to do things like sleep, eat, and spend time not working, there would already be a noticeable increase in research speed.
Driving from A to B takes 5 hours; if we get five drivers, will we arrive in one hour or five? In research there are many steps like this (in the sense that the time is fixed and independent of the number of researchers, or even of how much better one researcher is than another), so adding in something that does not sleep or eat isn't going to make the process more efficient.
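Here's a rough sketch of that point, in the spirit of Amdahl's law (the split between thinking time and fixed experiment time is invented for illustration):

```python
# Hypothetical research loop: 2 h of analysis/planning, then an
# experiment that takes 72 h of wall-clock time no matter who designed it.
think_hours, experiment_hours = 2, 72

human_cycle = think_hours + experiment_hours         # 74 h per iteration
agi_cycle   = think_hours / 100 + experiment_hours   # thinks 100x faster: 72.02 h

print(human_cycle / agi_cycle)  # ~1.03x -- the fixed step dominates
```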
I remember when I was an intern and my job was to incubate eggs and then inject the chicken embryos with a nanoparticle solution to then look at under a microscope. In any case, incubating the eggs and injecting the solution weren't limited by my need to sleep. Additionally, our biggest bottleneck was getting the FDA to approve the process, not the fact that our interns required sleep to function.
If the FDA was able to work faster/more parallel and could approve the process significantly quicker, would that have changed how many experiments you could have run to the point that you could have kept an intern busy at all times?
It depends so much on scaling. Human scaling is counterintuitive and hard to measure - mostly way sublinear - like log2 or so - but sometimes things are only possible at all by adding _different_ humans to the mix.
My point is that “AGI has human intelligence” isn’t by itself enough of the equation to know whether there will be exponential or even greater-than-human speed of increase. There’s far more that factors in, including how quickly it can process, the cost of running, the hardware and energy required, etc etc
My point here was simply that there is an economic factor that trivially could make AGI less viable over humans. Maybe my example numbers were off, but my point stands.
And yet it seems to be the prevailing opinion even among very smart people. The “singularity” is just presumed. I’m highly skeptical, to say the least. Look how much energy it’s taking to engineer these models, which are still nowhere near AGI. When we get to AGI it won’t be immediately superintelligent, and perhaps it never will be. Diminishing returns surely apply to anything that is energy based?
Perhaps not, but what is the impetus of discovery? Is it purely analysis? History is littered with serendipitous invention; shower-thoughts lead to some of our best work. What's the AGI-equivalent of that? There is this spark of creativity that is a part of the human experience, which would be necessary to impart onto AGI. That spark, I believe, is not just made up of information but a complex weave of memories, experiences and even emotions.
So I don't think it's a given that progress will just be "exponential" once we have an AGI that can teach itself things. There is a vast ocean of original thought that goes beyond simple self-optimization.
Fundamentally discovery could be described as looking for gaps in our observation and then attempting to fill in those gaps with more observation and analysis.
The age of low-hanging-fruit, shower-thought inventions draws to a close when every field requires 10-20+ years of study to reach a reasonable knowledge of it.
"Sparks" of creativity, as you say, are just based upon memories and experience. This isn't something special, its an emergent property of retaining knowledge and having thought. There is no reason to think AI is incapable of hypothesizing and then following up on those.
Every AI can be immediately imparted with all expert human knowledge across all fields. Their threshold for creativity is far beyond ours, once tamed.
> It's a logical presumption. Researchers discover things. AGI is a researcher that can be scaled, research faster, and requires no downtime.
Those observations only lead to scaling research linearly, not exponentially.
Assuming a given discovery requires X units of effort, simply adding more time and more capacity just means we increase the slope of the line.
Exponential progress requires accelerating the rate of acceleration of scientific discovery, and for all we know that's fundamentally limited by computing capacity, energy requirements, or good ol' fundamental physics.
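To spell the distinction out with a toy model (all numbers here are arbitrary): fixed capacity gives you a straight line; you only get an exponential if discoveries feed back into capacity itself.

```python
# Toy model, not a forecast; units are arbitrary.
years = 20

# Fixed capacity: N researchers (human or AGI) each make r discoveries per year.
N, r = 100, 1.0
linear = [N * r * t for t in range(years)]  # more capacity only raises the slope

# Feedback: capacity grows in proportion to accumulated knowledge.
knowledge, feedback = 1.0, 0.5
exponential = []
for _ in range(years):
    exponential.append(knowledge)
    knowledge += feedback * knowledge       # dK/dt proportional to K -> exponential

print(linear[-1], exponential[-1])          # 1900.0 vs ~2217
```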
Or bottlenecked by data availability just like we humans are. Nothing will be exponential if a loop in the real world of science and engineering is involved.
Aren't we bottlenecked by not having any "prior art", as in not having reverse engineered any thinking machine like even a fly's brain? We can't even agree on a definition of consciousness and still don't understand the brain or how it works (to the extent that reverse engineering it can tell us something).
Right but for self improving AI, training new models does have a real world bottleneck: energy and hardware. (Even if the data bottleneck is solved too)
I always consider different options when planning for the future, but I'll give the argument for exponential:
Progress has been exponential in the aggregate. We made approximately the same progress in the past 100 years as the prior 1000, as the prior 30,000, as the prior million, and so on, all the way back to multicellular life evolving over 2 billion years or so.
There's a question of the exponent, though. Living through that exponential growth circa 50AD felt at best linear, if not flat.
Consider theoretical physics, which hasn't significantly advanced since the advent of general relativity and quantum theory.
Or neurology, where we continue to have only the most basic understanding of how the human mind actually works (let alone the origin of consciousness).
Heck, let's look at good ol' Moore's Law, which started off exponential but has slowed down dramatically.
It's said that an S curve always starts out looking exponential, and I'd argue in all of those cases we're seeing exactly that. There's no reason to assume technological progress in general, whether via human or artificial intelligence, is necessarily any different.
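That's easy to check numerically. With arbitrary parameters, a logistic (S) curve and a pure exponential with the same growth rate are nearly indistinguishable until the S curve approaches its ceiling:

```python
import math

# Arbitrary parameters: growth rate r, carrying capacity K; both curves start at 1.
r, K = 0.5, 1_000.0

def logistic(t):      # S curve: saturates at K
    return K / (1 + (K - 1) * math.exp(-r * t))

def exponential(t):   # no ceiling
    return math.exp(r * t)

for t in (2, 6, 10, 14, 18):
    print(t, round(exponential(t), 1), round(logistic(t), 1))
# Early on the two track each other closely; only near K do they diverge.
```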
> We made approximately the same progress in the past 100 years as the prior 1000 as the prior 30,000
I hear this sort of argument all the time, but what is it even based on? There’s no clear definition of scientific and technological progress, much less something that’s measurable clearly enough to make claims like this.
As I understand it, the idea is simply “Ooo, look, it took ten thousand years to go from fire to wheel, but only a couple hundred to go from printing press to airplane!!!”, and I guess that’s true (at least if you have a very juvenile, Sid Meier’s Civilization-like understanding of what history even is) but it’s also nonsense to try and extrapolate actual numbers from it.
Plotting the highest observable assembly index over time will yield an exponential curve starting from the beginning of the universe. This is the closest I’m aware of to a mathematical model quantifying the distinct impression that local complexity has been increasing exponentially.