If anyone knows of a steelman version of the "AGI is not possible" argument, I would be curious to read it. I also have trouble understanding what goes into that point of view.
If you genuinely want the strongest statement of it, read The Emperor's New Mind followed by Shadows of the Mind, both by Roger Penrose.
These books often get shallowly dismissed in terms that imply he made some elementary error in his reasoning, but that's not the case. The dispute is more about the assumptions on which his argument rests, which go beyond mathematical axioms and include statements about the nature of human perception of mathematical truth. That makes it a philosophical debate more than a mathematical one.
Personally, I strongly agree with the non-mathematical assumptions he makes, and am therefore persuaded by his argument. It leads to a very different way of thinking about many aspects of maths, physics and computing than the one I acquired by default from my schooling. It's a perspective that I've become increasingly convinced by over the 30+ years since I first read his books, and one that I think acquires greater urgency as computing becomes an ever larger part of our lives.
2. Humans can create rules outside the system of rules which they follow
Is number 2 an accurate portrayal? It seems rather suspicious. It seems more likely that we just haven't been able to fully express the rules under which humans operate.
Notably, those true statements can be proven in a higher-level mathematical system. So why wouldn't we say that humans are likewise operating within some system ourselves, with true statements that we can't prove? We just wouldn't be aware of them.
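To make that concrete with a standard textbook instance (my own illustration, not something from the thread): the consistency statement for Peano Arithmetic is exactly such a true-but-unprovable sentence, and it becomes provable one level up.

```latex
% Assuming PA is consistent, Con(PA) is a true arithmetic statement that PA
% itself cannot prove (Goedel's second incompleteness theorem), yet it is
% provable in a stronger system such as ZFC:
\[
  \mathrm{PA} \nvdash \mathrm{Con}(\mathrm{PA})
  \qquad\text{whereas}\qquad
  \mathrm{ZFC} \vdash \mathrm{Con}(\mathrm{PA})
\]
```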
>likewise we have true statements that we can’t prove
Yes, and "can't" as in it is absolutely impossible. Not that we simple haven't been able to due to information or tech constraints.
Which is an interesting implication. That there are (or may be) things that are true which cannot be proved. I guess it kinda defies an instinct I have that at least in theory, everything that is true is provable.
That's too brief to capture it, and I'm not going to try to summarise(*). The books are well worth a read regardless of whether you agree with Penrose. (The Emperor's New Mind is a lovely, wide-ranging book on many topics, but Shadows of the Mind is only worth it if you want to go into extreme detail on the AI argument and its counterarguments.)
* I will mention though that "some" should be "all" in 2, but that doesn't make it a correct statement of the argument.
Is it too brief to capture it? Here is a one sentence statement I found from one of his slides:
>Turing’s version of Gödel’s theorem tells us that, for any set of mechanical theorem-proving rules R, we can construct a mathematical statement G(R) which, if we believe in the validity of R, we must accept as true; yet G(R) cannot be proved using R alone.
I have no doubt the books are good but the original comment asked about steelmanning the claim that AGI is impossible. It would be useful to share the argument that you are referencing so that we can talk about it.
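For concreteness, here is a minimal sketch (mine, not Penrose's) of the diagonal construction behind the quoted statement. `halts` is a hypothetical decision procedure standing in for the mechanical rule set R, and `make_diagonal` / `diagonal` are just illustrative names:

```python
def make_diagonal(halts):
    """Given a purported mechanical decider halts(prog, arg) -> bool,
    build a program that the decider must get wrong on its own input.
    (`halts` is hypothetical: no sound and complete such procedure exists.)"""
    def diagonal(prog):
        if halts(prog, prog):   # the rules say "this halts"...
            while True:         # ...so loop forever instead
                pass
        return "halted"         # the rules say "this loops", so halt instead
    return diagonal

# Run diagonal on itself: whichever answer halts(diagonal, diagonal) gives,
# the actual behaviour of diagonal(diagonal) contradicts it. So anyone who
# trusts the rules R behind `halts` can see a truth about this program that
# R itself can never establish: that is the G(R) of the quote above.
```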
That's a summary of Godel's theorem, which nobody disputes, not of Penrose's argument that it implies computers cannot emulate human intelligence.
I'm really not trying to evade further discussion. I just don't think I can sum that argument up. It starts with basically "we can perceive the truth not only of any particular Godel statement, but of all Godel statements, in the abstract, so we can't be algorithms because an algorithm can't do that" but it doesn't stop there. The obvious immediate response is to say "what if we don't really perceive its truth but just fool ourselves into thinking we do?" or "what if we do perceive it but we pay for it by also wrongly perceiving many mathematical falsehoods to be true?". Penrose explored these in detail in the original book and then wrote an entire second book devoted solely to discussing every such objection he was aware of. That is the meat of Penrose's argument and it's mostly about how humans perceive mathematical truth, argued from the point of view of a mathematician. I don't even know where to start with summarising it.
For my part, with a vastly smaller mind than his, I think the counterarguments are valid, as are his counter-counterarguments, and the whole thing isn't properly decided and probably won't be for a very long time, if ever. The intellectually neutral position is to accept it as undecided. To "pick a side" as I have done is on some level a leap of faith. That's as true of those taking the view that the human mind is fundamentally algorithmic as it is of me. I don't dispute that their position is internally consistent and could turn out to be correct, but I do find it annoying when they try to say that my view isn't internally consistent and can never be correct. At that point they are denying the leap of faith they are making, and from my point of view their leap of faith is preventing them seeing a beautiful, consistent and human-centric interpretation of our relationship to computers.
I am aware that despite being solidly atheist, this belief (and I acknowledge it as such) of mine puts me in a similar position to those arguing in favour of the supernatural, and I don't really mind the comparison. To be clear, neither Penrose nor I am arguing that anything is beyond nature, rather that nature is beyond computers, but there are analogies and I probably have more sympathy with religious thinkers (while rejecting almost all of their concrete assertions about how the universe works) than most atheists. In short, I do think there is a purely unique and inherently uncopyable aspect to every human mind that is not of the same discrete, finite, perfectly cloneable nature as digital information. You could call it a soul, but I don't think it has anything to do with any supernatural entity, I don't think it's immortal (anything but), I don't think it is separate from the body or in any sense "non-physical", and I think the question of where it "goes to" when we die is meaningless.
I realise I've gone well beyond Penrose's argument and rambled about my own beliefs, apologies for that. As I say, I struggle to summarise this stuff.
>... No wonder Penrose has his doubts about the algorithmic nature of natural selection. If it were, truly, just an algorithmic process at all levels, all its products should be algorithmic as well. So far as I can see, this isn't an inescapable formal contradiction; Penrose could just shrug and propose that the universe contains these basic nuggets of nonalgorithmic power, not themselves created by natural selection in any of its guises, but incorporatable by algorithmic devices as found objects whenever they are encountered (like the oracles on the toadstools). Those would be truly nonreducible skyhooks.
Skyhook is Dennett's term for an appeal to the supernatural.
To be honest, the core of Penrose's idea is pretty stupid. That we can understand mathematics despite the incompleteness theorem being a thing, therefore our brains use quantum effects allowing us to understand it. Instead of just saying, you know, we use a heuristic instead and just guess that it's true. I'm pretty sure a classical system can do that.
I'm sure if you email him explaining how stupid he is he'll send you his Nobel prize.
Less flippantly, Penrose has always been extremely clear about which things he's sure of, such as that human intelligence involves processes that algorithms cannot emulate, and which things he puts forward as speculative ideas that might help answer the questions he has raised. His ideas about quantum mechanical processes in the brain are very much on the speculative side, and after a career like his I think he has more than earned the right to explore those speculations.
It sounds like you probably would disagree with his assumptions about human perception of mathematical truth, and it's perfectly valid to do so. Nothing about your comment suggests you've made any attempt to understand them, though.
I want to ignore the flame fest developing here. But, in case you are interested in hearing a doubter's perspective, I'll try to express one view. I am not an expert on Penrose's ideas, but see this as a common feature in how others try to sell his work.
Starting with "things he's sure of, such as that human intelligence involves processes that algorithms cannot emulate" as a premise makes the whole thing an exercise in Begging the Question when you try to apply it to explain why an AI won't work.
"That human intelligence involves processes that algorithms cannot emulate" is the conclusion of his argument. The premise could be summed up as something like "humans have complete, correct perception of mathematical truth", although there is a lot of discussion of in what sense it is "complete" and "correct" as, of course, he isn't arguing that any mathematician is omniscient or incapable of making a mistake.
Linking those two is really the contribution of the argument. You can reject both or accept both (as I've said elsewhere I don't think it's conclusively decided, though I know which way my preferences lie), but you can't accept the premise and reject the conclusion.
Hmm, I am less than certain this isn't still begging the question, just with different phrasing. I.e. I see how they are "linked" to the point they seem almost tautologically the same rather than a deductive sequence.
And you do it again, you apologise while insulting me. When challenged you refuse to defend the points you brought up, so that you can pretend to be right rather than be proved wrong. Incompleteness theorem is where the idea came from, but you don’t want to discuss that, you just want to drop the name, condescend to people and run away.
Here are the substantive things you've said so far (i.e. the bits that aren't calling things "stupid" and taking umbrage at imagined slights):
1. You think that instead of actually perceiving mathematical truth we use heuristics and "just guess that it's true". This, as I've already said, is a valid viewpoint. You disagree with one of Penrose's assumptions. I don't think you're right but there is certainly no hard proof available that you're not. It's something that (for now, at least) we can agree to disagree on, which is why, as I said, this is a philosophical debate more than a mathematical one.
2. You strongly imply that Penrose simply didn't think of this objection. This is categorically false. He discusses it at great length in both books. (I mentioned such shallow dismissals, assuming some obvious oversight on his part, in my original comment.)
3. (In your latest reply.) You think that Godel's incompleteness theorem is "where the idea came from". This is obviously true. Penrose's argument is absolutely based on Godel's theorem.
4. You think that somehow I don't agree with point 3. I have no idea where you got that idea from.
That, as far as I can see, is it. There isn't any substantive point made that I haven't already responded to in my previous replies, and I think it's now rather too late to add any and expect any sort of response.
As for communication style, you seem to think that writing in a formal tone, which I find necessary when I want to convey information clearly, is condescending and insulting, whereas dismissing things you disagree with as "stupid" on the flimsiest possible basis (and inferring dishonest motives on the part of the person you're discussing all this with) is, presumably, fine. This is another point on which we will have to agree to disagree.
The whole category of ideas of "Magic Fairy Dust is required for intelligence, and thus, a computer can never be intelligent" is extremely unsound. It should, by now, just get thrown out into the garbage bin, where it rightfully belongs.
To be clear, any claim that we have mathematical proof that something beyond algorithms is required is unsound, because the argument is not mathematical. It rests on assumptions about human perception of mathematical truth that may or may not be correct. So if that's the point you're making I don't dispute it, although to say an internally consistent alternative viewpoint should be "thrown out into the garbage" on that basis is unwarranted. The objection is just that it doesn't have the status of a mathematical theorem, not that it is necessarily wrong.
If, on the other hand, you think that it is impossible for anything more than algorithms to be required, that the idea that the human mind must be equivalent to an algorithm is itself mathematically proven, then you are simply wrong. Any claim that the human mind has to be an algorithm rests on exactly the same kind of validly challengeable, philosophical assumptions (specifically the physical Church-Turing thesis) that Penrose's argument does.
Given two competing, internally consistent world-views that have not yet been conclusively separated by evidence, the debate about which is more likely to be true is not one where either "side" can claim absolute victory in the way that so many people seem to want to on this issue, and talk of tossing things in the garbage isn't going to persuade anybody that's leaning in a different direction.
It is unsound because: not only does it demand the existence of a physical process that cannot be computed (so far, none found, and not for lack of searching), but it also demands that such a physical process would conveniently be found to be involved in the functioning of a human brain, and also that it would be vital enough that you can't just replace it with something amenable to computation at a negligible loss of function.
It needs too many unlikely convenient coincidences. The telltale sign of wishful thinking.
At the same time: we have a mounting pile of functions that were once considered "exclusive to human mind" and are now implemented in modern AIs. So the case for "human brain must be doing something Truly Magical" is growing weaker and weaker with each passing day.
This is the usual blurring of lines you see in dismissals of Penrose. You call the argument "unsound" as if it contains some hard error of logic and can be dismissed as a result, but what you state are objections to the assumptions (not the reasoning) based on your qualitative evaluation of various pieces of evidence, none of which are conclusive.
There's nothing wrong with seeing the evidence and reaching your own conclusions, but I see exactly the same evidence and reach very different ones, as we interpret and weight it very differently. On the "existence of a physical process that cannot be computed", I know enough of physics (I have a degree in it, and a couple of decades of continued learning since) to know how little we know. I don't find any argument that boils down to "it isn't among the things we've figured out therefore it doesn't exist" remotely persuasive. On the achievements of AI, I see no evidence of human-like mathematical reasoning in LLMs and don't expect to, IMO demos and excitable tweets notwithstanding. My goalpost there, and it has never moved and never will, is independent, valuable contributions to frontier research maths - and lots of them! I want the crank-the-handle-and-important-new-theorems-come-out machine that people have been trying to build since computers were invented. I expect a machine implementation of human-like mathematical thought to result in that, and I see no sign of it on the horizon. If it appears, I'll change my tune.
I acknowledge that others have different views on these issues and that however strongly I feel I have the right of it, I could still turn out to be wrong. I would enjoy some proper discussion of the relative merits of these positions, but it's not a promising start to talk about throwing things in the garbage right at the outset or, like the person earlier in this thread, call the opposing viewpoint "stupid".
There is no "hard error of logic" in saying "humans were created by God" either. There's just no evidence pointing towards it, and an ever-mounting pile of evidence pointing otherwise.
Now, what does compel someone to go against a pile of evidence this large and prop up an unsupported hypothesis that goes against it not just as "a remote and unlikely possibility, to be revisited if any evidence supporting it emerges", but as THE truth?
Sheer wishful thinking. Humans are stupid dumb fucks.
Most humans have never "contributed to frontier research maths" in their entire lives either. I sure didn't, I'm a dumb fuck myself. If you set the bar of "human level intelligence" at that, then most of humankind is unthinking cattle.
"Advanced mathematical reasoning" is a highly specific skill that most humans wouldn't learn in their entire lives. Is it really a surprise that LLMs have a hard time learning it too? They are further along it than I am already.
I don't know if we're even able to continue with the thread this old, but this is fun so I'll try to respond.
You're correct to point out that defending my viewpoint as merely internally consistent puts me in a position analogous to theists, and I volunteered as much elsewhere in this thread. However, the situation isn't really the same since theists tend to make wildly internally inconsistent claims, and claims that have been directly falsified. When theists reduce their ideas to a core that is internally consistent and has not been falsified they tend to end up either with something that requires surrendering any attempt at establishing the truth of anything ourselves and letting someone else merely tell us what is and is not true (I have very little time for such views), or with something that doesn't look like religion as typically practised at all (and which I have a certain amount of sympathy for).
As far as our debate is concerned, I think we've agreed that it is about being persuaded by evidence rather than considering one view to have been proven or disproven in a mathematical sense. You could consider it mere semantics, but you used the word "unsound" and that word has a particular meaning to me. It was worth establishing that you weren't using it that way.
When it comes to the evidence, as I said I interpret and weight it differently than you. Merely asserting that the evidence is overwhelmingly against me is not an effective form of debate, especially when it includes calling the other position "stupid" (as has happened twice now in this thread) and especially not when the phrase "dumb fuck" is employed. I know I come across as comically formal when writing about this stuff, but I'm trying to be precise and to honestly acknowledge which parts of my world view I feel I have the right to assert firmly and which parts are mere beliefs-on-the-basis-of-evidence-I-personally-find-persuasive. When I do that, it just tends to end up sounding formal. I don't often see the same degree of honesty among those I debate this with here, but that is likely to be a near-universal feature of HN rather than a failing of just the strong AI proponents here. At any rate "stupid dumb fucks" comes across as argument-by-ridicule to me. I don't think I've done anything to deserve it and it's certainly not likely to change my mind about anything.
You've raised one concrete point about the evidence, which I'll respond to: you've said that the ability to contribute to frontier research maths is possessed only by a tiny number of humans and that a "bar" of "human level" intelligence set there would exclude everyone else.
I don't consider research mathematicians to possess qualitatively different abilities to the rest of the population. They think in human ways, with human minds. I think the abilities that are special to human mathematicians relative to machine mathematicians are (qualitatively) the same abilities that are special to human lawyers, social workers or doctors relative to machine ones. What's special about the case of frontier maths, I claim, is that we can pin it down. We have an unambiguous way of determining whether the goal I decided to look for (decades ago) has actually been achieved. An important-new-theorem-machine would revolutionise maths overnight, and if and when one is produced (and it's a computer) I will have no choice but to change my entire world view.
For other human tasks, it's not so easy. Either the task can't be boiled down to text generation at all or we have no unambiguous way to set a criterion for what "human-like insight" putatively adds. Maths research is at a sweet spot: it can be viewed as pure text generation and the sort of insight I'm looking for is objectively verifiable there. The need for it to be research maths is not because I only consider research mathematicians to be intelligent, but because a ground-breaking new theorem (preferably a stream of them, each building on the last) is the only example I can think of where human-like insight would be absolutely required, and where the test can be done right now (and it is, and LLMs have failed it so far).
I dispute your "level" framing, BTW. I often see people with your viewpoint assuming that the road to recreating human intelligence will be incremental, and that there's some threshold at which success can be claimed. When debating with someone who sees the world as I do, assuming that model is begging the question. I see something qualitative that separates the mechanism of human minds from all computers, not a level of "something" beyond which I think things are worthy of being called intelligent. My research maths "goal" isn't an attempt to delineate a feat that would impress me in some way, while all lesser feats leave me cold. (I am already hugely impressed by LLMs.) My "goal" is rather an attempt to identify a practically-achievable piece of evidence that would be sufficient for me to change my world view. And that, if it ever happens, will be a massive personal upheaval, so strong evidence is needed - certainly stronger than "HN commenter thinks I'm a dumb fuck".
My layman thought about that is that, with consciousness, the medium IS the consciousness -- the actual intelligence is in the tangible material of the "circuitry" of the brain. What we call consciousness is an emergent property of an unbelievably complex organ (that we will probably never fully understand or be able to precisely model). Any models that attempt to replicate those phenomena will be of lower fidelity and/or breadth than "true intelligence" (though intelligence is quite variable, of course)... But you get what I mean, right? Our software/hardware models will always be orders of magnitude less precise or exhaustive than what already happens organically in the brain of an intelligent life form. I don't think AGI is strictly impossible, but it will always be a subset or abstraction of "real"/natural intelligence.
I think it's also the case that you can't replicate something actually happening, by describing it.
Baseball stats aren't a baseball game. Baseball stats so detailed that they describe the position of every subatomic particle to the Planck scale during every instant of the game to arbitrarily complete resolution still aren't a baseball game. They're, like, a whole bunch of graphite smeared on a whole bunch of paper or whatever. A computer reading that recording and rendering it on a screen... still isn't a baseball game, at all, not even a little. Rendering it on a holodeck? Nope, 0% closer to actually being the thing, though it's representing it in ways we might find more useful or appealing.
We might find a way to create a conscious computer! Or at least an intelligent one! But I just don't see it in LLMs. We've made a very fancy baseball-stats presenter. That's not nothing, but it's not intelligence, and certainly not consciousness. It's not doing those things, at all.
I think you're tossing around words like "always" or "never" too lightly, with no justification behind them. Why do you think that no matter how much effort is spent, fully understanding the human brain will always be impossible? Always is a really long time. As long as we keep doing research to increasingly precisely model the universe around us, I don't see what would stop this from happening, even if it takes many centuries or millennia. Most people who argue this justify their point by asserting that there is some unprovable quality of the human brain which can't be modeled at all and can only be created in one way - which both lacks substance and seems arbitrary, since I don't think that this relationship provably exists for anything else that we do know about. It seems like a way to justify that humans and only humans are special.
This is how I (also as a layman) look at it as well.
AI right now is limited to trained neural networks, and while they function sort of like a brain, there is no neurogenesis. The trained neural network cannot grow, cannot expand on its own, and is constrained by the silicon it is running on.
I believe that true AGI will require hardware and models that are able to learn, grow and evolve organically. The next step required for that in my opinion is biocomputing.
The only thing I can come up with is that compressing several hundred million years of natural selection of animal nervous systems into another form, but optimised by gradient descent instead, just takes a lot of time.
Not that we can't get there by artificial means, but that correctly simulating the environment interactions, the sequence of progression, getting all the details right, might take hundreds to thousands of years of compute, rather than on the order of a few months.
And it might be that you can get functionally close, but hit a dead end, and maybe hit several dead ends along the way, all of which are close but no cigar. Perhaps LLMs are one such dead end.
I don't disagree, but I think the evolution argument is a red herring. We didn't have to re-engineer horses from the ground up along evolutionary lines to get to much faster and more capable cars.
The evolution thing is kind of a red herring in that we probably don't have to artificially construct the process of evolution, though your reasoning isn't a good explanation for why the "evolution" reason is a red herring: Yeah, nature already established incomprehensibly complex organic systems in these life forms -- so we're benefiting from that. But the extent of our contribution is making some select animals mate with others. Hardly comparable to building our own replacement for some millennia of organic iteration/evolution. Luckily we probably don't actually need to do that to produce AGI.
Most arguments and discussions around AGI talk past each other about the definitions of what is wanted or expected, mostly because sentience, intelligence, consciousness are all unagreed upon definitions and therefore are undefined goals to build against.
Some people do expect AGI to be a faster horse; to be the next evolution of human intelligence that's similar to us in most respects but still "better" in some aspects. Others expect AGI to be the leap from horses to cars; the means to an end, a vehicle that takes us to new places faster, and in that case it doesn't need to resemble how we got to human intelligence at all.
True, but I think this reasoning is a category error: we were and are capable of rationally designing cars. We are not today doing the same thing with AI, we’re forced to optimize them instead. Yes, the structure that you optimize around is vitally important, but we’re still doing brute force rather than intelligent design at the end of the day. It’s not comparing like with like.
> correctly simulating the environment interactions, the sequence of progression, getting the all the details right, might take hundreds to thousands of years of compute
Who says we have to do that? Just because something was originally produced by natural process X, that doesn't mean that exhaustively retracing our way through process X is the only way to get there.
Who says that we don’t? The point is that the bounds on the question are completely unknown, and we operate on the assumption that the compute time is relatively short. Do we have any empirical basis for this? I think we do not.
The overwhelming majority of animal species never developed (what we would consider) language processing capabilities. So AGI doesn't seem like something that evolution is particularly good at producing; more an emergent trait, eventually appearing in things designed simply to not die for long enough to reproduce...
Define "animal species", if you mean vertebrates, you might be surprised by the modern ethological literature. If you mean to exclude non-vertebrates ... you might be surprised by the ethological literature too.
If you just mean the majority of species, you'd be correct, simply because most are single-celled. Though debate is possible when we talk about forms of chemical signalling.
Yeah, it's tricky to talk about in the span of a comment. I work on Things Involving Animals - animals provide an excellent counter-current to discussion around AGI, in numerous ways.
One interesting parallel was the gradual redefinition of language over the course of the 20th century to exclude animals as their capabilities became more obvious. So, when I say 'language processing capacities', I mean it roughly in the sense of Chomsky-era definitions, after the goal posts had been thoroughly moved away from much more inclusive definitions.
Likewise, we've been steadily moving the bar on what counts as 'intelligence', both for animals and machines. Over the last couple of decades the study of animal intelligence has become more inclusive, IMO, and recognizes intelligence as capabilities within the specific sensorium and survival context of the particular species. Our study of artificial intelligence is still very crude by comparison, and is still in the 'move the goalposts so that humans stay special' stage of development...
I suppose intelligence can be partitioned as less than, equal to, or greater than human. Given the initial theory depends on natural evidence, one could argue there's no proof that "greater than human" intelligence is possible - depending on your meaning of AGI.
But then intelligence too is a dubious term. An average mind with infinite time and resources might have eventually discovered general relativity.
The steelman would be that knowledge is possible outside the domain of Science. So the opposing argument to evolution as the mechanism for us (the "general intelligence" of AGI) would be that the pathway from conception to you is not strictly material/natural.
Of course, that's not going to be accepted as "Science", but I hope you can at least see that point of view.
The basic idea being that either the human mind is NOT a computation at all (and is instead spooky, unexplainable magic of the universe) and thus can't be replicated by a machine, OR it's an inconsistent machine with contradictory logic. And this is a deduction based on Gödel's incompleteness theorems.
But most people that believe AGI is possible would say the human mind is the latter. Technically we don't have enough information today to know either way, but we know the human mind (including memories) is fallible, so while we don't have enough information to prove the mind is an incomplete system, we have enough to believe it is. But that's also kind of a paradox, because that "belief" in unproven information is a cornerstone of consciousness.
The real point isn’t AGI, it’s that the speed of knowledge is empiricism, not intelligence.
An infinitely intelligent creature still has to create a standard model from scratch. We’re leaning too hard on the deductive conception of the world, when reality is, it took hundreds of thousands of years for humans as intelligent as we are to split the atom.
I think the best argument against us ever finding AGI is that the search space is too big and the dead ends are too many. It's like wandering through a monstrously huge maze with hundreds of very convincingly fake exits that lead to pit traps. The first "AGI" may just be a very convincing Chinese room that kills all of humanity before we can ever discover an actual AGI.
The necessary conditions for "Kill all Humanity" may be the much more common result than "Create a novel thinking being." To the point where it is statistically improbable for the human race to reach AGI. Especially since a lot of AI research is specifically for autonomous weapons research.
Is there a plausible situation where a humanity-killing superintelligence isn't vulnerable to nuclear weapons?
If a genuine AGI-driven human extinction scenario arises, what's to stop the world's nuclear powers from using high-altitude detonations to produce a series of silicon-destroying electromagnetic pulses around the globe? It would be absolutely awful for humanity don't get me wrong, but it'd be a damn sight better than extinction.
Physically, maybe not, but an AGI would know that, would think a million times faster than us, and would have incentive to prioritize disabling our abilities to do that. Essentially, if an enemy AGI is revealed to us, it's probably too late to stop it. Not guaranteed, but a valid fear.
What stops them is: being politically captured by an AGI.
Not to mention that the whole idea of "radiation pulses destroying all electronics" is cheap sci-fi, not reality. A decently well prepared AGI can survive a nuclear exchange with more ease than human civilization would.
I think it's much more likely that a non-AGI platform will kill us before AGI even happens. I'm thinking the doomsday weapon from Doctor Strangelove more than Terminator.
If you have a wide enough definition of AGI having a baby is making “AGI.” It’s a human made, generally intelligent thing. What people mean by the “A” though is we have some kind of inorganic machine realize the traits of “intelligence” in the medium of a computer.
The first leg of the argument would be that we aren’t really sure what general intelligence is or if it’s a natural category. It’s sort of like “betterness.” There’s no general thing called “betterness” that just makes you better at everything. To get better at different tasks usually requires different things.
I would be willing to concede to the AGI crowd that there could be something behind g that we could call intelligence. There’s a deeper problem though that the first one hints at.
For AGI to be possible, whatever trait or traits make up "intelligence" need to have multiple realizability. They need to be realizable both in the medium of a human being and in at least some machine architectures. In programmer terms, the traits that make up intelligence could be tightly coupled to the hardware implementation. There are good reasons to think this is likely.
Programmers and engineers like myself love modular systems that are loosely coupled and cleanly abstracted. Biology doesn't work this way — things at the molecular level can have very specific effects on the macro scale and vice versa. There's little in the way of clean separation of layers. Who is to say that some of the specific ways we work at a cellular level aren't critical to being generally intelligent? That's an "ugly" idea but lots of things in nature are ugly. Is it a coincidence too that humans are well adapted to getting around physically, can live in many different environments, etc.? There's also stuff from the higher level — does living physically and socially in a community of other creatures play a key role in our intelligence? Given how human beings who grow up absent those factors are developmentally disabled in many ways it would seem so. It could be there's a combination of factors here, where very specific micro and macro aspects of being a biological human turn out to contribute, and you need the perfect storm of these aspects to get a generally intelligent creature. Some of these aspects could be realizable in computers, but others might not be, at least in a computationally tractable way.
It’s certainly ugly and goes against how we like things to work for intelligence to require a big jumbly mess of stuff, but nature is messy. Given the only known case of generally intelligent life is humans, the jury is still out that you can do it any other way.
Another commenter mentioned horses and cars. We could build cars that are faster than horses, but speed is something that is shared by all physical bodies and is therefore eminently multiply realizable. But even here, there are advantages to horses that cars don't have, and which are tied up with very specific aspects of being a horse. Horses generally can go over a wider range of terrain than cars. This is intrinsically tied to them having long legs and four hooves instead of rubber wheels. They're only able to have such long legs because of their hooves too, because the hooves are required to help them pump blood when they run, and that means that in order for them to pump their blood successfully they NEED to run fast on a regular basis. There's a deep web of influence here, both part-to-part and between the parts and the whole macro-level behaviors of the horse. Having this more versatile design also has intrinsic engineering trade-offs. A horse isn't ever going to be as fast as a gas-powered four-wheeled vehicle on flat ground, but you definitely can't build a car that can do everything a horse can do with none of the drawbacks. Even if you built a vehicle that did everything a horse can do, but was faster, I would bet you it would be way more expensive and consume much more energy than a horse. There's no such thing as a free lunch in engineering. You could also build a perfect replica of a horse at a molecular level and claim you have your artificial general horse.
Similarly, human beings are good at a lot of different things besides just being smart. But maybe you need to be good at seeing, walking, climbing, acquiring sustenance, etc. in order to be generally intelligent in a way that's actually useful. I also suspect our sense of the beautiful, the artistic, is deeply linked with our wider ability to be intelligent.
Finally it’s an open philosophical question whether human consciousness is explainable in material terms at all. If you are a naturalist, you are methodologically committed to this being the case — but that’s not the same thing as having definitive evidence that it is so. That’s an open research project.
In short, by definition, computers are symbol manipulating devices. However complex the rules of symbol manipulation, a computer is still a symbol manipulating device, and therefore neither intelligent nor sentient. So AGI on computers is not possible.
This is not an argument at all, you just restate your whole conclusion as an assumption ("a symbol manipulating device is incapable of cognition").
It's not even a reasonable assumption (to me), because I'd assume an exact simulation of a human brain to have the exact same cognitive capabilities (which is inevitable, really, unless you believe in magic).
And machines are well capable of simulating physics.
I'm not advocating for that approach because it is obviously extremely inefficient; we did not achieve flight by replicating flapping wings either, after all.
You can assume whatever you want to, but if you were right, then the human brain itself would be nothing more than a symbol manipulating device. While that is not necessarily a falsifiable stance, the really interesting questions are what consciousness is, and how we recognise consciousness.
A computer can simulate a human brain at the subatomic level (in theory). Do you agree this would be "sentient and intelligent" and not just manipulating symbols?
Say we do have a 1:1 representation of the human brain in software. How could we know if we're talking to a conscious simulation of a human being, versus some kind of philosophical zombie which appears conscious but isn't?
Without a solid way to differentiate 'conscious' from 'not conscious' any discussion of machine sentience is unfalsifiable in my opinion.
How do you tell the difference in other humans? Do you just believe them because they claim to be conscious instead of pointing a calibrated and certified consciousness-meter at them?
I obviously can't prove they're conscious in a rigorous way, but it's a reasonable assumption to make that other humans are conscious. "I think therefore I am" and since there's no reason to believe I'm exceptional among humans, it's more likely than not that other humans think too.
This assumption can't be extended to other physical arrangements though, not unless there's conclusive evidence that consciousness is a purely logical process as opposed to a physical one. If consciousness is a physical process, or at least a process with a physical component, then there's no reason to believe that a simulation of a human brain would be conscious any more than a simulation of biology is alive.
So, what if I told you that some humans have been vat-grown without brains and had a silicon brain emulator inserted into their skulls. Are they p-zombies? Would you demand x-rays before talking to anyone? What would you use then to determine consciousness?
Relying on these status quo proxy-measures (looks human :: 99.9% likely to have a human brain :: has my kind of intelligence) is what gets people fooled even by basic AI (without G) fake scams.