> so far we do not have buggy software that is more intelligent (and therefore more effective at accomplishing its goals) than humans are.
Of course we do! In fact, most, if not all, software is more intelligent than humans by some reasonable definition of intelligence [1] (you could also contrive a definition of intelligence for which this is not true, but I think that's getting too far into semantics). The Windows Calculator app is more intelligent and faster at multiplying large numbers together [2] than any human. JPMorgan Chase's existing internal accounting software is more intelligent and faster than any human at moving money around; so much so that it did, in every way that matters, replace human laborers. Most software we build is more intelligent and faster than humans at accomplishing the goal it was built to accomplish. Otherwise, why would we build it?
[1] Rob Miles uses roughly this definition of intelligence: if an agent is an entity making decisions toward some goal, intelligence is that agent's capability to make correct decisions such that the goal is most effectively optimized. The Windows Calculator app makes decisions (branches, MUL ops, etc.) in pursuit of its goal (multiplying two numbers together), often quite effectively, and thus with very high domain-limited intelligence [2] (possibly even more effectively, and thus more intelligently, than LLMs). A buggy, less intelligent calculator might make the wrong decisions on this path (oops, we did an ADD instead of a MUL).
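To make the footnote concrete, here's a toy illustration (my own, not from Rob Miles): two "calculator agents" with the same goal, where the buggy one makes the wrong decision on the same path and so optimizes its goal less effectively, i.e. it is less intelligent in this narrow domain.

```python
def multiply(a: int, b: int) -> int:
    """A calculator agent whose goal is the product of a and b."""
    return a * b  # correct decision: MUL

def buggy_multiply(a: int, b: int) -> int:
    """Same goal, wrong decision on the path: ADD instead of MUL."""
    return a + b  # the bug that makes it less intelligent at its goal

print(multiply(12, 34))        # 408
print(buggy_multiply(12, 34))  # 46
```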
[2] What both Altman and Yudkowsky might argue is the critical distinction here is that traditional software systems naturally limit their intelligence to a particular domain, whereas LLMs are generally intelligent. The discussion approaches the metaphysical when you start asking questions like: the Windows Calculator can absolutely, undeniably, multiply two numbers together better than ChatGPT, and by a reasonable definition of intelligence, this makes the Windows Calculator more intelligent than ChatGPT at multiplying two numbers together. It's definitely inaccurate to say that the Windows Calculator is more intelligent, generally, than ChatGPT. But is it not also inaccurate to state that ChatGPT is generally more intelligent than the Windows Calculator? After all, we have a clear, well-defined domain of intelligence along which the Windows Calculator outperforms ChatGPT. I don't know. It gets weird.
Of course, there are different domains of intelligence, and agent A can be more intelligent in domain X while agent B is more intelligent in domain Y.
If you want to make some comparison of general intelligence, you have to start thinking of some weighted average of all possible domains.
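One way to picture that weighted average (the domains, scores, and weights below are entirely made up for illustration, not measurements of anything):

```python
# Hypothetical per-domain competence scores in [0, 1] for two agents.
domain_scores = {
    "arithmetic":    {"calculator": 1.00, "chatgpt": 0.90},
    "essay_writing": {"calculator": 0.00, "chatgpt": 0.85},
    "translation":   {"calculator": 0.00, "chatgpt": 0.80},
}

# Hypothetical weights saying how much each domain "counts"; they sum to 1.
weights = {"arithmetic": 0.2, "essay_writing": 0.4, "translation": 0.4}

def general_intelligence(agent: str) -> float:
    """Weighted average of domain scores -- one crude notion of 'general'."""
    return sum(weights[d] * scores[agent] for d, scores in domain_scores.items())

# The calculator wins its one domain outright, but the weighted average
# still favors the agent that scores in many domains.
print(round(general_intelligence("calculator"), 4))
print(round(general_intelligence("chatgpt"), 4))
```

The point of the sketch is that the answer depends entirely on the weights you pick, which is exactly why "generally more intelligent" gets slippery.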
One possible shortcut here is the meta-domain of tool use. ChatGPT could, in theory, make more use of a calculator (say, by always calling a calculator API when it wants to do math, instead of trying to do the math itself) than a calculator can make of ChatGPT. That makes ChatGPT smarter than a calculator almost by definition, because it can achieve the same goals the calculator can just by using it, and more.
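A minimal sketch of that asymmetry (all names here are hypothetical, not any real LLM or calculator API): an LLM-like agent that delegates arithmetic to a calculator tool matches the calculator inside its domain and still handles everything else, while the calculator has no mechanism for using the LLM at all.

```python
import re

def calculator_tool(a: int, b: int) -> int:
    """The narrow specialist: perfect at its one domain, useless elsewhere."""
    return a * b

def llm_agent(prompt: str) -> str:
    """Crude stand-in for an LLM: if the prompt looks like math, it
    delegates to the tool instead of attempting the math itself."""
    m = re.match(r"multiply (\d+) and (\d+)", prompt)
    if m:
        return str(calculator_tool(int(m.group(1)), int(m.group(2))))
    # Everything outside the calculator's domain still gets an answer.
    return "Sure, here's a limerick about calculators..."

print(llm_agent("multiply 123 and 456"))  # exact answer, via the tool
print(llm_agent("write me a limerick"))   # the calculator can't do this at all
```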
That's really most of humans' intelligence edge for now: it seems like, for any given skill, there's increasingly a machine or a program that can do it better than any human ever could. Where humans excel is in our ability to employ those superhuman tools in aid of achieving regular human goals. So when some AI system gets superhumanly good at using tools that are better than itself in particular domains, in service of its own goals, I think that's when things are going to get really weird.