To answer this question we first need to establish context. Why do we even need a new term in the first place?
The original vision for AI was to build machines that can learn and reason at
the level of humans. Due to the tremendous difficulty of this task, this
vision was largely abandoned over the decades. Nowadays almost all AI work
relates to narrow, domain-specific, human-designed capabilities. Powerful
as these current applications may be, they are limited to their specific target
domain, and have very narrow (if any) adaptation and learning ability.
For those of us interested in reviving the original goal, a new word was needed
to differentiate it from mainstream AI. 'AGI' seems to fit the bill.
The term 'Strong AI' has long been used to refer to the grand objective;
however, it carries some quite specific philosophical connotations (e.g. the
assumption that AGI is actually possible).
'AGI' has the further advantage that it identifies with a new active movement of
actually getting back to building such technology. This 'movement' was
officially launched with the publication of the book
Artificial General Intelligence, and has since gathered momentum with
additional publications and annual AGI conferences. Furthermore, the term has
become quite widely used to refer to exactly this human (or super-human) level
of capability in a machine.
Some people have suggested using 'AGI' for any work that is generally in the
area of autonomous learning, 'model-free', adaptive, unsupervised or some such
approach or methodology. I don't think this is justified, as many clearly narrow AI
projects use such methods. One can certainly assert that some approach or
technology will likely help achieve AGI, but I think it is reasonable to judge
projects by whether they are explicitly on a path (however far away it may be)
to achieving the grand vision.
To try to address the question "What would it take for us to say we've
achieved AGI?", here is a proposed descriptive definition, followed by some
clarifications:
A computer system that matches or exceeds the real time cognitive (not
physical) abilities of a smart, well-educated human.
Cognitive abilities include, but are not limited to: holding productive
conversations; learning new commercial and scientific domains in real time
through reading, coaching, experimentation, etc.; applying existing knowledge
and skills to new domains. For example, learning new professional skills, a new
language (including computer languages), or even novel games.
Acceptable limitations include: very limited sense acuity and dexterity.
Alternative suggestions, and their merits
"Machines that can learn to do any job that humans currently do" -- I
think this fits quite well, except that it seems unnecessarily ambitious.
Machines that can do most jobs, especially mentally challenging ones would get
us to our overall goal of having machines that can help us solve difficult
problems like ageing, energy, pollution, and help us think through political and
moral issues, etc. Naturally, they would also help build the machines that
will handle any remaining jobs we want to automate.
"Machines that pass the Turing Test" -- The current Turing Test asks
too much (potentially having to dumb itself down to fool judges that it is
human), and too little (limited conversation time). A much better test
would be to see if the AI can learn a broad range of new complex human-level
cognitive skills via autonomous learning and coaching.
"Machines that are self-aware/ can learn autonomously/ do autonomous
reduction/ etc." -- These definition grossly underspecify AGI. One could
build narrow systems that have these characteristics (and probably have
already), but are nowhere near AGI (and may not be on the path at all).
"A machine with the ability to learn from its experience and to work with
insufficient knowledge and resources." -- These are important requirements,
but they lack any specification of the level of skill one expects. Again,
systems already exist that have these qualities but are nowhere near AGI.
Why specify AGI in terms of human abilities?
While we'd expect AGI cognition to be quite different (instant access to
Internet, photographic memory, logical thinking, etc.), the goal is still to
free us from most work. To do that, it must be able to operate in our
environment and learn interactively via natural language and human interaction.
Why not require full sense acuity, dexterity, and embodiment? --
I think that a reasonable relaxation of requirements is to initially exclude
tasks that require high dexterity & sense acuity. The reason is that initial
focus should be on cognitive ability -- i.e., a "Helen Hawking" (Helen Keller /
Stephen Hawking). The core problem is building the brain, the intelligence
engine. It can't be totally disconnected from the world, but its senses/ actuators
do not need to be very elaborate, as long as it can operate other machines (tool