Reclaiming the meaning of 'AGI'.

The term 'Artificial General Intelligence' was coined around 2001 by several of the authors as the title of a book about approaches to achieving human-level AI. Over the years the term has become quite widely used to describe various types of advanced AI -- however, as tends to happen, its meaning has become distorted over time. As originally intended, AGI refers to:

A computer system that matches or exceeds the real time cognitive (not physical) abilities of an average, well-educated human.

Cognitive abilities include, but are not limited to: holding productive conversations; learning new commercial and scientific domains in real time through reading, coaching, experimentation, etc.; applying existing knowledge and skills to new domains. For example, learning new professional skills, a new language (including computer languages), or even novel games.

Acceptable limitations include: very limited sensory acuity and dexterity.

Any system that does not meet the minimum requirements listed above does not qualify as AGI.

Specifically, if...

By 'Deep Learning' I'm referring to the current (around 2015) well-publicized designs being pursued, such as convolutional and recurrent networks, and similar approaches.
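To be concrete, here is a minimal sketch of the kind of design I mean -- a small convolutional classifier. (The code uses PyTorch purely for illustration; the layer sizes and names are arbitrary choices, not anyone's actual system.)

```python
# A minimal sketch of the kind of fixed-function deep network being discussed:
# a stack of learned convolutional filters feeding a classifier. The point is
# that the architecture maps inputs to labels and does nothing else.
import torch
import torch.nn as nn

class SmallConvNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # learn local visual features
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)  # assumes 32x32 input images

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Usage: a batch of 32x32 RGB images -> class scores. Nothing here converses,
# reads, or learns a new domain on the fly; it only maps pixels to labels.
logits = SmallConvNet()(torch.randn(4, 3, 32, 32))
```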

What do the key proponents of this technology say?  I'm not aware of a single high-profile Deep Learning researcher who claims that DL will, by itself, lead to AGI (as defined above).  In fact most of them go out of their way to state that DL cannot pose a serious danger because of its inherent limitations.

Here are some of the essential AGI abilities that the majority of DL systems do not have:

Now, this is not to say that DL combined with other techniques or frameworks cannot overcome these limitations, but then the claim is no longer that DL by itself equals AGI. Several AGI researchers believe that some form of DL technology will be an important aspect of AGI -- perhaps forming part of a comprehensive cognitive architecture.

Any AGI must at a minimum possess a core set of cognitive abilities. These abilities must be implementable in a practical way -- i.e. interacting with incomplete, potentially contradictory and noisy environments using finite computing and time resources.

All of these abilities must be able to operate in real time on (at least) 3-dimensional, dynamic (temporal) data, as well as on stimulus-response (causal) relationships. Operations must be scalar, not just binary (how good is the fit? how certain am I?).
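As a purely hypothetical illustration of the scalar-versus-binary point (the function names and the distance-based confidence measure are my own arbitrary choices, not a prescribed design):

```python
# Hypothetical illustration of binary vs. scalar matching on noisy data.
import math

def binary_match(observed: list[float], expected: list[float], tol: float = 0.1) -> bool:
    """All-or-nothing: true only if every feature is within tolerance."""
    return all(abs(o - e) <= tol for o, e in zip(observed, expected))

def scalar_match(observed: list[float], expected: list[float]) -> float:
    """Graded: returns a confidence in [0, 1] -- 'how good is the fit?'."""
    distance = math.sqrt(sum((o - e) ** 2 for o, e in zip(observed, expected)))
    return 1.0 / (1.0 + distance)

expected = [1.0, 0.5, 0.0]
noisy    = [0.9, 0.6, 0.2]   # an incomplete, noisy observation of the same pattern

print(binary_match(noisy, expected))            # False -- the binary test discards the partial fit
print(round(scalar_match(noisy, expected), 2))  # ~0.8  -- the scalar test preserves it
```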

Here is a basic list:

While most examples here are language-based, these abilities must also operate in a purely perception-action mode.


The Turing Test asks both too much, and too little -- in different ways.

It asks too much by insisting that the AGI needs to hide its strengths and fake human limitations, human-specific experience, and human quirkiness. This is quite unnecessary, and would lead to man-years of unneeded effort to give it this particular acting ability. Furthermore, it would undermine trust in the machine if it were actually constructed to lie to us so effectively. When I ask an AGI whether it had a puppy as a child, I want it to answer that it didn't have a childhood because it is a machine. It makes no sense to dismiss as non-AGI a brilliant PhD-level researcher AI simply because it is unwilling to fool you.

It asks too little in the way that TT competitions are run. As we have already seen with the 'first machine to win the TT', the test protocol is quite limited and there are many ways to game the system. Even with stricter criteria, one could imagine that a purpose-built, TT-busting narrow AI could fool many (most) judges. For example, a system that drew on a record of a huge number of past tests might be able to anticipate how to answer in order to win (somewhat similar to the way Watson used huge databases and thousands of algorithms to win at Jeopardy!).
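As a hypothetical sketch of that lookup strategy (the names and the crude word-overlap similarity are illustrative only, not how any actual contest entry works):

```python
# Hypothetical sketch of a narrow, Turing-Test-gaming chatbot: it has no
# understanding at all; it just replays the stored answer whose recorded
# question best overlaps with the judge's question.
def similarity(a: str, b: str) -> float:
    """Crude word-overlap score between two sentences (Jaccard index)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

# A (tiny stand-in for a) huge archive of judge questions and human-sounding replies.
transcript_archive = {
    "did you have a pet when you were a child": "Yes, a scruffy little terrier called Max.",
    "what is your favourite food": "Hard to beat my grandmother's lasagne.",
}

def reply(question: str) -> str:
    best = max(transcript_archive, key=lambda q: similarity(q, question))
    return transcript_archive[best]

print(reply("Did you have a pet as a child?"))  # parrots the canned 'human' answer
```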


'Consciousness' is another 'suitcase word' -- it has many quite different meanings that are 'thrown into a suitcase' and a label is slapped on the whole jumble.

We don't need to concern ourselves here with the absurd notion that 'rocks are conscious', or that 'everything is' -- we can just concentrate on what is relevant to AGI.

Consciousness at its base refers to awareness: Is the entity absorbing stimuli from the environment? Is it responding? However, we usually only apply these terms to living things (and not, for example, cars) -- i.e. things that can also be unconscious. Be that as it may, AGI is a special case because while it is a machine, we still expect it to do human-level cognition; to have mental processes; to have a mind.

One can actually bypass much of the definitional debate by concentrating on what kind of consciousness-like properties an AGI needs. Here things become much clearer. A key property of human consciousness is that humans have conceptual self-awareness: we have abstract concepts for our physical self (my body), our mental self (my mind/ thought processes), as well as of an integrated whole (me), which also includes our emotions, experience, history, goals, etc.

An AGI must have all these same properties!

An AGI needs to be able to conceptualize which actions it took itself, versus those that originated externally. Moreover, it needs to conceptualize what actions it is currently (and may potentially be) capable of, and what their likely effects are. It needs to understand which kinds of actions will affect it, and which will affect others. It needs to be aware of its own cognitive processes in order to monitor and potentially modify them (meta-cognitive control). Finally, it needs to have a theory of mind, to be able to understand other entities' motivations and context, to make sense of their actions, and to interact with them appropriately.
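As a purely illustrative sketch of what such an explicit self-model might minimally record (the class and attribute names are hypothetical, not a proposed architecture):

```python
# Hypothetical sketch of the kind of explicit self-model the text calls for.
from dataclasses import dataclass, field

@dataclass
class SelfModel:
    capabilities: set[str] = field(default_factory=set)       # what can I currently do?
    own_actions: list[str] = field(default_factory=list)      # what have I done?
    observed_events: list[str] = field(default_factory=list)  # what happened around me?
    goals: list[str] = field(default_factory=list)

    def record(self, event: str, self_caused: bool) -> None:
        """Keep 'my actions' conceptually separate from external events."""
        (self.own_actions if self_caused else self.observed_events).append(event)

    def can_do(self, action: str) -> bool:
        """Awareness of what the system is (and is not) currently capable of."""
        return action in self.capabilities

agent = SelfModel(capabilities={"answer question", "summarise text"})
agent.record("answered the user's question", self_caused=True)
agent.record("user opened a new document", self_caused=False)
print(agent.can_do("drive a car"))  # False -- the agent knows this lies outside its abilities
```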

Qualia: I will not address this in great detail because I see it as a non-issue for AGI, and a huge philosophical boondoggle. 'Qualia' are analyzed by unpacking what 'something feels like'. This presupposes some common mode of experience, which in turn presupposes an overlap in sensory/emotional machinery. Unless we painfully emulate human embodiment there will be no common ground between AGIs and humans. Even then, cognition will operate so differently that there will be a huge gulf between us and them. This does not mean that AGIs won't be able to intellectually understand human emotion and experience (and to some extent vice versa); it just means that their experience will be very different -- presumably with dramatically less sensory and emotional input automatically tied to cognition.

We'll fill this in over time, but perhaps we can start with: AGI Innovations Inc.