Any AGI must at a minimum possess a core set of cognitive abilities. These skills must be implementable in a practical way -- i.e. they must work with incomplete, potentially contradictory, and noisy environments using finite computing and time resources.

All of these abilities must be able to operate in real-time on (at least) 3-dimensional, dynamic (temporal) data, as well as on stimulus-response (causal) relationships. Operations must be scalar, not just binary (how good is the fit? how certain am I?).
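The point about scalar rather than binary operations can be sketched in code. This is only an illustration (all names are hypothetical, not from any particular system): a matcher that returns a graded fit and a graded certainty instead of a yes/no answer, so that noisy or partial input still yields usable information.

```python
from dataclasses import dataclass

@dataclass
class MatchResult:
    fit: float        # how good is the fit? (0 = none, 1 = perfect)
    certainty: float  # how certain am I? (grows with how much was observed)

def graded_match(observed, reference):
    """Compare two feature vectors; missing (None) features reduce certainty,
    not fit -- incomplete data still produces a scalar answer."""
    pairs = [(o, r) for o, r in zip(observed, reference) if o is not None]
    if not pairs:
        return MatchResult(fit=0.0, certainty=0.0)
    # Fit: mean closeness of the features we actually observed.
    fit = sum(1.0 - min(1.0, abs(o - r)) for o, r in pairs) / len(pairs)
    # Certainty: fraction of the reference we were able to observe at all.
    certainty = len(pairs) / len(reference)
    return MatchResult(fit, certainty)
```

A binary matcher would have to reject the partially observed input outright; the scalar version reports both how well it fits and how much evidence that judgment rests on.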

Here is a basic list:

While most examples here are language-based, these abilities must also operate in a purely perception-action mode.



Salience -- selecting what is relevant and important to a given context and goal -- is an important aspect of intelligent systems.

This comes into play at different levels of cognition:

Firstly, in autonomous data selection on input -- which senses and features to process and/or ignore, and what level of importance to assign to them for processing. For example, most animals are wired to pay extra attention to fast-moving items in their visual field, and to loud sounds. For AGI we have to assume that much more sensory input will be available than can (or should) reasonably be processed. We must also assume that relevant feature extractors, such as edge or shape detection, must be prioritized. Some semi-automatic mechanism needs to perform this pre-selection. This mechanism should be under overall high-level cognitive control that presets its parameters -- for example, biasing it to focus on changes in color or pitch.
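This pre-selection stage can be sketched as follows. The sketch is purely illustrative (the feature names, bias values, and budget are invented for the example): bottom-up salience scores are weighted by top-down bias parameters preset by higher-level cognition, and only the features that fit the processing budget are passed on.

```python
def preselect(features, biases, budget):
    """features: {name: raw bottom-up salience};
    biases: {name: top-down multiplier preset by high-level control};
    budget: how many features downstream processing can afford."""
    scored = {name: raw * biases.get(name, 1.0) for name, raw in features.items()}
    # Keep only the most salient features within the processing budget.
    keep = sorted(scored, key=scored.get, reverse=True)[:budget]
    return {name: scored[name] for name in keep}

# Hypothetical sensory snapshot; motion is loudest bottom-up...
senses = {"motion": 0.9, "colour_change": 0.4, "pitch": 0.2, "texture": 0.3}
# ...but high-level control has biased attention toward colour changes.
attended = preselect(senses, biases={"colour_change": 3.0}, budget=2)
```

The key design point is that the fast, semi-automatic scoring loop and the slow, deliberate bias-setting are separate: cognition does not inspect every input, it only tunes the filter.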

Once input has been appropriately selected and prioritized, pattern matching, categorization, and conceptualization mechanisms need to be selected according to contextual requirements. What matters currently? For example, are we trying to match incoming patterns against each other, or against some internal reference; are we interested in shape or texture patterns; or are we just interested in object collisions?
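Selecting a mechanism by contextual requirements can be sketched as a simple dispatch: the current context determines which matcher is relevant. All names here are hypothetical, and real matchers would of course be far richer than these stubs.

```python
def shape_matcher(a, b):
    # Context cares about shape: compare coarse outline features only.
    return 1.0 if a["outline"] == b["outline"] else 0.0

def texture_matcher(a, b):
    # Context cares about texture: compare surface statistics only.
    return 1.0 - min(1.0, abs(a["roughness"] - b["roughness"]))

# The current context selects which mechanism runs at all.
MATCHERS = {"shape": shape_matcher, "texture": texture_matcher}

def match_in_context(context, a, b):
    return MATCHERS[context](a, b)

obj1 = {"outline": "round", "roughness": 0.2}
obj2 = {"outline": "round", "roughness": 0.7}
```

Note that the two matchers can disagree: the same pair of objects may match perfectly on shape and poorly on texture, which is exactly why the context must decide what currently matters.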

Higher level goals also need to be selected and prioritized according to salience. What are we trying to achieve right now? What dependencies are there? What is most important in the current context?
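The three questions above -- what to achieve, what depends on what, what matters most now -- can be sketched together. This is an illustrative toy (goal names and salience values are invented): goals whose dependencies are unmet are held back, and the rest are ordered by current salience.

```python
def prioritize(goals, done):
    """goals: list of (name, salience, dependencies); done: set of completed goals.
    Returns the pending goals that are ready to pursue, most salient first."""
    ready = [(name, sal) for name, sal, deps in goals
             if name not in done and set(deps) <= done]
    return [name for name, _ in sorted(ready, key=lambda g: g[1], reverse=True)]

goals = [
    ("eat", 0.8, ["find_food"]),   # most important, but blocked by a dependency
    ("find_food", 0.6, []),
    ("rest", 0.3, []),
]
```

Calling `prioritize(goals, set())` defers "eat" despite its high salience, because its dependency is unsatisfied; once "find_food" is done, "eat" moves to the front.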

Finally, the overall architecture has to allow for consolidation and forgetting. What information or experience should be consolidated? What should be forgotten (or archived)?
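A minimal sketch of such a consolidation/forgetting sweep, with invented bookkeeping and arbitrary illustrative thresholds: frequently used items are marked for long-term consolidation, while stale, rarely used items are marked for forgetting (archival).

```python
def sweep(memories, now, consolidate_after=3, forget_after=10):
    """memories: {key: {"uses": access count, "last_used": tick of last access}}.
    Returns (to_consolidate, to_forget); forgotten items would be archived,
    not destroyed, so they remain recoverable."""
    to_consolidate, to_forget = [], []
    for key, m in memories.items():
        if m["uses"] >= consolidate_after:
            to_consolidate.append(key)        # worth keeping long-term
        elif now - m["last_used"] >= forget_after:
            to_forget.append(key)             # stale and rarely used
    return to_consolidate, to_forget

mems = {
    "route_home": {"uses": 5, "last_used": 9},     # used often and recently
    "one_off_noise": {"uses": 1, "last_used": 0},  # used once, long ago
}
kept, dropped = sweep(mems, now=12)
```

The interesting architectural question is not the bookkeeping but the policy: what counts as "important enough to consolidate" is itself context- and goal-dependent, i.e. another salience judgment.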

An AGI needs to have mechanisms in place at each of these levels (and probably some others) to evaluate salience and to adjust cognition accordingly.



'Understanding' is a suitcase word (a term coined by Marvin Minsky), like 'intelligence' -- it has many quite different meanings that are 'thrown into a suitcase' and a label is slapped on the whole jumble. Let's try a few:

So, 'understanding' at its simplest is 'being able to respond appropriately'. The next level is being able to tell the information back in one's own words. Another level is knowing the implications. Personal experience adds more. Having deep subject knowledge and skill is another angle. Finally, there is knowing in detail how something functions -- knowing the specific cause-effect mechanisms.