
Cognitive architectures

The term “cognitive architecture” was introduced by Allen Newell (1990), who, starting from the 1980s, pursued his own research agenda in search of Unified Theories of Cognition to be realized via integrated intelligent systems. This view diverged from that of his historical colleague Herbert Simon, who instead continued in his efforts to build computational micro-models of specific cognitive phenomena. For Newell, on the other hand, it had become clear that “you can’t play 20 questions with nature and win” (as the title of his famous 1973 cognitive science paper had already suggested). He therefore began working on the notion of cognitive architectures which, in his view, should play the same “middleware-like” role that computer architectures play with respect to the underlying hardware implementations and the upper software layers, whose interactions and information-processing mechanisms they regulate.

To look at this in more detail: this class of artificial systems was described as follows in a recent editorial on the subject, published in the journal Cognitive Systems Research, which I co-authored with Mehul Bhatt, Alessandro Oltramari, and David Vernon:

Cognitive Architectures indicate both abstract models of cognition, in natural and artificial agents, and the software instantiations of such models which are then employed in the field of Artificial Intelligence (AI). The main role of Cognitive Architectures in AI is that of enabling the realization of artificial systems able to exhibit intelligent behavior in a general setting through a detailed analogy with the constitutive and developmental functioning and mechanisms underlying human cognition. The research on Cognitive Architectures (CAs), in particular, is a wide and active area involving a plethora of disciplines such as Cognitive Science, Artificial Intelligence, Robotics and, more recently, the area of Computational Neuroscience. CAs have been historically introduced (i) “to capture, at the computational level, the invariant mechanisms of human cognition, including those underlying the functions of control, learning, memory, adaptivity, perception and action” (this goal is crucial in the cognitivist perspective), (ii) to form the basis for the development of cognitive capabilities through ontogeny over extended periods of time (this goal is one of the main targets of the so-called emergent perspective), and (iii) to reach human-level intelligence, also called General Artificial Intelligence, by means of the realization of artificial artifacts built upon them.

Lieto, Bhatt, Oltramari and Vernon (2018: 1-2)

The characterization above shows how this class of systems aims at building structural models, or artificial models of cognition, that are also expected to be state-of-the-art artificial systems according to cognitive-AI assumptions. What also emerges from the definition above is that the emergentist vs. cognitivist dichotomy is reflected in this sub-area of AI and cognitive modelling research as well. During the last few decades, many cognitive architectures have been built, relying on different theoretical and practical assumptions.

A recent survey by Kotseruba and Tsotsos (2020) points out how, over the last 40 years, more than 80 different Cognitive Architectures (developed both in their theoretical and computational counterparts and adopting cognitivist, emergentist, and hybrid approaches) have been proposed, tested, and maintained. This underlines the role that such computational artefacts have played in the past, relying on different inspiring principles, as simulative tools for understanding the mind and the underlying dynamics and interconnections between its processing mechanisms. As the computational cognitive scientist Ron Sun has correctly pointed out, cognitive architectures

play an important role in computational modeling of cognition in that they make explicit the set of assumptions upon which that cognitive model is founded. These assumptions are typically derived from several sources: biological or psychological data, philosophical arguments, or ad hoc working hypotheses inspired by work in different disciplines such as neurophysiology, psychology, or artificial intelligence. Once it has been created, a cognitive architecture also provides a framework for developing the ideas and assumptions encapsulated in the architecture.

(Sun, 2004)

Interestingly enough, Sun points out an important aspect of cognitive architecture research that we will see in greater detail in the following sections: namely, the fact that the general structural requirements of a cognitive architecture need to be tested on specialized computational models (e.g., of reasoning, selective attention, categorization, etc.) built within those architectural constraints. It is the simulations run on such specific computational models, built in compliance with the general architectural assumptions of a cognitive architecture, that provide insights about the plausibility or implausibility of the modelled mechanisms.
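Sun’s point can be made more concrete with a small, purely illustrative sketch. The snippet below is not ACT-R, SOAR, or any existing architecture: the shared decay parameter, the activation and retrieval functions, and the “observed” figures are all hypothetical placeholders. It only shows, in miniature, what it means for a task-specific model (here, a toy memory-retrieval model) to be built under a fixed architectural constraint and then to have its simulated predictions compared against behavioural data.

```python
# A minimal, illustrative sketch (not an actual architecture): a toy
# retrieval model whose parameters are constrained by hypothetical
# "architectural" assumptions, and whose simulated output is compared
# against placeholder human data to judge structural plausibility.

import math

# Hypothetical architectural constraint: a decay parameter that every
# task-specific model built on the architecture is assumed to share.
ARCHITECTURAL_DECAY = 0.5

def activation(frequency: int, delay: float) -> float:
    """Toy base-level activation: more frequent / more recent items are more active."""
    return math.log(frequency) - ARCHITECTURAL_DECAY * math.log(delay)

def retrieval_probability(act: float, threshold: float = 0.0, noise: float = 1.0) -> float:
    """Logistic mapping from activation to probability of successful retrieval."""
    return 1.0 / (1.0 + math.exp(-(act - threshold) / noise))

# Placeholder "human" accuracies (purely illustrative, not real measurements)
# for items encountered 2, 5, and 10 times.
human_data = {2: 0.55, 5: 0.75, 10: 0.85}

# Run the task-specific model under the architectural constraint and
# compare simulated predictions with the observed pattern.
for freq, observed in human_data.items():
    predicted = retrieval_probability(activation(freq, delay=10.0))
    print(f"freq={freq:2d}  predicted={predicted:.2f}  observed={observed:.2f}")
```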

Along with the different implementations and software instantiations developed over the last few decades, many different requirements and desiderata have been proposed for building integrated cognitive systems of this type. For example, on the “cognitivist” side, Langley, Laird, and Rogers (2009) identified the following abilities that such systems should be capable of performing (by exploiting different task-specific models) and integrating by using the same underlying architectural assumptions:

  • 1 Recognition and categorization
  • 2 Decision-making and choice
  • 3 Perception and situation assessment
  • 4 Prediction and monitoring
  • 5 Problem solving and planning
  • 6 Reasoning and belief maintenance
  • 7 Execution and action
  • 8 Interaction and communication
  • 9 Remembering, reflection, and learning

Ron Sun also identified four desirable features of a cognitive architecture: ecological realism, bio-evolutionary realism, cognitive realism, and eclecticism of methodologies and techniques. The first concerns the idea that a cognitive architecture should allow the artificial cognitive system to operate in its natural environment, engaging in “everyday activities” (Sun, 2004) and, as such, to perceive, decide among conflicting goals, and so on. Bio-evolutionary realism postulates that a cognitive model of human intelligence should be reducible to a model of animal intelligence, since human intelligence evolved from the capabilities of earlier primates. The cognitive realism principle postulates the necessity of building what we have defined as “structural models”.[1] The last feature, finally, points out the need to adopt a pluralistic perspective with regard to the modelling methodologies and techniques to be used, and suggests that new models should draw on, subsume, or supersede older models.

More recently, Vernon et al. (2017) proposed a set of desiderata for emergentist and developmental cognitive architectures, focusing on partially different cognitive faculties with respect to those proposed by Langley and the cognitivist researchers. In particular, in this perspective, the most important aspects concern an agent’s capability to learn, via physical, embodied, perceptual interaction with the environment, what actions to take and, more generally, what principles to put in place in order to replicate the developmental ontogenetic capacities of humans within a cognitive architecture. Cognitive architectures, in this view, are assumed to represent the overall infrastructure resulting from the phylogenetic development of an organism, from which intelligent actions and decisions should develop. Along this line, Vernon and colleagues identify ten desiderata for the design and development of such architectures:

  • 1 the need to have a value system able to guide the selection of actions to take;
  • 2 the need to have a physical embodiment (which is not considered a necessity in the cognitivist agenda);
  • 3 the need to implement the capacity to learn sensorimotor contingencies, i.e., “the relation between the actions that the agent performs and the change it experiences in its sensed data because of those actions” (Vernon et al., 2017);
  • 4 the need for a developed perceptual apparatus, with a variety of physical sensory and motor interfaces allowing the system to act on the world and perceive the effects of these actions (the richer the sensorimotor interface, the richer the model of the world the agent can construct);
  • 5 the need to implement attentional mechanisms for facilitating cognitive development;
  • 6 the need for subsystems able to deal with the capability of prospective action (i.e., goal-directed actions guided by prospection and triggered by values);
  • 7 the need to distinguish, in the memory system of the architecture, between declarative and procedural memory, which store, respectively, the learned “knowing that” and “knowing how” (a minimal illustrative sketch of this distinction follows the list);
  • 8 the need to integrate different types of learning strategies: supervised learning (i.e., learning from previously provided examples, as in the training of AlphaGo), reinforcement learning, and unsupervised learning (learning obtained with no human supervision);
  • 9 the need to model effects able to generate internal simulations in the model; and
  • 10 the need to develop systems with a constitutive autonomy based on internal self-organization able to maintain the agent’s organizational parameters within operational bounds.
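To make some of these desiderata more tangible, here is a minimal, hypothetical sketch of how a value system guiding action selection (desideratum 1) and the distinction between declarative and procedural memory (desideratum 7) might be reflected in the skeleton of an agent. All class and attribute names are invented for this example; no existing architecture’s API is being reproduced.

```python
# A minimal, hypothetical agent skeleton illustrating two of the desiderata:
# a value system that ranks candidate actions, and separate declarative
# ("knowing that") and procedural ("knowing how") memories.

from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Memory:
    declarative: Dict[str, str] = field(default_factory=dict)                # facts: "knowing that"
    procedural: Dict[str, Callable[[], None]] = field(default_factory=dict)  # skills: "knowing how"

@dataclass
class Agent:
    memory: Memory = field(default_factory=Memory)
    # Value system: assigns a desirability score to each candidate action.
    values: Dict[str, float] = field(default_factory=dict)

    def select_action(self, candidates: List[str]) -> str:
        """Pick the candidate action with the highest value (desideratum 1)."""
        return max(candidates, key=lambda a: self.values.get(a, 0.0))

    def act(self, action: str) -> None:
        """Execute a learned skill if one is stored in procedural memory."""
        skill = self.memory.procedural.get(action)
        if skill is not None:
            skill()

# Illustrative use: the agent knows a fact, has one learned skill,
# and prefers "grasp" over "wait" according to its value system.
agent = Agent()
agent.memory.declarative["cup"] = "graspable object"
agent.memory.procedural["grasp"] = lambda: print("executing grasp routine")
agent.values.update({"grasp": 0.9, "wait": 0.1})
agent.act(agent.select_action(["grasp", "wait"]))
```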

One of the most relevant outcomes of this emergentist and embodied perspective is represented by the iCub cognitive architecture (Metta et al., 2010), developed by Giulio Sandini, Giorgio Metta, and their group at IIT (the Italian Institute of Technology) and employed in the humanoid robot iCub. As Vernon also points out (Vernon, 2014), however, while this class of embodied cognitive architectures has shown interesting performances on perception-related tasks, ontogenetic learning (including language acquisition, see Cangelosi and Parisi, 2012), and sensorimotor coordination, there is currently still a gap with respect to the older cognitive architectures developed within the cognitivist and hybrid traditions. The latter have been used in a wider variety of tasks, partially overlapping with the lower-level activities of emergentist architectures but, in addition, yielding more convincing performances and structural accuracy on tasks concerning high-level cognitive faculties, ranging from reasoning, to natural language understanding, to planning and meta-level cognitive capabilities.

Currently, among the many different cognitive architectures realized, the two unanimously recognized as the most successful in the AI and cognitive modelling communities are, without any doubt, SOAR and ACT-R, which have been widely tested in several cognitive tasks involving learning, multi-step reasoning, selective attention, multimodal perception, recognition, and many others. In the following section, we first provide a general overview of both architectures by illustrating their main structural elements. Then, in order to compare the structural accuracy and explanatory power of such systems, we will focus on a specific type of computational model dealing with the process of conceptual categorization and retrieval: a task I have worked on extensively over the last decade.

  • [1] As concerns cognitive realism, Sun underlines the role of implicit and explicit processes (directly reflected in his own cognitive architecture, CLARION [Sun, 2007]). In particular, he posits that the interconnection between such different processes should encompass all the cognitive faculties (from learning to reasoning to metacognition). In the CLARION cognitive architecture (Sun, 2007), implicit processes operate on connectionist representations and explicit processes on symbolic representations. This architecture is an example of a hybrid architecture capable of autonomously generating explicit conceptual structures by exploiting implicit knowledge acquired through trial-and-error learning; it can also perform top-down learning by integrating externally provided knowledge, in the form of explicit rule-based conceptual structures, and assimilating it into the bottom-level implicit representations.
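The bottom-up learning idea mentioned in the footnote can be sketched in a deliberately simplified form. This is not CLARION’s actual algorithm: the Q-table standing in for the connectionist layer, the learning rate, and the extraction threshold are all assumptions made for illustration; the only point conveyed is that explicit, symbolic rules are promoted from implicit associations once trial-and-error learning has made them strong enough.

```python
# Simplified illustration of bottom-up rule extraction: an implicit layer
# (a tiny Q-table standing in for connectionist representations) is updated
# by trial and error, and sufficiently strong associations are promoted to
# explicit, symbolic condition-action rules. Thresholds and names are hypothetical.

from collections import defaultdict

q_values = defaultdict(float)    # implicit layer: state-action strengths
explicit_rules = set()           # explicit layer: symbolic rules
LEARNING_RATE = 0.3
EXTRACTION_THRESHOLD = 0.6       # assumed cut-off for promoting a rule

def implicit_update(state: str, action: str, reward: float) -> None:
    """Trial-and-error update of the implicit layer."""
    q_values[(state, action)] += LEARNING_RATE * (reward - q_values[(state, action)])

def extract_rules() -> None:
    """Promote sufficiently strong implicit associations to explicit rules."""
    for (state, action), strength in q_values.items():
        if strength >= EXTRACTION_THRESHOLD:
            explicit_rules.add(f"IF {state} THEN {action}")

# Illustrative run: repeated success of 'grasp' when a cup is visible
# eventually yields an explicit rule.
for _ in range(5):
    implicit_update("cup_visible", "grasp", reward=1.0)
extract_rules()
print(explicit_rules)            # {'IF cup_visible THEN grasp'}
```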