
Cognitive Architectures

An entity capable of intelligent behavior is called an intelligent agent, but not all intelligent agents actually perform intelligent behavior (King, 1995). The range of intelligence exhibited by intelligent agents is large; a thermostat, for example, can be considered an intelligent agent (Russell and Norvig, 2009).

Intelligent agents are assumed to be situated in an environment which the agent can sense, through various sensors, and alter, by using various effectors to perform actions. Physical disturbances in the environment are sensed and converted into information by the agent. The agent uses various internal knowledge and reasoning mechanisms to decide on an action to take. Actions are performed by manipulating and controlling effectors which change the environment in some way. This perceive-reason-act cycle is pervasive throughout cognitive architectures. Thus, an intelligent agent is anything able to sense and change its environment, even if doing so does not involve intelligence.
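A minimal sketch of this perceive-reason-act cycle in Python follows; the sense, decide, and act methods are hypothetical stand-ins for an agent's sensors, reasoning mechanisms, and effectors and are not taken from any particular architecture.

# Hypothetical perceive-reason-act loop. sense(), decide(), and act()
# stand in for the agent's sensors, reasoning mechanisms, and effectors.
def run_agent(agent, environment, cycles=100):
    for _ in range(cycles):
        percept = agent.sense(environment)    # physical disturbances converted to information
        action = agent.decide(percept)        # internal knowledge and reasoning select an action
        agent.act(environment, action)        # effectors change the environment in some way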

Since the beginning of artificial intelligence as a discipline in the 1950s, researchers have sought to understand intelligence and have constructed a number of formulations to describe and explain intelligent behavior. In general, these formulations are called cognitive architectures. Cognitive architectures identify and explain structures in biological or artificial intelligent agents and explore how these structures work together, along with knowledge and skills, to produce intelligent behavior. This chapter presents cognitive architectures collected from artificial intelligence, cognitive science, and agent theory.

Genealogy of Cognitive Architectures

Most of the cognitive architectures presented in this chapter come from three generations of researchers who have associations with one another and with universities such as Carnegie Mellon, the University of Michigan, MIT, and Stanford. Allen Newell (Carnegie Mellon University), Herbert A. Simon (Carnegie Mellon University), John McCarthy (MIT), Marvin Minsky (MIT), and Arthur Samuel (IBM), among a few others, are credited with founding and leading the field of artificial intelligence research (McCarthy et al., 1955).

Among Herbert A. Simon's doctoral students were Edward Feigenbaum and Allen Newell. Feigenbaum is often referred to as the father of expert systems and, along with Simon, developed EPAM (Elementary Perceiver and Memorizer), a computer program simulating phenomena in verbal learning (Feigenbaum and Simon, 1984). EPAM formalized the notion of a "chunk" and later influenced the CHREST (Chunk Hierarchy and Retrieval Structures) model developed by Gobet and Lane (Gobet et al., 2001).

Allen Newell's mentees included Hans Berliner, Paul Rosenbloom, and John Laird. Berliner is known for developing the HiTech chess program and the BKG backgammon program (Berliner, 1977; 1980). Berliner's doctoral student and HiTech team member, Murray Campbell, went on to work on IBM's Deep Thought and Deep Blue systems, the latter of which defeated the human chess champion Garry Kasparov in 1997, a landmark event in artificial intelligence. Rosenbloom, Laird, and Newell developed the Soar cognitive architecture, one of the leading cognitive architectures (Laird, 2012).

Laird started work on Soar at Carnegie Mellon but later took a position at the University of Michigan. The developers of the EPIC cognitive architecture, David Meyer and David Kieras, graduated from Michigan and are also professors there. Together with Soar, ACT-R, and CLARION, EPIC is considered among the leading cognitive architectures (Kieras and Meyer, 1997).

John Anderson received his PhD from Stanford University and, along with his doctoral student Christian Lebiere, developed the ACT-R (Adaptive Control of Thought-Rational) architecture at Carnegie Mellon (Anderson, 2013). Also associated with Stanford University were Nils Nilsson and Michael Genesereth. Nilsson and Genesereth authored Logical Foundations of Artificial Intelligence in 1987, which became one of the most widely read books in artificial intelligence (Genesereth and Nilsson, 1987). One of Genesereth's doctoral students was Stuart Russell, who later, with Peter Norvig, wrote Artificial Intelligence: A Modern Approach, which through several revisions has become the leading book in artificial intelligence (Russell and Norvig, 2009). Several of the cognitive architectures presented in this chapter come from these two books.

Another student of Genesereth was Jeffrey Rosenschein, who became known for work in distributed artificial intelligence, which focuses on agent-based behavior (Rosenschein and Genesereth, 1988). Tom Hinrichs, also a student of Genesereth, together with Ken Forbus from MIT, developed the cognitive companion systems concept (Forbus and Hinrichs, 2006). A cognitive companion shares many characteristics of the collaborating cognitive systems (cogs) we focus on in this book. Another Stanford professor, Pat Langley, developed Icarus, a cognitive architecture for physical agents (Langley and Choi, 2006). Langley has challenged the cognitive systems community to create artificial entities, such as artificial lawyers, as a way of giving the research community a concrete goal (Langley, 2013).

One of the original pioneers of artificial intelligence, Marvin Minsky of MIT, developed the "society of mind" concept (Minsky, 1986) and later refined it into the Emotion Machine architecture (Minsky, 2007). Several of Minsky's students have made major contributions. Carl Hewitt is known for the actor model, the view of systems as collections of interacting actors (agents) (Hewitt et al., 1973). Patrick Winston, also with MIT, wrote Artificial Intelligence, which became a widely read book in the 1980s and 1990s through several revisions (Winston, 1977). Luc Steels identified the components of expertise described earlier in this book (Steels, 1990).

General Problem Solver

As shown in Fig. 6-1, one of the earliest efforts in artificial intelligence research, the General Problem Solver (GPS), was created in 1959 and was intended to be a universal problem-solving machine (Newell et al., 1959). At any given point in time, the machine exists in one of a set of states called the problem state space, with one state declared the goal state. In each iteration, the machine determines the distance from the current state to the goal state and then selects an operator to perform, moving the machine to a state closer to the goal. The machine iterates until the goal is achieved. GPS was the first computer program to separate knowledge of problems from the strategy of how to solve problems. While it was a necessary logical first step, GPS was never able to solve real-world problems. However, GPS eventually evolved into the Soar architecture described later.

Fig. 6-1: The general problem solver (GPS).
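The loop just described can be sketched in a few lines of Python. This is a simplified, hill-climbing rendering of the difference-reduction idea, not a reconstruction of Newell, Shaw, and Simon's program; the distance function and the operator set are hypothetical placeholders for the problem-specific knowledge GPS kept separate from its solving strategy.

# Sketch of the GPS-style loop: repeatedly apply the operator whose result
# lies closest to the goal. operators and distance() describe the problem
# (hypothetical placeholders here), separate from the solving strategy.
def general_problem_solver(start, goal, operators, distance, max_steps=1000):
    state = start
    for _ in range(max_steps):
        if state == goal:
            return state                                      # goal state reached
        successors = [op(state) for op in operators]          # apply each operator
        state = min(successors, key=lambda s: distance(s, goal))  # keep nearest to goal
    return None                                               # gave up without reaching the goal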

Reflexive/Tropistic/Instinctive Agent

The simplest architecture to capture the perceive/act nature of agents embedded in an environment is the reflexive agent architecture shown in Fig. 6-2 (Russell and Norvig, 2009). A pervasive idea in agent theory is that an agent perceives its environment, reasons about its perceptions, and then acts to effect a change in the environment. However, some actions do not require high-level reasoning. For example, if you touch a hot surface with your finger, you will jerk your finger away before your brain has a chance to process the event. In humans, this is called a reflexive action. Reflexive agents are also called tropistic or instinctive agents (Genesereth and Nilsson, 1987).

A reflexive agent perceives the environment and instinctively or reflexively acts without first reasoning about the perception or action. The set of actions the agent can perform is A. Each action, when taken, causes the environment to change to another state in S (A × S → S). During each cycle, the environment exists in a state (s ∈ S); however, the agent is not able to detect every state in S. The agent can perceive only one of a set of partitions (groups of states) of S called T. The reflexive agent perceives the environment (see function) and detects which partition (t ∈ T) the environment is in (S → T). Having perceived the environment to the best of its ability, the agent selects an action (a ∈ A) (action function) and then performs the selected action (do function), causing the environment to transition to a new state (A × S → S).

Fig. 6-2: Reflexive/Tropistic/Instinctive agent.
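A minimal Python rendering of this cycle, with the see, action, and do functions passed in as hypothetical problem-specific callables, follows the S, T, A formalism above.

# Reflexive (tropistic) agent: see maps a state s in S to a partition t in T,
# action maps t in T to an action a in A, and do maps (a, s) to a new state in S.
def reflexive_agent(s, see, action, do, cycles=10):
    for _ in range(cycles):
        t = see(s)          # perceive: S -> T (which partition the environment is in)
        a = action(t)       # select:   T -> A (purely reflexive, no reasoning)
        s = do(a, s)        # act:      A x S -> S (environment transitions to a new state)
    return s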

Model-Based Reflexive/Hysteretic Agent

As shown in Fig. 6-3, a model-based reflexive agent (also called a hysteretic agent) maintains a set of internal representations, M, of itself and the environment and incorporates this information into selecting its actions (Russell and Norvig, 2009; Genesereth and Nilsson, 1987). The set of actions the agent can perform is A. Each action, when taken (do function), causes the environment to change to another state (A × S → S). During each cycle, the environment exists in a state (s ∈ S); however, the agent is not able to detect every state in S. The agent can perceive only one of a set of partitions (groups of states) of S called T. The agent perceives the environment (see function) and detects which partition (t ∈ T) the environment is in (S → T). The agent modifies its internal representation (model function) based on its new perceptions (M × T → M). Having perceived the environment and updated its internal model, the agent selects an action (a ∈ A) (action function) using both the internal model and its perceptions (M × T → A). The agent then performs the selected action (do function), causing the environment to transition to a new state (A × S → S).

Fig. 6-3: A model-based reflexive/hysteretic agent.
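Extending the previous sketch, a hypothetical model-based version adds the internal representation m and the model update step; as before, the concrete functions are placeholders supplied for illustration.

# Model-based reflexive (hysteretic) agent: in addition to see, action, and do,
# a model function M x T -> M updates the internal representation m, and the
# action function M x T -> A uses both the model and the current perception.
def hysteretic_agent(s, m, see, model, action, do, cycles=10):
    for _ in range(cycles):
        t = see(s)          # perceive: S -> T
        m = model(m, t)     # update internal model: M x T -> M
        a = action(m, t)    # select action from model and perception: M x T -> A
        s = do(a, s)        # act: A x S -> S
    return s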

A model is any kind of representation depicting a state or condition of interest to an agent. One of the first types of models was Marvin Minsky's frames (Minsky, 1977). Frames were one of the first attempts in artificial intelligence research to capture knowledge in an independent form (other than symbols and production rules) so it could be processed. A frame consists of a number of slots, with each slot containing a piece of information about the frame (and possibly other frames). An example of a frame is shown in Fig. 6-4.

A more dynamic form of a frame, called a template, was developed by Gobet (Gobet et al., 2001; Gobet, 2016). Models allow agents to recognize and classify objects, situations, and conditions in the environment. As an agent perceives the environment, it may "fill in" information about an item. When enough partial information is obtained (enough of the frame's slots are filled with information), the agent can draw conclusions about the item. This feature allows model-based agents to operate in a real-world environment and act robustly with only partial or incomplete information.

For example, an agent may maintain a set of models describing various dangers to be avoided. As time progresses, the agent acquires information about the environment. When enough information is collected in one of the "danger" models, the agent may conclude, with some degree of confidence, that a danger of that type exists and then act accordingly.

Fig. 6-4: A frame data structure.
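As a rough illustration of the slot-filling behavior described above (the slot names and threshold are hypothetical, not the example of Fig. 6-4), a frame can be represented as a named collection of slots that is filled incrementally and concludes a match once enough slots hold values.

# Hypothetical frame whose slots are filled incrementally from perceptions.
# When enough slots are filled, the agent concludes the frame matches
# (for instance, one of the "danger" models described above).
class Frame:
    def __init__(self, name, slot_names, threshold=0.6):
        self.name = name
        self.slots = {slot: None for slot in slot_names}
        self.threshold = threshold                 # fraction of slots needed to conclude a match

    def fill(self, slot, value):
        if slot in self.slots:
            self.slots[slot] = value

    def matches(self):
        filled = sum(1 for v in self.slots.values() if v is not None)
        return filled / len(self.slots) >= self.threshold

# Example: a "fire danger" frame that becomes active on partial information.
fire = Frame("fire_danger", ["smoke", "heat", "flame", "alarm"])
fire.fill("smoke", True)
fire.fill("heat", "rising")
print(fire.matches())   # False: 2 of 4 slots filled, below the 0.6 threshold
fire.fill("alarm", True)
print(fire.matches())   # True: 3 of 4 slots filled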

 