
The Nature of Expertise

In the cognitive systems era, systems will be capable of expert-level performance in virtually any domain. These "cogs," as we call them, will be made available to billions of people across the world via a host of apps and services offered largely via the Internet and used on handheld devices. But what does it mean to be an expert? What is expertise? What do we mean when we say cogs will achieve expert-level performance?

The nature of intelligence and expertise has been debated for decades. As Gobet recently pointed out, traditional definitions of expertise rely on knowledge (the "know-that") and skills (the "know-how") (Gobet, 2016). A typical traditional definition might read as follows:

"a person having special skill or knowledge in some particular field."

This is a satisfying definition because we expect an expert to know more about a topic and be able to perform skills related to that topic better than the average person. However, Gobet gives a definition of expertise from a different perspective:

" expert is somebody who obtains results vastly superior to those

obtained by the majority of the population."

This definition focuses on the outcomes an expert achieves rather than, as the former definition does, on the expert's constituent knowledge and skills. This point of view has woven through the field of artificial intelligence for decades.

The Turing Test and Searle’s Chinese Room

Turing (1950) described the "imitation game," in which the output of a machine is compared to the output of a human; when an observer is unable to determine which is human and which is machine, the machine can be declared "intelligent." Restated in various forms throughout the decades, this is usually referred to as the "Turing Test."

The notion has been debated over the years. For example, the thought experiment known as Searle's Chinese Room envisions a human within a closed room translating English to Chinese and Chinese to English simply by following a set of rules printed in a book (Searle, 1980). The person is not really doing the translation and, in fact, may not even be able to read English or Chinese. The person is simply following the prescribed rules. Messages in one language, slid under the door, are translated and slid back out under the door. If we just look at the inputs and outputs of Searle's Chinese Room, it satisfies the Turing Test. However, it raises the question: where is the intelligence?

The Turing Test focuses on the outputs, not on how the outputs are produced. Saying the entire room is intelligent does not seem correct. It is not the room itself that is special here; the magic happens in the room's contents. Saying the human translator is intelligent is not correct either, since they know nothing of English or Chinese. Simply following rules and manipulating symbols is not considered intelligence. Even if it were, it is not intelligence related to English/Chinese translation.

Is the book of rules therefore where the intelligence lies? Few would be comfortable saying a static, inanimate object like a book is intelligent. The book does contain the knowledge needed for the expertise of translation, but it cannot perform actions, so it possesses no skills.

What about the book plus the human? The book represents the knowledge and the human represents the skills (the ability to perform actions by following the rules). Arguments for this interpretation have been made over the years. Interestingly, this interpretation almost gets us back to the human/cog idea of synthetic expertise described in this book. The human is a biological entity and the book is an artificial entity. The difference, though, is that the book will never be able to perform any cognitive processing at all, much less cognitive processing approaching the human expert level. In our view of the cognitive systems era, humans will partner with artificial entities capable of performing expert-level cognitive processing. So it is closer to correct to think of human/cog collaboration as a partnership of two sub-experts who achieve expertise when working together. We later argue human/cog collaboration will lead to a future in which more of the average population will attain expert-level performance: synthetic expertise (Fulbright, 2020). More on this later, as it is the theme of the book.


Nobel laureate and Turing Award recipient Herbert A. Simon studied the nature of expertise as early as the 1950s. Simon and Gilmartin (1973) estimated that experts must learn on the order of 50,000 "chunks" of information/knowledge about their specific domain. Furthermore, experts gather this store of knowledge from years of experience (on the order of 10,000 hours). This represents the "knowledge" portion of the traditional definition of expertise. Simon and Gilmartin created the Memory-Aided Pattern Perceiver (MAPP) model describing how experts match a current situation against this enormous store of domain knowledge. Steels (1990) later described this type of knowledge as deep domain knowledge. In their study of expert chess players, Chase and Simon (1973) noted:

"... a major component of expertise is the ability to recognize a very large number of specific relevant cues when they are present in any situation, and then to retrieve from memory information about what to do when those particular cues are noticed. Because of this knowledge and recognition capability, experts can respond to new situations very rapidly—and usually with considerable accuracy."

This view of expertise is based on pattern recognition. Experts match the patterns in the current situation to instances where these patterns have been experienced before. Having found a match in memory, experts retrieve information and solutions from previous experience and then apply those to the current situation. Experts extract from memory much more knowledge, both implicit and explicit, than novices. Furthermore, an expert applies this greater knowledge more efficiently and quickly to the situation at hand than a novice. Therefore, experts are better and more efficient problem solvers than novices. This represents the "skills" portion of the traditional definition of expertise.

The pattern recognition characteristic of expertise is interesting because deep learning and convolutional neural networks are well suited to exactly this kind of processing. As further discussed in Chapter 2 and Chapter 4, deep neural networks detect patterns in data. A stimulus is presented to an artificial neural network, which is allowed to respond. Here the "stimulus" represents the current situation encountered by an expert. Refining and recording a collection of responses is analogous to an expert compiling his or her store of deep domain knowledge. When the neural network responds in a way similar to a previous response, it has found a match with a previously encountered situation. Pattern recognition such as this is behind many recent achievements of cognitive systems. Classification tasks, such as medical diagnosis, are exercises in pattern matching, as are question answering and knowledge retrieval.

The ability of an expert to quickly jump to the correct solution has been called intuition. A great deal of effort has gone into defining and studying intuition. Dreyfus argued intuition is a holistic human property which cannot be captured by a computer program (Dreyfus, 1972; Dreyfus and Dreyfus, 1988). However, Simon and colleagues argued intuition is just efficient matching and retrieval of "chunks" of knowledge and know-how (Gobet and Simon, 2000; Gobet and Chassy, 2009). The debate between Simon's and the Dreyfus brothers' interpretations has persisted for decades.

In this book, we side with Simon. Intuitive human experts seem to jump straight to a solution without any obvious thinking being necessary. However, what is really happening is that the expert is matching the situation and retrieving solutions they already know, from experience, will work. So, while it is true there is less high-level thinking involved, there is no magic here. Intuition is just highly efficient pattern matching and recall. Today's cognitive systems are already doing the pattern-matching portion of this and achieving results exceeding humans in many domains. We expect the near future will see cognitive systems able to store and retrieve experiences and solutions and thereby mimic intuition.
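
This view of intuition as matching and retrieval can be sketched in a few lines of code. The class, the feature-vector encoding of a "situation," and the cosine-similarity measure below are our own illustrative choices, not any specific cognitive model from the literature:

```python
# Sketch of expertise as pattern matching: a hypothetical "expert" stores
# previously encountered situations (chunks) with their solutions and
# answers a new situation by retrieving the closest stored match.
from math import sqrt

def similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

class PatternExpert:
    def __init__(self):
        self.memory = []  # list of (situation_features, solution) chunks

    def learn(self, situation, solution):
        self.memory.append((situation, solution))

    def respond(self, situation):
        # "Intuition": no deliberation, just retrieval of the best match.
        best = max(self.memory, key=lambda chunk: similarity(chunk[0], situation))
        return best[1]

expert = PatternExpert()
expert.learn([1.0, 0.0, 0.0], "solution A")
expert.learn([0.0, 1.0, 0.0], "solution B")
print(expert.respond([0.9, 0.1, 0.0]))  # closest to the first chunk: solution A
```

The speed of real experts comes from having tens of thousands of such chunks and retrieving them far more efficiently than this linear scan, but the input/output behavior is the same: a new situation in, a remembered solution out.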

The idea of a "chunk" of information can be traced back to Miller, a psychologist studying human memory, who famously observed that humans can hold about seven plus or minus two items in short-term memory (Miller, 1956). This is why social security numbers and telephone numbers are written in groups the way they are. However, precisely defining what constitutes a "chunk" of information has proven elusive over the decades.
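
The telephone-number example can be made concrete: regrouping a ten-digit string into three chunks brings the item count well within Miller's span. The helper function below is purely illustrative:

```python
# Miller-style chunking: 10 individual digits exceed the 7 +/- 2 span of
# short-term memory, but 3 chunks fit comfortably.
def chunk(digits, sizes):
    """Split a digit string into consecutive groups of the given sizes."""
    groups, i = [], 0
    for size in sizes:
        groups.append(digits[i:i + size])
        i += size
    return groups

print(chunk("8005551234", [3, 3, 4]))  # ['800', '555', '1234']
```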

de Groot (1965) established the importance of perception in expertise, showing that an expert perceives the most important aspects of a situation faster than novices do. Chase and Simon (1973) showed expert chess players recognize board positions and identify the best next move far quicker than novices. In this context, a "chunk" is a configuration of pieces encountered before, together with the sequence of moves one should use in that situation. Feigenbaum and Simon created EPAM (Elementary Perceiver and Memorizer) to model human learning and concept formation, establishing the concept of a chunk in cognitive science (Feigenbaum and Simon, 1984). Most cognitive architectures, such as the Soar architecture described in Chapter 6, include chunks as a feature of knowledge storage and recall (Laird et al., 1986). In Soar, a chunk is a production rule, or set of rules, derived from experience and capturing a piece of knowledge.

Capturing and representing knowledge has been a challenge for artificial intelligence researchers since the field's beginning. Newell and Simon formulated the notion of symbol manipulation early on in the Physical Symbol System (PSS) hypothesis, which maintains that knowledge can be represented by symbols combined into structured expressions such as predicate statements and production rules (Newell and Simon, 1976).
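
To make the production-rule idea concrete, here is a minimal sketch in our own notation (not Newell and Simon's): symbolic knowledge stored as condition/action rules, applied against a working memory of symbols. The weather symbols and actions are invented for illustration:

```python
# A toy physical symbol system: rules are (condition, action) pairs, and a
# rule "fires" when all of its condition symbols appear in working memory.
rules = [
    ({"wet", "cold"}, "wear_coat"),
    ({"wet"}, "take_umbrella"),
]

def fire(working_memory):
    """Return the actions of every rule whose condition symbols are all present."""
    return [action for condition, action in rules
            if condition <= working_memory]  # subset test on symbol sets

print(fire({"wet", "cold", "dark"}))  # ['wear_coat', 'take_umbrella']
```

Real production systems add conflict resolution (choosing which matched rule to apply) and modify working memory when a rule fires; this sketch shows only the matching step.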

Various other mechanisms have been created to capture knowledge. Minsky famously created the concept of a frame to capture and represent knowledge (Minsky, 1977). A frame is simply a collection of named fields, called slots, with values stored in the slots. A frame, or even a collection of related frames, can be considered a chunk of knowledge.
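
A plain dictionary is one hypothetical way to render a frame in code; the slot names and values below are invented for illustration:

```python
# A Minsky-style frame: a named collection of slots with values.
faucet_frame = {
    "frame": "kitchen_faucet",   # the frame's name
    "type": "fixture",           # slot: category of object
    "material": "brass",         # slot: what it is made of
    "connected_to": "sink",      # slot: relation to another frame
}

# Retrieving knowledge is just reading a slot.
print(faucet_frame["material"])  # brass
```

A collection of related frames, say a `sink` frame referenced by the `connected_to` slot, could then itself be treated as a single chunk of knowledge, as the paragraph above suggests.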

Gobet defined a chunk as "a collection of elements having strong associations with one another, but weak associations with elements within other chunks" (Gobet et al., 2001). Gobet has argued the traditional notion of a chunk is too simple and too static for real-world situations. Instead, he introduced the template: a dynamic chunk with static components and variable (dynamic) components, resembling a complex data structure. A template is in many ways similar to Minsky's frames. Templates allow an expert to quickly process information at different levels of abstraction, yielding the extreme performance consistent with intuition.
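
Under our reading of Gobet's proposal, a template might be sketched as a chunk with a fixed core plus slots filled in at recognition time. The class and the chess-opening example below are our own illustrative construction, not Gobet's formal model:

```python
# Sketch of a Gobet-style template: static components that are always part
# of the chunk, plus dynamic slots whose values vary with the situation.
class Template:
    def __init__(self, core, slot_names):
        self.core = core                                   # static components
        self.slots = {name: None for name in slot_names}   # dynamic components

    def fill(self, **values):
        """Bind values to known slots; unknown names are ignored."""
        for name, value in values.items():
            if name in self.slots:
                self.slots[name] = value

# A familiar opening pattern with a fixed core and situation-dependent slots.
opening = Template(core={"e4", "e5", "Nf3"}, slot_names=["black_reply", "plan"])
opening.fill(black_reply="Nc6", plan="develop bishops")
print(opening.slots["black_reply"])  # Nc6
```

The static core is what lets the expert recognize the pattern instantly; the slots are what let the same chunk cover many concrete situations.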

The Chunk Hierarchy and Retrieval Structures (CHREST) cognitive architecture allows a system to learn by representing knowledge as a network of interconnected nodes, where each node is a dynamic chunk (a template). This type of knowledge storage differs from that of other cognitive architectures, which use production rules and symbols to represent and store knowledge.

In our Model of Expertise described in Chapter 7, models are primary resources allowing the cog to recognize conditions and situations of interest. We envision a model to be a collection of CHREST-like templates. Tasks and problem-solving methods are also primary resources in our model, so a chunk in our model is a collection of models, tasks, and methods retrieved from memory.

Types of Knowledge and the Knowledge Level

Researchers have developed several ways to represent intelligence and intelligent agents. As shown in Fig. 5-1, Allen Newell recognized computer systems are described at many different levels and defined the Knowledge Level as a means to analyze intelligent agents at an abstract level (Newell, 1982).

The lower levels represent physical elements from which the system is constructed. In describing computer systems, the lower levels are based on electronic devices and circuits. The higher levels represent the logical elements of the system.

In general, a level "abstracts away" the details of the lower level. For example, consider a computer programmer writing a line of code storing a value into a variable. The programmer is operating at the Program/Symbol Level and never thinks about how the value is stored in registers and ultimately is physically realized as voltage potentials in electronic circuits. The details of the physical levels are abstracted away at the Program Level.

Fig. 5-1: Newell's knowledge level.

Likewise, at the Knowledge Level, the implementation details of how knowledge is represented in computer programs are abstracted away. This allows us to talk about knowledge in implementation-independent terms, thus facilitating generic analysis of intelligent agents. An expert is a kind of intelligent agent, so the Knowledge Level can be used to describe experts and expertise as well.

In addition to the knowledge required of experts by Simon and earlier researchers, Steels (1990) identified the following as needed by experts: deep domain knowledge, problem-solving methods, and task models. Problem-solving methods are how one goes about solving a problem. There are generic problem-solving methods applicable to almost every domain of discourse, such as "how to average a list of numbers." However, there are also domain-specific problem-solving methods applicable only to one domain or a very small collection of domains.
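
The generic method named above, "how to average a list of numbers," is simple enough to write out directly, which highlights what makes it generic: nothing in it refers to any particular domain.

```python
# A generic, domain-independent problem-solving method: averaging numbers.
# It works the same whether the numbers are test scores, pipe diameters,
# or chess ratings.
def average(numbers):
    return sum(numbers) / len(numbers)

print(average([2, 4, 6]))  # 4.0
```

A domain-specific method, by contrast, would be shot through with domain knowledge (piece values in chess, pipe-fitting tolerances in plumbing) and would transfer poorly outside its domain.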

A task model is knowledge about how to do something. For example, how to remove a faucet is a task an expert plumber would know. As with problem solutions, there are generic tasks and domain-specific tasks. Steels' deep domain knowledge represents the "knowledge" in the traditional definition of expertise and the problem-solving and task models represent the "skills" in the traditional definition of expertise. Steels separates problem-solving as a different kind of skill from performing a task.

Steels (1990) also proposed a Knowledge-Use Level between Newell's Knowledge Level and the Program Level, as shown in Fig. 5-2, to address issues like task decomposition, execution, scheduling, software architecture, and data structure design.

Fig. 5-2: Steels' knowledge-use level.

Whereas the Knowledge Level is implementation independent, the Knowledge-Use Level is geared toward implementation and is quite dependent on implementation details. Steels felt this level was necessary to bridge the gap between the Knowledge Level and the Program/Symbol Level. Later, we introduce a new level above the Knowledge Level called the Expertise Level.

Bloom’s Taxonomy

Human cognition has been studied for decades and different levels of cognition have been defined. Bloom's Taxonomy is a famous hierarchy from the field of educational pedagogy relating different levels of cognition as shown in Fig. 5-3. Originally published in the 1950s, and revised in the 1990s, Bloom's Taxonomy was developed as a common language about learning and educational goals to aid in the development of course materials and assessments (Bloom et al., 1956; Anderson et al., 2001).

Bloom's Taxonomy addresses the question of demonstrating mastery of a subject. In the education field, instructors design assessments requiring students to demonstrate the skills at each level of the taxonomy. For example, some exam questions may simply be information-retrieval questions corresponding to the remember level. Other exam questions and assignments correspond to each of the higher levels in the taxonomy. A student able to exhibit competency at all levels of Bloom's Taxonomy has demonstrated mastery of the subject matter. It occurs to us that demonstrating mastery of a subject is a qualification, and indeed a definition, of an expert.

Fig. 5-3: Bloom's taxonomy.

Each level in Bloom's Taxonomy represents a cognitive process, with the amount of effort (cognitive work) required increasing dramatically as one goes from remember to create. The processes are listed in order from the simplest (remember) to the most complex (create). A system, artificial or natural, performing any of these processes is performing some degree of cognition. Currently, our computers and electronic devices perform only the lowest-level processes. In the coming cog era, cogs will come to perform more of, and eventually all of, the processes at every level. Until an artificial system can perform at every level, synthetic expertise will be achieved by a human/cog ensemble which, together, performs at every level of Bloom's Taxonomy and thereby achieves expertise. Since some of the cognitive processing is done by an artificial entity, the cog, we say the ensemble is achieving synthetic expertise.
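
The ordering of cognitive processes from remember to create can be captured directly in code. The six level names below follow the revised taxonomy (Anderson et al., 2001); the comparison helper is our own illustrative addition:

```python
# The six levels of the revised Bloom's Taxonomy, ordered by increasing
# cognitive work, from remember (simplest) to create (most complex).
BLOOM_LEVELS = ["remember", "understand", "apply", "analyze", "evaluate", "create"]

def more_complex(a, b):
    """True if cognitive process a requires more cognitive work than b."""
    return BLOOM_LEVELS.index(a) > BLOOM_LEVELS.index(b)

print(more_complex("create", "remember"))  # True
```

In these terms, today's devices operate near the front of `BLOOM_LEVELS`, and a human/cog ensemble achieves synthetic expertise when, between them, every level of the list is covered.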
