
The Human/Cog Ensemble

The primary goal of most artificial intelligence research has been to replicate human intelligence, with the aim of ultimately competing with or replacing humans. Indeed, much has been written recently about the fear that AI will take over and make humans obsolete. The cognitive augmentation era will be different. Instead of replacing humans, cognitive systems seek to act as partners with humans and work alongside them. John Kelly, Senior Vice President and Director of Research at IBM, describes the coming revolution in cognitive augmentation as follows (Kelly and Hamm, 2013):

"The goal isn't to ... replace human thinking with machine thinking. Rather ... humans and machines will collaborate to produce better results—each bringing their own superior skills to the partnership. The machines will be more rational and analytic and, of course, possess encyclopedic memories and tremendous computational abilities. People will provide judgment, intuition, empathy, a moral compass and human creativity."

Interestingly, over thirty years ago, Apple, Inc. envisioned an intelligent assistant called the Knowledge Navigator (Apple, 1987). The Knowledge Navigator was a vision of an artificial executive assistant capable of natural language understanding, independent knowledge gathering and processing, and high-level reasoning and task execution. Entirely fictional, the Knowledge Navigator concept was well ahead of its time, and many people derided Apple over the idea. However, we are beginning to see some of its features in current voice-controlled "digital assistants" such as Siri, Cortana, and Amazon Echo, and cognitive systems are now beginning to perform high-level cognition.

In 2014, IBM released a video demonstrating humans collaborating with an advanced version of IBM Watson technology (Gil, 2014). Some aspects of the video are strikingly similar to the Knowledge Navigator video of 1987, particularly the collaborative nature of the dialog. However, the Watson technology shown in the video was real. In the video, Watson communicates with the humans in natural spoken language. Watson also performs various manipulations of data and information, presenting the results to the humans. At one point, one of the humans asks Watson to form a conclusion and make a recommendation, which it does.

The collaborative nature of one or more humans working with artificial entities is key to the coming cognitive systems era as depicted in Fig. 3-2. In the spirit of Engelbart's framework, we draw the human and the artificial entity ("cog") as two components working together as a single system—an ensemble. The dashed line represents the border between the human/cog ensemble and the outside world. Data, information, knowledge, and wisdom flow from the outside into the ensemble, and transformed data, information, knowledge, and wisdom flow out of the ensemble.

Human/Cog collaboration

Fig. 3-2: Human/Cog collaboration.

The Cognitive Process

Viewing the human/cog ensemble as a transformer of information is an important concept. All cognitive systems, whether they are human or artificial, process and transform information. Information becomes more valuable as it is processed and combined with other pieces of information, and we assign these forms of information different names: data, information, knowledge, and wisdom. The DIKW hierarchy, shown in Fig. 3-3, is a well-respected idea from the knowledge management field representing

The DIKW hierarchy

Fig. 3-3: The DIKW hierarchy.

information as processed data, knowledge as processed information, and wisdom as processed knowledge (Ackoff, 1989).

A similar depiction of the knowledge hierarchy is seen in the National Security Agency (NSA) Reference Model shown in Fig. 3-4 (Hancock et al., 2019).

The NSA reference model

Fig. 3-4: The NSA reference model.

Because this depiction comes from the intelligence community, the NSA Reference Model labels the "Wisdom" level "Intelligence" and also depicts the signal level. Physical disturbances in the environment create detectable signals. Once perceived, these signals can be turned into data feeding the hierarchy.

In the knowledge hierarchy, each level is of higher value than the level below it because of the processing involved. Data is considered to be of the lowest value and the closest to the physical world (and therefore the least abstract of the levels). Data is generated when physical phenomena are sensed. Information is the result of processing data. Processing information produces knowledge. Ultimately, knowledge is transformed into wisdom.

The transformation of data, information, knowledge, and wisdom is the essential aspect of a cognitive action called a cognitive process (Fulbright, 2017b; 2018). At an abstract level, we view a cognitive process as receiving data, information, knowledge, or wisdom as an input and producing transformed data, information, knowledge, or wisdom as an output as shown in Fig. 3-5.

The execution of every cognitive process requires the expenditure of a quantity of cognitive work, W, to transform the input, Sin, into the output, Sout.

A cognitive process as a transformation of information

Fig. 3-5: A cognitive process as a transformation of information.

Cognitive Work and the Augmentation Factor

A human/cog ensemble, shown in Fig. 3-2, is equivalent to the cognitive process shown in Fig. 3-5. The human/cog ensemble is an information-processing system performing an overall cognitive process by executing several internal cognitive processes. In a human/cog ensemble, the human performs some of the cognitive processing and expends WH amount of cognitive work. The cog performs some of the cognitive processing, expending WC. Together, the ensemble expends a total amount of cognitive work, W. The total cognitive work performed by the ensemble is at least equal to the sum of the cognitive work done by each component:

W ≥ WH + WC

There is speculation, and reason to believe, that W could actually exceed the sum due to emergent properties arising out of the human/cog interaction. However, this has not been proven and is the subject of future work.

Given that we can calculate the individual cognitive contributions of the human and the cog, comparing the two efforts yields a metric called the augmentation factor, A*:

A* = WC / WH

Humans working alone without the aid of artificial entities are not augmented at all and have an A* = 0/WH = 0. If humans are performing more cognitive work than artificial entities, A* < 1. This is the world in which we have been living so far. However, when cogs start performing more cognitive work than humans, A* > 1 with no upward bound. That is the coming cognitive systems era.
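The three regimes above can be sketched in a few lines of code. This is a minimal illustration, not part of the source; the function name `augmentation_factor` is a hypothetical helper:

```python
def augmentation_factor(w_human: float, w_cog: float) -> float:
    """A* = WC / WH: ratio of the cog's cognitive work to the human's."""
    if w_human == 0:
        # All cognitive work is done by the cog: A* is unbounded above
        return float("inf")
    return w_cog / w_human

# Human working alone: the cog performs no work, so A* = 0
assert augmentation_factor(10.0, 0.0) == 0.0
# Human doing most of the work: A* < 1 (the world so far)
assert augmentation_factor(10.0, 5.0) < 1
# Cog doing most of the work: A* > 1 (the coming cognitive systems era)
assert augmentation_factor(10.0, 25.0) == 2.5
```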

Cognitive Power

In physics and engineering, it is common practice to measure the amount of work performed over the amount of time it takes to perform it. Therefore, cognitive power, P, is the execution of cognitive work, W, over a period of time, t:

P = W / t

The cognitive power of the human/cog ensemble is the combination of the cognitive power contributions of the human and the cog:

P = PH + PC

The goal of human cognitive augmentation is for the human to be able to perform a larger amount of cognitive work, and thereby exhibit a greater amount of cognitive power, by collaborating with one or more cogs as shown in Fig. 3-6.
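The two relations above can be combined in a short numeric sketch. The values and the helper name `cognitive_power` are illustrative assumptions, not from the source:

```python
def cognitive_power(work: float, seconds: float) -> float:
    """P = W / t: cognitive work performed per unit time."""
    return work / seconds

# Hypothetical contributions over the same 3-second interval
p_human = cognitive_power(work=6.0, seconds=3.0)   # PH = 2.0
p_cog = cognitive_power(work=30.0, seconds=3.0)    # PC = 10.0

# Ensemble power is the sum of the human and cog contributions
p_ensemble = p_human + p_cog
assert p_ensemble == 12.0  # far greater than the human's 2.0 alone
```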

Human cognitive augmentation measured with cognitive work and cognitive power

Fig. 3-6: Human cognitive augmentation measured with cognitive work and cognitive power.


Cognitive Accuracy and Cognitive Precision

Another way to measure the effect of cognitive processing done by the human/cog ensemble is to measure the effect on cognitive accuracy and cognitive precision (Fulbright, 2019).

To define the notions of cognitive accuracy and cognitive precision, we first model the human/cog ensemble as a general information machine (GIM) (Fulbright, 2002). Formally, a GIM is a stochastic Turing machine accepting information as an input and producing information as an output. In a traditional Turing machine, rules specify symbol transitions dictating a deterministic transformation of an input to a specific output. However, the rules in a GIM are stochastic in nature. Each transition rule is associated with a probability rather than being a deterministic certainty. Therefore, for a given input, the GIM's output may vary with each run. Over a number of runs, given the same input, a set of outputs (C) is created. The pattern of outputs in C is determined by the randomness of the probabilities within the GIM, denoted as X, as shown in Fig. 3-7.

If the GIM is truly random, the outputs in C are evenly distributed, with the average probability of each output, c, being 1/|C|, where |C| is the cardinality of C, or simply the number of different outputs. If the GIM is truly deterministic (such as that of a deterministic Turing machine), one and only one output will be generated 100% of the time. Of course, the probability of that output is 100%, and the probability of any other possible output in C is zero.
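The two extreme cases can be simulated by sampling from a probabilistic rule table. This is a minimal sketch of the idea, assuming the GIM's stochastic rules can be summarized as an output-probability mapping; `run_gim` is a hypothetical helper:

```python
import random
from collections import Counter

def run_gim(rules: dict, trials: int, seed: int = 0) -> Counter:
    """Sample outputs from a stochastic rule table mapping output -> probability."""
    rng = random.Random(seed)
    outputs = list(rules)
    weights = list(rules.values())
    return Counter(rng.choices(outputs, weights=weights, k=trials))

# Truly random GIM: each of the |C| = 4 outputs is equally likely (p = 1/|C|)
uniform = run_gim({"c1": 0.25, "c2": 0.25, "c3": 0.25, "c4": 0.25}, trials=10_000)

# Truly deterministic GIM: one and only one output, 100% of the time
deterministic = run_gim({"c1": 1.0}, trials=10_000)
assert deterministic["c1"] == 10_000
assert set(deterministic) == {"c1"}
```

An intermediate rule table (e.g. `{"c1": 0.7, "c2": 0.2, "c3": 0.1}`) produces the clustered patterns discussed next.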

If, however, the randomness of the GIM is an intermediate value, a pattern of outputs will emerge. Some of these outputs will be very similar to other outputs and can be grouped together into subsets of C. The

Cognitive processing modeled as a stochastic process

Fig. 3-7: Cognitive processing modeled as a stochastic process.

Precision and accuracy

Fig. 3-8: Precision and accuracy.

probability of a subset can be calculated by comparing the cardinality of the subset with the cardinality of C as shown in Fig. 3-7.

The distribution pattern of C is critical. If we choose one of the subsets in C as the preferred or desired output, we can characterize any distribution pattern based on the ideas of accuracy and precision as shown in Fig. 3-8. Cognitive accuracy involves the propensity to produce the preferred output. Cognitive precision involves the propensity to produce only the preferred output.

The goal, of course, is for every output to fall within the preferred subset (upper right quadrant). This represents high accuracy and high precision. It is possible for outputs to be very similar to each other (forming a tight cluster) but not falling within the preferred subset (lower right quadrant). This represents high precision but low accuracy. Outputs centered on the preferred subset but not tightly clustered (upper left quadrant) represents high accuracy but low precision. Outputs with low accuracy and low precision (lower left quadrant) have only accidental relationship to the preferred subset.

Since the outputs in the model are the result of cognitive processing, we call these two measures cognitive accuracy (CA) and cognitive precision (CP). The result of any cognitive process can be either the desired result (or close to it) or an undesired result. We define cognitive accuracy (CA) as the propensity to produce the desired result. We define cognitive precision (CP) as the propensity to produce nothing other than the desired result.

Note, these are not necessarily equivalent to "correct" and "incorrect" results. Often, the result of cognitive processing cannot be labeled as correct or incorrect. For example, asking a person what things in life are important to them will generate a number of answers. It is not possible to determine if one of those answers is correct and the rest incorrect. However, we can identify a particular answer as being the one we desire. Once we have chosen the target, we can calculate accuracy and precision of any set of answers relative to the target.
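These two measures can be operationalized over a set of observed outputs. The text defines CA and CP only qualitatively, so the following is one plausible sketch, not the author's formal definition: accuracy as the fraction of outputs in the preferred subset, precision as the probability of the most common output:

```python
from collections import Counter

def cognitive_accuracy(outputs: list, preferred: set) -> float:
    """CA sketch: fraction of outputs falling in the preferred subset
    (propensity to produce the desired result)."""
    return sum(o in preferred for o in outputs) / len(outputs)

def cognitive_precision(outputs: list) -> float:
    """CP sketch: probability of the dominant output, regardless of whether
    it is the preferred one (propensity to produce only one result)."""
    return Counter(outputs).most_common(1)[0][1] / len(outputs)

runs = ["c1", "c1", "c1", "c2"]  # four runs of the same cognitive process
assert cognitive_accuracy(runs, {"c1"}) == 0.75  # 3 of 4 in preferred subset
assert cognitive_precision(runs) == 0.75         # dominant output 3 of 4

# High precision but low accuracy: tightly clustered on the wrong answer
off_target = ["c2", "c2", "c2", "c2"]
assert cognitive_accuracy(off_target, {"c1"}) == 0.0
assert cognitive_precision(off_target) == 1.0
```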

The goal of human cognitive augmentation is to increase the cognitive accuracy and cognitive precision of the human by collaborating with one or more cogs as shown in Fig. 3-9.

Human cognitive augmentation measured with cognitive accuracy and cognitive precision

Fig. 3-9: Human cognitive augmentation measured with cognitive accuracy and cognitive precision.


Levels of Cognitive Augmentation

We think it will be many years before fully artificial intelligences become available to the mass market. In the meantime, there will be human/cog ensembles with varying amounts of cognitive augmentation. In some ways, the evolution of cogs is similar to the evolution of self-driving automobile technology. The National Highway Traffic Safety Administration (NHTSA) has defined six levels of automation (NHTSA, 2019). These levels go from Level 0 (no automation) to Level 5 (full automation). Intermediate levels include Level 1, where the driver is assisted by a single automated system, Level 2, where the vehicle controls some functions but the human can intervene at any time, and Level 3, where the vehicle makes decisions and performs complex functions usually without human intervention but the human can intervene if needed.

Taking this as inspiration, we define the following Levels of Cognitive Augmentation ranging from no augmentation at all (all human thinking) to fully artificial intelligence (no human thinking):

Until now, the computers and software we have used have represented Level 1 cognitive augmentation—assistive tools. Recent advances in deep learning and unsupervised learning have produced

Levels of cognitive augmentation

Fig. 3-10: Levels of cognitive augmentation.

Level 2 cognitive augmentation. But as the abilities of cogs improve, we will see Level 3 and Level 4 cognitive augmentation. Later, we introduce the Model of Expertise and show how it can produce Level 3 and Level 4 cogs.


Technology has always changed humankind. As new technology is adopted, social, cultural, political, and ethical norms change. Technology augments human capability. Throughout history, most new technology has augmented humans in a physical sense. Information technology augments human cognitive ability. Until now, cognitive augmentation technology has only made humans better thinkers. Humans have still done all of the thinking. However, cognitive systems technology is producing artificial systems capable of performing some of the thinking on their own.

In the coming cognitive systems era, humans will collaborate with these systems. There will be at least two entities doing the thinking (humans will partner with more than one cognitive system). Therefore, cognition will be the product of a collaborative effort. The goal is for humans to be able to perform higher levels and higher amounts of cognition by virtue of this collaboration. We can measure the effect of cognitive augmentation by the amount of cognitive work performed, the amount of cognitive power expended, and increases in the cognitive accuracy and cognitive precision achieved.

The level of performance of the human/cog ensemble will be indistinguishable from, and will even exceed, the level of an expert. At that point, we will have achieved synthetic expertise. When synthetic expertise is adopted by the majority, cultural and societal changes will follow.
