
What Lies Behind the User Illusion?

To recap, Dennett is trying to account for the intentional behavior of conscious human beings. We are looking for the information-processing software architecture that could explain why conscious human beings behave the way they do. Dennett does not think we will succeed unless we break the mind down and reverse-engineer it into parts, and he is uncompromising in his effort to give a distributed account of mental processes—an account that has no control center. This is how we are to think about the distributed processing of the brain at a software level: there is a profusion of processing threads running simultaneously throughout the brain, much like the processes in the memory space of your modern PC but on a larger parallel scale. Dennett’s model lends itself well to interpretation from a programmer’s point of view. Let us look, for example, at one programming notion called multithreading.

A modern computer is designed to run several programs at the same time. Those programs are, in turn, composed of miniprograms that also run simultaneously. Programmers call these miniprograms threads of execution. Some are important and assigned high priority, while others run at lower priorities. You don’t notice most of them, because they run behind the user interface, but some make themselves visible. When you are typing, there is a computational thread of execution that puts whatever you type on the screen. How does this work? When you type a character, a keyboard-handling thread of execution wakes up, takes control, and puts the character on the screen with the help of further threads. Perhaps you get an email and a sound notifies you of its arrival: another thread of execution wakes up and generates the sound as part of your email application’s user interface. Multithreaded programming is challenging because programmers must manage many threads that interrupt each other in complex ways, depending on events in a shared global workspace. Programmers use different metaphors to talk about multithreading. One is that of demons: each thread of execution is thought of as a demon process that lies dormant until something triggers it. Dennett thinks of the programming of the neurocomputer brain in this way.
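The demon metaphor can be made concrete with a minimal sketch in Python. The `Demon` class, the handlers, and the "keyboard" and "mail" names below are my own illustrative inventions, not anything from Dennett or an actual operating system: each demon is a thread that lies dormant, blocked on its queue, until an event triggers it.

```python
import queue
import threading

# A minimal sketch: each "demon" is a thread that lies dormant until
# an event arrives in its inbox, then wakes up and handles it.
class Demon(threading.Thread):
    def __init__(self, name, handler):
        super().__init__(daemon=True)
        self.name = name
        self.handler = handler       # what to do when triggered
        self.inbox = queue.Queue()   # events wake the demon up
        self.log = []                # record of what the demon did

    def run(self):
        while True:
            event = self.inbox.get()   # block (lie dormant) until triggered
            if event is None:          # sentinel: shut down
                break
            self.log.append(self.handler(event))

# Hypothetical demons standing in for the keyboard-handling and
# email-notification threads described above.
keyboard = Demon("keyboard", lambda ch: f"echo '{ch}' to screen")
mail = Demon("mail", lambda msg: f"play new-mail sound for {msg!r}")

for d in (keyboard, mail):
    d.start()

keyboard.inbox.put("q")      # the user types a character
mail.inbox.put("hello")      # an email arrives

for d in (keyboard, mail):
    d.inbox.put(None)        # wake each demon one last time to stop it
    d.join()

print(keyboard.log[0])  # -> echo 'q' to screen
print(mail.log[0])
```

Note that neither demon consumes any processing time while dormant; like Dennett's demons, each simply waits for its triggering event.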

Dennett adopts Oliver Selfridge’s 1958 demon model, “the first model of a competitive, nonhierarchical, computational architecture” (Dennett 2005, p. 135), to explain our brain’s software architecture. Let us look at a simple example of how visual character recognition might work to illustrate Selfridge’s model. How can we discriminate one character from another when we are reading text? We don’t think about it when we read. It is only if you learn a language with foreign characters that you must consciously attempt to recognize them. We have processes that yield preinterpreted characters.[1] What could they be like? In Selfridge’s view, the recognition process is distributed and realized by many demons that each process information autonomously. The following is a rough characterization of how the letter A could be recognized by several demon processes: threads of computation that lie dormant until triggered by some event.

In the diagram, each demon is a box.[2] Demons are connected through input and output channels, represented by lines. Only a few demons in the network are shown. Each demon is highly specialized. Some are data demons: they take sense data and communicate with computational demons, which look for simple geometric figures, such as lines oriented in different directions. Each computational demon looks for a pattern and “shouts” an acknowledgment when one is found. Cognitive demons listen to computational demons and look for specific character patterns. In the diagram, an “A” demon looks for two tilted lines and a horizontal line by listening to three computational demons that each look for one of those lines. The thick lines in the drawing indicate that these computational demons are shouting to the “A” demon that they have found their target features. The “A” demon then shouts to the decision demon at the top that it has found an “A.” The “E” demon shouts too, but not as loudly, since it has only a partial match for an “E.” The “O” demon remains quiet, since it hasn’t found anything of interest. So the decision demon shouts that it has found an “A,” and we can imagine further brain demons, listening to it in turn, trying to construct words.
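The competition described above can be sketched in a few lines of Python. The feature names and the shouting rule (fraction of matched features) are my own simplifications, not Selfridge's actual wiring: computational demons report the line features they found, cognitive demons shout in proportion to how many of their features matched, and the decision demon picks the loudest letter.

```python
# Toy sketch of Selfridge-style pandemonium with invented features/weights.
# Features the computational demons report from the sense data for an "A":
found_features = {"left_tilt", "right_tilt", "horizontal"}

# Each cognitive demon listens for the features its letter needs.
cognitive_demons = {
    "A": {"left_tilt", "right_tilt", "horizontal"},  # full match
    "E": {"horizontal", "vertical"},                 # partial match: shouts less
    "O": {"curve"},                                  # no match: stays quiet
}

def shout(letter_features):
    """How loudly a cognitive demon shouts: its fraction of matched features."""
    return len(letter_features & found_features) / len(letter_features)

shouts = {letter: shout(feats) for letter, feats in cognitive_demons.items()}

# The decision demon simply goes with the loudest shout.
winner = max(shouts, key=shouts.get)
print(winner, shouts)   # "A" wins: it shouts at full strength
```

Nothing in this network has an overview of the whole letter; the "recognition" is just the loudest voice in a crowd of specialists, which is exactly the nonhierarchical character Dennett prizes in the model.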

Dennett is attracted to the distributed nature of this so-called pandemonium model, not just for perception but for mental processing in general. How can we understand the conscious mind on the basis of the pandemonium model? Dennett favors a homunculus interpretation of the mind. There is no thinking Cartesian thing or isolatable mind entity in the brain. Think of the mind along the lines of homunculi, successively decomposable into more stupid homunculi until we reach rock-bottom neural computations. While the mind appears complex, it derives from simple operations.

This is the model of homuncular functionalism—your self and the thinking you do are the work of successively decomposable homunculi.
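A whimsical analogy in code may help (the example is mine, not Dennett's): a "smart" homunculus can be built entirely from stupider ones, bottoming out in a rock-bottom primitive that knows almost nothing.

```python
# Homuncular decomposition, sketched as nested functions: each level is
# built from a stupider one below it.

def increment(n):
    """Rock-bottom primitive: knows only how to add one."""
    return n + 1

def add(a, b):
    """A stupid homunculus: can only count upward, one step at a time."""
    for _ in range(b):
        a = increment(a)
    return a

def multiply(a, b):
    """A smarter homunculus, composed entirely of stupid adders."""
    total = 0
    for _ in range(b):
        total = add(total, a)
    return total

print(multiply(6, 7))   # -> 42: apparent "intelligence" from layered dumb parts
```

At no level is there a homunculus that "understands multiplication" the way a Cartesian mind entity would; the competence emerges from the stack of ever-dumber parts.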

Dennett offers his so-called multiple drafts model to explain the stream of consciousness. Think of consciousness as a process undergoing neurocomputational revision. We are often mistaken about what we see or hear. You might, for example, think you see an animal on the road as you are driving, but it turns out to be an empty cardboard box. What happened when you first saw it as an animal and then as a cardboard box? The stream of consciousness results from competition between neurocomputational drafts. Many drafts are written, but only some are experienced—moreover, the revision process continues in the stream of consciousness. In our example, the draft of the animal was replaced with one of a cardboard box.
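The animal-versus-box episode can be sketched as draft competition. The scoring rule below is my own loose invention, not Dennett's: several interpretive drafts coexist, the currently best-supported one is "experienced," and incoming evidence keeps revising the competition.

```python
# A loose sketch of the multiple-drafts idea with invented evidence scores.
drafts = {"animal on road": 0.0, "cardboard box": 0.0}

def experienced():
    """The draft you experience: whichever is currently best supported."""
    return max(drafts, key=drafts.get)

# Early, low-resolution evidence favors the animal draft...
drafts["animal on road"] += 0.6
drafts["cardboard box"] += 0.3
first_take = experienced()

# ...but as you drive closer, new evidence revises the competition.
drafts["cardboard box"] += 0.9
second_take = experienced()

print(first_take, "->", second_take)   # animal on road -> cardboard box
```

On this picture there was no moment when "the" percept changed in a central theater; one draft simply lost the competition to another as evidence accumulated.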

As we have seen, on Dennett’s account there are many computational threads (mental drafts, demons, or homunculi) running in your brain, doing all kinds of information processing, but not all of them become conscious information states. As we saw in the pandemonium model, there is competition in the brain, and this is emphasized in what Dennett calls the fame-in-the-brain model. On this model, to become conscious, threads must win out over other threads and gain influence in the brain. A thread might, for example, make you say something, move your arm, or focus your attention. For Dennett, the mind is a multithreaded, complex processing system that controls your body. Consciousness is about achieving effects from within this system by gaining access to and influence over cognition and behavior. Dennett thinks of conscious states as having political clout.

Dennett warns us against thinking there are two steps involved: first an information state becomes globally accessible, and then this causes consciousness to happen. But it is the very global accessibility, the executive power, and the fame that comes with it that explain consciousness. In this view, consciousness is a functionalist notion.[3]

  • [1] Dennett (1991, p. 189). Oliver Selfridge (1926-2008) was a pioneering AI researcher.
  • [2] The diagram is adapted from Selfridge (1959).
  • [3] See “A Fantasy Echo Theory of Consciousness” in Dennett (2005, p. 159), where Dennett elaborates on the functionalist character of consciousness.