
Biology of Consciousness?

Noe offers many metaphors for consciousness: something we do, something enacted—like dancing, and like money. We are to understand our conscious selves as being wide, being like economies, being like corporations, and being like information networks. While Noe’s metaphors are colorful, it is unclear how they support what Noe promises in his book—to explain the biological basis of the mind. The subtitle of Noe’s book is Why You Are Not Your Brain, and Other Lessons from the Biology of Consciousness. Noe notes, “Our goal is to understand the biological basis of the mind.” However, in a section titled “Mind Body Problem for Robots,” he discusses “mass-produced robots,” like the replicants in the movie Blade Runner, and there he spells out how there is no essential connection between what we are and what we are made of:

Granted, replicants lack biological innards; they are not composed of the same stuff as we are. But that’s just the point: there is no necessary connection between what we are and what we are made out of. It would be nothing but prejudice to insist that there is such a connection. (Noe 2009, p. 165)

Indeed—and this is the kicker—on the basis of introspection it is impossible for you to tell even of yourself whether or not you are a replicant. Deckard, a cop on the hunt for rebel replicants, refuses to acknowledge that the rebels are genuine conscious agents. . . . That Deckard may himself be a replicant who does not know that he is one drives the point home that what is at stake here is not some kind of biological essence. (Noe 2009, p. 35)

In Noe’s view, biology is inessential to who we are and to consciousness. We could be built from nonbiological materials, like the mass-produced replicants in Blade Runner. Perhaps we are—we could never tell, except by opening our skulls. Noe, then, like Hurley, is open to the possibility of artificial consciousness and of conscious robots:

I don’t rule out the possibility of artificial robot consciousness. But I would not be surprised if the only route to artificial consciousness is through artificial life. (Noe 2009, p. 45)

This thought is followed up when Noe speculates that we might build conscious robots with digital computer brains and muses over whether our brains are computers or not:

It remains an open empirical question whether we could build a conscious robot with a digital computer for a brain. And so it remains an open question whether our brains are, in some sense, computers. (Noe 2009, p. 166)

If there is nothing about our biology that is essential to consciousness, and if Noe is open to the possibility that robots with a digital computer for a brain could be conscious or indeed that our brains could be such computers, what are we to make of the goal of the book—“to understand the biological basis of mind”—and what are the “lessons from the biology of consciousness” that Noe refers to in the subtitle? If we look closer at Francisco Varela’s Principles of Biological Autonomy—the earliest work that Noe guides us to for understanding of the enactive perspective, and the book where Varela claims to have introduced it—we find an abstract systems view of life and the mind. Like Noe’s, it is a view that analyzes life in terms of organization rather than biophysical components:

It is our assumption that there is an organization that is common to all living systems, whichever the nature of their components. Since our subject is this organization, not the particular ways in which it may be realized, we shall not make distinctions between classes or types of living systems. (Varela 1979, p. 6)

Varela goes on to claim that his work falls within cybernetics and systems theory:

By adopting this philosophy, we are in fact just adopting the basic philosophy that animates cybernetics and systems theory, with the qualifications to these names that were discussed in the Preface. This is, I believe, nothing more and nothing less than the essence of a modern mechanicism. (Varela 1979, p. 7)

Then he rehashes the point that living systems are to be understood not in terms of physical matter but in terms of organization:

We are emphasizing that a living system is defined by its organization, and hence that it can be explained as any organization is explained, that is, in terms of relations, not of component properties. (Varela 1979, p. 7)

Varela’s project in Principles of Biological Autonomy was to go beyond modern biology, using cybernetics and systems theory—essentially distributed computational or functional accounts.

The best way to understand Noe’s account, then, is as a variety of functionalism—one extended to the environment. In an article that discusses what makes us conscious, he declares—as a fact—that neurophysiology is a functional notion:

Neurophysiology is a functional, not a physical notion. (Noe 2007, p. 461)

In a footnote, Noe gives—as a validity check on his claim—the work of Hubel and Wiesel on vision, and he describes it as being about solving computational tasks—an information-processing depiction of their work on vision:

This is clear if we look at the Nobel Prize-winning work of Hubel and Wiesel (collected in Hubel and Wiesel 2005). Hubel and Wiesel described the function of cells but they did so crucially by viewing the cells as contributing, in effect, to the performance of a computational task. (Noe 2007, p. 473)

As Noe himself notes in Out of Our Heads:

Hubel and Wiesel were awarded the Nobel Prize “for their discoveries concerning information processing in the visual system.” (Noe 2009, p. 156)

It is puzzling why Noe picks a computational information-processing account as definitional of work in neurophysiology. While a computational information-processing account of the brain is common, it is hardly defining of neurophysiology. Nor was it defining of the work of Ramon y Cajal—perhaps the best neurophysiologist ever and arguably the most important pioneer in the field. His passion for neurophysiology was not about a nonphysical information-processing subject. Cajal was interested in delineating the intricate structure—the fine histology of the brain—and how neurochemistry could somehow explain mental life. He spent countless hours examining samples of brain tissue. The work was about physical properties. He would not have been able to study neurophysiology without tissue samples, and the same is true for researchers in neurophysiology today. When Eric Kandel was awarded the Nobel Prize for his research on the neural basis of memory, it was because it represented such good work on neurochemistry and actual neurophysiology. This is how he characterizes neuroscience and his own biological approach:

An ultimate aim of neuroscience is to provide an intellectually satisfying set of explanations, in molecular terms, of normal mentation, perception, motor coordination, feeling, thought, and memory. (Kandel 2005, p. 193)

Kandel views the brain as, in some sense, performing computations, but he also stresses that the way it computes is not anything we have the slightest grip on from a computer science perspective:

When you sit at a sidewalk cafe and watch people go by, you can, with minimal clues, readily distinguish men from women, friends from strangers. Perceiving and recognizing objects and people seem effortless. However, computer scientists have learned from constructing intelligent machines that these perceptual discriminations require computations that no computer can begin to approach. (Kandel 2006, p. 297)

The way to understand mind and consciousness is, for him, by looking at the biology of the brain:

The new biology of mind . . . suggests that not only the body, but also mind and the specific molecules that underlie our highest mental processes—consciousness of self and of others, consciousness of the past and the future—have evolved from our animal ancestors. Furthermore, the new biology posits that consciousness is a biological process that will eventually be explained in terms of molecular signaling pathways used by interacting populations of nerve cells. (Kandel 2006, p. 8)

To say that neurophysiology is not a physical notion is to say that what is physical is not physical. We can easily conceive of neurophysiological work without functionalist analysis, but we cannot conceive of neurophysiology without physiology. It is an analytic truth that neurophysiology is a physical notion, since physiology is a physical notion. Why would Noe want to deny this? But what is more puzzling is that the same Hubel and Wiesel he uses as a validity check for his claim that neurophysiology is a functional and not a physical notion are criticized by him for having a nonbiological engineering conception:

They took for granted that vision was a process of analysis of information. It is remarkable that their landmark investigations into the biology of vision take as their starting point a startlingly non-biological engineering conception of what seeing is. (Noe 2009, p. 157)

How, then, can Hubel and Wiesel’s work be such a good example of what neurophysiology is if it has nothing to do with biology? Noe tells us that neurophysiology is not a physical notion and uses Hubel and Wiesel’s work to support that claim, yet he later rejects that same work. Indeed, he devotes a chapter of his book to rejecting it:

In this chapter I tell the story of Hubel and Wiesel’s Nobel Prize-winning research into vision in mammals. The work rests, I show, on an untenable conception of vision and other mental powers as computational processes taking place in the brain. (Noe 2009, p. 149)

What are we to make of Noe’s suggestion that neurophysiology is not a physical notion, of his claims that a robot with a digital computer for a brain could well turn out to be conscious and that our brains could be computers, and of his musings about conscious artificial replicants? How is Noe’s approach more biological than Hubel and Wiesel’s? Suppose, for example, that we take the musings on replicants and robots with digital computer brains seriously. We write up a proposal and get funding to build a conscious replicant. We go ahead and build a robot with a digital computer brain that is a replicant of an ordinary human being; let us call him Bob. Moreover, replicant Bob is so good that Noe cannot tell which of the two is the replicant. But suppose replicant Bob is not conscious but merely behaves as if he were. What fact about replicant Bob, including his environmentally situated robot body, could prove to Noe that he was not conscious? In Noe’s view, there cannot be such a fact, because neurophysiology is defined functionally and consciousness does not depend on any particular physical matter. All Noe has to go on is behavior. The following expression of his alignment with Dennett and his rejection of Searle elucidates his position:

I agree with Dennett that it is an open question whether computers or robots can one day become conscious . . . Searle seems to think that when it comes to us, we need look no further than the brain for an understanding of the ground of our consciousness. But that’s a mistake—one Dennett warns us against—and it reveals a mistaken assumption in his criticism of the possibility of computational minds. Information processing in the brain does not a mind make, but that’s because nothing in the brain makes the mind. The great insight of AI is that we are, in a way, on a par with machines. If a robot had a mind, it would not be thanks to what is taking place inside it alone (thought of computationally or otherwise). It would be thanks to its dynamic relation to the world around it. But that’s exactly the case for us as well. (Noe 2009, p. 202)

Whether a robot with a digital brain is conscious or not is a question of behavior—of its dynamic relations to the world. Going back to our thought experiment: replicant Bob functions and behaves, with his dynamic relations, on a par with ordinary Bob, but why should it follow that he is conscious?

We get no real biological account of the mind and consciousness. Sometimes we get behaviorism, as when Noe explores the minds of bacteria. A mind, on this view, is nothing other than what we deem, on the basis of behavior, to have a mind. Minds are behaviorally understood—if bacteria behave as if they want sugar or light or whatever, then they really want sugar, and so on. At other times, we get functionalism, as when Noe defines neurophysiology as a nonphysical, purely functional notion. But most of the time, we get a little bit of both: whatever goes on in the brain consists of functional processes, since he sees neurophysiology as a functional notion, but what determines consciousness is what we do as environmentally situated—a dance, and so on. Some of Noe’s statements—e.g., that bacteria have minds—are beyond scientific test. What would count as evidence that my breakfast yogurt did not contain billions of conscious acidophilus minds? As long as we play the game of radical behaviorism, it is hard to see what would count as evidence, because minds, in this game, are defined behaviorally. The situation with Noe’s externalist position on human consciousness is similar. What evidence could prove that consciousness is not something we do? For Noe, it is not sufficient evidence that there have been cases of people who have lived through years of total paralysis while remaining fully conscious. Noe reports:

But there are also known cases of total locked-in syndrome . . . Sadly, it is almost certain that until recently all patients with locked-in syndrome have been mistakenly supposed to be mere vegetables, lacking all sentience, and have probably been allowed to endure slow and painful deaths by starvation. (Noe 2009, p. 16)

These patients, who have been mistaken for “vegetables,” are paralyzed to the extent that they cannot even move their eyes. One would think they would be a counterexample to Noe’s externalist position. Neurologist Giulio Tononi characterizes locked-in syndrome in the following way in relation to consciousness:

In neurological practice, as well as in everyday life, we tend to associate consciousness with the presence of a diverse behavioral repertoire. For example, if we ask a lot of different questions and for each of them we obtain an appropriate answer, we generally infer that a person is conscious. Such a criterion is not unreasonable in terms of information integration, given that a wide behavioral repertoire is usually indicative of a large repertoire of internal states that is available to an integrated system. However, it appears that neural activity in motor pathways, which is necessary to bring about such diverse behavioral responses, does not in itself contribute to consciousness. For example, patients with the locked-in syndrome, who are completely paralyzed except for the ability to gaze upward, are fully conscious . . . Even lesions of central motor areas do not impair consciousness. (Tononi 2005, p. 118)

But strangely, Noe turns the tables, discussing total locked-in syndrome instead as a demonstration of how lost neuroscientists are, since they cannot readily diagnose it. Total locked-in syndrome is thus not treated as evidence against the position from which he declares:

It is now clear, as it has not been before, that consciousness, like a work of improvisational music, is achieved in action, by us, thanks to our situation in and access to a world we know around us. We are in the world and of it. We are home sweet home. (Noe 2009, p. 186)

But if total locked-in syndrome, along with consciousness sustained despite lesions of central motor areas, is not evidence against Noe’s position, then what would Noe accept as evidence against it? The only thing left seems to be an actual brain-in-a-vat experiment. In sum, we end up with an account of consciousness based on functionalism and behaviorism, and, as with Dennett, it is just as unclear how this combination could explain consciousness.
