
Consciousness as Recursive Vector Transformations

The neural network we looked at earlier is simple. Information comes in at the input layer, is transformed in the hidden layer, and goes out at the output layer. Because information flows forward, it is called a feed-forward network. But there are also networks where information flows back: recursive networks. These networks are recursive because the output becomes part of the input. Paul believes that recursivity is key for understanding consciousness, and he gives an example.[1]
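To fix ideas, here is a minimal sketch in Python of the contrast. It is not Churchland's own model: the layer sizes, weight matrices, and the tanh squashing are invented purely for illustration. It shows a plain feed-forward pass and a recursive variant in which the network's previous output vector is fed back into the hidden layer alongside the new input.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented layer sizes and weights, purely for illustration.
n_in, n_hid, n_out = 3, 4, 3
W_in = rng.normal(size=(n_hid, n_in))     # input layer  -> hidden layer
W_out = rng.normal(size=(n_out, n_hid))   # hidden layer -> output layer
W_back = rng.normal(size=(n_hid, n_out))  # output layer -> hidden layer (the recursive pathway)

def feed_forward(x):
    """Information flows one way only: input -> hidden -> output."""
    hidden = np.tanh(W_in @ x)
    return np.tanh(W_out @ hidden)

def recursive_step(x, prev_out):
    """The previous output vector is fed back and becomes part of the hidden layer's input."""
    hidden = np.tanh(W_in @ x + W_back @ prev_out)
    return np.tanh(W_out @ hidden)

# One pass of each: the recursive network's output depends on what it produced before.
x = np.array([1.0, 0.0, -1.0])
print(feed_forward(x))
print(recursive_step(x, prev_out=feed_forward(x)))
```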

In the example network, nodes n2, n3, and n4 in the output layer have pathways that feed back to the hidden layer. Why are recursive networks so interesting? Paul lists seven features of consciousness that they can account for:

  • 1. Consciousness involves short-term memory.
  • 2. Consciousness does not require concurrent sensory input.
  • 3. Consciousness involves directable attention.
  • 4. Consciousness can place different interpretations on sensory input.
  • 5. Consciousness disappears in dreamless sleep.
  • 6. Consciousness reappears, in a somewhat different form, in dreaming.
  • 7. Consciousness brings our sensory modalities together in unified experience.

Let us go through these points. (1) Consciousness involves short-term memory. According to Paul, recursive networks can explain short-term memory because they can account for processing over time. We get no specific example, but we can think of the middle layer of the network as short-term memory: because information is sent back and remains in a loop, the activation vector is partly retained in the middle layer from one moment to the next; memory is a recursive vector activation pattern. (2) Consciousness does not require concurrent sensory input. The idea is that a recursive network can engage in autonomous cognition, without sensory input, because it feeds on itself. (3) Consciousness involves directable attention. Recursion enables the network to modulate its own processing, tweaking how incoming activation vectors are handled, and this steering is what directable attention amounts to. (4) Consciousness can place different interpretations on the same sensory input. This feature is not fully articulated, but perhaps the idea is that an interpretation is a kind of modulation of the input, so that, over time, the same input can yield different interpretations. (5) Consciousness disappears in dreamless sleep. Paul suggests that recursion is disabled, leaving an unconscious feed-forward network. (6) Consciousness reappears, in a somewhat different form, in dreaming. Dreaming is to be explained along the lines of (2) above: the input layer is disabled, so the network runs on its own recursively fed-back activity, involving our memories and weaving neural dream patterns. In some cases, motor connections are also disabled, as in rapid eye movement (REM) sleep. (7) Consciousness brings diverse sensory modalities together in a single, unified experience. The recursive network has connections to diverse sensory inputs, and this convergence, along with the recursive pathways, explains the unity of consciousness.
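To make points (1) and (2) slightly more concrete, here is a hedged toy sketch in the same spirit as the earlier one (again, an illustration, not Churchland's model; all names and values are invented). Once the recursive pathway is in place, the network's activity at each step partly retains a trace of earlier steps, and it keeps producing structured activity even after the sensory input is switched off.

```python
import numpy as np

# A toy recurrent loop (illustrative only; sizes, weights, and update rule are invented).
rng = np.random.default_rng(1)
n_in, n_hid, n_out = 3, 4, 3
W_in = rng.normal(size=(n_hid, n_in))
W_out = rng.normal(size=(n_out, n_hid))
W_back = rng.normal(size=(n_hid, n_out))  # output fed back into the hidden layer

def step(x, prev_out):
    hidden = np.tanh(W_in @ x + W_back @ prev_out)
    return np.tanh(W_out @ hidden)

# Three steps with a "sensory" input present, then five steps with the input silenced.
inputs = [np.array([1.0, 0.0, -1.0])] * 3 + [np.zeros(n_in)] * 5

out = np.zeros(n_out)
for t, x in enumerate(inputs):
    out = step(x, prev_out=out)
    # Even when x is all zeros, the hidden layer is still driven by the previous
    # output vector (W_back @ prev_out), so structured activity persists in the
    # loop: a toy analogue of short-term retention and input-free processing.
    print(t, np.round(out, 3))
```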

If we forget about vector computations for a while, we can see that our conscious thought processes, as they appear to us, often seem to be recursive in character. Thinking about what to eat at a restaurant, we might ponder whether to have a salad as a starter, then go on to think about the main course. Perhaps we decide on the trout, and now we come to the question of wine: well, white wine goes best with fish. What about the dessert? We will have a light dinner, so how about chocolate cake? So we have started with a decision to have salad and gone on to build our dinner in a series of steps, each taking into account the steps before. Paul suggests that it is a feature of conscious processes that they build on what has happened before. But the language we use to describe what we do and how we think is not the native language of the brain. There are no sentences about trout or wine in our brains. Such sentences occur only in our folk-psychological communication.

In the brain, we find only computational processes. To see this, Paul asks us to ponder Leibniz’s idea that we could not account for consciousness in terms of brain parts or processes. We examined this idea of Leibniz earlier. If we could look into the brain, all we would see would be movements of mechanical parts, a kind of mill or clockwork; nowhere would consciousness be found. Paul believes that this intuition is wrong and constructs a thought experiment to show why. Suppose we lived in the time of the debate between naturalism and vitalism (the idea that life could be explained only in terms of some life-force). A vitalist might have argued that if we could make ourselves small and travel inside living organisms, we would find mechanical processes but not life. Today, we can argue that life is a mechanical process (cashed out in terms of macromolecules such as DNA and RNA). Why couldn’t the vitalist see this? For the same sort of reason that we are reluctant to say that consciousness is really nothing but vector processes in our brain. It was hard for vitalists to accept that life was nothing but cellular processes, because they lacked adequate conceptual understanding. Analogously, it is hard for us to understand that consciousness is nothing but vector processes in our brain, because we also lack adequate conceptual understanding. The idea of consciousness as a matter of recursive computational processes is supposed to be a conceptual aid for reaching a mechanical understanding of consciousness. But how could we explain subjectivity on the basis of such a third-person account?

One way to understand how Paul thinks of conscious experience is to compare his account with those of philosophers who have noted that third-person accounts of consciousness appear to leave out experience. Searle believes we can make a causal reduction of conscious experience but not an ontological one. According to Searle’s account, consciousness is a field caused by the brain. But we cannot say that experiences are simply neural tokens along the lines of materialist identity theory. Paul challenges this position by suggesting that experiences are vector activation patterns within recursive neural networks. If the theory of consciousness as recursive neural network transformations is right, then we might be able to build a silicon brain that is “as truly conscious as you and I are” (Churchland 1995, p. 244). It would be a machine whose consciousness and intelligence lay within vector coding and processing in artificial neural networks. It is true that such a machine would be an electronic machine and would not have the chemical properties of the brain. But we can recreate the relevant properties in the silicon brain. What matters to an account of consciousness is the form and function of the neurocomputational processes, not the specific implementation. This is a formal account, which merely pays lip service to neurobiology. So what matters in the end are computational architectures, not neurobiology. As such, it is unclear how this account gets us further than other forms of computational functionalism on the problem of consciousness.

  • [1] See the sections “The Contents and Character of Consciousness: Some First Steps” and “Reconstructing Consciousness in Neurocomputational Terms” in the chapter “The Puzzle of Consciousness” in Churchland (1995).