
Computational Neuroscience

This field was born in the mid-1970s with a commitment to explaining cognitive and perceptual brain processes. Vision became an important topic, largely because of the work of David Courtnay Marr (1945-1980), a psychologist and theoretical neuroscientist at the MIT AI lab. Marr’s approach to computational neuroscience has three components (Marr 1982, pp. 24-25): the computing task (what is to be computed); the algorithm (the steps undertaken to solve the task); and the implementation (hardware—computers, brains, individual cells, and so on).

Think of the information processing involved in a cash register. Here we have a computing task (to sum up costs), an algorithmic component (the step-by-step calculations), and an implementation component (the electronics).
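The three-level distinction can be made concrete with a minimal sketch (hypothetical Python, not from Marr): the computing task stays fixed, while the algorithm shown here is only one of many that would solve it, and the implementation is whatever runs the code.

```python
# Marr's three levels, illustrated with the cash-register example.
# Computing task:  given a list of item prices, compute the total cost.
# Algorithm:       one possible choice - accumulate a running sum, item by item.
# Implementation:  here, the Python interpreter; in a register, the electronics.

def total_cost(prices):
    """Step-by-step summation: the algorithmic level."""
    running_sum = 0.0
    for price in prices:
        running_sum += price
    return running_sum

print(total_cost([1.50, 2.25, 0.75]))  # 4.5
```

The same task could be solved by a different algorithm (say, pairwise summation) on entirely different hardware; on Marr's view, the task-level description is untouched by either change.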

Marr frames the problem of vision and other information-processing tasks in the cognitive sciences according to this top-down model. To understand vision, we must explain how the brain is programmed to recognize objects in complicated scenes with varying light conditions. How the computations are implemented—what hardware is used—has nothing essential to do with understanding vision per se. Marr’s approach to understanding vision includes steps that take us from a two-dimensional input array (like that of the light-sensitive surface of the retina[1] or video camera electronics) to a three-dimensional representation (Marr 1982, p. 37). The following is a stepwise outline of his account.

1. The intensity of light is noted for each point in the two-dimensional input array.
2. A rough sketch is drawn, marking contours and boundaries between objects.
3. A 2½-dimensional sketch is constructed. This representation carries more information than a two-dimensional sketch but less than a three-dimensional one.
4. A three-dimensional model is constructed out of geometric shapes that represent the world.

All in all, we are given steps for going from a two-dimensional image on our retina or some other sensing device, such as a camera, to a three-dimensional representation.
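The stages above can be sketched as a pipeline of functions. This is a hypothetical Python skeleton with placeholder representations at each stage; Marr's actual algorithms are far richer, so only the overall structure (each stage consuming the previous stage's output) is meant to be illustrative.

```python
# A schematic of Marr's pipeline:
# 2-D intensity array -> rough sketch -> 2.5-D sketch -> 3-D model.
# Each stage is a toy stand-in for the much richer computation Marr describes.

def rough_sketch(intensity):
    """Mark rough boundaries: a point counts as an edge where intensity
    changes sharply relative to its right-hand neighbour."""
    edges = []
    for row in intensity:
        edges.append([abs(a - b) > 0.5 for a, b in zip(row, row[1:])])
    return edges

def sketch_2_5d(edges):
    """Attach viewer-centred depth cues to each point
    (here just a dummy constant depth of 1.0)."""
    return [[(is_edge, 1.0) for is_edge in row] for row in edges]

def model_3d(surface):
    """Assemble an object-centred description (here: simply count
    edge points as a stand-in for fitting geometric shapes)."""
    return sum(1 for row in surface for (is_edge, _) in row if is_edge)

image = [[0.0, 0.0, 1.0, 1.0],
         [0.0, 0.0, 1.0, 1.0]]
parts = model_3d(sketch_2_5d(rough_sketch(image)))
print(parts)  # 2: one boundary point detected per row
```

The point of the sketch is only the dataflow: a flat array of intensities goes in, and progressively more structured, more abstract representations come out.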

This project proved difficult, and apart from the difficulty of arriving at a good three-dimensional representation, there are other issues. It is unclear what use such representations would be to a conscious human. The existence of an inner geometric representation of, say, an apple fails to explain why or how we perceive an apple. Positing inner representations merely duplicates the problem of perception inside the head, because we must now explain how we see the inner representation.

The mystery of visual perception—how we experience and understand what is in our field of vision—remains. Marr’s model is formal and, on the basis of our earlier discussion of the Chinese room and the background, it is unclear how we could arrive at any understanding of what we see on the basis of formal operations. Nevertheless, Marr’s model became influential as computational neuroscience burgeoned during the 1980s. One reason was that research on the neurobiology of vision indicated that visual processing, like Marr’s model, goes from lower to higher levels of abstraction. With the rise of computational neuroscience, researchers talked increasingly about the brain as an information-processing system, composed of functions with input-output relations between them. During the past two decades, computational neuroscience has turned increasingly to neural network models and efforts to reveal how the brain is interconnected.

  • [1] The retina is a three-dimensional cell jungle of astonishing complexity, so this is a gross simplification for the sake of illustrating Marr’s account.
 