
The Background

There is a problem with formal approaches to intelligence: the problem of the background. We explain mental life (what we think, feel, believe, and so on) with words in sentences. Moreover, we often explain one sentence with other sentences, so it might seem that sentences are what is necessary for thinking, perhaps hidden ones in a language of thought, as philosopher Jerry Fodor (1935-) suggested in The Language of Thought (Fodor 1975). But it is unclear how sentences could be constitutive of human thought. Consider the following example. Suppose a child is asking me questions. He asks, “Is it going to rain?” I answer, “No, there are no clouds.” Then he asks, “What is a cloud?” I answer, “Clouds are vapor formations.” The child asks, “What is vapor?” This goes on until I no longer know how to answer. The child just keeps on asking. Does this mean the child doesn’t understand? The child could have understood more than he let me know—perhaps the child was pulling my leg. How could I know? As symbolic representations, sentences do not automatically acquire meanings, and it doesn’t matter whether they are printed, vocalized, digitized, or in a hypothetical language of thought. So there is no way of proving that the child was pulling my leg by appealing to the sentences I used in my explanations, because the sentences are not enough to cause understanding to happen; they do not force understanding.

In the Chinese room, Searle shuffles symbols into sentences so his replies are as good as those of a Chinese speaker, but he does not comprehend them. He has rules for constructing sentences; however, those rules give no meaning. What is needed to understand sentences are prerepresentational capacities because sentences and other representations don’t interpret themselves. Searle collectively calls these capacities the background. The background is a necessary condition for intentionality—it is a condition for sentences to be about something, to mean something.[1]
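To make the point concrete, here is a minimal, hypothetical sketch in Python. It is not Searle’s actual rule book; the RULES table and the reply function are made up for illustration. A program can map Chinese questions to fluent Chinese answers by pure pattern matching, and nothing in the code requires, or produces, understanding.

    # A hypothetical, purely syntactic "rule book" (an illustration, not
    # Searle's example rules): input strings are matched to canned replies.
    # The program never interprets the symbols it shuffles.
    RULES = {
        "今天会下雨吗": "不会，天上没有云",  # "Will it rain today?" -> "No, there are no clouds"
        "云是什么": "云是水汽的聚集",        # "What is a cloud?" -> "Clouds are vapor formations"
    }

    def reply(symbols: str) -> str:
        # Look the input up; fall back to a stock answer when no rule matches.
        return RULES.get(symbols, "请再说一遍")  # "Please say that again"

    print(reply("今天会下雨吗"))  # prints a fluent reply, with zero understanding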

The strong AI approach presupposed that sentences and other symbolic representations sufficiently accounted for understanding, and so did cognitive science when it adopted the computer as its laboratory of the mind. Minsky attempted to solve the problem of understanding by capturing contextual knowledge representations, or what he called frames, and Roger Schank (1946-) attempted to do so with scripts—procedural knowledge representations. Finally, attempts were made to overcome the problem of understanding using gigantic collections of “facts.” One example is the Cyc (encyclopedia) database, with millions of representations. The development of Cyc stems from a discussion among AI researchers Douglas Lenat (1950-), Minsky, and Alan Kay (1940-), who thought a computer system with one million basic facts should be able to reason with common sense (Henderson 2007, p. 95). Lenat went on to try to create such a system. The idea was to manually enter basic “concepts” and “axioms,” and to use machine learning techniques to generate further “knowledge” representations. But if the thesis of the background is correct, trying to capture understanding in formal representations won’t work.
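To make vivid what such formal “knowledge” amounts to, here is a minimal, hypothetical sketch in Python. It is not Cyc’s actual representation language or tooling; the facts and the infer function are invented for illustration. A few hand-entered facts and one inference rule let the program derive that a “cat” is an “animal,” yet the tokens have no aboutness of their own; whatever meaning they carry is assigned by us.

    # A hypothetical, Cyc-style toy knowledge base: hand-entered "facts" plus
    # one inference rule. The strings are mere tokens; nothing in the program
    # is about cats or animals unless an outside interpreter takes it that way.
    facts = {
        ("isa", "cat", "mammal"),
        ("isa", "mammal", "animal"),
    }

    def infer(triples):
        # Forward-chain the transitivity of "isa" until nothing new appears.
        derived = set(triples)
        while True:
            new = {("isa", a, c)
                   for (_, a, b1) in derived
                   for (_, b2, c) in derived
                   if b1 == b2 and ("isa", a, c) not in derived}
            if not new:
                return derived
            derived |= new

    print(("isa", "cat", "animal") in infer(facts))  # True, by syntax alone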

The sentences, rules, or representations of strong AI are not, in and of themselves, about anything. There is no intrinsic intentionality involved. To illustrate this, if a plasma cloud had emerged a nanosecond after the Big Bang with a luminous pulsating pattern that would look to us like the message “let there be light,” it would, at that stage of the evolution of the universe, lack meaning. If God had made it the case that this message should have so materialized to glorify the act of creation, then it would have intentionality derived from God. Similarly, if what we think of as a laptop materialized in the plasma cloud through a process of quantum randomness, it would not carry intentionality—it would simply be a pattern of particles at a certain stage of the evolving universe. Again, if it were an act of God, then it would be more than a pattern of particles—it would be God’s laptop, and whatever intentionality it had would be derived from God. The intentionality would not be intrinsic to the physics of the laptop. It wouldn’t matter how powerful the laptop was or how God had programmed it.

We have no reason to think that digital electronics and formal systems can support intrinsic intentionality. In contrast, humans have intrinsic intentionality as a fact of biology. We have evolved as intentional beings and have created books, computers, and signs to augment our intellects. But these creations lack intrinsic intentionality.

  • [1] In a nutshell, intentionality is “aboutness,” and it is difficult to see how formal representations could have it in themselves.
 