The Chinese Room

The Chinese Room is a thought experiment proposed by John Searle as a critique of the behaviouristic and functionalist account of intelligence underlying the TT. Searle (1999) describes the Chinese Room experiment as follows:

Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese.

As this passage suggests, the “man in the room” manipulating unknown symbols according to a program (the book of instructions) is the equivalent of a machine. And, if a machine able to perform this task existed, it - just like the English-speaking human - would not understand anything, even though it provides the correct output. The core of Searle’s critique is that the simple execution of a program manipulating symbols at the syntactic level (i.e., without understanding anything at the semantic one) does not constitute proof of the actual intelligence of the system manifesting that behaviour.[1] The obtained behaviour can, in fact, only be considered the result of a simulation of intelligence. As a consequence of this state of affairs, it is improper to attribute any kind of “thinking” or “understanding” capacity to any computing machine. What is lacking in such machines, according to Searle, is the intrinsic “intentionality” behind performing the tasks; only if such intentionality were present could we say that a machine has a mind, is intelligent, etc. Such intentionality, however, is not grasped by an AI program performing the task. In fact, as in the case of the teleological explanation seen in Chapter 3, it is only externally “attributed” by the programmer (who decides the meaning of the symbols the program is going to manipulate) or by an external observer who adopts an “intentional stance” towards the observed behaviour of the system. On the basis of these considerations, Searle introduced the well-known distinction between “Strong AI” (i.e., the position assuming that computational models, embodied or not, can possess a “mind”, a “consciousness”, etc. in the same way as human beings)4 and “Weak AI” (the position according to which computational models can simulate human behaviour and thinking but, as the Chinese Room argument shows, cannot be said to possess any kind of “real” cognitive state).
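To make the purely syntactic nature of this manipulation concrete, the following minimal Python sketch (not from the book; the rule-book contents and the names RULE_BOOK and answer are illustrative assumptions) shows a program that returns “correct” Chinese answers by mere pattern matching on uninterpreted symbol strings - exactly the kind of behaviour that, for Searle, involves no understanding. His argument, of course, targets any program, however sophisticated, not only a lookup table.

    # A toy "rule book": input symbol strings mapped to output symbol strings.
    # The program never represents what any of these symbols mean.
    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",
        "今天天气怎么样？": "今天天气很好。",
    }

    def answer(input_symbols: str) -> str:
        # Purely syntactic step: match the shape of the input and emit
        # the prescribed output; no semantics is involved at any point.
        return RULE_BOOK.get(input_symbols, "对不起，我不明白。")

    # From outside the "room" the exchange looks like competent Chinese;
    # inside there is only symbol matching.
    print(answer("你好吗？"))  # -> 我很好，谢谢。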

Searle’s argument has received both critiques and endorsements. I will not detail here all the different disputes and replies to the argument, since they would take us too far from the purposes of this book (a good, synthetic overview is provided in the entry of the Stanford Encyclopaedia of Philosophy - https://plato.stanford.edu/entries/chinese-room/); it is important, however, to point out that current AI and cognitive modelling research is perfectly aligned with the Weak AI hypothesis. This does not weaken either discipline: AI researchers, indeed, continue to build better systems with the purpose of making them useful for human beings and, in principle, able to perform better than humans in specific tasks; computational cognitive scientists, on the other hand, continue to build “structural” computational simulations of cognitive processes without claiming to build any system that can really be described as “intelligent” or “conscious” in the proper human sense. This latter assumption also underlies this entire book and is in contrast to the popular (but incorrect) vulgata that sees computational models of cognition as belonging to the class of systems espousing the “Strong AI” hypothesis and, as such, assumes that such models (of minds or brains) can actually be considered “minds” or “brains” in the very same way as human ones. As described hitherto, artificial models of minds/brains can simulate the human-like mechanisms that determine a given behaviour, and this can enable the understanding of some hidden mechanistic dynamics. This understanding can eventually be exported to the context of non-computational investigations (e.g., in psychology, biology, or neuroscience). Such artificial models of the mind, therefore, can be used to understand mental phenomena without pretending that they are the real phenomena they are modelling. Using a famous analogy proposed by Searle himself: just as a model of the weather is not the weather, a model of the human mind is not a human mind. As a consequence, and as stated repeatedly since the preface, artificial models of minds/brains can be seen as representative of mental/brain activities only to a certain degree, depending on their level of structural accuracy. And that’s perfectly fine.

  • [1] It is worth noting that Searle’s critique is not directed only at the “symbolic approach” or the “Physical Symbol System Hypothesis”, as a superficial analysis could suggest (of course, these were the main approaches available in the AI of the 1980s). His critique, however, applies to computational approaches in general. In fact, despite some connectionist counter-objections (e.g., Paul and Patricia Churchland, 1990), a neural AI system able to perform the same tasks described by Searle would still need to “manipulate” (by adjusting the weights in the net) a distributed pattern of representations that, at the highest levels of the network, would have a symbol-like function (and this abstractive capacity and functionality is exactly the target of deep learning; see Goodfellow et al., 2016).
  • 4 Searle’s exact formulation of the Strong AI hypothesis is: “the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states” (Searle, 1980).
 