The Newell test for a theory of cognition

Newell (1980, 1990) proposed a set of criteria to evaluate the extent to which an artificial system can be said to provide the computational background for a theory of cognition (or, better, a simulative account of such a theory). In a 2003 paper that appeared in Behavioral and Brain Sciences, John Anderson and Christian Lebiere (Anderson and Lebiere, 2003) distilled this list into 12 criteria that they called the “Newell Test for a Theory of Cognition”. As the authors write, these criteria are functional constraints on the cognitive architecture: the first nine reflect things that the architecture must achieve to implement human intellectual capacity, and the last three reflect constraints on how these functions are to be achieved. As such, they do not reflect everything that one should ask of a cognitive theory. The distilled criteria are the following:

[1] capabilities that human beings are able to exhibit: induction, deduction, abduction, analogy, etc. This aspect partially overlaps with the “heterogeneity” criterion proposed in our previous analysis, since Newell assumed that all such inferences could be tackled with different types of integrated “symbolic systems”. The heterogeneity criterion, on the other hand, calls for a heterogeneous approach to knowledge integration, coupling symbols with diverse representations and reasoning procedures.
These criteria, very similar in nature to some of the desiderata for the development of cognitive architectures presented in Chapter 3, were used to evaluate two different theories of cognition (i.e., ACT-R and classical connectionism) in a qualitative way. Given the nature of the criteria, a stricter comparison was indeed impossible: for most of them there is no direct way to assess the extent to which a given computational cognitive theory can actually be compared or ranked. In other words, such criteria have been called “general”, since they cover all the major aspects of the cognitive spectrum, but in most cases they are generic (i.e., highly underspecified) and, as such, affected by subjective judgements. As Anderson and Lebiere (2003) also pointed out, “Regrettably, we were not able to state the Newell criteria in such a way that their satisfaction would be entirely a matter of objective fact” (p. 597). Summing up: the Newell test can be used to evaluate, in general, the human-likeness of cognitively inspired computational models and the extent to which such models can be said to embody a “theory of cognition”. In principle, if we accept the assumption that adequately plausible human-like models can also converge towards human-level results (as has been proven for many computational models in the history of cognitive science), this test could also be seen as an indirect way of assessing the human-level intelligence (in the “weak” sense) of integrated systems. It is, however, difficult to envisage its use in evaluating and comparing specific computational models of cognition (e.g., models of semantic ambiguity resolution) and, finally, the subjective assignment of the ratings for many of the criteria represents one of its biggest weaknesses.
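The grading exercise described above can be sketched as a simple tabulation. In the sketch below, the criterion names paraphrase the twelve labels used by Anderson and Lebiere (2003), and the letter grades are hypothetical placeholders for illustration only, not the ratings actually assigned in the paper:

```python
# Sketch of a qualitative "report card" in the spirit of the Newell test.
# Criterion names paraphrase Anderson and Lebiere's (2003) twelve labels;
# the grades below are hypothetical placeholders, NOT the paper's ratings.
CRITERIA = [
    "flexible behavior", "real-time performance", "adaptive behavior",
    "vast knowledge base", "dynamic behavior", "knowledge integration",
    "natural language", "consciousness", "learning", "development",
    "evolution", "brain realization",
]

GRADE_POINTS = {"A": 4, "B": 3, "C": 2, "D": 1, "F": 0}

def grade_point_average(ratings):
    # Average the letter grades over all twelve criteria; a missing
    # criterion raises KeyError, forcing a complete (if subjective) review.
    return sum(GRADE_POINTS[ratings[c]] for c in CRITERIA) / len(CRITERIA)

# Illustrative ratings for a hypothetical architecture.
hypothetical_arch = {c: "B" for c in CRITERIA}
hypothetical_arch["natural language"] = "C"

print(round(grade_point_average(hypothetical_arch), 2))  # → 2.92
```

The tabulation makes the test's main weakness concrete: the aggregation is trivial, while each individual letter grade remains a subjective judgement call, which is exactly the point raised in the passage above.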