The Newell test for a theory of cognition

Newell (1980, 1990) proposed a set of criteria to evaluate the extent to which an artificial system can be said to provide the computational background for a theory of cognition (or, better, a simulative account of such a theory). In a 2003 paper that appeared in Behavioral and Brain Sciences, John Anderson and Christian Lebiere (Anderson and Lebiere, 2003) distilled this list into 12 criteria that they called the “Newell Test for a Theory of Cognition”. As the authors write, these criteria are functional constraints on the cognitive architecture: the first nine reflect things that the architecture must achieve to implement human intellectual capacity, and the last three reflect constraints on how these functions are to be achieved. As such, they do not reflect everything that one should ask of a cognitive theory.

The distilled criteria are the following:

  • 1 flexible behaviour: for Newell, a system with a theory of cognition should be flexible enough to learn and perform almost arbitrary cognitive tasks with a high degree of expertise;
  • 2 real-time performance: this criterion calls for systems able to solve tasks in “human time”, which Newell calls “real time”;
  • 3 adaptive behaviour: cognitive architectures should have mechanisms that enable their adaptivity;
  • 4 vast knowledge base: this criterion calls for the same necessity as the “size” requirement considered in the previous chapter for the analysis of the knowledge level in cognitive architectures. A system with a huge knowledge base, indeed, immediately faces computational and cognitive problems concerning the retrieval of the correct knowledge for a given task, problems that are neglected, or hidden under the carpet, when such systems employ only toy knowledge bases;
  • 5 dynamic behaviour: for Newell, a system with a theory of cognition must be able to deal with the unexpected and with a changing environment;
  • 6 knowledge integration: different kinds of knowledge should be integrated in order to endow a cognitive architecture with the full range of inferential capabilities that human beings are able to exhibit: induction, deduction, abduction, analogy, etc. This aspect partially overlaps with the “heterogeneity” criterion proposed in our previous analysis, since Newell assumed that all such inferences could be tackled with different types of integrated “symbolic systems”. The heterogeneity criterion, on the other hand, calls for a heterogeneous approach to knowledge integration, coupling symbols with diverse representations and reasoning procedures;
  • 7 natural language: a cognitive architecture must be able to deal with natural language interaction;
  • 8 learning: a cognitive architecture should be able to learn and acquire competences;
  • 9 consciousness: a computational theory of cognition should possess a theory of consciousness and model it. This aspect is controversial since, for human beings as well, there is no consensus on a unifying theory of consciousness; nonetheless, many computational models trying to address this issue exist.

As mentioned, the last three criteria correspond to different types of constraints through which the above-mentioned skills should develop:

  • 10 development: the overall abilities described above should unfold and grow over time;
  • 11 evolution: a cognitive architecture should reflect the evolutionary processes that have selected certain mechanisms and heuristics; and
  • 12 brain: Newell suggests that the components of cognitive architectures should be mapped onto brain structures and, in addition, that these matches should develop into a neural implementation such that the computation of the neural structures matches that of the assigned components. The latter criterion shows how Newell, after an initial period of indifference towards neuroscience, started to consider the “biological band” an important aspect enriching the constraints posed by his systems (in particular SOAR) at the highest bands.

These criteria, very similar in nature to some of the desiderata for the development of cognitive architectures presented in Chapter 3, were used to evaluate two different theories of cognition (i.e., ACT-R and classical connectionism) in a qualitative way. Given how the criteria are defined, a stricter comparison was, indeed, impossible: for most of the criteria there is no direct way to assess the extent to which a given computational cognitive theory can actually be compared or ranked with respect to them. In other words, such criteria have been called “general”, since they cover all the major aspects of the cognitive spectrum, but in most cases they are generic (i.e., highly underspecified) and, as such, affected by subjective judgements. As Anderson and Lebiere (2003) also pointed out, “Regrettably, we were not able to state the Newell criteria in such a way that their satisfaction would be entirely a matter of objective fact” (p. 597).

Summing up: the Newell test can be used to evaluate, in general, the human-likeness of cognitively inspired computational models and the extent to which such models can be said to have a “theory of cognition”. In principle, if we accept the assumption that adequately plausible human-like models can also converge towards human-level results (as has been proven for many computational models in the history of cognitive science), this test could also be seen as an indirect way of assessing the human-level intelligence (in the “weak” sense) of integrated systems. It is, however, difficult to envisage its use in evaluating and comparing specific computational models of cognition (e.g., models of semantic ambiguity resolution) and, finally, the subjective assignment of ratings for many of the criteria represents one of its biggest weaknesses.
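To make the nature of such a qualitative evaluation concrete, the sketch below shows one possible, purely illustrative way to record per-criterion subjective grades for candidate architectures and aggregate them. The criterion names come from the list above; the grade scale, the helper names and any example scores are hypothetical placeholders, not the ratings actually published by Anderson and Lebiere (2003), and aggregating them does not make the judgement any less subjective.

```python
# Illustrative sketch only: recording subjective, per-criterion grades
# for an architecture on the Newell Test. The grade scale and any
# example scores are hypothetical, NOT Anderson and Lebiere's ratings.

NEWELL_CRITERIA = [
    "flexible behaviour", "real-time performance", "adaptive behaviour",
    "vast knowledge base", "dynamic behaviour", "knowledge integration",
    "natural language", "learning", "consciousness",
    "development", "evolution", "brain",
]

# Hypothetical ordinal scale for the qualitative judgements.
GRADES = {"worst": 0, "mixed": 1, "better": 2, "best": 3}


def summarize(scores: dict) -> float:
    """Average a set of per-criterion grades on the ordinal scale.

    The aggregate inherits the subjectivity of the individual
    judgements; it cannot make the comparison more objective.
    """
    missing = [c for c in NEWELL_CRITERIA if c not in scores]
    if missing:
        raise ValueError(f"ungraded criteria: {missing}")
    return sum(GRADES[scores[c]] for c in NEWELL_CRITERIA) / len(NEWELL_CRITERIA)


# Example usage with made-up grades for a hypothetical system:
# summarize({c: "mixed" for c in NEWELL_CRITERIA})  # -> 1.0
```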

 