
Levels of plausibility and the minimal cognitive grid (MCG)

Before introducing a pragmatic tool that can be useful in evaluating and comparing the explanatory levels of cognitively inspired systems, it is necessary to consider in greater detail the different levels of “plausibility” that can be achieved in an artificial system. As mentioned in the previous chapter, an element worth considering in this respect is the irrelevance, with respect to the “plausibility” issue, of the level of abstraction adopted to model a given cognitive behaviour. In addition, it is worth noting that the notions of both cognitive and biological plausibility, in the context of computational Cognitive Science and computational modelling, refer to the level of accuracy obtained by the realization of an artificial system with respect to the corresponding natural mechanisms (and their interactions) that it is assumed to model. In particular, cognitive and biological plausibility of an artificial system requires the development of artificial models (i) that are consistent (from a cognitive or biological point of view) with the current state-of-the-art knowledge about the modelled phenomenon and (ii) that adequately represent (at different levels of abstraction) the actual mechanisms operating in the target natural system and determining a certain behaviour.

Some of the key questions to answer in this respect are: what are the elements (e.g., the processes, the mechanisms, the structures, etc.) in the inspiring natural system that give rise to the desired behaviour? To what extent does the obtained behaviour depend on such elements? Definitive answers to these questions are still not available. However, in the context of biologically inspired artificial systems, different general criteria have been proposed to characterize the design of biologically plausible models. In this respect, the roboticist Barbara Webb (2001) identified a list of dimensions for the characterization of different design aspects of bio-inspired models. Here are the seven dimensions she identified:

  • 1 Biological relevance: This dimension shows if, and eventually to what extent, a computational model can be used to generate and test hypotheses about a given biological system that is taken as a source of inspiration.
  • 2 Level: This dimension aims at individuating “the basic elements of the model that have no internal structure or whose internal structures are ignored”. In other words, it identifies the modelling focus. For example: an atomic model could be focused on the internal structures of atoms or could ignore this issue by focusing on the interactions between atoms (of course, the choice of the “level” also usually determines what class of formalisms can be adopted).
  • 3 Generality: This feature aims at individuating how many different biological systems can be represented by a model.
  • 4 Abstraction: This dimension considers how many details are included in the artificial model with respect to the natural system that is taken as a source of inspiration. According to Webb’s terminology, “abstraction” should not be confused with the “level” dimension. A more abstract model of a cognitive process could, indeed, contain more details and be more complex than the corresponding lower-level brain model of the same mechanism.
  • 5 Structural accuracy: This feature intends to measure the similarity between the mechanisms behind the behaviour of an artificial model and those of the target biological system. This aspect is directly affected by the state-of-the-art knowledge of the actual mechanisms in biological systems and is not necessarily proportional to how many details are included in the model.
  • 6 Performance match: This dimension accounts for the similarity of the performances of the model with respect to the performances obtained by the target biological system.
  • 7 Medium: This dimension refers to the physical medium used to implement the model.

Despite the huge influence of Webb’s characterization of the dimensions to take into account when designing and evaluating bio-inspired systems, this proposal is limited in a number of ways. First, it explicitly targets biologically plausible constraints. It does not consider, for example, different types of higher-level cognitive constraints that one could indeed consider in a “plausible” model of human (or natural) cognition. In addition, it does not consider non-embodied agents/simulations, thus neglecting a huge class of models developed within the cognitive modelling and AI communities. Furthermore, some dimensions do not appear to be self-explanatory. For example, the concepts of “biological relevance” and “structural accuracy” are highly overlapping and there is no clearly defined method that one could use to determine how such elements are/can be operationally defined. Similarly, the concept of “medium” is assumed to consider the physical instantiation carrying out the computations of the computational model and is evidently related to the physical level in Marr’s hierarchy (described in the previous chapter). However, Webb’s proposal explicitly limits the considerations on this aspect to the presence (or lack thereof) of an embodied agent. The “medium”, in her view, is the physical body of the agent (a robot). This view is, however, quite restrictive since it does not consider, for example, alternative physical models of computation based on quantum computers or hybrid biological/artificial neural networks realized in the field of neuromorphic computing.

In the following section, I propose a much more synthetic list of elements that subsumes some of Webb’s dimensions and that, additionally, can be applied not only to biologically inspired systems but also to cognitively inspired ones. This latter aspect is important since, as we saw in the previous chapter, artificial plausibility can be obtained at different levels of abstraction (not only at the neurophysiological or biological level), using different formalisms and modelling approaches. In addition, the proposed characterization has the merit of providing a set of characteristics that can be directly used to compare different biologically or cognitively inspired systems and as a tool to project their explanatory power.

The proposed minimal set of analytic dimensions to consider, which I call the “Minimal Cognitive Grid” (MCG), comprises the following aspects:

  • 1 Functional/Structural Ratio: this dimension concerns the individuation of the elements upon which the artificial model/system is built. For example, in a complex artificial system (embodied or not) it is possible to model, in a “functional” way, some elements of the system whose internal structure and mechanisms are not considered important with respect to one’s explanatory goals and, on the other hand, it is possible to build structural models of other components of the same system. In other words, this dimension looks at the ratio between functional and structural components (and heuristics) considered in the design and implementation of an artificial system. This ratio depends on the actual focus and goal of the model and can be used for both integrated systems performing different types of tasks and for narrow and task-specific systems. This dimension synthesizes and subsumes the “biological relevance” and “structural accuracy” individuated by Webb by enabling, in principle, the possibility of performing both a quantitative and qualitative comparison between different artificial systems (whether they are cognitively inspired or not). Of course, in this case, the lower the ratio, the better. The “system dissection” required by this dimension of analysis is also useful to individuate the kind of explanations that can be ascribed to different components of the systems (e.g., a mechanistic explanation would make sense only for the “structurally modelled” components).
  • 2 Generality: as in the Webb proposal, this feature aims at evaluating to what extent a given system/architecture can be used in different tasks, i.e., how general the model is and whether it can be used to simulate a set of cognitive functions and not just a narrow one. This element, too, can be considered both from a quantitative (e.g., by counting how many cognitive faculties can be modelled within a single system) and a qualitative point of view.
  • 3 Performance Match: as in the Webb proposal, this dimension involves a direct comparison between natural and artificial systems in terms of the obtained results for specific or general tasks. With respect to Webb’s account, however, I propose a more precise characterization of this dimension when we consider human beings as the “natural system” used as a reference point. In particular, I suggest taking into account some of the main hints of the Psychometric AI movement (Bringsjord, 2011), which calls for the use of a battery of validated tests to assess the effective “match” between artificial systems and human beings. Along this line, thus, I also propose considering two additional specific requirements that refer to such an aspect: (1) the analysis of system errors (which, in human-like artificial systems, should be similar to those committed by humans) and (2) the execution time of the tasks (which, again, should converge towards human performances). Therefore, in this configuration, the degree of accuracy obtained in certain performances is not sufficient to claim any kind of biological or cognitive plausibility. Of course, the inclusion of the two additional requirements (if considered in isolation) similarly does not guarantee any plausibility claim (since a system could match these additional psychometric measures without being a “structural model”). However, it is worth noting that all three dimensions conceived for the MCG, considered together, can provide an objective evaluation of the structural accuracy of a model. As with the first two dimensions, the rating assumed on the third dimension can also, in principle, be determined by both quantitative (e.g., by considering the difference in terms of results, errors, and execution times between the natural and artificial systems) and qualitative means.
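To make the comparative use of the MCG concrete, the three dimensions can be sketched as a simple data structure. The following is a minimal, hypothetical sketch: the system names, the numeric scores, and the tie-breaking order of the comparison are all invented for illustration and are not part of the MCG proposal itself, which admits both quantitative and qualitative ratings.

```python
from dataclasses import dataclass

@dataclass
class MCGProfile:
    """A hypothetical scoring of a system along the three MCG dimensions."""
    name: str
    functional_structural_ratio: float  # functional vs. structural components; lower is better
    generality: int                     # e.g., number of cognitive faculties modelled
    performance_match: float            # 0..1 agreement with human results, errors, timing

def more_structurally_accurate(a: MCGProfile, b: MCGProfile) -> MCGProfile:
    """Toy comparison: prefer the lower functional/structural ratio,
    then the higher generality, then the higher performance match."""
    return max((a, b), key=lambda p: (-p.functional_structural_ratio,
                                      p.generality,
                                      p.performance_match))

# Invented scores for two hypothetical systems
system_a = MCGProfile("system-A", functional_structural_ratio=0.8,
                      generality=6, performance_match=0.7)
system_b = MCGProfile("system-B", functional_structural_ratio=2.5,
                      generality=1, performance_match=0.9)

print(more_structurally_accurate(system_a, system_b).name)  # prints "system-A"
```

The lexicographic ordering here is only one possible operationalization; in practice the three ratings would be weighed jointly and supplemented by qualitative analysis, as the text argues.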

As descriptors of the “structural accuracy” of a given system, the MCG dimensions allow the de facto operationalization of the x-axis on the Enriched 2D Space of Cognitive Systems.

A further dimension of analysis, not included in the MCG but one that could be useful to consider, is the “modelling paradigm” adopted in the development of a given system (this dimension only partially overlaps with the “level” dimension in Barbara Webb’s account). Such a criterion, however, does not lie on the same ground as the previous ones, since it does not play any role with regard to the individuation of the structural adequacy and the explanatory capability of the analyzed systems. On the other hand, it can be useful as a qualitative dimension to analyze the commitments (if any) that systems adopting different modelling paradigms (symbolic, connectionist, and hybrid) make to the cognitive research agenda.

Summing up: by starting from the original proposal from Barbara Webb, we have individuated a minimal set of dimensions (which we have called the “Minimal Cognitive Grid”) that can be used as an analytical tool to compare different kinds of cognitive artificial systems and their degree of structural accuracy with respect to human performances and abilities. This tool is general enough to include both biological and cognitive modelling approaches and allows a comparison between them in terms of their explanatory capacity with respect to the natural system that is taken as a source of inspiration.
