
The space of cognitive systems

In his 2014 book Artificial Cognitive Systems, the cognitive roboticist David Vernon proposed a 2D space for classifying different types of cognitive artificial systems. The original space plots on the x-axis the level of natural (i.e., biological or cognitive) inspiration assumed in the design of an artificial system (with “machine-oriented” and “natural inspiration” as its extremes) and, on the y-axis, the level of abstraction of the tools adopted for modelling the behaviour of an artificial system (with “high” and “low” as its extremes). In light of the arguments presented so far, that original schema has been “enriched” by making explicit two further elements that were left implicit in Vernon’s analysis: the kind of design approach adopted (functionalist vs. structuralist) and the type of modelling approach adopted (symbolic vs. emergentist).
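Purely as an illustrative sketch (not part of Vernon’s proposal or of the enriched schema itself), the enriched space can be read as a four-dimensional profile attached to each system: two continuous coordinates for Vernon’s original axes and two categorical dimensions for the added design and modelling distinctions. The following Python snippet, whose class names and example placements are entirely hypothetical, only makes this reading concrete:

    from dataclasses import dataclass
    from enum import Enum

    class DesignApproach(Enum):
        FUNCTIONALIST = "functionalist"
        STRUCTURALIST = "structuralist"

    class ModellingApproach(Enum):
        SYMBOLIC = "symbolic"
        EMERGENTIST = "emergentist"

    @dataclass
    class CognitiveSystemProfile:
        # x-axis: 0.0 = fully machine-oriented, 1.0 = fully naturally inspired
        natural_inspiration: float
        # y-axis: 0.0 = low level of abstraction, 1.0 = high level of abstraction
        abstraction_level: float
        # The two dimensions made explicit by the proposed enrichment
        design_approach: DesignApproach
        modelling_approach: ModellingApproach

    # Hypothetical placements, for illustration only (not taken from Figure 2.1):
    symbolic_functionalist = CognitiveSystemProfile(
        natural_inspiration=0.4, abstraction_level=0.9,
        design_approach=DesignApproach.FUNCTIONALIST,
        modelling_approach=ModellingApproach.SYMBOLIC)

    emergentist_structuralist = CognitiveSystemProfile(
        natural_inspiration=0.9, abstraction_level=0.2,
        design_approach=DesignApproach.STRUCTURALIST,
        modelling_approach=ModellingApproach.EMERGENTIST)

The point of such a reading is simply that the two added dimensions vary independently of the two original axes: any combination of design approach and modelling approach can, in principle, occupy any region of the space.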

This “Enriched 2D Space of Cognitive Systems” is depicted in Figure 2.1.

FIGURE 2.1 Enriched 2D space of cognitive systems.

The proposed enrichment serves to show that it is possible to have functionalist or structuralist systems using both the symbolic and the emergentist

approaches. Therefore, selecting the modelling paradigms to adopt in the realization phase involves design choices at the representational/algorithmic level (level 2 in Marr’s terms) but is, in principle, not relevant for the explanatory ability of the artificial systems with respect to their natural counterparts.[1] It is important to point this aspect out because, as indicated in the previous chapter, the last few decades of research in the fields of Cognitive Science and AI have shown empirically that both emergentist and symbolic approaches are useful in modelling different aspects of cognitive capacities in artificial systems. Indeed, low-level (e.g., perceptual, motor, etc.) capabilities of artificial systems are usually better modelled by using emergentist approaches, while high-level (e.g., reasoning) cognitive capacities are better modelled by adopting a symbolic approach. Therefore, the cognitive design perspective, and the corresponding explanatory power of the behaviour of artificial systems, is in principle agnostic with respect to the classes of formalisms applied to model a given phenomenon; it can be applied within both the symbolic and the emergentist research agendas.

A natural consequence of this state of affairs is that it becomes possible to realize “structurally plausible” cognitive artificial systems (or artificial models of cognition) by adopting modelling frameworks focusing on different levels of abstraction. This aspect is important because there is nowadays an implicit vulgata according to which only models adopting some of the emergentist modelling approaches count as “structurally valid models”. However, based on what we have discussed so far, it should be clear by now that this assumption is methodologically wrong. As we will see, in fact, adopting an emergentist modelling framework (e.g., a connectionist one) does not in itself imply satisfying the requirement of biological plausibility. We can have models based on the functional organization of neural nets with suitable learning algorithms without such models being biologically plausible or interesting as explanatory tools. As pointed out by Cordeschi (2002: 255):

The very question of “model indeterminacy”, raised by the functional equivalence of the symbolic approaches to the way in which our brain processes information, could be similarly raised by the functional equivalence at the neural net level, if the appropriate constraints are not identified.

By the very same argument, it is not possible to exclude a priori symbolic systems from the list of potentially plausible models of cognition. In fact, even if nowadays there is no neuroscientific evidence of the existence of “symbols” in our brain à la PSSH, it is true, from a functional point of view, that our biological neural architecture is able to organize its neural representations in a hierarchical and efficient way such that the more abstract neural layers of this hierarchy behave like symbols (and can therefore account for all the types of manipulations and combinations executed and described by symbolic systems). In addition, such symbol-like structures can still be combined with one another in a cognitively compliant way (i.e., according to cognitively plausible information processing mechanisms), thus contributing to the realization of structurally plausible artificial models. We will examine this aspect in the following section with some examples.

  • [1] As we will see, the “explanatory power” depends on the types of structural constraints considered during the modelling phase.