
Kinds of explanations

The assessment of the explanatory role of computational artefacts is a debated problem in the field of computational Cognitive Science. Some relevant questions related to this issue include, among others, the following: to what extent is it appropriate to say that a given artificial system (whose simulation and behaviour is used as a means to provide an explanans - “what explains something”) has an explanatory role with respect to a phenomenon/behaviour of a natural system (the explanandum - “what needs to be explained”)? How can we compare different types of “cognitively inspired” systems (e.g., different types of explanans) trying to model the same phenomenon? And, finally, which kind of explanation do we need in the context of cognitively inspired artificial systems?

Let’s start with the latter question, since there are different views from the philosophy of science that are relevant in the context of cognitive modelling as well. First of all, intuitively, the notion of “explanation” is closely linked to those of causality and prediction. A good “explanation”, whatever this term means, should be able to provide causal and predictive models of a certain phenomenon. However, different types of theories have been proposed to define what a correct “explanation” is from a scientific viewpoint (a recent reference dealing with explanation and computational models of cognition is Miłkowski, 2013). We will briefly review some of them, without any pretence of exhaustivity on this monumental topic.

The first type of theory about explanation is the so-called Deductive-Nomological (DN) Explanation. According to this view, introduced by Hempel and Oppenheim (1948), there are some strict characteristics that an explanans has to satisfy in order to explain a given phenomenon (the explanandum). In particular, the explanandum is seen as something that needs to be logically derived, via deduction, from the explanans. While, intuitively, this theory adequately addresses a normative notion of explanation (since it assumes that the explanans provides the causes, i.e., the necessary and sufficient conditions, to understand the explanandum), such a requirement is very strict, since there are many good explanations, in the scientific fields as well, where the elements with an explanatory role (the explanans) are not able to completely “derive” a deductive account of the explanandum phenomenon (i.e., not all empirical laws work in this way). DN Explanations, therefore, are impossible to obtain via empirical research done with computational models of cognition.

Other explanatory theories developed in the literature concern so-called “teleological”, “evolutionistic”, and “mechanistic” explanations. We will briefly describe them by using a running example from the biological domain. Let us suppose that our aim is to explain the phenomenon according to which chameleons change their skin colour. This usually happens in the presence of a predator (they assume different colour configurations based on the predator they perceive) or of potential mating partners. Now, if we are interested in an explanation of why chameleons more often assume the colour configuration associated with a particular predator (e.g., birds), a possible answer could be that “the number of bird predators is greater than the number of other animals and, thus, this has determined a stronger selective pressure”. This is a typical example of an evolutionistic explanation, a type of explanation that plays an important role in many scientific theories. If we suppose, however, that the focus of our interest is simply to understand why chameleons, in general, change their skin colour, we could have other explanations. For example, a teleological explanation (from the Greek “telos”, meaning end or purpose) assumes that, in order to explain a phenomenon F, one has to point out the ultimate purpose that F allows one to achieve. In the example, if someone tells us that “chameleons change their skin colour to camouflage themselves and escape from predators”, she simply provides an explanation of the purpose of the phenomenon to be explained. However, can we say that this kind of explanation helps us understand “why” chameleons change their colour? Of course this depends on the informative goal that we intend to satisfy with our question (i.e., what kind of “why” we are talking about) but, if we suppose that we are interested in the mechanisms that determine that phenomenon, we cannot really declare ourselves satisfied by that answer.
On the other hand, if we receive the following explanation: “The skin colour change in chameleons is due to the response of some pigment-containing cells (chromatophores) in the animal’s skin to nervous and endocrine stimuli”, we would probably be satisfied by this answer. In particular, our satisfaction would probably be derived from the fact that this kind of explanation shows the “mechanisms” that determine the phenomenon we want to understand. This kind of explanation is called “mechanistic” and represents the kind of explanation that it is important to target when building artificial models of cognition. The crucial point in cognitive modelling, indeed, is exactly that of trying to build artificial artefacts able to shed light on the inner, unexplained mechanisms determining the behaviour of a given natural system. In the example provided, the very simple mechanistic explanation was also a causal explanation. However, in many cases, we are forced to use the so-called “inference to the best explanation” (IBE), i.e., a sort of empirically sound inductive/abductive procedure that can reasonably explain - given the state-of-the-art knowledge on a given phenomenon - certain mechanisms. Of course this may also mean that there are explanatory theories of a given phenomenon that can turn out to be false. The scattering of alpha particles explained by Rutherford’s theory of the atom or Lorentz’s theory explaining clock retardation, for example, are now not thought to be true. In general, according to the IBE principle, it holds that, out of the class of potential explanations that we may have of some phenomenon, we should infer that the best explanation is the true one or the one that contextually seems to be “most true”.[1]
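The selection step at the heart of IBE can be illustrated with a toy sketch. All names, the candidate list, and the evidence-coverage scoring scheme below are illustrative assumptions introduced here, not part of the text; real IBE involves far richer criteria (simplicity, coherence, scope) than counting matched observations.

```python
# Toy sketch of "inference to the best explanation" (IBE): among candidate
# explanations of a phenomenon, we provisionally infer the one that best
# accounts for the available evidence. The scoring scheme is illustrative.

def best_explanation(candidates, evidence):
    """Return the candidate whose predictions cover the most evidence."""
    def coverage(candidate):
        return len(candidate["predicts"] & evidence)
    return max(candidates, key=coverage)

# Hypothetical evidence about the chameleon example used in the text.
evidence = {"colour change", "response to nervous stimuli"}
candidates = [
    {"name": "teleological", "predicts": {"colour change"}},
    {"name": "mechanistic", "predicts": {"colour change", "response to nervous stimuli"}},
]

print(best_explanation(candidates, evidence)["name"])  # → mechanistic
```

Even in this caricature, the selected explanation is only the “most true” relative to the current evidence set: adding or removing observations can change the winner, mirroring the fallibility of IBE noted above.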

In general, all these kinds of explanatory accounts can be useful in cognitive modelling and, since the investigated phenomena are complex ones, a wise move is to maintain an attitude of “explanatory pluralism” (even if, as we have briefly seen, mechanistic explanations seem to be the most important).

In order to further develop this “ecumenical” argument, let us consider in some additional detail the role of the above-mentioned teleological explanation. As the reader has probably already guessed, this kind of explanation assumes the ascription of some mental attitudes (and therefore the use of some “folk psychology” terms like “belief”, “desire”, and “goal”) to the non-human system (either natural or, in our case, artificial) involved in the phenomenon one intends to explain. Indeed, if we assume an “intentional stance”, to use Dennett’s terminology that was introduced above, towards the system whose behaviour we aim to explain and predict, we are required to make exactly that ascription as an external observer. Now, why should we be interested in preserving this kind of observer-based explanation within a computationally grounded science of the mind? On this point, Valentino Braitenberg is clear when he warns,

It is pleasurable and easy to create little machines that do certain tricks. It is also quite easy to observe the full repertoire of behaviour of these machines - even if it goes beyond what we originally planned - as it often does. But it is much more difficult to start from the outside and try to guess internal structure just from the observation of behaviour. It is actually impossible in theory to determine exactly what the hidden mechanisms [are] without opening the box, since there are always many different mechanisms with identical behavior. Quite apart from this, analysis is more difficult than invention in the sense in which, generally, induction takes more time to perform than deduction: in induction one has to search for the way whereas, in deduction, one follows a straightforward path.

(Braitenberg, 1986: 20)

So, why should we consider legitimate the use of a teleological explanation in a computational account of the study of the mind? The answer can be found, again, in the cybernetic tradition. Already in 1943, Rosenblueth, Wiener, and Bigelow had proposed, in their famous paper “Behavior, purpose and teleology”, the inclusion of teleological vocabulary to describe the behaviour of feedback machines (in particular those equipped with negative feedback[2]). In particular, they pointed out how it was justified to claim that certain classes of living organisms and machines have “purposeful” behaviour, since their behaviour is teleologically guided by continuous (negative) feedback aimed at reaching a certain goal state. As a consequence, all servomechanisms can display, to different degrees, teleological behaviour. In addition, it is worth noting that holding this kind of teleological “stance” can be considered a pragmatic option for the observer, and one that is justifiable if a system behaves as a rational agent and if there are no better, i.e., mechanistic, explanations available. As Cordeschi (2002) points out:

If one considers the observer as model builder, we can see that his third-person viewpoint is the current one in scientific practice, where one constructs theories and models by selecting the functions believed to be essential to the phenomenon under study. The observer’s role cannot be sidestepped, any more than the process of identifying hypothetical constructs in theory building. As we have seen, models that include biological constraints are a priori no better or worse than models that include different constraints. These models identify functions like any other model, and it seems unlikely that identifying biological functions is per se more secure, more testable and, above all, less observer-relative than identifying other functions.
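The cybernetic notion of purposeful, negative-feedback behaviour discussed above can be made concrete with a minimal sketch. The function name, gain value, and numbers below are illustrative assumptions, not taken from the text: the point is only that a goal state, an error signal, and a correction opposing the error suffice to produce behaviour an observer may legitimately describe as “goal-directed”.

```python
# Minimal sketch of a negative-feedback ("servo") control loop, in the
# spirit of Rosenblueth, Wiener, and Bigelow (1943): the error signal
# continuously steers the system toward a goal state, so its behaviour
# can be described as "purposeful" by an external observer.

def negative_feedback_servo(goal, state, gain=0.5, steps=50, tolerance=1e-3):
    """Drive `state` toward `goal` by repeatedly correcting the error."""
    for _ in range(steps):
        error = goal - state          # feedback: compare goal with current state
        if abs(error) < tolerance:    # goal state (approximately) reached
            break
        state += gain * error         # negative feedback: correction opposes the error
    return state

# Example: an aiming mechanism converging on a target bearing of 30 degrees,
# loosely echoing the radar-controlled cannon of the cybernetic tradition.
final = negative_feedback_servo(goal=30.0, state=0.0)
print(round(final, 2))  # → 30.0
```

Note that nothing in the loop mentions “desires” or “goals” as inner states: the teleological description is the observer’s gloss on a purely mechanistic error-correction process, which is exactly the pragmatic point made above.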


After this rapid, and partial, overview of the different kinds of explanations, in the following section we will consider what we really mean when we talk about “cognitive/biological” plausibility and how this multifaceted element can be useful for comparing different types of artificial systems (cognitively inspired or not).

  • [1] In this respect, a proponent of the contextual account of explanation like Van Fraassen claims, “A success of explanation is a success of adequate and informative description” (Van Fraassen, 1980: 156-157). It is worth noting that there are many other types of explanations (for example, the “functional account” proposed by Cummins, where an explanation consists of providing “a function that a structure or system is believed to possess”). Such types of explanations are also useful but seem to be less relevant with respect to the aims of computational methods applied to the cognitive research agenda. To use the words of Piccinini (2007: 125): “Computational models that embody functional explanations explain the capacities of a system in terms of its sub-capacities. But this explanation is given by the assumptions embodied in the model, not by the computations performed by the model on the grounds of the assumptions.”
  • [2] In the cybernetic tradition, machines capable of adapting themselves actively to the environment via trial-and-error processes based on negative feedback (used for autocorrection) were called “servomechanisms” or “negative feedback automata”. An example of this type of machine is a radar-controlled cannon, where the radar provides constant information and feedback about a moving target (e.g., an airplane) so as to alter the gun’s aim. This example should not surprise the reader, since research in cybernetics received a lot of impetus from World War II and its main early applications were in the military domain.