Levels of analysis of computational systems

A further important and influential conceptual tool for the analysis of different types of artificial systems (including cognitive ones) is represented by a number of well-known proposals about the levels of organization through which to observe the behaviour of such systems. In this respect, different systematizations have been proposed. The first and most influential one was made by David Marr (Marr, 1977, 1982). According to Marr, the computational account of a cognitive phenomenon can be formulated at three different levels: the level of the computational theory (also called level 1), the level of algorithms and representations (level 2), and the implementation level (level 3) (Marr, 1982, Chapter 1). The level of the computational theory is the most abstract; it concerns the specification of the task associated with a given cognitive phenomenon. At this level, cognitive tasks are characterized only in terms of their input data, the results they produce, and the overall aim (the “goal”) of the computation, without any reference to the specific processes and cognitive mechanisms involved. In other words, at the level of the computational theory, a cognitive task is accounted for in terms of a purely functional correspondence (mapping) between inputs and outputs. The algorithmic and implementation levels concern, at different degrees of abstraction, the realization of the task described at the level of the computational theory. The aim of the algorithmic level is to specify “how” a certain task is performed: it deals with the particular procedures that are carried out and with the representational structures on which such procedures operate. The implementation level is concerned with the features of the physical device (e.g., the structures of the nervous system or of a particular artificial neural network architecture) that implement the representations and the procedures singled out at the algorithmic level. The relation between a theory expressed at the computational level and the underlying algorithmic level can be regarded as the relation between a function (in the mathematical sense) that is computable and a specific algorithm for calculating its values. The aim of a computational theory is to single out a function that models the cognitive phenomenon to be studied. Within the framework of a computational approach, such a function must be effectively computable. However, at the level of the computational theory, no assumption is made about the nature of the algorithms and their implementation.

It is worth noting that Marr’s levels are not specifically concerned with cognitive systems; rather, they pertain to the analysis of any computational system whatsoever: they can be applied to any system that can be studied in computational terms. Marr illustrates his tripartite analysis by resorting to the example of a device whose functioning is well understood: a cash register, a machine computing arithmetic addition. At the computational level, the functioning of the register can be accounted for in terms of arithmetic and, in particular, the theory of addition; what is relevant at this level is the computed function (addition) and its abstract properties, such as commutativity or associativity (Marr, 1982: 23). The level of representation and algorithm specifies the form of the representations and the processes elaborating them: “We might choose Arabic numerals for the representations, and for the algorithm we could follow the usual rules about adding the least significant digits first and ‘carrying’ if the sum exceeds 9” (ibid.). Finally, the level of implementation has to do with how such representations and processes are physically realized; for example, the digits could be represented as positions on a metal wheel or, alternatively, as binary numbers coded by the electrical states of digital circuitry.
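
To make this tripartite analysis concrete, the following sketch (in Python, purely illustrative and not drawn from Marr’s text; the function names are my own) contrasts the computational level, where only the input/output mapping of addition matters, with the algorithmic level, where a particular procedure operates on a particular representation, namely decimal digit strings added least significant digit first with carrying.

    # Illustrative sketch: the same task (addition) described at two of
    # Marr's levels. Function names are chosen for clarity only.

    # Computational level: only the abstract input/output mapping matters.
    def addition_spec(x: int, y: int) -> int:
        return x + y  # the mapping (x, y) -> x + y, with no commitment to a procedure

    # Algorithmic level: one particular procedure over one particular
    # representation - Arabic (decimal) digit strings, adding the least
    # significant digits first and carrying when a column sum exceeds 9.
    def add_decimal_strings(a: str, b: str) -> str:
        digits_a = [int(d) for d in reversed(a)]
        digits_b = [int(d) for d in reversed(b)]
        result, carry = [], 0
        for i in range(max(len(digits_a), len(digits_b))):
            column = carry
            column += digits_a[i] if i < len(digits_a) else 0
            column += digits_b[i] if i < len(digits_b) else 0
            result.append(column % 10)
            carry = column // 10
        if carry:
            result.append(carry)
        return "".join(str(d) for d in reversed(result))

    # The two descriptions agree on every input, yet say different things:
    assert add_decimal_strings("58", "67") == str(addition_spec(58, 67))  # "125"

    # The implementation level does not appear here at all: the same algorithm
    # could be realized by positions on a metal wheel or by digital circuitry.

Nothing in the first function constrains the second; conversely, many different algorithms and representations (binary numbers, an abacus, a metal wheel) could realize the very same computational-level specification, which is precisely the independence between levels that Marr emphasizes.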

Marr’s distinctions are clear, theoretically well grounded, and necessary in the context of our discourse, because this dimension of analysis can be naturally mapped onto the functional/structural dichotomy, thereby extending our understanding of different types of artificial systems (whether they are cognitively inspired or not). However, in the field of cognitive science, Marr’s levels have sometimes been misinterpreted and confused with other distinctions and with other “levels”.

Indeed, the first confusion that has sometimes arisen is between Marr’s levels and the three levels proposed by Zenon Pylyshyn in his book Computation and Cognition (1984). In particular, Pylyshyn claims that “explaining cognitive behavior requires that we advert to three distinct levels of a system” (xviii); he terms them the semantic, syntactic (or symbolic), and physical (or neurophysiological) levels.

According to Pylyshyn,

Organisms and artifacts that need to be described at the semantic level, i.e., by attributing to them goals, beliefs and desires, are those whose representations are physically instantiated, in the brain or in a hardware, as codes, i.e., as symbol structures capable of causing behavior.

(Cordeschi, 2002)

From Pylyshyn’s point of view, systems of this kind include human minds and software. The semantic level, according to Pylyshyn (1989), “explains why people, or appropriately programmed computers, do certain things by saying what they know and what their goals are and by showing that these are connected in certain meaningful or even rational ways” (57). The semantic level, then, is the one concerned with the “semantic content” of representations. The syntactic (or symbolic) level, on the other hand, concerns the way in which such semantic content of knowledge and goals is encoded; in Pylyshyn’s view (1989: 57), it is assumed to be encoded by symbols such as those posited by the Physical Symbol Systems Hypothesis (PSSH). Finally, the physical level concerns the implementation details of the symbolic level in both brains and machines. Pylyshyn also identifies three kinds of explanations corresponding to each of his levels, namely, intentional, functional, and biological explanations.

In spite of some superficial similarities, however, Marr’s and Pylyshyn’s proposals are situated at “different levels”. In the first place, there is no reason to assume that the level of the computational theory has anything to do with semantic content or with “intentional” explanations.[1] The highest level of Marr’s hierarchy, in fact, can be characterized in terms of the well-understood mathematical concept of function, without any need to involve semantic or intentional notions. It must also be noted that Marr’s levels do not pertain to the overall structure of the mind, nor do they constitute a general point of view on the mind as a whole. In other words, Marr’s levels are local levels, which do not aspire to identify a general structure of cognition; rather, they are aimed at explaining the functioning of specific cognitive or computational components.

Moreover, it is not guaranteed that a mental process can always be studied in computational terms, i.e., according to the three levels of Marr’s methodology. Marr (1977) explicitly admits that in some cases it is unlikely that an abstract characterization of the computed function (corresponding to the computational level) can be separated from a detailed description of the processes calculating its values (corresponding to the level of algorithms and representations). For example, “this can happen when a problem is solved by the simultaneous action of a considerable number of processes, whose interaction is its own simplest description” (Marr, 1977).

Furthermore, it is important to stress that, in the study of the mind, there is no reason to exclude that Marr’s approach can be applied both to “macroscopic” cognitive phenomena (e.g., those concerning natural language processing, planning, or reasoning) and to “microscopic” ones (for example, the behaviour of single neurons). What is crucial is that such phenomena are suitable for a computational analysis.

Finally, as mentioned, Marr’s levels are local levels. As the philosopher Jose Luis Bermudez correctly observes: “Marr’s account ... would still fall a long way short of providing a picture of the mind as a whole” (Bermudez, 2005: 27) and “Marr’s analysis ... is not itself pitched at the right sort of level to provide a model of how we might understand the general idea of a hierarchy of explanation applied to the mind as a whole” (ibid.). From the viewpoint of Bermudez, this is a limitation of Marr’s proposal. In this respect, my evaluation is antithetical: the interest of Marr’s levels lies exactly in the fact that they do not offer a questionable and premature picture of the general structure of the mind; rather, they provide a solid methodological tool for the local analysis of many cognitive phenomena. It is exactly these kinds of conceptual tools that are needed in the fields of cognitive AI and computational cognitive science.

Pylyshyn’s levels of analysis are better associated with, and inspired by, another well-known “levels-based” characterization of the behaviour and analysis of artificial systems: the Knowledge, Symbol and Physical levels proposed by Allen Newell (1982). In particular, Newell posited that such a hierarchy of levels characterizes the PSSH architecture of an intelligent system. In his own words (Newell, 1982: 98): “The system at the knowledge level is the agent. The components at the knowledge level are goals, actions, and bodies. Thus, an agent is composed of a set of actions, a set of goals and a body”. The medium at the knowledge level is knowledge. Thus, the agent processes its knowledge to determine the actions to take. Finally, the behaviour law is the principle of rationality: actions are selected to attain the agent’s goals. According to Newell,

To treat a system at the knowledge level is to treat it as having some knowledge and some goals, and believing it will do whatever is within its power to attain its goals, in so far as its knowledge indicates.

(Newell, 1982)

Newell sees the knowledge level, and the “knowledge level analysis” that an AI designer has to carry out in his or her building efforts, as a central construct for understanding intelligent systems. This level needs to be used in concert with lower-level descriptions: the symbol level, which, as the reader can probably guess, concerns the actual representational format (the symbols) and the information-processing mechanisms used by the system in order to reach its goals (given the knowledge that it possesses), and, finally, the physical level, concerning the actual physical implementation of such symbolic structures.[2] The knowledge, symbol, and physical levels of Newell clearly correspond to the semantic, syntactic, and physical (or neurophysiological) levels of Pylyshyn. As in that case, however, it is not possible to equate Newell’s knowledge level with Marr’s computational level, since they are different dimensions through which to analyze (and also design) intelligent behaviour. Similarly, a mapping between Marr’s representation and algorithmic level and the syntactic (à la Pylyshyn) or symbolic (à la Newell) levels is misleading, since Marr’s analytical tool does not make any assumption about the nature of the representations and algorithms, while Pylyshyn and Newell are explicitly rooted in the PSSH. Finally, a similar discourse also holds for the physical or implementation levels of the three accounts. More specifically, this level seems to be the most “compatible” one across the proposals, but this is only due to the fact that it is highly underspecified in both Pylyshyn’s and Newell’s theoretical constructs.

Interestingly enough, however, both Pylyshyn’s and Newell’s levels of analysis can be (partially) related to another important piece of the cognitive literature concerning the strategies used for explaining and understanding the behaviour of an intelligent system (be it natural or artificial): the different “stances”, i.e., explanatory attitudes, identified by Daniel Dennett.

Dennett, in particular, identifies the so-called physical, design, and intentional stances for explaining the behaviour of a system (Dennett, 1976, 1988). An example that illustrates these different strategies is the following: if we consider chemists or physicists in their laboratories, studying certain kinds of molecules, we can imagine that they try to explain (or predict) the molecules’ behaviour through the laws of physics. This is what Dennett calls the “physical stance”. There are cases, however, in which such laws are an inadequate (or not the most efficient) way to predict the behaviour of a system. For example, when we ride a bike, we can reliably predict that its speed will decrease if we apply the brakes, since the bike itself is designed this way. To make this kind of prediction, however, we do not need to know the precise physical mechanisms governing all the atoms and molecules in the braking system of the bike; it is sufficient to rely on our experience and knowledge of how the bike is designed. Dennett describes this as the “design stance”. The third strategy he proposes, the intentional stance, corresponds to the attitude - assumed by a human observer - of attributing “intentionality” to a given system, whether biological or artificial, as an option for the observer to understand and explain its behaviour. As Dennett states:

There is another stance or strategy that one can adopt: the intentional stance. Here is how it works: first you decide to treat the object whose behavior is to be predicted as a rational agent; then you figure out what beliefs that agent ought to have, given its place in the world and its purpose. Then you figure out what desires it ought to have, on the same considerations, and finally you predict that this rational agent will act to further its goals in the light of its beliefs. A little practical reasoning from the chosen set of beliefs and desires will in many - but not in all - instances yield a decision about what the agent ought to do; that is what you predict the agent will do.

(Dennett, 1988: 17)

In Dennett’s view, “there is no difference between attributing intentionality to a car, a thermostat or a program: the intentionality of any artifact is always ‘derived’ from that, ‘intrinsic’ or ‘original,’ of its designer” (Cordeschi, 2002: 261). In addition, according to him, a system does not necessarily have to have representations in order to be “intentionally” described in terms of beliefs, purposes, and desires; it is sufficient for it to behave as if it were a rational agent. This assumption was critiqued by computationalists[3] à la Fodor (Fodor, 1986) and, as should be evident, also contrasts with the underlying symbolic-grounded assumptions made by both Newell and Pylyshyn.

A point of contact, however, can certainly be traced between the intentional stance and both the “semantic level” proposed by Pylyshyn - explicitly calling for an “intentional” explanation,[4] and therefore for an “intentional stance” assumed by the observer with respect to the observed system - and Newell’s “knowledge level”. As Newell himself admits (Newell, 1988: 2):

At about the time that Brainstorms was published, I discussed a construct that I called the knowledge level (Newell, 1982). I did this in the context of an AAAI presidential address, as part of my general view that computer science and AI, by their practice, have homed in on essentially correct notions of several central concepts in cognitive science, in particular symbols (Newell & Simon, 1976; Newell, 1980) and knowledge (Newell, 1982; Newell, 1986), but perhaps others as well, such as the architecture. Before I finished preparing that address, I found and read Brainstorms. It was instantly clear that the knowledge level and the intentional stance are fundamentally the same, and I indicated as much in the talk. Dan’s assessment seems to agree (Dennett, 1986). Thus, there is no question about the basic relationship between the two - they are formulations of the same solution.

Despite this conceptual alignment at the surface level, however, it is Newell himself who pointed out some differences between these two theoretical formulations (Newell, 1988). Among those identified by him (for details I recommend reading Newell, 1988), the most relevant ones, for the purposes of this book, are the “stance versus system” view and the “technical development” argument. With regard to the former, Newell argues that Dennett’s theory refers to “stances” - and therefore takes an “observer” point of view - while his own theory refers to “systems”. Newell’s levels, in fact, are intrinsically system-oriented: i.e., they describe a hierarchy of a general system architecture. These are obviously different aspects of situations where analysts, who take stances, describe parts of reality as systems of various types. This aspect is also related to the second point raised by Newell: the technical development issue. He correctly points out that the knowledge level view has had a concrete technical impact in the field of AI, while Dennett’s intentional stance has had no technical development or impact at all within the AI community, in either the design or the development of AI systems (I assume that I will not be too far from the truth if I say that the youngest AI researchers do not know at all what the “intentional stance” is). Nonetheless, Dennett’s contribution (which, by the way, came first from a historical perspective) has provided, as we will see in more detail in Chapter 3, important insights into the explanatory role ascribed to both cognitive and non-cognitive artificial systems.

  • [1] Intentional explanations can be described as explanations in which the behaviour of a system is characterized by attributing to it some “intentionality” to perform a certain task (given its knowledge about the world). As we will see, this aspect represents a crucial part of Dennett’s “intentional stance” proposal.
  • [2] Following the PSSH (introduced in Chapter 1), the physical or “hardware” level is considered less important by Newell in order to understand the emergence of intelligent behaviours in artificial systems.
  • [3] Computationalism is a philosophical view that claims that the mental processes of the mind are computations or, more precisely, that “the theoretical constructs of a theory of the mind are both the computational processes that are supposed to occur in the mind and the data structures (‘representations’) that such processes manipulate” (Cordeschi and Frixione, 2007). Some of its main assumptions are borrowed from information-processing psychology, but it should not be confused, as has unfortunately been done, with either Newell and Simon’s PSSH or Fodor’s theory of the Language of Thought (see Cordeschi and Frixione, 2007, for a detailed account of these aspects).
  • [4] We anticipate here that intentional explanations can be seen as a type of so-called “teleological explanations”, which will be introduced in Chapter 3.
 