
Functionalist vs. structuralist design approaches

The distinction between functionalist and structuralist design approaches is important in the context of the debate over the explanatory role played by artificial models (and systems) with respect to the natural cognitive systems taken as their source of inspiration.

Functionalism was introduced in the philosophy of mind by Hilary Putnam in his seminal article Minds and Machines (Putnam, 1960) as a position on the type identity of “mental states”. In this context, mental states can be understood and characterized on the basis of their functional role. In particular, two tokens are assumed to belong to the same mental state if they are in the same functional relation with the other mental states and with the input/output of the system.[1]

This approach had a direct influence on AI since, in this context, it led to the definition of what we can call a design approach based on the notion of “functional equivalence” between the cognitive faculties to be modelled and the corresponding mechanisms implemented in AI programs. Indeed, in that context, its more radical formulation postulated the sufficiency, from an epistemological perspective, of a weak equivalence (i.e., an equivalence in terms of functional organization) between cognitive processes and AI procedures. In other words, it posited that, from an explanatory point of view, the relation between the “natural mind” and the “artificial software” could be based purely on a macroscopic equivalence between the functional organization of the two systems and their input-output specification.

This position has been widely criticized in the literature over the last few decades. In particular, models and systems designed according to the “functionalist” perspective are not good candidates for advancing the science of cognitive AI since, as in the case of the airplane, the mechanisms and the overall design choices adopted to build such artefacts prevent them from having any kind of explanatory role with respect to their analogous systems available in nature. This is the case, for example, with recent AI technologies, including some self-proclaimed “cognitive computing” systems like IBM Watson or Alpha Go.[2] Despite the propaganda in both the media and the scientific literature, such systems cannot, in fact, be qualified as “cognitive” since they do not have any kind of explanatory role with respect to: (i) how humans organize, retrieve, and reason (on) the information stored in their minds when answering questions (in the case of IBM Watson) or (ii) how people make decisions when planning their next move in the game of Go (in the case of Alpha Go).

In other words, such systems, just like the vast majority of current AI systems (including very popular ones, from Siri to Alexa or Cortana), are “functional” systems. They “function as” a natural system (in terms of the output they provide, given the input that they process, and in terms of the surface organization of their internal components), but the internal mechanisms determining that output are completely different from the ones humans rely on. A mere artificial imitation of cognitive capabilities, therefore, does not necessarily function according to the same principles. This is an important aspect to point out since attempts to ascribe cognitive explanations to functional systems are numerous (as with the above-mentioned IBM and Google systems). The confusion also arises from the improper use, nowadays, of expressions like “cognitive computing”, which is usually intended as a wide umbrella term for the entire field of systems able to provide some kind of interaction with humans.[3]
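
To make the notion of weak, input-output equivalence concrete, consider the following minimal sketch in Python (all function names and facts are hypothetical, chosen only for illustration): two tiny question-answering “systems” that are functionally equivalent, since they produce the same answers to the same questions, even though one merely retrieves precompiled answers while the other applies explicit rules to a small knowledge base. An evaluation based only on input-output behaviour cannot tell them apart.

    # Two systems with identical input-output behaviour but entirely different
    # internal mechanisms; only their surface behaviour is equivalent.
    # All names and facts below are hypothetical, for illustration only.

    def lookup_answerer(question: str) -> str:
        """Answers by retrieving a precompiled answer: no reasoning involved."""
        precompiled = {
            "Is a penguin a bird?": "yes",
            "Can a penguin fly?": "no",
        }
        return precompiled[question]

    def rule_answerer(question: str) -> str:
        """Answers by applying explicit rules to a small knowledge base."""
        is_a = {"penguin": "bird"}       # taxonomic fact
        cannot = {"penguin": {"fly"}}    # known exception to "birds fly"
        if question == "Is a penguin a bird?":
            return "yes" if is_a.get("penguin") == "bird" else "no"
        if question == "Can a penguin fly?":
            return "no" if "fly" in cannot.get("penguin", set()) else "yes"
        raise KeyError(question)

    # "Weak" (functional) equivalence: same outputs for the same inputs...
    for q in ("Is a penguin a bird?", "Can a penguin fly?"):
        assert lookup_answerer(q) == rule_answerer(q)
    # ...yet only the second system's internals could even be candidates for
    # explaining *how* the answer is produced.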

Diverging from functionalism, there is another possible method to follow from a design point of view: the structural approach. This approach claims the necessity of a stronger, constrained connection between the internal architecture and procedures of the designed artificial systems and the corresponding ones available in nature. According to such an approach, structurally constrained artificial models and systems can be useful both to advance the science of AI in terms of technological achievements (e.g., in tasks that are easy for humans but very hard for machines relying on non-cognitively inspired approaches, such as commonsense reasoning) and to play the role of “computational experiments”, able to provide insights and results useful for refining or rethinking theoretical aspects concerning the target natural system used as source of inspiration (Cordeschi, 2002; Miłkowski, 2013).
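
As a toy illustration of what a “structural constraint” can amount to in practice, the following Python sketch borrows a well-known example from the cognitive modelling literature (not from this chapter): the base-level activation equation of the ACT-R cognitive architecture, B = ln(sum_j t_j^(-d)), where each t_j is the time elapsed since a past use of an item and d is a decay parameter. A purely functional system could store and retrieve facts from a plain dictionary; here, instead, retrieval is constrained to reproduce human-like frequency and recency effects. The threshold value below is a hypothetical choice of ours.

    import math

    DECAY = 0.5        # ACT-R's conventional default for the decay parameter d
    THRESHOLD = -1.0   # retrieval threshold (hypothetical value, for illustration)

    def base_level_activation(ages: list[float], d: float = DECAY) -> float:
        """B = ln(sum_j t_j**(-d)): grows with frequency of use, decays with time."""
        return math.log(sum(t ** -d for t in ages))

    def retrieve(fact: str, ages_of_uses: list[float]) -> str | None:
        """Retrieval succeeds only if the fact's activation exceeds the threshold."""
        if base_level_activation(ages_of_uses) > THRESHOLD:
            return fact
        return None  # the fact is (momentarily) unavailable, as in human forgetting

    # A fact used often and recently is retrieved; a stale one is not.
    print(retrieve("capital of Italy: Rome", ages_of_uses=[1.0, 10.0, 50.0]))  # retrieved
    print(retrieve("a phone number seen once", ages_of_uses=[5000.0]))         # None

The point of such a constraint is not better task performance: it is that the model’s successes and failures can now be compared against human behavioural data, which is what gives the model its candidate explanatory role.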

An immediate problem arising in this view is that it is not possible to build a completely structural and constrained artificial model, since it is not possible to reproduce a realistic artificial “replica” of a natural system.[4] The search for increasingly structural models, in fact, produces an asymptotic regression to the microscopic physical world, until it reaches the well-known Wiener paradox summarized in his sentence, “The best material model of a cat is another, or preferably the same, cat” (Rosenblueth and Wiener, 1945). In short, this “paradox” advocates for the realization of proxy-models, not replicas, of a given natural system, by pointing out the difficulty of such a challenge. In a similar way, Pylyshyn observed, about cognitive modelling (Pylyshyn, 1979: 49), that if we do not formulate any restriction on a model, we obtain the functionalism of the Turing machine, while if we apply all possible restrictions, we reproduce a whole human being.

Thus, the key point here is the search for the right level of description and for the corresponding constraints to enforce at that level in order to carry out human-like computation. In this view, the only way to make progress is through the development of plausible structural models of our cognition, based on a more constrained equivalence between AI procedures and their corresponding cognitive processes.

A suitable solution for this state of affairs consists of considering the functionalism/structuralism dichotomy as the two extremes of a continuum. Between the explanatory uselessness of purely functional artificial models (which, nonetheless, can achieve impressive performances in specific tasks, as shown by systems like Watson and Alpha Go) and the unfeasible realization of purely structural models, it is possible to identify, in between, a plethora of plausible artificial proxy-models with different degrees of explanatory power with respect to the natural systems taken as sources of inspiration.

  • [1] Within the philosophical tradition, functionalism has been proposed in many different forms (for example, Jerry Fodor posed stricter requirements with respect to the functionalist analysis of the mind proposed by Putnam). We will not dwell here on these different proposals. An important theoretical notion of some functionalist accounts that, however, had an indirect impact in the context of AI is the notion of “multiple realizability”. This notion concerns the fact that the functional organization of different mental states can be “realized”, e.g., implemented, in different physical systems (including, for example, the human brain or computer hardware). This notion was used to distinguish the functional level from the physical one and, as a consequence, to point out that, in order to analyze mental states, only the functional level is important. As we will see in the following sections, a similar argument concerning the minor importance of the “physical level” was expressed, although starting from different premises and focusing on “artificial” rather than “natural” systems, by Allen Newell in his “levels-based characterisation” of intelligent behaviour in artificial systems (please refer to the following pages for a more precise introduction to this issue).
  • [2] IBM Watson is a question-answering system that defeated the human champions of a game known as Jeopardy!, while Alpha Go is a system developed by Deep Mind (a company later acquired by Google) that defeated the human world champion of Go, a popular strategy game (mostly known in Asia). We will analyze these two systems in more detail in Chapter 4.
  • [3] We will discuss in more detail in the following chapters why this kind of usage of the term is incorrect.
  • [4] And even when this investigation is possible, e.g., in the field of so-called “synthetic complete models” (available for very simple organisms), the interpretation of the model remains problematic. A famous example is the case of the nematode known as Caenorhabditis elegans: a very simple organism endowed with about 300 neurons, whose DNA and expression pattern mapping have been completely described by biologists. An early project aimed at building a detailed simulation model of this organism showed how, despite the completeness of much of the empirical data about this simple organism, the complexity of genetic and cellular interactions made a full understanding and testing of biological hypotheses extremely problematic (Kitano, Hamahashi, and Luke, 1998: 142). Based on this example, it should not be surprising that similar projects repeated on a larger scale, like the recent Human Brain Project (aimed at providing a “whole brain computational simulation” by 2023), face the very same difficulties and represent, nowadays, an example of myopic research investment and scientific failure.
 