
Functional and structural symbolic systems

Symbolic approaches, as mentioned in Chapter 1, are a family of different approaches relying on a plethora of problem-solving strategies and representational assumptions, ranging from classical logic to probabilistic, fuzzy, default, and various non-monotonic extensions. As with connectionist modelling frameworks, the adoption of this family of approaches to model cognitive faculties in artificial systems can also be seen through the functional/structural continuum. As mentioned, different types of novel and more flexible symbolic formalisms have been proposed over the last few decades in order to overcome a foundational problem of symbolic approaches relying on classical logic: dealing with commonsense reasoning.

In a commonsense situation, in fact, agents do not have access to complete information about the environment and about its changes. This can lead to situations in which, when new knowledge is acquired, the previously drawn conclusions need to be withdrawn and revised (technically, these are called defeasible or non-monotonic inferences). This process cannot happen in classical or monotonic logic where, once a conclusion is deductively derived from certain premises, it continues to hold even if new premises (i.e., new knowledge) are added. A classic example of a simple non-monotonic inference is the following: if x is a bird (premise), then x can fly (conclusion). But if one comes to know that x is a penguin (a further premise), one has to reconsider the previously accepted conclusion. These kinds of inferences are usually tackled in non-monotonic approaches by resorting to so-called “defaults” (i.e., established generalizations about certain states of affairs presumed to be true until proven otherwise, introduced in Reiter, 1980). For example: one can typically assume that the “default” number of legs of a dog is four, but a dog with three legs would still belong to the “dog frame” (to use Minsky’s terminology).
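The defeasible pattern just described can be conveyed with a deliberately tiny sketch. The toy knowledge base and the function name below are illustrative assumptions, not part of Reiter's formal machinery; the point is only that a default conclusion is withdrawn once new, conflicting knowledge arrives:

```python
# A minimal sketch of defeasible inference: the "birds fly" default
# holds until an exception (being a penguin) becomes known.

def can_fly(facts):
    """Apply the default rule: birds fly, unless known to be penguins."""
    if "penguin" in facts:   # a known exception blocks the default
        return False
    if "bird" in facts:      # the default generalization applies
        return True
    return None              # nothing can be concluded

facts = {"bird"}
print(can_fly(facts))        # default conclusion: True

facts.add("penguin")         # new knowledge is acquired
print(can_fly(facts))        # the earlier conclusion is withdrawn: False
```

Note that adding a premise reverses the conclusion, which is exactly what cannot happen in a monotonic logic.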

Within the class of non-monotonic approaches to reasoning, John McCarthy[1] proposed the idea of a logical system with “circumscription” (McCarthy, 1980). His idea was to circumscribe as “anomalous” potential exceptions to a typical situation, like the one stated by the sentence: “If x is a bird, then x can fly”. In this case, the property of “non-flying” is anomalous with respect to “being a bird” and, thus, such an anomalous property is circumscribed. In other words, this property is assumed to have the smallest possible extension with respect to the information at one’s disposal. The sentence in the example, therefore, is reformulated as follows: “If x is a bird, and x is not an anomalous bird, then x can fly”. The investigation of such problems provided the background for a whole series of research projects (which were then called “logicist”) on the use of logic as a medium for representing the commonsense knowledge that is at the core of the agent’s model of the world. However,

these investigations rarely provided suggestions for actual implementation or, in general, for the solution of heuristic reasoning problems. Thus one often witnessed a proliferation of investigations into various forms of circumscription and non-monotonic rules, which also led to some defections.

(Cordeschi, 2002: 202)
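The minimization at the heart of circumscription, preferring those interpretations in which the “anomalous” predicate has the smallest possible extension, can be illustrated with a toy model enumeration. The single individual and the enumeration below are illustrative assumptions, not McCarthy's formal apparatus:

```python
from itertools import product

# Toy circumscription for one bird: keep only interpretations satisfying
# "bird(x) and not anomalous(x) implies flies(x)", then prefer those in
# which the "anomalous" predicate has the smallest extension.

models = []
for flies, anomalous in product([False, True], repeat=2):
    if not anomalous and not flies:
        continue  # violates the circumscribed axiom
    models.append({"flies": flies, "anomalous": anomalous})

# Minimize the extension of "anomalous" (False sorts before True).
min_ab = min(m["anomalous"] for m in models)
preferred = [m for m in models if m["anomalous"] == min_ab]

print(preferred)  # in every preferred model, the bird flies
```

Because no information marks the bird as anomalous, the only minimal model is the one in which it flies; learning that it is a penguin would force anomalous to be true and remove that conclusion.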

Similarly, other important contributions came from the development of other types of logic: fuzzy, modal, temporal, etc. Within these approaches, the fuzzy logic introduced by Lotfi Zadeh (1988), rejecting the idea of a rigid characterization of conceptual structures, was a particularly interesting way to deal with the problem of commonsense reasoning. Fuzzy logic methods, however, despite initial expectations and despite having been used successfully in many real-world applications, have not proven to be an adequate foundation for many commonsense reasoning tasks. As evidence of this state of affairs, it is worth noting that almost all successful fuzzy logic applications are embedded controllers, while most theoretical papers on fuzzy methods dealing with knowledge representation and reasoning have not delivered the expected results.
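The core idea of fuzzy logic, graded rather than crisp set membership, can be sketched in a few lines. The predicate “tall” and its breakpoints below are purely illustrative assumptions:

```python
# A minimal fuzzy membership function: membership in the set of "tall"
# people is a degree in [0, 1] rather than a crisp true/false judgment.
# The breakpoints (170 cm and 190 cm) are arbitrary illustrative choices.

def tall(height_cm):
    """Degree to which a given height counts as 'tall'."""
    if height_cm <= 170:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 170) / 20   # linear ramp between the breakpoints

print(tall(165))   # 0.0  (clearly not tall)
print(tall(180))   # 0.5  (a borderline case)
print(tall(195))   # 1.0  (clearly tall)
```

It is precisely this graded, non-rigid characterization of concepts that made fuzzy methods attractive for commonsense reasoning, even if, as noted above, their practical successes have remained confined mostly to control applications.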

All these different types of symbolic approaches can be seen as occupying different positions along the functional vs. structural continuum.

Probabilistic, fuzzy, and non-monotonic reasoning formalisms, in fact, increase the functional similarity between human and artificial capabilities (since they are more flexible than classical logics), but there is no evidence that they are structurally adequate models of human reasoning and understanding (in many cases there is direct or indirect evidence to the contrary).

What is important to note, however, is that even in their most classical formulations, symbolic systems, although structurally inadequate from a neuroscientific point of view, can be a useful means of computational enquiry, capable of suggesting structural hypotheses about our reasoning mechanisms.

Another classical example (reported also in Minkowski, 2013) can be taken from the past. Let us consider the classical context of well-known cryptarithmetic problems having the form: DONALD + GERALD = ROBERT (Newell and Simon, 1972). In this case, ten distinct digits must be substituted for the ten distinct letters in such a way that the resulting expression is a correct arithmetic sum (526485 + 197485 = 723970). As in the usual presentation of the problem, the hint is given that D = 5. Almost all subjects who solve the problem find the values for the individual letters in a particular sequence: T = 0, E = 9, R = 7, A = 4, and so on. The reason is that only if this order is followed can each value be found definitely without considering possible combinations with the values of the other letters. With this order, in fact, the solver does not have to remember what alternative values he has assigned to other variables, nor does he have to back up if he finds that a combination of assignments has led to a contradiction. In other words, the search behaviour of the information-processing system derives directly from the system’s small short-term memory capacity.

In addition, the empirical fact that solvers do make the assignments in roughly this same order provides an important piece of evidence (others can be obtained by analyzing thinking-aloud protocols and eye movements) that the human information-processing system operates, in certain situations, as a serial system with limited short-term memory. In this case, in fact, the performance of the information-processing system matches the verbal protocol. Furthermore, when the comparison was done with eye movements, the match between the system behaviour and the human data was even higher than the agreement with the verbal protocols (since verbal protocols do not mirror thinking exactly). In other words, the symbolic model usually adopted for this problem describes heuristics as “transformations in a problem space”.
Such a model does not consider neurological constraints and, as such, cannot be considered a structural model of brain processing. The system, in fact, is compared with human solvers in a functional way. However, it makes assumptions about the algorithmic level of the problem from an information-processing perspective (e.g., the constraints on space and memory limits, the sequential processing, and so on) and, as shown, can be useful in providing structural information about the processing modes and mechanisms of the overall system. As a consequence, like the neural models, it has its own place within the cognitive AI and cognitive modelling research agendas.
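The arithmetic of the puzzle itself is easy to verify mechanically. The brute-force sketch below is an illustrative check, not a model of the serial, letter-by-letter human strategy discussed above; it confirms that, given the hint D = 5, the assignment is unique:

```python
from itertools import permutations

# Brute-force verification of DONALD + GERALD = ROBERT with the hint
# D = 5, confirming the solution quoted in the text
# (526485 + 197485 = 723970).

def value(word, assignment):
    """Read a word as a decimal number under a letter-to-digit assignment."""
    return int("".join(str(assignment[ch]) for ch in word))

letters = sorted(set("DONALD" + "GERALD" + "ROBERT"))  # ten distinct letters
others = [ch for ch in letters if ch != "D"]
free_digits = [d for d in range(10) if d != 5]         # D is fixed to 5

solutions = []
for perm in permutations(free_digits):
    assignment = dict(zip(others, perm))
    assignment["D"] = 5
    if assignment["G"] == 0 or assignment["R"] == 0:
        continue  # leading digits may not be zero
    if value("DONALD", assignment) + value("GERALD", assignment) \
            == value("ROBERT", assignment):
        solutions.append(assignment)

print(len(solutions))  # 1: the assignment is unique given D = 5
s = solutions[0]
print(value("DONALD", s), value("GERALD", s), value("ROBERT", s))
```

The machine check is instructive by contrast: an exhaustive search over all 9! assignments is trivial for a computer but is precisely what the limited short-term memory of human solvers rules out, which is why they are pushed toward the particular serial ordering Newell and Simon observed.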


  • [1] John McCarthy was the first researcher to propose the use of logic to endow machines with commonsense reasoning capabilities. His first proposal was for a hypothetical logical system that he named the “Advice Taker” (McCarthy, 1960). The Advice Taker was conceived as a general, or multi-purpose, problem-solving system, formulating plans and drawing inferences based on a sufficiently extensive body of knowledge, while also making use of “advice” provided by its programmer. The Advice Taker, just like the GPS, aimed at being a “general” system. The main, crucial difference between the two systems was that, in McCarthy’s proposal, logic (in particular, first-order logic) was assumed to be the only language for representing both knowledge and the heuristics to be applied to such knowledge. In GPS, on the other hand, logical representations were only one of the possible “symbolic” accounts handled by the heuristic program.