Introduction: Fundamental Uncertainty and Plausible Reasoning
Silva Marzetti Dall'Aste Brandolini and Roberto Scazzieri
Uncertainty, plausible reasoning and the continuum of inductive methods
Uncertainty and rationality are closely related features of human decision making. Many practical decisions are traditionally reconstructed as attempts to frame uncertain outcomes within the domain of rule-constrained reasoning, and much established literature explores the manifold ramifications of rationality when a choice among uncertain outcomes has to be made (as with choice criteria associated with maximization of expected utility). However, this overall picture is changing rapidly as a result of recent work in a variety of related disciplines. Research in cognitive science, artificial intelligence, philosophy and economics has called attention to the open-ended structure of rationality. This point of view stresses the active role of the human mind in developing conceptual possibilities relevant to problem solving under contingent sets of constraints. Rationality is conceived of as a pragmatic attitude that is nonetheless conducive to rigorous investigation of decision making. In particular, conditions for rational decision are traced back to the decision's cognitive frame (the collection of concepts and predicates that makes any given representation of problem space possible), and the cognitive frame is associated with the context-dependent utilization of cognitive abilities. This view of rationality distances itself from received conceptions of deductive and inductive inference, since it rests on a situational conception of reasoning. This means that reasoning comes to be considered as a mental state in which a prior (and only partially structured) set of cognitive abilities takes definite shape, as shifts from one context to another activate one particular set of cognitive procedures after another. As a result, rationality appears to be intertwined with the utilization of justified procedures. However, the reduction of rational procedure to any single-minded criterion of instrumental rationality (or rational success) is avoided.
Any given problem space is considered, at least partly, as a mental construction, so that identification of problem setting and selection of justified procedure go hand in hand (see also Galavotti, Scazzieri and Suppes, 2008a, 2008b).
The above view of problem spaces and solution procedures suggests a description of rationality as a specific configuration of capabilities and procedures, rather than as a particular selection of choice strategies and actions. In particular, a rational cognitive agent is considered as an agent capable of effectively reconfiguring itself after a cognitive shock. In other words, rationality is associated not only with effective utilization of a given set of cognitive rules but also with the effective use of cognitive abilities. As regards cognitive abilities, however, not all abilities are used at the same time, and new abilities may become available as the cognitive process unfolds. If this point of view is adopted, rationality appears to presuppose a cognitive system capable of self-reference and reflective states. In other words, rational cognitive systems should be endowed with the ability to make sense of their own structure and behaviour. At the same time, a rational cognitive system should be open to the possibility of self-correction and structural change (see above). Reflective rationality is inherently dynamic, due to the emphasis upon reconfiguration ability. It may also be conjectured that reconfiguration is associated with feedback processes (primarily of the non-linear type). The above view of rationality suggests a pragmatic approach highlighting the variety of patterns of reasoning by means of which it is possible to identify effective strategies. It also suggests a cognitive and experimental twist in the analysis of decisions. In this connection, the available bundles of concepts and predicates (frames) and the active principles calling attention to particular sets of concepts and predicates (focal points) may be more relevant than standard computational skills in reaching satisfactory (not necessarily optimal) solutions (see Bacharach, 2003, pp. 63-70).
More specifically, research work in cognitive science and artificial intelligence, decision science and economics suggests an understanding of rationality through a reformulation of the relationship between cognitive states and their material (physical) conditions (the classical mind-body problem¹). In particular, rationality appears to be grounded in the recognition of associative structures to which the human mind is disposed but which cannot be reduced to any deterministic model of its neural structure. This calls attention to the open-ended configuration of justified procedures, in which the standards of classical epistemology and rational choice decision theory are complemented by close attention to interactive outcomes, analogical reasoning and pattern identification. Contingent constraints and situational reasoning are often associated with uncertainty of individual outcomes. The shift from one set of constraints to another (from one space of events to another) could make it difficult to rely upon any fixed set of inferential rules. It also suggests that cognitive (and pragmatic) success may reflect the individual (or collective) ability to make use of a diversity of cognitive frames and to switch from one frame to another as the need arises.
Reasoning under uncertainty is the most important field of human cognition in which the active role of the human mind is clearly in view. This is primarily because the assessment of greater or lesser likelihood is critically dependent on the way in which alternative conceptual spaces may give rise to alternative configurations of strategies and outcomes. Uncertainty itself may be assessed in terms of the degree to which the cognitive agent is free to 'structure' the situations associated with it. This manipulative view of uncertainty implies that, for any given state of nature, any given situation would be more uncertain (or, respectively, less uncertain) depending on whether agents are more (or, respectively, less) capable of configuring (or reconfiguring) that situation, its antecedents and its likely successors. Consideration of the specific domain in which the 'active power' of cognitive agents may be exerted is closely associated with a classification of situations ranging from lower to higher uncertainty. A situation in which the cognitive agent has virtually no freedom in giving shape to the configuration of possible events is one of lower uncertainty. Maximum uncertainty is associated with situations in which the cognitive agent is completely unconstrained in terms of which configuration of possible events he might reasonably consider.
The above point of view is consistent with a primary research avenue in cognitive science, which is to assess the formation of categories and its roots in dispositional attitudes concerning the detection of similarity (see Tversky, 1977; Gärdenfors, 2000; Scazzieri, 2001; 2008). It is also consistent with research work in artificial intelligence addressing the interplay of ontological and epistemic abilities (see Gärdenfors, 1990; Giunchiglia, 1993; Sanchez, Cavero and Marcos, 2009) as well as with a well-established tradition in decision theory and economics recognizing the pervasive and multidimensional character of situations characterized by lack of certainty (see Hishiyama, Chapter 10, Vercelli, Chapter 7, and Zadeh, Chapter 6, in this volume). Among the latter contributions, we mention the classical distinction between risk and uncertainty (see Kregel and Nasica, Chapter 11, this volume), introduced by John Maynard Keynes in 1907 and in 1921, who pointed out that 'if we are given two distinct arguments, there is no general presumption that their two probabilities and certainty can be placed in an order' (Keynes, 1973, p. 41); Frank H. Knight's view that 'a measurable uncertainty, or "risk" proper, [...] is so far different from an unmeasurable one that it is not in effect uncertainty at all' (Knight, 1946, p. 20; author's emphasis); and John Hicks's belief that 'of two alternatives, on given evidence, either A is more probable than B, or B [is] more probable than A, or they are equally probable, or [...] they are not comparable' (Hicks, 1979, p. 114; author's emphasis).
Following these acknowledgements, it is increasingly recognized that reasoning under lack of certainty is inherently associated with a complex mix of inferential and representational abilities, and that plausible judgements under those conditions presuppose, first of all, the ability to identify the cognitive context (most) suitable for the situation and problem(s) in view (see, for example, Suppes, 1981; Gilboa and Schmeidler, 2001, pp. 29-61; Drolet and Suppes, 2008; Suppes, 2010).
The aim of this volume is to address lack of certainty on the basis of the most general conditions for plausible reasoning, that is, reasoning that is defensible but not 'beyond controversy' (Polya, 1990, p. v; see also Polya, 1968; Collins and Michalski, 1989). Fundamental uncertainty provides the framework of the cases of plausible reasoning considered in the following chapters. It is associated with probabilistic ignorance (no relevant probability distribution is known, nor can it be assumed) and, in particular, ex ante structural ignorance (the space of events is unknown, or only partially known). In particular, lack of certainty is examined from both the ontic and the epistemic viewpoints, reliability of evidence is assigned a central role, and similarity judgements are considered a necessary condition of probability judgements. The above setting lends itself to a theory of uncertainty associated with the analysis of concept formation, likeness and distance more than with the inferential structure of probabilistic reasoning. The latter part of the volume explores the implications of the above point of view in the analysis of economic decisions, and carries out that investigation by examining the weight of rational arguments under uncertainty, the objective conditions of stochastic equilibrium, and the general structure of economic and social laws in a universe of interacting decision makers. Finally, the volume examines the character of plausible reasoning about moral issues in the light of the theory of uncertainty outlined in the previous chapters.