
Institutionalizing ethical systems

By beginning the final chapter of World of Our Making with a discussion of rationality, Onuf directs our attention to the way people make choices about how to behave. When we reason about these choices, Onuf notes, we make comparisons among states of affairs based on how we feel about those states of affairs. The preceding section has focused on the importance of feelings as contributors to moral judgments about different states of affairs.

Generally speaking, there are three different states of affairs about which we may have feelings and make reasoned judgments. First, there are situations that concern only oneself. My decision about whether to buy a blue car or a silver car, for example, is a decision about one such state of affairs. Second, there are situations that involve oneself and another person. My decision about whether I will pay the price demanded by my local car dealer is an example of this second state of affairs. The decision will depend on how I feel about the car, but also on how the car dealer feels, and on how we negotiate. Third, and finally, there are situations that involve oneself and multiple others. I may decide, for example, to support legislation prohibiting car dealers from engaging in certain financing practices. Onuf sums up these three states of affairs as follows:

What people compare are states of affairs that must already have been constructed to allow comparison. These I call the grounds of comparison. They may be constructed to include oneself and exclude all others, to include oneself and only one other, and to include oneself with a number of others. These three possibilities constitute the grounds of comparison into three categories: internal comparison (or, in the language of social choice, intrapersonal comparison), binary (or interpersonal) comparison, and global comparison.

(1989, 266)

It comes as no surprise that the construction of these states of affairs is social. It is social in the same way reason, cognition, and memory are social.

But that there are three possibilities is not a social phenomenon. Rather, it is a consequence of a world that includes speaking, thinking, believing, and reasoning subjects as individuals, as interacting pairs, and as groups. This much, at least, is not a social construction. Still, perhaps these categories appear arbitrary. It might at least be supposed that we could simplify this typology to consider only two states of affairs, those involving individuals and those involving more than one person. But there is a good reason to make a tripartite distinction. Decisions about each of these states of affairs entail distinctive solution concepts. Let us consider each of these states of affairs in turn.

Individual

The field of economics elaborates extensively on the precept that individuals compare states of affairs with respect to their own purposes and preferences. To achieve one’s purpose is to attain utility. Utility, in other words, is the advantage conferred by a system defined by individual choice. The complete system, although inevitably a social construct, is one that defines a state of affairs pertaining only to individuals and to actions of the individual self.

Economics provides a definitive solution concept for this state of affairs: individuals should act in order to maximize subjective expected utility (Fishburn 1968). Utility is subjective in that it is defined by the individual. It is expected in that the choice may involve risk. And the normative mechanism itself, in this case, is maximization. This tells us that individuals should pursue the things that they want. This claim may seem trivial, but it does at least stipulate that the goal of individuals, in individual choice settings, is not to set aside desire itself. It rejects, in other words, the alternative solution concept proposed by stoics and Buddhists.

The inclusion of subjective preferences over risk itself and the recognition that individuals prefer many different states of affairs simultaneously both serve to complicate utility equations. When I choose a car, as in the above example, I prefer to attain many different things at once: performance, style, reliability, and so on. My choice may be sufficiently complicated that I struggle to make it. The von Neumann-Morgenstern (VNM) expected utility theorem postulates, however, that there is a maximizing solution to all such dilemmas provided that certain axiomatic conditions are met (Von Neumann and Morgenstern 1953). In practice, individuals do not always satisfy the requirements of VNM rationality. For example, my preferences over different states of affairs may be incomplete. Yet VNM rationality is not a descriptive account, but rather a normative standard serving as the foundation for a decision theory in what Onuf calls the individual state of affairs - or what Elster (1986), as noted earlier, called parametric choice problems.
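To make the prescription concrete, the sketch below works through a toy version of the car choice. It is written in Python, and the options, outcome probabilities, and utility numbers are hypothetical illustrations rather than anything drawn from Onuf or the decision-theoretic literature; the point is only that the individual solution concept reduces to computing a probability-weighted sum of one’s own utilities and selecting the option that maximizes it.

# A minimal sketch of subjective expected utility maximization.
# Options, probabilities, and utility numbers are hypothetical.
lotteries = {
    "reliable car": [(0.9, 6.0), (0.1, 2.0)],  # (probability, subjective utility) pairs
    "sporty car":   [(0.6, 9.0), (0.4, 1.0)],
}

def expected_utility(lottery):
    # probability-weighted sum of the chooser's own (subjective) utilities
    return sum(p * u for p, u in lottery)

for name, lottery in lotteries.items():
    print(name, "expected utility:", round(expected_utility(lottery), 2))

# The normative prescription for the individual state of affairs:
# choose whichever option maximizes subjective expected utility.
best = max(lotteries, key=lambda name: expected_utility(lotteries[name]))
print("maximizing choice:", best)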

Utility maximization may be a normative solution, in that it prescribes certain behavior, but it is not a statement of ethics. If only I am in a position to care about a certain state of affairs, then utility maximizing is sufficient warrant for my choice. An ethical prescription, on the other hand, presumes that others are also in a position to care about outcomes relevant to my decision. Thus, the color of my car is not an ethical problem, even if I have strong feelings on the subject. For this same reason, binary states of affairs also entail normative solution concepts that are not ethical statements.

Binary

A binary state of affairs is defined jointly by two agents. Even when the two agents agree to a large extent on how to specify the state of affairs relevant to their (joint) decisions, the problem cannot be reduced to individual choice, since the outcome depends on what both do (in defining, comparing, and choosing). Each agent must compare states of affairs in order to make a choice. Yet, because each must also take the other agent’s comparisons among states into account, the decision problem they face is not reducible to two individual choices. Elster (1986) says that these problems involve strategic rather than parametric choices.

As with individual choices, decision theory provides a normative framework for such choices. The relevant body of scholarship is game theory rather than expected utility theory, and the entailed solution concept is the equilibrium. In game theory, equilibria are both predictions and prescriptions. If, as the term implies, they are outcomes where a game comes to rest, then it makes sense to regard them as predictions. In practice, games tend to converge on these points. It is less obvious, however, that equilibria should be regarded normatively as prescriptions.

The best-known example of an equilibrium concept in game theory is the Nash equilibrium. An outcome is in (Nash) equilibrium if no party to the game can benefit by changing strategy unilaterally. Yet Nash equilibria may not benefit any party to the game at the individual level, as the prisoner’s dilemma famously illustrates. It is hard to see why we should accept the Nash equilibrium as a behavioral prescription for one-time strategic actions if it leads, in the most extensively studied of all strategic games, to an outcome that is worse for everyone. Yet the Nash equilibrium works as a solution concept not because it enables individuals to do better mutually - it does not - but precisely because it identifies a point from which individual deviations can only cause harm. Equilibria need not maximize either individual or joint utility. They are not intrinsically preferable, although they are only achieved (paradoxically) through the pursuit of preferable states of affairs. When an equilibrium outcome has been achieved, however, no unilateral efforts to improve the situation can bear fruit. The Nash equilibrium of mutual defection in the prisoner’s dilemma may be bad for everyone, but a unilateral change of strategy is even worse. Thus, equilibria are states of affairs in which efforts to redefine the state of affairs cease. Better states of affairs are understood to be unattainable, but they are unattainable only because they are so understood by the parties involved. As a normative statement, the equilibrium solution concept thus relies on shared understanding and cannot be reduced to purely individual choices.
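The point can be checked directly. The sketch below, written in Python with standard illustrative payoff numbers (not taken from Onuf’s text), enumerates the pure-strategy profiles of a prisoner’s dilemma, tests each against the Nash condition that no player can gain by deviating unilaterally, and confirms that the only equilibrium is mutual defection, an outcome worse for both players than mutual cooperation.

# A minimal sketch of the Nash condition in a prisoner's dilemma.
# Payoff numbers are illustrative; C = cooperate, D = defect.
# payoffs[(row_move, col_move)] = (row player's payoff, column player's payoff)
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}
moves = ["C", "D"]

def is_nash(r, c):
    # no unilateral deviation improves the deviating player's own payoff
    row_ok = all(payoffs[(alt, c)][0] <= payoffs[(r, c)][0] for alt in moves)
    col_ok = all(payoffs[(r, alt)][1] <= payoffs[(r, c)][1] for alt in moves)
    return row_ok and col_ok

equilibria = [(r, c) for r in moves for c in moves if is_nash(r, c)]
print("pure-strategy Nash equilibria:", equilibria)           # [('D', 'D')]
print("payoffs at mutual defection:", payoffs[("D", "D")])    # (1, 1)
print("payoffs at mutual cooperation:", payoffs[("C", "C")])  # (3, 3), but not an equilibrium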

Equilibria, like utility maximization solutions, function as normative statements but not as statements of ethics. Because they do involve outcomes that affect another person, this is harder to see in the binary state of affairs than it was in the individual state. Leaving aside for the moment applications of game theory to multiplayer strategic problems, it would seem that each party to a binary strategic interaction has a moral as well as a practical interest in the actions of the other. Each does, no doubt - but only because of the operation of ethical systems made possible by the global state of affairs and institutionalized as guides for action even in binary circumstances.

To illustrate this, imagine an apocalyptic scenario in which only two human beings survive to live out their lives together. Although their pre-apocalyptic lives will have socialized them to accept certain moral convictions, and although these may very well influence their subsequent behavior, they no longer face any relevant moral constraints, only practical ones. Let us also suppose that the two survivors are both women, and so, if they are the last humans left alive, the survival of the species is neither an option nor a moral duty. How should they behave toward each other? Reciprocal altruism may be useful as a survival strategy, but it is hard to explain why the survivors should have any moral duties to each other. If one chooses to take actions that harm the other, there is no one else to offer moral condemnation, and the one who is harmed need not invoke a moral principle. It suffices, for her, that her own interests are jeopardized. In truly binary circumstances such as these, we may still intelligibly ask how the two agents should behave. Yet the only conceivable answers will be either individual (utility maximizing) or - should they face dilemmas described by games such as the stag hunt or the prisoner’s dilemma - binary (converging on equilibria).

Global

The final state of affairs about which the individual may render judgment and make choices is that involving multiple others. And it is at this point that Onuf’s (1989) discussion of rationality comes thoroughly unhinged from microeconomic decision theory. As a normative undertaking, rational choice theory tells us exactly how to render judgments and make choices in individual and binary states of affairs. It says scarcely a word, however, about how to make choices comparing global states of affairs.

There is no corresponding body of global microeconomic theory because the Condorcet paradox forbids extending utility functions to global systems. In an individual system, or in a binary system composed of two individuals, all agents have utility functions. But in systems composed of as few as three individuals, the Condorcet paradox asserts that circumstances may arise when there is no global utility function (Gehrlein 1983). The Arrow impossibility theorem further specifies that, under such circumstances, there is no intrinsically fair voting system for aggregating individual preferences into a global utility function (Arrow 1950). Moreover, even in circumstances that do not produce a voting paradox, mathematical solutions to n-body problems quickly become intractable. Solutions in systems involving as few as three bodies are notoriously complex (Barrow-Green 1996).
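The paradox itself is easy to exhibit. In the Python sketch below, three hypothetical voters each hold a complete and transitive ranking of three options (the voters and rankings are illustrative, not drawn from the sources cited above), yet pairwise majority comparison produces a cycle, so no group-level ranking, and hence no global utility function, can be constructed from them.

# A minimal sketch of the Condorcet paradox with three hypothetical voters.
from itertools import combinations

# Each voter's ranking is complete and transitive, listed best to worst.
voters = [
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["C", "A", "B"],
]

def prefers(ranking, x, y):
    # True if this voter ranks x above y
    return ranking.index(x) < ranking.index(y)

def majority_winner(x, y):
    votes_for_x = sum(prefers(ranking, x, y) for ranking in voters)
    return x if votes_for_x > len(voters) / 2 else y

for x, y in combinations(["A", "B", "C"], 2):
    print(x, "vs", y, "-> majority prefers", majority_winner(x, y))
# Result: A beats B, B beats C, and C beats A. The cycle means the group
# as a whole has no best option, even though every individual voter does.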

Another, somewhat different reason why there is no formal literature to speak of offering a solution concept to the problem of global comparisons is that Weberian notions of rationality have ruled it out of bounds. Weber stipulated that we cannot agree on what is actually preferable collectively, so what he called value rationality (Wertrationalität) is inadmissible, at least in modern society. Social solutions are premodern because they were based on either divine or natural law conceptions of the common good. The modern turn to a human-centered cosmology both establishes the problem of judgment and comparison at multiple levels, to which Onuf draws our attention, and seems to render it intractable at the global level.

The choice problem for an individual in such a global (multi-body) system is to maximize utility not with respect to a state of affairs (individual) or a joint state of affairs (binary), but with regard to multiple states of affairs and the multiple subjectivities that constitute a social system. If we resort to treating the social system itself as a generalized “state of affairs,” then the choice becomes individual and a solution concept is available in the form of utility. If we reduce all others to a generalized “other,” then the choice becomes binary and game theory remains relevant. Yet both of these solutions are retreats from the third possibility - the n-agent problem - that are intended to make it manageable in utility or game-theoretic terms.

N-agent problems are those in which the individual’s own purposive behavior affects the purposes of multiple other agents. Choosing the appropriate course of action in such a global or social context defines the problem of devising ethical solutions. In much of the preceding discussion, I have followed Onuf - and in some places extended his argument - in offering an account of the various human capacities necessary for the social construction of ethical designs. And ethical designs are surely social constructs. A discussion of the human capacities relevant to their construction is partly a discussion of ontology and partly one of methodology. It is a statement about the things that are necessary to give rise to ethical solutions in social systems, and about the ways, in practice, that such solutions are built up through the operation of reasoned comparisons based on valent, emotional reactions to the world. What this discussion misses is the reason that we need ethical constructs in the first place - a need so often assumed that we scarcely know we have it.

Ethical arguments are solution concepts for global social comparisons. And often, they are poorly specified solutions. Kant’s categorical imperative has the virtue of serving as a good illustration, although I intend to grant it no special pride of place among ethical solution concepts (cf. Onuf 1998a). To act only in such a way that I can rationally will all to act makes plain the irreducibility of this solution to individuals, except in their totality. It also presumes, of course, that we can agree on how we would wish for everyone to act. And so, the ethical heavy lifting remains: to specify desirable (commendable, virtuous, etc.) forms of action. Desirable behavior that affects only me is simply a utility problem. Desirable behavior that affects me and one other person derives from an equilibrium. Because the categorical imperative pertains to everyone, it is instead an ethical statement. What the categorical imperative distills is not the form of behavior that is required (this is left to rational deliberation), but the comparison that is at issue (global, rather than individual or binary). By virtue of this, it is an example of the third sort of solution concept.

Ethical systems generate statements about right behavior when that behavior is evaluated in a global context. Without making any claim about which particular ethical system is best, we can say that ethical systems in general are beneficial in a global context. They are the only means available, within a modern and human-centered cosmology, to handle the problem of normativity in global comparisons. The evolutionary approach of moral psychologists hints at the “virtue of virtue” (that is, the virtue of ethical systems): societies with effective ethical systems do better than those without. In these societies, selfless behavior and cooperation in general are more likely, and such societies are more likely to generate the two lower-order goods that Onuf describes as wealth and security (Onuf 1989, 278). The remaining general good, standing, is defined by ethical systems themselves. We attain standing by acting in accordance with social prescriptions for virtuous behavior. And so, “standing, security, and wealth are the controlling interests of humanity” (Onuf 1989), each derived from its proper context of judgment and comparison.

If the quest for ethical origins is sometimes taken to be a quest for virtue itself, for an ur-theory of ethics, it should be clear that it is no such thing. The origins of ethics rest in the human need for solutions to the problem of global comparisons, not (for moderns, anyway) in the correspondence of any particular theory to the divine or the good. When we are able to describe the way ethical systems work to accomplish this purpose, we have given the only satisfactory account there is to be given of the virtue of virtue.

It is hard to let go of the quest for something more, for a definitive ethical principle. At this point, Onuf (1989, 289) invokes Wittgenstein, who recognized the same problem:

Here we come up against a remarkable and characteristic phenomenon in philosophical investigation: the difficulty - I might say - is not that of finding the solution but rather that of recognizing as the solution something that looks as if it were only a preliminary to it. ‘We have already said everything. . . .’ The difficulty here is: to stop.

(Wittgenstein 1967, §314)

Ethical systems accomplish their purpose by existing, by providing some sort of solution, when one is desperately needed, to the problem of normativity in a global context. They are, to put it another way, social equilibria that work, when no mathematical equilibrium can be deduced, because their presence is better than their absence.

 