
Prospection and Accuracy

What is the goal of thinking about the future? And how are thoughts about the future shaped to serve that goal? The answer may be more complicated than one might assume.

The pragmatic prospection principle assumes that knowing the future is useful for shaping one's actions. This assumption leads in a fairly straightforward manner to the conclusion that forecasting the future is a matter of struggling to be accurate. After all, accurate information furnishes a much more useful basis for action than distorted information. When you leave home in the morning, should you take an umbrella and raincoat? An accurate forecast of the day's weather will provide a good basis for making the right choice. If it is going to rain, you would be sad to be out and about without any raingear. Meanwhile, if the day holds beaming sunshine and not a drop of rain, you will be uncomfortable and look silly wearing your raincoat and carrying the umbrella.

The assumption that the goal of prediction is accuracy informed our research. Indeed, we are scientists and philosophers, and so the quest for an accurate understanding of the world is central to our daily lives and underlies most of our activities. Hence, it is not surprising that we assumed that most people, likewise, want to predict the future accurately.

Yet that logical and compelling assumption must grapple with some contrary findings. In particular, researchers have known for a long time that people are not objectively, coldly accurate in their predictions and forecasts. On the contrary, most studies have found that people are relentlessly and unrealistically optimistic. The influential studies by Weinstein (1980) surveyed college students (and others) and asked them to predict the chances that various things would happen to them, as well as the chances that these things would happen to other people similar to them (e.g., other students in the same class). In general, people predicted that more good things and fewer bad things would happen to them, as compared to the average person.

For example, asked how likely it is that you will someday have a gifted child, receive a major promotion at work, or have a long happy marriage, you will tend to rate your chances as above average (assuming you are like most people who respond to these surveys). In contrast, you rate your chances as lower than average for unpleasant future possibilities, such as having a mentally retarded child, being fired from work, or getting a divorce.

The notion that people have unrealistic optimism was even enshrined as one of the three "positive illusions" that characterize the mental outlook of normal, healthy, well-adjusted people. This view was developed in a classic article by Taylor and Brown (1988). That article dispelled the view that people seek above all to achieve an accurate understanding of the world. Instead, it proposed that people have various biases and illusions that make them feel good but depart from objective reality. They overestimate their good traits and overlook their failings. They think they have plenty of control over events when they don't have much. And they tend to predict that their future will be filled with lovely, positive things rather than misfortune and failure.

But how is that pragmatic? One elegant solution was proposed by Peter Gollwitzer and his colleagues (Gollwitzer & Kinney, 1989; Taylor & Gollwitzer, 1995). They showed that people hold to positive illusions and optimistic predictions most of the time, but not when they face a decision. At choice points, people suddenly become much more realistic, seeing themselves and their prospects in a less optimistic, less distorted manner. Once the decision is made, they resume their optimistic outlook.

Why maintain these distorted views of the future most of the time? One explanation is that these views feel good. Another is that they are actually useful, because they lend confidence and inspire people to try harder.

The effects of thinking about the future on trust and risk were explored in a series of studies by Andrew Monroe and colleagues (Monroe, Ainsworth, Vohs, & Baumeister, 2015). They first had people reflect on the future. In one procedure, they had people write about the person they expected to be in 10 years and what would be important to that person. (In the control condition, they wrote about their current selves and important activities.) In principle, contemplating the future with no decisions in sight ought to have bolstered their optimism, whereas focusing on the present should have highlighted choices and obligations, producing greater realism.

After the writing task, participants were asked to make some (hypothetical) decisions about investments. Some were risky and others were safer. As is generally true with investments, the riskier ones offered higher possible payoffs, but also greater possible losses. Monroe's group predicted that thinking about the future would engender optimism, so people would pick the riskier investments, hoping to score a big gain. But that's not what happened. Instead, the people who thought about the future tended to avoid risky investments and prefer the safer, duller options. That was in contrast to the people who had thought about the present. They were more open to risk.

What happened? Why did thinking about the future lead to avoiding risk? Instead of eliciting a rosy, optimistic outlook that downplayed risk, thinking about the future seems to have attuned people to uncertainty. They saw the future as full of multiple possibilities, some good, some bad, and they worried about the bad ones.

Monroe's group then turned to studying trust. Perhaps thinking about the future made people recognize that good and bad things are both possible, and so they wanted to avoid the bad ones. Indeed, the notion that people's financial and investment decisions are more strongly affected by possible losses than possible gains has become a basic theme in much research on decision making. It is called "loss aversion," based on the notion that people would rather be sure of avoiding a big loss than have a chance at a big gain (Kahneman, 2011; Kahneman & Tversky, 1979). This is likely part of an even broader pattern: The human mind (and as far as we can tell, animal minds are the same) is more affected by bad things than by good things (Baumeister, Bratslavsky, Finkenauer, & Vohs, 2001; Rozin & Royzman, 2001).
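To make loss aversion concrete, here is a minimal sketch in Python of a prospect-theory-style value function. The parameter values are commonly cited illustrative estimates, not figures taken from the studies discussed here.

def subjective_value(outcome, alpha=0.88, loss_aversion=2.25):
    # Subjective value of a monetary gain or loss: gains are valued directly,
    # while losses are amplified by the loss-aversion coefficient.
    if outcome >= 0:
        return outcome ** alpha
    return -loss_aversion * ((-outcome) ** alpha)

# A coin flip that wins $100 or loses $100 has an expected monetary value of zero,
# yet it feels like a net negative, because the possible loss looms larger.
expected_feeling = 0.5 * subjective_value(100) + 0.5 * subjective_value(-100)
print(expected_feeling)  # prints a negative number

Under these assumptions, the gamble's subjective value is negative even though its expected dollar value is zero, which captures the sense in which avoiding a loss outweighs an equal chance at a gain.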

People do, however, trust others. Indeed, some have argued that people are innately predisposed to trust others and cooperate (Dunning, Anderson, Schlosser, Ehlebracht, & Fetchenhauer, 2014; Rand, Greene, & Nowak, 2012; Tomasello, 2014).

One research tool for studying trust has been dubbed the "trust game," which was developed by behavioral economists (Berg, Dickhaut, & McCabe, 1995), a group of researchers who adapted the small-experiment methods of social psychology for use in economics research. The trust game works roughly like this. You take part in a research study, and at some point, you are told that you will receive a certain amount of money, perhaps $10. You are told that you can do a couple of things with this money: You can simply keep it all, or you can invest any part of it, up to the full amount. Whatever you invest will be automatically tripled in value. (This is done to mimic the rewards of cooperative investment, which has made the economic progress of capitalist societies possible.) That tripled amount will be given to another person, your "partner," who is not someone you know. That person will then be able to divide the money in any manner between him- or herself and you.

So, for example, if you decide to invest the entire $10—the maximum amount of trust—then your partner will receive $30 and can decide what to do with it. Your partner might keep it all or give it all back to you, in which case you have really made out well. Even if your partner decides to split it down the middle, you are better off than if you had kept the initial stake. Instead of going home with $10, you go home with $15, which is a nice profit earned by your willingness to trust someone. Sometimes partners will even give more back, perhaps letting you have $20 and keeping only $10 for themselves. After all, if you hadn't trusted your partner with all that money, he or she might have got nothing, so it's much better to have $10 than nothing.
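To trace the arithmetic, here is a small sketch in Python of the trust game payoffs described above. The $10 stake and the tripling rule follow the example; the partner's split is a hypothetical input, not data from the actual studies.

def trust_game_payoffs(stake, invested, share_returned, multiplier=3):
    # Final payoffs for the investor and the partner.
    # stake: money the investor starts with (e.g., 10 dollars)
    # invested: how much of the stake the investor sends (0 up to stake)
    # share_returned: fraction of the multiplied pot the partner gives back
    pot = invested * multiplier                      # the investment is tripled
    returned = pot * share_returned                  # the partner decides the split
    investor_total = (stake - invested) + returned   # kept money plus what came back
    partner_total = pot - returned
    return investor_total, partner_total

# Investing the full $10 and getting an even split beats keeping the stake:
print(trust_game_payoffs(10, 10, 0.5))   # (15.0, 15.0)
# A partner who keeps everything leaves the trusting investor with nothing:
print(trust_game_payoffs(10, 10, 0.0))   # (0.0, 30.0)

The sketch makes the tension visible: full trust can leave you better off than keeping the stake, but only if the partner reciprocates.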

Psychologists working with the trust game have concluded that there is a general willingness to trust people, although not because they are confident that all people are trustworthy (Dunning et al., 2014). Rather, people seem to operate on the assumption that one should never start off an interaction with a stranger by signaling that the person cannot be trusted. To put this in a metaphoric manner, if a human being meets a total stranger in the forest or the big city, the proper thing to do at first is to act as if the person deserves to be trusted and is willing to cooperate. If the first interaction goes badly, one can quickly dump that strategy and be wary. But the norm is to show respect to the stranger by presuming that he or she has good intentions. Operate on that basis until the person reveals him- or herself to be unworthy of trust.

There were two additional twists to how Monroe and his group used the trust game. The first was that they manipulated whether people thought about the future or the present. This was done by giving people a series of statements that they were assigned to rewrite in their own words. For half the people in the study, these thoughts were about the future. For the rest, they were about the present. Thus, they had to use their minds to articulate a thought referring to either now or the future. This has been an effective way of manipulating people's focus.

The other twist was that people were given some cues about whether the partner should be trusted. Previous researchers noticed that some people simply look more trustworthy than others. After poring over countless headshots in published and online sources, they came up with a variety of pictures of human faces, and they had research subjects rate whether each person looked to be more or less trustworthy. From those ratings they extracted some of the faces with the most pronounced and consistent ratings. Basically, they produced a set of pictures of people who looked really trustworthy and another set of people who looked shady.

Monroe's group selected pictures from each group to use in their study. For each subject ready to play the trust game, they selected one picture from one of the two groups (by random assignment) and said, "This is the other person playing the game with you. Whatever money you decide to invest will be automatically tripled in value and given to that person. Here is his (or her) picture." (People always played with someone of their own gender.) Of course, these pictures were not those of the actual partner. The point was simply to make participants believe that they were playing with someone who looked trustworthy—or the opposite.

What would you predict was the effect of thinking about the future? The researchers predicted that the future-thought condition would make people rely much more heavily on the relevant information, as compared to people just thinking about the present. They should exhibit more trust (as indicated by investing more money) when the partner looked trustworthy, but they would do the opposite when the partner appeared to be a shady, unreliable character.

But that's not what happened. Thinking about the future reduced trust toward everyone, and as predicted, prospection led to giving less money to the questionable partner. But it also led to giving less money to the person with the honest face. Thus, once again, thinking about the future led to a general avoidance of risk.

Subjects in the study acknowledged the differences among the faces. There was a general pattern of investing less money when the partner looked shady and unreliable than when the partner had an honest, trustworthy face. But the prior exercise in thinking about the future failed to make people put more trust in the trustworthy person. On the contrary, the subjects who had thought about the future invested less money in the trustworthy person, as compared to subjects who had thought about the present. This was the same response they had to the untrustworthy person: Thinking about the future reduced trust.

What can we make of these findings? This section began with the question of whether thinking about the future bolsters optimism. Plenty of previous research has shown that when people are asked to predict the future, they are highly, indeed unrealistically optimistic. Yet in these studies, when people think generally about the future, they seem to become less optimistic, at least in the sense that they look at investments more in terms of what can be lost than what can be gained. They exhibit a cautious avoidance of risk.

These findings cast the notion of pragmatic prospection in a new light. Contemplating the future appears to focus attention on uncertainties, possibilities, and dangers. It does not seem to operate as if there is a definite future to be known and anticipated, as in the Slumdog deterministic view. Instead, thinking about the future seems to drive home how very undetermined it is, including how it contains significant possibilities for bad outcomes as well as good ones. Given the greater subjective power of the possible bad outcomes, people shift their strategy toward avoidance of disaster and risk. Even trusting a seemingly trustworthy person with your money becomes less appealing when you have been thinking about the future. The pragmatic priority is not so much assessing objective facts so as to pursue what has the best chances of turning out well. Rather, it is to avoid losses and misfortunes.

Perhaps, then, the purpose of prospection is to find ways of guiding behavior toward desired goals. Accuracy is not what matters most. Instead, getting what you want is what matters most. And what people want most is generally to avoid problems, failures, and other disasters.

This view of prospection as rooted in getting what you want takes us away from the assumption that accuracy is the supreme goal of prospection. It is, however, quite plausible from an evolutionary standpoint. The reason simple animal minds began to form expectancies about the (usually very near) future was to guide behavior. The animal must decide what to do so as to satisfy its needs and reach its goals. Creating a mental structure that projects into the future and leads to a happy ending is a useful guide for action.

Accuracy is helpful: You can make more effective decisions if your plans are based on a realistic understanding of the world rather than on false hopes and misguided fears. But accuracy comes later. The first and fundamental task may be to figure out a path from where you are to where you want to be. Once you have done that, then perhaps it is useful to conduct a feasibility study (at least mentally). Accuracy of prospection is relevant to the second stage but not so much to the first.

That would explain the pervasiveness of optimistic bias in people's predictions. Thinking about the future is about getting what you want. So when you imagine the future, you tend to imagine it in a positive light, in which your hopes and aspirations are fulfilled. But when you contemplate the future more generally, you become attuned to knowing that plenty of things can go wrong. The future is uncertain. The path you may sketch out toward your dreams is fraught with risks and threats that could lead you into an outcome that is not what you want. Hence, thinking about the future in general leads to a cautious stance.

In a sense, then, there are two steps to prospection. The first is "What do I want?" The second is "What can go wrong?"

 