
The Prospective Regulator

Let us return to our hunter-gatherer. We left him out on the ice, poised motionless, his feet going numb, watching for his one chance to spear a fish that ventured into the hole he'd cracked through the ice. Talk of retouching memory or perception in light of desired goals might suggest a brain that operates by wish fulfillment, without adequate constraint from reality. But nothing is more real than starvation or freezing, and a furless primate in the depths of a northern winter does not have the luxury of filling his visual field or tampering with his memories in whatever ways he finds most agreeable. Human perception and memory must pay their way by increasing the individual's chances of succeeding in the face of scarce resources and limited time. They therefore operate not as autonomous branches of government—the intelligence services, the national archive, and so forth—but in a coordinated way that pulls in information from wherever it can be found and then uses it as efficiently and effectively as possible to guide behavior.

This raises the question: What kind of architecture promotes the most efficient and effective control of behavior? Here we come to the problem of action control and one of the most dynamic and exciting bodies of current science and engineering. Again, we can draw on advances in knowledge of how other systems operate to understand better how we operate. According to the good regulator theorem of the systems theorists Roger Conant and Ross Ashby (1970), a good regulator for a system—one that is both effective and efficient—is a model of that system. What does this mean for our hunter-gatherer? Ask first, what is the "system" in question? Is it his brain or his organism as a whole? The answer must be neither. The system to be regulated is the hunter-gatherer in his environment—an individual with needs, goals, and capacities embedded in an environment with some potential to provide for or obstruct the meeting of these needs or the advancing of these goals, depending on how he uses his capacities. The brain and body of the hunter-gatherer are, then, not the system but the regulator for this system, the organism-environment interchange. And the good regulator theorem suggests that in his brain and body there must be built a model of that interchange—a model that maps out which causes lead to which effects, with what reliability, and with what implications for drawing down or building up his resources, or for achieving his goals or risking his neck. And, of course, prominent among the causes represented in the model will be the array of possible actions he might take, and prominent among the effects will be the likely outcomes of those actions.

What is a regulator? This is, in fact, a broad but technical notion. Think of a home thermostat. It regulates the room's temperature because it sits in the middle of the circuit that turns the heating and cooling system on or off. When the occupant selects a given temperature setting, the thermostat functions as a switch, turning the cooling or heating functions on or off depending on whether the set point chosen is above or below room temperature. Once the room temperature reaches the set point value, it opens the switch, turning off the heating or cooling function, at least until the room temperature or setting changes again.

But such a simple regulator is not very efficient. If occupants return to a stifling house on a hot afternoon and set the thermostat many degrees below room temperature, the cooling function will be instructed to turn on at maximum force. Had the thermostat been able to predict that the occupants usually arrive around this time of day and rarely if ever want to find the house at 85 degrees, it could cool the house more gradually and efficiently by starting well in advance of their arrival and running at low power. Moreover, the occupants would be spared being uncomfortably hot while the house cools down. Similarly, if the thermostat could predict when they typically leave the house in the morning, it could begin easing off the cooling toward their time of departure, and turn itself down should they forget to do so, avoiding wasted energy. In winter, if the thermostat could predict that the external temperature is likely to rise during the day, it could reduce the force of the heating gradually as the sun rose and increase it gradually as the sun was about to set, and so on. The smarter and better informed the thermostat, the less discomfort the occupants will suffer, the more they will save on their utility bill, and the smaller their carbon footprint will be. Notice that these gains in effectiveness and efficiency come because the smart thermostat is building a structured set of expectations—a model, in effect—of the behavior of the small world within which it operates.

How could it learn this model? If the occupants don't typically leave the house on weekend mornings, then when they manually adjust the heating or cooling to override the thermostat's gradual cutback, the thermostat will record this override, noting the time and day of week, and modify its expectations accordingly. After they have lived some time with a smart thermostat, a detective could scrutinize its self-reprogrammed expectations and get an idea of the rhythm of their week or the severity of the recent weather. Note that the thermostat learns by using its internal model to guide its action and generate expectations, and then uses discrepancies between these expectations and actual outcomes to readjust its own subsequent actions—using the same model in an "inverse" direction to determine what action would have led to a smaller discrepancy. Thus we can say that

An effective and efficient regulator of behavior is also an effective and efficient learner—both functions have at their core the building of a working model, encompassing both itself and its environment, and the interactions between them.
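To fix ideas, here is a minimal sketch in Python of such a learner. Everything in it is our own illustrative assumption rather than the design of any real product: the class name, the one-expectation-per-hour schedule, and the simple error-driven update rule.

```python
# Illustrative sketch only: a thermostat that learns a weekly schedule of
# expected setpoints from the occupants' manual overrides, then uses that
# model to ramp gradually toward predicted changes. All names and numbers
# here are invented for the example.

SLOTS_PER_WEEK = 7 * 24  # one expected setpoint per hour of the week

class LearningThermostat:
    def __init__(self, default_setpoint=70.0, learning_rate=0.2):
        # The internal "model": an expected setpoint for each hour of the week.
        self.expected = [default_setpoint] * SLOTS_PER_WEEK
        self.lr = learning_rate

    @staticmethod
    def slot(day_of_week, hour):
        return day_of_week * 24 + hour

    def record_override(self, day_of_week, hour, chosen_setpoint):
        """A manual override is the error signal: the gap between what the
        model expected and what the occupants actually wanted."""
        i = self.slot(day_of_week, hour)
        error = chosen_setpoint - self.expected[i]
        self.expected[i] += self.lr * error  # nudge the model toward reality

    def target(self, day_of_week, hour, lookahead_hours=2):
        """Blend the current expectation with the upcoming one, so the unit
        ramps gently toward a predicted change instead of slamming on at
        full force when the change arrives."""
        i = self.slot(day_of_week, hour)
        ahead = self.expected[(i + lookahead_hours) % SLOTS_PER_WEEK]
        return 0.5 * (self.expected[i] + ahead)

# After ten Mondays of being overridden to 72 degrees at 6 p.m., the unit
# starts ramping toward 72 by 4 p.m. of its own accord.
t = LearningThermostat()
for _ in range(10):
    t.record_override(day_of_week=0, hour=18, chosen_setpoint=72.0)
print(round(t.target(day_of_week=0, hour=16), 1))  # about 70.9 and climbing
```

The point of the sketch is its shape, not its numbers: the same internal model is used forward, to generate expectations and actions, and in reverse, when a discrepancy arrives, to decide how the model itself should change.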

Model-based control is an especially powerful way to learn because acting upon the world is an especially good way to give the world a chance to send us a signal if we are in error. Animals that forage for food in a risky and changing environment—as natural environments typically are—are simultaneously foraging for information. Their hunger for information is as potent a force in their behavior as their hunger for food.

We should expect intelligent animals to be more like smart machines than like the preprogrammed machines typical of today. Behavior is likely to be largely model-based and flexible rather than instinctual, hardwired, or based on mere stimulus-response associations and rigid habits. If feedforward-feedback learning of, and guidance by, a model can be implemented by the relatively simple electronic circuitry found in a smart thermostat, should natural selection not have favored the emergence of animal brains at least this complex? Given the relentless pressure for effectiveness and efficiency in fast-metabolizing warm-blooded animals that must forage for their living, and given the evidence that such animals can relatively rapidly develop nearly optimal foraging patterns in a complex and chancy environment (Dugatkin, 2004), it would be surprising if we did not find in such animals refined capacities to learn and use models of their world. In the next chapter, we will see that the evidence for this has mounted impressively in the last decade and a half. As systems theorists Conant and Ashby predicted in 1970, the good regulator theorem

... has the interesting corollary that the living brain, so far as it is to be successful and efficient as a regulator for survival, must proceed, in learning, by the formation of a model (or models) of its environment. (p. 1)

And in the case of intelligent animals that are also social, we can add something more: learning from others' experiences as well as one's own. This greatly accelerates and enriches learning, especially in those animals, such as humans, who can communicate what they have learned directly to one another. Suppose that all of the thermostats in all of the rooms of a large hotel could share information. Then they could each have a model that is more predictive of a wider range of guests, perhaps even breaking the guest population down into types who display particular patterns of preference and behavior. Thus we arrive at the kind of smart system that produces the recommendations you receive from the algorithms at Amazon or Netflix. Consider that

Learning about an environment with many dimensions of variability is enhanced by sharing information across individuals with diverse experiences—so that social animals have the best chance of accurately modeling their environment and interactions with it.
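As a toy continuation of the thermostat sketch above, pooling might look like the following. The guest types, numbers, and function names are invented for illustration and are not drawn from any real hotel system:

```python
# Illustrative only: hotel-room thermostats contributing their override logs
# to a shared pool, keyed by guest type and hour, so that every unit can
# draw on experiences it never had locally.

from collections import defaultdict
from statistics import mean

# (guest_type, hour) -> setpoints chosen by guests, gathered from ALL rooms
shared_log = defaultdict(list)

def report(guest_type, hour, setpoint):
    """Each thermostat contributes its local observations to the pool."""
    shared_log[(guest_type, hour)].append(setpoint)

def expected(guest_type, hour, default=70.0):
    """Any thermostat can query the pooled model, even with no local data."""
    observations = shared_log[(guest_type, hour)]
    return mean(observations) if observations else default

# Rooms 1 through 20 host "business" guests who like it cool at 7 a.m.
for room in range(20):
    report("business", 7, 66.0 + 0.1 * (room % 3))

# A brand-new room that has never seen a business guest predicts well anyway:
print(round(expected("business", 7), 1))  # about 66.1
```

The new room predicts a preference it has never observed, which is precisely the advantage the statement above describes: diverse experiences, pooled, yield a model no single individual could have built alone.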

An ability to share information is not enough—the animals must also communicate honestly, at least in the main. Bees can rely on the dances of other bees in their colony when setting out to find pollen, and thereby pool and use all the information acquired from the searching of their hive-mates, because all the bees in the colony are closely related genetically. Bees are at the extreme end of relatedness, because they all have the same mother, but most highly communicative social animals live in groups with some degree of relatedness. Humans appear to be unique in their ability to exchange information and live mostly peacefully in very large populations of unrelated individuals, coordinating well enough to make towns, cities, and countries possible and productive. And we see some striking results, from the existence of institutions such as universities, the international market, and the Internet, to the astonishingly rich and predictive models attained by modern science. But powerful models with long time horizons are not merely the result of such large-scale coordination and cooperation. As we will see, they play an indispensable role in making such relationships among strangers possible and reliable in the first place.

Thermostats, even thermostats that share information, live in a restricted world. The tasks they perform are narrowly defined; their ability to act is limited to a few behaviors; their production and reproduction are taken care of by someone else; and they have no goals of their own to decide among. When the electricity fails, this is not a problem they are called on to fix. Neither do they suffer, die, or go extinct when this occurs; they simply lie dormant, ready to come back to regulative life as soon as power is restored. This is very unlike the organism-environment interchange typical of intelligent animals, who must care for their own maintenance and reproduction, devise novel behaviors for changing environments, and shift their priorities as their needs and goals come and go. In short, they must regulate their behavior in light of the full array of possible costs and benefits of a life, and this makes prospective modeling all the more essential, and at the same time all the more advantageous over preprogrammed or merely reactive, stimulus-bound minds.

What does a model look like? We have spoken of expectations, which can be thought of in an if-then form, for example, connecting actions and contexts to possible outcomes, costs, and benefits. But models have more structure than that. We can start by thinking of a mental map that individuals might form of their environment, showing its features, their location, and the paths or actions available to them. But the model is also evaluative, and so with these paths will be associated whatever benefits, costs, and risks individuals have learned from taking them or paths like them, or from the experience of their parents, siblings, or fellow creatures in taking similar paths themselves.

Unlike the maps with which we are familiar, however, the "paths" in this map do not just correspond to spatial trajectories, but to potential actions leading with some probability not only to places, but to possible goals. Thus the maps will include cause-and-effect relationships that will support both forward-looking initiation and monitoring of behavior and inverse inferences from goals to the behaviors requisite to achieve them. All this may seem barely credible when talking about the minds of mice and rats, yet one of the great surprises of recent research is that the more we know about how to challenge and observe the brains of such animals, the more the idea of real-time prospective mapping fits what we observe.

It is a prospective mapping of this kind that explains why our hunter decided, after several days of finding his snares empty, to try fishing in the lake. And why, having made this change of mind, he could work backward from his new goal to the need to repair his spear the night before and to remember to take his pounding stone and dried beetles. Before venturing out into the cold, while still warm in his hut, he could imaginatively explore the paths of his mental map and see that having a working spear improves the odds of catching a fish, that having a pounding stone improves the chance of having a hole in the ice in which to fish, and that using bait improves the chance of luring a fish to that hole. Compared to wandering out onto the frozen lake with no resources—hoping to find a hole in ice that is nonetheless solid enough to stand upon or trying to lure and catch a fish by dangling his hands in the freezing water—this at-home, simulated trial-and-error process is a gain in effectiveness and efficiency he can readily appreciate.
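For readers who like to see such things spelled out, here is a toy version of the hunter's prospective map in Python. The structure follows the text, with goals linked to the preparations that raise their odds, but every probability and name is invented for illustration:

```python
# Illustrative only: a tiny prospective map. Each goal lists preconditions,
# with the probability of the goal if the precondition is met and if it is
# not. The same map supports forward simulation (how likely is a plan to
# work?) and inverse inference (what must be done tonight for tomorrow?).

MAP = {
    "fish_caught":   [("working_spear", 0.5,  0.05),
                      ("hole_in_ice",   0.8,  0.1),
                      ("bait_in_water", 0.6,  0.3)],
    "working_spear": [("spear_repaired_tonight", 0.95, 0.2)],
    "hole_in_ice":   [("pounding_stone_packed",  0.9,  0.2)],
    "bait_in_water": [("dried_beetles_packed",   0.95, 0.0)],
}

def forward(goal, actions_taken):
    """Simulate forward: the probability of reaching `goal`, treating the
    precondition factors as independent for simplicity."""
    if goal in actions_taken:
        return 1.0
    if goal not in MAP:
        return 0.0  # a primitive action not taken will not happen by luck
    p = 1.0
    for pre, p_if_met, p_if_not in MAP[goal]:
        q = forward(pre, actions_taken)
        p *= q * p_if_met + (1 - q) * p_if_not
    return p

def backward(goal):
    """Inverse inference: from a goal, collect the primitive actions it
    ultimately depends on."""
    if goal not in MAP:
        return {goal}
    needed = set()
    for pre, _, _ in MAP[goal]:
        needed |= backward(pre)
    return needed

prepared = {"spear_repaired_tonight", "pounding_stone_packed",
            "dried_beetles_packed"}
print(backward("fish_caught"))           # the evening's to-do list
print(forward("fish_caught", prepared))  # about 0.20 with full preparation
print(forward("fish_caught", set()))     # about 0.01 empty-handed
```

On these made-up numbers, preparation multiplies the odds of a catch roughly twentyfold, and the backward pass recovers the to-do list (repair the spear, pack the stone and the beetles) from the goal alone, just as the hunter does the night before.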

Spear, stone, and beetles are themselves the results of earlier prospective mapping by himself and by those from whom he's learned. Thanks to them, he now has promising paths open before him that he would not have had if his time horizon, or theirs, had been shorter. It is true that one's mental map is thus a function of one's past, but the more advanced human culture and technology have become, the more it is past prospection that determines the shape and branching pathways of the map, as we will see in detail in Chapter 5. Sitting in their hut the night before, warm in their clothing and blankets, chewing dried meat and berries from the stock accumulated in fall, the family was surrounded and sustained by the work of prospection. That's also how our hunter had grass to braid a string strong enough to mend a spear, just as his mother had shown him. And that's how today he has something of his own invention: Having seen a beetle trapped by pine resin last summer, he stripped back some more bark and saved the captured and desiccated insects as bait to attract fish.

Importantly, his prospective ability enabled him not only to contemplate a new behavior, but to motivate it through an imagined future reward. Although he had never before seen or performed such behavior, much less been rewarded by it or punished for failing to do it, this structured, effortful, time-consuming sequence of actions could emerge in the face of incentives to do something that would be more immediately rewarding:

To be effective, prospection requires a motivational system that can give present motivational force to imagined future benefits and costs, and this prospective motivation is what is distinctive about desire: It is not a mere urge, conditioned drive, or magnetic attraction to something immediately tempting, but rather an ability to be moved by images of possibilities we create—to want to take an action because we like the idea of what that action might yield, even if that is remote in time or novel in character.

Such a capacity to mobilize motivation in the present on behalf of the mere idea of a future benefit or cost plays a clear role in underwriting human innovation, as our fisherman shows, but it also lies at the foundation of human social, moral, and economic life (Railton, 2012). For example, it enables us to be trustworthy, to be motivated to keep the agreements we've made, even when facing incentives to cheat. And it enables inspiration by ideals and principles to translate into the force and resolve to hold ourselves to them in the face of costs, challenges, and disappointments. Morality, social norms, and laws thus join technology in adding structure to the future, making possible actions and outcomes that would otherwise simply be unavailable:

Intelligent action over time involves not only making choices in light of a causal model of possibilities, but creating possibilities—"working backward" from distant goals to the proximate actions that are preconditions for them, and "working forward" by conceiving and acting upon ideas and ideals that will sustain new ways of acting in the future.

By now prospection is looking very complicated—not just beyond the minds of mice and rats, but also very unlike the minds of humans as we go through our days. How many of us engage in this sort of active prospection of possible pathways, costs, benefits, and risks in more than a tiny fraction of our lives? As James wrote, "Not one man in a billion, when taking his dinner, ever thinks of utility. He eats because the food tastes good and makes him want more" (1890, Vol. II, p. 386).

However, the response offered by contemporary developmental psychology and cognitive and affective neuroscience is that we all naturally think in these ways, as do our intelligent mammalian relatives. Does this sound preposterous? Thinking, after all, is not simply conscious deliberation; it is information processing carried out over representations. Very young children construct causal models of their world (Gopnik et al., 2004; Gopnik & Wellman, 2012), as do rats, according to some recent research (Blaisdell, Sawa, Leising, & Waldmann, 2006). And contemporary neuroscientists have found evidence that systems of neurons in the brains of rats and other mammals (as well as related areas of the human brain) generate multidimensional "cognitive maps" and continuously form and update conditional action-outcome expectations (Moser, Kropff, & Moser, 2008; Stern, Gonzalez, Welsh, & Taylor, 2010; Tobler, O'Doherty, Dolan, & Schultz, 2007). These processes take place in areas of the brain not directly accessible to consciousness, but fully capable of representational thought, computation, and simulation. The fascinating story of this research and the representationally rich "prediction engine" it has revealed will be told in the chapters to come.

Our fisherman's distinctive way of creating future options is more than simple tool use, which we find in birds and monkeys. How did humans get into this godforsaken frozen clime in the first place? They certainly did not evolve their way into this niche by natural selection. Compared to the animals around them, they lack the endowment of specialized adaptations for winter life. Except, that is, for one endowment, which shortcuts natural selection and makes adaptation to wildly diverse environments possible. Growing fur is not necessary to adapt to the north if one can appropriate the fur of northern animals. Growing sharp claws, spring-like legs, and massive teeth is not necessary to adapt to catching and eating the fish and game of northern winters if one can flake flint into keen blades to carve and tip spears, bend branches and braid grass to make snares, or heat wood shavings with the friction of a fire drill made from sticks and twine to start a fire to cook meat too tough to chew. Creativity, then, in which humans build actively upon statistical learning, can generate the new ideas that become the tools and other artifacts that make new adaptations possible (even as we grow old, as we'll see in Chapter 11).

The power of creativity is familiar to us all. But less obvious are two features even more basic than the creativity we rightly celebrate.

 