
Intuition and Information

We go through much of our lives with a sense of what we're doing and why, a feel for what to make of a situation, an inkling of how well a situation is going, or a hunch of how likely a proposed solution is to work. We often call such states of mind intuitions, a venerable term whose Latin roots mean "seeing within." "Seeing" because intuition seems to be more like immediate perception than like deliberate reasoning or calculation. It is a spontaneous sense, often unbidden and quite quick, of how things look or seem. This is attested by the sensory words we use to describe intuitions: a sense, feeling, or impression. Whereas intuition, like perception, often takes an external object, the sensory impression is to a large extent supplied from "within." This presumably is why intuitions are so often described as "gut feelings," "instinctive responses," or a "feeling in the bones."

Moreover, also unlike the senses, intuitions can occur even with objects that are absent or abstract. We can have an intuition about a proposed action, a hypothesized scenario, or a purported explanation. Similarly, intuition, unlike the five senses, is not confined to a particular palette or range. Intuition can present its objects as seeming to have an indefinite array of descriptive and evaluative features. You can have a reliable intuitive sense that the boss' proposal for boosting employee morale is going to make things worse, that a person is timid but strong underneath, that you should pass the ball rather than drive to the basket, that a choice of words for a condolence letter is a bit too abrupt, that the music you are improvising needs to shift back into the major key, or that a sales pitch is simply too good to be true. Despite the visceral language we use to describe them, and even though intuitions can certainly give us visceral feelings, it is difficult to believe that such specialized and nuanced notions actually originate from the workings of the stomach or intestines.

Intuitions often come unbidden, and we can seldom explain just where they came from or what their basis might be. They seem to come prior to judgment, and although they often inform judgment, they can also stubbornly refuse to line up with our considered opinions. For example, I might appeal to a moral principle to reason my way into taking an action and yet find that the action still does not feel right at all and feel, too, that it would be a mistake to ignore this feeling (we will discuss cases like this in Chapter 9, on morality). Yet if asked to provide a reasoned justification for this intuitive resistance to judgment, I might find I have little or nothing to say. As Aristotle pointed out in his Posterior Analytics (trans. 1941, II.19) and Wittgenstein argued in his Philosophical Investigations (trans. 1953, section 98), reasoning can never be wholly self-standing, because it always starts from premises and relies on inferential rules or relationships. On pain of regress, reasoning itself cannot supply these. So a faculty other than reasoning must supply our sense that some starting points are more legitimate, plausible, or promising than others, or else reasoning will never get underway. Intuition, Aristotle claimed, is the answer to how we obtain these "first premises" and "first principles."

But that forces the question: What are these intuitions, and why should they have any authority to guide how we think or act? Couldn't an intuition simply be an internalized prejudice stemming from one's upbringing or culture, or from a basic, incorrigible impulse of human nature—the product of evolutionary or historical accident?

We do not treat all intuitions as on a par. Some people, we know, have better intuitions than others. We see this in the reliability of these intuitions in guiding their judgments or conduct. Typically, those with good intuitions have had a lot of relevant experience—not just as observers, but as engaged and usually skillful and successful practitioners. And typically they show sensitivity to situational variation as well as some creativity and ability to think ahead. Moreover, someone can have good intuitions in one domain but not another: Political reporters might have good intuitions about whether a bit of legislation will pass, but poor intuitions about how to get on with their fellow journalists. In these respects, intuitions appear to be much like many other forms of knowledge.

What differentiates intuitions from many other forms of knowledge is that we typically acquire them implicitly rather than through explicit instruction, and we often are unable to explain how we arrived at them. Budding mathematicians can be taught a large number of proofs in full detail, along with successful proof techniques. But the intuition they will need, a reliable sense of when a given technique can be applied in an unexpected way or to an unanticipated class of problems, is not likely to come from classroom instruction alone. Indeed, it often seems that intuitions, rather than explicit inferences, take the lead when one makes a significant breakthrough or achieves a novel synthesis, and the processes underlying the intuition can elude explanation even after the fact. In these respects, intuition seems more like skill, expertise, or creative talent than mastery of method and fact. But intuition isn't only for the most skilled or creative. We all have some measure of intuition, and in nearly all of us, that intuition is more reliable in the areas where we'd expect it to be: where we have the greatest experience and background knowledge.

But surely there must be more to be said about where intuitions come from and why they have the features they do. We believe that the answer is closely linked to the operation of prospection; understanding the one will help us understand the other.

First consider one of the areas where the idea of intuition has been most extensively used, linguistic intuitions, which are the spontaneous judgments of native speakers about the grammaticality or acceptability of sentences. For example, a native speaker will immediately distinguish:

  • (1) Oscar pushed the toy car along the floor.
  • (2) *The floor along the toy car Oscar pushed.

To native speakers, (2) will sound wrong or seem odd, but can they say what principles of language (2) violates? Linguists have argued that speakers' open-ended, spontaneous, intuitive ability to distinguish well-formed from ungrammatical or otherwise anomalous sentences—their implicit knowledge of their native language—must be generative, a tacit mastery of the language's rules and regularities that can extend indefinitely to many novel cases and is not simply a repertoire of acquired sentences or patterns.
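
To make the idea of a generative competency a little more concrete, here is a minimal sketch in Python. The toy grammar is our own invention, not a serious model of English or of what speakers tacitly know; the point is only structural: a small, fixed body of rules, applied recursively, yields an open-ended supply of well-formed strings that were never individually stored.

```python
import random

# A toy, invented grammar used only to illustrate "generative" knowledge.
# The recursive rule NP -> NP PP is what makes the set of sentences unbounded.
GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"], ["Det", "Adj", "N"], ["NP", "PP"]],
    "VP":  [["V", "NP"], ["V", "NP", "PP"]],
    "PP":  [["P", "NP"]],
    "Det": [["the"], ["a"]],
    "Adj": [["toy"], ["red"]],
    "N":   [["car"], ["floor"], ["child"]],
    "V":   [["pushed"], ["saw"]],
    "P":   [["along"], ["under"]],
}

def generate(symbol="S"):
    """Recursively expand a symbol, choosing one rewrite rule at each step."""
    if symbol not in GRAMMAR:                 # a terminal word: emit it
        return [symbol]
    rule = random.choice(GRAMMAR[symbol])     # pick one rewrite of the symbol
    return [word for part in rule for word in generate(part)]

for _ in range(3):
    print(" ".join(generate()))   # e.g. "a child pushed the toy car along the floor"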

A tacit mastery of this kind could explain another notable fact: Native speakers can produce an indefinitely large number of sentences without ever thinking about grammatical rules or principles, and the sentences they produce will be comprehensible to other speakers. So the underlying competency must be similar in important respects across individuals, and it must be something that nearly anyone can acquire and not the preserve of rarefied experts. After all, virtually everyone growing up in a culture becomes fluent and capable of novelty in their speech and understanding. The human mind, it seems, must be set up in such a way that a shared, structured, largely unconscious generative competency can be acquired simply on the basis of ordinary experience. And this complex competency can produce at the conscious level spontaneous novel intuitions and fluent, inventive speech at the speed of thought without special effort or calculation.

Language is perhaps special, and some have argued that it depends on dedicated brain modules or faculties, although increasingly language learning is seen as integrated with other cognitive capacities (for discussion, see Chater & Manning, 2006; Fodor, 1983; Griffiths, Steyvers, & Tenenbaum, 2007; Hauser, Chomsky, & Fitch, 2002; Markson & Bloom, 1997). And what about all the other domains in which people appear able to develop spontaneous intuitions capable of generating or comprehending novel forms of thought and action? Do each of these domains require special modules, or might they be manifestations of a more general capacity to acquire through experience complex, largely unconscious, structured systems of knowledge that confer "generative" competencies upon the agent (Tenenbaum, Kemp, Griffiths, & Goodman, 2011)?

Because intuition is spontaneous and rapid, there has been a tendency to think of intuitions as direct, immediate, non-calculative, non-analytic reactions, not the product of an underlying, complex computation within a structured system of knowledge or information. Certainly intuitions often seem to arrive as completed wholes, but perhaps this is just a "user illusion," a reflection of how they strike the conscious mind, not an indication of their true nature.

To pursue this thought, think first of your reliable companion, your laptop or tablet computer. Let's say you're sitting at the kitchen table, staring at the screen, working through your email. Listed in the crowded inbox is a message from someone you know casually, bearing the subject line, "Invitation to my place on Saturday." Your heart sinks, but you go ahead, click on the message, and read the invitation to a gathering of friends your colleague is hosting. And now you don't know how to respond. You immediately feel both that you'd prefer not to attend and that it would be a problem to say no. Your relation to this person hasn't been easy, making this invitation a matter that requires some delicacy. You sit motionless, letting the question roll around in your mind, with no immediate answer popping up.

What about your computer? Because you are not typing, there is no new information to process, so it should be idle, waiting for you to make something happen. But if your computer has an indicator light for hard drive activity, or if you open the resource monitor, you will see signs of constant activity. Apparently underneath its calm surface and sleek casing, the computer is engaged in some intensive activity (indeed, the machine's cooling fan might just have come on).

What's happening is that the machine's own programs are running, indexing files, updating data and programs, establishing links among related data and programs, reorganizing disk space, allocating resources among the many housekeeping programs, talking to the network, checking for viruses or reminders, and so on. It is a beehive of activity, and like a beehive, the activity is structured and stratified in that many different, interdependent things are going on at once in ways that enable the whole system to continue to function effectively and efficiently. The product of this beehive is not honey but ability, for example, the ability to provide you with prompt answers to search queries you've never made before, to spontaneously send you a reminder of an appointment you've forgotten about, or to make information from one program available to another, seemingly effortlessly.

Still stumped for a response to the invitation, you decide to use the search function to call up past exchanges. The "Loading" icon barely has time to appear before a lengthy, chronologically ordered list of messages, marked for whether you responded to them, and perhaps tagged for what the email program calls "importance," shows up on your screen. (Was the message sent directly to you? Was it sent by someone you often correspond with?) Compare this almost instantaneous search outcome with the old days, when you would have had to rummage around in your papers looking for your past correspondence—if you even kept it. If you were well organized or lucky enough, you might find the relevant folders or piles, and by leafing through them and scanning the pages, locate the messages needed to answer your question about whether or when you had turned down your colleague's previous proposals. And when you'd finished the search, whether you found what you wanted or not, you'd have to refile all the messages and folders in accord with your organizational scheme (to the extent that you had one). Email, seemingly, is a great improvement over your own memory system.

In fact, in the old days, it was much more likely that you wouldn't have tried to undertake a systematic search of the past at all. Instead, you'd have taken a shortcut, relying on what Paul Slovic and colleagues call the "affect heuristic" (Slovic, Finucane, Peters, & MacGregor, 2007). You'd ask how you feel about attending the gathering, but also whether you feel you've been a bit too resistant to the host's previous requests, how you'd feel about saying no this time, and whether it feels on balance as if this time you'd better say yes.

However, let's ask whether email gave you the answer you needed. It served you a well-ordered list of past messages, tagged in a way that's rather unrelated to your current concern. But what are you to do with these messages? It would be just as cumbersome to sort through them as it would be to leaf through your old paper files. You might have better luck figuring out some particular facts, but they are unlikely to tell you whether you've been excessively negative. You will soon tire of trying to sort through exchanges and find you still have to ask yourself how you feel about what you've ascertained, whether it seems adequate for a decision, whether you'd better err on the side of caution, and so on. In short, you're right back with the affect heuristic.

So, what does feel right? The idea of spending an evening with this colleague and his friends holds no appeal; it seems to you that they drink more than they can hold, and the jokes they tell leave you a bit queasy. Your host would soon notice that you're not enjoying yourself, but just try to fake enjoyment to a colleague who already seems to view you with some suspicion. Still, it feels like a bad idea to say no. And then it occurs to you that you recently received a promotion, while he was passed over. It becomes immediately clear that you really can't say no. You feel too keenly how that would look to him, and you lose heart about finding some excuse that you sense he almost certainly wouldn't find convincing. So you type, "Sure—Saturday works. What time were you thinking of?" With a sigh, you push "Send."

In making your decision, you didn't do an elaborate calculation, but rather proceeded in a largely intuitive manner, responding to the thoughts that came to you as they emerged, not working out all the possibilities, costs, and benefits, but following your feelings as an "affective shortcut." But remember your computer's busyness while you mulled over your colleague's invitation. Perhaps your brain was not taking a shortcut after all, but rather the thoughts that came to your conscious mind and the feelings you experienced reflected underlying processes that were keeping track of possibilities, costs, and benefits. When you tossed the question into your mind to let it roll over a bit, perhaps you were letting your mind set itself to work, pulling in relevant information and memories, running scenarios, taking up viewpoints—making you feel at one moment inclined to say no, at the next, to say yes. This tacit churning through relevant, conflicting experience and information could have begun the instant you saw the subject line of your colleague's email and would explain why, almost as soon as you got to the end of the subject line, you already were experiencing a mixture of feelings and hesitancy about responding. This churning, even if not under your direct control, need not be random. It could, like your computer's indexing system, follow connections with "importance," depending on who wrote the message, what relationship you have with that person, what sort of thing the message is asking you to consider, what responses might be available to you, and so on. In this way, while you let your thinking take its own course, your mind could be chasing down paths of strongest relevance and so quickly cut through the mass of data. Within a moment, a range of conflicting considerations is coming into conscious awareness, and soon afterward, a fact emerges from memory with enough weight to settle the matter: his failure to be promoted while you succeeded. This is not bad for aimless mulling.
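
One way to picture this kind of relevance-driven churning, purely as an illustration and not as a claim about neural mechanism, is as activation spreading through a weighted network of associations. The items, links, and weights in the sketch below are invented for the email example; the point is only that a cue can surface the most relevant stored material without anything like an exhaustive search.

```python
# A purely illustrative sketch of relevance-weighted retrieval: activation
# spreads outward from the cue along weighted associative links, and the most
# strongly activated items surface first. All items and weights are invented.
ASSOCIATIONS = {
    "invitation from the colleague": {"past refusals": 0.9,
                                      "his taste in jokes": 0.6,
                                      "free Saturday evening": 0.3},
    "past refusals": {"he already views me with suspicion": 0.8},
    "he already views me with suspicion": {"my promotion, his being passed over": 0.9},
    "his taste in jokes": {"feeling queasy at his parties": 0.5},
}

def spread_activation(cue, steps=3, decay=0.8):
    """Propagate activation from the cue for a few steps; return ranked items."""
    activation = {cue: 1.0}
    frontier = [cue]
    for _ in range(steps):
        next_frontier = []
        for item in frontier:
            for neighbor, weight in ASSOCIATIONS.get(item, {}).items():
                a = activation[item] * weight * decay
                if a > activation.get(neighbor, 0.0):
                    activation[neighbor] = a
                    next_frontier.append(neighbor)
        frontier = next_frontier
    return sorted(activation.items(), key=lambda kv: -kv[1])

for item, level in spread_activation("invitation from the colleague"):
    print(f"{level:.2f}  {item}")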

The personal computing revolution did not arise solely from the development of smaller and smaller chips capable of larger and larger numbers of operations per second, or even from this plus the development of amazingly complex and powerful programs. What opened the door of the computer world to those with no knowledge of engineering or programming was the development of the graphical user interface (GUI), for example, the familiar desktop or start screen of your computer or tablet. In the old days of mainframe computing, conducting a search required writing computer code to set the search parameters and instruct the machine about the sorting or selection function that was to be performed on the data recovered, punching the code onto cards, debugging the code and checking its syntax, adding job-control cards to allocate core processing space and to designate the tapes and drives to be used, physically mounting the tapes (which could not all be mounted at once, and which themselves had to be updated regularly by taking time to run special-purpose maintenance programs), and formatting the desired output. Now the user need only type a word or two in a search box to receive a nearly instant answer.

A good GUI is said to be intuitive, because it allows users to find and focus on what most interests them, getting the results they want with the least frustration or need for insight into the inner workings of the machine while the device handles the rest.

Think of human intuition, then, and the conscious experience in which it appears, as the brain's GUI. An intuitive "sense" provides a readily recognized and easily used summation of what can be synthesized from the resources available to the brain, which include recent experiences, relevant memories, preferences, social norms, degrees of uncertainty, commitments, knowledge of particular persons and situations, and of the social and physical world in general, ability to imagine options, motivational force, and so on. These all must be brought together if you are to make the best use of your brain's resources to solve the problems you face, but you do not need to see anything like the totality of this information if instead you can be responsive to its upshot through a sense of what's at stake, what's more important or urgent, what would be a good or bad outcome, what would be risky, and so on.

But isn't it unrealistic to imagine that the non-conscious brain could be racing through large amounts of information, projecting alternatives, keeping track of the relevant benefits and costs, and so on, fast enough to produce in some organized way that distinctive set of feelings and thoughts you experienced as you read your colleague's message?

For a start, the brain need not start from scratch for each search, any more than your computer, or Google, does. It is the job of an intelligent operating system or search engine to keep at work tracing connections, updating information, tracking and predicting the demands made upon it, and the like, even when you are busy on the GUI clicking through mildly distracting blog posts. A tiny hint of this capacity in the human brain emerged in 1984, when Lynn Hasher and Rose Zacks found that the mind, seemingly unconsciously, kept track of relative frequencies in the environment, even while engaged in other tasks (Hasher & Zacks, 1984). The explosion of work on the unconscious mind that has taken place since the early 1980s will be discussed more fully in the "Looking Under the Hood" section, but for now it is worth noting that Hasher and Zacks' finding need not surprise. Our sense that the brain can only work on a small number of tasks at the same time and that it cranks along at the pace and in the manner of conscious thought is an illusion generated by never peering behind the GUI of consciousness.

It is astonishing that if we pose to Google the query, "What do you call the little spinning blue circle that appears on the screen when my computer is searching for a file?," it can search the Web and in 1.49 seconds offer 240,000 results. But Google didn't wait until the question was sent to link together information relevant to answering a query about file searching, waiting, and the icons used to symbolize this. It is the bread and butter of a search engine to work constantly setting up such relations among potentially relevant items while no one is asking, and thereby to have answers at the ready when someone does ask. The more queries the search engine receives and processes and the more it tracks the users' responses to the results it delivers, the better it gets at creating links people are most likely to be curious about and supplying answers they are most likely to read. (Another search engine, Bing, given the same question, arrived at only 94,000 items in about 1.5 seconds.)
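
The trick behind such instant answers is that almost all of the work is done before any question is asked. A toy version of the idea, sketched below with a made-up three-document "web," is the inverted index: for every word, keep a precomputed list of the documents containing it, so that answering a query reduces to looking up a few lists and intersecting them. Real search engines are vastly more elaborate, but the ahead-of-time character of the work is the same.

```python
from collections import defaultdict

# Toy illustration of why answers can be "at the ready": the index is built
# before any query arrives, so answering is just lookup plus set intersection.
documents = {
    1: "the spinning blue circle shown while the computer searches for a file",
    2: "how to remove the spinning wait cursor",
    3: "the waiting icon is sometimes called a throbber",
}

index = defaultdict(set)
for doc_id, text in documents.items():        # done ahead of time, offline
    for word in text.split():
        index[word].add(doc_id)

def search(query):
    """Intersect the posting lists of the query words."""
    postings = [index.get(w, set()) for w in query.split()]
    return set.intersection(*postings) if postings else set()

print(search("spinning blue circle"))         # -> {1}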

But Google's accomplishment is less impressive than what your brain does all the time. For a start, Google wasn't selective enough. The top items proffered were about how to get rid of the spinning blue circle and (of course) related complaints about Windows. The sought-after name, "throbber," wasn't near the head of the list, and it took a lot of picking through the items Google retrieved to find it. If "semantic memory," which is our memory for concepts, words, names, and meanings, were this cumbersome, then we'd never be fluent in speech.

But calling up words and names is a simple matter compared to what ran through your mind when you saw the invitation message in your inbox. In about the time Google arrived at a cumbersome, long list of items with some relevance to our question, your brain recognized the name of the person sending it, called this particular person to mind, interpreted the words in his subject line, began to anticipate the content of the message, set in motion conflicting ideas and feelings reflecting past experience with this person, and generated a sense of uneasiness about how to respond, which cued you to the delicacy of the situation and the need for some thought. That is a tremendous amount of information and evaluative assessment to locate, retrieve, and pack into your ongoing experience in such a short period, and it happened in a much more effectively focused manner than Google's answer to our query about the little blue circle's name.

How could this be possible? The adult human brain has some 80-90 billion neurons, each of which can project to hundreds or even thousands of others, resulting in trillions of synaptic connections. Moreover, because processing can be taking place in many brain regions at once, the effective processing rate of the "human computer" is estimated to be in the range of 80 billion to 15 trillion action potentials per second. The estimated storage capacity of human memory ranges from 10 to 100 terabytes of data, though some estimates range up to 2.5 petabytes (i.e., 2.5 million gigabytes). For comparison, 2.5 petabytes is sufficient to store 300 years of continuous color television, if that's your idea of a good time (Reber, 2010).1 These estimates are, of course, speculative, but even so, they do not include the additional processing and storage capacity that might depend on astrocytes—glial cells in the brain that outnumber neurons and can have tens of thousands of connections involving dozens of chemical pathways, and which also play a role in neuronal activation, long-term potentiation, and memory formation (Henneberger, Papouin, Oliet, & Rusakov, 2010; Suzuki et al., 2011). The result is a system for storage and retrieval that is rivaled in capacity and complexity only by supercomputers made up of a thousand or more high-powered individual computers working in parallel.
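
As a rough sanity check on the television comparison, and assuming a compressed, roughly standard-definition bitrate of our own choosing, the arithmetic runs as follows.

```python
# Rough sanity check of the "2.5 petabytes ~ 300 years of television" figure.
# The implied bitrate (about 2 Mbit/s, compressed standard definition) is our
# own assumption, used only to show the orders of magnitude involved.
capacity_bytes = 2.5e15                            # 2.5 petabytes
seconds_in_300_years = 300 * 365.25 * 24 * 3600
bytes_per_second = capacity_bytes / seconds_in_300_years
print(f"{bytes_per_second / 1e6:.2f} MB/s")        # ~0.26 MB/s, i.e. roughly 2 Mbit/s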

It is often said that there is a sharp trade-off between speed and complexity in mental processing, and this, of course, has to be right. But a brain that is capable of making billions of synaptic connections in a second can handle a great deal of complexity in what constitutes, by normal human standards, a very short time.

Most human thought and action is guided by implicit or intuitive processes rather than explicit deliberation and decision (Bargh & Chartrand, 1999; but see also Baumeister, Bratslavsky, Muraven, & Tice, 1998). This would be a disaster for us if implicit or intuitive thought relied on simple shortcuts without careful attention to the statistics of the world. Of course, the implicit mind isn't cheap. Whether you are focused on a conscious task or letting your mind wander aimlessly, the metabolic activity of the brain remains remarkably constant: the brain burns through 15%-20% of your body's oxygen and calories despite constituting only about 2% of your body weight (Raichle & Gusnard, 2005). Why would we grow a brain with this amazing computational speed and storage capacity and keep it active almost constantly at great metabolic expense only to rely, when doing most of what we do all day, on no more than stimulus-response associations, habits, or shortcuts that avoid extensive calculation?

The answer lies in two seemingly unconnected places: the "server farms" that dot our landscape and the famous remark of Louis Pasteur's that inspiration comes to the prepared mind. Search engines, as we've noted, do not await your query before searching. Unlike a print encyclopedia that offers only fixed, stored answers, search engines are intensely interested in what is going on moment to moment in the world of inquiry, whether this is a matter of monitoring and seeking access to formal publications, informal posts, trends in queries, shopping patterns, or user response patterns. This is possible only by processing, storing, and updating a tremendous amount of information at any one time, and the energy demands of this are fierce. Still, this is the only way to keep up with a rapidly changing virtual community's demands for information. Computers are expensive, and so interlinked systems are designed to allocate and reallocate tasks in a way that keeps the computers operating at a high level, placing a "metabolic demand" on the earth's resources that was recently estimated at 10% of the world's electric power (Walsh, 2013).

The implicit mind likewise makes a high, continuous metabolic demand on the body to fuel the large memory and rapid computations needed to keep up with our information demands—not just in answering queries, but in guiding thought and action without grinding to a halt. During its "default" mode of operation, which is the state the brain enters when task demands let up, it has been theorized that the brain is occupied with consolidating, organizing, and anticipating, much as computers are (Bollinger, Rubens, Zanto, & Gazzaley, 2010; Buckner, Andrews-Hanna, & Schacter, 2008; Lewis, Baldassarre, Committeri, Romani, & Corbetta, 2009). (This topic will be discussed in detail in Chapter 4.) The expensive human brain is also a resource far too valuable in coping with the world to waste or leave idle.

So it isn't unreasonable to imagine that the brain—just as much as your computer or search engine—is an informational beehive full of activity while you sit trying to create a response. The idea that we can often solve problems by not thinking about them explicitly, and instead by letting them percolate through our minds while we work on other tasks or "sleep on it," isn't just a bit of mythology, but one of the most familiar facts of life. Rarely, if ever, do we deliberately work our way step by step to a solution to a problem without relying at some point on ideas or possibilities that spontaneously "occur to us" and "seem plausible enough" to be pursued or on memories or analogies with previous experience that "come to mind" without our knowing to go look for them. Such intuitions and "inspirations" might work, not by magic, but through the same sorts of experience-based, data-intensive, conceptually organized processes that have made search engines our regular companions throughout all aspects of contemporary life.

Although they appear quickly and effortlessly on our screens, we do not think of Google's deliverances as based on "gut feelings," "instincts," "habits," "associative reflexes," or "simple heuristics." The search engine, like the GUI, skillfully masks complex mechanisms and algorithms behind a user-friendly, intuitive surface. Why should we think that the implicit mind runs a metabolically expensive, high-powered computer only to ignore it?

It might seem that emphasizing the metabolic demand of the brain runs contrary to our emphasis on effectiveness and efficiency. What's become of energy saving? But the answer is obvious from the old saw: "If you think learning is expensive, try ignorance." If a brain calculation can generate and evaluate a potential action sequence and eliminate it as having lower expected value than other available sequences in a fraction of a second, this takes energy, but much less energy than would enacting that sequence and learning from experience of its low yield. It is a general feature of well-designed regulators that they do consume energy that could be put to other uses, but the amount of energy is tiny compared to the amount they save. Running the circuits of a smart thermostat for an hour consumes much less power than running a cooling system at a higher-than-needed power for a minute. So the efficiency gain of smart technology is very real, and as we enter the world of the "Internet of things," its share of the world's energy footprint will surely grow, for reasons any hard-nosed accountant will understand. The speed in historical time with which the Internet has spread across the world and into virtually every aspect of our waking lives is matched only by the speed in geological time with which Homo sapiens has overrun the planet and even cluttered up nearby space.
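
To put rough numbers on the thermostat comparison: the wattages below are assumptions chosen only to show the scale of the difference, not measurements of any particular device.

```python
# Illustrative only: assumed wattages, chosen to show the scale of the gap.
thermostat_watts = 2          # a small always-on controller
cooling_watts = 3500          # roughly, a central air-conditioning unit

thermostat_hour_wh = thermostat_watts * 1.0        # energy used over one hour
cooling_minute_wh = cooling_watts * (1.0 / 60.0)   # energy used over one minute

print(f"thermostat, 1 hour: {thermostat_hour_wh:.1f} Wh")   # ~2 Wh
print(f"cooling, 1 minute : {cooling_minute_wh:.1f} Wh")    # ~58 Wh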

What of the second point, which is that "inspiration comes to the prepared mind"? Talk of a prepared mind risks obscuring a subtle but vital distinction. By pausing briefly to consider it, we can perhaps make clearer the difference between Homo sapiens and Homo prospectus as frameworks for understanding the human mind. Consider the most well-prepared intuitive responses, such as those of chess grand masters, elite athletes, top jazz musicians, commanders of firefighting teams, and highly skilled artisans. Psychologists have long been fascinated by the ability of such individuals to size up situations and produce an exceptionally adept response without pausing for conscious deliberation. One influential model of expert intuitive behavior has been that experts acquire, through thousands of hours of experience, a vast repertoire of paired "pattern recognitions" and "motor programs" stored in memory. Such pairs are then recalled and put into effect when triggered by the perception of specific features of circumstances. For example, in an influential essay, the Nobel laureate Herbert Simon wrote, "The situation has provided a cue: This cue has given the expert access to information stored in memory, and the information provides the answer. Intuition is nothing more and nothing less than recognition" (1992, p. 155). And according to the psychologists Gary Klein and Daniel Kahneman (another Nobel laureate), "Intuitions that are available only to a few exceptional individuals are often called creative. Like other intuitions, however, creative intuitions are based on finding valid patterns in memory, a task that some people perform much better than others" (Kahneman & Klein, 2009, p. 521). These stored patterns and motor procedures or routines save the conscious mind from having to build up a response for each situation anew, and that is why the transition from being a novice to being an expert is marked by a gradual "off-loading" of control from effortful, self-conscious thought to effortless, implicit processes. We might think of this as a Homo sapiens model of expert intuition. Expertise is knowledge or wisdom, but in the concrete form of patterns stored in memory, accessible on cue by well-trained recognitional abilities.

This idea of expert or exceptional performance is, however, quite unlike the idea of optimal behavior held by contemporary normative decision theory, the most influential form of which is Bayesian decision theory (Berger, 1980; Jeffrey, 1965; Raiffa, 1974). According to normative decision theory, optimal behavioral responses are achieved by calculation and comparison of the costs and benefits of an array of possible actions, drawing on a continuously updated model of one's situation and its prospects and risks, and selection of the action with the highest expected value at the moment. The action selected might never have been previously performed or observed, and the information and evaluations on the basis of which it was chosen, too, are changing in the face of new information. This would be, if you will, a Homo prospectus model of expertise: Expertise lies in a capacity for dynamically evolving prospective modeling, simulation, evaluation, and choice.
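
In bare outline, the decision-theoretic calculation looks something like the following sketch. The actions, outcome probabilities, and utilities are invented for the email example from earlier in the chapter; what matters is only the form of the computation: weight each outcome by its probability, sum, and choose the action with the highest expected value.

```python
# A minimal sketch of the decision-theoretic picture described above.
# Probabilities and utilities are invented for illustration only.
actions = {
    "decline politely": {"relationship strained": (0.6, -8), "no harm done": (0.4, 2)},
    "accept":           {"awkward evening":       (0.7, -3), "fences mended": (0.3, 6)},
}

def expected_value(outcomes):
    """Sum of probability-weighted utilities over an action's possible outcomes."""
    return sum(p * utility for p, utility in outcomes.values())

best = max(actions, key=lambda a: expected_value(actions[a]))
for a, outcomes in actions.items():
    print(f"{a:>18}: EV = {expected_value(outcomes):+.2f}")
print("choose:", best)   # with these made-up numbers, "accept" wins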

It has long been thought that this second, Bayesian model of expert or exceptional performance, while in some formal sense optimal, is simply not psychologically feasible within the tight time constraints imposed on expert decision-making in actual circumstances. In competitive sports or games, for example, the speed and skill of opponents make it necessary that one respond within a fraction of a second, leaving no time for the simulation and evaluation of multiple alternatives, much less optimal selection among them. Hence the idea of turning expertise into recognition: cue-based recall of stored solutions, which could, if well trained, occur within a split second without requiring conscious deliberation. Experts are those who have more stored solutions, are quicker at spotting cues, and more adept at calling them up and putting them into practice.

If we are right, that is, if Homo prospectus is to portray actual humans realistically, then we should expect the second, decision-theoretical model to underlie truly expert or exceptional intuitive performance, since, if feasible, it is the more nearly optimal of the two. But is it feasible? The point of the analogy we've been making with smart, dynamic, computationally and informationally intensive artificial systems is that, increasingly, we are able to see how it could be. Such systems can perform the needed computations and comparisons in real time, given current levels of computational speed and capacity. The flurry of computation is simply hidden behind the simple, intuitive GUI that is seen by the user.2

Let's now try to bring the analogy closer to home by thinking about a naturally occurring example of intuitive expertise, the product of which is precisely an intuitive GUI—the perceptual system. This system is expert at translating a potentially overwhelming stream of noisy sensory information into the well-organized phenomenon that is our normal perception of the external world. A two-dimensional retinal input that changes every few milliseconds is transformed into a relatively stable, three-dimensional world populated by persisting objects, which appear to us to stand out against the background, and to retain their identity, shape, size, and color despite continually changing perspective, proximity, and incident light. We know from attempts to design artificial object-recognition systems that this requires a tremendous amount of computation and the drawing in of nonvisual information about the world or about the movement of the individual in it. Of course, an expert object-identification system, like your own visual system, does not bother you with this vast array of information and probabilistic computations, but instead provides a cleanly organized, largely ambiguity-free intuitive graphic interface with the external world. Could such expertise take the form of cue-based recall of stored visual responses? Hardly, because the variation we encounter in the world is indefinitely large and dynamically evolving into arrays we have never before encountered. Instead, evidence is accumulating that object identification and tracking in the visual system operates in real time through hierarchically structured networks based on learning algorithms similar to Bayesian inference and resolves ambiguities and inconstancies through processes similar to Bayesian decision theory (Kersten, Mamassian, & Yuille, 2004; Najemnik & Geisler, 2005).
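
The Bayesian style of computation at issue can be conveyed with a deliberately tiny example; the hypotheses, priors, and likelihoods below are invented. A prior over what object might be out there is combined with the likelihood of the noisy sensory evidence, and the normalized product gives a graded posterior that can then feed a decision.

```python
# Toy illustration of Bayesian inference over object hypotheses.
# All numbers are invented; only the form of the computation matters.
priors = {"cat": 0.30, "small dog": 0.25, "plastic bag": 0.45}

# P(evidence | hypothesis) for one ambiguous glimpse: "small, moving, furry-looking"
likelihoods = {"cat": 0.70, "small dog": 0.50, "plastic bag": 0.05}

unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posterior = {h: v / total for h, v in unnormalized.items()}

for h, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"P({h} | evidence) = {p:.2f}")   # cat ~0.59, small dog ~0.35, bag ~0.06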

If our highly structured visual field is the GUI provided by vision, what is the interface provided by expertise? We speak of seeing that a particular action would not be a good idea or sensing that a particular person is sincere, but such things as bad ideas or sincere behavior do not literally show up sensorily, as special colors or textures. Instead, they show up affectively, in the spontaneous feelings we have as we face choices, meet people, and act, or the degree of confidence we have in a belief we've formed or an analogy we make. But how could affect do this?

 