
Statistics and Populational Reasoning

Statistical Reasoning as a Tool for Understanding Population

In order for this shift to become possible, it was necessary to have tools to understand population and economy. This involves moving one’s concept of control from the concrete to the abstract, and from things one can see and count to estimation and prediction. This is what Hacking (1975/2006, 1990) calls the “emergence of probability” or the “taming of chance,” and what Poovey (1998) calls the “emergence of modern fact.”

Two important concepts driving educational reforms today are the use of quantitative measures to determine the need for, and the success or failure of, educational initiatives, and the use of notions of “normal” and “abnormal” derived from measurement and the “bell-shaped curve.” Both of these technologies, which now seem to be inevitable, underlying truths of educational reasoning, emerged in specific times and intellectual environments.

As Stigler explains, probability is the development of a “logic and methodology for the measurement of uncertainty” (1986, p. 1). He points out that there have been isolated instances of estimation of probability throughout history. Ancient cultures had discussions of chance as it related to gaming—an area of growing interest for mathematicians. From the twelfth century on, the Royal Mint in Britain maintained quality control on its coins through what was called the “Trial of the Pyx,” collecting a random sample of the coins minted each day in a box (the Pyx). The average weight was calculated and was required to fall within a certain range of tolerance. What marks the difference between such isolated instances of estimation and statistical thinking is that the former is concrete and specific, being applied to one situation only, while statistics is a framework for a “logic of science” (p. 3) that helps ask and answer questions in a multitude of areas. It is the logic and the language of statistics that made possible the quantifying and predicting of resources—not just in concrete terms of what can be seen, touched, and counted but also of abstract and hard-to-quantify resources such as populational attitudes, behaviors, trends, and characteristics—in other words, the very things that comprise “governmentality” or “biopower.” Biopower, for Foucault, is a group of techniques used to control the welfare of a population. Although both governmentality and biopower refer to ways in which the common good is promoted by “encouraging” good behaviors and self-care within a population, governmentality places more emphasis on desirable social, political, and economic behaviors, thoughts, and attitudes, while biopower places more emphasis on factors that relate, whether literally or less directly, to the physical welfare of a population, such as public health.

By this I mean . . . the set of mechanisms through which the basic biological features of the human species became the object of a political strategy, of a general strategy of power. (Foucault, 2007, p. 1)
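The Trial of the Pyx mentioned above is, in modern terms, a sampling-and-tolerance check: draw a sample of the day’s coins, compute the mean weight, and ask whether it falls within an allowed band. A minimal sketch in Python, in which every figure (the nominal weight, the tolerance, and the simulated sample) is invented purely for illustration:

```python
import random

# Hypothetical figures, purely illustrative: a nominal coin weight and an
# allowed tolerance on the sample mean (historically called the "remedy").
NOMINAL_WEIGHT_G = 8.0   # assumed nominal weight per coin, in grams
REMEDY_G = 0.04          # assumed allowed deviation of the sample mean

def trial_of_the_pyx(sample_weights, nominal=NOMINAL_WEIGHT_G, remedy=REMEDY_G):
    """Return True if the sample's average weight lies within tolerance."""
    mean_weight = sum(sample_weights) / len(sample_weights)
    return abs(mean_weight - nominal) <= remedy

# Simulate one day's random sample of minted coins and run the check.
random.seed(0)
days_sample = [random.gauss(NOMINAL_WEIGHT_G, 0.02) for _ in range(100)]
print("Coinage passes the trial:", trial_of_the_pyx(days_sample))
```

The point of Stigler’s contrast is precisely that such a check is concrete and local—a rule for one mint and one kind of object—rather than a general framework for reasoning under uncertainty.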

Poovey (1998) argues that statistics grew out of double-entry bookkeeping at the end of the fifteenth century. By the end of the seventeenth century, politicians were beginning to see it as natural to base decisions on numerical reasoning, or “political arithmetic.” She posits that this new form of political reasoning, based on numbers and facts, appeared more objective and less contentious than earlier forms of rhetoric and argument. Although early proponents of “political arithmetic” believed these methodologies to be “atheoretical,” Poovey argues that statistics emerged as a theoretical discipline in the late eighteenth century, in works such as Adam Smith’s Wealth of Nations. By this move into the theoretical, she means that (proto)statisticians moved from asking “How can one use the statistical information that already exists?” (cited in McConway, 2007, p. 2) to questions of how statistics can generate new information and understandings where none had existed before. This move from the concrete to the abstract, or “theoretical,” involved a shift from interest in documenting “what is” to explanation and prediction—that is, from what is to what will, or could, be. Early statistical proponents like Petty and Leibniz argued that the best way to know and manage a state was to collect information about its inhabitants. Later statisticians like Stewart, Malthus, and McCulloch extended the role of statistical reasoning from merely gathering raw data to using that data to make generalizations and predictions (Hacking, 1975/2006, 1990, 1991).

By the late nineteenth century, it was impossible to think of governing without conceiving of populations in terms of aggregates, generalizations, and probabilities. As Hacking (1990, 1991) argues, the nineteenth century was a time of dual changes in the relationship between statistical data and governing. First, there was an enormous increase in the amount of data collected. Hacking uses the expansion of US census data as an example of how the use of numbers has expanded exponentially.

The enthusiasm for numerical data is reflected by the U.S. census. The first American census asked four questions of each household. The tenth decennial census posed 13,010 questions on various schedules addressed to people, farms, hospitals, churches and so on. (1990, p. 2)

The second, and perhaps most significant, change, however, was in the use of the information gathered to make generalizations about aggregate populations and to predict future trends based on statistics. For example, in 1835 the mathematician Poisson first used the term “law of large numbers,” arguing that phenomena that were irregular and unpredictable in small groups of people would become regular and predictable if the sample grew large enough. During the same period, the sociologist Quetelet argued that human characteristics and behaviors could be displayed on a “bell-shaped curve” (Hacking, 1990, 1991).
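Poisson’s “law of large numbers” can be illustrated with a short simulation—a sketch only, in which the event, its probability, and the sample sizes are arbitrary assumptions: the rate of some individual-level occurrence jumps around erratically in small groups but settles toward a stable, predictable value as the aggregate grows, which is the kind of regularity Quetelet read into his bell-shaped curve.

```python
import random

random.seed(1)
P_EVENT = 0.07  # assumed probability of some individual-level event (illustrative)

def observed_rate(n, p=P_EVENT):
    """Share of n simulated individuals for whom the event occurs."""
    return sum(random.random() < p for _ in range(n)) / n

# Small groups look irregular; large aggregates look regular and predictable.
for n in (10, 100, 10_000, 1_000_000):
    print(f"n = {n:>9,}: observed rate = {observed_rate(n):.4f}")
```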

A final development in understandings of statistics crucial for making human capital theory thinkable is the concept of risk/benefit analysis. Donzelot (1991) describes the invention of the concept of “risk” in terms of the insurance industry and actuarial science, but the same reasoning applies to education and other social technologies intended to increase social wealth or improve the general well-being of a population being governed. This involves the realization that misfortunes, or undesirable events—again idiosyncratic and unpredictable at the individual level—are calculable and manipulable on a large scale. Donzelot identifies three core components of the idea of risk, which he links to the insurance industry but which can equally be applied to other social arenas. According to Donzelot, the first concept underlying risk is that it is calculable. Underlying every risk/benefit analysis, or attempt at social engineering intended to avoid or minimize undesirable consequences, is the idea that we can predict and manipulate risk. The second idea is that calculable risk is a social or populational phenomenon. Social programs, from educational reform to public health initiatives, rely on the basic concept that risk can or should be manipulated on an aggregate level. Finally, there is the idea that risk is a form of capital, with the potential for gain or loss. Again, this can be applied literally to the insurance industry, which gains or loses money directly depending on the ratio of income taken in to indemnities paid out. However, social/educational programs operate on the same calculus. As we will see later, this concept of risk and gain as capital in the educational arena is seen in very explicit terms in the concept of human capital development.
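Donzelot’s three components can be restated as a small actuarial calculation, sketched here with invented figures (the premium, claim probability, claim size, and pool size are all assumptions): risk is calculable as an expected loss over a population; the calculation only stabilizes at the aggregate level; and the gap between premiums taken in and indemnities paid out is the capital gained or lost.

```python
# Illustrative actuarial sketch: every figure here is an assumption, not data.
PREMIUM = 500.0            # premium collected per policyholder per year
CLAIM_PROBABILITY = 0.02   # chance that a given policyholder files a claim
AVERAGE_CLAIM = 20_000.0   # average indemnity paid per claim
POOL_SIZE = 100_000        # number of policyholders in the insured pool

premium_income = POOL_SIZE * PREMIUM
expected_payout = POOL_SIZE * CLAIM_PROBABILITY * AVERAGE_CLAIM

# Risk as capital: the ratio of income taken in to indemnities paid out.
print(f"Premium income:         {premium_income:>13,.0f}")
print(f"Expected indemnities:   {expected_payout:>13,.0f}")
print(f"Expected surplus:       {premium_income - expected_payout:>13,.0f}")
print(f"Income/indemnity ratio: {premium_income / expected_payout:>13.2f}")
```

For any single policyholder the outcome remains idiosyncratic; only over the pool does the gain or loss become calculable and manipulable, which is the same calculus the chapter attributes to social and educational programs.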

Populational reasoning is a discursive valuation of people through “normalization”: “Scientific research was a critical strategy used to construct truth about who was normal and which children or families were perceived as abnormal and in need of different social interventions” (Bloch, 2003, p. 206). This valuation has become pervasive in late twentieth- and early twenty-first-century policies and practice, based on an increase in scientific research and a growing concentration on “risk”: on who might gain and who might lose, on the probabilities and statistics attached to those ideas for different “categories” of the population, and on the continuing categorization and assessment of the normal child.

 