
Philosophy and Science Policy: A Report from the Field

One way to conceive of field philosophy is to compare it to other areas of philosophy. Experimental philosophy (x-phi) provides a fine contrast. Both x-phi and field philosophy (f-phi) go beyond the traditional bounds of the discipline of philosophy and may be considered inter- or trans-disciplinary fields (Holbrook 2013). Both may be characterized as experimental. Yet, what it means to be experimental differs markedly between the two approaches. X-phi is experimental insofar as it uses non-philosophical methods (borrowed from science, often cognitive science) to perform (so-called ‘actual’) experiments designed to answer traditional philosophical questions (cf. Appiah 2007). Field philosophy, on the other hand, uses traditional philosophical methods (conceptual analysis, dialogue, hermeneutics, etc.) to answer questions raised by non-philosophers (cf. Frodeman and Briggle 2016). In the case of x-phi, philosophers become quasi-scientists in order to perform experiments. In the case of field philosophy, the experiment is to determine whether the philosopher qua philosopher might be helpful in ways others have not considered.

This chapter discusses an experiment in field philosophy performed by the Center for the Study of Interdisciplinarity (CSID) at the University of North Texas (UNT) from 2008 to 2012. CSID itself was something of an experiment; it existed between August 2008 and December 2014. I was Assistant Director and one of the editors of this book (Frodeman) was Director (Holbrook and Frodeman 2008). I am far from a neutral observer. In what follows, I aim to present CSID’s experiment as it happened, from my own vantage point. The chapter is not an autobiography, but rather a report from the field. The experiment reported on is a foray on the part of philosophers into the field of science policy.

The Science of Science Policy Meets the Philosophy of Science Policy

At the 30th Annual AAAS Forum on Science and Technology Policy in April 2005, President George W. Bush’s Science Advisor, Jack Marburger, issued a call for “a new interdisciplinary field of quantitative science policy studies” (Marburger III 2005). Marburger’s argument was simple: if science policy studies is a branch of economics, then science policymakers ought to have access to the same types of data, models, and econometric tools that economists routinely provide to policymakers in other realms. Marburger’s argument was also effective: the US National Science Foundation (NSF) soon initiated the Science of Science and Innovation Policy (SciSIP) Program. SciSIP was designed to create a new field—the Science of Science Policy—that would provide policymakers with just the sorts of data, models, and tools that Marburger had requested.

SciSIP officially began soliciting proposals in FY 2006, and the FY 2007 request for proposals (NSF 07-547) explicitly solicited proposals to develop Models (MOD) or Tools (TLS) to assist science policy decision making. All proposals were required to pick one of these two areas of emphasis on which to focus. The then proto-CSID team of Frodeman (as Principal Investigator, PI) and Holbrook (co-PI) was joined by co-PIs Carl Mitcham (Colorado School of Mines), William Moen (UNT), and Warren Burggren (UNT) on a proposal to develop an assessment of existing models for integrating considerations of the broader societal impacts of proposed research into the grant proposal review process. “SciSIP MOD: A Comparative Assessment of Models for Integrating Societal Impacts Concerns into the Peer Review of Grant Proposals” (Award #0830387, $393,688.00) was funded by NSF’s SciSIP Program in September 2008. The now fully-fledged CSID team shortened the name of the project to the Comparative Assessment of Peer Review (CAPR, pronounced like the edible flower bud of Capparis spinosa—caper—and evoking the homonymous playful activity).

CAPR was the natural extension of an earlier grant (Award #0649573, $24,962) that funded a workshop co-organized by Frodeman, Mitcham, and myself on “Assessing the Broader Societal Impact of Funding Techno-Scientific Research,” held in Golden, Colorado in August 2007. The workshop focused on NSF’s own attempt to integrate societal impacts concerns into the peer review of grant proposals using its Broader Impacts Merit Review Criterion. This chapter is not the place to rehash the history of NSF’s Broader Impacts Criterion. That work has already been done (Holbrook 2005, 2012). However, it is important to describe some of the background here in order to understand the issues that CAPR was designed to address.

I had begun working for Frodeman as his postdoc on January 3, 2005, having received my PhD in philosophy from Emory University in August 2004. My training had been in the history of philosophy, and I had just defended my dissertation—a Nietzschean answer to Alasdair MacIntyre’s critique of Nietzsche as a moral philosopher. At the time, I had absolutely no experience or training in working with scientists, engineers, and policymakers—qualities that had been specified in the advertisement for the job. Why Frodeman actually hired me may forever remain a mystery, even to him. However, I am firmly convinced that my willingness to throw myself wholeheartedly into the experimental nature of field philosophy set me apart from the other applicants for the position. I would later classify this sort of event under the rubric of serendipity, which for me goes beyond blind luck and entails sagacity—or a sort of practical wisdom—regarding opportunity (Holbrook 2017, 2019).

On my first day of work, Frodeman was busy. He had just begun his second semester as Chair of UNT’s Department of Philosophy and Religion Studies. My job was to make his job easier. We met early that morning and he outlined several topics he wanted to address but, at that time, lacked the leisure to pursue fully. Among the options was what to make of NSF’s Broader Impacts Criterion. Having been on several review panels, Frodeman had seen reviewers who had little difficulty judging the intellectual merits of proposals react with puzzlement when asked to assess proposals on their potential benefits to society. Why was that?

At that time, I was completely ignorant of NSF and its review process. However, the issue struck me as interesting, and one of the things Frodeman wanted me to learn was how to put in grant proposals to NSF. So, this topic seemed like a win-win—not only would I strike out on an unexplored path (unexplored for me, that is, although, as it turned out, no one had yet paid any scholarly attention to the issue), but I would also learn something about how proposals were reviewed at NSF. I began working on the problem that day.

The first thing I did, of course, was to consult the Philosophers’ Index. Unfortunately, at that time UNT’s library did not have access to it. While I waited to get access, I devoted myself to reading in the fields of Science and Technology Studies and the Policy Sciences. Alongside all this reading, I spent a couple of weeks finding my way around NSF’s website. What had seemed to me at the time to be a great obstacle—not being able to begin a literature review with my preferred scholarly database—turned out to be an incredible stroke of luck. I did my best to take full advantage of it.

I discovered that NSF was quite aware of the issue Frodeman had noticed. In fact, awareness of the problem of both proposers and reviewers neglecting NSF’s Broader Impacts Criterion and favoring its Intellectual Merit Criterion was widespread throughout the Foundation. The National Academy of Public Administration (NAPA) had issued a report for NSF in 2001 that described the Broader Impacts Criterion as raising “philosophical issues” for many reviewers, but it had concluded that it was too early at that time to issue a final judgment on the effectiveness of the criterion (NAPA 2001; Frodeman and Holbrook 2005). The time seemed ripe for a re-examination of the problem. I suggested to Frodeman that we should put in a grant proposal to get NSF funding to support our research. His first reaction was that it was crazy to put in a proposal that quickly (the due date was February 22nd, about a month from when I suggested it); his second reaction was to tell me to go for it.

I submitted a proposal that February, which was declined. I submitted a revised proposal in August, which was also declined. I submitted a re-revised proposal the following February (2006). That proposal was also declined. All had proposed that I undertake research on the topic of NSF’s Broader Impacts Criterion on my own, with the support of an NSF Scholar’s Award. Finally, the NSF Program Officer (Rachelle Hollander) suggested that, instead, I put in a proposal for a research workshop on the topic. Despite the fact that I had published the first scholarly article on the subject in October 2005, there was some doubt among reviewers that the topic was of sufficient scholarly interest. Having a group of scholars agree to come to a workshop on the Broader Impacts Criterion would lend more scholarly weight to the topic. I submitted a proposal for such a workshop in August 2006 and was happy to hear six months later that the proposal had been funded.

Although I was PI on this grant, with Frodeman as co-PI, both Frodeman and Carl Mitcham took equal parts in planning the workshop. This made sense, given that I had never planned a workshop before. This is not to say that I had no opinions or kept quiet. I did, however, defer to my more experienced colleagues (learning a great deal in the process). We decided together which participants to invite; Mark Frankel, then Director of the Scientific Freedom, Responsibility, and Law Program of the American Association for the Advancement of Science (AAAS), agreed to co-sponsor the workshop and also suggested several names.

Looking back on the participants now reveals a who’s who list of stars, veterans, and up-and-comers. In addition to Frankel, Frodeman, and Mitcham, participants included: Sheila Jasanoff, Steve Fuller, Nick Steneck, Susan Cozzens, Stephanie Bird, Dave Guston, Clark Miller, Erik Fisher, Heather Douglas, Adam Briggle, and Kristen Intemann, among others. Several of these later contributed articles to a special issue of Social Epistemology dedicated to the topic of NSF’s Broader Impacts Criterion (Holbrook 2009). They were joined in the special issue by Barry Bozeman, Craig Boardman, Bruce MacFadden, Nancy Tuana, Carol Lynn Alpert, Melanie Roberts, Simone van der Burg, and others.

Of particular note is the number of people mentioned above who self-identify as philosophers—no fewer than seven, and one could plausibly count a few more of the STS (science and technology studies) types. By 2009, I think it is reasonable to claim that the philosophy of science policy—which had been spearheaded by Frodeman and Mitcham (see, for instance, Frodeman et al. 2003, 2004; Mitcham and Frodeman 2004)—was fully-fledged. This is not to denigrate the work of other philosophers who had done what I would term policy-relevant research, some of it published before 2003; but most other philosophers doing policy-relevant research—Heather Douglas (1998), for example, wrote a dissertation relevant to science policy—directed it mainly toward other philosophers, or focused on one field in particular, such as bioethics. Field philosophy is, I think, actually an offshoot of the push for the philosophy of science policy (perhaps one that has begun to take root and flourish beyond the original plant). But field philosophy applies everywhere, unlike bioethics or even the philosophy of science policy, which are limited to particular areas.

The question of audience looms large for field philosophy, in part because of lessons learned from our experiences in bringing the philosophy of science policy into dialogue with the science of science policy. When I first began researching NSF’s Broader Impacts Criterion, I believed that our audience consisted of scientists and engineers who were putting in and reviewing proposals for NSF grants. The more research I did, the more I heard that scientists and engineers were confused by the Criterion and just could not figure out its meaning. There were, in fact, legitimate philosophical difficulties involved in attempting to come to terms with the notion of Broader Impacts. Perhaps the most obvious, and most vexing, problem was the fact that Vannevar Bush (1945) had defined the type of research that NSF funds—so-called ‘basic’ research—in opposition to its impact on society. Although basic research was necessary for all sorts of benefits to society, Bush (1945) wrote: “Basic research is performed without thought of practical ends.... Basic research is a long-term process—it ceases to be basic if immediate results are expected on short-term support.” So, it was understandable that scientists had difficulty understanding why NSF was asking them to discuss the practical ends of research performed without thought of practical ends.

Yet, whenever I presented on my research at professional conferences, the people who asked the most questions and stayed afterward to talk and give me their business cards were usually not perplexed scientists. One might suppose that this was due to the fact that I was making presentations at philosophy conferences—but one would be wrong. Thinking that our audience was scientists and engineers, I presented at scientific conferences. But the people most interested in our research were actually those who worked at science and technology funding agencies around the world. What gradually became obvious to us was that it was not scientists and engineers, in general, who wanted to understand issues surrounding NSF’s Broader Impacts Criterion. It was people working at NSF and other funding agencies who had essentially followed NSF’s lead in asking proposers and reviewers about the broader societal impacts of the research they funded!

We had made an easy mistake and misidentified our audience. Once we finally realized that science and technology policymakers, rather than people working as scientists and engineers, had a problem they wanted our help with, we shifted our focus. We had previously been funded by the Ethics and Values in Science and Technology Program at NSF to do scholarly research on the Broader Impacts Criterion. We now saw clearly that it was policymakers who were asking for our help to understand how best to incorporate societal impacts considerations into the grant proposal review process. So, the idea for CAPR was born, and we decided to submit the proposal not to the Ethics and Values Program, but to SciSIP.

 