
Where We Are Now: Challenges, New Skills, and Models of Engagement

Our projects have ranged from more theoretical conceptual and ethical work on issues such as privacy, narrative identity, and agency (e.g., Klein et al. 2015; Goering et al. 2017; Klein and Rubel 2018) to more practical policy-oriented work on informed consent (Klein 2016) and ethical guidelines for BCI development and neurotechnologies (Yuste 2017). We have also carried out our own empirical research, using focus groups, interviews, and surveys to consult with important neural engineering stakeholders, including BCI researchers (Specker Sullivan et al. 2017; Pham et al. 2018), end-users of neural devices (Klein et al. 2016), and individuals with disabilities who are considered prospective end-users (unpublished data).

In our normative and conceptual work, we have found that working in collaboration with neuroscientists and neural engineers has helped us to look at some classic philosophical issues in a new light. Consider the following example by way of illustration. Philosophers working on the right to privacy tend to understand it as a matter of control over access to one’s personal information (e.g., Moore 2010). In the current era, privacy seems to be a waning concern. Many people agree to surrender much of their personal information online, through social media, internet retail, navigation apps, etc., and they do so for the sake of convenience and connection. They might, therefore, be said to share their data voluntarily, and to retain the privacy they want (given that they retain control over access; they simply do not value reserving access to their data). BCI devices would seem to offer more of the same, albeit with far greater informational granularity. BCI devices promise regular access to the contents of a person’s brain (e.g., tracking recognizable motor intentions that drive BCI performance, recording a person’s visual experience, or identifying a person’s emotional state based on patterns of neural activity). Insofar as this access is given voluntarily, no violation of privacy would seem to occur. Yet BCI devices crack open the last, untapped source of information about human beings and their internal states. Through interacting with neuroscientists and machine learning experts, we now understand that individual control over access may not sufficiently protect a person’s privacy in these most intimate spheres. Big data mining may be able to make use of data from others who voluntarily share access to their brain recordings, combine it with other available data sets (e.g., computer and phone interaction patterns, retail trails, location data), and extrapolate private (and unshared) brain states with increasing accuracy. Perhaps broader social protection for brain privacy is warranted (Yuste et al. 2017).

The empirical work has also informed our philosophical thinking in productive ways. Consider the issue of privacy again. Putting aside worries about the voluntariness of such informational exchanges, our interactions with prospective and actual users of BCI technology raised concerns about the general adequacy of the framework of privacy as information control. For instance, individuals with amyotrophic lateral sclerosis (ALS) contemplating use of BCI for future communication assistance, those with severe depression receiving deep brain stimulation (DBS), or those with epilepsy having patterns of brain activity recorded and analyzed, rarely expressed privacy concerns in terms of information control. For the individuals with whom we interacted, privacy was primarily understood in terms of relationships. To have privacy was to be able to foster meaningful kinds of relationships (with friends, family, caregivers, etc.). For example, privacy in the use of a BCI communication device in ALS was less about keeping others out of one’s inner mental life (in fact, the fear of many with ALS is being trapped with only one’s inner life), and more about using a device to connect differently with different individuals—to banter with friends, bond with loved ones, or problem solve with caregivers. In people using DBS for severe depression, using the device allowed some to get closer to spouses, parents, or friends. For them, privacy was experienced as the freedom to have the kinds of intimate relationships they wanted (and to which depression was an impediment) or to have less intimate relationships with others (e.g., not needing to make employers aware of occurrent depression).

The empirical work was also a challenge. Because philosophers typically are not trained in qualitative methods, we had to find partners to help us learn the basics of designing surveys, interview scripts, and focus group guides. One of our initial graduate student fellows already had such training, and she was able to guide our early efforts in empirical work. After several years of very part-time support for the group (one month of summer salary for Goering, one day per week of support for Klein) and site visit reports that lauded the ethics group and recognized our limited human resources, we were funded to hire a full-time postdoctoral researcher to take on more of the day-to-day work of the ethics group. In developing our position advertisements, we carefully highlighted our need for someone with qualitative research experience as well as philosophical training.

Although empirical work is not a normal part of a philosopher’s workload, we started the CNT collaboration with a commitment to making sure that the perspectives of people with disabilities would be represented in our group’s (and the Center’s) work. One member of the group is affiliated with the UW Disability Studies Program, an interdisciplinary academic group committed to teaching and doing research that recognizes the sociopolitical nature of disability and the need for justice rather than (only) medical treatment to address the problems faced by people with disabilities (e.g., Scully 2008). Given that commitment, we were initially somewhat hesitant to join a research group focused on “helping” people with disabilities through transformative technology designed to, for instance, reanimate paralyzed limbs. But assistive technologies come in many forms (Aas and Wasserman 2016; Stramondo 2019), and we realized that we would be in a position to work toward ensuring that the views of people with disabilities were attended to during the earliest stages of technology development, as a kind of justice as recognition (Goering and Klein 2018). Doing so would require some empirical data on what those views were, particularly in relation to the kinds of neural technologies under development at the CNT.

We put in our first application to the IRB for a study using human subjects, commissioned a professional facilitator, and ran a focus group with people who have spinal cord injuries, to get their input on the technologies under development and the related ethical issues. We also asked them about their views on the importance of having people with disabilities give input on projects like these. We crafted the focus group guide to cover the content areas in which we were most interested, but we hired an experienced facilitator to conduct the group and do the initial thematic analysis. Once we observed the focus group and had a better sense of what went into the analysis, we were ready to take on more of those roles ourselves, making use of our experience leading philosophy discussions in college classes to keep track of key points, invite others to share their perspectives, and manage time. In addition, our philosophical teaching experience helped us to be sensitive to and able to identify implicit moral claims made by participants, to help participants explore their reasons for their claims, and to encourage some critical discussion of those reasons. We turned to our qualitative colleagues for help with the data analysis, but eventually also learned techniques in coding and discourse analysis.

We have reported our findings from these empirical studies, as well as from our own normative work identifying key ethical issues related to neural devices (Klein et al. 2015, 2016) and working through some of those issues in more depth (e.g., Goering et al. 2017), at CNT monthly leadership meetings, the CNT annual retreat, and the NSF site visit. Still, in the first few years, we struggled to figure out how to integrate our work with the rest of the Center more thoroughly. We applied for internal CNT grants to fund our neuroethics fellows program (paying philosophy graduate students a small stipend for roughly five hours per week of their time, to be done on top of their teaching duties in the philosophy department). The fellows helped with research and writing in the ethics group and were assigned to several key CNT labs, with the idea that they would go to weekly lab meetings and observe—to get a better sense of what each lab was doing and to help identify ethical issues or opportunities for collaboration as they arose.

This model of integration did not always work well. Some fellows (two out of five) emailed the PI of their assigned lab repeatedly and never heard back. Others heard about the meeting times or changes to meeting times only at the last minute, or the meetings occurred when the students had teaching duties. Mapping the fellows’ availability and particular interests onto the labs’ practices was difficult, and when they were able to attend meetings they often felt like outsiders or had limited understanding of what was discussed, given that they were not privy to the day-to-day issues leading up to the lab meeting discussions. Nonetheless, even sporadic attendance allowed them insight into how responsibilities were delegated within the lab and how researchers communicated their work. The fellows had to be persistent and thick-skinned, learning to take these difficulties in their stride without losing confidence. Having a weekly group meeting of our own helped to build camaraderie among the fellows, allowed them to share successful strategies (e.g., starting by contacting graduate students in the lab rather than busy PIs), and steeled them to return to the labs. Some of the partnerships worked out very well.

One of the PIs was developing a project that involved participants who had a neural device implanted therapeutically for essential tremor. He was interested in consulting with his research participants about their experiences and views on related ethics issues. Because a philosophy graduate student had already been attending his lab meetings as part of our fellows program, he suggested that we consider putting in a grant for a full-time position.

We successfully integrated our first postdoctoral researcher with another lab doing human studies, but we needed a different integration strategy for the other neuroethics fellows. We still tried to put them in conversation with particular labs, and asked them to help us respond to specific requests. For instance, one site visitor requested that we come up with a set of ethical guidelines for neural device developers that could be shared outside of our center. We had neuroethics fellows in our group review the literature for related guidelines, analyze their strengths and limitations, and then begin to develop a set of specific guidelines that could be shared. Recognizing the need for broader input, we also developed a survey to assess our recommended guidelines. We piloted the survey with CNT PIs, and then distributed it to attendees of a large international meeting of BCI scientists (the International BCI Meeting at Asilomar in 2016). In this way, even fellows who were not explicitly tied to a particular lab were active participants in helping the CNT to meet its goals.

After a successful trial year of having a philosophy graduate student RA for the CNT, we asked for an additional RA to undertake a specific project: developing our ethics engagement program. We pitched the idea of a workshop dialogue tool that would bring researchers together for an ethics workshop to identify and explore their value assumptions in respect of CNT-related work. The RA was funded, and he worked full-time on developing the ethics engagement tool, the Scientific Perspectives and Ethical Commitments (SPECs) survey. In his design, the researchers each individually took a short survey, prior to the workshop, on their beliefs about neural devices in relation to privacy, identity, responsibility, the value of species-typical functioning, and enhancement. Then a facilitator guided a discussion as they worked collectively through the survey questions, highlighting points of agreement and disagreement, considering reasons for their answers, and critically analyzing their perspectives (for related approaches to uncovering epistemological assumptions in interdisciplinary work, see O’Rourke and Crowley 2013). Researchers retook the survey at the end of the workshop, to identify if and how their views had changed as a result of the process. The RA piloted this tool at all three CNT institutional sites, and recently led a workshop at the 2018 BCI conference at Asilomar for a broader group of researchers.

Working in the CNT has been valuable for our neuroethics fellows in a variety of ways. Although not all of our neuroethics fellows have dissertation projects that focus explicitly on neuroethics, the others have interests in bioethics or the intersections between science and values, and the CNT experience often influences their philosophical work. A past student wrote a dissertation on accountability in technoscience, using the CNT as a central example of how technoscience develops, and is currently working in a postdoctoral position in neuroethics. Another student is writing on issues of scientific consensus and the value of dissent, and she will use her experience at the CNT to consider how groups with different expertise (e.g., philosophers and humanities scholars) and/or experiences (e.g., end users) can contribute to the group’s knowledge.
