
The Tentative Character of Epistemic Trust

The interviews I conducted with scientists of the planetary science group suggest that attitudes of trust towards scientific collaborators are akin to hypothetical projections, assumptions that are in part evidenced but not satisfyingly so. The interviews reinforce the notion that trust is inevitable and ubiquitous in scientific practice, but remains, at the same time, tentative and fragile. In fact, the interviews bring a sense of uneasiness to the fore that seems to accompany trust towards collaborators. In part, this sense of uneasiness might be an interview artifact, something that is created through the interview, through the pressure the interview creates to rationalize and verbalize a behavior that most of the time goes without saying. Yet, given the way in which interviewees elaborate upon this uneasiness and the ways in which it surfaces in research practice, it stands to reason that there is, indeed, a sense in which epistemic trust is never quite fully warranted in scientific practice. The interviews do not lend support to the argument that epistemic trust among collaborating scientists is based upon an a priori entitlement. Rather, they suggest that it is a projection that is, over time, specified and adapted to the experiences of collaboration. The following quote from an interview with Laurits, the group’s geologist and spokesperson, illustrates this point:

Q: What do you look for if you work with someone? For example, if it’s a new person. What do you look for to decide whether you really would like to collaborate with that person or it’s maybe a bit risky?

Laurits: Ah, that’s something you find out when you start to collaborate. You can never know it in advance. So, it comes out of the collaboration, I would say. (Laurits, senior scientist, group1)

When the interviewer follows up on this statement, Laurits hints at some of the criteria he draws upon when deciding whether to trust and collaborate with a person in the first place:

Q: What would signal to you that this person is maybe not good to collaborate with? What would be a signal of ahm...ah, maybe rather look for someone else?

Laurits: That’s the same when you start up with a new student that you want to have as a PhD student. You see what has the person done so far, and then you start with the collaboration. And it develops from that. And from that you get your opinion on the person. So, that you know if this is someone you would like to, ah, to have as a collaborator in the group in the future or not. (Laurits, senior scientist, group1)

As he sees it, “[o]ne way of getting to know people well is always if you have them as students” (Laurits, senior scientist, group1). Student–teacher relationships are ideally relationships of trust as well as control. The teacher, at least in the initial stages of his or her relationship with the student, is able to assess the quality of the student’s work and, in principle, need not make him or herself dependent upon the student. After all, the ultimate proof of the trustworthiness of a collaborator can only be obtained through control, by witnessing or at least tracing all the inferential steps deemed relevant to formulating a particular piece of scientific evidence. If, however, the trusting person were in a position to do so, trust would not be needed.

In the planetary science group, however, core members met relatively late in their careers. There were no teacher-student relationships between them, and so this way of trust-building was not available. In order to learn whether to trust each other, they had to invest some trust to begin with. As the interview with Victor, the planetary science group’s senior biologist, suggests, trust can only be proven on the job:

Now [in the group] I can have access to the [major experimental facility] and there is no way that I can run the [major experimental facility]. So, I would need Adam or Nikolaj or both to join to work with me on that, and for that of course I have to trust their expertise. So, if he tells me that the wind speed is ten meters a second, then—because he measures it—I have to believe that. Otherwise if, if I would doubt that, I could not collaborate with him. And that’s a general problem or a challenge in that astrobiology field, because you need to collaborate with people from a different discipline. And ah well you know from your own research that you have, you have been through many years of training and I would be, yeah, of course I can question your expertise, but if you if we want to work together, I have to consider you as an expert in your field. Otherwise it’s not worthwhile. If I question the other person’s expertise every time we did an experiment, we would never get started with an experiment. Of course, I can be disappointed and find out that he is maybe not the right person to team up with, but that’s ah, the only way you can find out is by giving it a try. (Victor, senior biologist, group1)

Note that Victor focuses on expertise, trying to establish what exactly he can trust his collaborators for—gauging, in other words, the realm of their epistemic trustworthiness. Victor fine-tunes his expectations concerning collaborators’ epistemic trustworthiness in the process of collaboration. He projects refined expectations of trustworthiness into the future while assessing past experiences (or, at least, hoping to be able to do so at a later point). After referee reports about a jointly authored paper are filed, after peers corroborate results independently, or after he has acquired more expertise himself, Victor explains, he may be able to assess whether or not it was right to trust. Trust, here, is built in iterative cycles in the course of which expectations of trustworthiness are met or disappointed and subsequently refined.

The difficulty of formulating adequate trust expectations is particularly prevalent in the domain of scientific collaboration. For scientists, as Adam, one of the group’s physicists, points out, particular requirements apply in trust relations:

Q: [...] When you work together with others who have a different expertise than you have, ahm to what extent do you at all try to check what they are doing? Find out whether what they are saying is actually ...

Adam: No, we believe them, trust them.

Q: You just take it?

Adam: But then again, well, you don’t as a scientist, you shouldn’t. You should, you should—you should ask a question, you know, I don’t understand this, explain this to me. And then they'll explain it and then if that sounds logical and reasonable, then you’ll accept it obviously. But if it sounds really crazy and weird to you, then you don’t accept it. And this is why sometimes you get into long discussion type arguments [...]. (Adam, senior scientist, group1)

Adam has, as this interview fragment shows, developed strategies to probe the trustworthiness of collaborators, an aspect that I elaborate upon below. For the time being, let me note that the interviewees’ perspective on trust foregrounds their willingness to engage in epistemic trust, despite their awareness of the limited warrant that their trust attitudes have. All the scientists I interviewed are very aware of the fact that there is always risk attached to trust—the risk of being let down, whether by dishonesty, incompetence, or negligence. After all, even honest and competent collaborators can make mistakes without realizing their error. Even the trustworthiness of familiar collaborators is not a fact that could be established beyond reasonable doubt. Trustworthiness is a disposition, and how this disposition plays out, given the contingency and difficulty of scientific practice, can never be established fully by a person who is epistemically dependent.

Since sufficient evidence for the trustworthiness of a collaborator cannot be established at the point of time when the trusting is done, trust always involves periods of uneasiness. And while epistemic trust does not seem to be something for which a satisfying warrant could be established experientially, it does not appear to be warranted by an a priori entitlement either. Rather, the trustworthiness of collaborators is assumed much in the same way as a promising hypothesis is assumed, only to be gradually corroborated.

This way of thinking about epistemic trust concurs well with the approach that Dan Sperber and collaborators (Sperber et al., 2010) have taken. In their approach, trust expectations are often provisional, and the trustworthiness of others is not necessarily something one believes in or feels deeply about. Accordingly, knowledge that p, which one may or may not base upon others’ trustworthiness, is accepted cautiously and with what Sperber and collaborators call “epistemic vigilance,” a form of skepticism that does not eclipse trust but accompanies it: “Vigilance (unlike distrust) is not the opposite of trust; it is the opposite of blind trust” (Sperber et al., 2010, p. 363). Without vigilance, so their argument goes, epistemic trust would not be possible—it would lead people astray too many times.

There are various ways for scientists to address the fact that the trustworthiness of collaborators can never be completely evidenced while maintaining a vigilant yet cooperative attitude toward their collaborators. As I will argue, scientists can fine-tune expectations of trust towards collaborators, they can resort to impersonal trust or find ways to reduce the trust relations necessary through hierarchical co-authorship. Referring to fragments of the interviews I have conducted, I will elaborate on these strategies in the following sections. In doing so, I build upon the work of epistemologists such as Sperber and collaborators (2010) and Alvin Goldman (2001) who have discussed a number of ways in which epistemic trustworthiness can be empirically established. I refer to their work in the following whenever appropriate and show how it applies to collaboration in scientific practice.
