
Interviewer Effects Across Contexts and Modes

The survey context also matters. Although interviewers are instructed to conduct interviews in private, many interviews are conducted in the presence of known others, violating respondents' need for privacy when reporting on sensitive topics (Mneimneh et al. 2015). These effects may be amplified in community-based studies that recruit in-community interviewers who may know the respondent personally. Alternatively, third-party presence may facilitate recall when questions are particularly difficult. Mneimneh, de Jong, and Altwaijri (Chapter 9) examine whether conducting an interview with various family or non-family members present affects reports of health behaviors and attitudes in the in-person Saudi National Mental Health Survey. Habecker and Ivanich (Chapter 10) examine how youths' reports of internalizing and externalizing behaviors and other sensitive topics are affected when the youths are interviewed by a member of the community who is known to them, in a community-based participatory research study in an American Indian community. Combined, these studies reveal the need for more research into interview privacy and the importance of interviewer training on how to maximize privacy.

Additionally, the mode and/or device for the interview (in-person, landline phone, cellular phone, or audio computer-assisted self-interviewing) and the device for interviewer input (desktop or laptop computer, tablet, or smartphone) provide important context and potential for variation in interviewer-related errors (e.g., Childs and Landreth 2006; Timbrook, Smyth, and Olson 2018). Notably, the mode or device changes the nature of the interaction between interviewers and respondents. In Chapter 12, Ongena and Haan examine differences in interviewer-respondent interaction between telephone and in-person interviews; in Chapter 13, Schober et al. examine differences in interviewer-respondent interaction between voice and text message-based interviews. Conrad et al. (Chapter 11) replace human interviewers with avatars, examining race-of-interviewer effects when the interviewer is virtual but the voice is that of an audio-recorded human. These chapters raise important questions about interviewing: Are there contexts in which the presence of an interviewer (live or otherwise) is an important feature of the survey or of the response tasks respondents must complete? Can virtual interviewers provide some of the benefits of live interviewers while minimizing interviewer error? Are text message-based interviews or avatars particularly beneficial for specific populations?

Interviewers and Nonresponse

While most of the research identified above focuses on measurement error, interviewers also affect other error sources. For example, interviewers' nonresponse rates vary extensively (e.g., Campanelli, Sturgis, and Purdon 1997; Groves and Couper 1998), due both to heuristic cues from their voices (e.g., Groves et al. 2008; Schaeffer et al. 2018) and to their behaviors during the recruitment interaction (e.g., Couper and Groves 2002; Schaeffer et al. 2013), and interviewers substantially contribute to nonresponse error variance (e.g., West, Kreuter, and Jaenichen 2013; West and Olson 2010). Interviewer flexibility and tailoring have been linked to successful recruitment, although there are only limited examples of how tailoring is operationalized (e.g., Groves and Couper 1998), and research on tailoring generally relies on interviewer reports with measures that vary across studies. To address this, Ackermann-Piek, Korbmacher, and Krieger (Chapter 14) predict survey contact and cooperation with the same set of covariates across four different studies conducted by the same survey organization. Because they find little replication in the associations between the covariates and the nonresponse outcomes across studies, they emphasize the importance of real-time monitoring of interviewers.
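The chapters' actual models are not reproduced here, but the underlying quantity, interviewer-level variance in a recruitment outcome, is straightforward to illustrate. Below is a minimal Python sketch with simulated data and hypothetical column names (interviewer_id, cooperated, urban): it fits a linear-probability random-intercept model with statsmodels and reports the share of residual variation that lies between interviewers.

```python
# Minimal sketch (not any chapter's actual analysis): estimate how much of
# the variation in cooperation lies between interviewers, using a
# linear-probability random-intercept model. All column names hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

# Simulate 50 interviewers x 40 assigned cases, with interviewer-level
# differences in cooperation propensity (the "interviewer effect").
n_int, n_cases = 50, 40
interviewer_effect = rng.normal(0, 0.10, n_int)   # between-interviewer spread
df = pd.DataFrame({
    "interviewer_id": np.repeat(np.arange(n_int), n_cases),
    "urban": rng.integers(0, 2, n_int * n_cases),  # a case-level covariate
})
p = 0.55 + interviewer_effect[df["interviewer_id"]] - 0.05 * df["urban"]
df["cooperated"] = rng.binomial(1, p.clip(0.01, 0.99))

# Random intercept for interviewer; fixed effect for the covariate.
fit = smf.mixedlm("cooperated ~ urban", df, groups=df["interviewer_id"]).fit()

# Intraclass correlation: share of residual variation between interviewers.
var_between = fit.cov_re.iloc[0, 0]
icc = var_between / (var_between + fit.scale)
print(f"Interviewer-level ICC: {icc:.3f}")
```

In practice the outcome is binary, so a multilevel logistic model is more common; the linear approximation simply keeps the sketch short.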

Interviewer flexibility may not always be a good thing. In Chapter 15, Wescott discusses a case management model for a telephone survey in which interviewers "own" cases and make their own decisions about when to call cases. While interviewers report being more satisfied with the autonomy afforded by the case management model, the model yields lower productivity than using a call scheduler. More work is needed to understand how interviewer autonomy and insights from a field data collection approach may be integrated into a telephone survey organization to increase interviewer engagement and ultimately retain interviewers.
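Wescott's system is not specified in code; purely as a hypothetical illustration of the design difference, the sketch below shows what a centralized call scheduler takes over from interviewers in a case-management model: cases queue by next eligible call time, and the top eligible case is released to whichever interviewer is free, rather than interviewers deciding when to call cases they own.

```python
# Hypothetical sketch of a centralized call scheduler: cases queue by
# next-eligible call time, and any free interviewer receives the top case.
# Contrast with case management, where each interviewer decides when to
# call the cases they "own." All scheduling rules here are invented.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class ScheduledCase:
    next_call_time: float                    # earliest hour the case may be called
    case_id: str = field(compare=False)
    attempts: int = field(default=0, compare=False)

class CallScheduler:
    def __init__(self):
        self._queue = []

    def add(self, case):
        heapq.heappush(self._queue, case)

    def next_case(self, now):
        """Release the highest-priority case that is eligible now."""
        if self._queue and self._queue[0].next_call_time <= now:
            return heapq.heappop(self._queue)
        return None

    def record_attempt(self, case, contacted, now):
        """Re-queue noncontacts after a delay (a hypothetical 4-hour rule)."""
        case.attempts += 1
        if not contacted and case.attempts < 8:
            case.next_call_time = now + 4.0
            self.add(case)

scheduler = CallScheduler()
scheduler.add(ScheduledCase(9.0, "case-001"))
scheduler.add(ScheduledCase(10.0, "case-002"))
case = scheduler.next_case(now=9.5)          # any free interviewer takes this
scheduler.record_attempt(case, contacted=False, now=9.5)
```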

Survey interviews increasingly ask respondents to provide blood, saliva, urine, or other biomeasures, or to grant permission to link their survey data to administrative data (e.g., Jaszczak, Lundeen, and Smith 2009; Sakshaug 2013; Sakshaug et al. 2012). In Chapter 16, Pashazadeh, Cernat, and Sakshaug use nurse characteristics and paradata to predict different stages of nonresponse in nurses' attempts to collect biomeasures for a general population survey. These stages (participating in the nurse visit, consenting to a blood sample, and obtaining a blood sample) show substantial nurse-related variation in nonresponse outcomes, and the predictors of nonresponse vary across the stages. This work and additional workshop discussion suggest that we need more research on the antecedents and consequences of interviewer variation in the ability to successfully collect such auxiliary measures.
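A common way to analyze such nested stages, offered here as a rough approximation of the general approach rather than the chapter's actual specification, is to fit a separate logistic regression for each stage among only the cases that reached it. A minimal Python sketch with simulated data and hypothetical covariates:

```python
# Sketch of stage-wise nonresponse modeling (not the chapter's actual
# specification): one logistic regression per stage, restricted to cases
# that reached that stage, so predictors can differ across stages.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def inv_logit(x):
    return 1 / (1 + np.exp(-x))

rng = np.random.default_rng(7)
n = 2000
df = pd.DataFrame({
    "nurse_experience": rng.normal(10, 5, n).clip(0, 40),  # hypothetical covariates
    "prior_visits": rng.poisson(3, n),
})

# Simulate nested outcomes: each stage is observed only if the prior succeeded.
df["visited"] = rng.binomial(1, inv_logit(0.6 + 0.05 * (df["nurse_experience"] - 10)))
df["consented"] = df["visited"] * rng.binomial(1, inv_logit(1.0 + 0.15 * (df["prior_visits"] - 3)))
df["sample_obtained"] = df["consented"] * rng.binomial(1, 0.9, n)

stages = [
    ("visited", df),                               # full sample
    ("consented", df[df["visited"] == 1]),         # conditional on visit
    ("sample_obtained", df[df["consented"] == 1]), # conditional on consent
]
for outcome, subset in stages:
    fit = smf.logit(f"{outcome} ~ nurse_experience + prior_visits", subset).fit(disp=0)
    print(outcome, fit.params.round(3).to_dict())
```

Fitting the stages separately is what allows the predictors of nonresponse to differ across stages, which is the pattern the chapter reports.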

 