Section V. Interviewers and Nonresponse

Explaining Interviewer Effects on Survey Unit Nonresponse: A Cross-Survey Analysis

Introduction

Interviewer effects on survey unit nonresponse have been detected in interviewer-mediated surveys for many decades (e.g., Durbin and Stuart 1951). Today, we know that interviewers affect the data collection process in various ways, both positively and negatively. On the positive side, interviewer-mediated surveys tend to achieve higher response rates than surveys using self-completion questionnaires, and face-to-face surveys, where the interviewers are most strongly involved in the recruitment process, tend to achieve higher response rates than telephone surveys (Groves et al. 2009). On the negative side, however, interviewers may have a differential impact on the representativeness of the achieved sample (e.g., Blom et al. 2011; Jäckle et al. 2013; Durrant et al. 2010). In addition, there is a growing body of studies attempting to explain the interviewer effects that have been found and to develop methods for reducing them or adjusting for them in analyses (see West and Blom 2017 for an overview).

In this section, we provide an overview of the literature with regard to four types of interviewer characteristics that previous research has identified as predictors of survey unit nonresponse: socio-demographic characteristics, experience, attitudes and personalities, and skills and behaviors.

Many studies that investigate whether socio-demographic interviewer characteristics explain interviewer effects on unit nonresponse focus on age and gender, as this information is typically available from interviewer employment records. However, most of these studies find weak or nonsignificant relationships between these characteristics and unit nonresponse (West and Blom 2017). Some studies detect liking effects (Durrant et al. 2010; Lord, Friday, and Brennan 2005; Moorman et al. 1999; Webster 1996). For example, Durrant et al. (2010) find that a match between interviewer and respondent gender or education (see also West et al. 2019) is associated with higher cooperation rates.

Research focusing on interviewer experience finds that interviewers with more experience achieve higher response rates (Couper and Groves 1992; Campanelli and O'Muircheartaigh 1999; Durbin and Stuart 1951; Durrant et al. 2010; Jäckle et al. 2013). Groves and Couper (1998) suggest that more experienced interviewers have learned the skill of tailoring their approaches at the doorstep, which ultimately leads to higher success rates. Additionally, there might be a self-selection effect, whereby less successful interviewers tend to leave the workforce more quickly (Jäckle et al. 2013). However, numerous studies also report curvilinear, negative, or null relationships between interviewer experience and response rates (e.g., Blom, de Leeuw, and Hox 2011; Couper and Groves 1992; Durrant et al. 2010). This may be explained by two related mechanisms: experienced interviewers often receive the most difficult cases (e.g., Blom, de Leeuw, and Hox 2011) and are assigned higher workloads (e.g., Japec 2008; Loosveldt, Carton, and Pickery 1998), both of which may lead to lower response rates.

In recent years, there has been an increase in research on interviewer effects on unit nonresponse related to interviewer attitudes and personality. In general, having a positive attitude and being more persuasion-oriented (de Leeuw et al. 1998; Durrant et al. 2010; Hox and de Leeuw 2002; Jäckle et al. 2013; Maynard and Schaeffer 2002; Vassallo et al. 2015), being confident (Blom, de Leeuw, and Hox 2011; Durrant et al. 2010), and being extroverted (Jäckle et al. 2013) are associated with higher cooperation rates. In contrast, interviewers who believe in stressing the voluntary nature of a survey tend to achieve lower cooperation rates (de Leeuw et al. 1998).

Interviewer behavior can have both a passive and an active effect on the sampled person's decision to participate. Passive influences might occur because interviewers are simply present, while interviewers' actual behavior can actively influence the target person's decision to participate. However, as Jäckle et al. (2013) summarize, interviewer behavior is seldom predictive of cooperation. These authors provide three possible reasons for these null findings. First, studies examining this relationship generally have limited statistical power, given that the number of interviewers working on a project is usually small. Second, there are issues with measurement, as interviewers may forget the "exact components of interaction" when they are asked to report them. Finally, the level of measurement can be problematic: interviewers are typically asked about their usual behavior, even though individualized interactions might be more relevant than a general pattern of how interviewers behave, according to Durrant et al. (2010).

Some studies have used audio recordings to analyze interviewer behaviors during respondent recruitment. Overall, the findings of these studies suggest higher cooperation rates for interviewers who introduce themselves and tailor their interactions, whereas less successful interviewers give sample persons too much room to escape (Groves and McGonagle 2001; Morton-Williams 1993; Schaeffer et al. 2013; Snijkers, Hox, and de Leeuw 1999).

Overall, the literature explaining interviewer effects on survey unit nonresponse shows great variability across studies in the significance and even the direction of the predictors of interviewer effects (see West and Blom 2017). This diversity in findings might be related to survey characteristics and to the explanatory variables available for analysis. For example, studies differ in the set of interviewers employed, the survey organizations managing the interviewers, the sampling frames used, and the populations and time periods observed. In addition, the explanatory variables available to researchers tend to differ greatly across studies, which may itself produce differences in the results of studies trying to explain interviewer effects. Few research projects have attempted to explain interviewer effects on survey unit nonresponse across different types of surveys while keeping the survey organization, interviewer population, explanatory characteristics, and models for estimating interviewer variance constant (Blom, de Leeuw, and Hox 2011; Jäckle et al. 2013).

We aim to address these weaknesses in the literature on interviewer effects on survey unit nonresponse by analyzing interviewer effects during the recruitment phases of four different surveys while harmonizing measurements and analytical strategies. In our study, we use the same measures of sample composition, interviewer socio-demographic characteristics, interviewer experience, and interviewer attitudes, behaviors, and expectations about the survey in each of the four surveys. We also estimate the same models to explain interviewer effects during the recruitment phase. With this approach, we compare interviewer effects during the recruitment phase across the four surveys and aim to identify interviewer characteristics that consistently influence contact and cooperation across the observed studies. Such findings could have important implications for interviewer selection, fieldwork monitoring, and future interviewer training sessions.
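Interviewer variance in recruitment outcomes is conventionally estimated with random-intercept multilevel logit models and summarized as an intraclass correlation (ICC). As a hypothetical illustration of that summary step (the function name and the variance value below are ours, not taken from the surveys discussed here), the standard latent-variable ICC for a logit model can be sketched as:

```python
import math

# Sketch of the usual interviewer-variance summary: each interviewer j gets a
# random intercept u_j ~ N(0, sigma2_u) in a multilevel logistic model of
# contact or cooperation. On the latent logit scale, the residual variance is
# fixed at pi^2 / 3, so the interviewer share of variance (ICC) is:
def interviewer_icc(sigma2_u: float) -> float:
    """Latent-scale ICC for a random-intercept logistic model."""
    return sigma2_u / (sigma2_u + math.pi ** 2 / 3)

# Illustrative value only: an assumed interviewer variance of 0.5 on the logit
# scale implies that roughly 13% of the latent variation in cooperation lies
# between interviewers.
print(round(interviewer_icc(0.5), 3))  # prints 0.132
```

Comparing such ICCs across surveys is only meaningful when, as in our design, the model specification and explanatory variables are held constant.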
