
How to Conduct Effective Interviewer Training: A Meta-Analysis and Systematic Review

Introduction

The content and methods of interviewer training are often overlooked factors in minimizing interviewer effects in interviewer-administered surveys (West and Blom 2017). In particular, experimental variation in the content of interviewer training approaches can provide information about the effectiveness of interviewer training and training methods. This chapter evaluates the effectiveness of interviewer training methods using meta-analytic methods to summarize the results of interviewer training experiments.

Interviewers are one of the key actors in the data collection process of interviewer-administered surveys (e.g., Groves et al. 2009; Singer, Frankel, and Glassman 1983). From a total survey error (TSE) perspective (e.g., O'Muircheartaigh and Campanelli 1998; West and Blom 2017), interviewers can influence four sources of survey error (coverage, nonresponse, measurement, and processing error), and interviewer-related error can bias survey estimates, including regression coefficients (e.g., Fischer et al. 2018). Interviewer training is primarily designed to reduce the effects of interviewers on nonresponse and measurement error (e.g., Billiet and Loosveldt 1988; Fowler and Mangione 1990; Lessler, Eyerman, and Wang 2008). Interviewers can be trained to avoid nonresponse, to administer survey questions systematically, to probe only when it is allowed, and to avoid influencing respondents by emphasizing certain response options. Kreuter (2008, 371) describes the interviewers' role as "read questions exactly as worded; probe nondirectively; and record answers without interpretation, paraphrasing, or additional inference about the respondent's opinion or behavior."

Ideal interviewer training should focus on two main areas of interviewer activity, namely, gaining respondents' cooperation (reducing nonresponse rates) and adhering to the practices of standardized interviewing (reducing measurement error) (Alcser et al. 2016; Daikeler et al. 2017). Despite the importance of training, experimental examinations of training approaches are rare, owing to the costs and complexity of designing and implementing such experiments and to the separation of methodologists conducting research from the field staff who manage interviewers and implement training programs. Experimental variation in fieldwork is expensive: it requires fielding two studies simultaneously, managing them separately but equivalently, and then comparing the outcomes. Lab studies can be used instead but lack generalizability. Furthermore, most researchers studying interviewers and methods of improving interviewer quality do not themselves conduct the training at survey organizations. Finally, inexperienced interviewers require extensive training above and beyond simple interviewing techniques, especially when the training experiment is embedded in an ongoing study with research goals that are not solely methodological. In Germany, a further reason why the effectiveness of general interviewer training goes unquestioned is the organization of fieldwork: both large multinational survey programs, such as the Programme for the International Assessment of Adult Competencies (PIAAC; OECD 2014) and the European Social Survey (ESS; Loosveldt et al. 2014), and small survey projects rely on fieldwork agencies to train and manage interviewers. However, the effectiveness and type of this training remain, in some cases, a "black box."

Interviewer training has always been an integral part of the survey process, yet the available literature on this subject is quite sparse. While there is some research investigating the effect of interviewer training on specific data quality outcomes (such as unit nonresponse and correct probing, e.g., Durand et al. 2006; Fowler and Mangione 1986) and guidelines for interviewer training (e.g., Alcser et al. 2016; Daikeler et al. 2017), only Lessler, Eyerman, and Wang (2008) provide a comprehensive overview of the literature on interviewer training. However, as their overview is purely narrative, it does not quantitatively evaluate the training concepts and results. This chapter uses meta-analytic methods to estimate the impact of interviewer training approaches on data quality. The aim is to quantify the benefits of interviewer training and, more importantly, to determine which aspects of training (e.g., training length, practice, and feedback sessions) moderate the reduction of interviewer effects.
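
To make the pooling step concrete, the sketch below shows a generic random-effects meta-analysis using the DerSimonian-Laird estimator. It is not the chapter's actual model; the effect sizes, sampling variances, and the function name random_effects_pool are hypothetical placeholders used only to illustrate how study-level training effects can be combined into one estimate.

    # Minimal random-effects meta-analysis sketch (DerSimonian-Laird),
    # assuming each training experiment is summarized as an effect size
    # with a known sampling variance. All numbers below are illustrative.
    import numpy as np

    def random_effects_pool(effects, variances):
        """Pool study-level effect sizes with DerSimonian-Laird random effects."""
        y = np.asarray(effects, dtype=float)
        v = np.asarray(variances, dtype=float)
        k = len(y)

        # Fixed-effect weights and pooled estimate, used to compute heterogeneity Q
        w = 1.0 / v
        y_fixed = np.sum(w * y) / np.sum(w)
        q = np.sum(w * (y - y_fixed) ** 2)

        # Between-study variance tau^2, truncated at zero
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (q - (k - 1)) / c)

        # Random-effects weights, pooled estimate, and its standard error
        w_star = 1.0 / (v + tau2)
        pooled = np.sum(w_star * y) / np.sum(w_star)
        se = np.sqrt(1.0 / np.sum(w_star))
        return pooled, se, tau2

    # Hypothetical standardized differences (trained vs. untrained interviewers)
    effects = [-0.30, -0.12, -0.25, -0.05]
    variances = [0.02, 0.04, 0.03, 0.05]
    pooled, se, tau2 = random_effects_pool(effects, variances)
    print(f"pooled effect = {pooled:.3f}, SE = {se:.3f}, tau^2 = {tau2:.3f}")

A moderator analysis would then relate such study-level effects to coded training features (e.g., training length or the presence of feedback sessions), for instance via meta-regression.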

Conceptual Development of Research Questions

Classical interviewer training consists of training on recruiting sample members, general interviewing techniques, and study-specific training (Daikeler et al. 2017; Loosveldt et al. 2014). The focus of this chapter is on general interviewer training, that is, the basic, cross-project part of interviewer training that aims to impart the knowledge and skills that a successful interviewer needs to achieve high data quality, both when recruiting participants and when maintaining standardization in the interview (see West and Blom 2017). The literature on experimental interviewer training has to date reported on the influence of interviewer training on measurement error and nonresponse error; for a literature overview see Appendix Table A1 in the online supplemental materials.

The present study examines the impact of interviewer training on data quality. Specifically, it addresses seven breakdowns of the interviewing process that compromise data quality: (1) unit nonresponse (nonresponse error); (2) item nonresponse (measurement error); (3) items that are incorrectly administered[1] (measurement error); (4) items that are incorrectly read out (measurement error); (5) responses that are incorrectly probed (measurement error); (6) responses that are incorrectly recorded (measurement error); and (7) inaccurate responses. The aim is to determine whether these seven breakdowns are influenced by interviewer training and what training aspects contribute to the reduction of these errors, and thus to data quality. In the following, we first discuss nonresponse error and then address measurement error.

  • [1] Administration means adherence to the interview protocol (adherence to breaks in the interview, correct administration of filter questions, and the order of questions); it includes reading questions but is much broader.
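
As a small illustration of how the seven breakdowns can structure the analysis, the sketch below groups hypothetical study records by the outcome they measure and pools each group with simple inverse-variance weights. The study records, labels, and numbers are invented for illustration and do not reproduce the chapter's data or models.

    # Grouping hypothetical effect sizes by the breakdown they measure and
    # pooling each group with inverse-variance weights (a simple fixed-effect
    # summary; the chapter's actual analyses may use different models).
    from collections import defaultdict
    import math

    # Each record: (breakdown, error source, effect size, sampling variance)
    studies = [
        ("unit nonresponse", "nonresponse error", -0.20, 0.03),
        ("unit nonresponse", "nonresponse error", -0.10, 0.05),
        ("incorrect probing", "measurement error", -0.35, 0.04),
        ("incorrect probing", "measurement error", -0.28, 0.06),
    ]

    groups = defaultdict(list)
    for breakdown, _source, effect, variance in studies:
        groups[breakdown].append((effect, variance))

    for breakdown, pairs in groups.items():
        weights = [1.0 / v for _, v in pairs]
        pooled = sum(w * e for (e, _), w in zip(pairs, weights)) / sum(weights)
        se = math.sqrt(1.0 / sum(weights))
        print(f"{breakdown}: pooled = {pooled:.3f} (SE {se:.3f})")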
 