
Behavior Change Techniques for Reducing Interviewer Contributions to Total Survey Error

Introduction

In the total survey error (TSE) paradigm, nonsampling errors can be difficult to quantify, especially errors that occur in the data collection phase of face-to-face surveys. It may seem strange to focus on the face-to-face mode because it is so expensive, and has therefore lost much ground, first to telephone (from the 1960s through the 1990s) and more recently to web modes (Groves 2011). However, it is the only way to collect some kinds of survey data, particularly for official government surveys, surveys that collect certain kinds of biomarker or environmental data, and surveys in cultures with no written language.

Field interviewers play "dual roles as recruiters and data collectors" (West et al. 2018, 335), and are therefore potential contributors to both nonresponse error and measurement error. Recent advances in technology, paradata, and performance dashboards offer an opportunity to observe interviewer effects almost in real time, as the interviewer engages in the behavior that produces the effect, and to intervene quickly to curtail undesired behaviors. For example, improvements in computer-assisted recorded interviewing (CARI) over the previous decade have greatly increased the efficiency of listening to selected questions across all field interviewers working on a survey. Edwards, Maitland, and Connor (2017) report on an experimental program using rapid feedback of CARI coding results to improve interviewers' question-asking behavior. Mohadjer and Edwards (2018) describe a system for visualizing quality metrics and displaying alerts to inform field supervisors of anomalies (such as very short interviews) detected in data transmitted overnight. These features allow supervisors to quickly investigate, intervene, and correct interviewer behavior that departs from the data collection protocol. From the interviewer's perspective, these interactions can be viewed as a form of experiential learning, consistent with the literature on best practices in adult learning (Kolb, Boyatzis, and Mainemelis 2001). From the survey manager's perspective, they can be important features in a continuous quality improvement program. We build on these initiatives to focus on specific areas where interviewer error can be a major contributor to TSE, with substantial impact on key survey statistics.
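To make the overnight-alert idea concrete, here is a minimal sketch of a check that flags very short interviews in transmitted paradata. It is not the system Mohadjer and Edwards describe; the record fields, the 15-minute threshold, and all identifiers are illustrative assumptions.

```python
# Illustrative sketch (hypothetical fields and threshold, not the authors'
# system): flag overnight interview records whose duration falls below a
# threshold so a supervisor can investigate the next morning.
from dataclasses import dataclass

@dataclass
class InterviewRecord:
    interviewer_id: str
    case_id: str
    duration_minutes: float

def short_interview_alerts(records, min_minutes=15.0):
    """Return alert messages for interviews shorter than min_minutes.

    The 15-minute threshold is a placeholder; a production system would
    calibrate it to the expected length of the instrument.
    """
    alerts = []
    for r in records:
        if r.duration_minutes < min_minutes:
            alerts.append(
                f"ALERT: case {r.case_id} by interviewer {r.interviewer_id} "
                f"lasted {r.duration_minutes:.0f} min (< {min_minutes:.0f})"
            )
    return alerts

if __name__ == "__main__":
    overnight = [
        InterviewRecord("FI-042", "C-1001", 48.0),
        InterviewRecord("FI-042", "C-1002", 7.5),   # suspiciously short
        InterviewRecord("FI-107", "C-2010", 52.3),
    ]
    for line in short_interview_alerts(overnight):
        print(line)
```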

Review of Relevant Literature

In the development of survey research as a social science discipline, standardized interviewing looms nearly as large as probability sampling (Fowler and Mangione 1990). Asking all sampled individuals the same question was a major leap forward over interviewers having their own conversations with respondents. Asking all questions exactly as worded is common practice in many telephone centers. It is much less frequently observed in field settings, which are unpredictable and not easily monitored. Interviewers and respondents are social creatures, and their conversations - even those as structured as the standardized survey interview - are complex interactions that are difficult to confine to words on a page or a computer screen (Schaeffer 1991). A nuanced view of the interviewer's role has been incorporated in the TSE paradigm, beginning in the late 1970s (Groves 1987; Groves and Lyberg 2010). In TSE, interviewer behavior is only one of a number of error sources, and the sources interact - a decrease in one can lead to an increase in another. For example, Tourangeau, Kreuter, and Eckman (2012) demonstrate the tradeoff between nonresponse error and coverage error in screening for eligibility by age. The errors could be an effect of interviewer behavior, but their findings suggest respondents are the source.
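For readers who want the interviewer's contribution to error in quantitative terms, the standard formulation from the interviewer-variance literature (not developed in this chapter) expresses the variance inflation through an intra-interviewer correlation:

$$\mathrm{deff}_{\mathrm{int}} = 1 + (\bar{m} - 1)\,\rho_{\mathrm{int}},$$

where $\bar{m}$ is the average interviewer workload (interviews per interviewer) and $\rho_{\mathrm{int}}$ is the intra-interviewer correlation. Even a small $\rho_{\mathrm{int}}$ inflates variance substantially when workloads are large: with $\bar{m} = 50$ and $\rho_{\mathrm{int}} = 0.01$, the design effect is 1.49.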

Interviewer training improves skills and has the potential to reduce interviewer effects in TSE. Ackermann-Piek (2019) maps training content for three multinational surveys onto the TSE view of error sources. Best practices for training interviewers are based in part on adult learning theory, which holds that adults learn differently than children. Key tenets are just-in-time content (knowledge or skills learners can apply to work activities immediately at hand), peer learning (learners interacting with each other to achieve objectives), and close-to-real-world practice (learners performing activities that are similar to the work required on the job) (Knowles 1980; Knowles, Holton, and Swanson 2005). It has been difficult to incorporate these core concepts into interviewer training. Classroom training is typically staged as closely as possible to the beginning of data collection, and often attempts to train interviewers for rare situations they may encounter months later, if at all. Current training often makes minimal use of peer learning or close-to-real-world practice.

Interviewers working in a call center benefit from supervisor feedback in real time. This can be an important component of a continuous training program: it corrects undesirable behavior immediately and reinforces adherence to standard operating procedures. In contrast, field interviewers typically work in a disconnected mode. They work independently and may not interact with supervisors more than a few times a week, through phone calls or emails. Paradoxically, their work is often more challenging than work in a call center. Interviewing face-to-face enables interviewers to communicate with respondents much more fluidly than is possible through voice alone. This contributes to greater success in gaining cooperation and in maintaining rapport in the interview, enabling much longer interviews and efficient collection of biomarker and environmental data with a relatively high degree of compliance. However, monitoring quality and improving performance in field conditions is difficult.

Smart phones and near-universal internet coverage (at least in the U.S. and Europe) can support constant communication between field interviewers and supervisors, and allow supervisors to observe many aspects of the interviewers' behavior (e.g., doorstep documentation of contact attempts; travel from one respondent's home to another). Quality issues can be visualized for supervisors on dashboards, and a system of alerts can inform them overnight about problems that require immediate attention (Mohadjer and Edwards 2018). These tools have the potential to bring field operations under much more rigorous control. Edwards, Maitland, and Connor (2017) reported results from an experiment using CARI behavior coding and rapid feedback (both written and verbal), showing improvements in general interview quality. In this chapter we investigate rapid feedback on specific questions as a tool for on-the-job training of field interviewers that can affect key survey statistics.
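As an illustration of how CARI coding results might be turned into targeted, question-level feedback, the following sketch aggregates behavior codes by interviewer and question and flags low exact-reading rates. The coding scheme, the 0.8 threshold, and all identifiers are hypothetical assumptions, not the scheme used in the experiment.

```python
# Illustrative sketch (hypothetical identifiers and coding scheme, not the
# authors' system): aggregate CARI behavior codes by interviewer and question,
# then flag pairs whose exact-reading rate is low enough to warrant feedback.
from collections import defaultdict

# Each coded CARI clip: (interviewer_id, question_id, code), where "exact"
# means the question was read as worded and "major" means its meaning changed.
coded_clips = [
    ("FI-042", "Q12", "exact"), ("FI-042", "Q12", "major"),
    ("FI-042", "Q12", "major"), ("FI-107", "Q12", "exact"),
    ("FI-107", "Q12", "exact"), ("FI-107", "Q12", "exact"),
]

def feedback_candidates(clips, min_exact_rate=0.8):
    """Return (interviewer, question, rate) triples below the threshold.

    The 0.8 threshold is a placeholder; a real program would set it from
    the protocol and the number of coded clips available per interviewer.
    """
    tallies = defaultdict(lambda: [0, 0])  # key -> [exact count, total count]
    for interviewer, question, code in clips:
        key = (interviewer, question)
        tallies[key][1] += 1
        if code == "exact":
            tallies[key][0] += 1
    return [
        (interviewer, question, exact / total)
        for (interviewer, question), (exact, total) in tallies.items()
        if exact / total < min_exact_rate
    ]

for interviewer, question, rate in feedback_candidates(coded_clips):
    print(f"Feedback: {interviewer} on {question}, exact-reading rate {rate:.0%}")
```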

 