
Section II. Training Interviewers

General Interviewing Techniques: Developing Evidence-based Practices for Standardized Interviewing

Introduction

Interviewers continue to have an important role in collecting data, particularly in studies that require locating or sampling respondents or administering complex survey instruments. General interviewing techniques (GIT) refer to the practices used by face-to-face and telephone survey interviewers for asking questions and obtaining codable answers from respondents. Since Fowler and Mangione (1990) codified a set of practices for standardization, supplementary or complementary interviewing techniques have been proposed to support motivation (Dijkstra 1987), recall (Belli et al. 2001), and comprehension (Schober and Conrad 1997). However, the accumulating evidence on interaction during interviews has not yet led to a comprehensive updating of interviewing techniques. This chapter describes steps we are taking to update GIT based on studies of interaction. After situating current practice and reviewing our reasons for revisiting GIT at this time, we describe our process and goals as well as gaps in current practice. We then outline key concepts and techniques for the first lessons in a revised GIT training. Our proposals are rooted in the structure imposed by the conversational practices associated with the question-answer sequence and the influence of question form on what constitutes a codable answer.

Brief Historical Context

Although most contemporary research interviewing uses standardized interviewing, early interviewing practices were relatively informal (Converse 1987, 95-97, 335; Williams 1942). By the 1940s and 1950s, there was a movement toward standardization, probably motivated by studies that showed the impact of the interviewer on reliability and validity (Converse 1987, 335; Hyman et al. [1954] 1975; see discussion in Schaeffer 1991). Fowler and Mangione (1990) provided an influential codification of standardization, summarized in Table A3A.1 in the online supplementary materials (Online Appendix 3), the first principle of which is that questions are read as worded. Although there is little documentation about how different survey centers implement standardization, all 12 academic survey centers studied by Viterna and Maynard required that questions be read verbatim (2002, 394), as do "conversational" interviewing methods (Schober and Conrad 1997) and the U.S. Bureau of the Census (undated). Other features of standardization varied at the centers Viterna and Maynard examined. Both the University of Wisconsin Survey Center (UWSC) and the Survey Research Operations (SRO) at the University of Michigan's Survey Research Center (SRC), the centers involved in the project reported here, implement fairly rigorous versions of standardization. (For other versions of standardization see Brenner 1982, 139; Dijkstra 1987; Gwartney 2007, 203.)

Reasons to Revisit GIT

There are several reasons to revisit GIT. First, we now have many studies that describe the actual behavior of interviewers and respondents. These studies have benefited from the vocabulary and techniques of conversation analysis and related disciplines. Second, as studies that code interaction have accumulated, the coding systems have come to embody increasingly refined understandings of what should be labeled an "action" and what those labels should be (e.g., compare Cannell et al. 1989; Dykema, Lepkowski, and Blixt 1997; Olson and Smyth 2015; Ongena and Dijkstra 2007; Schaeffer and Maynard 2008). Third, analytic advances have helped clarify the contribution of interviewers, respondents, and questions to error (e.g., West and Blom 2017). Fourth, a comprehensive framework for interviewing techniques could contribute to developing common vocabularies and practices across research centers, thus reducing "house effects." Fifth, recent research has both elaborated our understanding of the characteristics of questions that might be involved (e.g., Alwin 2007; Saris and Gallhofer 2007; Schaeffer and Dykema 2011a) and shown that those characteristics predict both the participants' behavior and measures of data quality or proxies for it (e.g., Dykema et al. 2020; Garbarski et al., Chapter 18, this volume; Holbrook et al., Chapter 17, this volume; Olson and Smyth 2015; Olson, Smyth, and Cochran 2018; Schaeffer and Dykema 2011b). For our purposes, studies that focus on actions (e.g., question reading) and requirements of actions (e.g., exactly as worded), rather than psychological concepts (e.g., the subjective experience of rapport), provide stronger guidance for interviewer training.

A Process for Revising GIT

This chapter describes the process UWSC and SRO used to revise and update GIT. We also describe the first six modules of our training; work on the remaining modules is continuing. Our organizations began with compatible training protocols but somewhat different vocabularies. As we developed each topic in the training, we reviewed published documentation of interviewer training (e.g., Fowler and Mangione 1990; Gwartney 2007) and interviewer training at our organizations. We also considered relevant experiments and observational studies about survey interviews and questions; research in conversation analysis, linguistics, cognitive psychology, and measurement theory; and our own close observations of interaction in the interview over many years. We drew on extensive experience in training and monitoring interviewers to consider what interviewers could be trained to do in the fast-paced environment of an interview. We tried to reach a consensus on proposed techniques through extensive discussion and review of examples.
