Revisiting Interviewing Techniques

Models of the question-answer process suggest a range of criteria to consider in choosing among candidate interviewing techniques (e.g., Cannell, Miller, and Oksenberg 1981; Dykema et al. 2020; Ongena and Dijkstra 2007). These models and accompanying research led us to identify a (somewhat aspirational) set of criteria (see Table 3.1). In many cases we had to decide whether to select among interviewing techniques we identified in our current GIT (e.g., always reread the question) or develop new techniques for situations unaccounted for in our current GIT (e.g., rules for responsive follow-up).

General Goals

In selecting interviewing techniques, we aimed to balance the traditional objective of standardization (to reduce interviewer variance) against other goals and to make our reasons explicit. These goals included the following: (1) explicitly acknowledge that a question's response format drives interaction in the interview and tailor training modules (e.g., on how to recognize a codable answer and follow up) around different question forms (e.g., Dykema et al. 2020; Holbrook, Cho, and Johnson 2006; Olson, Smyth, and Cochran 2018); (2) reduce burden on the respondent and motivate the respondent's engagement by authorizing more responsiveness by the interviewer (e.g., Garbarski, Schaeffer, and Dykema 2016); and (3) fill gaps in current training that left interviewers or quality control supervisors unsure how to proceed. The literature and our transcripts offered the most evidence for the first of these goals, which is emphasized in the discussion below.

To illustrate how some of the criteria and goals were considered in a specific decision, we describe our decisions about how interviewers should repair errors in the initial reading of a question. Neither the current version of training at UWSC nor the 1992 edition of the interviewer training at SRO trained interviewers on how to repair the misreading of a question. The training script (adapted for individual studies) in use at SRO by 2013 read: "If you make a mistake in reading the question, no matter how small, it is your responsibility to start over and re-read the entire question." This guidance may be unclear because what constitutes a "mistake" is ambiguous and it can be difficult for an interviewer to determine quickly what the "entire" question is (e.g., a question may have a preamble).


Criteria Considered in Reviewing Interviewing Techniques

• Interviewer-focused

• Need: Is the procedure that the interviewer should follow ambiguous or undefined under current practice (e.g., how to correctly repair an inadequate reading of the question)?

• Ease of training: Can the technique and the situation when it is to be used be explained clearly? Can interviewers recognize the situation that requires the technique easily and reliably? Can interviewers use the technique to respond consistently to the situation? How complex is the technique? Can a simpler technique accomplish the goal? Is the technique consistent with other techniques?

• Frequency: How common are the situations that require the technique likely to be? Can one technique be used in many situations or are multiple techniques needed?

• Instrument support: How clearly does the instrument convey the technique (e.g., optional parenthetical statements) to interviewers? Does the technique require support in the instrument itself, for example, in instructions to the interviewer or programming of specific follow-up techniques (e.g., repeating response categories in a battery "if needed" or after every third item)?

• Respondent-focused

• Rapport and motivation: Will the technique increase or decrease how responsive the interviewer seems? Will the technique increase or decrease the respondent's engagement?

• Efficient progress and reducing burden: Does the technique lengthen the interview or increase burden for the respondent? Does the technique reduce the occasions on which the interviewer must intervene or follow-up? Is the technique simple for respondent and interviewer to implement? How complex is the technique? Can a simpler technique accomplish the goal? Is the technique consistent with other techniques?

• Training the respondent by communicating the practices of standardization consistently: Does the technique model adequate role behavior for the respondent? Does the technique model inadequate role behavior for the respondent? Is the technique consistent with other techniques?

• Measurement-focused

• Reliability: Is the technique likely to increase or decrease interviewer variance? Is the technique likely to increase or decrease the reliability of the respondent's answer?

• Validity: Is the technique likely to increase or decrease the accuracy of the respondent's answer? Is the technique likely to communicate a point of view or expectation that might influence the respondent's answer?

After reviewing multiple questions and interactions, we formulated three more specific rules for question reading: (1) if you omit, add, or change a word, reread the sentence from the beginning; (2) if you have trouble reading a word the first time, back up to reread from that word (e.g., "Would it be mainly manufacturing, retail, wholeta- wholesale trade, or something else?"); (3) if you make a mistake reading even one response category, reread all of the categories from the beginning. Our recommendation gave priority to clarity in training (which affects reliability) and support for the respondent's cognitive processing (which affects both reliability and validity) rather than efficiency (which may affect the respondent's engagement).
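The three rules above amount to a small decision procedure: classify the reading error, then apply the prescribed repair. As a minimal sketch, the Python fragment below maps error types to repairs; the function name and error labels are hypothetical illustrations, not terminology from our training materials.

```python
# Illustrative sketch of the three question-reading repair rules.
# The function name and error-type labels are hypothetical, not from the source.

def repair_action(error_type: str) -> str:
    """Map a classified reading error to the prescribed repair."""
    if error_type == "word_changed":     # rule 1: omitted, added, or changed a word
        return "reread the sentence from the beginning"
    if error_type == "stumble":          # rule 2: trouble reading a word the first time
        return "back up and reread from that word"
    if error_type == "category_error":   # rule 3: misread any response category
        return "reread all response categories from the beginning"
    raise ValueError(f"unrecognized error type: {error_type}")

print(repair_action("stumble"))
```

The sketch makes the priority explicit: each repair restarts at the smallest unit that preserves the respondent's comprehension (sentence, word, or full category list), rather than always rereading the entire question.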

Specific Gaps in Existing Training

Advances in understanding come with new distinctions and terminology, and we hoped to adopt a vocabulary that could be shared across question writers, interviewers, and survey organizations. We borrowed vocabulary from other research traditions (e.g., "acknowledgement" rather than "feedback"), and we tried to make distinctions (e.g., among response formats) that are needed by interviewers, trainers, and supervisors. Because both the interaction in the interview and the techniques interviewers need depend on the response format of the question, we developed labels for the most common question forms (see Table A in Online Appendix 3 for examples). We outline below some of the gaps in practices and techniques that we identified; others remain. For example, ideally the conventions used by those who write and program survey instruments (to indicate which parts of questions can be inserted or repeated "as needed," and so forth) would be fully integrated with the techniques of interviewing, so that instructions to interviewers were standardized across questions, instruments, and organizations.
