Behavior Coding of Audio-Recordings

Audio-recordings of the calendar series and the provider probes series in 212 interviews were reviewed by two behavior coders, using Westat's CARIcode system. In CARIcode, the coder can call up a specific interview, or a specific question within an interview. A coding screen presents the question as it appeared to the interviewer, the coding form, a button to play the recording, and some paradata items that might be associated with the quality of the interview, such as interview length (see CARIcode screenshots in Appendix 6C in the online supplementary materials). Project-specific training was conducted to ensure that the two coders understood the coding scheme and could apply the behavior codes consistently. Coders evaluated the overall quality of the interview and each instance of asking the calendar series and the provider probes. The inter-coder agreement rate was .82. The coding schemes for the calendar series and the provider probes are presented in Appendix 6D in the online supplementary materials.
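The chapter reports the agreement rate (.82) but not how it was computed. As one plausible reading, the sketch below computes simple percent agreement between the two coders, with Cohen's kappa for comparison; the code lists are illustrative, not the study's data.

```python
from collections import Counter

def percent_agreement(codes_a, codes_b):
    """Share of instances on which the two coders assigned the same code."""
    assert len(codes_a) == len(codes_b)
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return matches / len(codes_a)

def cohens_kappa(codes_a, codes_b):
    """Agreement corrected for chance, given one code per instance per coder."""
    n = len(codes_a)
    p_obs = percent_agreement(codes_a, codes_b)
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    # Expected agreement if each coder assigned codes independently
    # at their own marginal rates.
    p_exp = sum((freq_a[c] / n) * (freq_b[c] / n) for c in freq_a)
    return (p_obs - p_exp) / (1 - p_exp)

# Illustrative codes on the study's three-point scale (1-3, defined below).
coder1 = [1, 1, 2, 3, 1, 2, 1, 1, 3, 2]
coder2 = [1, 1, 2, 3, 1, 1, 1, 2, 3, 2]
print(percent_agreement(coder1, coder2))  # 0.8
print(cohens_kappa(coder1, coder2))       # ~0.68
```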

The rapid feedback process is diagrammed in Appendix 6E in the online supplementary materials. Following the procedures described by Edwards, Maitland, and Connor (2017), field directors gave each interviewer verbal and written feedback quickly, ideally within 72 hours of the interview. The feedback included positive as well as negative points. The next interview conducted by the interviewer was then coded, so each interviewer contributed a pair of interviews to the data set, one just before and one just after feedback. Because the process was implemented in late fall, only a subset (112) of the MEPS interviewers were available to participate in the study, resulting in 224 interviews in the data set. Data about the feedback interaction were captured (such as whether the interviewer agreed with the feedback or asked for clarification). Selected interviewer characteristics were also available, such as years of MEPS experience. We hypothesized that more desirable behavior (behavior more consistent with the protocol) would be observed after feedback, both for overall interview quality and for each question series.
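To make the paired design concrete: each participating interviewer contributes one pre-feedback and one post-feedback interview, so the analysis file can be organized as pairs. A minimal sketch, with hypothetical interviewer IDs and quality scores standing in for the coded outcomes:

```python
from dataclasses import dataclass

@dataclass
class FeedbackPair:
    """One interviewer's pre- and post-feedback interviews (hypothetical scores)."""
    interviewer_id: str
    pre_quality: float    # overall quality coded just before feedback
    post_quality: float   # overall quality coded just after feedback

    @property
    def improved(self) -> bool:
        return self.post_quality > self.pre_quality

# 112 interviewers x 2 interviews = 224 interviews in the data set.
pairs = [
    FeedbackPair("FI-001", pre_quality=0.70, post_quality=0.85),
    FeedbackPair("FI-002", pre_quality=0.90, post_quality=0.88),
    FeedbackPair("FI-003", pre_quality=0.60, post_quality=0.75),
]

share_improved = sum(p.improved for p in pairs) / len(pairs)
print(f"{share_improved:.0%} of interviewers improved after feedback")
```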

Coders reviewed more than 3,000 instances (i.e., question series) of interviewer question-asking behavior in the first selected interview for each interviewer. For each instance, the coders assessed behavior using a three-point scale indicating whether the interviewer (1) followed the protocol exactly; (2) departed from the exact protocol but maintained the series meaning; or (3) did not maintain the series meaning. For the calendar series, the exact protocol was to follow the order indicated by the respondent. For the provider probes series, the exact protocol was to ask each question verbatim in the order presented by the CAPI screens. Field directors gave interviewers verbal and written feedback based on the coding. Coders reviewed another 2,200 instances in the interviews conducted right after feedback.
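The analysis behind the pre/post comparison is not spelled out here; a minimal sketch of one natural summary, the share of instances at each point of the three-point scale before and after feedback, might look like the following (counts are illustrative, not the study's).

```python
from collections import Counter

SCALE = {1: "followed protocol exactly",
         2: "departed but kept series meaning",
         3: "did not maintain series meaning"}

def code_shares(codes):
    """Proportion of instances at each point of the three-point scale."""
    counts = Counter(codes)
    return {c: counts.get(c, 0) / len(codes) for c in SCALE}

# Illustrative instance-level codes; the study coded ~3,000 pre-feedback
# and ~2,200 post-feedback instances.
pre  = [1] * 60 + [2] * 30 + [3] * 10
post = [1] * 75 + [2] * 20 + [3] * 5

for label, codes in (("before feedback", pre), ("after feedback", post)):
    shares = code_shares(codes)
    print(label, {SCALE[c]: round(s, 2) for c, s in shares.items()})
```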

Supervisor Alerts

In addition to the CARIcode data, we reviewed data from alerts generated for supervisors to investigate suspicious interviewer behavior. The alerts were displayed on a dashboard, which also presented useful production and cost metrics by region and interviewer and offered a simple way to drill down into interviewer details. The alerts covered four operational issues representing outlier behaviors: CARI consent refusal, interviews completed outside time-of-day boundaries, excessive contact attempts, and cases with high potential to yield completed interviews that were not worked during the reporting period. Alerts also flagged specific data quality concerns, such as unusual event date patterns and low record use. (Low record use could reflect interviewers failing to encourage respondents to produce records, or failing to follow the calendar series protocol.) Alerts were generated early each morning from data transmitted to the home office overnight and displayed on the supervisor's dashboard.

The supervisor was expected to act on an alert within 24 hours, and might begin by reviewing paradata from various sources: records of contact attempts, CARI recordings, geographic information system (GIS) data, and so on. If the anomaly could not be explained by reviewing paradata, the supervisor discussed the issue with the interviewer, determined whether there was a problem, diagnosed it, and took corrective action with the interviewer. Actions included explaining the proper protocol or referring the interviewer to other training material. The supervisor documented alert actions and status on the dashboard. Reviewing alerts with interviewers can be viewed as another form of rapid feedback and on-the-job training. We examined alert frequency by type and by interviewer over the course of the field period. We hypothesized that (1) alerts would diminish over time and (2) most interviewers who generated an alert would not continue to perform in ways that generated alerts.
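The alert rules themselves are not published in this chapter. The sketch below illustrates how outlier checks of the kind described (time-of-day boundaries, excessive contact attempts, low record use) might be evaluated against overnight paradata; all field names and thresholds are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class CasePayload:
    """Hypothetical overnight paradata for one interviewer's case."""
    interviewer_id: str
    completed_hour: int | None   # local hour interview finished; None if still open
    contact_attempts: int
    records_used: int            # documents the respondent produced

# Hypothetical thresholds standing in for the unpublished alert rules.
TIME_BOUNDS = range(8, 22)       # interviews expected between 8am and 9pm
MAX_ATTEMPTS = 15
MIN_RECORDS = 1

def generate_alerts(case: CasePayload) -> list[str]:
    """Return one alert string per outlier rule the case trips."""
    alerts = []
    if case.completed_hour is not None and case.completed_hour not in TIME_BOUNDS:
        alerts.append("time-of-day boundary exceeded")
    if case.contact_attempts > MAX_ATTEMPTS:
        alerts.append("excessive contact attempts")
    if case.completed_hour is not None and case.records_used < MIN_RECORDS:
        alerts.append("low record use")
    return alerts

print(generate_alerts(CasePayload("FI-042", completed_hour=23,
                                  contact_attempts=18, records_used=0)))
```

Each fired alert would then appear on the supervisor's dashboard the next morning for review within 24 hours, as described above.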

 