We review the results of the behavior coding and rapid feedback, then summarize findings from the supervisor alert system.
Behavior Coding and Rapid Feedback
Table 6.1 shows the rapid feedback results at the level of the question series. Interviewers maintained the meaning of the questions but did not follow the protocol exactly in the majority of instances in which the two series of questions were asked (n = 5,259), both before and after feedback. Question-asking behavior that followed the protocol exactly increased from 33.4% before feedback to 43.4% after feedback. Behavior that failed to maintain the question meaning decreased from 9.8% before feedback to 3.7% after feedback. The same direction and magnitude of results were observed for both question series. However, behavior differed somewhat by question series type. More desirable interviewer behavior was observed in the provider probes than in the calendar series, both before and after feedback.
Table 6.1: Question-Asking Results Before and After Feedback by Question Series

Table 6.2: Interviewer Behavior Before and After Feedback, with and without Request for Clarification, by Question Series
Note: Whether the interviewer asked for clarification during the feedback session was recorded for 88% of the question series.

After each feedback session the field director recorded whether the interviewer asked for clarification (a case-level dichotomous variable). Table 6.2 shows the interviewers' behavior by question series before and after feedback, broken out by whether the interviewer asked for clarification. As expected, requests for clarification were more frequent when the interviewer had not followed the protocol exactly before feedback. For the calendar series, in more than 14% of the question series in which clarification was sought, the interviewer had not maintained the series' meaning in the interview conducted before feedback. It is striking that clarification was requested in none of the question series in which the interviewer had followed the calendar series protocol exactly before feedback. After feedback sessions with clarification requests, 12.7% of the calendar series had question-asking behavior that followed the protocol exactly, a considerable increase from zero but still far below the level observed in sessions with no requests for clarification.
Similar results were found for the provider probes, but the protocol was followed exactly much more frequently than in the calendar series, and failures to maintain the series' meaning were less common. For provider probes in sessions with clarification requests, 36.4% had question-asking behavior that followed the protocol exactly before feedback, increasing to 41.9% after feedback; 8.3% failed to maintain the meaning before feedback, decreasing to only 2.2% after.
We estimated multilevel multinomial logistic regressions to examine the role that asking for clarification during feedback might have played (see Table A6F.1 in Appendix 6F in the online materials). Figure 6.1 shows the predicted probability of the interviewer following the protocol exactly versus not maintaining the meaning, taking into account the design effect of the clustering by interviewer. The first set of bars is before feedback and the second set after feedback. The darker bars are interviewers who asked for clarification; the lighter bars are interviewers who did not. (It should be noted that the interviewer intraclass correlation coefficients are large due to the within-subject design.)
As expected, interviewers who did not maintain meaning in the first interview and asked for clarification during feedback were more likely to read verbatim after feedback. However, interviewers who did not ask for clarification showed the opposite effect: they were less likely to ask questions verbatim after feedback.
Figure 6.2 shows the results by question series. Only in the provider probes do we see that interviewers who did not maintain meaning were more likely to read the questions verbatim after feedback if they asked for clarification. On the flexible calendar grid, interviewers who did not maintain the meaning were less likely to follow the protocol exactly, and slightly less likely still when they asked for clarification. A similar pattern held for the predicted probability of interviewers making minor wording changes after feedback (data not shown): it was higher only if the interviewer asked for clarification.
We added interviewer experience to the models, both as a continuous variable (years of experience on MEPS) and as a dichotomous variable (less than one year versus one year or more), and found that it had no effect on the results (see Table A6G.1 in Appendix 6G in the online materials).
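The predicted probabilities plotted in Figures 6.1 and 6.2 come from multinomial logistic regressions. As a minimal sketch of the mechanics only (ignoring the interviewer-level random effects of the multilevel model, and using entirely hypothetical coefficients and predictor names), category probabilities for a three-category outcome are a softmax over the linear predictors, with the first category as the reference:

```python
import math

def predicted_probs(asked_clarification, after_feedback, coefs):
    """Predicted category probabilities for a three-category outcome
    (followed exactly / minor change / did not maintain meaning).
    The first category is the reference: its linear predictor is 0.
    `coefs` holds (intercept, clarification, after-feedback) for each
    non-reference category; all values here are hypothetical."""
    x = [1.0, float(asked_clarification), float(after_feedback)]
    # One linear predictor per non-reference category.
    etas = [0.0] + [sum(b * xi for b, xi in zip(beta, x)) for beta in coefs]
    denom = sum(math.exp(e) for e in etas)
    return [math.exp(e) / denom for e in etas]

# Hypothetical coefficients, not the study's estimates.
coefs = [(-0.5, 0.2, 0.4),   # minor change vs. followed exactly
         (-1.5, 0.6, -0.8)]  # did not maintain meaning vs. followed exactly
probs = predicted_probs(asked_clarification=1, after_feedback=1, coefs=coefs)
```

The probabilities sum to one by construction; the fitted multilevel model additionally averages over the interviewer-level random effects, which this sketch omits.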
Figure 6.1: Predicted probability of following protocol exactly (versus not maintaining meaning) before and after feedback by whether the interviewer asked for clarification during feedback.

Figure 6.2: Predicted probability of following protocol exactly (versus not maintaining meaning) before and after feedback by whether the interviewer asked for clarification during feedback, by question series type.
Supervisor Alert System
Supervisor alerts, specifically geared toward identifying issues that influence data quality during the interview, were issued throughout the fall field period from a base of 7,361 interviews. For years the project has employed a number of operational alerts related to efficiency and production, with known patterns and tested retraining approaches. While effective, these do not directly influence the collection of key data elements. By contrast, the newly instituted data quality alerts offer a novel, rapid approach to identifying and addressing within-interview behavior that more directly influences key study estimates. The focus specifically on data quality alerts reflects the evolution of this system from monitoring production to assessing data collection.

One data quality alert type dominated the process: record usage for medical events (see Table A6H.1 in Appendix 6H in the online materials). Almost 2,000 alerts (84.4%) were generated because answers to the calendar series indicated the respondent had no records yet events were reported later in the interview. The second most common alert, the lack of records for prescribed medicines among respondents aged 65 and over, occurred fewer than 250 times (10.4% of alerts). Alerts for hospital stays that started and ended on the same day occurred at less than half that rate, and the fourth type of anomaly (respondents younger than 18) was negligible.
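The four data quality alert conditions described above are simple consistency checks on interview responses. A minimal sketch, assuming a dictionary of interview fields (every field name below is a hypothetical stand-in for the actual instrument variables):

```python
def data_quality_alerts(iv):
    """Return the data-quality alerts triggered by one interview.
    `iv` is a dict of interview responses; the field names are
    hypothetical, not the instrument's actual variable names."""
    alerts = []
    # Calendar series indicated no records, yet medical events
    # were reported later in the interview.
    if not iv["has_records"] and iv["events_reported"] > 0:
        alerts.append("record_usage_medical_events")
    # Respondent aged 65 or over with no prescribed-medicine records.
    if iv["age"] >= 65 and not iv["rx_records"]:
        alerts.append("no_rx_records_65_plus")
    # Hospital stay that started and ended on the same day.
    if any(s["admit"] == s["discharge"] for s in iv["hospital_stays"]):
        alerts.append("same_day_hospital_stay")
    # Respondent younger than 18 (anomalous for this instrument).
    if iv["age"] < 18:
        alerts.append("respondent_under_18")
    return alerts
```

In a production system these checks would run against the completed interview record and route any triggered alerts to the supervisor; the sketch only illustrates the rule logic.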
MEPS data collection is not spread evenly over the fall months. Rather, all cases are assigned at the beginning of the field period, and more interviews are completed in August and September than in October and November. Figure 6.3 shows the distribution of alerts by month, adjusted to reflect the number of interviews completed each month. More alerts were generated in August even when adjusted for caseload, and the trend is toward fewer alerts per month over the course of the fall. Figure 6.4 shows the distribution of alert volume by interviewer and by alert type. All interviewers were associated with at least one alert. For a given alert type, many interviewers triggered only one alert and many others only two. This pattern holds for most alert types, including the lack of prescribed-medicine records for respondents aged 65 and over and hospital stays that started and ended on the same day.
Figure 6.3: Caseload-adjusted alert distribution over field period.

Figure 6.4: Alert occurrence among interviews by alert type.
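The caseload adjustment behind Figure 6.3 amounts to expressing alerts as a rate per completed interview rather than as a raw count. A short sketch with illustrative monthly counts (the numbers below are made up for the example; only the total caseload of 7,361 interviews comes from the text):

```python
# Caseload-adjusted alert rates: alerts per 100 completed interviews,
# by month. All monthly counts are illustrative, not the study's data;
# the interview counts are chosen to sum to the reported 7,361 base.
alerts_by_month = {"Aug": 900, "Sep": 600, "Oct": 250, "Nov": 120}
interviews_by_month = {"Aug": 3000, "Sep": 2500, "Oct": 1200, "Nov": 661}

rates = {m: 100.0 * alerts_by_month[m] / interviews_by_month[m]
         for m in alerts_by_month}
# e.g. Aug -> 30.0 alerts per 100 interviews
```

Dividing by the monthly caseload is what lets a month with many completed interviews (August) be compared fairly against a month with few (November), which is the comparison Figure 6.3 is making.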