Discussion

Behavior Coding and Rapid Feedback

With rapid verbal and written feedback of the behavior coding results from CARI, interviewer question-asking behavior improved. The behavior changes are consistent with the rapid feedback results for general interview quality reported by Edwards, Maitland, and Connor (2017). Exact adherence to the protocol for the two question series increased from 33.4% before feedback to 43.4% afterward. From a total survey error perspective, the increase in protocol adherence and the reduction in deviations are important: they suggest the possibility of large gains in data quality when rapid feedback based on CARI is applied as a routine tool in face-to-face interviewing. By adhering to protocol, interviewers are more likely to increase respondents' use of records and to ask additional probes for health care events. These strategies in turn have the potential to increase data quality by decreasing underreporting. With a state-of-the-art CARI coding system, it should be possible to achieve major quality improvements with only modest additional effort, by targeting questions that capture key survey statistics and that exhibit the most problematic interviewer behavior.
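To make this targeting strategy concrete, the sketch below ranks questions for CARI review by combining a question's analytic importance with its observed rate of problematic interviewer behavior. The question names, weights, and deviation rates are hypothetical illustrations, not MEPS values.

```python
# Hypothetical sketch: prioritize questions for CARI behavior coding by
# combining each question's analytic importance with the rate of
# problematic interviewer behavior observed in coded recordings.

# (question id, importance weight 0-1, share of coded administrations
# with a major deviation) -- all values are illustrative, not MEPS data
questions = [
    ("calendar_series", 0.9, 0.42),
    ("provider_probes", 0.8, 0.35),
    ("demographics", 0.3, 0.05),
]

def priority(importance: float, deviation_rate: float) -> float:
    """Simple product score: high-importance, high-deviation questions first."""
    return importance * deviation_rate

# Review the highest-priority questions first
for qid, imp, dev in sorted(questions, key=lambda q: -priority(q[1], q[2])):
    print(f"{qid}: priority {priority(imp, dev):.2f}")
```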

Asking for clarification during feedback was associated with an increase in the predicted probability of interviewers administering the provider probes series strictly according to protocol. We assume the clarification helped these interviewers, though it is possible that those who asked for clarification were more motivated to improve. It is unclear why there was no similar increase for the calendar series. Conventional wisdom holds that successful interviewing requires two core skills: gaining cooperation and maintaining standardization during the interview. Survey practitioners have suggested that experienced interviewers are more successful at gaining cooperation, but paradoxically, inexperienced interviewers are more likely to ask questions exactly as worded, a key component of standardization (Olson, Kirchner, and Smyth 2016). New interviewers may have learned strict question-asking behavior in their initial training, but with experience they learn to cut corners to shorten interviews or maintain rapport. However, interviewers' level of experience was not related to the behavior we observed in these two question series.
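The predicted probabilities referred to above imply a binary-outcome model of protocol adherence. The chapter does not give the specification, so the sketch below is only a minimal illustration: a logistic regression on simulated data, with hypothetical predictors for asking clarification and interviewer experience.

```python
# Hypothetical sketch of a model behind "predicted probability of
# administering the probes strictly according to protocol."
# Variable names and data are invented for illustration; the chapter
# does not describe the actual specification.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
asked_clarification = rng.integers(0, 2, n)  # 1 = asked for clarification during feedback
experience_years = rng.uniform(0, 20, n)     # interviewer experience

# Simulated outcome: adherence loosely related to clarification, not experience,
# mirroring the pattern reported in the text
true_logit = -0.7 + 0.5 * asked_clarification + 0.0 * experience_years
adhered = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

X = sm.add_constant(np.column_stack([asked_clarification, experience_years]))
model = sm.Logit(adhered, X).fit(disp=False)

# Predicted adherence probability for an interviewer who asked for
# clarification vs. one who did not, holding experience at its mean
no_clar = [1, 0, experience_years.mean()]
clar = [1, 1, experience_years.mean()]
print(model.predict([no_clar, clar]))
```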

It is not surprising that failure to maintain question meaning was greater in the calendar series than in the provider probes. As noted earlier, the calendar series format is more difficult to navigate and requires more cognitive skill to follow correct protocol. We also speculate that some interviewers may perceive the calendar series as less critical to the survey's key statistics. If so, they may judge that correct protocol can be abandoned for questions like these in favor of speeding up the interview or maintaining rapport with the respondent (Hubbard, Edwards, and Maitland 2016).

Supervisor Alert System

Alerts can be considered another form of rapid feedback to interviewers, but one that draws on data as well as paradata, at least for MEPS. The overwhelmingly dominant alert type - no use of records for medical events - was triggered so often that it became obvious early in the field period that it was not well suited to flagging incorrect interviewer behavior. Many respondents do not keep records of medical events, and thus the respondent's behavior is not fully under the interviewer's control.

The second most frequent alert - no use of records for prescribed medicine use by the elderly - was perhaps more appropriate because most respondents do have such records; for example, many medications for the elderly treat chronic conditions and are therefore current and available when the MEPS interview occurs. Arguably, though, this alert and the one for medical records in general might be more effective at the interviewer level than at the interview level, flagging a pattern of no record use across a number of interviews.
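One way to implement such an interviewer-level flag is to aggregate the interview-level record-use indicator across each interviewer's completed cases and alert supervisors only when a pattern emerges. The sketch below illustrates the idea with invented data and thresholds; it is not the MEPS alert system.

```python
# Hypothetical sketch: raise an alert at the interviewer level when the
# share of interviews with no record use exceeds a threshold, rather
# than flagging every individual interview. Names, data, and thresholds
# are illustrative only.
from collections import defaultdict

# (interviewer id, used_records) per completed interview
interviews = [
    ("int_01", False), ("int_01", False), ("int_01", False), ("int_01", True),
    ("int_02", True), ("int_02", False), ("int_02", True),
]

MIN_CASES = 3          # require enough interviews to see a pattern
NO_RECORD_ALERT = 0.6  # alert when more than 60% of cases show no record use

counts = defaultdict(lambda: [0, 0])  # interviewer -> [no-record cases, total cases]
for iid, used in interviews:
    counts[iid][0] += (not used)
    counts[iid][1] += 1

for iid, (no_rec, total) in counts.items():
    if total >= MIN_CASES and no_rec / total > NO_RECORD_ALERT:
        print(f"ALERT: {iid} shows no record use in {no_rec}/{total} interviews")
```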

Despite these caveats, the alert system and the corresponding retraining program appear to have been effective in reducing the frequency of these quality-related data anomalies over the course of the field period, as evidenced by the low recurrence rate. The other two data alerts were triggered infrequently, and the underlying problems they attempted to address are better classified as issues of instrument design than of interviewer behavior. Instrument changes in the following year eliminated them.

Conclusion

With its mix of public and private insurance, its fragmentation of medical providers and problems with coordination of care, and its large health care disparities, the U.S. health care system is very complex. A probability-based face-to-face survey provides the highest quality and richest set of metrics for investigating and understanding health care utilization and costs at the person and family levels and for modeling the effects of policy changes. Such a survey, however, requires skilled interviewers capable of collecting complex data in the home.

In an earlier section of this chapter, we used prescribed medicine utilization and expenditures to illustrate key MEPS statistics and discussed the effect of high utilizers. Interviewer effects on prescribed medicine data quality were targeted with rapid feedback of CARIcode data from the calendar series and the provider probes, and with overnight alerts to supervisors about interviews with elderly respondents that reported no prescribed medicines obtained during the reference period. We have shown that these techniques can improve interviewers' adherence to correct protocol. One of the biggest problems in surveys of health care utilization is respondent error, particularly the underreporting of events. This error can be compounded by interviewer error. On MEPS, other such targeted efforts in interviewer classroom training have increased event reporting, a positive effect that has persisted for years. Rapid feedback from CARIcode and alerts about data anomalies are another form of interviewer training, one that can pinpoint problematic behavior while memories are fresh and the evidence is incontrovertible. The combination of interviewer and respondent error may be a large component of total survey error in the MEPS prescribed medicine estimates. If the measures undertaken in fall 2018 succeed in reducing the interviewer's contribution to total survey error, the estimates of total prescribed medicine expenditures for the elderly might increase by billions of dollars.

References

Edwards, B., A. Maitland, and S. Connor. 2017. Measurement error in survey operations management: Detection, quantification, visualization, and reduction. In Total survey error in practice, ed. P. P. Biemer, E. de Leeuw, S. Eckman, B. Edwards, F. Kreuter, L. E. Lyberg, N. C. Tucker, and B. T. West, 255-277. Hoboken, NJ: John Wiley & Sons.

Edwards, B., S. Schneider, and P. D. Brick. 2008. Visual elements of questionnaire design: Experiments with a CATI establishment survey. In Advances in telephone survey methodology, ed. J. Lepkowski, N. C. Tucker, J. M. Brick, E. D. de Leeuw, L. Japec, P. J. Lavrakas, M. W. Link, and R. L. Sangster, 276-296. New York: John Wiley & Sons.

Fowler Jr., F. J., and T. W. Mangione. 1990. Standardized survey interviewing: Minimizing interviewer-related error. Vol. 18. Newbury Park, CA: Sage.

Groves, R. M. 1987. Research on survey data quality. Public opinion quarterly 51:S156-S172.

Groves, R. M. 2011. Three eras of survey research. Public opinion quarterly 75(5):861-871.

Groves, R. M., and L. Lyberg. 2010. Total survey error: Past, present, and future. Public opinion quarterly 74(5):849-879.

Hubbard, R., B. Edwards, and A. Maitland. 2016. The use of CARI and feedback to improve field interviewer performance. Paper presented at the annual meeting of the American Association for Public Opinion Research, Austin, TX.

Kashihara, D., and D. Wobus. 2006. Accuracy of household-reported expenditure data in the Medical Expenditure Panel Survey. Proceedings of the American Statistical Association, 3193-3200. Alexandria, VA: American Statistical Association.

Knowles, M. 1980. The modern practice of adult education: From pedagogy to andragogy. Wilton, CT: Association Press.

Knowles, M., E. F. Holton III, and R. A. Swanson. 2005. The adult learner: The definitive classic in adult education and human resource development (6th ed.). Burlington, MA: Elsevier.

Kolb, D. A., R. E. Boyatzis, and C. Mainemelis. 2001. Experiential learning theory: Previous research and new directions. Perspectives on thinking, learning, and cognitive styles 1(8):227-247.

Medical Expenditure Panel Survey, Agency for Healthcare Research and Quality. Accessed June 8, 2019 at https://meps.ahrq.gov/mepsweb/.

Mohadjer, L., and B. Edwards. 2018. Paradata and dashboards in PIAAC. Quality assurance in education 26(2):263-277.

Olson, K., A. Kirchner, and J. Smyth. 2016. Do interviewers with high cooperation rates behave differently? Interviewer cooperation rates and interview behaviors. Survey practice 9(2):1-11.

Schaeffer, N. C. 1991. Conversation with a purpose—or conversation? Interaction in the standardized interview. In Measurement errors in surveys, ed. P. P. Biemer, R. M. Groves, L. E. Lyberg, N. A. Mathiowetz, and S. Sudman, 367-391. New York: John Wiley & Sons, Inc.

Silber, H. 2019. Interviewer training programs of multinational survey programs mapped to the total survey error. Paper presented at the annual conference of the World Association for Public Opinion Research, Toronto, Ontario.

Sperry, S., B. Edwards, R. Dulaney, and D. E. B. Potter. 1998. Evaluating interviewer use of CAPI navigation features. In Computer assisted survey information collection, ed. M. Couper, R. Baker, J. Bethlehem, C. Clark, J. Martin, W. L. Nicholls II, and J. M. O'Reilly, 351-365. Hoboken, NJ: Wiley.

Tourangeau, R., F. Kreuter, and S. Eckman. 2012. Motivated underreporting in screening interviews. Public opinion quarterly 76(3):453-469.

West, B., F. G. Conrad, F. Kreuter, and F. Mittereder. 2018. Nonresponse and measurement error variance among interviewers in standardized and conversational interviewing. Journal of survey statistics and methodology 6(3):335-359.

 