
Subsequent Record Check Investigations

A validation study of the economic questions included in the Panel Study of Income Dynamics (PSID) provided the opportunity to test the interviewing techniques within a "full" record check study as well as examine several hypotheses concerning the nature of response error that Cannell and colleagues had identified in the earlier health studies (Duncan and Mathiowetz 1985). The nature of the record check study, where responses for all respondents were examined by linking to company records for a single manufacturing entity, allowed for the assessment of both the extent and direction of response error. The study found weak support for a reduction in response error associated with the experimental interviewing techniques, most notably for the reporting of less well-known fringe benefits.

In another record check study, Belli and Lepkowski (1996) looked at the effect of feedback alone. The study design had experimental and control conditions, with interviewer feedback programmed into the experimental questionnaire. But Belli and Lepkowski did not replicate earlier interviewing experiments in this paper. Instead, they relied on a behavior coding analysis of a sample of approximately 500 interviews, apparently selected from both the experimental and control conditions. The dependent variable, constructed by subtracting the number of health care events recorded in records from those reported in interviews, also differed from earlier studies. The outcome variable was also skewed due to undercoverage of health care events in the records employed. The authors suggested that feedback given after responses to immediately preceding questions had a negative impact on reporting accuracy (leading to over-reports). This finding is difficult to compare to those of earlier studies, which relied on comparisons of experimental treatments as a whole and focused on the potential effect of a number of feedback statements across questions. While we do not view the findings of this study as persuasive, behavior coding could be employed as a supplement to between-treatment analysis in future studies, to examine the effects of instructions and feedback at a micro, question-by-question level.

Summary and Critique

From 1960 until 1990, the Cannell perspective on response error, and interviewing techniques designed to attack this error, evolved and was tested in a variety of settings. The perspective is a coherent, integrated approach to interviewing. The evidence overall suggests that the techniques may lead to better self-report data. But some objections to the response error assumptions that underlie the research persisted during this time, and the record of effects of interviewing techniques is inconsistent across studies.

The Marquis, Marquis, and Polich (1986) paper raised the question of what can be inferred from a record check study. They sought to make the case that "socially undesirable" phenomena are not under-reported in surveys. Their work dismissed "reverse" record check evidence of reporting bias out of hand. It also attacked "full design" record check evidence that found under-reporting bias (e.g., Madow 1967), speculating that the questionnaires in those studies were flawed, leading to mismatches between the survey and record evidence.

Marquis, Marquis, and Polich's (1986) findings are not definitive, because the execution of their record check study is not fully transparent and their suppositions about evidence contrary to their position are not convincing. Further, they may have found "over-reporting" of events - support for their case - due to record errors or match errors, as they acknowledge. In addition, their dismissal of "under-reporting" findings in the studies they examined as artifacts of poor questionnaire design ignores possible questionnaire flaws in studies that support their hypothesis. Still, this paper casts sharp light on the design and execution of record check studies. The malleability of survey response matches to record data has been noted elsewhere (e.g., Miller and Groves 1985).

Since Cannell and colleagues' work utilized evidence from record check studies to frame tests of methods to improve survey reporting, the limitations of record check studies raised by Marquis, Marquis, and Polich (1986) may be seen to cast doubt on the inferences from their interviewing research. But the outcome measures in the interviewing studies include ones believed to be both under- and over-reported, as well as attitudinal measures not subject to record "validation." More important is to recognize the limitations in the ability of record check studies to establish "truth" and to design record data-survey report comparisons carefully to reflect those limitations.

Beyond the issues raised by assumptions about response validity, the record of effects of the Feedback technique is inconsistent. The original experiment (Marquis 1970) showed noteworthy effects of what Marquis called "social reinforcement." A subsequent investigation (Cannell, Marquis, and Laurent 1977) found a more complicated picture, in which effects of Feedback were present for lower education respondents, but not for their counterparts with more education. Oksenberg, Vinokur, and Cannell (1979b) examined Feedback effects when combined with Instructions and with both Instructions and Commitment, for both low and high education respondents. In this study, Feedback effects were present for both education groups. Miller and Cannell (1982) found no incremental Feedback effects when it was combined with Instructions and Commitment in a national telephone study, the first attempt to employ the techniques in this mode. In subsequent telephone research, Feedback was always combined with the other techniques, so its independent effects could not be discerned. This inconsistent record of effects of Feedback is a call for further research into the value of what seems intuitively to be a useful approach to orienting respondents to interview tasks.
