
Methods

Data were collected from March to June 2012 in the Netherlands by GfK Panel Services Benelux. Multistage cluster sampling was used (for more details, see Haan, Ongena, and Aarts 2014). Of the 3,496 households selected, 824 participated in the survey. Data collection was part of a mixed-mode experiment using a slightly modified version of the ESS Round 5 questionnaire. Three modes were compared: CAPI, CATI, and a web version not included in the current analysis. One-third of the sample members were contacted by phone and subsequently randomly assigned to CAPI, CATI, or web; one-third were contacted by phone and allowed to choose between CATI and web; and one-third were contacted face-to-face at their homes and allowed to choose between CAPI and web administration. Thus, while one-third of the sample members were randomly assigned to a CAPI, CATI, or non-interviewer-administered condition, two-thirds were offered a choice between an interviewer-administered and a non-interviewer-administered mode. The overall response rate was 37.5 percent (AAPOR 2016, RR1). Details about the response rates per experimental group can be found in Table A12A.1 in the supplementary online materials. More specific information about the sample can be found in an earlier study by Haan, Ongena, and Aarts (2014).

Similar to Olson and Smyth (2015), we include a control variable for the respondent's sex to control for inadequate representation of female respondents. In general, age and education are considered important factors in determining a respondent's working memory capacity (Salthouse 1991; Yan and Tourangeau 2008), and both have been shown to affect response times in CAPI (Couper and Kreuter 2013), CATI (Olson and Smyth 2015), and web surveys (Yan and Tourangeau 2008).

All interviews were audio-recorded; the 54 CAPI and 60 CATI interviews (with 27 interviewers and 81 different questions) comprise about 57 hours of interaction. The first half of each interview was transcribed using the Sequence Viewer program (Dijkstra 2018). This program allows for dividing interviews into separate QA sequences (i.e., all utterances involved in asking and answering one survey item), subsequently dividing all utterances in a QA sequence into turns (i.e., utterances by the interviewer and those by the respondent), and dividing turns into events (i.e., meaningful actions by the interviewer or respondent, such as questions, answers, requests, comments, and considerations). Thus, the number of events and the number of words per QA sequence could easily be counted from the transcripts for both the interviewer and the respondent.
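The counting of turns, events, and words per QA sequence can be sketched as follows. The authors worked in Sequence Viewer and R; this is an illustrative Python sketch over a hypothetical data structure (the field names and example utterances are our assumptions, not the actual export format).

```python
# Hypothetical mini-structure mirroring a Sequence Viewer export:
# an interview is a list of QA sequences; each sequence is a list of
# turns; each turn has a speaker ("I" or "R") and a list of events.
interview = [
    [  # QA sequence 1
        {"speaker": "I", "events": ["Do you own a car?"]},
        {"speaker": "R", "events": ["eh", "yes, one car"]},
    ],
    [  # QA sequence 2
        {"speaker": "I", "events": ["How satisfied are you with life?"]},
        {"speaker": "R", "events": ["hard to say", "maybe a seven"]},
    ],
]

def sequence_counts(sequence):
    """Return (turns, events, words) for one QA sequence."""
    n_turns = len(sequence)
    n_events = sum(len(turn["events"]) for turn in sequence)
    n_words = sum(len(event.split())
                  for turn in sequence for event in turn["events"])
    return n_turns, n_events, n_words

for seq in interview:
    print(sequence_counts(seq))  # e.g. (2, 3, 9) for sequence 1
```

The same tallies could, of course, be restricted to interviewer or respondent turns by filtering on the speaker field.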

Because list-wise deletion is used for missing data in some of our analyses, 6,629 QA sequences were available for analysis (53 CAPI and 55 CATI interviews). These sequences included 24,195 turns containing 33,741 events, with an average of 3.65 turns (SD = 3.38) and 5.09 events (SD = 4.75) per sequence. The number of events per sequence ranged from a minimum of one (i.e., the interviewer briefly stating the assumed answer and continuing with the next question) to a maximum of 88 (which occurred during the administration of a question about the respondent's satisfaction with life). To allow for computation of Pearson correlations, in subsequent analyses we used the trimmed numbers of turns and events (see Table A12A.2 in the supplementary online materials).
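Trimming heavily skewed count variables before computing Pearson correlations can be sketched as capping values at an upper percentile. The authors' actual trimming rule is given in Table A12A.2 in the supplementary online materials; the 90th-percentile cap below is purely an illustrative assumption.

```python
import math

def percentile(values, p):
    """Nearest-rank p-th percentile of a list (p in (0, 100])."""
    ordered = sorted(values)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

def trim(values, p):
    """Cap every value at the p-th percentile (winsorizing from above)."""
    cap = percentile(values, p)
    return [min(v, cap) for v in values]

# Toy event counts per sequence; note the extreme outlier (cf. the
# 88-event sequence mentioned in the text).
events_per_sequence = [1, 2, 3, 3, 4, 5, 5, 6, 7, 88]
print(trim(events_per_sequence, p=90))  # outlier capped at 7
```

Capping rather than deleting outliers keeps all sequences in the analysis while preventing a handful of extreme values from dominating the correlation.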

We tallied respondents' uncertainty markers using the text search option of the Sequence Viewer program, counting the Dutch phrases and words that indicate uncertainty. Transcribers were explicitly instructed to transcribe any noticeable hesitance of speech, i.e., filled pauses (Swerts 1998), which normally appear in Dutch as "uh" [ah] or "um" [am], with "eh"; we also counted occurrences of these pauses using text search commands. All analyses were conducted using R, version 3.3.1 (R Core Development Team 2018). Statistical analyses have not been adjusted for the design effect of the sampling.
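The text-search tallying can be sketched as pattern matching over transcribed utterances. The authors used Sequence Viewer's built-in search; the Python sketch below uses word-boundary regular expressions, and the list of Dutch uncertainty phrases is our assumption for demonstration, not the authors' actual search terms.

```python
import re

# Filled pauses were transcribed uniformly as "eh".
FILLED_PAUSE = re.compile(r"\beh\b")
# Illustrative Dutch uncertainty phrases: "maybe", "I think", "don't know".
UNCERTAINTY = re.compile(r"\b(misschien|denk ik|weet niet)\b")

def count_markers(utterance):
    """Return (filled pauses, uncertainty phrases) found in one utterance."""
    return (len(FILLED_PAUSE.findall(utterance)),
            len(UNCERTAINTY.findall(utterance)))

print(count_markers("eh, misschien, eh, ik weet niet zeker"))  # (2, 2)
```

Word boundaries (`\b`) matter here: without them, a search for "eh" would also match inside other Dutch words.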

For the analyses of interaction quantity, we use methods that accommodate the complex structure of the data. In analyzing survey interview interactions, it is necessary to take into account cross-classification by respondents and questions, and nesting within interviewers (Olson and Smyth 2015; Yan and Tourangeau 2008). In this case, however, interviewers, respondents, and questions are also nested within mode (i.e., interviewers interviewed in only one mode), as displayed in Figure 12.1. Since we expect mode, interviewers, respondents, and questions each to have a unique effect on interaction quantity, we take this nesting into account by estimating cross-classified random effects models, with the number of words uttered cross-classified by respondents and by questions, and with questions and respondents nested within mode (see Online Appendix 12A for a full description of the model). Given the nature of our data (108 respondents, 27 interviewers), including interviewers in the same model as respondents yielded zero variance at the interviewer level in all cases; therefore, we did not include random effects at the interviewer level in any model.
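A generic cross-classified random effects model of this kind can be written as follows. The notation here is ours and purely illustrative; the authors' full specification is given in Online Appendix 12A.

```latex
% Words y in sequence i, for respondent j and question k; mode enters
% as a fixed effect, and respondent and question effects are crossed.
y_{i(jk)} = \beta_0 + \beta_1\,\mathrm{mode}_{jk} + u_j + v_k + e_{i(jk)},
\qquad
u_j \sim N(0, \sigma^2_u),\quad
v_k \sim N(0, \sigma^2_v),\quad
e_{i(jk)} \sim N(0, \sigma^2_e)
```

Here \(u_j\) and \(v_k\) are the crossed respondent and question random effects; as noted above, no interviewer-level random effect is included.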

FIGURE 12.1

Data structure of the number of words uttered nested in mode and interviewers and cross-classified by respondents and questions.

 