
Just over half of the youth who participated were female (52%), and the average age was 9.10 years (Table 10.1). In 26% of the interviews the interviewer knew the youth prior to the interview, 31% of the interviews took place while someone else was present, and 73% took place in the youth's home. The average score on the total internalizing measure was 16.85 on a scale that ranged from 0 to 53. The first internalizing subscale, anxious/depressed, had an average score of 5.57 with a high score of 20. The second subscale, withdrawn, had an average score of 4.68 out of 15. The last internalizing subscale, somatic complaints, had an average score of 6.61 and a high value of 19. The overall externalizing scale averaged 7.98 and ranged from 0 to 35. The first externalizing subscale, rule breaking, had an average of 2.17 and ranged from 0 to 15. The final externalizing subscale, aggressive behavior, had an average of 5.81 and ranged from 0 to 24. Only 11% of the youth indicated that they had ever tried any form of substance use. On a scale of 0-7, the average cultural participation score was 2.41. Lastly, the average reported discrimination score was 13.65 on a scale that ranged from 0 to 31.

Table 10.2 presents the results of random-effects models predicting overall measures of internalizing (linear) and externalizing (linear) behavior, and Table 10.3 presents substance use (logistic), cultural participation (linear), and cultural discrimination (linear). All models include random effects for the 19 independent interviewers.
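The random-intercept structure described above can be sketched as follows. This is a minimal illustration, not the authors' code: the variable names, simulated values, and the use of `statsmodels` are all assumptions; only the grouping by 19 interviewers and the rough effect sizes echo the text.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_interviewers, n_per = 19, 20          # 19 interviewers, as in the study
n = n_interviewers * n_per

# Simulated stand-ins for the study's variables (all values invented)
interviewer = np.repeat(np.arange(n_interviewers), n_per)
known = rng.integers(0, 2, n)           # interviewer knew the youth (0/1)
female = rng.integers(0, 2, n)
age = rng.uniform(8, 10, n)
u = rng.normal(0, 2, n_interviewers)    # interviewer-level random intercepts
internalizing = 17 - 3.5 * known + u[interviewer] + rng.normal(0, 5, n)

df = pd.DataFrame(dict(internalizing=internalizing, known=known,
                       female=female, age=age, interviewer=interviewer))

# Linear model with a random intercept for each interviewer
model = smf.mixedlm("internalizing ~ known + female + age",
                    df, groups=df["interviewer"])
result = model.fit()
# Estimated coefficient on `known`; should recover roughly the simulated -3.5
print(result.params["known"])
```

The logistic substance-use models in Table 10.3 would follow the same grouping logic with a binary outcome and an appropriate mixed-effects logistic estimator.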

Internalizing and Externalizing Behaviors

Model 1 of Table 10.2 shows that youth who were known to the interviewer prior to the interview reported internalizing scores 3.539 points lower (p < 0.001), after adjusting for gender and age. Models 2 and 3 show that the other two measures of privacy (presence of a third party and interview location) are not significantly associated with internalizing behavior. Model 4 shows that knowing the interviewer prior to the interview still significantly reduced the total internalizing score, by 3.459 points (p < 0.001), after the other two measures of privacy were added to the model.

None of the measures of privacy are associated with externalizing behavior in any of the models (Table 10.2, Models 5, 6, 7, 8). Neither gender nor age significantly predicts the externalizing aggregate scale. In separate models we also explore the relationship between our privacy measures and the three internalizing subscales and the two externalizing subscales (see Online Appendix 10A). In brief, we find that knowing an interviewer was negatively associated with all three internalizing subscales and that none of the privacy measures were associated with the externalizing subscales.

Substance Use, Cultural Participation, and Cultural Discrimination

Table 10.3 presents the random-effects models predicting substance use (Models 1-4), cultural participation (Models 5-8), and cultural discrimination (Models 9-12). None of the privacy measures are associated with reports of substance use, cultural participation, or cultural discrimination. There are also no associations between gender and the outcome variables in any of the models in Table 10.3. However, a one-year increase in age is associated with an average decrease of 0.794 points in reported cultural discrimination (p < 0.05). This association appears, with a similar effect size, in all four models in this set (Models 9-12).


We find that when an interviewer knew the participant in this CBPR intervention study among American Indian communities, reports of internalizing behavior were on average 3.5 points lower (Table 10.2, Models 1 and 4) than when the participant was unknown to the interviewer. Knowing a participant is negatively associated with all of the internalizing subscales (Online Appendix Table A10A.1). This matters because the ASEBA measure of internalizing behavior includes thresholds for the borderline and clinical ranges on all of its scales: underreporting means that participants in the borderline or clinical range may be incorrectly classified. Programs that rely on these classifications to determine intervention eligibility would then miss youth who would otherwise qualify whenever interviewers know the participants outside of the survey context.
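The misclassification risk can be made concrete with a small sketch. The cutoff value and the scores below are hypothetical (the ASEBA thresholds are not reproduced here); only the roughly 3.5-point average underreport comes from Table 10.2.

```python
# Hypothetical illustration: an average underreport of ~3.5 points on the
# internalizing scale (Table 10.2) can push true borderline/clinical cases
# below a screening cutoff used to determine intervention eligibility.

UNDERREPORT = 3.5   # average reduction when the interviewer knew the youth
CUTOFF = 14         # hypothetical borderline threshold (illustrative only)

# Hypothetical true scores for five youths interviewed by someone they know
true_scores = [10, 13, 15, 16, 20]
observed_scores = [s - UNDERREPORT for s in true_scores]

# Youths at or above the cutoff should be flagged, but their observed
# (underreported) score falls below it, so the program never sees them
missed = [t for t, o in zip(true_scores, observed_scores)
          if t >= CUTOFF and o < CUTOFF]
print(missed)   # true scores of youths missed by the screen: [15, 16]
```

In this toy example, two of the three youths who truly sit at or above the threshold are screened out solely because of the interviewer effect.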

Although knowing the interviewer was associated with lower internalizing scores, this privacy concern was not associated with reports of externalizing behavior, substance use, cultural participation, or cultural discrimination. The presence of a third party during the interview and whether the interview took place in the home were likewise not associated with any of the core measures. The majority of the tested associations between privacy concerns and the core BZDDD outcomes among youth are not significant. These results indicate that social desirability effects due to these three privacy concerns were largely absent among youth during the baseline period of this study.

A longstanding challenge of CBPR projects is that the potential for breaches of confidentiality and privacy grows as the lines between researcher and participant blur. This is particularly true when interviewers and other research staff are employed for a project situated in their own community (Banks et al. 2013; Holkup et al. 2004). In such settings, participants may edit their responses to prevent a neighbor, or someone else they know who is working as an interviewer, from learning information they would not normally disclose outside of an interview. The findings of this chapter are therefore largely reassuring, as they show few associations between reduced privacy and edited responses.

However, there is still work to be done to reduce the potential for social desirability concerns related to privacy. In this study, 26% of the youth interviews took place with a youth whom the interviewer already knew. For research projects with the potential for this situation to occur, we suggest that blocks of sensitive questions should be administered with a self-administered data collection mode when feasible. Using either a paper questionnaire or preferably a form of computer-assisted self-interviewing (CASI) would prevent the interviewer from hearing or seeing how a participant responds to a sensitive question. This would retain the advantage of employing interviewers from the local community and discourage edited answers.

Addressing the 31% of interviews that took place with a third party present is more difficult. Youth interviewed for the baseline data collection were between 8 and 10 years old, and 73% of the interviews took place in their homes. Although we lack data on who the third person was and their relationship to the youth, it is reasonable to assume the presence of caretakers or other relatives. It can be exceptionally difficult to request privacy in many home interview settings, and even more so when interviewing younger children (Mauthner 1997; Mneimneh et al. 2018). Fortunately, we see no evidence that the presence of a third party was associated with any of the key outcomes for this study, nor any associations with the location of the interview. Additionally, although 31% of interviews with a third party present is high, it is remarkably similar to the percentage (30.2%) of third-party presence reported for the World Mental Health interviews conducted in the United States (Mneimneh et al. 2018).


The sample for this study is unique in that it represents a group of people who share a language, culture, and history. Further, it is a sample of youth aged 8-10. The extent to which the associations tested here generalize to other groups and age ranges is therefore uncertain. However, this is one of the few studies to look for social desirability effects among American Indians in a survey context; a review of social desirability in cross-cultural research identified no studies that examined social desirability among American Indians (Johnson and Van de Vijver 2003).

Another limitation is that although we find the key outcomes largely unassociated with our privacy measures, the survey contained many other questions; our focus here is the core outcomes only. Future studies using these data, or data gathered with similar methods, should assess item-specific privacy effects whenever possible.

Finally, multiple interviewers worked in each of the study's primary locations. However, these interviewers were not randomly assigned to participants, and we have no information about how interviewers selected a family to interview from the pool available in a given community. This means we cannot assess whether some interviewers picked participants they knew, or whether the outcomes examined in this chapter were associated in some way with an interviewer's choice to interview a family.
