Mixed methods can provide a more complete understanding of the results or outcomes of an intervention study. Midgley, Ansaldo, and Target (2014) used a qualitative approach nested in a randomized controlled trial called Improving Mood With
Psychoanalytic and Cognitive Behavioral Therapy (IMPACT). The intervention was designed to treat adolescent depression in the United Kingdom and to prevent relapse. Before the intervention, all adolescents and parents underwent qualitative interviews to assess the issues that brought them to treatment, as well as their hopes and expectations for the therapy. At the posttreatment follow-up, families were interviewed about their experiences of therapy over time, with a focus on treatment outcomes and on the cultural and contextual factors that affected those outcomes. In addition, participants identified outcomes that were important to them but that the investigators had not anticipated.
For community-based participatory research, mixed methods are important for gaining both emic and etic perspectives (Ahmed, Beck, Maurana, & Newton, 2004). Mixed methods research designs place high value on the stories behind the numbers—in exploratory designs, the experiences and insights of the community under study inform the quantitative investigation, and in explanatory designs, they illuminate the quantitative data. This makes mixed methods especially attractive to community partners whose interest lies in improving practice and achieving outcomes suited to a particular context. As such, mixed methods provide a valuable bridge between researchers and community partners, which is essential to the successful implementation of an intervention.
Mixed methods strategies are essential to understanding what must be adapted in evidence-based models to ensure successful outcomes for different patients and in different communities. The Federal Coordinating Council for Comparative Effectiveness Research (CER) defined CER broadly, asserting that it is patient-centered, “real-world” research that can help patients, clinicians, and other decision makers assess “the relative benefits and harms of strategies to prevent, diagnose, treat, manage, or monitor health conditions and the systems in which they are made” (Congressional Budget Office of the Congress of the United States, 2007; National Institutes of Health, n.d.). A misleading assumption is that all participants respond to an intervention in the same way, regardless of context. In the real world, people come to treatment with different preconceived notions about what is wrong and what to do about it. Tension between “patient-centeredness” and application of an “evidence base”—that is, between incorporating context and relying on the general applicability of evidence (“generalizability”)—keeps potentially beneficial treatments (e.g., depression treatment) from reaching people who could benefit from them (e.g., persons with medical comorbidity such as diabetes, who are sometimes poorly adherent to lifestyle changes and medical regimens; Ciechanowski, Katon, & Russo, 2000; Gonzalez et al., 2008). Mixed methods can offer insight into how an intervention works or does not work, for whom, and in what context.