ARE TREATMENT EFFECTS REAL? THE ROLE OF FIDELITY
LAURA N. GITLIN AND JEANINE M. PARISI
I have not failed. I have just found 10,000 things that do not work.
—Thomas A. Edison
A multisite intervention study to improve exercise adherence among cardiac patients reports large and statistically significant benefits overall for the treatment groups; yet outcomes vary by site, with some sites demonstrating null findings.
A home-based intervention that provides stress reduction techniques to parents caring for children with chronic illness demonstrates positive results; parents also report a strong bond with the interventionists, in whom they felt comfortable confiding.
A proven intervention for self-management of chronic illness is replicated in multiple senior centers, but demonstrates varying levels of effectiveness by site.
These are common scenarios in behavioral intervention research. Each raises the question of whether the treatment effects reported from evaluations of these interventions are attributable to the intervention itself, are consequences of other unmeasured factors, or reflect inadequate or inconsistent implementation of the intervention protocols. In the multisite exercise adherence study, unaccounted-for variations in dose, intensity, or the motivational styles of interventionists may be at play; in the study of parental outcomes, the formation of a strong therapeutic alliance, rather than the specific stress reduction techniques of the intervention, may account for the reported benefits; and in the multisite self-management study, poor adherence to the delivery of the intervention at some sites may explain the variation in outcomes and effectiveness. Such inconsistencies pose threats to both internal and external validity.
These examples showcase the critical need to attend to what is known as “fidelity.” Fidelity, also referred to as “implementation fidelity,” “fidelity of implementation,” “intervention or treatment fidelity,” or “treatment integrity,” is a multidimensional construct that, at its most basic or fundamental level, refers to whether an intervention is implemented as designed or intended (Bond, Evans, Salyers, Williams, & Kim, 2000; Gearing et al., 2011; Mowbray, Holter, Teague, & Bybee, 2003; Perepletchikova & Kazdin, 2005).
The consideration of fidelity is critical in every phase of advancing an intervention (developing the intervention, evaluating its efficacy and effectiveness, and implementing it in a practice setting). Without an understanding of the level of fidelity achieved, it is impossible to determine whether an outcome from an evaluative study was due to the intervention itself or to other, competing factors, especially if the intervention was not delivered as intended. Similarly, without attention to fidelity, it would be impossible to evaluate whether an intervention could be replicated effectively in the future (Gearing et al., 2011).
Although essential to behavioral intervention research, fidelity is an often overlooked study design element (Hardeman et al., 2008). Most intervention reports fail to include an adequate description of a fidelity plan, how fidelity was measured, or the level of protocol adherence achieved through fidelity monitoring (Dane & Schneider, 1998; Gearing et al., 2011). Moreover, reviews of published interventions have found that fidelity is rarely reported. For example, only 3.5% of psychosocial interventions published between 2000 and 2004 sufficiently addressed fidelity (Perepletchikova, Treat, & Kazdin, 2007). Similarly, a review of 63 social work intervention studies revealed that the majority lacked the information about intervention delivery (e.g., mention of training, treatment manuals, and supervision) needed to adequately assess study outcomes (Naleppa & Cagle, 2010). Likewise, in a review of high-impact journals publishing education intervention research between 2005 and 2009, Swanson and colleagues (2011) found considerable inconsistencies in fidelity reporting; even among articles that did provide fidelity information, fewer than 10% included data about the quality of implementation. Thus, across many disciplines, fidelity tends to be underemphasized or hidden, and the outcomes of fidelity monitoring are rarely reported in behavioral intervention research.
The purpose of this chapter is to provide a comprehensive overview of the meaning and importance of fidelity and to describe specific strategies for addressing fidelity. We begin by briefly reviewing the historical use of the term to highlight the evolution of this construct and its varied definitions over time. Next, we examine the multiple roles and purposes it serves at different junctures along the intervention pipeline. Finally, we consider strategies for addressing fidelity and discuss the key challenges in advancing fidelity plans in the design and testing of behavioral interventions.