Behavioral Intervention Research: Designing, Evaluating, and Implementing
CHALLENGES IN ENHANCING, MONITORING, AND MEASURING FIDELITY
We have identified six challenges concerning fidelity that need further attention in order to advance this aspect of behavioral intervention research: the lack of guidelines, adequate funding, standardized measures, reporting requirements, an understanding of fidelity's analytic role, and an understanding of its role in comparative effectiveness studies.
Foremost is the lack of firm guidance as to how best to enhance, monitor, and measure the core components of fidelity, especially given the wide variations in fidelity definitions, frameworks, and models employed in intervention studies. It is unclear whether certain enhancement strategies are more effective than others and which enhancements work best for which types of interventions, populations, and settings. Also unclear are best practices concerning supervisory approaches, the impact of course corrections on trial outcomes, which techniques for training interventionists are most effective, and the optimal strategies for delivering effective interventions and enhancing their receipt and enactment. As enhancement, monitoring, and measurement approaches remain idiosyncratic and tailored to specific interventions, comparing fidelity strategies across intervention studies to derive best practices has not been feasible.
Gearing and colleagues (2011) provide the most comprehensive guidelines and practices to date, on the basis of their systematic review of 24 meta-analyses and review articles reporting on fidelity. Nevertheless, the lack of written documentation concerning best fidelity practices leaves investigators dependent upon a limited set of publications on this topic and their own experiences. Fortunately, new studies are emerging to examine this issue. For example, Stirman and colleagues (2013) are investigating whether clinicians receiving post-workshop support subsequently deliver cognitive processing therapy with better consistency (fidelity) and whether this in turn improves patient outcomes. Their study is in progress, so results are not yet available; however, its findings will be useful in helping to establish coaching and supervisory approaches when testing behavioral interventions.
A second challenge is resourcing: developing and implementing a fidelity plan requires staffing, time, and sufficient funding (Spillane et al., 2007). In the early phases of intervention development, in which resources are focused on the end goals of feasibility and safety, fidelity considerations can easily be viewed as less important and relegated to a back seat, if addressed at all. In the evaluation phases (efficacy and effectiveness), some minimal fidelity plan is required, but again the extent to which fidelity is enhanced, monitored, and measured can vary widely across trials depending upon available resources and their allocation.
The importance of attending to fidelity must be matched by resources that can be allocated to this endeavor.
The third challenge is measurement. Currently, there are no standardized assessments or measures for the different aspects of fidelity. Investigators typically create their own monitoring forms and measurement approaches. Given the significant limitations in time and funding, most investigator-developed fidelity measures are not subjected to a validation process. Additionally, fidelity measures tend to be study specific; hence, comparison of fidelity outcomes across intervention studies on similar dimensions is not always possible. In order to derive an understanding of acceptable levels of adherence and their impact on outcomes for different interventions, objective, well-validated measures are needed, along with measurement strategies that integrate both objective and subjective appraisals of fidelity. A related point is that the causal pathways between fidelity and trial outcomes remain unclear. Different levels of adherence among interventionists, or different levels of adherence to intervention strategies by study participants, may affect trial results; yet these relationships have not been systematically considered (Hardeman et al., 2008).
The fourth challenge concerns the need for better reporting guidelines and requirements for grant applications and publications. Grant reviewers are not necessarily instructed to evaluate, nor may they appreciate the need to evaluate, the quality and effectiveness of the fidelity plan for a proposed intervention study. In a grant application, a fidelity plan should be presented near the end of the design section, after the description of the intervention. Page limitations (12 pages for most NIH applications), however, can constrain the level of detail provided, making it challenging to include a full description of a fidelity approach. Providing a table describing the enhancements, monitoring approaches, and measures for each fidelity component (study design, delivery, receipt, and enactment) may be one strategy for overcoming space limitations.
Similarly, journal reviewers of a manuscript reporting treatment outcomes may not critically appraise whether an acceptable level of adherence was obtained (Naleppa & Cagle, 2010). Most checklists for reporting trials, such as the 2010 Consolidated Standards of Reporting Trials (CONSORT) (see www.consort-statement.org/), do not explicitly include the need to report fidelity methods, measures, or outcomes.
Additionally, there are limited opportunities to include fidelity plans and outcomes in a comprehensive report. In main trial outcome papers, fidelity considerations are typically consigned to a few sentences within the description of the intervention, confirming that treatment integrity was monitored and achieved. An exemplar is the publication by Barsky and Ahern (2004), which reports the results of a randomized trial designed to test a cognitive behavior therapy program for hypochondriasis. The authors appropriately address fidelity in a brief description as follows:
Treatment fidelity was assessed by auditing audiotapes of randomly selected therapy sessions from all 3 therapists; adherence to the CBT manual was excellent. Receipt of the consultation letter was acknowledged by 96.8% of the primary care physicians. (pp. 1465-1466)
As most journals impose word limits, such minimalist statements are typical; authors have appropriately begun to include them, and they are now expected.
Although fidelity considerations are not yet part of the mainstream clinical trial literature, there is increasing awareness of their importance. Consequently, more attention will be afforded to this aspect of intervention development, and we suspect and hope that more careful disclosure of fidelity plans and outcomes will be required. At a minimum, a trial outcome paper should present a brief overview of the fidelity plan and adherence results after the description of the intervention and before the discussion section (Davidson et al., 2003). Researchers are also now reporting fidelity outcomes in publications separate from the main trial outcome paper, allowing a more careful delineation of the fidelity plan and results. Examples of this approach include the publication by Long and colleagues (2010) on therapist fidelity with a multicomponent cognitive behavioral intervention for posttraumatic stress disorder, and the publication by Hardeman and colleagues (2008) on adherence to behavior change techniques used in a physical activity intervention.
Yet another issue is the role of fidelity in analyses. It is unclear whether an indicator of adherence level should serve as an outcome, a covariate, a moderator, or a mediator. It may be that for some interventions, a certain level of exposure or enactment is required for a benefit to be achieved. In Lichstein and colleagues’ (1994) example of a hypertension medication intervention, benefits may occur only if a patient strictly conforms to the medication dosing; for the GBGB, enactment of only one of three behavioral activation prescriptions may be needed to realize a benefit (Gitlin et al., 2013). A fidelity plan that included appropriate design and measurement features could help to address these key questions.
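To make the analytic distinction concrete, the following is a minimal sketch, using simulated (entirely hypothetical) trial data, of how adherence might be examined as a moderator of treatment effect by adding a treatment-by-adherence interaction term to an ordinary least squares model. The variable names and effect sizes are illustrative assumptions, not drawn from any study discussed above.

```python
import numpy as np

# Hypothetical simulated trial in which adherence moderates the treatment
# effect: treated participants benefit only to the extent they adhere.
rng = np.random.default_rng(0)
n = 2000
treatment = rng.integers(0, 2, n).astype(float)   # randomized arm (0 or 1)
adherence = rng.uniform(0.0, 1.0, n)              # measured adherence level
outcome = 2.0 * treatment * adherence + rng.normal(0.0, 1.0, n)

# Design matrix for: outcome ~ treatment + adherence + treatment:adherence
X = np.column_stack([np.ones(n), treatment, adherence, treatment * adherence])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
intercept, b_treat, b_adh, b_interact = beta

# A sizable interaction coefficient (near 2 here, by construction) with a
# near-zero main treatment effect suggests adherence acts as a moderator
# rather than the treatment conferring benefit regardless of adherence.
print(f"treatment: {b_treat:.2f}, adherence: {b_adh:.2f}, "
      f"interaction: {b_interact:.2f}")
```

Treating adherence instead as a covariate would mean dropping the interaction column; treating it as an outcome would mean modeling adherence itself as the dependent variable. Which specification is appropriate depends on the hypothesized causal pathway, which is precisely the open question raised above.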
Finally, a sixth challenge is the role of fidelity in comparative effectiveness research. As the focus of this type of trial is the comparison of two distinct interventions, it is critical both to assure fidelity to each treatment arm and to assure that each arm remains differentiated from the other.