Behavioral Intervention Research: Designing, Evaluating, and Implementing
Describing Behavioral Interventions
Despite the proliferation of reporting guidelines, there is no consensus as to what should be included in the actual description of an intervention being evaluated. The elements of behavioral interventions that are reported vary widely, and reporting is inconsistent across and even within journals. Michie and colleagues (2009), in their review of over 1,000 behavior change intervention outcome studies, found that only 5% to 30% provided sufficient details of the actual interventions. Similarly, Grant and colleagues (2013), in a review of 239 trials, found that less than half provided sufficient details about the experimental (43%) and control group (34%) conditions.
Inadequately reporting the details of an intervention has numerous significant consequences. These include the inability to understand an intervention’s treatment components, the relationships among those components, and the specific delivery characteristics that may contribute most to the changes observed in outcome measures. Additionally, an inadequate description of an intervention can hinder its replication potential. Ioannidis (2005) found that, of 45 studies demonstrating intervention effectiveness, only 20 (44%) had findings that were subsequently replicated in further investigations. It is unclear whether such failures were due to poor replication of an intervention (a possible Type II error in the subsequent studies) or whether the intervention was not effective in the original trial (a Type I error). Difficulties in replication may contribute in part to the “17+” year lag between the inception of an intervention idea and its testing and implementation (Institute of Medicine [IOM], 2001).
Inconsistencies in reporting interventions have a range of causes. These may include the lack of scientific reporting guidelines specific to describing behavioral interventions; the absence of a uniform, agreed-upon language for categorizing and understanding interventions, their delivery characteristics, and their treatment elements; page and/or word limits imposed by scientific journals; a lack of investigator understanding of an intervention’s theory base and underlying mechanisms; and, in some cases, proprietary considerations in which commercialization of an intervention is the goal, such that providing intervention details would interfere with its marketing and purchase potential.

The lack of a uniform language to describe behavioral interventions is particularly disconcerting. For instance, terms such as “multimodal” and “multicomponent” are inconsistently defined and used in the literature. Also, within any one area, behavioral interventions are categorized and defined variably. Take, for example, nonpharmacological interventions for persons with dementia and their caregivers. There is no consensus as to what constitutes a psychoeducational intervention versus a counseling intervention. Cognitive stimulation interventions vary widely, and there is no agreement as to the key elements of this approach. Care management and care coordination interventions remain “black boxes,” such that the specific elements that constitute these interventions and distinguish them from others are unclear. Similarly, the suboptimal reporting of behavior change interventions for preventing and treating HIV has resulted in poor uptake and replication.
Abraham and colleagues (2014) identify four such inadequacies in this area: inadequate descriptions of interventions, variation in the content and reporting of active control groups, inadequate examination and reporting of underlying mechanisms by which the intervention has its effects and who benefits the most, and lack of replication limiting generalizability.
Despite the importance of the CONSORT statement, two critical aspects of intervention research are not clearly articulated. First, the CONSORT checklist does not include any mention of fidelity. One consequence of this may be that few publications on randomized trials include fidelity information (see Chapter 12 on fidelity concerning the consequences of not reporting fidelity).
Second, the CONSORT checklist does not provide sufficient guidance on the specific elements to include when describing an intervention. Schulz and colleagues (2010) suggest that “sufficient details [of each intervention] to allow replication including how and when they [interventions] were actually administered” (p. 2) be given.
To address the aforementioned shortcomings, many researchers and editors have developed extensions of the CONSORT checklist. Davidson and colleagues (2003) recommend that, at a minimum, details be provided about an intervention that address the “who, what, where, when, and how” aspects of delivery. The Workgroup for Intervention Development and Evaluation Research (WIDER) has advanced recommendations specific to behavior change interventions that have been adopted by several journals including Implementation Science (Michie et al., 2009) and Addiction (West, 2008). WIDER recommendations focus on four areas: “detailed description of interventions,” “clarification of assumed change process and design principles,” “access to intervention manuals/protocols,” and “detailed description of active control conditions” (Albrecht, Archibald, Arseneau, & Scott, 2013, p. 3). Mayo-Wilson and colleagues (2013) are currently developing standards specific to social and psychological intervention trials (CONSORT-SPI) that complement, extend, and update CONSORT guidelines.
Building on these previous efforts and on the basis of our collective experiences, we recommend that seven elements of interventions be described as listed in Table 24.4.
The first element is a description of the theory base or underlying conceptual framework that guided intervention development, which can explain the connections between treatment components and offer an understanding of why the intervention may have an impact on the desired outcomes. Next is a description of seven aspects of the intervention’s delivery: its dose, intensity, duration, target, mode of delivery, location, and format. If the intervention is complex, then describing each of its components is important. Providing a sense of the content of each session and the way in which the intervention unfolds is also helpful. There should also be a discussion of the interventionists, including their level of skill and professional backgrounds, and the training required for the intervention. Certification procedures, or how competence in intervention delivery is assessed, should also be stated. Finally, a fidelity plan and its associated measures should be described.
In addition to following reporting guidelines and adequately describing the intervention, there are other considerations in developing a main trial outcome paper.
TABLE 24.4 Basic Reporting Details of a Behavioral Intervention
Foremost is what to include. Reporting on primary endpoints is of course paramount, but other research questions may also be addressed depending upon the data and the story they impart. As suggested in Figure 24.1, these may include secondary outcomes, long-term outcomes, moderation analyses concerning which groups benefit most, mediation analyses examining why or how the intervention achieved its effects, long-term use of intervention strategies, satisfaction with and acceptability of the intervention, and so forth. Although not all of these elements would be appropriate to include in a single report, this suggests that, for any one trial, there are multiple potential outcome papers that can and should be generated.