Behavioral Intervention Research: Designing, Evaluating, and Implementing
TYPES OF CONTROL GROUPS FOR RCTs
Considerable heterogeneity exists in the forms of control conditions for RCTs of behavioral intervention research. Although there have been several attempts to address control group selection in the literature (Freedland et al., 2011; Mohr et al., 2009; Whitehead, 2004), there is currently little agreement or consistency among investigators about how to best design or select controls for behavioral intervention research. Different types of control conditions may have significantly different effects on the outcomes of RCTs (Cuijpers, van Straten, Warmerdam, & Andersson, 2008). Thus, ultimately the choice of controls may have a major impact on the evidence that underlies evidence-based research and practice (Mohr et al., 2009).
Control conditions vary in the magnitude of the impact that exposure to the condition is designed or expected to have on the trial’s outcome variable(s) (Freedland et al., 2011). The no-treatment control, in which no alternative treatment is provided, is expected to have the least impact, as discussed previously, and the active alternative treatment control is considered the most impactful. In between are waitlist, attention/placebo comparison, and treatment component control groups.
Although there are many available choices for a control condition, none is perfect or suitable for all occasions. So, faced with many possibilities, which control(s) should the researcher choose? The choice depends on the specific research questions being asked, the existing state of knowledge about the intervention under study (Lindquist et al., 2007; West & Spring, 2007), and logistic issues/constraints (e.g., budgetary issues). Table 8.2 summarizes the different types of control/comparison conditions for RCTs of behavioral interventions and their relative advantages and disadvantages.
In the no-treatment control, outcomes for people randomly assigned to receive the treatment are compared with those of people who receive no treatment at all. In clinical research, this group is often referred to as “treatment as usual.” The main question is: “Does the treatment produce any benefit at all, over and beyond change due to passage of time or participating in a study?” One of the principal drawbacks of this type of control is that people randomized to no treatment may find their own treatment outside the bounds of the study. For example, in trials of cognitive interventions to improve memory performance in the elderly, researchers need to be concerned that participants who worry about their memory or who have been diagnosed with Alzheimer’s disease or a related disorder may be less willing to accept an assignment to the no-treatment condition and to make a commitment not to seek other treatment during the intervention trial (Willis, 2001). They might decide to seek training on their own by buying a book or manual on cognitive improvement, by using an Internet-based memory training program, or by signing up for a training class outside of the study. They also may be more likely to drop out of the study as new and more promising treatments become available, thus compromising the investigator’s ability to conduct long-term assessments (Willis, 2001). They also may drop out if they feel that participation is too burdensome (e.g., an extensive assessment) relative to its benefits. In such instances, it is important to actively monitor retention and what participants are doing with regard to engaging in other training-related activities outside the main intervention study.
To overcome some of the problems of the no-treatment control, many researchers employ some form of a wait-list control design in which the treatment is delayed rather than withheld, and the waiting period is equivalent to or longer than the length of the treatment. People randomized to the treatment are compared to people on a wait-list to receive the treatment. The advantage is that everyone in the study receives the treatment sooner or later. This may help reduce the likelihood that participants randomized initially to a no-treatment condition will be disappointed or resentful and seek out treatment on their own or drop out of the trial. However, depending on the length of the waiting period, this still may present a problem, as some participants may not be content to remain on the wait-list, even for a relatively brief period of time, and may seek “off-study” treatments of their own. On the other hand, those who are content to remain on the list, especially for prolonged periods of time, may be atypical in some way (e.g., unusually cooperative or agreeable). Another potential problem in the use of wait-list controls is that expectations for improvement may differ between the treatment and the control groups (Whitehead, 2004). People on the wait-list may not expect that they will improve, even when they finally receive the treatment. Alternatively, people on the wait-list may improve spontaneously on their own and then receive the treatment condition even though they no longer meet the initial study inclusion criteria. Finally, once the wait-list participants receive the treatment, there no longer exists a long-term control, limiting the possibility of testing the long-term effects of an intervention. One also needs to be careful about “cross-condition” talk; for example, two friends randomized to different treatment conditions may discuss their assignments with each other.
A common design alternative to no-treatment controls is the attention/placebo control. Here, a new treatment is compared to a control intervention that delivers the same amount of contact and attention from the intervention agent, but none of the key active ingredients by which the new treatment is expected to cause change in the outcomes under study. This control tests whether the new treatment produces benefits beyond the effects owing to nonspecific influences such as experimenter attention, social support, or positive expectations. The optimal attention/placebo control should encompass none of the “active ingredients” of the treatment under evaluation (Gross, 2005). For example, attention controls may receive an intervention such as an educational seminar on health care that is not thought to include any of the critical intervention elements that may change participant behavior.
Masking (or blinding) is used in clinical trials to conceal the group (e.g., the active placebo) to which study participants are assigned from those who are “masked” (Schulz, Chalmers, & Altman, 2002; Stephenson & Imrie, 1998; Viera & Bangdiwala, 2007). In “double-blind” trials, both study participants and the experimenter or health care provider are supposed to be unaware of the groups to which participants were allocated. In practice, this is often difficult to achieve, and it is not always clear which groups were actually masked. One of the main disadvantages of the placebo control is that participants who become unblinded and learn they are receiving placebo tend to do more poorly than participants who do not know they are receiving placebo. Particularly in cases where neither the study participant nor the experimenter can be blinded, it is best that both the participants and research staff hold equal expectations about the merits of the intervention and control conditions (West & Spring, 2007). However, maintaining equivalent expectancies across groups, referred to as “equipoise,” becomes more difficult when the treatment is lengthy, which is often the case when the intervention deals with serious, real-life issues (Willis, 2001). The ethical principle of clinical equipoise means that there must exist genuine uncertainty over whether the treatment will be superior to some alternative. If there is certainty about the superiority of the treatment relative to a placebo control, then the principle of equipoise is violated. These problematic issues have led investigators to question whether placebo controls should routinely be used in clinical research (Avins, Cherkin, Sherman, Goldberg, & Pressman, 2012). Other issues include the cost of including an attention control condition and ensuring that the placebo condition does not contain active ingredients, which is often difficult to do in behavioral intervention trials. In addition, it is important to monitor treatment fidelity in the attention control condition to ensure that the intervention team is adhering to the protocol for that condition.
Interventionists may not be as committed to this condition as they are to the treatment condition, or may not understand the importance and purpose of including it in a trial.
A fourth type of control is known as the relative efficacy/comparative effectiveness design. This control condition addresses the question of whether a newly developed treatment works better than an existing best practice or standard of care. The treatments are thought to represent different conceptual approaches to the problem and have different hypothesized mechanisms of impact on outcomes. Use of this type of control requires large numbers of participants in each treatment group because all of the interventions being compared show promise or are known to work, so the expected differences between them are relatively small. The main questions in comparative effectiveness research are (a) which intervention works better and (b) at what relative costs (West & Spring, 2007). Although this might be thought to be a direct and useful way to compare interventions, several problems in using this design have been noted (Willis, 2001). First, the interventions being compared often vary in so many ways (e.g., number of sessions, mode of delivery, degree of social contact) that few variables can be held constant across treatment conditions. Differential expectations among the individuals who administer the interventions can also pose a major challenge to the internal validity of comparative effectiveness designs. Ideally, the interventions should be delivered at multiple sites by staff who have similar levels of training and expertise and hold similar expectancies about intervention effectiveness. Using a shared measurement framework across interventions also facilitates cross-study comparisons and helps reduce the problems associated with using different intervention protocols (Belle et al., 2003; Czaja, Schulz, Lee, Belle, & REACH Investigators, 2003).
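The sample-size implication noted above follows directly from standard power analysis: the number of participants needed per arm grows roughly with the inverse square of the expected effect size, so detecting the small difference between two active treatments requires far more participants than detecting the large difference between a treatment and no treatment. The sketch below illustrates this with the common normal-approximation formula for a two-sample, two-sided comparison of means; the effect sizes and design parameters are illustrative assumptions, not values from this chapter.

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate participants needed per arm for a two-sample,
    two-sided comparison of means (normal approximation):
        n ~= 2 * ((z_{1-alpha/2} + z_{power}) / d)^2
    where d is the standardized effect size (Cohen's d).
    """
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # ~1.96 for alpha = 0.05, two-sided
    z_beta = z(power)            # ~0.84 for 80% power
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A large effect (d = 0.8), as might be expected against a no-treatment
# control, versus a small effect (d = 0.2), as might separate two
# active, promising treatments in a comparative effectiveness trial.
print(n_per_group(0.8))  # 25 per arm
print(n_per_group(0.2))  # 393 per arm
```

Because every arm in a relative efficacy design is expected to produce benefit, the between-arm effect size d is small, and the required sample per arm scales approximately as 1/d², which is why such trials demand large enrollments.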