Behavioral Intervention Research: Designing, Evaluating, and Implementing
TYPES OF CONTROL GROUPS FOR RCTs

Considerable heterogeneity exists in the forms of control conditions for RCTs of behavioral intervention research. Although there have been several attempts to address control group selection in the literature (Freedland et al., 2011; Mohr et al., 2009; Whitehead, 2004), there is currently little agreement or consistency among investigators about how to best design or select controls for behavioral intervention research. Different types of control conditions may have significantly different effects on the outcomes of RCTs (Cuijpers, van Straten, Warmerdam, & Andersson, 2008). Thus, ultimately the choice of controls may have a major impact on the evidence that underlies evidence-based research and practice (Mohr et al., 2009).

Control conditions vary in the magnitude of the impact that exposure to the condition is designed or expected to have on the trial’s outcome variable(s) (Freedland et al., 2011). The no-treatment control, in which no alternative treatment is provided, is expected to have the least impact, as discussed previously, and the active alternative treatment control is considered the most impactful. In between are waitlist, attention/placebo comparison, and treatment component control groups.

Although there are many available choices for a control condition, none is perfect or suitable for all occasions. So, faced with many possibilities, which control(s) should the researcher choose? The choice depends on the specific research questions being asked, the existing state of knowledge about the intervention under study (Lindquist et al., 2007; West & Spring, 2007), and logistic issues/constraints (e.g., budgetary issues). Table 8.2 summarizes the different types of control/comparison conditions for RCTs of behavioral interventions and their relative advantages and disadvantages.

TABLE 8.2 Types of Control/Comparison Conditions for RCTs of Behavioral Interventions

Control Groups

No-treatment
  Definition: Outcomes for people randomly assigned to receive the treatment are compared to those of people who receive no treatment at all.
  Pros: Ability to assess test-retest or practice effects.
  Cons: People randomized to no treatment may find their own treatment outside of the study; higher study dropout rates.

Wait-list
  Definition: People randomized to the treatment are compared to those on a wait-list to receive the treatment.
  Pros: Everyone in the study receives treatment sooner or later.
  Cons: People on the wait-list may still find treatment outside of the study; people who are content to wait may be atypical or may have different expectancies for improvement; once wait-list participants receive the treatment, there is no long-term control for follow-up.

Attention/placebo
  Definition: New treatment is compared to a control intervention that delivers the same amount of support and attention, but does not include key components considered critical for the treatment.
  Pros: Tests whether the new treatment produces benefits beyond the effects due to nonspecific influences such as experimenter attention or positive expectations.
  Cons: People assigned to attention/placebo control may seek out treatment similar to the active intervention; attention/placebo controls may have differential expectations for improvement.

Relative efficacy/comparative effectiveness
  Definition: Head-to-head comparison between two or more treatments, each of which is a contender to be the best practice or standard of care.
  Pros: Assists consumers, clinicians, and policy makers in making informed decisions about what will improve health care at the individual and population levels.
  Cons: Requires large sample sizes in each treatment group to detect an effect; interventions may vary in so many ways that there is no common basis for comparison; subject to limitations such as missing data, incomplete follow-up, unmeasured biases, competing interests, and selective reporting of results.

Parametric/dose finding [a]
  Definition: Random assignment of people to different forms of interventions varying on factors such as number, length, duration of sessions, and so forth.
  Pros: Can be done early in treatment development to determine the optimal dose or form of treatment; multiple levels of the variable under investigation can be examined.
  Cons: May be costly and time-consuming depending upon how many different factors one varies.

Additive/constructive comparison [a] (component control)
  Definition: Those in the experimental group receive added treatment components that are hypothesized to add efficacy.
  Pros: Provides strong control by holding the two experimental comparisons equivalent except for the "add-on"; fewer ethical concerns because all components are considered efficacious.
  Cons: May be difficult to identify how to sequentially add or combine new treatment components; adding one or a few treatment components at a time may be a lengthy and costly process; may have low statistical power because the treatment effect of the add-on component might be slight compared to the effect of the existing treatment.

Treatment dismantling [a] (component control)
  Definition: People randomized to receive the full efficacious intervention are compared to those randomized to receive a variant of that intervention minus one or more parts.
  Pros: Removing noneffective components may create a more cost-effective intervention; fewer ethical concerns because all components are considered efficacious.
  Cons: May be difficult to identify the active components of the treatment to drop; may be costly to include the full efficacious intervention model from the beginning.

Existing Practice Comparison Groups

Treatment-as-usual (TAU); usual care (UC); routine care (RC) [b]
  Definition: Control conditions are used to compare experimental interventions to existing treatments or clinical practices.
  Pros: Controls for many of the traditional threats to internal validity.
  Cons: Treatment provided by TAU, UC, or RC may vary considerably across patients and health care providers; outcomes may include variability from sources other than the treatment itself.

[a] Adapted from West and Spring (2007). [b] Adapted from Freedland et al. (2011).

In the no-treatment control, outcomes for people randomly assigned to receive the treatment are compared to those of people who receive no treatment at all. In clinical research, this group is often referred to as "treatment as usual." The main question is: "Does the treatment produce any benefit at all, over and beyond change due to the passage of time or participation in a study?" One of the principal drawbacks of this type of control is that people randomized to no treatment may find their own treatment outside the bounds of the study. For example, in trials of cognitive interventions to improve memory performance in the elderly, researchers need to be aware that participants who have concerns about their memory or who have been diagnosed with Alzheimer's disease or a related disorder may be less willing to accept an assignment to the no-treatment condition and to make a commitment not to seek other treatment during the intervention trial (Willis, 2001). They might decide to seek training on their own by buying a book or manual on cognitive improvement, by using an Internet-based memory training program, or by signing up for a training class outside of the study. They also may be more likely to drop out of the study as new and more promising treatments become available, thus compromising the investigator's ability to do long-term assessments (Willis, 2001). They also may drop out if they feel that participation is too burdensome (e.g., an extensive assessment) relative to the benefits of participation. In such instances, it is important to actively monitor retention and what participants are doing with regard to other training-related activities outside the main intervention study.

To overcome some of the problems of the no-treatment control, many researchers employ some form of a wait-list control design in which the treatment is delayed rather than withheld, and the waiting period is equivalent to or longer than the length of the treatment. People randomized to the treatment are compared to people on a wait-list to receive the treatment. The advantage is that everyone in the study receives the treatment sooner or later. This may help reduce the likelihood that participants randomized initially to a no-treatment condition will be disappointed or resentful and seek out treatment on their own or drop out of the trial. However, depending on the length of the waiting period, this still may present a problem: some participants may not be content to remain on the wait-list, even for a relatively brief period of time, and may seek "off-study" treatments of their own. On the other hand, those who are content to remain on the list, especially for prolonged periods of time, may be atypical in some way (e.g., unusually cooperative or agreeable). Another potential problem in the use of wait-list controls is that expectations for improvement may differ between the treatment and control groups (Whitehead, 2004). People on the wait-list may not expect that they will improve, even when they finally receive the treatment. Alternatively, people on the wait-list may improve spontaneously on their own and then receive the treatment condition even though they no longer meet the initial study inclusion criteria. Finally, once the wait-list participants receive the treatment, there is no longer a long-term control, limiting the possibility of testing the long-term effects of an intervention. One must also be careful because in some instances, for example, when two friends are randomized to different treatment conditions, there may be some "cross-condition" talk.

A common design alternative to no-treatment controls is the attention/placebo control. Here, a new treatment is compared to a control intervention that delivers the same amount of contact and attention from the intervention agent, but none of the key active ingredients by which the new treatment is expected to cause change in the outcomes under study. This control tests whether the new treatment produces benefits beyond the effects owing to nonspecific influences such as experimenter attention, social support, or positive expectations. The optimal attention/placebo control should encompass none of the "active ingredients" of the treatment under evaluation (Gross, 2005). For example, attention controls may receive an intervention such as an educational seminar on health care that is not thought to include any of the critical intervention elements that may change participant behavior.

Masking (or blinding) is used in clinical trials to keep the group to which study participants are assigned (e.g., the active placebo) from being known or easily discerned by those who are "masked" (Schulz, Chalmers, & Altman, 2002; Stephenson & Imrie, 1998; Viera & Bangdiwala, 2007). In "double-blind" trials, both study participants and the experimenter or health care provider are supposed to be unaware of the groups to which participants were allocated. In practice, this is often difficult to achieve, and it is not always clear which groups were actually masked. One of the main disadvantages of the placebo control is that participants who become unblinded and learn they are receiving a placebo do more poorly than participants who do not know they are receiving a placebo. Particularly in cases where neither the study participant nor the experimenter can be blinded, it is best that both the participants and the research staff hold equal expectations about the merits of the intervention and control conditions (West & Spring, 2007). However, maintaining equivalent expectancies across groups, referred to as "equipoise," becomes more difficult when the treatment is lengthy, which is often the case when the intervention is dealing with serious, real-life issues (Willis, 2001). The ethical principle of clinical equipoise means that there must be genuine uncertainty about whether the treatment will be superior to some alternative. If there is certainty about the superiority of the treatment relative to a placebo control, then the principle of equipoise is violated. These problematic issues have led investigators to question whether placebo controls should routinely be used in clinical research (Avins, Cherkin, Sherman, Goldberg, & Pressman, 2012). Other issues include the cost of including an attention control condition and the difficulty of ensuring that the placebo condition contains no active ingredients, which is often hard to do in behavioral intervention trials. In addition, it is important to monitor treatment fidelity in the attention control condition to ensure that the intervention team is adhering to the protocol for that condition; team members may not be as committed to this condition as they are to the treatment condition, or they may not understand the importance and purpose of including it in a trial.
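In practice, keeping upcoming assignments unpredictable to recruiting staff is often supported by permuted-block randomization. The sketch below, which is illustrative and not drawn from this text (the function name, seed, and block size are assumptions), shows one common way a balanced two-arm allocation sequence can be generated:

```python
import random

def permuted_block_sequence(n_participants: int, block_size: int = 4,
                            seed: int = 42) -> list:
    """Generate a two-arm allocation sequence using permuted blocks.

    Within each block, half the slots are 'A' (treatment) and half are
    'B' (control), shuffled so that staff cannot predict upcoming
    assignments beyond the block structure.
    """
    assert block_size % 2 == 0, "block size must be even for 1:1 allocation"
    rng = random.Random(seed)  # fixed seed keeps this sketch reproducible
    sequence = []
    while len(sequence) < n_participants:
        block = ["A"] * (block_size // 2) + ["B"] * (block_size // 2)
        rng.shuffle(block)  # permute assignments within the block
        sequence.extend(block)
    return sequence[:n_participants]

seq = permuted_block_sequence(8)
print(seq)  # balanced within each block of 4, order unpredictable
```

Because the groups are balanced within every block, interim analyses never face badly unequal arm sizes; the trade-off is that anyone who learns the block size can infer the final assignments in a block, which is one reason trial statisticians sometimes vary block sizes randomly.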

A fourth type of control is known as the relative efficacy/comparative effectiveness design. This control condition addresses the question of whether a newly developed treatment works better than an existing best practice or standard of care. The treatments are thought to represent different conceptual approaches to the problem and have different hypothesized mechanisms of impact on outcomes. Use of this type of control requires large numbers of participants in each treatment group because all of the interventions being compared show promise or are known to work, so the expected differences between them are relatively small. The main questions in comparative effectiveness research are (a) which intervention works better and (b) at what relative costs (West & Spring, 2007). Although this might be thought to be a direct and useful way to compare interventions, several problems in using this design have been noted (Willis, 2001). First, when comparing two or more interventions, they often vary in so many ways (e.g., number of sessions, mode of delivery, degree of social contact) that there are few variables that can be held constant across treatment conditions. Differential expectations among the individuals who administer the interventions can also pose a major challenge to the internal validity of comparative effectiveness designs. Ideally, the interventions should be delivered at multiple sites by staff who have similar levels of training and expertise and hold similar expectancies about intervention effectiveness. Using a shared measurement framework across interventions also facilitates cross-study comparisons and helps reduce the problems associated with using different intervention protocols (Belle et al., 2003; Czaja, Schulz, Lee, Belle, & REACH Investigators, 2003).
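The sample-size burden of comparative effectiveness designs can be made concrete with the standard normal-approximation formula for a two-arm comparison of means. The sketch below is illustrative and not from this text; the effect sizes are conventional benchmarks (Cohen's d), not figures reported by the authors:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05,
                power: float = 0.80) -> int:
    """Approximate per-group sample size for a two-arm trial comparing
    means: n = 2 * ((z_{1-alpha/2} + z_{1-beta}) / d) ** 2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # about 1.96 for alpha = .05
    z_beta = z.inv_cdf(power)           # about 0.84 for 80% power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A treatment vs. no-treatment contrast might plausibly expect a medium
# effect (d = 0.5); a head-to-head trial of two promising treatments
# typically expects a small one (d = 0.2).
print(n_per_group(0.5))  # 63 per group for a medium effect
print(n_per_group(0.2))  # 393 per group for a small effect
```

Shrinking the expected difference from medium to small multiplies the required sample more than sixfold, which is why head-to-head trials of two effective treatments need so many more participants than trials against a weak control.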

 