Behavioral Intervention Research: Designing, Evaluating, and Implementing


Now that we have established the importance of standardization, we will review the aspects of an intervention protocol that need to be standardized—simply put, all of the elements of a protocol, from the study design through treatment delivery and assessment (Table 6.1). The study design must be established early, as it sets the stage for the other elements of the protocol. Unless adaptive or emerging trial design approaches are being used (see Chapter 2), the design of a study is rarely changed over the course of a trial. The participant inclusion/exclusion criteria must also be firmly established and well defined. For example, in the PRISM trial (Czaja et al., 2013), our target population was older adults “at risk for social isolation.” We operationalized “at risk for social isolation” as living alone, not working or volunteering more than 5 hours per week, and not spending more than 10 hours per week at a senior center or formal organization. Sometimes, early in a trial, the entry criteria may need to be adjusted on the basis of recruitment data. However, this should be done infrequently, be well justified, and be clearly specified. For example, early in the PRISM trial we learned that one inclusion criterion, “never having used a computer,” was too stringent, as some individuals reported that they had been exposed to a computer in their doctor’s office or through a child or grandchild. Thus, this criterion was adjusted to “not having a computer at home, not having an e-mail address, and limited experience with a computer.” Both the original and the adjusted criteria were documented and dated in the MOP, which was referred to throughout the reporting of results.

Consent procedures must also be standardized. As discussed in Chapter 13, the consent form must be consistent with the study protocol, as must the process for obtaining consent. In some cases there may be separate consent forms for subgroups of participants, such as parents and children, but within each subgroup the consent form and the consenting process must be the same.

Participant screening protocols should also be standardized, as should the type of data collected during screening. Typically, a screening script and a screening data form need to be developed. Screening data should include recruitment information (how did the individual hear about the study?), eligibility status, reasons for noneligibility (if appropriate), eligibility questions, and basic demographic information. Investigators also need to develop a system for tracking the characteristics of participants who drop out of the study and the reasons for dropping out (Chapter 10).

TABLE 6.1 Elements of an Intervention That Need to Be Standardized

Elements to Be Standardized

Inclusion/exclusion criteria

Consent protocols

Participant screening protocol
  • Scripted
  • Information collected

Data collection protocol
  • Recruitment information—source—“How did the participant hear about the study?”
  • Eligibility/noneligibility
  • Categories of reasons for noneligibility
  • Dropouts and reasons for dropouts
  • Assessment protocol
  • Measures
  • Order of, and protocol for, administration
  • Who does the assessment
  • Blinding protocol
  • Data coding
  • Data storage and transfer

Treatment implementation
  • Content
  • Dosage
  • Planned participant contacts
  • Order/sequencing
  • Nature of compensation
  • Training of participants
  • Protocol and documentation of unplanned contacts

Training of interventionists

Data-coding strategies

Protocols for reporting of adverse events/alerts and for event resolution
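To make the screening data concrete, here is a minimal sketch in Python of a standardized screening record and a tally of noneligibility reasons. The field names and reason categories are hypothetical illustrations, not part of the PRISM protocol; an actual study would define its categories in the MOP.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical standardized reason codes for noneligibility;
# a real study would define these categories in its MOP.
NONELIGIBILITY_REASONS = {
    "AGE": "Outside target age range",
    "TECH": "Too much prior computer experience",
    "SOCIAL": "Not at risk for social isolation",
}

@dataclass
class ScreeningRecord:
    participant_id: str
    recruitment_source: str                       # "How did you hear about the study?"
    eligible: bool
    noneligibility_reason: Optional[str] = None   # key into NONELIGIBILITY_REASONS
    demographics: dict = field(default_factory=dict)

def summarize_noneligibility(records):
    """Count screened-out individuals by standardized reason category."""
    counts = {}
    for r in records:
        if not r.eligible and r.noneligibility_reason:
            counts[r.noneligibility_reason] = counts.get(r.noneligibility_reason, 0) + 1
    return counts
```

Because every screener records the same fields with the same category codes, recruitment-source and noneligibility summaries can be produced consistently across sites and over the life of the trial.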

There are numerous aspects of a data collection protocol that need to be standardized, including the measures and assessment instruments, the protocol for administering measures/questionnaires, the timing of assessments, the team members responsible for the assessments, and blinding protocols. There should also be specified procedures for data coding, data transfer, and data storage, and for reporting and resolving adverse events and serious adverse events (Chapter 13). Table 6.2 presents a sample of the assessment battery used in the PRISM trial according to the order and format of administration of the measures. For this trial, measures were collected over the telephone at the follow-up assessments, as it would have been difficult for assessors to remain blinded to treatment: the study involved technology that was placed in the homes of participants.

TABLE 6.2 A Sample of PRISM Assessment Battery: Order and Format of Administration

Baseline—Mail
  Demographic information (Czaja et al., 2006a)
  Technology, computer, Internet experience questionnaire (Czaja et al., 2006b)
  Computer attitudes (Jay & Willis, 1982)
  Life Space Questionnaire (Stalvey, Owsley, Sloane, & Ball, 1999)
  Formal care and services utilization (Wisniewski et al., 2003)
  Technology Acceptance Questionnaire
  Computer proficiency (Boot et al., 2015)
  Ten-Item Personality Inventory (TIPI; Gosling, Rentfrow, & Swann, 2003)

6 Months—Mail
  Demographic information
  Technology, computer, Internet experience questionnaire
  Computer attitudes
  Life Space Questionnaire
  Formal care and services utilization
  Technology Acceptance Questionnaire
  Computer proficiency
  PRISM System/Control Group Evaluation

Baseline—In Person
  Mini-Mental State Exam (Folstein, Folstein, & McHugh, 1975)
  Fuld Object-Memory Evaluation (Fuld, 1978)
  Snellen Test of Visual Acuity
  WRAT_T (Jastak & Wilkinson, 1984)
  Animal Fluency (Rosen, 1980)
  STOFHLA (Baker, Williams, Parker, Gazmararian, & Nurss, 1999)
  Reaction Time Task
  Stroop Color Name (McCabe, Robertson, & Smith, 2005)
  Social Network Size (Berkman & Syme, 1979; Lubben, 1988)
  Shipley vocabulary (Shipley, 1986)
  Trails A and B (Reitan, 1958)
  Social Support (Cohen, Mermelstein, Kamarack, & Hoberman, 1985)
  Loneliness Scale (Russell, 1996)
  SF-36 (Ware & Sherbourne, 1992)
  Digit Symbol Substitution (Wechsler, 1981)
  Letter Sets (Ekstrom, French, Harman, & Dermen, 1976, pp. I-1, 80, 81, 84)
  Perception of Memory Functioning (Gilewski, Zelinski, & Schaie, 1990)
  Perceived Vulnerability Scale (Myall et al., 2009)
  CESD (Irwin, Artin, & Oxman, 1999; Radloff, 1977)
  Social Isolation (Hawthorne, 2006)
  Life Engagement Test (Scheier et al., 2006)
  Quality of Life (Logsdon, Gibbons, McCurry, & Teri, 2002)

6 Months—Phone
  Perception of Memory Functioning
  Social Network Size
  Social Support
  Loneliness Scale
  Perceived Vulnerability Scale
  Social Isolation
  Life Engagement Test
  Quality of Life

PRISM, Personalized Reminder Information and Social Management; STOFHLA, Short Test of Functional Health Literacy in Adults; SF-36, Short Form Health Survey; CESD, Center for Epidemiologic Studies Depression.

Finally, it is imperative to detail all of the elements of the treatment conditions, including the control condition (if one is included in the trial). This includes the content; the format of delivery (e.g., home vs. telephone); dosage and planned participant contacts; the ordering/sequencing of the content/components of the intervention; and protocols for participant compensation. It is also important to have plans for “unplanned contacts” and “what if” scenarios. Of course, we realize that when dealing with human research participants it is difficult to predict unplanned events. We recommend developing a session-by-session plan if an intervention involves “sessions” (Chapter 3) and a protocol for tracking all participant contacts (planned and unplanned). The latter should include the duration of each contact, which will greatly facilitate dosage-outcome analyses. In addition, protocols for classifying, recording, reporting, and resolving adverse events and serious adverse events should be in place (see Chapter 13).
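The recommended contact log can be sketched as follows. This is an illustrative Python sketch, not part of any published PRISM protocol; the record fields and function names are assumptions, but they show how logging duration for every contact (planned and unplanned) makes dosage-outcome analyses straightforward.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Contact:
    """One participant contact, planned or unplanned (hypothetical fields)."""
    participant_id: str
    when: date
    minutes: int      # duration of the contact
    planned: bool     # False for unplanned contacts
    note: str = ""    # brief documentation per protocol

def total_dosage_minutes(log, participant_id):
    """Sum planned contact minutes for one participant (a proxy for delivered dose)."""
    return sum(c.minutes for c in log
               if c.participant_id == participant_id and c.planned)

def unplanned_contacts(log, participant_id):
    """Return the unplanned contacts, which should also be documented per protocol."""
    return [c for c in log
            if c.participant_id == participant_id and not c.planned]
```

With such a log, delivered dose per participant can be related directly to outcomes, and the frequency and nature of unplanned contacts can be reported alongside the planned protocol.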
