Format of the Assessment

Cognitive measures are available in paper-and-pencil or computerized formats. Although computerized assessments would seem to ensure greater fidelity (see Chapter 12) and validity, the evidence is mixed. For populations with significant impairments that may lead to problems with cooperation or effort, several studies have shown that computerized assessments generate data that are less complete, and possibly less reliable, than data from standard administration procedures (Iverson, Brooks, Ashton, Johnson, & Gualtieri, 2009; Keefe, Bilder, et al., 2006; Silver et al., 2006), and computerized assessment can serve to mask invalid performance. Some recently developed assessment strategies can detect invalid performance (Harvey, Siu, et al., 2013), but the message here is clear: Testers need to be as active, observant, and involved in the administration of computerized assessments as they are in the administration of paper-and-pencil assessments.

Frequency of Assessments and Practice Effects

In a cognitive or functional enhancement intervention study, an estimate of treatment-related cognitive change requires assessment before and after treatment. As we have noted before, there are several situations in which repeated assessments pose challenges. One is the retest improvement, or "practice," effect, which can arise from exposure to testing, familiarity with the materials, and increased comfort levels. There are several solutions to this problem (Goldberg, Keefe, Goldman, Robinson, & Harvey, 2010). One is the use of alternate forms, but alternate forms can be remarkably poorly correlated with each other in certain populations, which can substantially weaken the reliability of change scores. Another is the use of a parallel-group research design, which allows changes in performance over time to be compared across active and inactive treatments. As long as participants do not perform at the ceiling of a measure, such that improved performance cannot be detected, the difference between the active and inactive conditions isolates the effect of the treatment from the effects of repeated testing alone, because both groups undergo the same retesting schedule.
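To make the logic of that subtraction concrete, here is a minimal simulation sketch in Python. The effect sizes, sample size, and variable names are all illustrative assumptions, not values from the chapter; the point is only that a gain shared by both arms (practice) cancels out of the between-arm contrast.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50  # hypothetical participants per arm

# Illustrative assumptions: both arms gain ~3 points from practice alone;
# the active arm gains a further ~5 points from the treatment itself.
practice_gain, treatment_gain = 3.0, 5.0

baseline_active = rng.normal(50, 10, n)
baseline_control = rng.normal(50, 10, n)
followup_active = baseline_active + practice_gain + treatment_gain + rng.normal(0, 5, n)
followup_control = baseline_control + practice_gain + rng.normal(0, 5, n)

# Change scores include practice effects in BOTH arms...
change_active = followup_active - baseline_active
change_control = followup_control - baseline_control

# ...so the between-arm difference in change cancels practice and
# estimates the treatment effect (plus sampling error).
estimate = change_active.mean() - change_control.mean()
print(f"Estimated treatment effect: {estimate:.1f} (true value: {treatment_gain})")
```

Note that the subtraction assumes practice effects are comparable across arms, which is a key reason for randomizing participants and keeping assessment schedules identical in the two conditions.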

Practice effects are challenging because, for most measures, normative studies have not examined practice effects beyond two or three reassessments in the population of interest, and even fewer have done so in healthy individuals for normative comparison. While it is generally believed that practice effects habituate after a few assessments, leading to stable performance over time, other data suggest small but cumulative gains across numerous assessment sessions. However, in the absence of ceiling effects, practice effects are preferable to poorly correlated alternate forms, which can prevent the identification of a treatment effect altogether.
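Why poorly correlated forms are so costly can be seen in the standard formula for the variability of a difference, Var(X2 − X1) = Var(X1) + Var(X2) − 2 Cov(X1, X2): with equal standard deviations σ at both occasions, SD(change) = σ√(2(1 − r)), where r is the correlation between the two administrations. A small illustrative calculation follows (the σ value is arbitrary; this is an assumption-laden sketch, not an analysis from the chapter):

```python
import math

sigma = 10.0  # hypothetical standard deviation of the test at each occasion

# SD of a change score as a function of the correlation r between the
# baseline and follow-up administrations (e.g., two alternate forms):
#   SD(change) = sigma * sqrt(2 * (1 - r))
for r in (0.9, 0.7, 0.5, 0.3):
    sd_change = sigma * math.sqrt(2 * (1 - r))
    print(f"r = {r:.1f} -> SD of change = {sd_change:.1f}")
```

With these numbers, dropping from r = 0.9 (a well-matched retest) to r = 0.3 (poorly correlated alternate forms) raises the noise in individual change scores from about 4.5 to about 11.8 points, more than doubling the variability against which any treatment effect must be detected.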
