Task-specific Self-report Inventories

Some self-report inventories measure strategic processing as an aptitude or a trait, by asking respondents “to generalize their actions across situations rather than referencing singular and specific learning events” (Winne & Perry, 2000, p. 542). In a review, Veenman (2005) showed that correlations between strategy data from such self-report inventories and on-line methodologies, such as concurrent thinking aloud, are generally low, indicating poor convergent validity due to several issues with offline self-report inventories, such as their decontextualized and retrospective nature. However, as posited by Schellings (2011), low correlations between strategy data from self-report inventories and on-line methodologies may not necessarily indicate that data gathered by means of self-report inventories are invalid, but may instead reflect the lack of specificity and proximity of many self-report inventories to actual task contexts. Accordingly, she showed that self-reports of strategies on an inventory tailored to a particular reading task context yielded scores that were fairly well aligned with on-line strategy data.

Of note is that such task-specific strategy inventories ask learners to make judgments about their strategic activities in a specific task, rather than about what they generally or typically do during learning or comprehension. The logic of this approach is also consistent with Ericsson and Simon’s (1993) recommendation that retrospective verbal protocols be provided immediately after task performance, that is, while metacognitive states and strategic activities are still relatively accessible for retrieval in working memory.

In several studies conducted in the context of learning from a single expository text, Bråten and colleagues (Anmarkrud & Bråten, 2009; Bråten & Anmarkrud, 2013; Bråten & Samuelstuen, 2004, 2007; Samuelstuen & Bråten, 2005, 2007) have demonstrated that valid measurements can be obtained with self-report strategy inventories whose items are tailored to a specific task context and administered shortly after task performance. These authors have followed the four guidelines for constructing task-specific strategy inventories explicated by Bråten and Samuelstuen (2007). First, a specific task (e.g., reading a text for a particular purpose) must be administered, to which the items on the inventory refer. Second, the task must be accompanied by an instruction that directs learners to monitor their strategies during task performance and informs them that they will afterwards be asked some questions about how they proceeded. Third, to minimize the retention interval, the strategy inventory must be administered immediately after task completion. Finally, because they refer to recent episodes of strategic processing, the wordings of task-specific items must differ from more general statements. This means that general item stems such as “when I study” or “when I read” must be omitted from task-specific items. Moreover, to make it clear that the items refer back to the recently completed task, the verb must be in the past tense (e.g., “I tried to understand the content better by relating it to something I know”).

Validating Task-specific Self-report Inventories. Strategy scores on a task-specific self-report inventory based on these guidelines have been validated in several ways. For example, Bråten and Samuelstuen (2007) showed that strategies self-reported in this way corresponded quite closely with strategies traced in the study materials, with correlations between self-reported and traced surface-level strategies exceeding .75 and correlations between self-reported and traced deeper-level strategies exceeding .80. Samuelstuen and Bråten (2005, 2007) demonstrated that students’ strategy scale scores accounted for their performance on expository text comprehension tasks, with correlations between deeper-level strategies and performance exceeding .35 (see also Bråten & Anmarkrud, 2013).
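The convergent-validity checks described above amount to correlating self-reported strategy scale scores with trace-based counts of the same strategies. A minimal sketch of such a check in Python, using NumPy and entirely made-up scores for eight hypothetical students (the numbers are illustrative assumptions, not data from any of the cited studies):

```python
import numpy as np

# Hypothetical data: self-reported strategy scale scores and trace-based
# strategy counts for the same 8 students (illustrative numbers only).
self_report = np.array([2.1, 3.4, 4.0, 1.8, 3.1, 4.5, 2.6, 3.8])
traced = np.array([3.0, 5.0, 6.0, 2.0, 4.0, 7.0, 3.0, 6.0])

# Pearson correlation as an index of convergent validity between the
# offline self-report measure and the on-line trace measure.
r = np.corrcoef(self_report, traced)[0, 1]
print(round(r, 2))
```

With correlations in the .75–.80 range reported in the original validation work, a near-perfect value from these toy numbers only illustrates the computation, not the empirical result.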

The same type of task-specific self-report inventory was developed by Bråten and Strømsø (2011) to measure strategic processing when individuals read multiple texts on the same topic. Specifically, this measure was constructed to assess a surface strategy involving the accumulation of pieces of information from different texts and a deeper-level strategy involving cross-text elaboration. These two dimensions of multiple text comprehension strategies have been confirmed through factor analysis, and scores on these dimensions have been found to predict performance (Bråten, Anmarkrud, Brandmo, & Strømsø, 2014; Bråten & Strømsø, 2011), with the accumulation strategy negatively and the cross-text elaboration strategy positively related to intertextual comprehension performance. Further validation of scores on this inventory was provided by Hagen, Braasch, and Bråten (2014), who compared strategy scale scores with strategies as revealed by students’ spontaneous note-taking, and by List, Du, Wang, and Lee (2019), who compared strategy scale scores with multiple text model construction as revealed by students’ written responses and with integrative processing as revealed by their think-aloud utterances.

Although valid scores have been obtained on task-specific self-report inventories of strategic processing administered immediately after task performance, such measures may be subject to some of the same errors that seem to plague general, decontextualized strategy inventories. Because individuals still have to retrieve strategic episodes after some delay, fallible, biased, and reconstructive memory processes cannot be ruled out. In particular, the social desirability of response alternatives may bias people’s self-reports of strategies: people may report more strategic processing than they actually executed because they believe such processing will be approved of by others. It is also possible that people report much use of certain strategies because they believe those strategies are effective, not because they actually used them to any great extent. Yet another potential problem with task-specific self-report inventories administered after task performance is a tendency to report using the strategies described by the items even though other strategies were actually used.

Questioning During Task Performance. Given such potential threats to the validity of scores on task-specific inventories administered after task performance, the procedure used by Zimmerman and colleagues (Cleary & Zimmerman, 2001; Kitsantas & Zimmerman, 2002) may be a viable option. In this approach, participants have been asked a series of task-specific questions about their self-regulatory processes, including their strategies, not only before and after but also during their efforts to learn particular athletic skills. While some of these questions, for example regarding self-efficacy, were closed-ended, with participants responding on a scale from 0 to 100, others were open-ended, with responses coded by the researchers. For example, Kitsantas and Zimmerman (2002), who studied the practice of volleyball overhand serving among college women, stopped the participants during the practice episode and asked them about their strategy for successfully executing the next overhand serve after they had missed the target on the two preceding attempts. The self-reported strategies were then coded into the categories of specific techniques, visualization strategies, concentration strategies, both specific techniques and concentration strategies, and practice/no strategy.

This approach was found to be valid in the sense that it differentiated between participants at different levels of expertise with respect to the processes included in the three phases of the cyclical model of self-regulated learning (Zimmerman, 2000, 2013). In the performance phase, in particular, the experts displayed more use of specific technique strategies than did other participants and also monitored their use of specific techniques and outcomes to a greater extent. Further, self-regulatory processes measured in this way were highly correlated with participants’ serving performance after the practice (Kitsantas & Zimmerman, 2002). Similar results were obtained by Cleary and Zimmerman (2001), who used task-specific questioning to study self-regulatory processes during participants’ practice on a particular basketball skill.
