
Sharing the load: a strategy to improve self-regulated learning

The need for strategies to improve self-regulated learning

Self-regulated learning (SRL) entails the self-directive and proactive processes that learners can use to achieve academic success (Winne & Hadwin, 1998; Zimmerman, 2008). Examples of those processes are goal-setting, selecting and using learning strategies, and monitoring one's own effectiveness. These and other SRL processes are important for students to be able to regulate their own learning and development, not only during their school and college years but throughout their lives (e.g., Bjork, Dunlosky, & Kornell, 2013). In most models of SRL (for a review, see Panadero, 2017), both monitoring and control processes play an important role. That is, for SRL to succeed, students need to accurately monitor their learning processes and use that information to regulate further learning activities (e.g., Nelson & Narens, 1990).

Yet, research has shown that both children and adults tend to overestimate their own learning (e.g., Dunlosky & Lipko, 2007), which is problematic for decisions about further study and, consequently, for future learning outcomes (e.g., Dunlosky & Rawson, 2012). With complex tasks, SRL strategies such as self-monitoring of learning can be too demanding for an individual student. By using collaborative learning as a strategy to divide the demands of the learning task, learners can create a collective cognitive capacity, which could potentially lead to more efficient learning with more room for monitoring and regulating the learning process. In this chapter, a cognitive load perspective will be used to discuss how collaborative learning could be a strategy to improve SRL.

Self-regulated learning skills

In order to self-regulate one's own learning processes, an interaction between cognition and metacognition needs to take place (Flavell, 1979). In the model by Nelson and Narens (1990), two levels interact with each other through monitoring and control processes. The first level, the object-level, is the level at which cognitive processes such as learning, language processing, or problem solving take place. The meta-level contains a model of the learner's understanding of the task they are performing. This meta-level is partly informed via monitoring processes but also includes metacognitive knowledge about the task and the learner (i.e., strategies for specific tasks in relation to the experience of the learner; Flavell, 1979). Information gained when monitoring task performance at the object-level is used to update the model of the task at the meta-level. In turn, information from the meta-level is used to influence the activities at the object-level (i.e., control processes). These two levels, and the information flow between them, enable the learner to regulate ongoing learning processes (Dunlosky & Metcalfe, 2009).
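To make the flow between the two levels concrete, the monitoring-control loop can be sketched in a few lines of Python. This is an illustrative analogy rather than part of Nelson and Narens' own formulation; all class names and numeric values below are hypothetical.

    # Illustrative sketch (hypothetical names and numbers) of the two-level
    # model: monitoring flows from the object-level up to the meta-level,
    # and control flows from the meta-level back down to the object-level.

    class ObjectLevel:
        """Where cognitive processes (studying, problem solving) take place."""
        def __init__(self):
            self.mastery = 0.3  # hypothetical current state of learning

        def study(self):
            # Cognitive activity changes the state at the object-level.
            self.mastery = min(1.0, self.mastery + 0.2)

    class MetaLevel:
        """Holds the learner's model of the task, updated via monitoring."""
        def __init__(self, goal=0.8):
            self.goal = goal
            self.judged_mastery = 0.0  # the (possibly inaccurate) task model

        def monitor(self, object_level):
            # Monitoring: information flows from object-level to meta-level.
            self.judged_mastery = object_level.mastery

        def control(self, object_level):
            # Control: the meta-level regulates object-level activity.
            if self.judged_mastery < self.goal:
                object_level.study()
                return "continue"
            return "stop"

    obj, meta = ObjectLevel(), MetaLevel()
    while True:
        meta.monitor(obj)                # update the model of the task
        if meta.control(obj) == "stop":  # regulate further learning
            break

Note that if the meta-level's judged mastery were set higher than the actual state at the object-level, the loop would stop too early; this is precisely the overestimation problem, linked to premature termination of study, that is discussed below.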

Hence, an important strategy for effective SRL is self-monitoring, in which learners evaluate their own performance against some standard or goal. Self-monitoring can be measured by asking learners to make monitoring judgments about their own learning process. Monitoring judgments can be made retrospectively (e.g., self-assessment), concurrently (e.g., confidence judgments), or prospectively (e.g., predicting future performance; Baars, Vink, Van Gog, De Bruin, & Paas, 2014; Schraw, 2009b). For example, a judgment of learning (JOL) could be used to have learners judge whether they have understood a text or are able to answer questions about the text on a future test. The accuracy of monitoring judgments is usually operationalized as the correspondence between the judgments and test performance. This correspondence can be expressed as relative accuracy, absolute accuracy, or bias (Schraw, 2009a). Relative accuracy reflects the rank-order correspondence between monitoring judgments and performance and is measured with intra-individual correlations (often the Goodman-Kruskal gamma correlation; e.g., Maki, 1998; Thiede, Anderson, & Therriault, 2003). Relative accuracy expressed as the gamma correlation shows to what extent participants are able to discriminate between problems on which they perform poorly and problems on which they perform well (Maki, Shields, Wheeler, & Zacchilli, 2005). Absolute accuracy shows how precise a monitoring judgment is and is measured by the actual deviation between monitoring judgments and performance (e.g., Baars et al., 2014; Baars, Visser, Van Gog, De Bruin, & Paas, 2013). For example, if a student estimates that five out of ten questions will be answered correctly but only gets four questions correct on a performance test, the absolute accuracy is one. Bias measures whether there is an over- or underestimation: in the previous example, the bias would be +1, a positive value indicating overestimation (for a review of accuracy measures, see Schraw, 2009a, 2009b).
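To make the three accuracy measures concrete, the following Python sketch computes relative accuracy, absolute accuracy, and bias for a set of per-item judgments and test scores. The data are hypothetical; the gamma computation follows the standard concordant/discordant-pairs definition, and absolute accuracy follows the deviation-based description above.

    # Hypothetical per-item data: predicted and actual scores for five items.
    judgments   = [80, 60, 90, 40, 70]  # monitoring judgments (0-100)
    performance = [70, 50, 85, 45, 55]  # actual test scores (0-100)

    def goodman_kruskal_gamma(x, y):
        """Relative accuracy: (concordant - discordant) / (concordant +
        discordant) over all item pairs; tied pairs are ignored."""
        concordant = discordant = 0
        for i in range(len(x)):
            for j in range(i + 1, len(x)):
                product = (x[i] - x[j]) * (y[i] - y[j])
                if product > 0:
                    concordant += 1
                elif product < 0:
                    discordant += 1
        return (concordant - discordant) / (concordant + discordant)

    n = len(judgments)
    relative_accuracy = goodman_kruskal_gamma(judgments, performance)
    # Absolute accuracy: mean unsigned deviation of judgment from performance.
    absolute_accuracy = sum(abs(j - p) for j, p in zip(judgments, performance)) / n
    # Bias: mean signed deviation; positive values indicate overestimation.
    bias = sum(j - p for j, p in zip(judgments, performance)) / n

    print(relative_accuracy, absolute_accuracy, bias)  # 1.0, 9.0, 7.0

With these hypothetical data, gamma is 1.0 (perfect relative accuracy) even though the mean bias is +7 and the mean absolute deviation is 9: a learner can rank items perfectly while still being systematically overconfident, which is why the measures are reported separately.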

When studying word pairs (i.e., paired associates), both children and adults were found to be able to judge their memory accurately when there was a delay between studying the word pair and making the monitoring judgment (for a review of the delayed-JOL effect, see Rhodes & Tauber, 2011). That is, a simple strategy to improve monitoring accuracy when studying word pairs is to ask learners to make monitoring judgments after studying a whole list of word pairs instead of directly after each pair. Yet, this delayed-JOL effect was not found for learning from texts (Maki, 1998) or problem-solving tasks (Baars, Van Gog, De Bruin, & Paas, 2018). Moreover, reviews of research on monitoring judgments when learning from texts (i.e., metacomprehension) have shown that the accuracy of a single monitoring judgment after reading a text (200-1,000 words) is generally very low (an average gamma correlation of .27). This indicates that, without additional instructional support, learners cannot accurately monitor their own learning when learning from text (e.g., Dunlosky & Lipko, 2007; Thiede, Griffin, Wiley, & Redford, 2009).

Similarly, studies on monitoring learning from problem-solving tasks (i.e., meta-reasoning) have also found that learners experience difficulties in making accurate monitoring judgments (Ackerman & Thompson, 2017). In educational settings such as schools and universities, well-structured problems are usually used in domains such as science, technology, engineering, and mathematics (STEM). In contrast to ill-structured problems, which do not have a well-defined goal or solution procedure, well-structured problems are typically solved by applying a limited and known set of concepts and rules (Jonassen, 2011). Research has shown that, without additional instructional support, students overestimate their performance when making monitoring judgments about solving well-structured biology problems (Baars, Leopold, & Paas, 2018; Baars, Van Gog, De Bruin, & Paas, 2017; Baars et al., 2014; Baars et al., 2013).

Interestingly, generative strategies were found to improve self-monitoring accuracy when learning from expository texts and problem-solving tasks. Generative strategies are learning activities that learners can use to generate (new) information about the learning materials by elaborating on those materials (Fiorella & Mayer, 2016; Wittrock, 1992). Examples of generative strategies that were found to improve monitoring accuracy are generating keywords (e.g., Thiede et al., 2003), making summaries (Thiede & Anderson, 2003), making concept maps (e.g., Redford, Thiede, Wiley, & Griffin, 2012), giving self-explanations (e.g., Griffin, Wiley, & Thiede, 2008), making diagrams (e.g., Van Loon, De Bruin, Van Gog, Van Merriënboer, & Dunlosky, 2014), practicing problems (e.g., Baars, Van Gog, De Bruin, & Paas, 2014), and completing partially worked-out examples (Baars et al., 2013). These generative strategies can provide students with predictive cues about their comprehension of the learning materials (i.e., their mental representation), which can help them make more accurate self-monitoring judgments (e.g., Baars et al., 2014; Thiede et al., 2009).

However, a study by Baars et al. (2018) showed that self-explaining during the learning phase or at the posttest did not improve monitoring accuracy or performance when secondary education students learned to solve problems. Furthermore, monitoring accuracy was lower for more complex problem-solving tasks than for less complex ones. These results seem to imply that the complexity of the learning materials plays an important role in monitoring and influences the effectiveness of strategies to improve it.

Looking at SRL models (e.g., Winne & Hadwin, 1998; Zimmerman, 2008), inaccurate monitoring is problematic for the learning process: when monitoring is inaccurate, decisions on how to proceed with learning will most likely be ineffective or even harmful. In line with these predictions, Dunlosky and Rawson (2012) found that, without additional support, students tended to overestimate their learning, which led to premature termination of study efforts and lower retention. As the consequences of inaccurate monitoring are quite severe, and generative strategies do not always suffice to support students in making more accurate monitoring judgments, it is important to know why making accurate monitoring judgments is so difficult.
