Commentary: Measuring Strategic Processing in Concert: Reflections and Future Directions

In the very first chapter of the Handbook, the editors described the rationale for the section on measuring strategic processing: measurements and analyses of strategic processing data are rapidly evolving, and the field is beginning to move away from a reliance on retrospective self-reports towards finer-grained measurements of strategic processing, along with new approaches to analyze and visualize the data. The section of the book on measuring strategic processing gives an excellent overview of the different ways strategic processing can be measured today: retrospective self-reports (Vermunt, this volume), concurrent and task-specific self-reports (Bråten et al., this volume), and eye-tracking and fMRI (Catrysse et al., this volume). Finally, the use of Big Data is explored as a way to measure and present strategic processing in educational environments (Lawless and Riel, this volume). In this chapter we will briefly discuss each of these chapters and measurement approaches and aim to provide the reader with suggestions for using these measurement techniques in concert. Therefore, we will first try to sketch a clear overview of which aspects of strategic processing each of the chapters aims to measure and at what level of granularity, and we will dig into the question of how the different chapters conceptualize and operationalize strategic processing. Getting this picture clear is important when thinking about how to use different measurement techniques in concert, because there is, at this moment, no strong reason to assume that the term conveys the same meaning in each of the chapters, even though the authors were all invited to write about measuring the same concept of strategic processing (cf. Alexander, 2017). The field has long been dominated by self-report measures of strategic processing, and self-reports still play an important role. We will start our chapter with a brief discussion of the two chapters in this volume that focus on self-report measures.

Measuring strategic processing: reflections

In his chapter about Surveys and Retrospective Reports to Measure Strategies and Strategic Processing, Vermunt puts the focus on the measurement of learning strategies and conceptualizes them as a combination of cognitive, metacognitive, and affective learning activities that students use to learn (Vermunt, this volume). Vermunt hereby uses the broadest and most general conceptualization of strategic processing of all. He describes two “generations” of student learning inventories to measure this concept: a first generation that focuses on measuring cognitive processing strategies (and to some extent also motivation) and a second generation that also includes metacognitive scales. Although some of these instruments measure students’ processing strategies in a rather general way (e.g., the ILS), the second-generation instruments in particular were not, and are not, intended to measure stable constructs. This common misunderstanding has been fed by the faulty association with other trait-based self-report instruments because of the name under which Vermunt’s widely used ILS survey is known: the Inventory of Learning Styles. To avoid this misunderstanding, Vermunt (this volume) proposes in his chapter to change the name of his well-known ILS from “Inventory of Learning Styles” to “Inventory of Learning Patterns of Students.” Learning patterns is the label that Vermunt and other researchers have used in most of their publications in the 2010s to conceptualize the combination of cognitive, metacognitive, and affective learning activities that students use to learn (see, e.g., Gijbels, Donche, Richardson, & Vermunt, 2014). Although the validity of retrospective self-reports, and especially of inventories asking students about their general or average way of learning, has been questioned, Vermunt also stresses some advantages of this technique, such as the possibility of gaining an overview of large groups of students in a short time, at a place and moment that suits the students, and with high ecological validity. According to Vermunt, all methods to measure strategic processing face the problem that we want to externalize processes that are inherently internal, and all methods face validity issues. Hence, more triangulation research is needed in which multiple research methods (including retrospective self-reports) are used (see also Alexander, 2017). Overall, Vermunt (this volume) highlights important issues that played a role (and to some extent still do) in the development of measurement instruments of students’ learning strategies. He points out the issue of stability, with earlier instruments like the ASI (Entwistle & Wilson, 1977) capturing personality characteristics, typically considered stable traits. Similarly, the Inventory of Learning Processes (ILP; Schmeck, Ribich, & Ramanaiah, 1977) defines a learning style as a disposition, also suggesting a stable characteristic, although the ILP does not include personality aspects. In contrast, as mentioned above, the later generation of measurement instruments no longer starts from learning approaches as stable processes and acknowledges variations across (among other things) learning domains and topics. The task/course-specific versus general focus of measurement instruments also relates to some extent to the trait-state dichotomy. This led to different connotations of strategies and styles:

“Strategy” was used to refer to the preferences shown in tackling an individual task, while “style” related to general preferences more akin to the psychological term cognitive style with its implications of relatively stable behaviour patterns rooted in personality differences or cerebral dominance.

(Entwistle, 1991, p. 201)

A second issue that becomes apparent from Vermunt’s chapter is that the self-report instruments developed to capture approaches to learning/students’ processing really differed in aim. Although all are concerned with students’ learning strategies/activities, some focused on (the frequency of) students’ behavior (“How frequently do you do this or that activity?”), while others were mainly concerned with how students perceived aspects of their own learning or the learning environment (usually making use of statements with which students have to indicate their (dis)agreement, or “digging deeper” with interviews). This distinction between behavior and perceptions can directly be linked to the three applications of the inventories/questionnaires that Vermunt (this volume) describes: (1) gaining knowledge about dimensions/developments in students’ use of learning strategies (i.e., behavior), versus (2) helping students to reflect on their own use of learning strategies and (3) evaluating the teaching environment (i.e., perceptions of learning in the second application and of teaching in the third). Besides measuring behavior versus perceptions, instruments also differ in how inclusive they are. For example, the LASSI was designed as a diagnostic instrument, and the MED NORD questionnaire maps students’ well-being and study orientations, therefore including many aspects of students’ learning. These differences in the aims of the instruments put the discussion of conceptual clarity, which we touched upon above, to the fore. Since these instruments introduce a wide variety of concepts (see the tables in Vermunt’s chapter) and some also address related concepts such as attitudes, personality characteristics, cognitive styles, and preferences for physical and social study environments, conceptual clarity about what we mean by a specific term and how it relates to associated concepts remains the absolute first step. Although we fully agree with Vermunt’s call for more focus on relationships, we feel this can only be achieved in a meaningful way when the concepts (used to operationalize strategic processing) that are to be related are sufficiently defined (see also Dinsmore & Alexander, 2012).

The focus of the chapter by Bråten, Magliano, and Salmerón is on concurrent and task-specific self-reports to measure strategic processing. They define (processing) strategies as “forms of procedural knowledge that individuals intentionally and planfully use for the purpose of acquiring, organizing, or elaborating information, as well as for reflecting upon and guiding their own learning, comprehension, or problem solving.” Strategies are distinguished from skills, the latter referring to automatic, unintentional, and routinized information processing, while strategies involve effortful, intentional, and planful processing. When automatic skills “fail,” strategic processing might lead to the desired outcome.

They distinguish superficial (such as selection and rehearsal) and deeper (such as constructing summaries and drawing inferences) processing strategies. Self-reports are defined as “utterances or answers to prompts or questions provided by an individual about his or her cognitions and actions during learning, comprehension, or problem solving.” In the chapter, three different models for understanding the role of strategic processing in learning are discussed: the good strategy user model (Pressley, Borkowski, & Schneider, 1987), the model of domain learning (Alexander, 2004), and the cyclical model of self-regulated learning (Zimmerman, 1998). The authors point out that these models have relied to a large extent on concurrent, task-specific self-reports in attempting to measure strategic processing. The most important part of the chapter is where the authors discuss four different ways to assess strategic processing by means of self-reports: verbal protocol analyses, eye-movement cued self-reports, task-specific self-report inventories, and diary methods. An interesting point raised concerning validation is the issue of priming/cueing by inventories. More specifically, the authors note that:

Students’ reports of strategy use were restricted to strategies prelisted in an inventory even when those strategies did not seem to reflect what the students did during task performance and they were allowed to describe what they actually did in their own words.

Researchers should be aware of this “side effect” that inventories can have. In a similar vein, if “self-monitoring processes” are to be studied and prompting devices are used to keep track of those processes, a critical question to ask is to what extent these processes can still be considered “self.” Bråten and colleagues conclude their chapter by stating that there is a general lack of studies that compare strategy data across different methodologies, and they call for more multiple-method research including one or more of the self-report methodologies that they discussed “in combination with other types of strategy measures to clarify the similarities and differences between those measures.” This relates to the point we made earlier in this chapter regarding conceptual clarity. Another observation from Bråten and colleagues’ description of the different models is that different aspects of motivation are involved. As Zimmerman pointed out, “strategic competence is of little value without motivation,” so we clearly understand and support the link with motivational concepts in the different models. Pressley’s Good Strategy User Model and Zimmerman’s Model of Self-Regulated Learning make links to attributions and self-efficacy respectively, dealing with the motivational question, “Can I do this task?” Alexander’s Model of Domain Learning brings interest to the fore, which relates to the question, “Do I want to do this task?” Interestingly, the link with the question, “Why do I want to do this task?”, relating to goals and reasons for performing specific activities, is not explicitly made in these models (it is made in relation to verbal protocols later in the chapter), while goals seem to be relevant in this discussion as well (for a detailed discussion of the different motivational questions and their related concepts, see Graham & Weiner, 2012; Pintrich, 2003; Wijnia, Noordzij, Arends, Rikers, & Loyens, 2019). Hence, the chapters of Vermunt and of Bråten et al. end with a similar message: there is still a place for self-report measures, but we need to combine them with other types of measures. The question remains, however, whether we are indeed talking about data triangulation when we combine different ways to measure strategic processing, or whether we should be talking about triangulation of theories, since the combination of different measures might in fact imply that different (aspects of the same theoretical) constructs are being measured. In addition, Bråten and colleagues also highlighted the importance of the replication of research findings. What is a good moment to start triangulation? When can one be reasonably confident that data from an individual source/instrument are sufficiently stable/reliable?

While both the chapters of Vermunt and of Bråten et al. deal with self-report measures of strategic processing, there is a clear difference in the type of self-report instruments they describe. The instruments discussed by Vermunt are all retrospective self-reports (they ask students to reflect upon and self-report their strategic processing in general, not connected to a specific task). The main part of Vermunt’s chapter deals with surveys, which can be labeled as “off-line” ways to self-report strategic processing. Off-line refers here to self-report inventories or interviews that are not administered while the learning takes place, but before or after the learning task. The chapter by Bråten et al. deals with both online and off-line self-report instruments but differs from Vermunt’s chapter because Bråten and colleagues clearly focus on measurement instruments that try to capture students’ strategic processing during (online) or immediately after (off-line) task performance. In the latter case, the self-report questions are formulated in a task-specific way. The difference between Vermunt’s and Bråten et al.’s chapters is hence not the difference between online and off-line measures, but between retrospective (Vermunt) and concurrent (Bråten et al.) self-reports. Both Vermunt and Bråten et al. call for self-report instruments to be combined with other types of measures. Combining retrospective and concurrent self-report instruments could also be an interesting suggestion. While far less “fancy” than the measurement methods we will discuss in the next paragraphs, this suggestion already illustrates the types of problems and opportunities that triangulation of this type of data (and the “level” of the theories behind the data) could imply.

The chapter by Catrysse et al. (this volume) has a clear focus on “online” and “concurrent” measurement methods. The chapter gives an overview of how eye-tracking and functional magnetic resonance imaging (fMRI) can be used to examine students’ levels of processing in relation to learning from verbal material. Catrysse and colleagues describe different theoretical frameworks to conceptualize strategic processing and link each framework to the measurement tool used. Interestingly, the authors speak of a “general disposition towards levels of processing,” although they do not elaborate on whether they see processing strategies as stable characteristics. What becomes clear from the chapter is that it is not very insightful to categorize students as deep or surface learners, as the authors indicate at several points that research points in the direction of a combination of deep and surface processing within students. A strong point of their chapter is that they explicitly mention the role of assessment, acknowledging that assessment demands can steer students’ learning activities (see also Loyens, Gijbels, Coertjens, & Côté, 2013). Indeed, the type of assessment and the weight accorded to it determine students’ study activities (e.g., Al Kadri, Al-Moamary, & Van der Vleuten, 2009). fMRI research has used basic learning tasks at the word level (e.g., judging whether a word refers to a living or a non-living object as an example of a surface-level task) to examine students’ levels of processing, for which mainly Craik and Lockhart’s (1972) framework has been used. The word level at which fMRI can be applied immediately poses a limitation, as learning often involves the text level. Certainly in higher education, understanding, critically evaluating, and being able to integrate different text sources is an all-important skill. Another legitimate question that the authors themselves pose regarding the potential of fMRI in measuring students’ levels of processing is how these levels of processing are reflected in behavioral measures. The answer to this important question is still open. The eye-tracking research describes levels of processing during text learning as a multidimensional construct and stresses the importance of the interplay between levels of processing and other learner characteristics such as interest. In addition, contextual characteristics are also of influence, as levels of processing also interact with these characteristics. The importance of the interplay between learner characteristics and the nature of processing is reflected in several theoretical models, such as those described by Alexander (1997) and Richardson (2015), but also by Vermunt and Donche (2017). An important reflection that Catrysse and colleagues make, however, is that what is referred to as “deep cognitive processing” in these theoretical models of students’ levels of processing is not at all the same as what is understood by “deeper cognitive processing” in the eye-tracking literature. Either longer (eye) fixation durations or higher activity in certain brain areas can be related to different kinds of cognitive processing. Catrysse et al. therefore refer to students’ levels of processing as complex, multidimensional, and dynamic in nature and stress the importance of adopting multi-method designs to grasp this complexity.
According to the authors, micro-measures such as eye tracking and fMRI cannot be used as stand-alone measures and hence the use of multi-method designs is advocated. While the suggestion to use multiple methods is very explicitly present in their chapter, the issue of deliberately combining different theories of strategic processing (at different levels) is only implicitly discussed.

Lawless and Riel (this volume) describe the aim of their chapter as providing inspiration regarding where the future of research in education in general, and in strategic processing in particular, needs to go. In their chapter they examine the capture and analysis of data in various commercial applications and how industry has successfully leveraged these data to better understand user engagement. The authors remark that “we in education have not kept up with many of the affordances of technology in the way that industry has.” Given the financial aspects involved, this is not surprising. Nevertheless, it remains an interesting and important issue. The focus of the chapter is on Big Data, which the authors define by a series of characteristics that include volume, but also velocity, variety, value, and veracity. The chapter focuses on “the affordances of Big Data and what this means for the future of research examining learners’ strategic processing and the design of instruction to improve these strategies in the service of obtaining positive learning outcomes.” While the authors do explain what they understand by Big Data, it is far less clear how “strategic processing” or “these strategies in the service of obtaining positive learning outcomes” should be understood. The authors do mention the importance of supporting/scaffolding students’ learning processes, especially in open-ended learning activities such as problem- and project-based learning. Indeed, these instructional processes aimed at guiding students’ learning processes in student-centered learning environments often remain implicit, but they are crucial for the success of those instructional approaches (for an overview of the processes involved in problem-based learning, see Wijnia, Loyens, & Rikers, 2019). The chapter takes a “data-driven” approach towards “strategic processing.” Indeed, today much data about learners is available, both about their digital lives (e.g., clicker data, website viewing patterns) and about their physical lives (e.g., steps taken, heart rate). These daily activities can be observed and analyzed in Big Data contexts and result in learner profiles that reveal, for example, learners’ interests. While the gaming industry already uses this type of data to keep gamers engaged and interested in a game, the step towards introducing adaptive difficulty in learning environments also seems possible.

When it comes to measuring strategic processing based on Big Data, the idea is that “productive strategies” can be detected and that students can be nudged towards using such strategies. This indeed sounds promising, but in order to become pedagogically meaningful, the conceptual question, “What is a strategy?”, remains to be answered. Without a clear conceptual idea of what a strategy is or might be, identifying a series of sequences for high- or low-performing learners will probably continue to fall short of the hype and fail to inform instructional interventions. In addition, and more generally, one must remain critical of the purposes for which these data are used. On the one hand, they can provide interesting insights for both educators and students themselves. The authors refer to formative feedback for students in this respect, helping them get a better sense of what they are doing and how effective it is. Educators, too, can get a better idea, based on Educational Data Mining/Learning Analytics (EDM/LA) systems, of what better-performing students are doing in their courses. On the other hand, for systems such as Course Signals, there is also the danger of stereotype threat or, even more severe, exclusion. Similar to Vermunt (this volume), who objected to the use of learning strategy inventories for the selection of individual students, Lawless and Riel also indicate that the monitoring of students’ educational trajectories “can inadvertently imprison learners in cages designed by their past choices.” This is not to say that this type of monitoring making use of Big Data cannot take place, as prior studies have provided interesting insights into predictors of study success by combining student information from a variety of sources (e.g., De Koning, Loyens, Smeets, Rikers, & van der Molen, 2012; Richardson, Abraham, & Bond, 2012). Hence, the purpose of EDM/LA systems should always be carefully considered. An important observation in this respect is, as the authors acknowledge, that “EDM/LA approaches are only as good as the corpus of data available to extract, aggregate, and analyze.”

Another issue that emerges from this chapter is again the question of self- versus other-monitoring when tools/systems are used to help students with their study processes. This is in line with the point we made earlier in relation to the chapter by Bråten and colleagues. We were struck by the description of the results of Sun and colleagues (2011), who described how students became dependent on the tools presented to them. Nevertheless, we concur with the authors that there is great potential for Big Data to foster automated instructional personalization.

The chapter by Lawless and Riel ends by raising some important theoretical, ethical, and logistical challenges for future research and calls for a more team-based approach (i.e., de-siloing educational research) to research into strategic processing, with diverse teams of experts across education, content disciplines, and computer science. While we agree with the authors’ conclusion, we also think that a team-based approach is only possible when there is agreement on the individual concepts (i.e., conceptual clarity).

 