
Domains of Applicability and Limitations

Inventories and questionnaires on learning strategies as described above have been used for a variety of purposes (Entwistle, 2018; Garcia & McKeachie, 2005). Three important domains of applicability are the following.

First, they have been used to gain scientific knowledge about dimensions and developments in students’ use of learning strategies, their motives, and their views and beliefs. This kind of research has, for example, looked into the internal structure of learning strategies, conceptions, and orientations in different educational contexts, developments in students’ use of learning strategies during the school career and at transitional phases in that career (e.g. from secondary to higher education), consistency and variability in students’ use of learning strategies, relations between learning strategies and personal and contextual factors, and relations between learning strategies and learning outcomes. In this way a rich knowledge base has been built up about the domain of student learning in higher education (for reviews, see for example, Dinsmore, 2017; Entwistle, 2018; Entwistle & McCune, 2004; Fryer, 2017; Gijbels, Donche, Richardson, & Vermunt, 2014; Lonka et al., 2004; Pintrich, 2004; Richardson, 2000; Vermunt & Donche, 2017; Zusho, 2017).

Second, inventories and questionnaires on learning strategies have been used to help students reflect on their use of learning strategies, and to help them remedy the weaknesses in their approaches to learning and studying (e.g. Donche, Coertjens, Vanthournout, & Van Petegem, 2012; Garcia & McKeachie, 2005; Vermunt, 1995; Weinstein et al., 1988). When completing a self-report strategy inventory, students are encouraged to think about their way of learning and studying, and when they receive feedback on their strategy scale scores they can compare their own use of different learning strategies with that of their peers. The inventory items themselves can give them ideas about potential learning activities they had never thought of before. Sometimes this awareness-raising function is embedded in a learning strategy or study skills training programme, in which students receive training to improve learning strategies they are not very proficient in (Weinstein et al., 1988).
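The feedback step described above, scoring a strategy scale and situating a student relative to peers, can be sketched minimally. The scale name, item grouping, percentile feedback, and all numbers below are illustrative assumptions, not the scoring rules of any particular inventory discussed here.

```python
# Minimal illustrative sketch: score one self-report strategy scale and
# compare a student's score with peers. Scale and item ids are hypothetical.

from statistics import mean

def scale_score(responses, item_ids):
    """Average the 1-5 Likert responses belonging to one strategy scale."""
    return mean(responses[i] for i in item_ids)

def percentile_rank(score, peer_scores):
    """Share of peer scores at or below this student's score (0-100)."""
    at_or_below = sum(1 for s in peer_scores if s <= score)
    return 100.0 * at_or_below / len(peer_scores)

# Hypothetical data: items 0-2 form a 'critical processing' scale.
student = {0: 4, 1: 5, 2: 3}
peers = [2.0, 3.0, 3.3, 4.0, 4.7]

score = scale_score(student, [0, 1, 2])   # 4.0
rank = percentile_rank(score, peers)      # 80.0
print(f"scale score {score:.1f}, percentile {rank:.0f}")
```

The percentile comparison mirrors the peer-comparison feedback the text describes; a real inventory would of course use its own published scales and norm groups.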

Third, learning strategy inventories have been used to evaluate the effect of the teaching environment or a specific course on students’ learning strategies, motives, or views (e.g. Asikainen & Gijbels, 2017; Entwistle & McCune, 2013; Garcia & McKeachie, 2005). Lonka et al. (2008) point to the opportunity their MED NORD instrument provides to improve medical education through long-term follow-up studies in which students’ well-being, motivational strategies, epistemologies, approaches to learning, and perceptions of their learning environment are examined. Vermunt et al. (2018) developed their learning gain instrument to evaluate the impact of different universities, disciplines, and learning environments on students’ gains in their use of cognitive and metacognitive learning strategies. Information about the development of students’ way of learning in a particular learning environment can give important feedback to lecturers and directors of study for improving the quality of university teaching. If, for example, students’ use of memorizing strategies increases during a particular course, this may signal an undesirable effect of the teaching in that course on the quality of student learning.
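The course-evaluation use just described, checking whether a strategy scale rises or falls over a course, amounts in its simplest form to a pre/post comparison of scale scores. The sketch below is an illustrative assumption about how such a signal might be computed; it is not the procedure of any instrument named in this chapter, and all numbers are hypothetical.

```python
# Illustrative pre/post comparison of one strategy scale over a course.
# A positive mean change on a 'memorizing' scale could flag the kind of
# undesirable shift described in the text. Data are hypothetical.

from statistics import mean

def mean_change(pre_scores, post_scores):
    """Average per-student change (post minus pre) on one scale."""
    return mean(post - pre for pre, post in zip(pre_scores, post_scores))

pre = [2.1, 3.0, 2.5, 2.8]    # scale scores at course start
post = [2.9, 3.4, 3.1, 3.0]   # scale scores at course end

delta = mean_change(pre, post)
print(f"mean change on memorizing scale: {delta:+.2f}")
```

In practice such comparisons would be accompanied by significance testing and effect sizes; the sketch only shows the direction-of-change signal the paragraph refers to.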

Discussions we had with policymakers in the context of our learning gain research also revealed some expectations about the use of learning strategy inventories that we firmly disagree with. In our view these inventories cannot be used to select individual students, for example in university admission procedures. Nor can they be used for accountability purposes, for example to rank universities, courses, or lecturers based on the outcomes of comparative analyses of student learning strategy data (e.g. to rank universities by how high or low their students score on the use of critical thinking strategies). The reason for these domains of non-applicability is, in our view, precisely the self-report nature of these tools. It is vital that students, when answering questions about their learning, have no interest other than responding as honestly as possible, and that there are no right or wrong answers (Vermunt et al., 2018).

Compared to other research methods on learning strategies, self-reports have their advantages and disadvantages. Advantages include, for example, the possibility of surveying large groups of students in a relatively short time, the opportunity for students to complete the instrument at a time and place that suits them, and the possibility of comparing a student’s individual score on a learning strategy scale with those of other students. Ecological validity is usually high, since the questions are geared towards students’ studying in their natural study settings. Data collection can be fully digitalized, and data analysis of large data files can be done easily and quickly.

But there are also disadvantages. There is the issue of validity: to what extent can we report validly on the mental processes that occur in our brains (Karabenick et al., 2007; Trevors, Feyzi-Behnagh, Azevedo, & Bouchet, 2016)? Terms used in Likert scales such as ‘I do this rarely’ or ‘I do this frequently’ may mean different things to different students, or different things to the same students earlier and later in their studies. Inventories that ask students about their general or average way of learning, in particular, may impose a difficult task: generalizing or averaging their use of learning strategies across a multitude of concrete study experiences. Sometimes the items are not phrased in the language that students use to think about their learning; this is especially the case where items are derived from the scientific literature. We must also be cautious when translating or adapting existing inventories for use with populations other than those they were originally developed for. Items referring to specific learning activities may not be relevant for new populations, while the learning activities those populations actually use may not be represented in the inventories’ items (e.g. using the ASI to measure work-based learning strategies, the ILS to measure children’s learning strategies, or the SPQ to measure Indigenous students’ learning strategies).

There are alternative methods for measuring students’ use of learning strategies that may be better suited to specific circumstances or purposes. Thinking aloud, eye-tracking, video-recording observable learning behavior, interviews, learning analytics, performance assessment, fMRI scanning, portfolios, and teacher judgments are some examples of other methods used to measure the cognitive and regulative learning strategies that students use (Endedijk et al., 2016). The other chapters in this book present excellent discussions of some of these methods, so we will not go into detail here (see, for example, Braten, Magliano, & Salmeron, this volume; Catrysse, Gijbels, & Donche, this volume; Lawless & Riel, this volume). The central problem that all these methods face is that we want to observe or externalize processes that are inherently internal: how students memorize information, relate and structure theories, engage critically with their studies, apply knowledge, think of examples, relate material to their own experiences, regulate their mental processes, and so on. And we want to do that in situations and with tasks that are as similar as possible to students’ actual learning and study environments and tasks.

All these methods have validity issues. Eye-tracking yields detailed information about what students are looking at in learning or classroom situations, but the correspondence between what students are looking at and what they are thinking about is not straightforward. Thinking aloud yields rich data on students’ mental processes while they are doing a study task, but it is labor-intensive, may disrupt ‘natural’ thinking processes, and may not be representative of the study processes students engage in during their normal studies. The relation between brain processes (for example, fMRI scan data) and learning strategies remains largely unexplored.

 