
Findings and discussion of the review

The full table with the descriptive evidence from each review is presented in Table 3.1.

Each of the reviews is listed in the references section with an asterisk preceding the reference. The findings from the reviews in the table are presented and discussed in accordance with the three guiding questions for the chapter: the conceptualization and operationalization of levels of processing, the systematic effects of levels of processing on performance, and the influences of contextual and individual factors that mediate or moderate the relation between levels of processing and performance.

Conceptualization and Operationalization of Levels of Processing

Conceptualization. With regard to how levels of processing were conceptualized in these reviews, we found that ten of the reviews explicitly discussed levels of processing, while five did not. Of the five reviews that did not discuss levels of processing (*Afflerbach, Pearson, & Paris, 2008; *Alexander & Judy, 1988; *Ashcraft, 1990; *Paris, 1988; Paris, Lipson, & Wixson, 1983), two (*Afflerbach et al., 2008; *Alexander & Judy, 1988) were concerned with the definition of a strategy. For instance, Afflerbach and colleagues addressed the confusion between the terms skill and strategy, arguing that confusion between these two terms could result in less effective instruction for children and adolescents.

Of the reviews that did address levels of processing, there were a variety of frameworks from which these levels were addressed. Four of the reviews addressed levels of processing from the perspective of the development of expertise. These perspectives have been forwarded by Alexander and colleagues (*Alexander, Grossnickle, Dumas, & Hattan, 2018; *Dinsmore, 2017; *Dinsmore & Alexander, 2012; *Dinsmore, Hattan, & List, 2018). In each of these reviews, the conceptualization of deep- and surface-level processing (strategies to understand a problem versus strategies to transform it, respectively) is informed by Alexander's Model of Domain Learning (MDL; Alexander, 1997, 2004). In the MDL, surface-level strategies are those designed to better understand and solve a problem, whereas deep-level strategies are those designed to transform a particular problem. The MDL predicts that those in acclimation (i.e., novices) would rely primarily on surface-level strategies, whereas experts

Table 3.1 Pooled Studies and Codings

Afflerbach, Pearson, and Paris (2008)
  Number of studies: N/A
  Conceptualization/level: Examined differences between skills and strategies and explicitly looked at definitions of skills/strategies; the article is entirely about conceptualizing skills and strategies.
  Measurement/level: N/A
  Context (i.e., domain, setting): Reading.
  Task: N/A
  Learner individual differences: N/A
  Process-outcome links: N/A

Alexander, Graham, and Harris (1998)
  Number of studies: N/A
  Conceptualization/level: Defines strategies as procedural, purposeful, effortful, willful, essential, and facilitative; strategies as a type of procedural knowledge. Contrasts strategies with skillful behavior. Strategies as domain general, domain specific, or task specific. Includes metacognition, self-regulation, and learning and instructional strategies.
  Measurement/level: N/A
  Context (i.e., domain, setting): Ways teachers can influence strategy growth: explicitly teaching relevant strategies and creating environments in which strategies are required, valued, and rewarded.
  Task: Looks at task variables such as nature of the domain, time constraints, mode of response, and perceived value of the task.
  Learner individual differences: Knowledge, motivation, mindfulness, automaticity, and other individual differences such as short-term memory.
  Process-outcome links: It is implied that being strategic will result in better outcomes for learners, but this is not explicitly examined.

Alexander, Grossnickle, Dumas, and Hattan (2018)
  Number of studies: N/A
  Conceptualization/level: Skills v. strategies. Types of strategies: domain general and specific; deep v. surface processing; cognitive and metacognitive. Also mentions meta-strategies, relational reasoning, and online learning.
  Measurement/level: Mentions self-report, assigning conditions, think-alouds, eye tracking, and neurophysiological methods.
  Context (i.e., domain, setting): Includes a description of strategies in online settings, as well as strategies in the classroom.
  Task: N/A
  Learner individual differences: Considers epistemic beliefs, motivation, and emotion.
  Process-outcome links: Makes some loose links between strategies and learning.

Alexander and Judy (1988)
  Number of studies: N/A
  Conceptualization/level: Domain general or specific. Found definitional issues in the studies. No mention of deep v. surface.
  Measurement/level: N/A
  Context (i.e., domain, setting): Looked at studies that focused on a particular domain (although found that the studies weakly articulated the content). Mentions the importance of social-contextual factors. Found that most studies had participants of college age or older.
  Task: N/A
  Learner individual differences: Knowledge was discussed at length, since the focus was on the interaction between domain-specific and strategic knowledge. Also mentions the importance of motivation and social-contextual factors.
  Process-outcome links: Draws the conclusion that both domain and strategic knowledge are central to learning.

Ashcraft (1990)
  Number of studies: N/A
  Conceptualization/level: Strategy defined as how a task is performed mentally, with a focus on mental arithmetic. Mentioned that students can use more than one strategy at once and that strategic processes become more automatic.
  Measurement/level: N/A
  Context (i.e., domain, setting): Math.
  Task: Arithmetic tasks.
  Learner individual differences: N/A
  Process-outcome links: Performance becomes more rapid and accurate as students develop.

Asikainen and Gijbels (2017)
  Number of studies: 43
  Conceptualization/level: Directly examined deep v. surface-level processing.
  Measurement/level: Looks at self-report measures: ETLA, ASSIST, SPQ, interviews, etc.
  Context (i.e., domain, setting): Only looked at longitudinal studies in higher education. Domain specific (several higher education domains were included, such as biology, economics, and hospitality).
  Task: N/A
  Learner individual differences: Initial approaches.
  Process-outcome links: Ambiguous.

Dinsmore (2017)
  Number of studies: 134
  Conceptualization/level: Surface, deep, metacognitive/self-regulatory.
  Measurement/level: Examined how quantity, quality, and conditional use were measured.
  Context (i.e., domain, setting): Reading (45%), mathematics (18%), domain general (17%), science (10%).
  Task: Well structured (69%) versus ill structured (24%), both (7%).
  Learner individual differences: Learner goals.
  Process-outcome links: Quality and conditional use explain performance more consistently than simply frequency of strategy use; numerous person and environmental factors shape the degree to which certain strategies are effective for certain learners.

Dinsmore and Alexander (2012)
  Number of studies: 221
  Conceptualization/level: Directly examined conceptions of levels of processing: explicit in 41.4% of studies, implicit in 50%, and absent in 8.6%.
  Measurement/level: Directly measured levels of processing: 48% self-report, 28.3% by condition, 14.3% coding scheme, 8.1% absent, 0.9% behavior, 0.4% by outcome.
  Context (i.e., domain, setting): Most studies were domain general (n = 117), followed by physical/life science (n = 38) and social science (n = 36).
  Task: 60% were task based and 40% were not.
  Learner individual differences: N/A
  Process-outcome links: Mixed/ambiguous links.

Dinsmore, Hattan, and List (2018)
  Number of studies: 17
  Conceptualization/level: Surface, deep, metacognitive/self-regulatory.
  Measurement/level: Direct observation (1 study), online self-report (24%), offline self-report (71%).
  Context (i.e., domain, setting): Physical or life science (24%), social sciences (47%), performing arts, physical/kinesthetic.
  Task: Generally asked to read rather than perform tasks specific to the domain. Lumped task and outcome together, coded as ill defined (24%) or well defined (71%).
  Learner individual differences: Stage of development (71% undergraduate students), MDL stage (94% acclimation), domain knowledge, topic knowledge, individual interest, situational interest.
  Process-outcome links: Direct links to performance.

Hattie and Donoghue (2016)
  Number of studies: 228 meta-analyses
  Conceptualization/level: Surface, deep, and transfer, with an acquiring and a consolidation phase for the surface and deep levels.
  Measurement/level: N/A
  Context (i.e., domain, setting): Wide variety.
  Task: Wide variety.
  Learner individual differences: Degree to which students understand criteria for success influences strategy selection.
  Process-outcome links: The results indicate that there is a subset of strategies that are effective, but this effectiveness depends on the phase of the model in which they are implemented. Further, it is best not to run separate sessions on learning strategies but to embed the various strategies within the content of the subject, to be clearer about developing both surface and deep learning and promoting their associated optimal strategies, and to teach the skills of transfer of learning.

Najmaei and Sadeghinejad (2016)
  Number of studies: N/A
  Conceptualization/level: Metacognition as a more abstract level than cognition.
  Measurement/level: N/A
  Context (i.e., domain, setting): Business/marketing.
  Task: N/A
  Learner individual differences: N/A
  Process-outcome links: Suggests future directions to link managers' decisions to the strategies they use.

Paris (1988)
  Number of studies: N/A
  Conceptualization/level: Uses metaphors to describe learning strategies; Craik and Lockhart's depth versus Anderson's "spread of activation" are discussed.
  Measurement/level: N/A
  Context (i.e., domain, setting): N/A
  Task: N/A
  Learner individual differences: Levels of expertise are discussed.
  Process-outcome links: N/A

Pintrich (1999)
  Number of studies: N/A
  Conceptualization/level: Surface and deep (following Weinstein and Mayer), metacognitive, self-regulatory. Pintrich, Wolters, and Baxter (1999) have suggested that metacognitive knowledge be limited to students' knowledge about person, task, and strategy variables; self-regulation would then refer to students' monitoring, controlling, and regulating their own cognitive activities and actual behavior.
  Measurement/level: N/A
  Context (i.e., domain, setting): N/A
  Task: N/A
  Learner individual differences: Motivational beliefs (self-efficacy, task value, goal orientation).
  Process-outcome links: N/A

Vermunt and Donche (2017)
  Number of studies: 44 learning patterns studies in which the ILS is used
  Conceptualization/level: Deep, stepwise, and concrete strategies; regulation strategies.
  Measurement/level: All used the Inventory of Learning Styles (ILS).
  Context (i.e., domain, setting): Teaching strategies, perception of the learning environment, disciplinary differences.
  Task: Discussion is much broader than the task.
  Learner individual differences: Personality, academic motivation, goal orientation, attributions of academic success, self-efficacy, effort, epistemological and intelligence beliefs, prior education, age, and gender.
  Process-outcome links: Ties the use of strategies to better performance at the course and semester level rather than to a specific task or performance.

would increasingly rely on deep-level strategies and less on surface-level strategies (cf. *Dinsmore et al., 2018). Thus, the MDL does not predict that the quantity of strategies should relate directly to performance. Rather, the level or type of strategy can be explained by the individual's development of expertise in a particular domain - such as mathematics - and the use of the strategy appropriate for that level of expertise should better predict performance in that domain. For example, Dinsmore and Alexander (2016) empirically tested this notion by examining how levels of processing influenced performance on an astronomy task. Those who had low prior knowledge (one of the hallmarks of being a novice) did not perform well using primarily deep-level strategies, whereas those with more prior knowledge performed better on the outcome task using more deep-level strategies. For instance, participants who tried to use elaborative strategies (using one's own prior knowledge to add information beyond what the author wrote) while reading the text passage in the study only comprehended that passage better when they possessed higher levels of background knowledge. In other words, elaborating on a topic when you have little or no prior knowledge - or worse, inaccurate knowledge - can make comprehension more difficult.

Two reviews relied instead on the Learning Patterns framework (*Asikainen & Gijbels, 2017; *Vermunt & Donche, 2017). The Learning Patterns framework has evolved quite a bit over time (Richardson, 2015) but began with quasi-experimental investigations by Marton and Säljö (1976a, 1976b). These investigations examined how expected assessments changed the way individuals processed information while studying. For instance, if the assessment for a text passage was to memorize important details of the text, individuals would be expected to use surface-level strategies such as rehearsal. Individuals who were asked to interpret what the text meant would be expected to use deep-level strategies such as making inferences about the message the author is trying to convey. Although Marton and Säljö were examining these effects at the task level, much of the current research on students' approaches to learning (SAL) examines students' processing at the course or even semester level. This is evident in *Vermunt and Donche's (2017) review, in which they examined a popular instrument used to measure levels of processing in SAL - the Inventory of Learning Styles (ILS; Vermunt, 1998). It should be noted that the levels in this theory go beyond simply surface and deep, with Biggs (1987) adding an achieving level as well. Similarly, *Asikainen and Gijbels (2017) also examined self-report instruments but expanded beyond the ILS to include a wider variety of such instruments.

Three reviews did not examine levels of processing with regard to deep and surface; rather, these reviews examined cognitive versus metacognitive and self-regulatory levels (*Alexander, Graham, & Harris, 1998; *Najmaei & Sadeghinejad, 2016; *Pintrich, 1999). *Alexander et al. (1998) and *Najmaei and Sadeghinejad (2016) relied primarily on Flavell's (1976) conceptualization of metacognition to frame the differences between cognitive and metacognitive strategies. *Pintrich (1999), on the other hand, used his self-regulated learning (SRL) framework (Pintrich, 2000), which encapsulated cognitive, metacognitive, self-regulatory, and affective strategy use during performance. Although no current review of SRL strategies exists to our knowledge, the use of SRL to investigate strategy use is typified by the work of Azevedo, Greene, and colleagues (Greene & Azevedo, 2009; Taub, Azevedo, Bouchet, & Khosravifar, 2014).

For example, Deekens, Greene, and Lobczowski (2018) used an SRL framework to investigate individuals’ self-regulatory strategy use (and the levels they defined within that framework) across two academic domains - history and science. Differences between the metacognitive and self-regulatory levels are explored in depth in Dinsmore, Alexander, and Loughlin’s (2008) systematic review of those constructs.

An outlier to the three frameworks previously mentioned was *Hattie and Donoghue's (2016) meta-analysis, which was based on and refined from Hattie's Visible Learning framework (Hattie, 2008). Visible Learning is the perspective that students learn best when they become their own teachers - through, among other ideas, better constructed feedback for students to use (e.g., Hattie & Clarke, 2018). Although this framework is not as tightly constructed as the MDL or SAL - it does not contain specific mechanisms for how deep and surface strategies influence learning - *Hattie and Donoghue's (2016) meta-analysis draws primarily from information-processing views of learning (e.g., Klahr & Wallace, 1976).

Operationalization. Six of the tabled reviews specifically examined the measurement of levels of strategic processing. Two of these reviews focused solely on retrospective self-report measures (*Asikainen & Gijbels, 2017; *Vermunt & Donche, 2017). Retrospective self-report refers to measures that survey the use of strategies after the task or activity has taken place. As retrospective self-report has been typical in the SAL literature over the past few decades, the prevalence of these measures is not surprising. Given the examination of processing over longer periods of time - such as a course or a semester - retrospective self-report is easier and less time intensive than some of the concurrent self-report instruments used elsewhere. For example, Vermunt's ILS (Vermunt, 1998) asks how often students are "Relating elements of the subject matter to each other and to prior knowledge; structuring these elements into a whole."

Three of the reviews examined measurement of levels of processing beyond retrospective self-report (*Alexander et al., 2018; *Dinsmore & Alexander, 2012; *Dinsmore et al., 2018). Across these three reviews it is apparent that retrospective self-report remains the dominant measure of strategy use, with *Dinsmore and Alexander (2012) reporting that almost half (48%) of the studies they reviewed used retrospective self-report, and *Dinsmore et al. (2018) reporting a higher percentage of retrospective self-report (71%) in their review of studies solely using the MDL. Other methods of measurement included concurrent self-report, which refers to measurements that collect data about strategic processing during a task rather than after it. Concurrent measurements of strategy use were primarily think-aloud protocols, eye tracking, and neurophysiological measures. Think-aloud protocols ask individuals to verbally report their strategy use as they are engaged in a task (cf. Ericsson, 2006; e.g., Parkinson & Dinsmore, 2018). Eye-tracking measurements examine how movement of the eye relative to a task (typically a text) relates to individuals' processing (cf. Rayner, Chace, Slattery, & Ashby, 2006; e.g., Catrysse et al., 2018). Finally, neurophysiological measures, such as functional magnetic resonance imaging (fMRI) or functional near-infrared spectroscopy (fNIRS), relate the hemodynamic response (i.e., blood flow) of certain regions of the brain to individuals' processing (cf. Kotz, 2009; e.g., Dinsmore, Macyczko, Greene, & Hooper, 2019).

Further, one review examined different facets of strategy use and levels of processing more specifically. In his review, *Dinsmore (2017) examined measures of the quantity (i.e., how often a strategy was used), quality (i.e., how well a strategy was used), and conditional use (i.e., when a strategy was used) of strategies to investigate whether these facets of strategy use better related to performance outcomes - a topic discussed in a subsequent section. In this review, Dinsmore found that 94% of the studies contained some measure of quantity, while only 24% and 19% of those studies contained some measure of quality and conditional use, respectively. Since most of the studies reviewed were self-report, capturing the quantity of strategy use is fairly straightforward. However, capturing the quality and conditional use of strategies requires more time- and labor-intensive measures, such as think-aloud protocols (TAPs).

Discussion. At issue in the previous two subsections were the conceptualization and operationalization of levels of strategy use. Taken together, findings from these reviews indicate that issues of conceptualization and operationalization have plagued the educational and psychological literature. With regard to the conceptualization of levels of processing, it is clear that how these levels are conceptualized is at worst not explicitly defined (*Dinsmore & Alexander, 2012), and at best researchers in this area have been using competing frameworks with little impetus to collaborate across them - with some exceptions (Dinsmore et al., this volume; Gijbels & Fryer, 2017). As Loughlin and Alexander (2012) pointed out, without conceptual clarity, interpreting the findings of these studies - and their accompanying reviews - becomes difficult.

Exacerbating these conceptual issues are measurement issues. The heavy reliance on retrospective self-report has been highly problematic in related areas of the literature such as metacognition and SRL (Dinsmore et al., 2008; Veenman, Van Hout-Wolters, & Afflerbach, 2006). However, more time- and labor-intensive measurements such as TAPs are likely not suitable for large, generalizable, longitudinal studies that leverage larger sample sizes over repeated instances across a semester or year of study. Therefore, there has been - and remains - a difficulty with accurately and practically assessing levels of cognitive processing. This issue has left us with either measurements that are quite practical for collecting data from hundreds, even thousands, of students but that may or may not accurately reflect their strategic or cognitive processing (i.e., retrospective self-report scales), or measurements that may be more accurate but are difficult to collect and analyze at any large scale.

 