
Operationalization of generality and specificity

Within the existing research on domain-general and domain-specific strategic processing, an initial operational divide separates those who identify particular strategic processes as domain-general mainly on theoretical grounds, based on the conceptualized usefulness of a given strategy across domains (e.g., Niaz, 1994), from those who rely on data (possibly published across multiple studies) to determine whether a particular strategy is actually domain-general in its usefulness (e.g., Dinsmore, 2017). This pattern may be problematic for this area of research because some sets of strategies (e.g., help-seeking strategies) can be described as domain-general simply because, theoretically, such a type of strategy is easily conceptualized as useful across a number of tasks and domains of learning. In the actual enactment of a strategy such as help-seeking, however, across domains, tasks, learning contexts, or developmental periods, the strategy may look very different. For this reason, theorizing about the domain- or task-generality of particular strategic processes can rest on implicit semantic and ontological categories in the mind of the researcher, making theoretical debates about the domain-generality or specificity of a given strategy difficult to resolve (hypertext reading strategies are one recent example; Alexander, Grossnickle, Dumas, & Hattan, 2018; Leu et al., 2008).

Help-seeking strategies, and their various specific enactments across contexts, form a useful illustration of the way in which theorizing about the domain-generality of strategic processes can depend on the ontological categorization of those processes. For example, a young child learning to draw with colored pencils may evoke a highly emotionally charged help-seeking strategy (e.g., crying), while a graduate student learning to do statistical analysis may employ a very different help-seeking strategy, such as reading statistical message boards on the Internet. Are these two very different sets of behaviors both instances of the same strategic process? Because strategies have historically been defined as goal-oriented and effortfully used procedural knowledge (Alexander & Judy, 1988), and therefore must be in service of accomplishing a goal, the goal itself (e.g., getting help) may not be the most useful way to define or identify the strategy. Rather, it may be more helpful to theoretically separate a student's goal in enacting a strategy from the strategy itself, as some in the literature have previously done (e.g., Fryer, Ginns, & Walker, 2014). This is because many human goals are necessarily salient across domains of learning, and a variety of different strategic processes may be useful in achieving those goals (Ames & Archer, 1988). This conceptual issue is relevant to the main focus of this chapter because the goal of a strategy may be inherently domain-general, but the particular process that an individual student uses to achieve that goal (i.e., the strategy) can be domain-specific in its enactment.

Complicating matters further, it is of course always possible for a student to attempt a particular strategy on a task, or within a domain or discipline, in which it is not appropriate. But does the presence of an attempt indicate that the strategy is domain-general? I would argue that some commonly expected effectiveness should be required to mark a particular strategy as domain-general, rather than simply an attempt. To return to the help-seeking example above, the young child who resorted to crying as a help-seeking strategy while learning to draw with colored pencils may find the same strategy ineffective on another task or within another domain (e.g., learning to play a video game), because caregivers or instructors may respond differently across those contexts. The difference in effectiveness of this particular help-seeking strategy may be even more stark across developmental periods as the child grows up. As a somewhat frivolous example, crying is not likely to be a highly effective help-seeking strategy in graduate-level statistics courses, but other forms of help-seeking, such as sending an email to an instructor, may be effective. In all of these cases, the goal of the procedure is the same (i.e., help-seeking), but the actual process engaged in by the learner is very different, both in its enactment and in its effectiveness (Reeves & Sperling, 2015).

The issue of disentangling the strategic process from its goal is related to the further methodological problem of meaningfully connecting the observed behaviors of participants to their underlying cognitive mechanisms or latent structure. For instance, one of the most frequently utilized methods for making inferences about underlying mental attributes from observed data is through latent variable analyses such as factor analysis or item-response theory models (e.g., Dumas & Alexander, 2016). Such models relate to the study of domain-generality and specificity, because they are capable of using the covariance among observed variables to determine whether an observed variable (e.g., an item on a measure) indicates a highly specific latent attribute or a latent attribute that is more generalizable. The well-known and influential theory of general intelligence (g; Spearman, 1904) is one theoretical perspective that posits a body of entirely domain-general cognitive abilities that is based mainly on evidence from factor analytic investigations. In contrast, other theories about the structure of mental attributes include more domain-specific cognitive attributes (e.g., Carroll, 1993), and also base their arguments on factor analytic evidence. Within this factor analytic tradition, the way in which student performance on particular tasks covaries is used to make inferences about the generality of underlying abilities. For example, if student performance on a number of tasks or measures covaries strongly and in a positive direction, an inference can be made that a generalizable underlying latent attribute causes the variation in performance on each task. In contrast, if performance on a number of tasks covaries weakly, the opposite inference—that multiple highly specific latent attributes are present—can be made.
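The covariance-based inference described above can be made concrete with a small simulation. The sketch below is purely illustrative and is not drawn from any of the cited studies: the function names (`simulate_scores`, `mean_intertask_corr`) and the single `general_weight` parameter are hypothetical simplifications of a one-factor model, written with only the Python standard library. When task scores share a strong common factor, the average inter-task correlation is high, supporting an inference of a generalizable latent attribute; when scores are driven mostly by task-specific factors, the correlations sit near zero.

```python
import random

random.seed(1)

def simulate_scores(n_students, n_tasks, general_weight):
    # Each student's score on each task is a weighted mix of one shared
    # "general ability" and task-specific noise (a toy one-factor model).
    scores = []
    for _ in range(n_students):
        g = random.gauss(0, 1)  # the student's general latent attribute
        scores.append([
            general_weight * g + (1 - general_weight) * random.gauss(0, 1)
            for _ in range(n_tasks)
        ])
    return scores

def mean_intertask_corr(scores):
    # Average Pearson correlation across all pairs of tasks.
    n_tasks = len(scores[0])
    cols = [[row[t] for row in scores] for t in range(n_tasks)]

    def corr(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
        sx = (sum((a - mx) ** 2 for a in x) / n) ** 0.5
        sy = (sum((b - my) ** 2 for b in y) / n) ** 0.5
        return cov / (sx * sy)

    pairs = [(i, j) for i in range(n_tasks) for j in range(i + 1, n_tasks)]
    return sum(corr(cols[i], cols[j]) for i, j in pairs) / len(pairs)

general = simulate_scores(2000, 5, general_weight=0.8)   # mostly one shared factor
specific = simulate_scores(2000, 5, general_weight=0.1)  # mostly task-specific factors

print(round(mean_intertask_corr(general), 2))   # high: suggests a general attribute
print(round(mean_intertask_corr(specific), 2))  # near zero: suggests specific attributes
```

As the chapter notes, the inference runs from the pattern of covariation to the latent structure: the simulation only shows that the two structures produce distinguishable correlation patterns, not that real performance data identify the underlying strategic processes.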

However, one major weakness in linking latent variable research to research on strategic processing is that the actual cognitive processes required to successfully complete the types of tasks or tests that are conducive to psychometric analysis are seldom known with enough authority to infer whether the procedural knowledge being measured is actually domain-general, or whether some other capacity, such as processing speed (Habeck et al., 2015), is driving the covariance. In addition, almost any cognitive task above a certain level of complexity can be solved in multiple ways and with varying strategic processes, so the covariance structure of performance data typically used in factor analytic research can rarely point directly to specifically identified strategic processes. In this way, latent variable methods are highly useful for identifying the domain-generality of abilities (which consist of both declarative and procedural knowledge, evoked in both quantitatively and qualitatively different ways across students) but struggle to provide strong evidence for the domain-generality of strategic processes themselves. Please see Greene and colleagues' contribution to this Handbook for a full discussion of variable-centered methodological approaches to strategic processing research.

In contrast to methods that use the covariance among task performance to infer the generality of cognitive functions, other programs of research that have been relevant to the domain-generality and specificity of cognitive strategies have used a process-oriented methodological and measurement approach. In such an approach, the actual processes that participants enact while problem solving are the focus of research. For example, studies in this tradition may utilize think-aloud (Anmarkrud, Bråten, & Strømsø, 2014) or eye-tracking (Catrysse et al., 2018) methods in order to identify not only whether or how well students are able to complete a task but also how they go about it procedurally. Using data such as these, researchers are able to determine whether or not a particular strategy is useful to students across multiple learning tasks, domains of learning, or even across multiple disciplines of practice. For example, if researchers observe students engaging in the same or very similar strategic procedures (e.g., summarizing text) both when they are learning biology and when they are learning psychology, that may indicate that such a strategy is domain-general because it is used across domains.

Of course, the same strategy may be differentially effective across domains, constituting a highly adaptive or optimal strategy in one domain while being a relatively weak strategic option in another. For example, visualizing may be a highly useful strategy in domains such as chemistry or geometry, but only a somewhat useful strategy in domains such as history. Nonetheless, students may engage in the visualization strategy across both domains, marking it as a domain-general strategy. Such a pattern illustrates a critical point for the direct instruction of strategic processes: while an instructor may teach domain-general strategic processes and describe them as such to students, it is likely also critical to carefully explain the particular tasks or learning contexts within those domains for which the strategy may be most appropriate. One example of a strategy that may be overused, at least by undergraduate students, is highlighting (Cerdán & Vidal-Abarca, 2008). As a support for organizing and remembering what is read, highlighting appears useful across many different types of texts and across reading situated in a variety of domains. But more detailed research has shown that highlighting typically supports only surface-level cognitive processing and can be more or less effective depending on elements of the text being read and highlighted (e.g., whether or not the text features technical diagrams; Cromley, Snyder-Hogan, & Luciw-Dubas, 2010). For that reason, a strategy like highlighting may be domain-general, but its effectiveness for learning across domains is far from certain.

In addition, the identification and operationalization of the domain-generality of a given strategy is complicated by the question of whether domain-generality presupposes that the same strategy is useful across domains for the same individual student, or whether it can mean only that the strategy is useful across domains, though not necessarily for the same student. This question deals specifically with the relations among strategies and the way ability or performance in a given domain is typically measured, as well as with the question of transfer of procedural knowledge across tasks and domains.
