
Count Data

Many studies of strategic processing have involved self-report data regarding strategy use, which are often normally distributed and amenable to GLM analyses (e.g., Askell-Williams et al., 2012). Other studies of strategic processing involve counts of the number of times participants use particular strategies, gathered via observation (e.g., Hagen et al., 2014), participants’ think-aloud verbalizations of strategy use (Greene et al., 2018), or trace data from computer-based learning environments (Bernacki, 2018). For example, think-aloud protocols conducted during learning events can be coded into behaviors (e.g., monitoring strategy use), and those coded data can be transformed into quantitative data by tallying them (Creswell & Plano Clark, 2018). Researchers can use count data to assess behavior, to compare behavioral measures to self-report measures, and to predict learning and performance outcomes (Gall, Gall, & Borg, 2007; Greene et al., 2011).

Count data are often not normally distributed, and when used as a criterion variable they violate a basic assumption of GLM analyses. In such cases, researchers must use statistical techniques that accommodate non-normal distributions, such as Generalized Linear Model analyses. In one study, researchers conducted a strategy use intervention to see whether the instruction received by students in the intervention group affected subsequent strategy use (Yoon & Jo, 2014). By comparing the frequencies of the coded strategy-use counts, the researchers found that the instructed learning strategy was used more often in the treatment group than in the comparison group. In sum, many studies involving strategy use or strategic processing include some measure of the frequency of those behaviors. When the outcome measure is a frequency, the data are often non-normally distributed, and in those cases count models should be considered rather than GLM.
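To make the count-model idea concrete, the sketch below simulates a design like the one described for Yoon and Jo (2014) and fits a Poisson regression, the canonical Generalized Linear Model for count outcomes. Everything here is hypothetical: the group sizes, population rates, and variable names are assumptions for illustration, not values from any study, and the model is fit with a plain Newton-Raphson routine rather than a statistics package.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical design: 0 = comparison group, 1 = treatment group
group = rng.integers(0, 2, n)
X = np.column_stack([np.ones(n), group])

# Assumed population log-rates: treatment raises strategy use
beta_true = np.array([0.5, 0.7])
counts = rng.poisson(np.exp(X @ beta_true))   # observed strategy-use counts

# Poisson regression fit by Newton-Raphson (equivalent to IRLS)
beta = np.zeros(2)
for _ in range(25):
    mu = np.exp(X @ beta)                     # model-implied mean counts
    grad = X.T @ (counts - mu)                # score vector
    hess = X.T @ (X * mu[:, None])            # Fisher information (Poisson: variance = mean)
    beta += np.linalg.solve(hess, grad)

rate_ratio = np.exp(beta[1])                  # multiplicative treatment effect
```

The exponentiated treatment coefficient is a rate ratio: values above 1 indicate more frequent strategy use in the treatment group. With real data, counts are often overdispersed relative to the Poisson assumption, in which case a negative binomial model may be preferable.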

Path Analysis

Compared to GLM, path analysis, also called path model analysis, allows researchers to model and investigate equivalent as well as more complex relations among predictor and outcome variables (Kline, 2014). In regression, all predictors are modeled to correlate with one another, and each predictor has its own unique path modeled as directly connecting the predictor to the outcome variable. Each path has an estimated regression coefficient, which can be tested for statistical significance. Each of these regression coefficients represents the unique relationship between the predictor and the outcome variable, after controlling for all other predictors in the regression. For example, a researcher may be interested in how the frequency of use of two deep strategies, such as elaboration and self-testing, as well as two surface strategies, such as highlighting and summarization, each predict academic achievement (Dinsmore, 2017; Dunlosky et al., 2013). A regression approach can be conducted using an equivalent path model, where the analysis would produce six correlations among the four strategies, as well as four path model coefficients, one for each strategy (see Figure 21.1). In statistical parlance, such a model is saturated, because every variable (i.e., predictors and outcome) is connected to every other variable.
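The saturated model of Figure 21.1 can be reproduced with ordinary least squares, which makes the regression-path-model equivalence visible. The sketch below uses simulated, standardized strategy scores; the variable names and population weights are assumptions for illustration. The four regression slopes are the four path coefficients, and the six pairwise predictor correlations are estimated freely rather than constrained.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400

# Hypothetical standardized strategy-use scores (names and weights assumed)
elaboration  = rng.normal(size=n)
self_testing = rng.normal(size=n)
highlighting = rng.normal(size=n)
summarizing  = rng.normal(size=n)

# Assumed population model: deep strategies weigh more than surface ones
achievement = (0.5 * elaboration + 0.4 * self_testing
               + 0.1 * highlighting + 0.1 * summarizing
               + rng.normal(scale=0.5, size=n))

# The four directed paths of the saturated model are the regression slopes
X = np.column_stack([np.ones(n), elaboration, self_testing,
                     highlighting, summarizing])
coefs, *_ = np.linalg.lstsq(X, achievement, rcond=None)

# The six predictor intercorrelations are left unconstrained
predictors = np.column_stack([elaboration, self_testing,
                              highlighting, summarizing])
corr = np.corrcoef(predictors, rowvar=False)
```

Because every variable is connected to every other (four paths plus six correlations), this model reproduces the observed covariances exactly; there is nothing left over to test for data-model fit.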

However, in regression models only one formulation of the variables is possible (i.e., all predictors related to the outcome directly). Researchers often have more complex conceptualizations of the relations among phenomena, such as positing mediators between variables (e.g., Pintrich, 2000). For example, rather than assuming that each strategy acts only directly on academic achievement, the researcher may posit instead that the two surface strategies are correlated, with each predicting use of elaboration, which in turn predicts self-testing, which in turn is the only predictor of academic achievement (see Figure 21.2). In this conceptualization, the two deep strategies serve as mediators of the relationships between the two surface strategies and academic achievement. To test these ideas, the researcher would use path analysis, where the posited model would include only a single correlation (i.e., between the two surface strategies), with one path from each surface strategy to elaboration, a single path from elaboration to self-testing, and a single path from self-testing to academic achievement. This more nuanced model, compared to the regression-equivalent path model, is not saturated, because there are possible paths that are not specified (e.g., a path from highlighting to self-testing, or from elaboration to academic achievement). Models that are not saturated can be more thoroughly tested for data-model fit than saturated models, and compared to other specifications (e.g., deep strategy use predicting surface strategy use, which in turn predicts academic achievement) to determine which specification best represents relations in the data. Thus, path models and their associated analyses allow researchers to test more complex models than GLM methods such as regression, and often allow for more rigorous tests and comparisons of different conceptualizations of the relations among variables. On the other hand, to be estimated successfully, path model analysis often requires a larger sample size than GLM models (Kline, 2014).

Figure 21.1 Example Path Model of a Regression Analysis

Figure 21.2 Example Path Model Analysis
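The mediation chain of Figure 21.2 can be sketched as a set of structural equations. Dedicated path analysis software would estimate them jointly and report fit statistics, but when no confounders are omitted, the individual path coefficients can be recovered equation by equation with least squares. The data below are simulated under assumed population weights; all names and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 600

# Correlated surface strategies (shared latent study tendency; values assumed)
common = rng.normal(size=n)
highlighting  = 0.6 * common + rng.normal(scale=0.8, size=n)
summarization = 0.6 * common + rng.normal(scale=0.8, size=n)

# Assumed causal chain: surface -> elaboration -> self-testing -> achievement
elaboration  = (0.4 * highlighting + 0.3 * summarization
                + rng.normal(scale=0.7, size=n))
self_testing = 0.5 * elaboration + rng.normal(scale=0.7, size=n)
achievement  = 0.6 * self_testing + rng.normal(scale=0.7, size=n)

def path_coefs(outcome, *predictors):
    """Slopes from an OLS regression of outcome on the predictors."""
    X = np.column_stack([np.ones(len(outcome)), *predictors])
    b, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    return b[1:]                              # drop the intercept

a1, a2 = path_coefs(elaboration, highlighting, summarization)
(b,)   = path_coefs(self_testing, elaboration)
(c,)   = path_coefs(achievement, self_testing)

# Indirect (mediated) effect of highlighting on achievement
indirect_highlighting = a1 * b * c
```

The product a1 * b * c estimates the indirect effect of highlighting on achievement; in the posited model this is its only effect, and it is precisely such over-identifying restrictions (the omitted direct paths) that make the model testable against the data.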

Bernacki et al. (2012) used trace data from college students’ use of a technology-enhanced environment to study relations among strategy use, self-reported motivation, and learning performance. The trace data represented counts of college student participants’ use of highlighting, note taking, and definition tools, as well as monitoring tools such as accessing a list of learning goals. Bernacki et al. (2012) could have used a regression model in their analysis, but that would not have allowed them to test their hypotheses regarding how motivation predicts strategy use, which they posited would then predict learning, a common consequential ordering in models of self-regulated learning (SRL; Schunk & Greene, 2018). By using path model analysis, they found that approach-based motivation was positively related to strategy use, whereas avoidance-based motivation was negatively related. Further, highlighting was the only strategy use variable that predicted learning. Finally, this more parsimonious model showed acceptable data-model fit, supporting it over the saturated alternative and adding evidence regarding the validity of their conceptualization.

Deekens et al. (2018) posited a similar model in the second of two studies in their article, arguing that the relationship between pre- and posttest measures of learning would be mediated first by the frequency of monitoring, which in turn would predict the frequency of deep and surface strategy use, with both strategy use variables predicting posttest performance. Again, a path analysis model allowed them to examine a common consequential ordering of variables in SRL theory, where prior knowledge positively predicts the frequency of monitoring, which in turn would positively predict deep strategy use while negatively predicting the use of surface strategies. Consistent with arguments by Dinsmore (2017) and others, Deekens et al. posited that deep strategy use would positively predict posttest performance, whereas surface strategy use would negatively predict performance. These authors found support for their path model, despite a relatively small sample size likely leading to power concerns. A standard regression model would not have allowed for tests of relations between prior knowledge and monitoring, or monitoring and strategy use, as the path model analysis did. In sum, path models allow researchers to more closely adhere to theoretical relations in their analyses and can provide unique insights regarding contingent relations among strategy use and other variables of interest in learning (e.g., motivation, prior knowledge, monitoring; Ben-Eliyahu & Bernacki, 2015).
