Other Latent Variable Models

Latent variable (factor analysis) approaches are important techniques for researchers analyzing strategy data. One reason is the important point brought up by Freed et al.: the need for data reduction. If we gather (or code for) 35 different strategies, we may not be able to conduct 35 separate analyses, but we can reduce these 35 codes to a smaller number of high-level codes. However, researcher decisions about these groupings might be biased; hence, more objective data reduction techniques such as exploratory factor analysis or multidimensional scaling are used. The disadvantage of these techniques is that they are completely data-driven; results may be specific to the particular sample. As with path analysis and SEM, there may not be a single ‘best’ model.
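
To illustrate what such data reduction can look like in practice, the following is a minimal sketch using exploratory factor analysis (via scikit-learn) on simulated strategy-frequency data. The 35 codes, the choice of three factors, and all values are hypothetical assumptions, not a reproduction of any study discussed here.

```python
# Illustrative sketch: reducing many strategy codes to a few latent factors.
# The data, number of factors, and variable names here are hypothetical.
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

# Simulated frequencies of 35 coded strategies for 300 participants
n_participants, n_strategies = 300, 35
X = rng.poisson(lam=3, size=(n_participants, n_strategies)).astype(float)
strategy_names = [f"strategy_{i + 1:02d}" for i in range(n_strategies)]

# Standardize, then extract a small number of latent factors
X_std = StandardScaler().fit_transform(X)
fa = FactorAnalysis(n_components=3, random_state=0)
scores = fa.fit_transform(X_std)  # participant-level factor scores

# Loadings show which strategy codes group together on each factor
loadings = pd.DataFrame(fa.components_.T,
                        index=strategy_names,
                        columns=["factor_1", "factor_2", "factor_3"])
print(loadings.round(2).head(10))
```

The resulting factor scores (rather than the 35 raw codes) could then be carried into subsequent analyses, which is the data-reduction payoff described above.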

Growth Curve Modeling

As noted by Freed et al., there is much interest in how strategy knowledge changes over time, and growth curve modeling (GCM) is a very powerful and flexible approach. GCM can be implemented either in an SEM framework or as a special form of multilevel modeling. The main results of GCM are an average starting score (intercept) and an average rate of growth (slope). These are averages because, in effect, a separate curve is fit for each participant. Intercepts and slopes can be related to antecedent(s) and/or outcome(s) of those strategies. For example, Ahmed and colleagues (Ahmed, Van der Werf, Kuyper, & Minnaert, 2013) modeled growth in three types of learning strategies (shallow, deep, and metacognitive learning strategies) measured at the beginning, middle, and end of 7th grade. Four academic emotions from the beginning of 7th grade (anxiety, boredom, enjoyment, and pride) were tested as antecedents of the strategy growth curves. Patterns were somewhat complex, but students who had higher scores on enjoyment and pride at the beginning of the year also had higher scores on all strategy use scales at that time and grew faster on all of the strategy use scales over the course of the year. GCM can easily handle missing data and non-linear growth (curvilinear or discontinuous) in a way that mixed ANOVA cannot.
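
To make the multilevel version concrete, the following is a minimal sketch of a linear growth model with random intercepts and slopes, fit with the statsmodels library on simulated long-format data. The three-wave design, sample size, and variable names are illustrative assumptions rather than a reproduction of the Ahmed et al. (2013) analysis.

```python
# Illustrative sketch: a linear growth curve as a multilevel model with
# random intercepts and slopes. Data and variable names are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_students, waves = 200, 3  # e.g., beginning, middle, and end of the year

student = np.repeat(np.arange(n_students), waves)
time = np.tile(np.arange(waves), n_students)  # 0, 1, 2

# Each student gets their own intercept and slope around the group averages
true_intercepts = rng.normal(3.0, 0.5, n_students)
true_slopes = rng.normal(0.3, 0.2, n_students)
deep_strategy = (true_intercepts[student]
                 + true_slopes[student] * time
                 + rng.normal(0, 0.3, n_students * waves))

data = pd.DataFrame({"student": student, "time": time,
                     "deep_strategy": deep_strategy})

# Random-intercept, random-slope growth model: the fixed effects estimate
# the average intercept and average slope across students
model = smf.mixedlm("deep_strategy ~ time", data,
                    groups=data["student"], re_formula="~time")
result = model.fit()
print(result.summary())
```

The same model could be specified as a latent growth model in an SEM program; the multilevel form is shown here only because it requires the least setup.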

If researchers are interested in growth on several strategy variables, growth in one strategy can be related to growth in another strategy. These ‘growth on growth’ models can be very informative for developmental theories of strategy use (e.g., Siegler's (2005) ‘overlapping waves’ model of strategy development). For example, Ahmed et al. (2013) could have correlated the intercepts and slopes among shallow, deep, and metacognitive learning strategies. These correlations would then suggest how strategies develop in tandem. For example, if shallow learning strategies decline and deep learning strategies increase, these two slopes will have a negative correlation (the more one’s shallow learning strategies decline, the more one’s deep learning strategies increase). There are many different options for testing growth-on-growth models; correlating intercepts and slopes is only one option (Wickrama, Lee, O’Neal, & Lorenz, 2016).
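
As a rough illustration of the logic, the sketch below estimates a slope for each participant on two simulated strategy scales and correlates the two sets of slopes. This two-step shortcut only stands in for the parallel-process growth models that would normally estimate these associations jointly; the data, scale names, and effect sizes are hypothetical.

```python
# Illustrative two-step sketch of a 'growth on growth' question: estimate
# each student's slope for two strategy scales, then correlate the slopes.
# (A full parallel-process growth model would estimate this jointly.)
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(7)
n_students, waves = 200, 3
time = np.arange(waves)

# Simulated so that shallow strategies tend to decline while deep strategies
# rise, and the two rates of change are related across students.
change = rng.normal(0, 0.25, n_students)
shallow = (4.0 - (0.4 + change)[:, None] * time
           + rng.normal(0, 0.3, (n_students, waves)))
deep = (2.5 + (0.4 + change)[:, None] * time
        + rng.normal(0, 0.3, (n_students, waves)))

# Per-student OLS slopes (degree-1 polynomial fit over the three waves)
shallow_slopes = np.polyfit(time, shallow.T, 1)[0]
deep_slopes = np.polyfit(time, deep.T, 1)[0]

# A negative correlation here mirrors the example in the text: the more a
# student's shallow strategies decline, the more their deep strategies rise.
r, p = pearsonr(shallow_slopes, deep_slopes)
print(f"correlation between slopes: r = {r:.2f}, p = {p:.3f}")
```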

As with SEM, a disadvantage of GCM is that it requires large samples, and having more measured variables (i.e., measuring at more time points) gives more power. With three time points, only the most basic models can be run; with more time points, more complex models can be run. Most published research fits quadratic models, but more types of functions are becoming available in software packages and apps (e.g., logistic, exponential, Gompertz), and the common functions may not be the best-fitting ones. Measures generally must have very high internal consistency reliability (> .90 is ideal), which can be difficult to obtain. In addition, researchers should show that participants interpret the questionnaire or test items similarly over time, a property called measurement invariance. If factor loadings for items change over time, that suggests the underlying construct does not have the same meaning to participants over time. Visual checks on raw data are critical for GCM; they are time consuming and may show that there is not just one shape of growth (see Growth Mixture Modeling below). GCM also has a steep learning curve for researchers, although those who already know SEM or multilevel modeling will be able to master the technique faster.
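
As one example of such a visual check, the following is a minimal sketch that plots every participant's raw trajectory (a 'spaghetti plot') along with the wave means before any model is fit. The simulated data and variable names are hypothetical.

```python
# Illustrative sketch: plot each participant's raw trajectory before fitting
# a growth model, to check whether a single growth shape is plausible.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

rng = np.random.default_rng(5)
n_students, waves = 100, 3
student = np.repeat(np.arange(n_students), waves)
time = np.tile(np.arange(waves), n_students)
strategy_use = (rng.normal(3.0, 0.5, n_students)[student]
                + rng.normal(0.3, 0.2, n_students)[student] * time
                + rng.normal(0, 0.3, n_students * waves))
data = pd.DataFrame({"student": student, "time": time,
                     "strategy_use": strategy_use})

fig, ax = plt.subplots(figsize=(6, 4))
for _, person in data.groupby("student"):
    ax.plot(person["time"], person["strategy_use"],
            color="grey", alpha=0.3, linewidth=0.8)

# Overlay the sample means at each wave
means = data.groupby("time")["strategy_use"].mean()
ax.plot(means.index, means.values, color="black", linewidth=2, marker="o")
ax.set_xlabel("Measurement wave")
ax.set_ylabel("Strategy use")
plt.show()
```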

Transition Analyses/Data Mining

As briefly mentioned by Cho et al. (this volume), researchers may be interested in specific multiple-strategy patterns within learning process data. For example, after enacting a low-level strategy in a think-aloud, what comes next? There are many statistical approaches to transition analysis, including ones that resemble the chi-squared test (e.g., Wampold & Margolin, 1982), hidden Markov models (e.g., D’Mello, Olney, & Person, 2010), and log-linear models (Jadallah et al., 2011). All of these identify pairs of coded variables (such as ‘from’ monitoring ‘to’ recalculating) that occur disproportionately often relative to what would be expected if strategy sequences were random. For example, Jadallah et al. found that 23% of the time after a Collaborative Reasoning teacher prompted a ‘give evidence’ strategy, the next move was a student giving evidence. Furthermore, 17% of the time after giving evidence, the teacher offered praise for using that strategy, and 13% of the time praise led to giving more evidence. These transition analyses showed that the intervention had its effects on achievement because the teacher strategy prompts did in fact result in students using those strategies.
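
To show the basic logic, the following is a minimal sketch of a lag-1 transition analysis on a hypothetical coded sequence: it cross-tabulates each event against the event that follows it and compares the counts to what random sequencing would predict. The codes and the omnibus chi-squared test are illustrative only and do not reproduce the Wampold and Margolin (1982) procedure or the studies cited above.

```python
# Illustrative sketch of a lag-1 transition analysis: cross-tabulate each
# coded event against the event that follows it, then compare the observed
# transition counts to what random sequencing would predict.
# The codes and the sequence here are hypothetical.
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

codes = ["prompt_evidence", "give_evidence", "praise", "off_task"]
rng = np.random.default_rng(3)
sequence = rng.choice(codes, size=500, p=[0.3, 0.35, 0.2, 0.15])

events = pd.Series(sequence, name="event")
transitions = pd.crosstab(events[:-1].values, events[1:].values,
                          rownames=["from"], colnames=["to"])
print(transitions)

# Row proportions: given a 'from' code, how often does each 'to' code follow?
print(transitions.div(transitions.sum(axis=1), axis=0).round(2))

# Omnibus test of whether transitions depart from random sequencing
chi2, p, dof, expected = chi2_contingency(transitions)
print(f"chi2 = {chi2:.1f}, df = {dof}, p = {p:.3f}")
```

The row proportions are the kind of quantity reported in the Jadallah et al. example above (e.g., how often 'give evidence' follows a teacher prompt), and the expected counts from the test provide the random-sequence baseline.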

Other approaches to finding patterns in strategy data fall under data mining methods, such as classification trees, neural networks, machine learning, time series analysis, social network analysis, and so on. Several of the methods discussed here, including LCA, hidden Markov models, and even some regression models, are also often referred to as data mining methods. In many cases, these approaches are used to seek out transition patterns within strategy use data. Researchers interested in strategy use should stay abreast of these emerging analytical techniques, as they may come into wider use in the coming years.
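
As one small example of a data mining approach, the sketch below fits a classification tree that predicts a hypothetical outcome from strategy-use frequencies. The features, outcome, and data are invented for illustration and do not correspond to any study cited here.

```python
# Illustrative sketch of one data mining approach: a classification tree
# predicting a hypothetical outcome from strategy-use frequencies.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(11)
n = 400

# Hypothetical per-student frequencies of three coded strategies
monitoring = rng.poisson(4, n)
rereading = rng.poisson(6, n)
summarizing = rng.poisson(2, n)
X = np.column_stack([monitoring, rereading, summarizing])

# Hypothetical outcome, loosely tied to monitoring and summarizing
passed = (monitoring + 2 * summarizing + rng.normal(0, 2, n)) > 8

X_train, X_test, y_train, y_test = train_test_split(X, passed, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

# The printed rules show which strategy frequencies split the groups
print(export_text(tree, feature_names=["monitoring", "rereading", "summarizing"]))
print(f"held-out accuracy: {tree.score(X_test, y_test):.2f}")
```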

The advantage of transition analyses is that they are the only technique that can answer questions about sequences rather than about simple co-occurrence within a transcript or log. Transition analyses have the disadvantages of low statistical power, and results may be sample-specific. Researchers need to be clear about which codes will be included in transition analyses (e.g., whether to include an off-task code), as the analysis can change substantially when even a single code is included or excluded. In addition, analyses with fewer codes are more likely to reveal meaningful patterns. Moreover, there are challenges in deciding whether to analyze ‘the next’ turn (lag = 1) and/or ‘the next turn after that’ (lag = 2). Transition analyses using the Wampold and Margolin (1982) method are easy to run using the Multiple Episode Protocol Analysis freeware from Erkens and colleagues (e.g., Janssen, Erkens, & Kanselaar, 2007).
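
To clarify what the lag decision means in practice, the brief sketch below pairs each coded event with the event one step later (lag = 1) versus two steps later (lag = 2); the short sequence is hypothetical.

```python
# Illustrative sketch: pairing each event with the event one step later
# (lag = 1) versus two steps later (lag = 2). The coded sequence is hypothetical.
import pandas as pd

sequence = ["prompt_evidence", "give_evidence", "praise",
            "give_evidence", "off_task", "give_evidence"]
events = pd.Series(sequence)

# shift(-1) lines each event up with its immediate successor; shift(-2)
# lines it up with the event two turns later
lag1 = pd.crosstab(events, events.shift(-1),
                   rownames=["from"], colnames=["to (lag 1)"])
lag2 = pd.crosstab(events, events.shift(-2),
                   rownames=["from"], colnames=["to (lag 2)"])
print(lag1, lag2, sep="\n\n")
```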

 