Construct Measurement

Simulations, when coupled with technology, present new opportunities to significantly expand the domain of constructs assessed in a selection context. Advances in selection science, such as the measurement of response latencies to enhance the assessment of personality (Ranger & Kuhn, 2013) and the collection of other micro-behaviours, such as click patterns and mouse-over times, which are being explored as possible predictors of workplace behaviour (Reynolds & Dickter, 2010), are enriching the evidence used to make inferences about the constructs being measured and expanding construct representation.

However, we contend that simulations offer the most promise for expanding the domain of constructs assessed as they are uniquely able to model complex interactions among the traits required for job performance that might otherwise be difficult to capture (Aguinis, Henle & Beaty, 2001).

As one example, current research and theory suggest that, now more than ever, leaders need to be agile learners and that this ability differentiates successful from unsuccessful leaders (Charan, Drotter & Noel, 2000; Goldsmith, 2007; McCall, 1998). Learning agility is defined as the willingness and ability to learn from experience and subsequently apply that learning to perform successfully under novel or first-time conditions (De Meuse, Dai & Hallenbeck, 2010; Lombardo & Eichinger, 2000). It is important to note that true learning agility is a twofold attribute: individuals must be both willing and able to learn from experience. At its core, learning agility is founded on a compilation of cognitive, personality and motivational factors and involves extracting key lessons from one situation and applying them later to a different situation. When assessing learning agility, practitioners have typically measured the personality and motivational factors (the willingness) and the underlying cognitive ability required, leaving the true ability to learn and use new information unmeasured. Simulations allow for refined measurement of the ability component of the construct. They add benefit because they can require applicants to absorb, integrate and interpret information in a real-time, simulated environment and to model, explore and attempt different strategies when using new information; this is particularly important because these behaviours reflect the very essence of learning agility (Malka et al., 2012).

Simulations may simultaneously measure an array of constructs. It has often been difficult to assess their internal consistency reliability and construct validity, and this has led to some controversy over their use in a selection context (Whetzel et al., 2012). What simulations measure, and why they are predictive of job performance, is in most cases not well understood (Callinan & Robertson, 2000). This is particularly true the further one moves away from direct point-to-point correspondence between the predictor and the criterion space. When there is direct overlap between the assessment and the work performed on the job, we move away from a traditional focus on construct- or trait-based assessment towards a focus on what people are expected to do on the job (Guion, 2010). But direct overlap is not always possible, for a variety of reasons. As simulations are built and used in selection contexts, it is important to have a solid understanding of what is actually being measured.

To the extent that simulations are not high in stimulus and response fidelity, there is the potential to introduce construct-irrelevant variance. As simulations are built, and technology is continually leveraged for their delivery, this possibility needs to be explored. When engaged in a computer-based simulation, for example, applicants depend on their ability to operate the computer or other hardware. With a highly complex branching simulation or game, the ability to navigate the environment itself may be a confounding variable, one not actually related to the constructs required for job performance (e.g., Zenisky & Sireci, 2002). Such variance may attenuate test-takers' performance in ways unrelated to their true standing on the targeted constructs of interest and consequently undermine the validity of the selection outcomes. The use of technology for simulation delivery may therefore be most appropriate for jobs where technology use is a fundamental job requirement. Similarly, it has been suggested that video-based SJTs may provide irrelevant contextual information and unintentionally introduce additional error into the measurement (Weekley & Jones, 1997). These irrelevant constructs may also have different effects on test performance across different subgroups (Yongwei, Sireci & Hayes, 2013).

 