
Validity

Meta-analytic studies have found simulations to be valid predictors of job performance. In perhaps one of the most influential studies examining criterion-related validity evidence for a variety of predictor measures, Schmidt and Hunter's (1998) meta-analysis found work samples and assessment centres to have corrected validity coefficients of 0.54 and 0.37, respectively, for the prediction of job performance. Interestingly, the corrected validity coefficient for tests of general mental ability was 0.51, suggesting that work samples are some of the best predictors available. A meta-analysis of performance tests conducted by Roth, Bobko and McFarland (2005) found a corrected validity coefficient of 0.33. McDaniel, Hartman, Whetzel and Grubb (2007) examined the validity of SJTs; their meta-analysis found an overall corrected validity coefficient of 0.26. Finally, in meta-analyses of assessment centre validity, Gaugler, Rosenthal, Thornton and Bentson (1987) found a corrected validity coefficient of 0.36 for overall ratings, and Arthur, Day, McNelly and Edens (2003) found corrected validity coefficients ranging from 0.25 to 0.39 for dimension ratings. It is important to note that simulations are frequently developed to assess multiple constructs, and their level of observed validity will vary according to what is measured.
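
The "corrected" coefficients reported in these meta-analyses adjust the observed correlations for statistical artefacts such as criterion unreliability and range restriction. The sketch below shows only the standard correction for criterion unreliability; the observed coefficient and reliability figure are hypothetical illustration values, not the inputs to any of the studies cited above.

```python
# Minimal sketch of the correction for criterion unreliability used in
# validity meta-analyses. The numbers are hypothetical illustration values,
# not figures taken from the meta-analyses cited in the text.

def correct_for_criterion_unreliability(r_observed: float, criterion_reliability: float) -> float:
    """Disattenuate an observed validity coefficient for unreliability in the criterion."""
    return r_observed / criterion_reliability ** 0.5

# Example: an observed validity of 0.39 with an assumed criterion reliability of 0.52
r_corrected = correct_for_criterion_unreliability(0.39, 0.52)
print(round(r_corrected, 2))  # ~0.54
```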

In addition to excellent criterion-related validity, simulations have been found to provide incremental validity beyond measures of cognitive ability. For example, Roth and colleagues (2005) found that performance tests added to the prediction of job performance when cognitive ability was also used as a predictor. Clevenger, Pereira, Wiechmann, Schmitt and Harvey (2001) found incremental validity for SJTs even when they were included in a selection battery consisting of measures of mental ability, conscientiousness, job experience and job knowledge. Schmidt and Hunter (1998) found that assessment centres have modest incremental validity when used with a test of general mental ability.
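
Incremental validity is typically examined by comparing the variance in job performance explained by cognitive ability alone with the variance explained once the simulation score is added to the model. The sketch below illustrates that comparison on simulated data; the variable names and effect sizes are invented for illustration and do not come from the studies cited.

```python
# Hedged sketch of an incremental-validity check: compare R^2 for cognitive
# ability alone with R^2 once an SJT score is added. All data are simulated.
import numpy as np
from numpy.linalg import lstsq

rng = np.random.default_rng(0)
n = 500
cognitive = rng.normal(size=n)
sjt = 0.4 * cognitive + rng.normal(size=n)                 # SJT partly overlaps with ability
performance = 0.5 * cognitive + 0.3 * sjt + rng.normal(size=n)

def r_squared(X, y):
    """Ordinary least squares R^2 with an intercept term."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_ability = r_squared(cognitive.reshape(-1, 1), performance)
r2_both = r_squared(np.column_stack([cognitive, sjt]), performance)
print(f"Incremental R^2 from adding the SJT: {r2_both - r2_ability:.3f}")
```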

To date, a substantial number of meta-analytic studies support the validity of traditional simulations. However, technology has changed the very nature of simulations, and the ever-increasing rate of change has meant that science has not been able to keep pace. For example, few research studies have examined the validity of computer-based in-baskets or branching role-plays (Olson-Buchanan et al., 1998; McNelly, Ruggeberg & Hall, 2011; Mueller-Hanson et al., 2009). Additional studies and meta-analytic research are needed before stable estimates of validity can be achieved.
