Specificity and generalizability
When developing simulations, practitioners must weigh the level of specificity needed against the level of generalizability required. The more contextualized a simulation, the higher its fidelity and the more situation- and job-specific it becomes (Callinan & Robertson, 2000). For example, a hands-on performance test may be required to assess the specific tasks or technical skills needed for a given job, whereas assessment centres can target behaviours required across many jobs. There is thus a tradeoff between the fidelity of a simulation, and hence what it assesses, and the generalizability of its outcomes (Reynolds & Dickter, 2010; Zenisky & Sireci, 2002): high-fidelity simulations are less generalizable across jobs. In some situations, however, sacrificing fidelity to achieve generalizability is justified (Boyce et al., 2013), and research suggests that this sacrifice need not be detrimental, at least in terms of criterion-related validity. A meta-analysis by Whetzel, Rotenberry and McDaniel (2014) compared generic and job-specific in-basket content and found that specificity had little impact: the two types of in-basket exhibited equivalent operational validity estimates, suggesting no meaningful difference in validity between them.