In The Nature of Intelligence, Thurstone (1924, p. xiv) wrote: ‘there is considerable difference of opinion as to what intelligence really is, but, even if we do not know just what intelligence is, we can still use the tests as long as they are demonstrably satisfactory for definite practical ends.’ Ninety years later, we can conclude that intelligence and cognitive tests have proven to be excellent procedures for the practical purposes of personnel selection and that, although differences of opinion about what intelligence is persist, important advances in the theoretical account of cognitive ability have been made. There is no consensus about the nature and existence of a general factor of intelligence, but the vast majority of researchers now agree that a general factor can be found when a large battery of cognitive tests is factor analysed, and the majority of the psychometric models of cognitive abilities include a general factor (Horn’s model is the exception). There is less agreement regarding the number of levels in the hierarchy and the number and type of medium and narrower abilities.
Though survey data show that GMA tests are frequently used in personnel selection across the world, popularity alone is not a sufficiently good reason for using a procedure for decision making. For example, graphology is very popular in some countries (e.g., Brazil, France and Israel), but the empirical evidence shows that its validity for predicting job proficiency is zero. In other words, if the validity of a procedure is zero, using it would be the same as using a table of random numbers to choose an applicant. The empirical evidence cited in previous sections suggests that, in a rapidly changing world of work, GMA is the best predictor of future adaptability to new tasks and functions.
Succinctly, the state of the art suggests that: 1) the validity of cognitive ability tests is generalizable across occupations and situations, and moderated by job complexity, so that operational validity is 0.40 or higher; 2) the relationship between GMA and task performance is linear and its effects are primarily indirect through job knowledge; 3) GMA moderately predicts OCB, but not CWB; 4) there are ethnic and group differences in both GMA and the specific cognitive abilities, and the standardized differences are greater for lower job complexity levels and for crystallized ability; 5) although there is some evidence of differential validity, there is no differential prediction (bias) for African-Americans.
Finally, the findings underscore that cognitive ability tests may be valuable, cost-saving instruments for companies, ensuring high standards of individual job performance, which in turn raises productivity (Scherbaum et al., 2012).