There is a large literature on cross-cultural and cross-national differences on personality measures (e.g., Triandis & Suh, 2002). While most of the literature shows that the structure of personality is similar across cultures (e.g., Hough et al., 2001; McCrae & Costa, 1997), there are few studies examining these differences in the context of the personality predictors used in employee selection contexts. Both te Nijenhuis, van der Flier and van Leeuwen (1997) and te Nijenhuis, van der Flier and van Leeuwen (2003) report that the basic taxonomy of commonly used personality factors (e.g., neuroticism and extraversion) can be generalized to immigrant groups in The Netherlands, even for first-generation immigrants, and that the items function similarly across groups. These studies also report that the score differences between immigrants and non-immigrants are small on measures of broad personality traits and on more specific facets of personality.
Impact of cultural content
One area of relevant, general cross-cultural research has focused on the impact of culture-specific content on test performance (e.g., Freedle, 2003; Freedle & Kostin, 1997; Helms-Lorenz et al., 2003; Malda et al., 2010; van de Vijver, 1997, 2008). This line of research has found that test performance suffers when the cultural content embedded in a test differs from the cultural content with which the test-taker is familiar. For example, Malda and colleagues (2010) experimentally manipulated the cultural content embedded in measures of short-term memory, attention, working memory and figural and verbal fluid reasoning to be consistent with either White South African culture or Black South African culture. In this study, Malda and colleagues created test items that were equivalent, but manipulated the cultural content so that it was more familiar to one culture than the other. They found that test performance was better on the test versions that were consistent with the test-taker's culture. The cultural content of the test moderated the relationship between race and test performance.
Freedle and colleagues (Freedle, 2003; Freedle & Kostin, 1997) have found that cultural differences in the use and interpretation of common words can lead to differential item functioning and serve to disadvantage the cultural minority test-taker. Common to both of these research programmes is the observation that the linguistic demands of tests may be confounded with the cultural content embedded in the tests (Ortiz, Ochoa & Dynda, 2012). As Helms-Lorenz and colleagues (2003) explain, 'differential mastery of the testing language by cultural groups creates a spurious correlation between g and intergroup performance differences, if complex tests require more linguistic skills than do simple tests' (p. 13).
In summary, the research comparing different cultural groups on predictors commonly used in an employment context is limited, and most of it focuses on predictor constructs rather than predictor methods. Current research finds that minority cultural groups (e.g., immigrants) tend to score lower on cognitive tests than the majority cultural group (e.g., non-immigrants). However, the size of the differences varies: tests relying on acquired knowledge and the dominant language of the culture show the largest differences, while tests of basic cognitive operations tend to show much smaller differences. As with the other comparisons we have reviewed, non-cognitive constructs tend to show near-zero score differences. However, the degree to which cultural context is embedded in the measurement of the constructs can have a large impact on the nature of the differences observed. As was true for the research on score differences between race and ethnic groups, aspects of the measurement itself can play a role in the differences observed.