Verifiability of biodata and faking
A main difference between personality and biodata inventories is that the latter include a larger number of verifiable or ‘hard’ items, such as basic demographic or background information. These items are uncontrollable (there is nothing one can do to alter one’s place of birth or ethnicity) and intrusive compared to the ‘soft’, more controllable, unverifiable items assessing attitudes and behaviours, such as ‘What are your views on recycling?’, ‘How often do you go to the gym?’, ‘Do you think people should drink less alcohol?’ and ‘Do you like country & western music?’ It has, however, been suggested that unverifiable items increase the probability of faking (Becker & Colquitt, 1992). Indeed, although some degree of inflation does exist for verifiable items, early studies reported inter-correlations in the region of 0.95 between responses given to different employers (Keating, Paterson & Stone, 1950), showing that verifiable items yield very consistent responses even across different jobs. Yet a review of the literature concluded that faking affects both verifiable and non-verifiable items and that attempts to control it have been largely unsuccessful, though empirical keying prevents faking more than other keying types (Lautenschlager, 1994).
One study compared the validity of verifiable and non-verifiable biodata items in call centre employees and applicants (Harold, McFarland & Weekley, 2006). Results showed that although applicants did not score significantly higher on overall biodata items than their incumbent counterparts, non-verifiable items had lower validities in the applicant sample. Harold, McFarland and Weekley concluded that ‘the good news is that a biodata inventory comprised of all verifiable items was equally valid across incumbent and applicant samples regardless of the criterion examined ... [T]he bad news, however, is that the validity of non-verifiable items shrank in the applicant sample’ (2006, p. 343).
These results notwithstanding, the service and teamwork demands of contemporary jobs (Hough, 1998) call for attitudinal and interpersonal constructs to be assessed in order to predict occupational success. Thus, non-verifiable, soft, subjective items will inevitably be incorporated in contemporary biodata scales.
Schmitt and colleagues (2003) proposed that, in order to reduce faking and social desirability, respondents should be asked to elaborate on their answers, a method previously used in ‘accomplishment records’, for example, ‘Give three examples of situations where you worked well under pressure’ or ‘Can you recall past experiences where you showed strength and leadership?’ (Hough, 1984). Results indicated that respondents tended to score lower (be more modest) on items that required elaboration (Schmitt & Kunce, 2002); indeed, scores on elaborative items were 0.6 SD lower, which is approximately the difference found between participants instructed to respond honestly and those asked to ‘fake good’ in laboratory studies (Ellingson, Sackett & Hough, 1999; Ones, Viswesvaran & Reiss, 1996). A subsequent study showed that the validities of elaborative items were in line with those of standard biodata items and in some cases even higher (Schmitt et al., 2003). In addition, validities (predicting self-ratings, self-deception, impression management, GPA and attendance) were unaffected by elaboration instructions even though lower means were found for the elaborative items.
Other methods for reducing the likelihood of faking range from warnings (Schrader & Osburn, 1977), such as ‘Any inaccuracies or false information provided will be checked and result in your no longer being considered for this job’, to the more creative use of bogus (fake) items that may catch out respondents who fake (Paunonen, 1984), for example, ‘How many years have you been using the HYU-P2 software?’ Because the item refers to something that does not exist, any claimed experience with it exposes the respondent as faking. However, including bogus items is widely thought of as unethical.