What to Assess?

We saw in Chapter 1 that the ‘competence’ movement in the 1980s benefited from the drive towards vocational workplace accreditation. A significant proportion of the workforce in most countries had left school with few, if any, formal qualifications but had nonetheless developed into highly skilled workers. How could that ability be recognised and communicated in such a way that potential employers understood what a job applicant was capable of? Accreditation of Prior Learning (APL) was one approach used in the United Kingdom. Trained vocational assessors would look at a portfolio of work and decide if it met the standard for the award of a qualification. Implicit in this approach is the assumption that a candidate could provide artefacts that represented evidence of the breadth of their experience and the complexity of the work undertaken. The evidence would be workplace-specific, but it would be set against a generic competence framework.

Exam scores, equally, represent evidence of achievement but, in a competence-based approach, success can only be confirmed through demonstration of competence in the real world, not the exam hall. A competence-based method of assessment requires more valid evidence: a high-school certificate in mathematics is not proof of an ability to manage inventory in a warehouse. It follows, then, that assessment requires the individual to demonstrate their ability.

TABLE 12.1

Suggested Marker Framework

Manual handling: Control of aircraft flight path through the physical manipulation of controls

Application of procedures: Interpretation, application and enactment of policies, procedures and checklists

Management of systems: Configuration of aircraft systems, utilities, automation and navigation

Communication: Quality and effectiveness of intra- and inter-crew communication

Task management: Control of work in space and time

Dekker et al. (2010) raised concerns about the epistemological confidence we can legitimately have in the constructs we use to create knowledge of the world. In Table 11.6, I offered a set of ‘competencies’ which, I suggested, describe the fundamental processes that underpin competent performance. While they might offer useful targets for training interventions, I have not presented any scientific evidence that any of the items exist in any tangible sense. The fact that they are useful does not make them truths. If we now scale up the problem to that of assessment, it is highly unlikely that we would be able to observe all of these constructs in any meaningful way in the workplace. In Table 12.1, I offer a framework of behaviours that are observable and represent the outputs from the competence set. Its purpose is to facilitate assessment by providing a tool that can be used in a consistent manner.

Weber (2016) compared the performance of assessors in two airlines, one of which did not have a framework to guide the process. Differences were observed in the grades awarded: the raters at the airline without a framework generally gave lower scores, failed pilots more often than the assessors at the airline with guidance and produced a larger spread of scores. Using a standardised methodology, then, supports a more consistent approach to assessment.
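Weber's comparison can be illustrated numerically. The sketch below uses made-up 1–5 grades (not Weber's data) and an assumed pass mark to show the three effects reported: a lower mean, a higher failure rate and a larger spread among the raters working without a framework.

```python
from statistics import mean, stdev

# Hypothetical grades awarded by assessors at two airlines
# (illustrative numbers only; not Weber's actual data).
with_framework = [3, 3, 4, 3, 4, 3, 3, 4, 2, 3]
no_framework = [2, 4, 1, 3, 5, 2, 1, 4, 2, 3]

PASS_MARK = 2  # assumed: a grade above 2 counts as a pass

for label, grades in [("with framework", with_framework),
                      ("no framework", no_framework)]:
    fails = sum(1 for g in grades if g <= PASS_MARK)
    print(f"{label}: mean={mean(grades):.2f}, "
          f"spread (sd)={stdev(grades):.2f}, "
          f"fail rate={fails / len(grades):.0%}")
```

With these illustrative figures, the group without a framework shows a lower mean grade, five times the failure rate and roughly double the standard deviation, mirroring the pattern Weber reports.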

A poorly designed assessment tool results in raters falling back on their own native world view and then shaping their assessments to fit the marker scheme. Not only do the markers require adequate framing, but the boundaries between them also need to be clear. Given that we are trying to tease out samples from a stream of behaviour, we need to be able to communicate what units of performance to select into which category. If the clusters are too large, then there will probably be overlap between categories. As a result, two assessors are likely to put the same performance into different categories or the same observer might be inconsistent across observations, leading to unreliability. For example, in Table 11.3, ‘Application of Knowledge’ includes the statement ‘knows where to source required information’ while ‘Application of Procedures’ includes ‘identifies where to find procedures and regulations’. It is difficult to see how these behaviours differ. Weber et al. (2013) explored the reasoning behind the scores awarded by assessors. It was found that some assessors gave the same score but for different reasons, whereas others gave a different score but used the same reasons. The degree to which the assessment category is open to interpretation will influence variance between observers. On the other hand, if the categories are too tightly defined, then the number of markers we need to capture the full range of performance will increase. Beaubien et al. (2004) suggested that fewer markers are preferable as the effort needed to assess against too many categories degrades assessor performance.
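The inter-rater unreliability described above can be quantified. One common statistic (not one named in the text) is Cohen's kappa, which corrects the raw proportion of agreement for the agreement expected by chance. The sketch below applies it to hypothetical category assignments by two assessors sorting the same ten behaviour samples into two overlapping marker categories.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement between two raters on the same items, corrected for chance."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: probability both raters pick the same category at random.
    chance = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
    return (observed - chance) / (1 - chance)

# Hypothetical data: "KNO" = Application of Knowledge, "PRO" = Application
# of Procedures -- the two categories the text notes are hard to separate.
a = ["KNO", "KNO", "PRO", "PRO", "KNO", "PRO", "KNO", "PRO", "KNO", "PRO"]
b = ["KNO", "PRO", "PRO", "KNO", "KNO", "PRO", "PRO", "PRO", "KNO", "KNO"]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # -> kappa = 0.20
```

A kappa of 0.20 on these invented assignments would conventionally be read as only slight agreement, which is the kind of result poorly bounded categories tend to produce.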

Another problem we need to recognise, as was mentioned earlier, is that not all of the ingredients of a competent performance are ‘observable’ in the workplace. If we consider the APL approach, flight documentation (weight and balance, fuel slips, completed flight plans etc.) is an artefact that captures underpinning knowledge. Flight data and air traffic control (ATC) transmissions also represent evidence about proficiency. A full exploration of ‘competence’ would require multiple modes of sampling including performance, underpinning knowledge and other associated skills. Because of the effort such an approach would involve, the default mode has been to draw conclusions from samples of behaviour collected in the simulator or on the flight deck.

An observational framework is no more than a tool through which to capture aspects of underlying competence. To have confidence in the data gathered, we need to strike a balance between comprehensive coverage and a manageable system. The solution is to identify a few clusters of behaviour that are significant in terms of overall operational effectiveness and clearly map onto the competence model. They must be observable and be described in such a way that the marker can be used in a consistent and standard way by a cohort of assessors. Figure 12.1 illustrates how competencies map onto markers. The marker, in effect, is an aggregate of performance. It provides the starting point for analysis in that a poor performance in any marker category can be decomposed into the relevant competence components.
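The mapping from markers back to competence components might be sketched as a simple lookup: a weak score against a marker points the assessor at the underlying components to probe. The marker names below follow Table 12.1, but the component names are illustrative placeholders, not taken from the text.

```python
# Hypothetical mapping of Table 12.1 markers onto underlying competence
# components (component names are illustrative, not from the book).
MARKER_COMPONENTS = {
    "Manual handling": ["psychomotor control", "anticipation"],
    "Application of procedures": ["knowledge application", "monitoring"],
    "Management of systems": ["knowledge application", "workload management"],
    "Communication": ["information exchange", "shared understanding"],
    "Task management": ["prioritisation", "workload management"],
}

def decompose(marker: str) -> list[str]:
    """Given a marker with a poor score, return the competence
    components an assessor or trainer would examine further."""
    return MARKER_COMPONENTS.get(marker, [])

print(decompose("Task management"))  # -> ['prioritisation', 'workload management']
```

Note that several markers share components, which is the point of the aggregation: one weak competence can surface through more than one observable marker.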
