
Assessment Frameworks and ‘Competence’

Once the concept of assessing ‘CRM’ skills was accepted, the best mode of sampling was considered to be observation on the flight deck and, to that end, various marker frameworks were proposed as tools for capturing performance in the workplace. There are three methods commonly available for developing assessment schedules. The first method looks at past failure and attempts to identify behaviours that would have reduced the probability of an undesired outcome. Probably the longest-standing aviation marker framework, the NASA-UT Crew Effectiveness Markers, represents this approach. It formed the basis of the subsequent University of Texas Behavioural Markers Rating Scale. Table 11.1 illustrates two of the 13 suggested markers, each of which was rated on a four-point scale (Helmreich et al., 1990).

TABLE 11.1

Extract from University of Texas Behavioural Markers Rating Scale

SOP briefing: The required briefing was interactive and operationally thorough (concise, not rushed, and met the requirements; bottom lines were established)

Plans stated: Operational plans and decisions were communicated and acknowledged (shared understanding about plans; everybody on the same page)

The next approach is the ‘expert committee’ model. This method involves panels of subject matter experts debating what acceptable behaviour looks like. The European NOTECHS scheme (Flin et al., 2003) and the more recent EBT/CBT framework (EASA, 2019) exemplify this approach, which is possibly the most common method in use. Examples are given in Tables 11.2 and 11.3.

The third approach is to conduct structured interviews with operating crew, using tools such as the critical incident technique and the repertory grid, to seek examples of both good and bad behaviour. Having assembled a range of statements, subject matter experts then undertake a ‘card sort’ activity (MacLeod, 2005). The set of statements is assigned to categories according to similarity or relatedness. An example is in Table 11.4.

In previous chapters I have used data from a number of line operations safety audit (LOSA) surveys. A content analysis of the narrative reports written by observers is another source of information about crew performance. Table 11.5 illustrates a structure that emerged from 423 comments extracted from reports. Only the most significant clusters are included.

TABLE 11.2

Categories and Elements of NOTECHS



1. Cooperation

  • Team-building and maintaining
  • Considering others
  • Supporting others
  • Conflict-solving

2. Leadership and managerial skills

  • Use of authority and assertiveness
  • Providing and maintaining standards
  • Planning and coordination
  • Workload management

3. Situation awareness

  • Awareness of aircraft systems
  • Awareness of external environment
  • Awareness of time

4. Decision-making

  • Problem definition and diagnosis
  • Option generation
  • Risk assessment and option selection
  • Outcome review

TABLE 11.3

Extract from EASA Competency Framework

Application of Knowledge (KNO)

Description: Demonstrates knowledge and understanding of relevant information, operating instructions, aircraft systems and the operating environment

  • 1. Demonstrates practical and applicable knowledge of limitations and systems and their interaction
  • 2. Demonstrates required knowledge of published operating instructions
  • 3. Demonstrates knowledge of the physical environment, the air traffic environment including routings, weather, airports and the operational infrastructure
  • 4. Demonstrates appropriate knowledge of applicable legislation
  • 5. Knows where to source required information
  • 6. Demonstrates a positive interest in acquiring knowledge
  • 7. Is able to apply knowledge effectively

Application of Procedures and Compliance with Regulations (PRO)

Description: Identifies and applies appropriate procedures in accordance with published operating instructions and applicable regulations

  • 1. Identifies where to find procedures and regulations
  • 2. Applies relevant operating instructions, procedures and techniques in a timely manner
  • 3. Follows SOPs unless a higher degree of safety dictates an appropriate deviation
  • 4. Operates aircraft systems and associated equipment correctly
  • 5. Monitors aircraft systems status
  • 6. Complies with applicable regulations
  • 7. Applies relevant procedural knowledge

TABLE 11.4

Interview-based Marker Construction

Task Management

This dimension relates to the conduct of the work. It includes the consistent and appropriate use of checklists and procedures, and making effective use of time. It also includes avoiding distraction and maintaining the bigger picture of things happening in and around the aircraft. There are three critical clusters:

• Use of SOPs (understanding and interpretation of SOPs, their application, attention to detail, diligence)

• Creates solutions, adapts procedures and improvises as demanded by situation. Able to work within resource constraints. Manages time by prioritising tasks

• Makes an appreciation of risk and considers implications of actions.

Positive indicators include:

A consistent, but flexible, use of SOPs. Monitors the conduct of work during busy periods and positively verifies that tasks have been completed. Maintains an even tempo of work (no unnecessary haste or urgency). Actively develops a mental picture of what to expect during the next stage of flight (e.g. through verbalisation). Anticipates and thinks ahead. Is aware of time available/remaining.

Negative indicators include:

Too strict an adherence to or rigid application of SOPs. Spending too much time out-of-the-loop on admin tasks. Rushing or delaying actions unnecessarily.

TABLE 11.5

Emergent Structure from LOSA Narratives (% of comments)

Systems management (10.6%)

(Wrong selection, left in wrong position, inappropriate selection, hesitation, confusing systems behaviour)

Application of procedures (13.2%)

(Acting from memory, acts with no clearance, failing to act, incorrect actions, unaware of need to act, inconsistent actions)

Communication (21.5%)

(Explanation, clarification, intervention style)

Task management (22.0%)

(Anticipation, clarification, prior requests forgotten, workload, prioritisation, path management and control)

The approaches illustrated differ in that the first three - the UT, NOTECHS and EBT frameworks - reflect the views of those involved in their construction. They are heavily influenced by the opinions of the collaborators, and this is sometimes apparent in the language used in the descriptions. The latter two are rooted in the observation of workplace performance, and the structure emerges later. One fundamental weakness of these approaches is that they were applied to the problem of developing a process of performance assessment, not to elucidate a comprehensive competence model. They were designed to sample, not to catalogue. By default, then, they will be incomplete in that their coverage of performance will not be comprehensive. Also, their formulation will reflect their intended purpose.

A further problem is that the frameworks described were designed to be used in any context. No consideration was given to whether different categories of aviation demand different competencies. For example, as we have seen, business aviation pilots have to spend time managing the expectations of their clients as well as flying the aircraft. Freighter crews have to deal with complex operational issues, often to do with the cargo they are carrying, while at remote destinations with little operational support. Police and emergency medical support helicopters often have to make decisions under intense pressure. While it is probable that there is a core of common performance across all types of aviation, we also need to address the complete operational envelope, to match training to the risk profile and to the operational demands faced by the user. In the next section, I want to explore what competence requirements emerge from looking at aviation from a systems perspective.
