
Non-Training Interventions

Before looking at the design of training to support competence development, I want to consider alternative approaches to behaviour change. Early versions of CRM

TABLE 11.6

Proposed Competence Model: Supporting Activities

• Causal analysis; risk appraisal; establish the gap between observed and expected; establish abnormal cues based on the mental model; compare assumptions about cause-and-effect relations among cues

• Identify options; establish operational constraints; clarify remaining capability/functionality; plan for contingencies

• Identify actions required; establish resources required; implement the contingency plan

• Reference observed behaviours to expectations; establish deviations from the normal state; use critical thinking

• Validate the rule set; identify information requirements; validate the efficacy of options; establish time requirements

• Use proper phraseology; pay attention to the completeness of standard reports; seek information/clarification and check understanding; exchange information and interpretations to establish a shared understanding of the problem; formulate and communicate hypotheses about cause-and-effect relationships

• Monitor, support others, provide guidance and suggestions; state appropriate priorities; update the situation periodically; resolve opposing interpretations through team conflict resolution

• Create space and time; control stress

training requirements included a reference to ‘organisational factors’, although the term was typically not elaborated on. The latest EASA guidance includes a reference to ‘the operator’s safety and organisational culture’ (EASA, 2017). Again, there is no clarification of what such training is supposed to cover. In Chapter 9, I explored some aspects of the relationship between the organisation and front-line crew. The key message of that chapter was that the legitimate actions of management can generate unintended consequences. From a systems perspective, business models create situations where work processes do not map onto human physiology. The evidence suggests that this mismatch can impair performance and shape risk tolerance, which means that buffering is reduced. Furthermore, differences between individuals in terms of their need for recovery, well-being and LWB result in sickness/absence that places a further burden on the remaining available workforce.

Table 11.7 shows the distribution of pilots listed as long-term sick (LTS) by age compared with the distribution of pilots in each age group. Pilots on LTS have been off work for longer than 19 days and, therefore, need their aviation medical certificate to be reissued. As an aside, the administration of a pilot’s return to work is itself an organisational problem that can increase lost productivity. The group included individuals suffering from cancer, other long-term illness, musculoskeletal problems, sleep disorders and so on. On a daily basis, in this airline, approximately 300 pilots were unfit for duty, of whom one-third were on LTS. Over a 5-year period (2009–2013), LTS events rose from 131 per 1000 pilots to 171 per 1000 pilots while days lost to LTS rose from 9.2/pilot to 14.92/pilot. The data suggests that some age groups are over-represented.

TABLE 11.7

Distribution of Pilots on LTS (n = 2800)

[Table values (% pilots by age group) are not recoverable from the source.]
Table 11.8 looks at the distribution of pilots appearing in a monthly list of those taking more fuel than required by the flight plan. These two tables together suggest that age can affect both availability for work and operational effectiveness. In Chapter 1, I discussed age effects in railway workers in relation to reduced risk perception and over-estimating the efficacy of actions. In the broader context, this data suggests that the demands made on staff, as reflected in contracts of employment and work schedules, might benefit from being tailored to accommodate the effects of age. Organisational design at a strategic level, therefore, may be one route to shaping behaviour at the operational level.

In Chapter 9 we saw that the adoption of fuel efficiency policies was variable and reflected a number of factors, one of which was possibly perceived increased effort. In the study by Gosnell et al. (2019), airline captains were advised that their company was participating in a project that looked at fuel efficiency. Pilots were assigned to one of four groups: a control group; those provided with information about personal performance; those who were set targets; those who were offered a financial incentive (a charitable donation if targets were met). Interestingly, simply being told of the project improved behaviour with the control group making the largest contribution

TABLE 11.8

Excess Fuel Uplift by Age of Captain

[Columns: % Total CNs, % on List — the table values are not recoverable from the source.]
to fuel efficiency during the project. Providing information about performance and setting improvement targets also had an effect. The trial demonstrated that behavioural modifications could result in a potential annualised saving of US$9.15 million at 2014 fuel prices, plus the associated reduction in CO2 emissions. Importantly, one spillover effect was an improvement in job satisfaction for those captains included in the experimental groups. In a post-trial review of the 335 participating captains, approximately half welcomed a continuation of such management strategies while just 4% preferred a return to the pre-study status quo. This framework of interventions could be adapted to other behavioural change targets; significantly, its approach involved the active engagement of crew, target setting and feedback.

It is not my intention to discuss airline management, generally, here. However, from a broader systems safety perspective, it does appear that management activity is a significant factor. From a training perspective, where stress and fatigue, for example, are included in syllabi, there is a danger that the disconnect between the classroom message and the lived experience of the crew will provoke cynicism in trainees. We do have an issue with allocating ownership of the problem in that fatigue management is a shared endeavour between the company and the employee. That said, alignment between training and operational reality is essential if the investment is to be beneficial. Organisational design is clearly one area where operational safety and efficiency can be influenced by means other than a training intervention.

Changes at the level of the organisation are clearly strategic, the impact of which would take time to work through, but technological developments offer other, more immediate, non-training interventions. The use of flight data in safety has a long tradition but advances in data collection, accessibility, processing and visualisation offer considerable potential for non-training interventions. The biggest single barrier to progress in this field is the long-standing paranoia in the industry about data falling into the wrong hands. Pilots do not trust management not to abuse data while airlines fear information being made public. A case could be made for the data generated during a flight to be ‘owned’ by the crew operating that flight. A similar case is being made that internet companies that benefit from access to personal information, browsing and purchasing behaviour should somehow recognise the rights of the individuals who created that data, either through stronger privacy rules or through a usage charge, possibly through corporate taxation. While it is recognised that smarter use of flight data can benefit the broader safety and operational efficiency agenda, it could also be used to develop individual pilot competence.

Post-event visualisation is now commonplace and many safety departments will permit pilots to request that a flight be made available for viewing if the individual had some concerns about performance. However, accepting that reflection and self-analysis are important in refining mental models, routine access to personal information should be the next step in the use of flight data. One such approach is CEFA Aviation’s AMS tool (Figure 11.2). Using cloud-based technology and with powerful confidentiality firewalls in place, it allows pilots to access data from their own flights (de Courville, 2019). It uses a conventional visualisation interface but pilots can look at the approach and landing flight path, controls and displays to see how the event was managed. Displays could be enhanced to direct attention to aspects of control that could be indicators of proficiency. For example, Ebbatson (2009; Ebbatson et al., 2010)


FIGURE 11.2 Flight data feedback. (© 2020 CEFA Aviation.)

studied manual handling skills and found that speed tracking, together with the magnitude and frequency of control inputs, was indicative of recency and, thus, skill decay.

There are now machine learning (ML) algorithms that can make use of large data sets. The application of artificial intelligence and ML to flight data has tended to focus on anomalous events but some use has been made of these approaches to look at normal operations. Li (2013) and her colleagues (Li et al., 2015) used an algorithm to cluster flight data to look for significant patterns and also to detect anomalies. The algorithm first presents density-based relationships between data values, which show the normal distribution of a parameter. Second, it identifies any flight that differs from the normal distribution in a statistically significant way. These outliers are typically those flights that would trigger a flight data monitoring (FDM) alert. In fact, in trials, the algorithm has been shown to be more accurate at detecting anomalous flights than conventional systems. However, my interest here is in how flight data could be used to provide non-training interventions.
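The idea of flagging statistically anomalous flights can be sketched in a few lines. The code below is not Li's algorithm; it is a minimal stand-in, assuming each flight has been summarised as a vector of hypothetical numeric features (say, mean pitch on approach and touchdown speed) and flagging any flight that sits far outside the fleet distribution:

```python
import statistics

def flag_anomalies(flights, z_threshold=3.0):
    """Flag flights whose features deviate strongly from the fleet norm.

    `flights` maps a flight id to a list of numeric features (hypothetical,
    e.g. mean approach pitch, touchdown speed). A flight is flagged if any
    feature lies more than `z_threshold` standard deviations from the mean.
    """
    n_features = len(next(iter(flights.values())))
    means, stdevs = [], []
    for i in range(n_features):
        column = [features[i] for features in flights.values()]
        means.append(statistics.mean(column))
        stdevs.append(statistics.stdev(column))
    anomalous = []
    for flight_id, features in flights.items():
        for value, mu, sigma in zip(features, means, stdevs):
            if sigma > 0 and abs(value - mu) / sigma > z_threshold:
                anomalous.append(flight_id)
                break  # one strong deviation is enough to flag the flight
    return anomalous
```

A production FDM system would work on full time series and use density-based clustering rather than simple z-scores, but the principle of referencing each flight to the distribution of normal operations is the same.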

Figure 11.3 shows the data for, in this case, pitch angle on final approach. The data was collected from 300 arrivals made by Airbus A-350 aircraft. The central darker zone represents 50% of the most tightly clustered data. The outer, lighter zones represent 20% of the data at each margin of the distribution. Because this is a statistical technique, there will always be some loss of data at each stage of the algorithm. Pitch angle is a parameter that is monitored in the FDM programme and by overlaying the trigger threshold, it is possible to illustrate how close the normal operation approaches an arbitrary boundary: the FDM programme is a surveillance mechanism and triggers are set at values that warrant attention but might not necessarily equate to a system boundary. In this case, it would be possible to set actual upper and lower limits. At the upper limit, the pitch angle on touchdown would result in a tail strike while, at the lower limit, in effect, the aircraft would not be flaring adequately and the result would be a hard landing. We have now established the buffering capacity of the system in relation to this one parameter.
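One plausible reading of the banding described above is a set of percentile envelopes over a single parameter. In the sketch below the band edges, and the touchdown pitch limits, are illustrative assumptions, not A350 figures:

```python
def band_edges(values):
    """Approximate the visual bands for one time slice of a parameter.

    Returns (p5, p25, p75, p95): the central band [p25, p75] holds 50% of
    the data and each outer band a further 20%, discarding the most extreme
    5% at either end -- a simplified stand-in for the clustering step.
    """
    ordered = sorted(values)
    def pct(p):
        # nearest-rank percentile on the sorted sample
        k = max(0, min(len(ordered) - 1, round(p / 100 * (len(ordered) - 1))))
        return ordered[k]
    return pct(5), pct(25), pct(75), pct(95)

def within_limits(pitch_at_touchdown, lower=2.0, upper=8.5):
    """Check touchdown pitch against hypothetical hard-landing (lower) and
    tail-strike (upper) limits; real limits would come from the OEM."""
    return lower <= pitch_at_touchdown <= upper
```

Overlaying the FDM trigger threshold and the two hard limits on these bands shows how much buffer normal operations actually leave against the system boundary.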

The algorithm can also look at a specific flight and Figure 11.3 also shows the trace of a single flight superimposed on the general distribution. We can now let a pilot look at their performance compared to the normal distribution. Of course, a single parameter is unlikely to fully explain an event and to understand what happened on this approach we would need to look at, say, turbulence as measured in vertical acceleration and possible wind components (head/tail or crosswind). If a parameter is captured in flight data, it can be presented using this technology. Smarter use of data coupled with the use of leading indicators offers the prospect of intelligent tutor tools that can offer feedback to pilots post-flight based on actual data drawn from their own performance. There are, nonetheless, some problems with this concept. Data simply captures inputs to the aircraft and subsequent configurations and responses. It cannot capture either motive - the reason why an input was made - or the collaborative aspects of trajectory management.

One aspect of collaboration, of course, is communication. Looze et al. (2014) looked at the prosodic characteristics of speech. Prosody comprises intonation, tone, stress, and rhythm. Tools already exist that can evaluate cockpit communication based on the prosodic components such as pitch and cadence. Chapter 8 (Excerpt 8.2) discussed pauses and overlaps in conversation and prosodic analysis is capable of detecting these features (Figure 11.4).
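At its simplest, pause and overlap detection needs only the start and end time of each speech turn. The sketch below assumes the turns have already been segmented and timestamped, which is the hard part that dedicated prosodic-analysis tools actually solve:

```python
def pauses_and_overlaps(turns, pause_threshold=0.5):
    """Detect inter-turn pauses and overlaps from timestamped speech turns.

    `turns` is a list of (speaker, start_s, end_s) sorted by start time.
    A gap longer than `pause_threshold` seconds between consecutive turns
    is a pause; a negative gap (the next turn starts before the previous
    one ends) is an overlap.
    """
    events = []
    for (_, _, prev_end), (speaker, start, _) in zip(turns, turns[1:]):
        gap = start - prev_end
        if gap < 0:
            events.append(("overlap", speaker, -gap))
        elif gap > pause_threshold:
            events.append(("pause", speaker, gap))
    return events
```

Plotting these events against the flight timeline is what allows suboptimal episodes of crew interaction to be pinpointed.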

Graphical analysis of speech can point to episodes during crew interaction where performance was suboptimal (time 06:30 in Figure 11.5). Semantic analysis is the


FIGURE 11.3 Cluster analysis of flight data. (Courtesy Dr. Lishuai Li.)


FIGURE 11.4 Pauses and overlaps in crew communication. (© 2020 Vocavio Technologies.)


FIGURE 11.5 Prosodic analysis of pilot communication. (© 2020 Vocavio Technologies.)

next stage and the Vocavio system is already capable of evaluating SOP compliance based on detecting keywords. The functional communication framework outlined in Chapter 8 points towards semantic triggers that could be used to identify types of acts based on crew dialogue.
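Keyword-based semantic triggering can be illustrated with a simple lookup. The callouts and act labels below are hypothetical, and this is not the Vocavio system itself; the point is only the mechanism of mapping detected phrases to types of communicative act:

```python
# Hypothetical mapping of trigger phrases to types of communicative act.
SOP_TRIGGERS = {
    "flaps": "configuration",
    "stabilised": "approach",
    "checklist complete": "checklist",
}

def detect_sop_acts(transcript):
    """Tag transcript lines with the communicative act they suggest,
    based on simple keyword triggers.

    `transcript` is a list of (speaker, utterance) pairs.
    """
    acts = []
    for speaker, line in transcript:
        lowered = line.lower()
        for keyword, act in SOP_TRIGGERS.items():
            if keyword in lowered:
                acts.append((speaker, act))
    return acts
```

Comparing the sequence of detected acts against the expected SOP flow is one way such a system could score compliance.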

The use of flight data and communications analysis are forms of operational feedback. They offer opportunities for reflection on performance and represent approaches that could be adapted to create individual targets and structured feedback for crew. They can also be adapted to provide opportunities to develop knowledge structures. Martin et al. (2011) used ‘what-if’ scenarios to develop crew skills. Candidates were presented with scenarios and asked to reflect on possible solutions to problems. A post-trial evaluation found that pilots felt that they benefitted from this type of activity. By linking repertoires of problem scenarios to samples from flight data and crew conversation, it would be possible to afford rich opportunities for reflection.

There is a long tradition of social learning in aviation, often called ‘hangar tales’. The sharing of experiences often allows a fine-grained approach to problems that might be beyond the scope of a classroom treatment. Learning through ‘stories’ has proven to be powerful, affords space for reflection and clearly supports the development of mental models through aggregation and compilation. Consider this LOSA observer’s report:

Instruction from ATC: Reduce speed 270 kts, expedite passing FL170. The captain (PF) initially reduced the speed target to 270 kts, extended the speed brake and, after watching a slow deceleration to 270 kts, then decided to increase the speed target to over 300 kts in order to expedite to FL170. The vertical deviation was showing the aircraft as 5000’ high on profile and the aircraft was above the 3-degree path. The PF then selected V/S 1000 fpm, actually reducing the rate of descent. This was when ATC issued track miles to touchdown of 38 nm. The PF decided to expedite the descent at this stage and extended the speed brake, triggering an EICAS SPEEDBRAKE message. FLCH was then selected. ATC then issued 210 kts, shortly followed by “Direct [waypoint], via [waypoint] cleared ILS 25R”. Approaching 5500’ the PF reduced speed to 180 kts despite ATC requesting to maintain speed 210 kts, as per the previous clearance. 180 kts was also 5 kts below the minimum speed for the current configuration (Flap 5: 185 kts). The PM did not pick up on this. As the speed reduced below the minimum flap speed the PF called for the next stage of flap (Flap 10). Shortly prior to [waypoint], ATC then requested a spot wind check from us and then told us to reduce speed 180 kts until 7 nm. We had already reduced from 210 kts to 180 kts.

Crew familiar with the aircraft type and the local area would identify with the challenge of descent profile management: balancing the needs of slowing down while losing height, remembering the current configuration of the aircraft, knowing which automation mode is best for achieving a specific goal, coping when ATC clearances contain conditional instructions and so on. Stories extend our experience base. By reflecting on what others have encountered we can elaborate on our own experience of operations in a similar or the same environment. We can reflect on similarities between our experience of the same event and that of the storyteller and establish causal factors. We can contrast their responses with our own and evaluate the efficacy of different interventions. Stories are often engaging and, so, the learning effect can be more powerful.

In Chapter 9 we looked at issues around safety reporting and the barriers to participation. Dijkstra (2015) describes an app-based approach to allowing crew to share stories in a context outside of formal safety reporting. Stories submitted are amenable to analysis but, importantly, because the narratives are usually richer than safety reports, the approach has the potential to tap into what constitutes expertise and, importantly, competence. An informal learning system built on a storytelling concept would be adaptive in that narratives would change over time to reflect crew coping with emergent threats, unexpected events arising from procedure changes or other external disturbances. Records, stored in a database structure, could be cross-referenced and aggregated into key themes. Where beneficial, links could be made to policy and procedures but also, if relevant, to external sources such as significant research or core texts.
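The aggregation step could be as simple as grouping story records by theme tags. The record fields here are illustrative assumptions about how such a database might be structured:

```python
from collections import defaultdict

def aggregate_by_theme(stories):
    """Group submitted narratives by their tagged themes so that recurring
    issues surface across many individual stories.

    Each story is a dict with (assumed) fields "id" and "themes".
    """
    themes = defaultdict(list)
    for story in stories:
        for theme in story["themes"]:
            themes[theme].append(story["id"])
    return dict(themes)
```

An index like this is what would allow a new story about, say, descent management to be cross-referenced against earlier narratives, relevant procedures and external sources.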

The methods discussed here under the broader label of organisational and personal development represent an indirect approach to building system resilience. Unfortunately, providing smarter feedback and sharing stories do not meet the demands of a structured, standardised approach to crew training. However, these approaches do address many of the significant issues discussed in the book at source while offering solutions that support the concepts of autonomy and motivation.
