
Factors That Shape Performance

These snapshots suggest that normal work has an inherent level of variability and that, occasionally, performance and task demands are at variance. Because of this variability, normative models of successful performance are difficult to define. There are some common quantifiable acts that might allow us to estimate the scale of normal variability. Procedural call-outs, for example, are a standard feature of the work process. In one sample, I looked at the specific call made to confirm that the aircraft had intercepted the ILS and found a failure rate (omitted calls) of 7%. The crew are also supposed to call when approaching a level-off in the climb and descent. Of course, a level-off is, in part, a function of the degree to which air traffic control (ATC) intervenes in the departure and arrival process. I created a notional ‘average’ profile and estimated that around 4% of calls were missed.
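As a simple illustration of how such figures are derived, the Python sketch below computes an omission rate as missed call-outs over sectors observed. The counts are invented, since the sample sizes behind the 7% and 4% figures are not reported here.

```python
# Hypothetical illustration only: the counts below are invented,
# as the underlying sample sizes are not given in the text.

def omission_rate(omitted: int, observed: int) -> float:
    """Proportion of observed sectors on which a required call-out was missed."""
    return omitted / observed

# e.g. an ILS-intercept call omitted on 7 of 100 observed sectors
print(f"ILS call omission rate: {omission_rate(7, 100):.0%}")  # -> 7%
```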

TABLE 6.3

FO Error Rates/Sector by Experience

Experience in the Airline     Slip    Lapse    Mistake
5 Years or less (n = 6)       0.33    1.33     0.5
6 Years or more (n = 6)       1.5     2.66     1.16

All of the examples we have looked at in this section are based on normal, routine operations. None of the flights experienced a technical failure or an emergency. The crew were familiar with the routes and destinations. Unlike the events described earlier in this chapter, these flights were unexceptional. That said, clusters of events point to factors that can shape variability. For example, one study looked at crews operating a new type of aircraft. Samples were taken 12 months apart to track the aircraft’s introduction into service. On both occasions, raw rates per sector were typically in double figures, but if both crew members had at least 12 months’ experience on the type, the rate more than halved. Familiarity with new technology, then, seemed to be an issue.

The question of experience and recency possibly explains the data in Table 6.3, which shows error types committed by FOs in the pilot flying (PF) role. The specific aircraft type was used on regional flights as well as ultra-long-haul routes. Those FOs with 5 years or less in the company would have recently upgraded from second officer (SO) and would be consolidating their experience flying shorter, regional sectors. The more experienced pilots, however, would be used primarily as ‘cruise relief pilots’ on the longer routes and, as a result, would be flying fewer flights in a month. The difference in error rates, perhaps contrary to expectation, may reflect the loss of recency that comes with the changed nature of their employment. This interpretation is reinforced by the fact that a common lapse was forgetting to arm the speed brake after landing gear selection on approach; this occurred on 83% of the flights with a >6-year FO as PF.
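By way of illustration, the following Python sketch shows how per-sector rates of the kind reported in Table 6.3 could be computed from grouped observation counts. The record layout and all numbers are invented for illustration, not the audit’s actual data.

```python
# Hypothetical sketch: per-sector error rates grouped by experience band.
# All counts are invented; the audit's raw counts are not given in the text.

observations = [
    # (experience_band, sectors_observed, error_counts)
    ("5 Years or less", 30, {"slip": 10, "lapse": 40, "mistake": 15}),
    ("6 Years or more", 30, {"slip": 45, "lapse": 80, "mistake": 35}),
]

for band, sectors, counts in observations:
    # Rate per sector = number of errors of each type / sectors observed
    rates = {etype: count / sectors for etype, count in counts.items()}
    formatted = ", ".join(f"{etype}: {rate:.2f}" for etype, rate in rates.items())
    print(f"{band}: {formatted}")
```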

Procedure change can also induce error. After a change in the checklist sequence, the transponder was incorrectly set on 17 out of 29 occasions (58.6%). When the data were fed back to the crew, several pilots made the point that the recent procedure change was not the first. One captain joked that he had his own transponder setting procedure: ‘I just turn it on when ATC tell me to’. In effect, the captain delegated an aspect of the task to ATC. The operation of the transponder on the ground allows ATC to monitor the aircraft taxying. There had been complaints from ATC that turning on the transponder was cluttering their screens when the airport was busy. As a result, by delaying selection, only those aircraft in motion were visible, not those parked at the gate awaiting engine start. However, on one occasion the crew did not detect the omission until after the aircraft had started taxying and, on another, the error was detected just after take-off, during the climb. These last two examples illustrate how an error can initiate a system status that can propagate risk.

Variability

If errors represent feedback signals, then they must be capable of detection; otherwise they would serve no purpose. The LOSA process captures these data, and Table 6.4 shows the percentage distribution of error types by mode of detection.

TABLE 6.4

How Errors Are Detected (% of errors of each type)

Detected by                     Slip    Lapse    Mistake
Crew                            59.2    55.4     73.2
a/c systems                      0.9     5.2      2.8
ATC                             16.5     2.3      2.5
Other (cabin or ground crew)     0.4     1.1      0
Not detected                    22.7    35.8     21.3

The first thing to notice is the high rate of undetected error. On a procedural note, during the audits an error was deemed ‘not detected’ if there was no discernible response, either verbal or non-verbal, on the part of the operating crew. This rate was consistent across a number of audits. The table reveals some other interesting aspects of collaborative work. A significant proportion of slips are trapped by ATC. Slips include missed radio calls and incorrect read-backs of clearances or instructions, so it is not surprising that the counterparty in the process, the controller, affords a means of protection. Lapses include acts of forgetting to set controls. Automated systems have in-built protections and will sound an alert if the configuration is not correct. These data illustrate feedback being acted on at the system level as well as at the level of the individual participants. Mistakes, as we saw, are decision errors. The higher rate of detection of mistakes by the crew suggests that the circumstances following the crew’s active intervention failed to meet expectations, drawing their attention to the error.
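As a sketch of how a distribution like Table 6.4 might be tabulated from raw LOSA records, the Python fragment below assumes each observed error is logged with its type and detection mode. The field names and records are invented for illustration.

```python
# Hypothetical cross-tabulation of detection mode by error type.
# The records below are invented; a real LOSA dataset would hold one
# row per observed error.
from collections import Counter

errors = [
    ("slip", "crew"), ("slip", "atc"), ("slip", "not detected"),
    ("lapse", "crew"), ("lapse", "a/c systems"), ("lapse", "not detected"),
    ("mistake", "crew"), ("mistake", "crew"), ("mistake", "not detected"),
]

totals_by_type = Counter(etype for etype, _ in errors)  # column totals
cells = Counter(errors)                                 # (type, mode) counts

for (etype, mode), n in sorted(cells.items()):
    pct = 100 * n / totals_by_type[etype]  # % of that error type
    print(f"{etype:8} detected by {mode:13}: {pct:5.1f}%")
```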

Amalberti (2013) offers the following modes of error detection (in order of effectiveness):

  1. Direct error hypothesis. Responding to an anomaly: ‘it’s because you did ...’
  2. Suspicion: ‘I haven’t a clue’.
  3. Affirmative checking: ‘that looks about right’.
  4. Routine checks: standard, independent, periodic.

We will consider the process of monitoring work in Chapter 7, but the data on undetected events seem to support the idea that ‘monitoring’ - routine checks - is unreliable. Amalberti makes the point that routine checking trails the other three modes in terms of effectiveness by quite some distance.

What we are seeing here is system buffering. As performance becomes marginal, either the safeguards or other actors in the system respond and afford protection. It is conventional to talk in terms of error ‘trapping’, but the term is misleading. The data suggest that rather than identify an aberration and somehow block its progress, buffering in the system accommodates performance, and future actions restore margins. Errors represent a trace of the work being undertaken and allow us to explore variability at the margins of performance. In the context of Safety-II thinking, things tend to go right in part because the participants in the work process have detected ‘wrongness’ and responded before the discrepancy became consequential. In the next section, we will look at how that happens.

 