Performance Approaching the Boundary

If, in the opinion of the observer, the actions of the crew place the aircraft in a state where safety margins have been reduced or eroded (itself a problematic judgement), then a UAS is recorded. In some cases, there is a recognised benchmark to refer to. For example, taxying the aircraft too fast would be considered a UAS. Some events might trigger a need for a safety report. In other cases, it falls to the judgement of the observer.

TABLE 6.8
Consequential Outcomes by Error Types (as a % of all errors in class)

Outcome            Slip (%)   Lapse (%)   Mistake (%)
Additional error      2.5        3.7          4.6
UAS                   4.2        1.7          7.4

TABLE 6.9
Crew Responses to Consequential Errors (as a % of all errors in class)

Management of Outcome   Slip (%)   Lapse (%)   Mistake (%)
Active                     1.4        2.2          3.9
Passive                    5.4        3.1          7.9

In one study, I used a current line pilot to review 52 events classed as UASs (4.6% of total errors observed). First, the events were categorised according to the pilot’s view, based on their experience of operations, of the probable rate of occurrence of the event (e.g., commonplace, occasional or rare). The pilot was then asked to consider, given the context, what the next most likely outcome would have been had the crew not resolved the situation satisfactorily, an exercise in counter-factual thinking. Only a single UAS out of the 52 was considered noteworthy in terms of the potential exposure to hazard. What we are interested in here is how behaviour modifies residual risk: the degree to which the current task status is the precursor to subsequent failure.

Table 6.8 shows outcomes in relation to error types. We saw earlier that mistakes are associated with decisions about possible future courses of action. Active intervention, then, seems to be associated with a higher rate of adverse outcome. Table 6.9 shows a sample of 80 consequential errors (i.e., the outcome was an additional error or a UAS) analysed by type of behaviour and crew management. We saw that passive management of errors was often associated with intentional departures from expected performance. In terms of adverse outcomes, where a crew failed to act in response to marginal performance, the risk of encountering a UAS was doubled (RR 2.013, CI 1.05-3.859). In Table 6.5, the distribution of passive responses to undetected errors was more than double that for ignored errors. The implication of these data seems to be that where crews actively intervene and the outcome is a mistake, the subsequent failure to detect the discrepant feedback signal is twice as likely to lie at the level of sense-making (significance not perceived) as at the level of flawed mental models (signal significance not understood).
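
To make the statistic above easier to interpret, the short Python sketch below shows how a relative risk and its confidence interval are conventionally derived from a 2x2 table using the log-normal approximation. The cell counts used here are hypothetical, chosen only to give a result close to the quoted RR of 2.013 (CI 1.05-3.859); the underlying counts are not reported in the text, so treat this strictly as an illustration of the method.

    from math import exp, log, sqrt

    def relative_risk(a, b, c, d):
        """Relative risk of a UAS for passive versus active crew responses,
        with a 95% confidence interval from the log(RR) normal approximation.

        a: passive responses followed by a UAS   b: passive, no UAS
        c: active responses followed by a UAS    d: active, no UAS
        """
        risk_passive = a / (a + b)
        risk_active = c / (c + d)
        rr = risk_passive / risk_active
        # Standard error of ln(RR) for a 2x2 table
        se = sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
        lower = exp(log(rr) - 1.96 * se)
        upper = exp(log(rr) + 1.96 * se)
        return rr, (lower, upper)

    # Hypothetical counts, for illustration only.
    rr, (lo, hi) = relative_risk(a=24, b=176, c=12, d=188)
    print(f"RR = {rr:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")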

While these data suggest that performance variability is a product of both the type of behaviour (inputs) and the crew responses to events (feedback handling), they say nothing about how risk can be increased. What we need to understand is how margins are eroded and buffering is overwhelmed. To explore this aspect, I looked at a sample of 20 errors linked to a UAS. Three clusters of risk-shaping factors emerged:

  • Performance Degradation (40%)
  • Risk Transfer (35%)
  • Separation Reduction (25%)

Performance degradation arises from the configuration of the aircraft. Examples include taxying too fast and thus carrying excess energy that needs to be dissipated at some point, flying at the wrong speed for the current flap selection, and applying thrust with the speed brakes extended. In one case, the crew failed to update the take-off data, thus reducing the performance margins should conditions change further when getting airborne. These cases point to the importance of efficient energy management and are all examples of mistakes. The next example is a commonplace lapse and a classic illustration of the problem of habitual actions intruding into current processes. Traditionally, approaches have been flown in a full landing flap configuration, but for a variety of reasons airlines now routinely land at a lesser flap setting (Flap 25 instead of Flap 30, for example). The crew had briefed a Flap 25 approach but, during the approach, the PF called for Flap 30. This was actioned by the PM, and neither pilot picked up the discrepancy until they completed the landing checklist. The final example goes to the heart of modern aviation and the challenge of maintaining proficiency in manual handling: the pilot was hand-flying during the departure and, in a turn, allowed the speed to increase by 13 kts.

Risk transfer describes the condition where the actions of the crew expose third parties to potential harm. In four cases, the crew either failed to allow adequate separation between the aircraft track and visible convective clouds or failed to turn on the seatbelt sign having already encountered turbulence. The risk of harm from exposure to turbulence was borne by those inside the aircraft. In three other cases, crews forgot to turn on a landing light and the TCAS, thus removing a potential warning to others of their presence, or forgot to turn off the weather radar after landing, which has potentially harmful effects for ground personnel. The additional risk in these cases involves others external to the aircraft. Interestingly, whereas the internal risk arose from mistakes by the crew, the external risk resulted from lapses.

Finally, airspace rules are designed to allow the maximum number of aircraft to operate in constrained airspace by maintaining a safe envelope around each aircraft. To maintain the envelope, aircraft are required to fly at specific speeds, tracks and altitudes, and any deviation from these parameters reduces the safe space around the aircraft. Separation reduction occurred on five occasions as a result of incorrect entry of data (speed and heading), manoeuvring before a nominated waypoint, taking up a wrong track, and flying below a cleared altitude. The first three were examples of slips while the latter two were mistakes.

Two conclusions flow from this sample of errors. First, they were all the result of the normal behaviour of crews in relation to perturbations in the operational environment: they flowed from ‘normal’ work. Second, error is of interest because it is a risk multiplier. In the case of the excessive taxi speed, a change in taxiway surface conditions could result in the aircraft losing grip and sliding. Typically, snow or standing water ought to bring about a change in pilot behaviour, but contamination is not always so obvious. For example, a B-737 landing at an Indian airport slid off the runway while trying to execute a 180-degree turn. The pilot had reduced speed after landing and rolled to the end of the runway, which was covered in sand and small stones. Because aircraft rarely used this part of the runway, no effort had been made to keep it swept. The linear momentum of the aircraft, coupled with the reduced traction, resulted in it sliding very gently sideways off the end. Error, then, is of interest as much because of what might happen next as for what it tells us about current performance.

Conclusion: Errors as Signals of System Behaviour

The study of error allows us to resurrect aspects of normal work. By looking at errors in context we can see what crews were trying to do and explore the ways in which normal work departs from organisational intentions. From a systems perspective, errors serve two purposes. On the one hand, they are the feedback loop from the environment that allows us to judge the success of our attempts to achieve our performance goals. On the other hand, they are symptoms of individual competence. Clearly, competence must contribute to error commission (allowing for violations) because it reflects inadequate task management, but an inadequate response to an error can be indicative of failures at the level of perception or processing. The challenge will always be to separate the relative involvement of each component. Errors are not predictable; they are contextual. This chapter has relied heavily on numerical analysis of a set of samples, but only for the purposes of triangulating on the real question, which is how work actually gets done.
