
Thinking about Failure

Introduction

Aviation has a long tradition of investigating failure and taking remedial action where necessary. If crew resource management (CRM) is to support safety initiatives, then it would be useful to better understand how safety margins are eroded and what behaviour-based countermeasures can be developed. In this chapter, I want to look at the mainstream historical approaches to safety investigation, and in the next chapter, I will outline a new framework of analysis. I start with an overview of some safety concepts and then briefly look at a number of broad approaches to understanding accident causation to see how they throw light on why things go wrong. The first thing we will discover is that it is difficult to be specific about causation, about why something happened the way it did. Why is it that the same set of circumstances on one day will have no observable effect and yet, on another day, will result in an adverse event? It is clear that consequences shape our view of specific events. We are more likely to think negatively about the performance of the actors if we know that the outcome was a failure of some sort. Hollnagel (2009) observes, though, that success and failure can both flow from the same performance, the implication being that we need a change of perspective. By default, investigation tends to look at what the crew failed to do or did wrong. What we often overlook is that crews were actively trying to solve problems as things unravelled around them. We need ways to explore why performance was insufficiently robust such that safety was put in jeopardy. The following two examples illustrate the often banal nature of failure.

On 16 December 2004, a Shorts 360 was substantially damaged, and the two crew members were seriously injured, when the aircraft failed to fly after an attempted go-around (TSB, 2005a). The crew was carrying motor vehicle parts from Toledo, Ohio, to Oshawa, Ontario. The runway at Oshawa is 4000 ft long. Because of some inaccuracies in flying the approach, the aircraft touched down a third of the way along the runway. The Shorts 360, given the aircraft weight at the time of landing, required at least 4100 ft in which to stop. However, the runway was completely covered with wet, slushy snow. Under these conditions, the aircraft's stopping distance was at least 7400 ft. Notwithstanding aircraft handling and performance issues, the crew should never have attempted to land at Oshawa in the first place: all of the landing constraints were known before take-off. When it was clear that the aircraft was not going to stop, the crew attempted to get airborne again, striking the trees just outside the airfield boundary.

On 10 September 1999, a Beech 1900 landed at Williamstown, New South Wales, Australia (ATSB, 2001). As the crew taxied the aircraft towards the terminal, they were presented with indications of an electrical failure and also a low fuel pressure warning. The electrical problem was dealt with using the checklist but, because they were close to the terminal, they took no action over the fuel warning. As they turned onto their parking position, the First Officer (FO) saw flames coming from the underside of the right-hand engine nacelle. Both engines were shut down and the fire bottles discharged, but to no effect. Ground personnel dealt with the fire using hand-held extinguishers. Subsequent investigation revealed that, at some unknown time, maintenance work involving the removal of cable looms in the wing had been undertaken. When the looms were replaced, cable ties were used to attach the wiring to the aircraft structure. However, special collars normally used to prevent the looms from chafing against the metal structure had been left out. Over time, the landing light cable had been rubbing against the fuel pipe in the area behind the right-hand engine. The cable insulation had finally worn sufficiently to allow electrical arcing between the wires and the fuel pipe, causing the pipe to burn through and ignite the fuel. The electrical failure indication on the flight deck was caused by the short-circuit, and the low fuel pressure warning was the result of the holed pipe. Ironically, the crew's attempts to use the fire bottles were frustrated because the bottles were electrically operated, and the power supply had been destroyed by the fire. In any case, the bottles protect an area around the engine, and the fuel pipe runs outside of that area; even had they been working, the fire bottles would have done nothing to control the fire.

These two examples clearly illustrate conditions that, with hindsight, we would call ‘unsafe’. The Shorts 360 outcome was both predictable and preventable based on the information available at the time to both the dispatchers and the crew. The Williamstown event is more complex, involving both third parties (the maintenance event) and decisions about aircraft design (the functioning and the operation of the fire extinguishing system). If CRM is really about sustaining safety or, more accurately, mitigating risk, then we ought to be able to identify actions that would be required, or even expected, of the flight crew to prevent an adverse outcome. Based on the discussion in Chapter 1, we should be able to identify deficiencies in crew knowledge structures or in their execution of the skill-, rule- and knowledge-based behaviours needed to maintain control.

We might argue that the crew of the Shorts 360 should have demonstrated a professional ‘attitude’ towards pre-flight planning. After all, they should never have attempted to land at an airfield where the runway was known in advance to be too short. The risk of a runway overrun was present before they even took off. Unfortunately, such a critique of the crew is one based on hindsight. Because we know the outcome, it is easy to be critical of their actions. Nonetheless, there are questions I would have liked to have asked of both the dispatcher and the captain, especially relating to planning and operational management, that were not answered by the investigation report.

The crew of the Beech 1900, on the other hand, could not possibly have known what was wrong with their aircraft. The fire could have occurred at any point (and on almost any flight) whenever the landing light was turned on, and it is only by sheer good fortune that the consequences of the failure could be readily dealt with. Ironically, by choosing not to act on the ‘low fuel pressure’ checklist, the crew inadvertently mitigated their problem: one of the actions on the checklist was to turn the fuel boost pumps on, which would have fed fuel to the fire at a higher rate. Part of the problem we face in making sense of events is that processes and technologies combine in ways we fail to understand or anticipate. These two relatively simple examples illustrate the range of factors that shape outcomes. They show how humans do work in complex systems that include technology, infrastructure and work undertaken by others removed in space and time. It is an unfortunate cliché of aviation that failure is usually the observable end point of a sequence of events involving many factors. From the Oshawa report, it is not clear how often aircraft had been dispatched to an unsuitable runway but with no consequence. Again, it was not clear how long the unsafe condition had been present on the Beech 1900, or whether any subsequent maintenance activity had simply overlooked the missing collars. These examples suggest that we are often unaware of the fragile state of normal operations.
