
Linear Models of Accident Causation

Lawsuits totalling more than US$63 million were brought as a result of the Pelee Island crash. In total, 11 defendants were cited, including the pilot’s estate, the airline, the airport, Cessna as the aircraft manufacturer and the shipping company that chartered the aircraft. Of course, legal processes seek to attribute blame. Some person or body must be culpable. The primary concern of safety investigation and event analysis, though, must always be to generate understanding. However, the fact that outcomes usually flow from actions and that ‘someone’ must be called to account is the starting point for our discussion of safety simply because it seems to be the most natural way humans deal with failure. Models of accident causation that link cause and effect have a long tradition in accident investigations. Their popularity stems, in part, from an apparently intuitive human need to create stories to make sense of the world.

Linear models - the ‘accident chain’ metaphor - try to establish a causal relationship between actions and outcomes. They represent the earliest organised attempts to understand failure. By working backwards from the adverse outcome, it is hoped that the factors that contributed to the accident will be identified. Remedial measures can then be put in place to reduce the probability of a repeat performance. Various classes of linear models have been proposed, including the cluster of activities under the umbrella of root cause analysis and a family of approaches derived from engineering that can be represented by failure mode analysis. Linear models attempt to show relationships and influences. They have also been used to generate taxonomies of error. By cataloguing and categorising error, it is hoped that future errors can be better identified, trapped or mitigated.

FIGURE 2.1 Linear model of Pelee Island.

Figure 2.1 attempts to apply a linear model to a subset of the events of Pelee Island. The first thing to notice is that no single line can be traced through the accident: several chains of events converge. The model is not meant to be exhaustive and it only deals with two aspects: the way in which the aircraft loading was managed and the way in which the pilot dealt with the problem of ice on the aircraft. The figure shows how some features of the event had multiple effects. For example, the ice on the wings both degraded performance by reducing lift and added weight to an already over-loaded aircraft. Some of the boxes contain statements of fact while others refer to actions or, rather, inactions on the part of the pilot. The representation does not capture the motives of individual actors. For example, two people on the ground mentioned the ice on the wings to the pilot, but he took no action - why not? The portable de-icing spray was not carried on this flight - why not? After all, the weather reports suggested that ice would be encountered, and the pilot had already had to de-ice the aircraft on a previous flight that day. He elected to carry an additional 1000 lbs of fuel, which added to the aircraft’s total weight on take-off and aggravated the problem. This was a quantity far in excess of anything he would have required. Why? In order to understand how a situation becomes unsafe, we need to understand the motives of individuals and why actions made sense at the time. Of course, establishing motive will always be problematic, if not impossible, after a fatal accident. The arrows on the diagram indicate the notional direction of causality but, of course, there is no clear cause and effect relationship at work here except in the case of aircraft performance. Cause and effect are often judgements made after the event.
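The structure just described - several chains of contributing factors converging on one outcome, traced backwards from that outcome - can be sketched as a small directed graph. This is only an illustrative subset: the node labels below are paraphrased from the discussion of the event, not the actual box labels of Figure 2.1.

```python
# A minimal sketch of a linear ('accident chain') model as a directed
# graph. Edges run from a contributing factor to its effect(s); node
# names are illustrative paraphrases, not the labels used in Figure 2.1.
causes = {
    "ice on wings": ["reduced lift", "added weight"],
    "extra 1000 lbs fuel": ["added weight"],
    "passengers not weighed": ["added weight"],
    "reduced lift": ["degraded take-off performance"],
    "added weight": ["degraded take-off performance"],
}

def contributing_factors(outcome):
    """Work backwards from an adverse outcome, collecting every
    factor that feeds into it, directly or indirectly."""
    found = set()
    frontier = [outcome]
    while frontier:
        node = frontier.pop()
        for factor, effects in causes.items():
            if node in effects and factor not in found:
                found.add(factor)
                frontier.append(factor)
    return found

print(sorted(contributing_factors("degraded take-off performance")))
```

Even this toy version shows the model's character: the backward trace yields a tidy set of 'causes', but nothing in the graph records why the pilot's actions made sense to him at the time - precisely the limitation the text goes on to discuss.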

Linear models allow us to bring some order to the chaos and confusion that often surrounds an accident. Actions and outcomes are arranged in a temporal and spatial relationship. More complex varieties of analytical model suffer in that they identify causal factors from the perspective of the designers of the model and, so, are at risk of offering a skewed analysis. In effect, the analysis confirms the agenda of the investigator rather than establishing an objective version of events. For example, the Pelee schedule was designed as two sectors, out and back, with the appropriate planning and documentation completed for each sector. Pretty quickly, however, it seems that the pattern became a single sector with an intermediate stop, which possibly explains why the pilot did not complete paperwork for the return flight. From the company’s perspective (and that of an outside investigator), the pilot failed to follow procedures, but from a line pilot’s perspective, what we are seeing is how processes get adapted to make the job simpler.

Linear models are weak in terms of establishing the cognitive aspects of the performance. Although such approaches allow us to identify probable causal factors, they usually lack the specificity required to allow us to implement meaningful preventive measures. This point is important as the common response to a failure is to increase the level of oversight, change a procedure or introduce additional controls. This ‘barrier’ approach, as Hollnagel (Hollnagel et al 2006) points out, doesn’t prevent outcomes; rather, it changes behaviour: we find ways around the barrier. One final criticism of linear models is that different methods will often produce different results when applied to the same event, and different analysts will come to different conclusions when using the same technique. They lack any internal consistency simply because they rely so much on individual judgement. The concept of an ‘error chain’ has been enshrined in safety folklore, but the metaphor, despite attempts to develop tools and techniques to bring structure to the analytical process, does not offer a full account of how unsafe situations develop (for a fuller description of the problems, see Dekker 2002). Perhaps the biggest fallacy of the linear model is the thought that ‘breaking the chain’ will save the day.

Linear models are not really helpful in identifying how competence might affect safety. You might argue that if everyone involved had just done their jobs properly, this event might never have happened. Alas, that cannot be said for certain. I once listened to Captain Dan Maurino, the then Coordinator, Human Factors and Flight Safety at the International Civil Aviation Organisation, talking about the fatal 1990 Avianca B-707 crash at Cove Neck, New York (NTSB 1991). He said that he wished he could have spoken to the captain of the aircraft before it left Bogota and prevented him from getting airborne, but he knew that, such was the general condition of the airline, if this crew were saved then another would probably crash. Dan was reflecting on the fact that competence is demonstrated across an organisation, not just on the flight deck. With hindsight, we could say that the pilot of the Pelee Island Caravan should have weighed his passengers, de-iced before take-off and done the paperwork properly. We can also say that he knew all of this but, for some reason, chose not to do so on the day. The Avianca captain had poor English language skills and needed assistance in understanding the ATC controller’s instructions. His aircraft’s autopilot was not serviceable, and so, he was required to hand fly for the entire route, a physically and mentally demanding task. Both pilots were employed by airlines that, it could be argued, also bore responsibility for subsequent events. We need to look more broadly at an organisation’s range of functions to identify the types of competence that support safe outcomes.
