Error as Performance Feedback

Introduction

In Chapter 2, we saw that the actions of individuals constitute a class of hazards. In the previous chapter, I addressed the question of how individuals engage with the world in order to get jobs done. I was interested in what constitutes normal work. Ironically, my examples all illustrated a failure of some kind. In two cases, the crew were presented with very unusual situations under extreme time constraints. The third example showed how a flawed performance, unfolding over time, can still look ‘normal’. The cases were chosen because they illustrated how work comprises responses, typically within a procedural framework, to signals from the outside world. I also suggested that those responses, in turn, shape the world. Actions create signals that feed forward to bring about change which, in turn, feeds back to shape further action. In this chapter, I want to address specifically the fact that normal work is buggy: work is enacted in an imperfect manner. These imperfections are labelled ‘errors’.

The focus of this chapter, then, is on things going wrong. Such is the potency of the term ‘pilot error’ that I need to recalibrate the discussion. Things go wrong because the efforts of the individuals engaged in the activity did not meet the requirements of the task, which is to say that the constraint set of the intended goal was not satisfied. By default, we then frame the debate in terms of failure. The Safety-II paradigm claims, correctly, that most work ‘goes right’ and that simply focusing on what goes wrong distracts attention from exploring why things go right so often. By looking at error in detail, I hope to uncover some properties of the world of work. First, I am interested in the conditions, or risk factors, that increase the probability of a mismatch between the actions of individuals and those needed to satisfy the goal state constraints. Second, what are the symptoms of, or precursors to, erroneous behaviour? What signals exist, especially in marginal states, that suggest that buffering is being overwhelmed? Finally, how does the system respond when behaviour and goal state constraints are mismatched?

This last point is important. Once an error occurs, the crew either adapt their future actions to the changed circumstances they now face or they fail. We said in the last chapter, though, that interventions change the environment. Error ‘management’, then, is almost an oxymoron: it is not a case of ‘doing it again but right this time’. Instead, we sustain control in the new context that flows from the prior insufficiency. If you accept this argument, it suggests a possible flaw in the ‘Safety-II’ position: failure remains an object of interest. In fact, doing things right is often all about nudging ‘failures’ back on track. In effect, control of the current task has drifted to the point where operators recognise that something is clearly not right and action is called for to restore equilibrium.

This chapter will draw on evidence from field observations using the line operations safety audit (LOSA) methodology. Given that LOSA is an audit, the benchmark of performance is contained in company documentation, and the findings are shaped by the taxonomy used to classify observations. Because error is an emotive term and, as we will see, the vast majority of negative events are inconsequential, I prefer to think of errors in the context of LOSA as examples of quality non-conformity. The evidence gathered by observers is primarily of departures from the prescribed task. One airline’s ‘error’ would not exist in another airline with different procedures. By design, then, the evidence base is constrained. That said, LOSA data are a rich source of (fairly) standardised information about workplace performance.

Error, in reality, is the trace that the process of work leaves behind. Errors are like archaeological remains that can be excavated and used to reconstruct what the individual was trying to do at the time. As with fossils and bits of pot, a degree of interpretation is needed. With these caveats in mind, this chapter will look at categories of error, the link between error and performance and, finally, the regaining of control. We start with an example that left many observers bewildered at the time.

The Helios B-737 Crash Near Athens, 2005

On 14 August 2005, a Boeing 737-31S took off from Cyprus, heading for Prague with an en route stop at Athens (AAIASB, 2006). Having left Larnaca at 06:07, the aircraft passed over Athens and entered a holding pattern at 07:38. No radio calls had been received from the crew for most of the flight. At 08:24, as the aircraft made its sixth orbit, it was intercepted by F-16 fighters from the Greek Air Force. The military pilots were able to see that the captain’s seat was vacant and that the occupant of the first officer’s (FO’s) seat was slumped over the controls. The oxygen masks had been deployed in the cabin. At 08:49, a person was seen entering the flight deck. A minute later, the left-hand engine flamed out due to fuel starvation. During the descent, two attempts were made by someone on board, thought to be a member of the cabin crew, to broadcast a Mayday call. The right-hand engine also flamed out passing 7,000 ft, and the aircraft crashed at 09:03, 33 km north-west of Athens International Airport. There were no survivors. The subsequent investigation attributed the accident to the incorrect configuration of the pressurisation system. Those passengers not killed in the final crash had probably already died of hypoxia.

 