
Keeping Control

Buffering, then, is reflected in the way crew deal with errors and the consequences that follow. The LOSA methodology has four ways of categorising crew responses, which can be either active or passive. Active responses comprise successful management (i.e., the issue was resolved) and mismanagement (i.e., the crew response created an additional error). The passive responses are to ignore the error or to fail to detect it in the first place (see the explanation above). The final outcome of each action sequence is either inconsequential or an undesired aircraft state (UAS), which is to say that, in the opinion of the observer, the margins of safety had been eroded. Of the 1,107 errors captured in an audit of 177 flights, 92% were inconsequential, and this rate was consistent across other audits.
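For readers who think in code, the categorisation scheme described above can be sketched as a simple data model. This is an illustrative sketch only; the type and member names are mine, not part of the LOSA methodology.

```python
from enum import Enum

class CrewResponse(Enum):
    """The four LOSA crew-response categories described above."""
    DETECT_AND_MANAGE = "detect and manage"        # active: issue resolved
    DETECT_AND_MISMANAGE = "detect and mismanage"  # active: response created a new error
    DETECT_AND_IGNORE = "detect and ignore"        # passive: seen but not acted on
    NOT_DETECTED = "not detected"                  # passive: never seen

class Outcome(Enum):
    """The two possible final outcomes of an error sequence."""
    INCONSEQUENTIAL = "inconsequential"
    UNDESIRED_AIRCRAFT_STATE = "UAS"  # margins of safety judged eroded

# Active responses are those in which the crew acted on the error;
# the remaining two categories are passive.
ACTIVE = {CrewResponse.DETECT_AND_MANAGE, CrewResponse.DETECT_AND_MISMANAGE}

def is_active(response: CrewResponse) -> bool:
    return response in ACTIVE

# The audit figures quoted above: 1,107 errors across 177 flights,
# of which 92% were inconsequential.
total_errors = 1107
inconsequential = round(0.92 * total_errors)  # roughly 1018 errors eroded no safety margin
```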

The discussion that follows is based on errors observed in a sample of 86 sectors flown by one type of wide-body airliner. The rate of errors was 7.67 per flight. The distribution of crew responses is given in Table 6.5. Looking at active responses first, the fact that over half of the errors were dealt with satisfactorily at the first attempt adds weight to the Safety-II argument that things generally go right more often than they go wrong. Active interventions rarely led to additional problems. We can therefore argue that the responses of the crew were, broadly speaking, adequate for the task. Active responses reflect crew working as intended, even if their actions were, at times, momentarily insufficient for the task. It is the cluster of passive responses that, I believe, throws more light on the nature of work. Passive responses suggest either a failure in sense-making or an idiosyncratic response threshold, by which I mean the crew were aware of the non-conformity but chose not to act.
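As a quick arithmetic check (not part of the original analysis), the quoted error rate is consistent with the sample size and the Table 6.5 total:

```python
sectors = 86   # flights in the observed sample
errors = 660   # total errors in Table 6.5

# 660 errors over 86 sectors gives the per-flight rate quoted in the text.
rate = errors / sectors
print(round(rate, 2))  # 7.67 errors per flight
```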

I said earlier that the high rate of undetected error (Table 6.4) raises questions about the validity of the concept of monitoring, but we now see that crew apparently ignore errors, too. In terms of commission, undetected errors are split more or less evenly between the captain and the FO (48.04% v 51.96%), but 65.4% of ignored errors are attributable to the captain. Given that the other crew member is the single most effective control on performance, as tracked by the error detection rate, this possibly reflects the social dynamic on the flight deck, suggesting a reluctance on the part of the FO to intervene.

Marginal performance needs to be viewed in terms of the task goal that was active at the time. The behaviours observed mainly fall into one of three categories: incorrect sequencing of actions; communication (typically with third parties or procedural call-outs); and wrong task actions. Sequencing issues include doing actions in the wrong order, doing them at the wrong time or forgetting to do them at all: classic examples of slips and lapses.

Table 6.5 Distribution of Error Responses (n = 660)

  Response category                Errors (%)
  Active - Detect and manage
  Active - Detect and mismanage
  Passive - Detect and ignore
  Passive - Not detected

In both passive management categories, we see that a similar percentage of errors were to do with the sequencing of activity: 13.09% of ignored and 11.17% of not detected. Examples would be:

  • Landing light not switched on after gear selected down
  • Taxi duties commenced before exiting the runway
  • Checklist actioned early

Communication errors included failure to initiate, incorrect responses, and inaccurate or incomplete messages. These represented 13% of the ignored and 50.83% of the not-detected acts. The data suggest that undetected errors are, in fact, simply noise in the system: the imperfect use of language and lapses in keeping track of tasks.

The ignored errors illustrate the other side of doing work. During discussions with pilots about the data, a common observation about the rate of ignored errors was that the act flagged as an error in LOSA terms was probably considered unimportant by the actors. One simple example of this behaviour is the failure to announce that the aircraft is approaching a specified level-off. The data for lapses detected by the aircraft in Table 6.4 include this type of event. Pilots often commented that there was little point in making the call as the aircraft would level off in any case. Of course, that assumes the automation had been correctly configured, which was not always the case.

This example reflects the fact that work is socially constructed rather than a normative process. Task design and procedural guidance are construed by the crew as over-specified, and alternative, non-standard solutions are often constructed in real time. Task modification is shaped by a variety of factors such as convenience, simplification and workload. A pilot's alignment with organisational goals and the nature of oversight also shape performance.
