Understanding ‘Success’

The current popularity of ‘Safety-II’ thinking is, in part, a consequence of its shift in focus from failure to look, instead, at success. Without doubt, far more successful performances than failures are delivered in aviation on a daily basis. But what does ‘success’ actually look like? I would argue that success flows from rich constraint sets that allow for early detection of departure from the desired task status, and effective rule sets that restore acceptable performance before events become anomalies. Again, the upper-right quadrant in Table 1.3 is the space occupied by expert performance. This leads to a final aspect of expertise: knowledge must be abstract and generative. Knowledge is abstract when it becomes detached from specific contexts (the left-hand column of Table 1.2) and is rendered as generic sets of constraints and rules. Knowledge can then be transported across contexts. Knowledge is generative when it can be used to construct new rules to control novel situations (the right-hand column of Table 1.2).

To conclude this section, I want to consider where knowledge resides in networked systems. Because of differences in training and experience, and given the limitations on memory, it would be impossible for any individual to have perfect knowledge. Knowledge is distributed between individuals and between humans and technology. For example: a pilot will insert a flight level or altitude into the mode control panel (MCP) and the autopilot will maintain that level until instructed to do otherwise. The aircraft now has ‘knowledge’ of how high it is supposed to be. The pilot might momentarily forget the assigned level, but a quick check of the MCP will provide the information. Human and technology share knowledge. Here is another, more complex example. In June 2013, a Boeing 747-8 cargo aircraft was flying from Dubai to Frankfurt, cruising at FL320, when the outboard aileron section on the starboard wing unlocked and motored to the fully up position (personal communication). The ailerons on the B-747-8 are operated by fly-by-wire and the outboard section locks as the aircraft accelerates through 210 kts after take-off. The aircraft’s autopilot had sufficient control authority to stay engaged, but the starboard inboard aileron was operating in the opposite sense to the outboard section in order to maintain that control. The increased drag caused by this configuration meant that there was now insufficient fuel to make Frankfurt, and so a diversion would be required. More important, however, was that this configuration was outside the design scope of the aircraft: it should not have been possible for this to happen. The outboard aileron was not supposed to be able to unlock above 210 kts and, accordingly, there were no checklists or procedures to deal with the situation. Equally, the future behaviour of the control surface and any subsequent effects on aircraft handling were not known. The crew called the company Operations Centre.

The call was initially passed to the Duty Operations Manager, a B-777 pilot, who passed it on to a colleague who, as the former B-747 Chief Pilot, had led the team that introduced the B-747-8 into service. He placed a call to a Boeing Vice-President who he knew was visiting Frankfurt. It was the middle of the night in Seattle, headquarters of the Boeing Company, but breakfast time in Frankfurt. Permission was sought to wake up the B-747-8 design team in Seattle and get them working on the problem. In the Operations Centre, the Duty Engineering Manager contacted a colleague from his previous employer who was based at Istanbul Airport, which was under the planned flight path to Frankfurt. He warned him that the aircraft might be diverting and requested support if needed.

In Seattle, the aircraft control systems design team convened and began brainstorming the problem. A second team, from Structures, was set up because it was not known whether the airframe could sustain the loads imposed on it in this unplanned configuration. There now followed an iterative cycle in which possible solutions were proposed and debated by the teams in Seattle, the Management Pilot and the Engineering Manager; most were rejected. Finally, it was decided that, if the speed was reduced below 210 kts, the system might relock. In order to do that, the aircraft had to descend. The current position was over high terrain, but 45 minutes further along track was the Black Sea. The flight crew were briefed on the plan and began discussing the various ways the aileron might respond and what they would have to do to maintain control. Finally, over the Black Sea, the crew descended to FL190, reduced speed and reset the system successfully.

This example is interesting because, in terms of ‘knowledge’, no single individual knew what the cause of the problem was and, therefore, what the answer might be. A virtual team was formed comprising people at several locations, spread across the globe. A solution was created which was not technically ‘correct’. It was a guess as to what might solve the problem. It was arrived at by debate and counter argument. It was created through iterative communication cycles. In this case, ‘knowledge’ was actually an approximation, not a truth, and it was created in the space between individuals, not inside the head of any one person. In some ways, this is the epitome of what we call CRM.

 