
Collaboration in Normal Work

The Little Rock accident is a stark illustration of the nature of collaboration and of how established processes can quickly degrade under stress. Undoubtedly, the crew were dealing with a very demanding situation, but the key elements of the scenario can be seen in routine, normal operations. Based on a LOSA sample of 100 sectors, a list of 660 errors was generated, of which 147 (22.27%) could be considered examples of specific task-related behaviour. We saw that the Little Rock crew were constantly having to reconcile competing goals. In fact, under normal circumstances, on 9% of flights, crews were required to reconcile a conflicting or confusing task goal. Under the pressure of constant change, the Little Rock crew forgot an important checklist. Task-related lapses were evident on 10% of the LOSA flights. Much of the work of the Little Rock crew was activity directed at achieving a goal state within the applicable constraints. The largest single category of observation in the LOSA sample was interventions lacking sufficient magnitude to effect change in the time and space available (21%), the result of which was that crews failed to satisfy goal state constraints. Crews followed an incorrect procedure (12%), their actions were in the wrong sequence (11%), were delivered too late (8%) or were inappropriate for the specific task (5%). A fairly common event (21% of flights) that illustrates the fragility of collaboration is what I labelled task partitioning. It is a maxim that ‘someone must be looking out at all times’, and yet situations where both pilots were individually and simultaneously completing tasks inside the cockpit were common. The issue of roles within teams will be discussed later, but 13% of observations involved unilateral interventions by the captain.

Of these events, 40.81% were actively managed and had no bearing on the subsequent conduct of the flight. Of the passively managed events (those ignored or not detected), seven (4.76% of task-related events) had a potential impact on risk. For example, while the FO was off the flight deck, the captain was instructed by ATC to initiate a 10-mile track offset. The captain actually chose to insert a 12-mile offset without confirming with ATC, and he then failed to explain what he had done when the FO returned to the flight deck. The ingredients of the Little Rock case are, actually, quite commonplace in normal operations, and risk exposure can be influenced accordingly. The collaborative process, like all activities, has inherent variability even under normal conditions.

Engagement as a Transitory State

The challenge of collaboration stems, in part, from the fact that human actors can be both ‘a part of’ and ‘apart from’ the process of work. To illustrate this concept I want to look at three events with identical outcomes but each different in their development. In a 6-week period, three aircraft of the same type flew through the localiser (LOC) while arriving into Hong Kong on Runway 07, one of which resulted in a loss of separation in controlled airspace. All three arrivals had one thing in common (other than the outcome): at a significant moment, one crew member became divorced from the work process (this section relies heavily on the invaluable input of five of the six crew involved, for which I am grateful. Any interpretation of their performance is my own, as are any errors in that analysis). In the case of the first event, the FO was pilot flying (PF) and was paired with a relatively new captain. Because traffic was light, given the time of day, the crew were told to maintain high speed in order to expedite the approach. In addition, the aircraft track intercepted the LOC at about 90°, requiring a tighter turn than normal. The aircraft passed through the localiser with LOC ‘armed’ and immediately turned back to capture from the wrong side. As the captain said in his report, ‘I didn’t realise the jet would do such a messy job’. The second event happened 11 days later and involved an FO training flight. Because the LOC was armed late, the aircraft flew through and, again, aggressively attempted to capture from the other side. The captain took control at this point, but not before separation with the aircraft following in the pattern had been lost. Finally, 4 weeks later, an aircraft, with the captain as PF, again captured the LOC late, but not to the same degree as the previous two events.

In a LOSA audit of 177 flights there was one instance of the LOC ‘bust’, six instances of incorrect configuration (arming the system) and 11 instances of missed procedural call-outs (declaring that the system was capturing the ILS). Of the configuration errors, 66% were trapped by the other crew member, but of the call-out errors only one was trapped, and that was in response to a check from ATC. The survey captured the equivalent of half a day’s flying for the airline and, by extrapolation, suggests that 23 crews a day omit an action while flying the ILS and two arrivals will approach the margins of accepted variability. The events we are looking at, then, are relatively commonplace.

Returning to our case studies, the first example was influenced by the intervention of ATC. In this case, arrivals were being managed by giving crews speed instructions and vectoring. The FO was not slowing the aircraft down quite the way the relatively new captain might have liked, but the captain was reluctant to intervene, instead choosing to let the FO manage the operation. The FO was following procedures in a fashion that would work under normal circumstances, but the captain felt that, perhaps, the task prioritisation was inappropriate given the speed and the distance to run. However, he was not inclined to speak up. The FO flying the second arrival was trying out a new technique. Although the FO was very familiar with the VNAV automation mode, the training captain had suggested using a combination of different, commonly used modes: FLCHG and vertical speed (V/S). The descent was going according to plan, with the occasional use of speed brake to stay on profile, but it nonetheless needed extra mental effort on the part of the FO to monitor progress. The captain, of course, was supervising his trainee as well as being the pilot monitoring (PM), which added to his own workload. In the final example, the captain, as PF, was managing the descent using flight path angle (FPA). The FO had not seen this technique before and was watching, out of interest, to see how it worked out. Because his attention was elsewhere, he failed to keep track of the progress of the aircraft as it approached the LOC. In short, at this stage, none of the three crews were doing anything wrong, but what they were doing influenced what happened next.

In terms of goal states, the aircraft is, first, descended to an appropriate height and position from which it can start its initial track to intercept the LOC. It is then turned onto an intercept heading before capturing the LOC. ATC gives instructions for speed and heading changes and then issues the formal clearance to intercept the ILS and begin the final approach. The final example was quite straightforward in terms of the degree to which it mapped onto this process. The captain had tuned and identified the ILS, in accordance with the standard operating procedures (SOP), but then became distracted by an indication of incorrect sensing. He then recalled someone saying that there was no need to verify correct sensing, but he was not sure where he had heard that. The FO, meanwhile, reflecting on the captain’s use of FPA, had not fully re-engaged with current events. He did not pick up that the captain had become distracted and that, although the aircraft had been turned onto the ILS intercept heading of 040°, the LOC had not been armed. The late arming was the cause of the slight overshoot. The captain’s confusion about the current procedure says something about how pilot knowledge is recreated over time through formal and informal communication of change.

The first example was also quite straightforward in that the excess speed, coupled with the junior captain’s reluctance to intervene, meant that the activity was time-compressed. Of interest is the captain’s belief that the FO’s capability to recover the situation matched his own, a belief that was subsequently proven incorrect. This says a lot about the validity of concepts like ‘share the mental model’. It is not enough simply to have two crew members with the same supposed understanding of the situation: both need to have the same ability to identify a corrective action and apply it within the constraints of time and space, a condition made more difficult by the simple fact that one individual’s true capability is not actually known to the other crew member. On the one hand, we have to make guesses about our colleague’s future performance based on little or no accurate information and then, on the other, we have to trust them to get it right.

The second of our case studies is actually quite complex. ATC instructed the aircraft to reduce speed to 160 kts and turn onto the intercept heading of 040°, and then cleared it for the ILS. The clearance was acknowledged by the captain. On hearing the first instruction, the FO began to dial the speed back from 180 to 160 kts on the mode control panel (MCP). He then realised that this speed was below the minimum manoeuvring speed for the current flap setting. He interrupted what he was doing, selected the next stage of flap and waited for it to deploy before further reducing speed. This all took time which the crew did not have. On this day, the interval from receipt of the ATC instruction to intercepting the LOC was 40 seconds. The training captain was waiting for the FO to change heading and, much like the junior captain in the first example, was reluctant to intervene too soon. Finally, he spoke up and asked if the FO was going to turn. Having reduced speed, the FO now turned his attention to the track and realised that the centreline was fast approaching. He entered a heading of 039° in the MCP (a slip) and armed LOC, which was then almost immediately captured. The management of a sequence of activities, and the interplay between the time it takes for events to occur and the distance travelled in that time, lay at the heart of this problem.

In two of our cases, LOC capture occurred as the aircraft was passing through it, and the crew were then surprised by the aggressive response of the automation. The aircraft turned onto a heading which would bring it back as quickly as possible to the ILS LOC beam. In the case of the FO training flight, the automation turned the aircraft onto the reciprocal, so it was now heading back the way they had just come. ATC did give the trainee FO a heading of 100° to fly to intercept but, again, the captain assumed that the automation would be a little more benign in the way it tried to capture from the wrong side. The captain in the third example did consider using HDG Select and then rearming the LOC, but noticed that LOC was captured and so assumed the system would sort itself out. Awareness of system performance and capabilities was deficient here. This type of event was not something included in training, and the first time a crew would see it was when it was happening for real.

Three almost identical events in less than 6 weeks show how crews can go from doing nothing wrong to being in the wrong place in the blink of an eye. The way the job is imagined by the designers of procedures is often simple, singular and straightforward. The way the job is enacted by crews is variable and sometimes messy. In all three cases there were small distracting factors: novel techniques were being used and crew were overseeing the work of others. ATC, through the use of vectoring, can influence the task. In a 30-day sample of arrivals onto Runway 07, ATC positioned the aircraft to intercept the LOC at distances of between 7 and 12 miles. Simple geometry means that a 7-mile intercept will be rushed whereas a 12-mile intercept affords ample time. The procedure is designed for a 30° interception angle but, again, vectoring results in a variety of angles up to and beyond 90°. The interception angle places different demands on automation. The sources of variability, then, are more than simply the performance of the crew. The collaborative nature of aviation creates variation. These three examples illustrate buffering. The system was able to cope with a variety of solutions to the task of intercepting the inbound course to the runway. The first two events were similar in terms of the extent to which the crew flew through the LOC but, because of the time of day, the heavier traffic during the FO training flight’s arrival meant that margins were much reduced. The volume of space and time available to rectify the situation while maintaining separation from other aircraft was constrained. Time of day and, therefore, traffic density can turn an event from an embarrassment into a hazardous condition.

These examples bridge Levels 1 and 2 in the hierarchical model. All of the individuals involved in these three events privately engaged with the world and conducted activity that was additional to the immediate needs of the task. This internal dialogue interfered with the normal conduct of work. In the case of the use of FPA, the ‘additional activity’ on the part of the FO was no more than curiosity: he was interested in seeing how things worked out. The junior captain was distracted by the challenge of fulfilling his new role. In the case of the training flight, the supervising pilot was dealing with managing his intervention without adversely affecting the mental state of the trainee. What we would call ‘collaboration’ or ‘teamwork’ often involves the resolution of these internal mental states. These are the ‘interfaces’ we have to negotiate when acting in teams.

 