How Things Get Done in Aviation
An airline, then, is an assemblage of functions configured to deliver a service. In so doing, it has an inherent level of risk associated with the manner in which operations are conducted. Risk needs to be seen as a property of the design of the operation as much as it is a product of the way entities operate. The following two events have the same end result but get there by different routes. They illustrate the point already made that work is distributed and show how risk emerges from the way an entity is organised.
On 27 October 2002, an Airbus A320 was positioning from Manchester, England, to collect holidaymakers on the Greek island of Kefallinia (AAIB, 2004). En route, the captain discussed passenger loading with the cabin supervisor (CS). The plan was to board 69 passengers at the first destination and to collect more passengers from the nearby island of Zakinthos. The company used a standard passenger loading plan which required one-third of the passengers to be seated in each of the three ‘bays’ in the cabin, each bay comprising ten rows of seats. At Kefallinia, the company handling agent (HA) passed the load form to the captain indicating that the passengers had been seated in accordance with the procedures. In fact, the HA had assigned half the passengers to the front bay and the remainder to the rear, assuming that the middle rows of seats would be taken by the passengers boarding at Zakinthos. The HA’s local solution to the problem of managing the aircraft’s weight and balance requirements was ‘legal’ in terms of aircraft performance but not compliant with the company procedures. Equally, this custom and practice had not been communicated to all participants in the process of loading the aircraft. After boarding had started, the captain asked the CS about passenger locations and was told that most of the passengers appeared to be filling the front rows of seats. The captain then instructed the CS to move passengers to the rear of the aircraft. The HA questioned this action but was told that it was at the captain’s request. At this stage, only half the passengers had boarded: those assigned to the front rows. The captain’s sampling of the boarding process was premature and resulted in an inaccurate understanding of the final status of the task. By the time boarding was complete, the second batch of passengers had taken their assigned seats at the rear of the cabin.
With all the passengers on board, the aircraft prepared to depart. As it began to accelerate along the runway, the nose reared up and the tail struck the runway. In attempting to avoid a nose-heavy condition, the captain had put the aircraft into the opposite, tail-heavy condition. And although he had questioned the captain’s intervention, the HA did not challenge the modification to his plan.
In this example, the aircraft was deployed to collect two part loads from different destinations. The loading procedure was locally modified to cope with the task. The role of communication between the various actors was, of course, of fundamental significance in this event and elaborates nicely on some of the key themes of the previous chapter. It was the way the aircraft was deployed to exploit a business opportunity that created the opportunity for normally adequate processes to fail. From an organisational perspective, we are interested not just in how the company designed the boarding procedure but also in how it tested its rigour; in what training was given to cabin crew on the topic of aircraft weight and balance; and in what quality controls were in place to ensure correct loading and accurate paperwork. In fact, the local working practice we have just explored was not unique, and I have spoken to many captains who fell foul of the opposite condition. Because a part load of passengers was seated at the front of the cabin, the aircraft could not rotate, and so a high-speed rejection of the take-off was required, a much riskier manoeuvre.
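To see why distribution, not just total weight, matters here, consider a toy weight-and-balance calculation. All the figures below (empty weight, bay arms, per-passenger weight and the aft limit) are invented for illustration and bear no relation to the A320’s actual certification data; the point is only that the same 69 passengers can place the centre of gravity inside or outside a limit depending on where they sit.

```python
# Illustrative sketch only: every number here is hypothetical, not real A320 data.

def centre_of_gravity(items):
    """Each item is (weight_kg, arm_m from datum). CG = total moment / total weight."""
    total_weight = sum(w for w, _ in items)
    total_moment = sum(w * a for w, a in items)
    return total_weight, total_moment / total_weight

EMPTY = [(42000, 17.0)]                        # assumed empty aircraft: weight, arm
PAX = 80                                       # assumed weight per passenger, kg
FWD_BAY, MID_BAY, AFT_BAY = 12.0, 19.0, 26.0   # hypothetical bay arms, m
AFT_LIMIT = 18.0                               # hypothetical aft CG limit, m

# Company procedure: one-third of the 69 passengers in each bay
even = EMPTY + [(23 * PAX, FWD_BAY), (23 * PAX, MID_BAY), (23 * PAX, AFT_BAY)]
# The load after the captain's intervention: everyone seated aft
aft_heavy = EMPTY + [(69 * PAX, AFT_BAY)]

_, cg_even = centre_of_gravity(even)
_, cg_aft = centre_of_gravity(aft_heavy)
print(f"even: {cg_even:.2f} m, aft-heavy: {cg_aft:.2f} m, aft limit: {AFT_LIMIT} m")
```

With these invented numbers, the evenly distributed load sits forward of the aft limit while the aft-heavy load falls behind it: same passengers, same total weight, different risk.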
This next example contains many of the same ingredients but with the addition of more sophisticated technology and what we described in Chapter 2 as drift. On 7 December 2003, a Boeing 737-800 tipped onto its tail as it started its take-off roll from Goteborg Airport, Sweden (SHK, 2004), because all of the passengers were seated at the rear of the cabin. How this scenario unfolded is almost the opposite of the previous excursion to the Greek islands. The aircraft had left Salzburg, Austria, with a full load of 180 charter passengers, 59 of whom disembarked at Goteborg, the remainder destined for Stockholm. No new passengers boarded at Goteborg. Because the flight was being conducted as a charter, the responsibility for passenger seating fell to the travel agent who booked the charter, not the airline operating the flight. To ease disembarkation, the Goteborg passengers had been seated at the front of the cabin. The airline used an electronic system to produce its load sheets. As passengers checked in, the IT system generated a ‘seat occupied’ message, which was then used to determine that passenger loading was within limits. In this instance, two load sheets were required: the first for the sector from Salzburg to Goteborg and the second for the Goteborg to Stockholm sector. As the check-in desk at Salzburg was not connected to the networked system, a hard-copy passenger load sheet was faxed to the central passenger and load control office. In the meantime, a default value of ‘passengers evenly distributed’ was used to produce the load sheet. The system also sent a warning message to the next destination to the effect that a default value had been used and that the passenger distribution must be verified. Unfortunately, the faxed passenger details from Salzburg were sent to an unmanned machine because the Salzburg office had an out-of-date telephone directory.
Furthermore, because no passengers were boarding at Goteborg, the check-in desk was not opened, and so the warning that a default value had been used was never seen by gate staff. The load sheet prepared for the second sector was therefore based on the ‘passengers evenly distributed’ default whereas, in reality, the aircraft was tail-heavy.
In contrast to the previous example, the task was distributed, with networked systems and automated processes employed to get the job done. In this case, work had been delegated to a third party - the tour operator - who then had to integrate its activity with the airline’s processes. Change, represented by the office relocation, was not communicated, resulting in essential information not reaching the appropriate human agent. We see processes designed for a scheduled operation being applied in a changed context without the implications being considered. It was assumed that passengers would both disembark and board each time the aircraft was used, and the process and its supporting technology were predicated on that assumption. If you remove a component - boarding, in this case - then the process fails. The use of default values in IT systems is a common design element, but one that carries an inherent risk unless all possible use cases have been explored.
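The failure mode is easy to sketch in code. The following is a minimal, hypothetical model (the names, messages and logic are mine, not the airline’s actual system) of a load-sheet process that falls back to a default distribution and attaches a verification warning. The design is sound only so long as a human is present to act on the warning.

```python
# Hypothetical sketch of a load-sheet system with a default-value fallback.
# Names and logic are invented for illustration.
from dataclasses import dataclass

@dataclass
class LoadSheet:
    distribution: str       # actual seat data, or a default assumption
    needs_verification: bool

def build_load_sheet(seat_data):
    """Fall back to a default when no networked check-in data exists."""
    if seat_data is None:
        # Default used; a warning flag travels with the sheet.
        return LoadSheet("passengers evenly distributed", True)
    return LoadSheet(seat_data, False)

def gate_review(sheet, gate_staffed):
    """The warning is only effective if someone is there to act on it."""
    if sheet.needs_verification and not gate_staffed:
        return "UNVERIFIED DEFAULT USED"   # the Goteborg failure mode
    return "OK"

sheet = build_load_sheet(None)             # the fax went to an unmanned machine
print(gate_review(sheet, gate_staffed=False))
```

The design assumes two independent events always coincide: a default being used and a staffed gate downstream. Remove the second (no boarding, so no open desk) and the safeguard silently disappears.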
These examples cast some light on the complexity of airlines as organisations and underscore the fact that we cannot look at safety simply as failure at the level of the individual. The airline, as an entity, is a business model embodied as a set of processes coupled with technology. A deficiency in any part of the jigsaw will induce risk, and that risk may manifest in unexpected ways.
Table 9.1 shows data from a small European regional airline operating a fleet of turboprops and narrow-body jets. Because the operations were seasonal, there was a core of permanent cabin crew, with temporary staff recruited for the summer. Recruitment and training are two functions that are part of any organisation’s human resources function, even if they are subcontracted. Staff numbers must be matched to need: too many staff are wasteful, whereas too few represent a potential production-limiting factor. Recruitment and selection involve finding a pool of applicants and then choosing the most suitable. Each part of the process has an inherent lead time and a cost. Once recruits have been selected, they need to be trained. Again, this takes a period of time determined by the training needed to meet the qualification standards defined by state regulations. Recruitment, selection and training need to be sequenced so that qualified staff enter service as demand increases. New recruits need a period of on-the-job training and experience before they can be considered fully proficient.
Table 9.1 Cabin Crew Productivity and Sick Leave

The time series in Table 9.1 illustrates these processes at work. The shift from winter to summer starts in April as the hours flown by the airline start to increase. Recruitment has started, and the staff numbers show that the cadre of ‘summer casuals’ has been recruited. The new staff are on the payroll but not yet productive: they will be in training or undergoing line training. From May to August the flying task is at its maximum, and the tasking per crew member reaches its peak. As autumn approaches, the task starts to ease, and the summer casuals are released. If we look at the data for sick days, we see that there is a lag between effort peaking and staff falling ill. Furthermore, the winter data suggest that the burden of sickness is borne by the permanent staff: the summer casuals have gone, but the sick rate remains above average. Of course, some of the winter data reflect seasonal effects, such as ‘winter flu’, but the rate rises in the late summer and so possibly reflects workplace stress. But there is one other indicator not shown in the data. On the day that I visited this airline, I was in the office of the head of cabin crew when a report came in that an aircraft escape slide had been accidentally triggered. From the next-door office, I heard someone call out ‘Summer’s here!’. The lag between the increase in the flying task and the arrival of competent new staff resulted in additional strain on the existing crew that could be measured in the rate of accidental escape slide deployment during door operation.
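The lag between effort peaking and staff falling ill is the sort of relationship a simple lagged-correlation check can surface. The monthly figures below are synthetic, invented to mimic the seasonal pattern described rather than taken from Table 9.1: sick days are constructed to follow flying hours with a two-month delay, and the correlation duly peaks at that lag.

```python
# Synthetic illustration (numbers invented, not the airline's data):
# sick leave tracking the flying task with a two-month delay.

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Monthly flying hours, January to December: low in winter, peaking in summer
hours = [100, 100, 120, 160, 200, 220, 220, 200, 160, 120, 100, 100]
# Sick days built to follow the workload two months later
sick = [10, 10] + [h // 10 for h in hours[:-2]]

def lagged_corr(lag):
    """Correlate this month's hours with sick days `lag` months later."""
    return pearson(hours[: len(hours) - lag], sick[lag:])

best = max(range(4), key=lagged_corr)
print(best)   # the lag (in months) with the strongest correlation
```

On real data the peak would of course be noisier, but the same check applied to the kind of monthly series in Table 9.1 is what turns an anecdote like the escape-slide call into a measurable indicator.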
Organisations, then, exhibit ‘behaviour’ in much the same way as humans do. In part, that behaviour is a product of how organisations set about satisfying the demands of production. The hierarchical model proposes that levels exercise control over subordinate levels, and a large part of the work of ‘management’ comprises deploying a suite of control methods. Because management processes are dynamic, it is possible to identify inputs, on the part of management, and corresponding outputs manifested by the workforce. In a perfect world, all outputs would contribute to the safe and efficient delivery of the desired outcome. Alas, the world is not perfect, and we often see unintended consequences arising from what management might consider to be logical inputs. Equally, management sets up feedback mechanisms, incident reporting being a classic example, and yet these processes, too, do not always work as intended. This chapter will examine the methods of control and the resulting unintended consequences. In the next section, I want to look at some fundamental economic problems airlines have to address before then elaborating a framework for understanding organisational behaviour.