Bringing Knowledge into Play
So far, we have looked at crews applying rules that are mostly contained in procedures or that result from training directed at specific contingencies. Rasmussen’s knowledge-based behaviour describes activity undertaken once rule sets are exhausted: what do you do when there is no rule to apply? In the previous chapter, I suggested that ‘problem-solving’ was a process of making decisions about seeking information and evaluating alternative courses of action. Knowledge-based behaviour allows us to run mental ‘what if’ scenarios from which we can identify gaps in data and derive new rules that could work in the current situation. This second category of ‘mistakes’ results from the application of underpinning knowledge to a current state, but with an outcome that does not satisfy the applicable constraints.
Returning to the Irish crew departing from Cork, we saw that, after the disrupted passenger boarding process and engine start sequence, there was a possibility, according to the investigation report, that the after-start checklist might have been missed. The next anomalous event to occur was the illumination of the Master Caution light while the crew were completing the after-take-off checklist. The warning was cancelled, and the captain informed the FO that the Auto Fail warning light on the pressurisation panel was on. The FO later reported that he did not understand the significance of this warning. The Auto Fail light will come on under any one of four conditions:
The captain assumed that, as the pressurisation must already be on, the warning was related to a controller failure, and so, the ‘standby’ mode was selected. This action has no effect if, as in this case, the pressurisation system is not turned on in the first place. The FO’s comment illustrates how full participation in problem-solving depends upon robust technical knowledge. The episode also illustrates my description of ‘problem-solving’ as a process of making decisions about decisions. The captain held a belief about the status of the aircraft. The triggering of the Master Caution was the result of one of the four possible conditions applying at the time.
I think we can assume that the crew could safely reject the first and last conditions. However, depending on the aircraft’s rate of climb, the third condition could still be true. In this case, the decision to disregard one possible cause constrained the subsequent information search. Instead, the action was driven by what the captain considered to be the most plausible cause.
The Helios crew, too, had to deal with confusing symptoms. As their aircraft passed 17,000 ft, and while they were discussing the warning horn with the company dispatch office, the Master Caution light illuminated. In fact, two events occurred in close proximity, both of which would have triggered the warning, but we cannot say, based on the evidence, which occurred first. Because of the reduced density of the air, the equipment bay cooling system was not functioning effectively, and so, the warning light came on. At about the same moment, the passenger oxygen light illuminated to indicate that the cabin masks had dropped. Communication between the cockpit and the ground at this stage revolved around the ‘configuration warning’ - the horn - and the status of the equipment bay cooling warning lights. From the evidence of the ground engineer who had spoken to the captain, it seems that the interpretation was that the equipment bay cooling fans had failed. The warning light indicated a loss of airflow mass, which can result either from the failure of a fan or from the reduced air density at altitude. The captain’s diagnosis of the initial problem - the warning horn - as a configuration warning was absorbing his attention when the second warning, the Master Caution, occurred. Of the two conditions that could have triggered the caution, the cooling fan indicator warning took priority. The passenger oxygen light, strangely overlooked, and the warning horn were direct indications of the actual problem, whereas the cooling fan warning was an indirect symptom. In both of these cases, the crew response was shaped by the way signals were integrated into an understanding of the world. We also need to accept that crew cognitive functioning may already have been affected by hypoxia by this stage.
Providing access to knowledge is a decision made, primarily, by manufacturers but moderated to some degree by the company. Checklists and manuals are artefacts of knowledge embodiment and serve to communicate background information and preferred actions to the crew. Checklists cover normal, routine activity as well as a subset of non-normal situations. Because they are, themselves, the result of prior decisions, checklists carry an inherent risk. Consider the case of a helicopter taking workers to an oil platform off the coast of Scotland (AAIB, 1998). At 0729 hours, while cruising at 120 kt at a height of 3,000 ft amsl, there was a sudden onset of severe airframe vibration. The captain immediately instructed the FO to reduce main rotor pitch. The FO lowered the collective lever to reduce the main rotor pitch from 15.6° to about 12.2° and, in addition, applied left yaw to maintain the heading. The captain reported a ‘major vibration’ to Aberdeen radar and requested an immediate return.
Both crew members thought that the vibration was associated with the main rotor, and the captain decided to follow the FREQUENCY ADAPTOR FAILURE checklist and read aloud the following extract from the drill:
‘Reduce collective pitch slowly to the setting which reduces the vibration to a level acceptable for continued flight - approximately 12 to 14 degrees pitch. If vibration level does not decrease to an acceptable level - land/ditch immediately.’
He made several small pitch adjustments in an attempt to find a setting that reduced the level of the vibration, but this appeared to make no significant difference, although the captain did think that the vibration had moderated. The crew then reviewed the situation. They noted that the vibration appeared to be constant, that the engine indications were normal and that there was no fluctuation in the torque readings. As the flight progressed, the crew became less convinced that it was a Frequency Adaptor failure but, in the absence of any other evidence, they continued to associate the vibrations with the main rotor. The fact that there was no ‘buzz’ through the yaw pedals appeared to support that conclusion. The crew brought the aircraft back to the mainland, all the passengers disembarked and the aircraft was shut down. It was only then that they discovered that the problem was with the tail rotor.
The initial association of the vibration with the Frequency Adaptor may have been triggered by the fact that it was the only drill in the Flight Reference Cards relating to vibration. At this stage of the aircraft’s in-service history, there had been one failure of a main rotor gearbox, in July 1987, and a checklist procedure was subsequently developed. At the same time, the Frequency Adaptor design was modified. At that point, the global fleet had not suffered a tail rotor failure, and so, no emergency drills had been developed for this contingency. Although reducing the collective pitch to 12°, the first action in the drill, did appear to reduce the vibration slightly, there was little or no lateral component present. It may be that the drill misled the crew rather than assisted them in their attempted diagnosis. Because the cyclic, collective and yaw pedal controls act through hydraulic servo units, any feedback felt through the controls would have been dampened. In fact, the vibration felt at the controls seemed to be in sympathy with that felt generally through the airframe. With the controls appearing to function normally, the only source of evidence was the frequency of the vibration and its relation to main rotor (265 rpm) or tail rotor (1,279 rpm) rotational speeds. The main rotor blades rotate 4.4 times per second, and therefore, a 1R vibration would have a frequency of 4.4 Hz and a 4R vibration, a multiple, would have a frequency of 17.6 Hz. A vibration associated with the tail rotor (1T) would have a frequency of 21.3 Hz. The closeness of these frequencies is such that a pilot would not normally be able to differentiate between a 4R vibration and a 1T vibration. In addition, various frequencies will beat together to produce a resultant that could add to the confusion, making it extremely difficult to determine the source of vibration from its frequency alone.
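The arithmetic behind these frequencies is simple enough to check. The following short sketch (assuming only the rotor speeds quoted above; the function name is mine) converts rotational speed to vibration frequency and shows why the 4R and 1T signatures are so hard to tell apart by feel:

```python
# Sketch: checking the vibration-frequency arithmetic from the AAIB example.
# Rotor speeds (rpm) are those quoted in the text; "nR" means n vibration
# cycles per main-rotor revolution, "1T" one cycle per tail-rotor revolution.

MAIN_ROTOR_RPM = 265
TAIL_ROTOR_RPM = 1279

def rev_freq_hz(rpm: float, per_rev: int = 1) -> float:
    """Frequency in Hz of a vibration occurring `per_rev` times per revolution."""
    return rpm / 60.0 * per_rev

one_r = rev_freq_hz(MAIN_ROTOR_RPM)        # 1R: about 4.4 Hz
four_r = rev_freq_hz(MAIN_ROTOR_RPM, 4)    # 4R: about 17.7 Hz (the text rounds
                                           # 1R to 4.4 first, giving 17.6 Hz)
one_t = rev_freq_hz(TAIL_ROTOR_RPM)        # 1T: about 21.3 Hz

print(f"1R = {one_r:.1f} Hz, 4R = {four_r:.1f} Hz, 1T = {one_t:.1f} Hz")
# 4R and 1T differ by under 4 Hz - close enough that a pilot cannot
# reliably distinguish a main-rotor 4R from a tail-rotor 1T by feel alone.
```

The point of the calculation is not precision but proximity: with the controls giving no differential feedback, a gap of a few hertz between the two candidate sources left the crew with no practical way to discriminate between them.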
The Operations Manual gave no guidance to help pilots determine the source of an unusual vibration. This was intentional, as it was felt that it would not be practical to do so given the large number of variables involved. Once again, a decision made at a higher level in the system shaped the performance of the crew. In this example, the specific mistake was the use of an irrelevant checklist, but the crew response was conditioned by system design.
The Helios and Cork B-737 examples showed how the pilots’ knowledge of technical systems informed the decisions they subsequently made in relation to information seeking and option selection. In the case of the Super Puma, the technical knowledge held by the crew was constrained by prior decisions about what information to share. In-service experience shaped the manufacturer’s design of checklists and manuals, which fed through to the pilots’ knowledge structures. In this example, the crew followed a checklist that had no relevance under the circumstances, and their decision-making focused on making sense of the information available to them. The successful return to land and safe disembarkation of passengers and crew was little more than luck. In the context of a discussion of error, all three crews were actively engaged in resolving the problems they faced and initiated courses of action that made sense in relation to their, albeit limited, understanding of the world.