Sense-making, Rule-based and Knowledge-based Action

Routine performance largely comprises skill-based and rule-based behaviour enacted in response to cues from the world and directed at assuring the progress of the task towards a desired goal. Occasionally, disturbance in the world or insufficient progress requires a step change in the level of effort applied to the control of the task. Rule sets have to be searched for more appropriate responses, or knowledge resources have to be mobilised to clarify situations, establish new understandings and create new rules. Sense-making, once again, is central to action at this level. Reason (1990) notes that ‘[i]n any situation a number of rules may compete for the right to represent the current state of the world’. The uncertainty inherent in complex situations increases the risk of triggering wrong rules. The tendency to regard the warning horn as spurious in the examples we have looked at is an example of the misapplication of a previously good rule. Some properties of the situation might have supported the selection of the rule in question, but there were other situational cues that indicated that the rule was not appropriate. The confusion of the altitude warning horn with a configuration warning is quite common. In all probability, the crew will have heard the horn in other contexts. The sound of the horn elicits one response but all the environmental cues indicate that an alternative interpretation is needed. These ‘strong but wrong’ rules can be compelling. Reason argues that, within a situation, there will be signs, countersigns and non-signs. Signs are those cues that trigger a response, albeit one that is not always appropriate. Countersigns are the indications that, perhaps, the situation is not what we might think. These contra-cues are often missed because of cognitive overload: individuals have difficulty processing too much information. Non-signs are irrelevant signals that may be present. Rasmussen and Svedung call them invalid cues, noise that still makes demands on processing power. Chunking of information is sometimes seen as a way to simplify the world and reduce cognitive demand. Chunking can lead to certain properties of a situation having increased significance while others will be sidelined. In part, this is the price we pay for developing heuristics, or ‘rules-of-thumb’, to ease decision-making. These processes all contribute to some rules appearing to be applicable when in fact they are not.

Rule-based behaviour accommodates most situations where our initial intentions are thwarted, and we have to execute an alternative plan in order to sustain operations. Knowledge-based behaviour is invoked when activity is outside of the scope of our rule set. Because either the rule we have adopted is unsuccessful or we simply lack an appropriate rule, we need to go back to the basics and construct a solution. Rasmussen and Svedung point out that one of the metacognitive skills associated with rule-based behaviour is recognising the need for situation analysis at a knowledge-based level (see Table 5.5). In discussing the cognitive competences associated with knowledge-based behaviour, they comment that ‘this level is related to the extent and quality of the understanding of the relational, causal structure of the work system, that is, a correct mental model of system function’. They also observe that effective performance requires actors to possess ‘knowledge about system goals, safety conditions and regulatory constraints on performance’. These represent the criteria - the constraint set - to be met to successfully manage the transitions between goal states discussed in the previous chapter. Knowledge-based behaviour represents the conscious intervention in the work process in order to overcome barriers to progress that are not amenable to rule-based resolution. The key to all of this is the ability to perform mental experiments to generate rules for action and to test hypotheses about the cause and effect of abnormal system behaviour. Knowledge about information sources such as manuals, textbooks, diagrams, etc. and the ability to use them is important at this level.

Performance under Normal Circumstances

Almost by default, the discussion so far in this chapter has seen error as failure. In the case of the Helios crew, examples were presented that showed that the components of the event were actually quite commonplace. It was the tragic nature of the outcome that cast their performance in negative terms. Normal work, although inherently fragile, is still fundamentally successful. The problem is that error has become a token of failure deserving of punishment when, in fact, it should be expected. It has been argued that error is a symptom of a problem rather than a cause, but this still places error outside of the process of work. By positioning error as a feedback signal, we can see that it is, instead, a property of the current status of the system. Our response to error is a reflection of the efficacy of our attempts to restore the congruence between status and state. From the examples we have looked at, especially the helicopter vibration event, we can also see how cross-scale interactions shape the context of performance.

In the rest of this chapter, I want to look at the normal variability of work reflected in the distribution of workplace error. The LOSA surveys that form the basis of this analysis used the airline’s published task - the product specification communicated in checklists, procedures or other formal texts - as the benchmark, and ‘errors’ are departures from the prescribed activity. Given that the job of work depends on crew being flexible and adaptive, and that the product specification is not all-encompassing, we can say that errors represent nothing more than attempts to deal with operational challenges. I will start this discussion of normal variability with some ideas about the general distribution of error. Of course, there is no such thing as a ‘correct’ number of errors per flight. Leveson suggests that human error, especially when supervising complex, usually automated, systems, is not quantifiable. Error is not stochastic and, thus, cannot be estimated using probability. Given the system property of non-ergodicity, any attempt to prescribe work through procedures will encounter states of tension requiring intervention on the part of crew and, thus, be essentially error-producing.

TABLE 6.1

Errors Observed by Flight Duration (Mean Errors per Flight)

Flight Duration (minutes)   2013 (n = 53)   2017 (n = 22)
60-119                      8.2             11
120-179                     5.6             7.8
180-239                     9.4             4.7
240-300                     8.9             11.8

TABLE 6.2

Distribution of Error Types (% by Fleet)

Fleet         Slip   Lapse   Mistake
A (n = 269)   37.9   39.4    22.6
B (n = ?)     25     50.5    24.5
C (n = 156)   16.6   43.5    39.7

Accepting these caveats, it is still useful to consider event rates, if only to remind ourselves that work is fallible. Table 6.1 shows mean error rates by sector length for a sample of B-777 flights. We could argue that error rates should reflect flight time in that the longer the crew work, the more opportunities they have to commit an error. This would be the probabilistic approach, in which risk is a function of exposure. Looking at the data, this does not seem to be the case.
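
If risk really were a simple function of exposure, dividing the mean errors per flight in Table 6.1 by flight time should give a roughly constant per-hour rate across the duration bands. The following is a minimal sketch of that check; the band midpoints are an assumption for illustration, and the 2017 value for the 60-119 band is reconstructed from a partially legible cell in the source table.

```python
# Sketch: if error were purely a function of exposure, mean errors per flight
# divided by flight time should be roughly constant across duration bands.
# Band midpoints (minutes) are assumed for illustration.
table_6_1 = {
    # band: (midpoint_minutes, mean_errors_2013, mean_errors_2017)
    "60-119":  (90,  8.2, 11.0),
    "120-179": (150, 5.6, 7.8),
    "180-239": (210, 9.4, 4.7),
    "240-300": (270, 8.9, 11.8),
}

for band, (midpoint, e2013, e2017) in table_6_1.items():
    rate_2013 = e2013 / midpoint * 60   # implied errors per flying hour, 2013
    rate_2017 = e2017 / midpoint * 60   # implied errors per flying hour, 2017
    print(f"{band} min: {rate_2013:.1f}/h (2013), {rate_2017:.1f}/h (2017)")
```

The implied per-hour rates are far from constant, and are highest on the shortest sectors, which is consistent with the observation that exposure alone does not explain the figures.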

Table 6.2 looks at error by type. The three fleets represent different manufacturers and generations of technology, but the fundamental task of flying each type was fairly generic. Fleets A and C were sampled on similar routes, and so the crew task was comparable. The Fleet B sample was significantly different in terms of the time of day and routes flown. The data do not point to any underlying predictable pattern.
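
A quick way to ask whether the error-type mix is consistent across fleets is a chi-square test of independence. The sketch below reconstructs approximate counts from the Table 6.2 percentages for Fleets A and C (the only fleets whose sample sizes are legible); the use of scipy and the rounding of percentages back to counts are assumptions made for illustration, not part of the original analysis.

```python
# Sketch: test whether the slip/lapse/mistake mix differs between fleets,
# using counts reconstructed from the Table 6.2 percentages.
from scipy.stats import chi2_contingency

fleet_data = {
    # fleet: (sample size, (% slip, % lapse, % mistake))
    "A": (269, (37.9, 39.4, 22.6)),
    "C": (156, (16.6, 43.5, 39.7)),
}

observed = [
    [round(n * pct / 100) for pct in pcts]   # approximate error counts
    for n, pcts in fleet_data.values()
]

chi2, p_value, dof, _ = chi2_contingency(observed)
print(f"chi-square = {chi2:.1f}, dof = {dof}, p = {p_value:.4g}")
```

A small p-value here would simply confirm what the table already suggests: the distribution of error types is not stable from fleet to fleet, so no single ‘expected’ error profile can be read across the operation.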

 