A Systems Model of Aviation

Introduction

In Chapter 1, I suggested that the goal of CRM training might be to develop operational expertise, but we face a problem: we lack both a clear model of what airline pilot expertise looks like and the tools for designing training in an operational context. In the previous chapter, I reviewed some safety concepts to see what light they might cast on performance in the workplace, and considered how conventional approaches to accident analysis are weak at revealing underlying competence requirements. The conclusion I drew was that a focus on preventing the repetition of a specific event offers little help in understanding the broader question of expertise. In this chapter, I want to take a different approach. We will start by looking at attempts to apply systems concepts to the issue of safety. Drawing on the key themes in a systems approach, I then want to develop a model of aviation based on workplace performance. It is this framework that will shape the rest of the book as I attempt to derive a competence set that will support the design of more valid training.

Systems Thinking and Safety

The shift to systems thinking (Leveson et al., 2003; Rasmussen & Svedung, 2000) was primarily a response to the inadequacies of linear models and of those safety initiatives based on creating barriers that supposedly trap failure. As we saw, linear models are predicated on cause-and-effect relationships derived from analysis after the event. Hollnagel (1999) observed that barriers do not prevent anything; rather, they change behaviour. Adverse events will still happen. Alongside moves to apply systems thinking to safety, complexity theory is a potentially rich source of insights into how systems behave. At the time of writing, two main approaches to systems and safety are competing for popularity. Nancy Leveson, at MIT in the USA, has developed the Systems-Theoretic Accident Model and Processes (STAMP) (Leveson, 2011) while, in Europe, Erik Hollnagel advocates his Functional Resonance Analysis Method (FRAM) (Hollnagel, 2012). The two approaches share some common elements, but their differences reflect the perspectives of their authors. STAMP is rooted in engineering. It maps the structural elements that interact in order to achieve a task and then looks at the safety constraints that have to be satisfied in order to operate in a safe manner. FRAM, on the other hand, is a psychological model. It sets out to describe what an operator is actually doing while completing a task (rather than what someone thinks they should be doing) and then maps the inputs needed to do the job and the outputs that flow from activity. Because of its origins in engineering and software, STAMP is less adept at dealing with the performance of the humans within the system being modelled. FRAM is underpinned by a set of concepts, collectively known as ‘resilience engineering’, which offer some pointers as to where we need to look in order to understand how humans and systems interact. I want to draw on both STAMP and FRAM, first laying out some of the fundamental concepts here before offering a view that pulls the key ideas together.
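
FRAM describes each function in terms of six aspects (input, output, precondition, resource, time and control) and then looks for couplings between functions. As a rough illustration only, here is a minimal Python sketch of that idea; the flight-deck functions and couplings are invented for the example, and this is not how any published FRAM tool is implemented.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class FramFunction:
    # The six aspect names follow Hollnagel's FRAM; the class itself is
    # only an illustrative sketch, not part of any FRAM software.
    name: str
    inputs: list[str] = field(default_factory=list)         # what starts or feeds the function
    outputs: list[str] = field(default_factory=list)        # what it produces for other functions
    preconditions: list[str] = field(default_factory=list)  # what must hold before it can run
    resources: list[str] = field(default_factory=list)      # what it needs or consumes
    time: list[str] = field(default_factory=list)           # temporal constraints
    control: list[str] = field(default_factory=list)        # what supervises or regulates it

def couplings(upstream: FramFunction, downstream: FramFunction) -> list[str]:
    """Outputs of one function that feed any aspect of another."""
    needs = (downstream.inputs + downstream.preconditions
             + downstream.resources + downstream.control)
    return [o for o in upstream.outputs if o in needs]

# Hypothetical example: the briefing's output is a precondition of the departure.
briefing = FramFunction(name="conduct departure briefing",
                        inputs=["flight plan", "weather report"],
                        outputs=["shared departure plan"],
                        control=["company SOPs"])
departure = FramFunction(name="fly the departure",
                         preconditions=["shared departure plan"],
                         resources=["fuel", "crew attention"])
print(couplings(briefing, departure))   # ['shared departure plan']
```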

In the discussion that follows, a ‘system’ is an assemblage of humans and technology configured to achieve a performance goal. These represent the static components, but the system exists to make decisions. Rasmussen and Svedung (2000) viewed organisations not so much as collections of interrelated functional components (the organisation chart approach) but rather as nested decision-making processes. To me this is important because it reflects the fact that systems exist to achieve a goal, and every component in a system is either the output of a set of decisions or itself exists to make decisions in relation to its specific responsibilities. For example, a commercial airliner is more than just a piece of equipment. It is the embodiment of a set of decisions about how to make money by carrying a payload over a distance. The structure itself is a manifestation of a set of solutions to mathematical, aerodynamic and engineering problems. Its utilisation is the output of a set of commercial decisions. The Airbus A380 was hailed as a remarkable feat of engineering and was very popular with passengers. However, the economics of operating the aircraft were such that there were few markets it could profitably serve, and very few carriers could adapt their networks to make it viable. As the embodiment of commercial decision-making, it was flawed. In this view, then, failure is no more than suboptimal decision-making.

Leveson’s STAMP model proposes four characteristics of a system. First, functional units are arranged in a hierarchy, with higher-order elements exercising control over lower-order elements. Position in the hierarchy embodies both authority and legitimacy but, on examination, we see that position is simply a reflection of the order of magnitude of the decisions made by the functional component. Second, control is exercised by higher-order functional units over those below them in the hierarchy. Control is exercised through artefacts that are the output of decisions and that carry different levels of force. Some are rules and regulations that carry the weight of law. Breaches of policies and procedures can attract formal punishment or admonishment. Breaches of social contracts can attract ostracism. These controls constrain the freedom of operation of the lower-level units in order to keep activities within expected bounds.

Third, systems function through communication. This includes the signals that serve to direct and validate actions and to verify system status and progress. It includes the rules and regulations, policies and procedures that are issued to guide the actions of others. Weather charts and reports, flight plans and notices to airmen are forms of communication that support a specific flight. Conversation facilitates collaboration. Different types of communication carry different levels of force in terms of the requirement for compliance, and also different levels of validity. Communication can differ in specificity: some forms are generic, while others are specific to the moment. Communication also includes those signals that relate to actions taken and that indicate the probability of success or failure.

Finally, systems have emergent properties that cannot be explained by simple descriptions of how the elements at each level work. For example, safety is not an attribute of a functional component; it is an emergent property of the way the component works. It is important to remember that these characteristics are functions of processes rather than structural properties. They describe activity in a system rather than physical elements.
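
To make the first two characteristics (hierarchy, and control through constraints) concrete, here is a deliberately simple Python sketch. The levels, constraints and ‘force’ labels are invented for illustration; STAMP itself is a modelling method, not a piece of code.

```python
class ControlLevel:
    """One level in a STAMP-style control hierarchy (illustrative only).

    Each level constrains the one below it; 'force' is a rough label for
    how binding a constraint is (regulation, procedure, social contract).
    """
    def __init__(self, name):
        self.name = name
        self.constraints = []      # (description, force, check) triples
        self.subordinate = None    # the next level down, if any

    def constrain(self, description, force, check):
        self.constraints.append((description, force, check))

    def audit(self, state):
        """Report which constraints the current process state breaches,
        walking down the hierarchy."""
        breaches = [f"{self.name}: '{desc}' ({force})"
                    for desc, force, check in self.constraints
                    if not check(state)]
        if self.subordinate:
            breaches += self.subordinate.audit(state)
        return breaches

# Toy two-level hierarchy, regulator -> airline; all names invented.
regulator = ControlLevel("Regulator")
airline = ControlLevel("Airline")
regulator.subordinate = airline
regulator.constrain("fuel above legal reserve", "regulation",
                    lambda s: s["fuel_kg"] > s["reserve_kg"])
airline.constrain("stabilised by 1000 ft", "procedure",
                  lambda s: s["stable_at_ft"] >= 1000)

print(regulator.audit({"fuel_kg": 4200, "reserve_kg": 3000, "stable_at_ft": 800}))
# -> ["Airline: 'stabilised by 1000 ft' (procedure)"]
```

Note that safety, in this framing, is not stored in any one level: it emerges from whether the whole set of constraints, taken together, keeps the process within bounds.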

Systems have to contend with uncertainty. Bookstaber (2017), in his book on the financial crash of 2008, makes the point that complex systems demonstrate non-ergodicity: put simply, nothing happens the same way twice. The implication is that, despite the fact that most of commercial aviation is heavily proceduralised and takes place within defined spatial arrangements, on every flight pilots have to create new solutions to achieve the same goal. Every flight is a subtle blend of ‘the same as before’ and ‘not quite the same’. Furthermore, Bookstaber talks of ‘radical uncertainty’: because of emergence and non-ergodicity, events can unfold in a manner that cannot be anticipated or easily explained. Competence can therefore be seen as a set of behaviours directed at accomplishing goals or solving problems in a dynamic, uncertain workplace.
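
A toy simulation can make the ‘same but not quite the same’ point tangible: run an identical nominal plan twice with small random perturbations and the trajectories never repeat. The sketch below is mine, not Bookstaber’s, and all the numbers are arbitrary.

```python
import random

def flight(seed: int, steps: int = 8) -> list[float]:
    """One execution of the same nominal plan, perturbed by small random
    conditions (winds, delays, clearances). Figures are purely illustrative."""
    rng = random.Random(seed)
    deviation, path = 0.0, []
    for _ in range(steps):
        deviation += rng.gauss(0, 1.0)   # small perturbations accumulate
        path.append(round(deviation, 2))
    return path

# Two flights flying 'the same' plan, yet no two trajectories match.
print(flight(seed=1))
print(flight(seed=2))
```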

The importance of taking a behavioural perspective is reinforced by Hollnagel’s view that success and failure are equally probable outcomes of the actions of individuals. We therefore need a way of looking at how systems create contexts for individual behaviour. This concept is an important contribution to safety thinking: it forces us to move away from retrospective judgements of performance and to focus, instead, on the fitness of the goal-directed behaviour of individuals. In order to do work, we are all engaged in making judgements about the effort needed and the process to be applied to get a job done. We make trade-offs between being very thorough and being just effective enough, or efficient, to achieve the goal. Hollnagel (2009) calls this the efficiency-thoroughness trade-off (ETTO) principle. These trade-offs affect the probability of a successful outcome. In all areas of activity, systems can cope with variations in performance up to a point: there is buffering capacity. However, there comes a point where a threshold is reached and continued operation becomes impossible: we have reached a system boundary, or limit of viability.
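
As a crude numerical caricature of the ETTO principle, consider the sketch below. The functional forms and the hard deadline are entirely invented; the only point it makes is that more thoroughness improves the outcome right up until it breaches a boundary, at which point the task fails anyway.

```python
def success_probability(thoroughness: float) -> float:
    """Toy ETTO model: thoroughness raises the chance of catching a
    problem but consumes the time budget; past the deadline (a system
    boundary) the task fails regardless. All numbers are invented."""
    catch_rate = 1 - (1 - thoroughness) ** 2    # diminishing returns on checking
    time_used = 0.2 + 0.9 * thoroughness        # checking eats into the budget
    on_time = 1.0 if time_used <= 1.0 else 0.0  # hard deadline
    return catch_rate * on_time

for t in (0.2, 0.5, 0.8, 0.95):
    print(f"thoroughness={t:.2f} -> P(success)={success_probability(t):.2f}")
# P(success) climbs with thoroughness, then collapses to 0 at t=0.95
# because the deadline boundary has been crossed.
```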

From an individual’s perspective, what resilience engineering tries to identify is how we detect the approach of a boundary and how we manage performance in those boundary-layer states such that safe operations are restored. Both the identification of warning signals and the activity needed to manage marginal performance are forms of competence. Two key problems emerge from the discussion so far, both touched on in previous chapters. First, procedures and technology are designed to accommodate a core set of anticipated conditions, but workers still need to be able to maintain control while dealing with unplanned contingencies. Second, problems inherent in a design will emerge at a point removed in space and time: systems demonstrate cross-scale effects. As a result, the knowledge needed to understand a specific problem might not be available at the moment it is required. Both are serious challenges when considering what competence might look like. With that in mind, I now want to outline an organising framework that describes aviation from a systems perspective.
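
One very simple way to picture the detection of an approaching boundary is a smoothed trend monitor that alerts once it enters a defined margin of a viability limit. The sketch below uses an exponentially weighted moving average; the limit, margin and ‘workload index’ figures are all invented, and real warning signals are, of course, far noisier than this.

```python
def drift_alerts(samples, limit, margin=0.8, alpha=0.5):
    """Flag samples whose smoothed value enters the 'boundary layer',
    i.e. comes within a set margin of a viability limit.
    All parameter values are arbitrary illustrations."""
    smoothed, alerts = 0.0, []
    for i, x in enumerate(samples):
        smoothed = alpha * x + (1 - alpha) * smoothed   # exponentially weighted average
        if smoothed >= margin * limit:
            alerts.append((i, round(smoothed, 2)))
    return alerts

# e.g. a workload index drifting towards a notional limit of 10
print(drift_alerts([2, 3, 4, 6, 7, 8, 9, 9], limit=10))   # [(7, 8.47)]
```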

 