Instructional Design vs Competency-based Training
Innovation in the training and accreditation of pilots represents a challenge for aviation authorities. Because of the requirement to act as guarantor of the safety of the national aviation network, authorities tend to be conservative and accreditation regimes based on hypothetical ‘worst-case’ scenarios (engine failure after take-off or on approach in bad weather, for example) have an apparent logic. However, as risks emerge in the operational environment, events get added to the training and testing schedule (‘upset recovery’, for example). These accretions result in congested training, given that airlines are reluctant to increase resources. The frustration with the ‘one-size-fits-all’ model, especially given that many of the events and manoeuvres rehearsed are rare in real life (because of the high reliability of the technology), has prompted a shift towards a competence-based approach in Europe. The incorporation of training models into supranational audit frameworks, such as the IATA Operational Safety Audit (IOSA), means that airlines have been presented with a patchwork of training models they can adopt. Unfortunately, the position is complicated by the fact that these different approaches are not simply substitutes, one for another.
The ID model and the ‘competency’ concept are probably better seen as overlapping rather than different approaches to developing effective training systems but, before we look at them in more detail, we need to reflect on the nature of airline training. Broadly speaking, initial pilot training takes a novice and turns them into a licence holder. Armed with their licence, new pilots have a body of knowledge about general principles and equipment and have demonstrated their ability to control a small aircraft. The next step is to apply that basic framework of understanding to a specific, usually larger and more complex, aircraft type while learning to operate in a commercial environment. Once established in the industry, pilots then consolidate their skills, broaden their abilities by learning new pieces of equipment and procedures and occasionally apply their skills to a new aircraft type. Airline pilot training, then, involves acquiring new skills and knowledge, extrapolating from that basic foundation to different aircraft types and then, once in employment, consolidating and reinforcing through practice. We should also note that pilots are required to periodically renew and validate their licences and ratings, which are their approvals to operate. The testing required by the state authority to renew a licence or rating is typically delegated to an airline or a flight school. The required internal company-specific pilot validation processes and the requirements of the state in relation to licence renewal and revalidation are often combined into single events. In Europe, for example, a licence proficiency check (LPC) and an operator proficiency check (OPC) can be satisfied with a single simulator profile. This duality of intent, while efficient, can sometimes blur the purpose of an activity.
From this brief discussion, it is clear that initial training for the award of a licence or a rating differs from both the maintenance of skills once in employment and the periodic checking of proficiency through testing. All three areas of interest may fall under the generic umbrella of ‘training and checking’ but are different processes. The methods we use to design training for new hires will differ from the activities involved in developing in-service consolidation, and it is this distinction that seems to be at the root of much of the misunderstanding that surrounds ‘training’ as a function in airlines. And just to complicate matters, testing or ‘checking’ requires yet another approach.
Central to ID is the need for a task analysis (TA) that describes the work done by a trained person. The TA is used to create a set of training objectives that describe the skills and knowledge that underpin a performance. At the end of a training course, we can test against the objectives to see if the trainee has met the output standard. So, for a specific piece of, say, navigation equipment we might want to define some underlying theory, some general characteristics of navigation equipment, the broad functioning of the specific equipment installed in the training aircraft, its principles of operation, utilisation, interactions with other equipment, normal and reversionary modes and actions in the event of failure. With that fundamental knowledge base, the trainee can then gain experience by using the piece of equipment through practical training. Because ID deals with the training inputs needed to bring about change, the TA can often be quite detailed.
A common complaint made about classical ID by advocates of the competency approach is that it is too rooted in task decomposition. It has always been recognised that task analysis methods run the risk of getting bogged down in minutiae and need a ‘stop’ rule. For example, one such rule would be that if a task element is so simple that most people can do it without training, or is so inconsequential that failure or poor performance would have no effect on outcomes, then it does not need further analysis. Again, if a piece of information does nothing to support the development of understanding or performance, then it is redundant. However, I think what lies at the root of the problem is that those who make the complaint perhaps do not recognise the difference between entry-level ab initio training and continuing professional development, which is what most airline recurrent training actually is. For an ab initio pilot we clearly need a robust understanding of the skills and the underpinning knowledge needed to operate safely. As a result, the task analysis needed is quite fine-grained. For an experienced pilot, by contrast, we can make assumptions about prior knowledge and simply identify the gap between the existing level of competence and that required to operate in any new context. We do not need a TA but, instead, a gap analysis. For periodic checking, I have no need for an analysis of need at all. Instead, I can fall back on company procedures as the frame of reference against which to assess competence.
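The ‘stop’ rule described above is, in effect, a termination test for recursive task decomposition. A minimal sketch of how it might operate, assuming a simple nested-dictionary representation of tasks (every name, field and threshold here is invented purely for illustration and is not drawn from any ID standard):

```python
# Illustrative sketch of a task-analysis 'stop' rule: keep decomposing a task
# into sub-elements until an element is either trivial (most people can do it
# without training) or inconsequential (poor performance has no effect on
# outcomes). All field names and the task structure are invented for this example.

def needs_further_analysis(element):
    """Apply the 'stop' rule to a single task element."""
    trivial = element["trainable_without_instruction"]
    inconsequential = element["effect_on_outcome"] == "none"
    return not (trivial or inconsequential)

def decompose(task, analysed=None):
    """Recursively decompose a task, stopping where the rule says to."""
    if analysed is None:
        analysed = []
    for element in task["elements"]:
        if needs_further_analysis(element) and element.get("elements"):
            decompose(element, analysed)      # break it down further
        else:
            analysed.append(element["name"])  # leaf: analysis stops here
    return analysed
```

The point of the sketch is simply that the rule bounds the depth of the analysis: elements that fail the test become leaves of the task hierarchy rather than spawning ever-finer sub-tasks.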
Another tenet of ID is that the training delivered should reflect the entry level of the trainee. So, having graduated from basic training, our new pilot would not need to be retrained on all the fundamentals of, say, navigation equipment when they start their first job. They would simply need an explanation of the differences between what they already know and what the new systems are capable of.
Designing a ‘competence framework’ or model still requires an element of analysis. To demonstrate ‘competence’, an individual still needs an underpinning body of knowledge and a repertoire of skills. Both ID and the competence approach come back to the same fundamental starting point: a description of the job. The difference is the scope of that job or task description. While it will be quite detailed and fine-grained for initial training, for airline recurrent training it can be a description predicated on a set of declared assumptions, most of which are contained in company documents.
Another commonality between ID and ‘competencies’ that seems to cause some confusion is the concept of ‘training to proficiency’. In the ID approach, this is interpreted as exposing the student to as many repetitions as are needed to demonstrate achievement plus some additional attempts to consolidate. By definition, faster learners will progress through the material more rapidly than slower learners. ‘Competence’-based approaches require that trainees do not progress to the next stage of the course, or be signed off as competent, until they have demonstrated mastery of the current learning task. The difference between these two interpretations is that the ID approach is a rule for assigning time to achieve proficiency, a resource allocation rule, whereas the competence-based usage is really a constraint on progress: a hurdle or gate. They are the same idea but, ironically, one that is difficult to implement in aviation. Airline recurrent training is usually based around a ‘crew’ of two pilots who are scheduled together for an event. Allowing a free-flow model of training that adapts to the pace of the learner would seem to be at odds with the need to assign a ‘crew’ to the training device when one pilot is of higher ability than the other. True training to proficiency would require scheduling to be open-ended, with implications for subsequent rostering of crew. That said, implicit in both uses of the concept is that the trainee should be expected to show not only that they have learnt something but that they can do something to a desired standard.
The ID model requires that the standard of performance expected of a competent person is defined. For an entry-level course, it is quite easy to specify the output standard. For example, I can either name the parts of a jet engine or I cannot.
But how can I set a standard for, say, ‘communications’ or ‘decision-making’? Coupled with this is the issue of how to conduct assessment. I will cover the technical aspects of these issues in Chapter 11 but, for our purposes here, it is important to understand that both ID and the ‘competence’ approach have a requirement to assess and the problems inherent in judging performance are common to both. As an aside, we do have a secondary problem. In many jurisdictions, the assessment of individuals’ CRM skills in a classroom setting is actually expressly forbidden in regulations. If a competence-based approach requires trainees to demonstrate mastery, then this restriction on assessment reduces our opportunities to establish actual performance and, as a consequence, the effectiveness of our training.
This brings us to one final point. The classical ID model assumes that training will be evaluated. In short, evaluation requires us to take steps to establish that the training system actually works. For novice trainees, evaluation is typically covered by course satisfaction forms (‘happy sheets’), examinations and flight tests and, finally, success in gaining a licence. In the case of operational continuation training, we are really looking at how training contributes to successful performance in the real world: training transfer. The goal here is to estimate the added value that is represented by training. I was once told that the CRM programme I was running for an airline was, actually, no more than the cost of compliance: the regulator demands it so we have to deliver. The value of the course was its ability to satisfy a regulatory requirement.
This brief overview has hopefully indicated the scope of a function - training - that is crucial to safety. Quite often, an accident report will state that the crew were properly licensed and often reference is made to attendance at their most recent refresher event. Unfortunately, the content of training, the quality of its design and delivery, and the effectiveness of any measurement systems are never referred to. In short, the fact that the crew’s training record is up to date is of little relevance to the quality of their performance: we know that some boxes were ticked but know nothing of the competence of the crew.
To summarise, the Advanced Qualification Program (AQP) is a conventional ID model. Over time, different regulatory frameworks have incorporated technological innovations and broader industry developments. So, while the Alternative Training and Qualification Programme (ATQP) cherry-picked bits of AQP, it also included a version of Line Operational Safety Audits (LOSA) and required the routine use of flight data as a form of quality control - initiatives that had come into play since the original framing of the AQP regulation. More recently, evidence-based training (EBT), and its conjoined twin competency-based training (CBT), has been endorsed by the International Civil Aviation Organization (ICAO) and adopted by the European Union Aviation Safety Agency (EASA). The broad framework comprises some elements of the ID model but, now, with the emphasis placed firmly on assessing outputs. The use of behavioural markers in assessment is a requirement that has blossomed since AQP/ATQP was first launched. Once again, innovation has simply been the accommodation of existing developments.
The real problem we face in airline training is that we need to develop a programme of training that is aimed at already competent pilots but the tool kit we use was really designed for training novices. A competence approach, just as with ID, cannot get away from the fact that we need some sort of representation of the job as a starting point. We will discuss the idea of resilience in more detail in Chapter 3 but, in relation to training, the resilience engineering paradigm offers a telling critique of current thinking. Woods (2006) describes two components of a ‘textbook’ competence model. First, there must be a representation of the variability and uncertainty faced by an operator and second, there must be an understanding of how the strategies, plans and countermeasures provided by the organisation will handle these. Competence, from this perspective, represents a model of control and our training system, therefore, needs to be based around the skills of ‘control’ applied to a valid model of the operational environment. This insight offers a path through the ID/competence dichotomy and the implication is quite radical. While an ID approach might be sufficient to get a novice to the stage where they can function in the workplace, the real goal is to train individuals to cope with workplace variability. What we define as ‘competence’ will not be sufficient if performance is brittle under conditions of uncertainty. Consequently, the assessment of competence should aim to identify how well integrated are the pilot’s knowledge structures and how the pilot can apply those structures to novel situations. What follows from this is that training regimes designed to satisfy a regulatory requirement, while compliant, might not be fit for purpose. Airlines will probably need the flexibility to design training that meets the demands of their operating environment while accommodating the needs of their pilot cadre to sustain competence.
The role of the regulator will change from one of verifying compliance, a task increasingly delegated to third-party audits, to one of evaluating effectiveness. Finally, airlines, themselves, will probably need more expertise in the field of training development.