Monitoring and evaluation models
Abstract

Monitoring and evaluation (M&E) are undertaken for various reasons, and the purpose of an M&E exercise defines the approach to be adopted. This chapter presents the evaluation classifications of Vedung (1997), Worthen et al. (2004) and Stufflebeam (2000), together with the 21st-century evaluation approaches. Vedung's evaluation classification was influenced by what evaluation is believed to achieve, while Stufflebeam's classification was influenced by a desire to appraise which 20th-century evaluation approaches were relevant for future use. The 21st-century evaluation approaches are categorized as objectives-oriented, management-oriented, consumer-oriented, expertise-oriented, adversary-oriented and participant-oriented. The chapter argues that evaluators should be integral to project implementation, using evaluation as a corrective measure to achieve the needed outcome while keeping the broader project outcome in mind.

Introduction

Omonyo (2015) asserts that evaluation models are points of view or conceptions that various groups of theorists or evaluators are inclined to or approve of. Understanding evaluation models or paradigms is important for generating knowledge on how project M&E should be undertaken to effect the intended outcome on projects. Although theorists have developed several evaluation models over the years, selecting a model for a given evaluation became a problem, hence the need to merge common schools of thought or traditions (Gabriel, 2013). The classifications of evaluation models by Vedung (1997), Worthen et al. (2004) and Stufflebeam (2000) are presented below.

Classification of evaluation models

5.3.1 Evert Vedung's classification

Vedung's classification was influenced by what evaluation is believed to achieve (Gabriel, 2013). His focus was to ensure that evaluation satisfied the demands of public service and government. He identified 11 evaluation models and grouped them into effectiveness, economic and professional models (Gabriel, 2013). The effectiveness models concern evaluation approaches instituted by a desire to assess the outcomes of a project, policy or programme; under this category he places the goal-attainment model, side-effect model, goal-free evaluation model, comprehensive evaluation model, client-oriented model and stakeholder model (ibid). The economic models define evaluation approaches that measure the outcomes of a public policy or programme relative to the cost incurred, such as the productivity and efficiency models (ibid). Finally, the professional models focus on the question of who should perform the evaluation (Gabriel, 2013).

5.3.2 Stufflebeam's classification

Gabriel (2013) asserts that Stufflebeam's evaluation classification was influenced by a desire to appraise which of the 20th-century evaluation approaches are relevant for future use and which are not. Stufflebeam classifies 22 evaluation approaches into 4 categories, namely the pseudo-evaluation approach, the question- or method-oriented approach, the improvement- or accountability-oriented approach and the social agenda or advocacy evaluation approach (Gabriel, 2013; Zhang et al., 2011).

5.3.3 Evaluation approaches for the 21st century

According to Hogan (2010), evaluation approaches relevant and applicable in the 21st century have also been categorized. Worthen et al.
(2004) categorized evaluation approaches as objectives-oriented, management-oriented, consumer-oriented, expertise-oriented, adversary-oriented and participant-oriented evaluation approaches (Hogan, 2010).

5.3.4 Stufflebeam's context, input, process and product (CIPP) model

The context, input, process and product (CIPP) model originated in the late 1960s to deliver greater accountability for the US inner-city school district reform project, and it sought to address the limitations of traditional evaluation approaches (Zhang et al., 2011). The model has over the years been refined and applied in many disciplines, including education. The CIPP evaluation model is recognized as the foremost management-oriented evaluation approach and was developed by Daniel Stufflebeam (Hogan, 2010). CIPP is an acronym for its four core concepts: context, input, process and product evaluation. According to Mathews and Hudson (2001), context evaluation scrutinizes the programme objectives to determine their social acceptability, cultural relativity and technical adequacy, while input evaluation involves an examination of the intended content of the programme. Mathews and Hudson (2001) further opined that process evaluation relates to the implementation of the programme, that is, the degree to which the programme was delivered as planned. Finally, product evaluation is the assessment of programme outcomes (Mathews & Hudson, 2001).

Stufflebeam et al. (2000) noted that the model is intended for the use of service providers, such as policy boards, programme and project staff, directors of a variety of services, accreditation officials, school district superintendents, school principals, teachers, college and university administrators, physicians, military leaders and evaluation specialists. Stufflebeam et al. (2000) further stated that the model is configured for use in internal evaluations conducted by an organization's evaluators, self-evaluations conducted by project teams or individual service providers, and contracted or mandated external evaluations. The potential weaknesses of the management-oriented approach, as opined by Hogan (2010), may stem from evaluators giving partiality to top management, from evaluators' occasional inability to respond to questions, from costly evaluation processes and from the assumption that important decisions can be identified in advance.

The CIPP model is a comprehensive framework for guiding formative and summative evaluations of projects, programmes, personnel, products, institutions and systems (Stufflebeam, 2003). Stufflebeam (2003) stated that the CIPP model emphasizes that the evaluation's most important purpose is not to prove but to improve. Evaluation is thus conceived primarily as a functional activity oriented in the long run to stimulating, aiding and abetting efforts to strengthen and improve enterprises (Stufflebeam et al., 2000). However, the model also posits that some programmes or other services will prove unworthy of attempts to improve them and should be terminated (Stufflebeam, 2003). By helping stop unneeded, corrupt or hopelessly flawed efforts, evaluations serve an improvement function by assisting organizations to free resources and time for worthy enterprises (Stufflebeam & Shinkfield, 2007). Consistent with its improvement focus, the CIPP model places a priority on guiding the planning and implementation of development efforts (Stufflebeam et al., 2000).
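The four CIPP components can be read as a recurring set of guiding questions asked of a programme. As a purely illustrative sketch, not part of Stufflebeam's formulation, the structure and names below are assumptions showing how an evaluation team might record the components as a simple checklist:

```python
# Illustrative sketch only: the CIPP model is a conceptual framework, not an
# algorithm. All class and field names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class CIPPComponent:
    name: str                 # Context, Input, Process or Product
    guiding_question: str     # what this component asks of the programme
    findings: list[str] = field(default_factory=list)  # filled in as evidence accrues

def cipp_framework() -> list[CIPPComponent]:
    """Return the four CIPP components with their core questions."""
    return [
        CIPPComponent("Context", "Are the objectives socially acceptable, "
                                 "culturally relevant and technically adequate?"),
        CIPPComponent("Input", "Is the intended content of the programme sound?"),
        CIPPComponent("Process", "Is the programme being delivered as planned?"),
        CIPPComponent("Product", "What outcomes has the programme produced?"),
    ]

if __name__ == "__main__":
    for component in cipp_framework():
        print(f"{component.name}: {component.guiding_question}")
```

Each component would then accumulate findings as the evaluation proceeds, supporting both the formative and summative uses described in this section.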
The CIPP model also provides for conducting retrospective summative evaluations to serve a broad range of stakeholders. Potential consumers need summative reports to help assess the quality, cost, utility and competitiveness of products and services they might acquire and use. Other stakeholders might want evidence on what their tax or other types of support yielded (Stufflebeam, 2003). The CIPP model hence brings to the fore the role of management, that is, project leadership, in the implementation of M&E to ensure successful service and product delivery.

5.3.5 Scriven's goal-free evaluation model

The goal-free evaluation model, developed by Scriven in 1972, posits that in investigating the set objectives of a project or programme, other project outcomes should also be considered. It is therefore necessary to look beyond the intended outcomes and also examine the unintended outcomes of the project (Omonyo, 2015). While projects are ultimately considered successful when cost, quality and time targets are achieved, other indirect outcomes, such as beneficiary satisfaction and the environmental and socio-economic impact of the project, should be evaluated as well. These assessments could be done while project implementation is ongoing or at the completion of the project, while providing meaningful improvement measures. To this end, a logic or programme model is usually developed for the project and tested for validity with the data collected from the evaluation process. This model stresses the approach to monitoring and evaluation, particularly regarding data collection and the utilization of the data.

5.3.6 Stake's responsive evaluation model

As early as 1975, Stake developed the responsive evaluation model, also referred to as the naturalistic or anthropological model. This approach concentrates evaluation on the intended outcomes relating to the programme activities, in contrast to Scriven's model, which places much emphasis on the unintended outcomes of projects. The model argues that the needs of clients are paramount to every project, and hence satisfying them should be the main preoccupation of M&E. Gathering project data is key in the M&E process; this notwithstanding, instead of depending on the scientific methodologies of experimental psychology, human observations and judgments are heavily relied upon, drawing on a journalistic approach to the evaluation. While relying on qualitative methodologies in a naturalistic evaluation, precise methods for collecting, analyzing and interpreting data are optional.

5.3.7 Patton's utilization-focused evaluation model

A management-oriented evaluation model was developed by Patton in 1978, referred to as the utilization-focused evaluation model. As has been strongly articulated in earlier sections, M&E serves many purposes, particularly decision making by the project implementation team to inform ongoing activities (corrective measures) or future projects. Patton argues that decision makers have often ignored evaluation findings; he suggests that as early as possible in project planning, key stakeholders, such as the relevant decision makers and the audience of evaluation reports who utilize evaluation findings, must be identified. Establishing effective collaboration between the evaluation team and the consumers of the evaluation findings is therefore important.
5.3.8 Guba's ethnographic evaluation model

In 1978, Guba proposed the ethnographic evaluation model. As argued by this model, evaluators are an integral part of project implementation, as they are involved in the project from inception up until completion and participate in the day-to-day monitoring and supervision of the project. The philosophy behind Guba's model is to afford evaluators the opportunity to obtain a detailed description of the project being implemented and to convey the same to the project stakeholders. The model advocates the involvement of, and communication flow among, the key stakeholders of the project in M&E during project implementation.

Summary

Chapter 5 provides an understanding of the evaluation models and their underpinnings. The classifications of evaluation models by Vedung (1997), Worthen et al. (2004) and Stufflebeam (2000) were presented and discussed. Similarly, the evaluation approaches of the 21st century were categorized as objectives-, management-, consumer-, expertise-, adversary- and participant-oriented. Stufflebeam's context, input, process and product (CIPP) model, Scriven's goal-free evaluation model, Stake's responsive evaluation model, Patton's utilization-focused evaluation model and Guba's ethnographic evaluation model were also discussed. The next chapter provides a conceptual understanding of M&E in construction.

References

Gabriel, K. (2013). A conceptual model for a programme monitoring and evaluation information system. Stellenbosch: Stellenbosch University.

Hogan, R. L. (2010). The historical development of program evaluation: Exploring past and present. Online Journal for Workforce Education and Development, 2(4), 5.

Mathews, J. M. & Hudson, A. M. (2001). Guidelines for evaluating parent training programs. Family Relations, 50(1), 77-87.

Omonyo, A. B. (2015). Lectures in project monitoring & evaluation for professional practitioners. Germany: Lambert Academic Publishing.

Stufflebeam, D. L. (2003). The CIPP model for evaluation. In: International handbook of educational evaluation (pp. 31-62). Dordrecht: Springer.

Stufflebeam, D. L. & Shinkfield, A. J. (2007). Evaluation theory, models, and applications. San Francisco, CA: Jossey-Bass.

Stufflebeam, D. L., Madaus, G. F. & Kellaghan, T. (eds.). (2000). Evaluation models: Viewpoints on educational and human services evaluation (2nd edn.). Netherlands: Springer Netherlands.

Vedung, E. (1997). Public policy and program evaluation (pp. 209-245). Piscataway, NJ and London: Transaction.

Worthen, B. R., Sanders, J. R. & Fitzpatrick, J. L. (2004). Educational evaluation: Alternative approaches and practical guidelines (3rd edn.). Boston: Allyn & Bacon.

Zhang, G., Zeller, N., Griffith, R., Metcalf, D., Williams, J., Shea, C. & Misulis, K. (2011). Using the context, input, process, and product evaluation model (CIPP) as a comprehensive framework to guide the planning, implementation, and assessment of service-learning programs. Journal of Higher Education Outreach and Engagement, 15(4), 57-84.