
Types of evaluation

Evaluation can be classified into several types depending on the intended purpose or stage of its implementation (Omonyo, 2015). Based on the timing of the evaluation, three types exist according to Tache (2011) and Gudda (2011): ex-ante evaluation, which is conducted before the implementation of the project; in-vivo or mid-term evaluation, also known as interim evaluation, which is undertaken during the implementation of the project; and final evaluation, which is conducted after project implementation. The mid-term evaluation is formative in nature and can occur several times depending on the information needed, while the final evaluation is summative in nature and is often conducted by external evaluators. The ex-post evaluation, conducted after the period of the final evaluation, is done purposely to ascertain the level of sustainability or impact of the project on the beneficiary community (IFRC, 2011). Other classifications are based on those responsible for the evaluation, namely whether it is undertaken internally by persons directly involved with the project or externally by stakeholders and donor agencies (Igbokwe-Ibeto, 2012). The IFRC (2011) further identifies participatory evaluation, which is conducted with the involvement of beneficiaries and all other key stakeholders with the aim of empowering them and building capacity, ownership and support for the project.

Joint evaluation is conducted in collaboration with more than one implementing partner, which can help build consensus at different levels as well as credibility and joint support. Joint evaluation has shown numerous benefits, which include strengthening evaluation through harmonization and capacity development; shared good practice, innovations and improved programming; reduced transaction costs and management burden (mainly for the partner country); improved donor coordination and alignment; increased donor understanding of government strategies, priorities and procedures; and greater learning by providing opportunities for bringing together wider stakeholders. Learning from evaluation then becomes broader than simply organizational learning: it also encompasses advancement of knowledge in development (UNDP, 2009). The IFRC (2011) also highlights the objectivity and legitimacy of joint evaluation, which enables a greater diversity of perspectives and consensus building, a broader scope able to tackle more complex and wider-reaching subject areas, and enhanced ownership through greater participation.

Evaluation is also viewed based on its focus, whether for accountability purposes (summative) or for learning and improvement of performance by management (formative). The IFRC (2011) likewise distinguishes the summative and formative types of evaluation based on when they occur within the project implementation process. Formative evaluation occurs during project implementation to improve performance and assess compliance, whereas summative evaluation occurs at the end of project/programme implementation to evaluate effectiveness and impact (ibid).

The IFRC (2011) subsequently describes five types of evaluation based on the methodology adopted. These include real-time evaluation, which is conducted during project implementation to provide immediate feedback for modifications to improve ongoing implementation, with emphasis on direct lessons learnt, and meta-evaluation, which is used to assess the evaluation process itself. The others are thematic evaluation, which focuses on one theme such as quality, cost or time and is usually undertaken across several projects; cluster or sector evaluation, which focuses on a set of related activities of a project across multiple sites; and, finally, impact evaluation, which focuses on the effect of a project rather than on its management and delivery (ibid).

Need for evaluation

M&E of development activities provides government officials, development managers and civil society with better means of learning from experience, improving service delivery, planning and allocating resources, and demonstrating results as part of accountability to key stakeholders. Within the development community, there is a strong focus on results, and this helps explain the growing interest in M&E. The presence of sound evaluation does not guarantee high-quality services, nor that those in authority will heed the lessons of evaluation and take the needed corrective actions; it provides only one of the ingredients needed for quality assurance and improvement (Stufflebeam, Madaus & Kellaghan, 2000). There are many examples of defective products that have harmed consumers not because of a lack of pertinent evaluative information, but because decision makers ignored or covered up alarming evaluation findings rather than acting on them (Stufflebeam et al., 2000). One clear example was the continued sale of the Corvair automobile after its developers and marketers knew of its rear-end collision fire hazard. Society therefore has a critical need not only for competent evaluators but for evaluation-oriented decision makers as well (Stufflebeam & Shinkfield, 2007).

For evaluations to make a positive difference, policy-makers, regulatory bodies, service providers and others must obtain and act responsibly on evaluation findings. Stufflebeam and Shinkfield (2007) believed that everyone who plays a decision-making role in serving the public should obtain and act responsibly on evaluations of their services. Fulfilling this role requires each such decision maker to take appropriate steps to become an effective, conscientious, evaluation-oriented service provider. The production and appropriate use of sound evaluation are among the most vital contributors to strong services and societal progress (Stufflebeam & Shinkfield, 2007).
