Monitoring and evaluation in the public sector
Since monitoring is a major factor in effective delivery, it is paramount that government departments and units carry out this activity, especially in developed countries. Cracknell (1994) notes that M&E is a natural function of management. At the operational level, monitoring supplies the data and information needed for the evaluation process. Evaluation is a vital management procedure in its own right, and herein lies the significant difference between the two activities: while monitoring is general, evaluation is essentially selective (Cracknell, 1994). The resources available to agencies mandated to undertake M&E usually allow only relevant, specified evaluations to be done. It was once usual to evaluate projects on a “once-in-a-lifetime” basis, but there is now an emerging tendency to routinely evaluate projects, programmes and even portfolios, which can yield useful lessons for future reference. The evaluation function is the responsibility of a particular unit or section of the public organization, and it tends to develop first in departments that spend considerable amounts of public funds, supervised by effective M&E leadership. This is vital because of the need for accountability, and also because these departments are heavily involved in project assistance and need a feedback mechanism on the effectiveness of the project appraisal techniques in use, which gives impetus to effective communication in the monitoring and evaluation process. By contrast, the function develops more slowly in ministries that spend public funds directly, usually on nationwide programmes or portfolios rather than on easily identified projects. In these cases, accountability appears weaker, and M&E’s main purpose, which is to enable improved performance in implementing broad policies, is less clearly defined.
In recent years, pressure for the privatization of these departments has pushed ministries to demonstrate cost-effectiveness to a reasonably high standard. This has enhanced the status and importance of M&E in both the public and private sectors. Under these circumstances, evaluation is bedeviled by the problem of coping with competing (and sometimes incompatible) objectives, which are often rather abstract, such as welfare, security or health. M&E in the public sector in the UK, in particular, has made tremendous advances in recent years, with no sign that the momentum has slackened (Cracknell, 1994), though many challenges remain in the continued development of evaluation techniques. One of these challenges is the view held by some policy-makers in government that M&E policies may involve complicated trade-offs between competing interests, which cannot be evaluated in any scientific way. While Cracknell (1994) acknowledges this view, he argues that there is plenty of scope for evaluating the effects of policy, even while shrinking from evaluating policy decisions directly, and that the lessons so learned can be a valuable input into future policy decisions.
Another major challenge is the fear among political leaders that an evaluation policy may threaten their interests and may, to a large extent, complicate the trade-offs that often typify policy-making. It is by no means uncommon for ministers to indicate that they do not wish particular policies to be evaluated. Here again, the best response may be simply to stress the value of learning the outcomes of past policy decisions as one of the inputs into fresh policy decisions. None of these obstacles seems likely to prevent the continued development of public-sector M&E in the UK. Recent evidence shows that M&E is becoming fully incorporated into existing administrative and management procedures. Evaluation expertise is increasing, and well-publicized evaluation reports provide a good basis for better-informed public debate. Cracknell (1994) posits that M&E is well past its probationary period and has established itself as a vital part of public-sector investment management in the UK and Australia.
9.7 Monitoring and evaluation policy challenges
Monitoring and evaluation policy challenges in developed countries
Despite the rapid acceptance of M&E systems, many are clearly having difficulty living up to the ambitious demands placed on them. Many project or programme M&E systems have been criticized for their inadequacy and narrow efficiency: information often arrives too late, sometimes it does not answer the right questions, and sometimes it is costly to retrieve. This calls for a quick response through strategies for effective communication of M&E data and findings. In other cases, attention is narrowly focused on certain quantitative and financial aspects of projects, and most of the information refers only to the period of physical implementation. Other challenges include an excessive focus on monitoring project implementation; limited studies on how programmes operate, how they are sustained or whether they produce their intended impacts; a focus on capital rather than recurrent budgeting; M&E units being located in agencies created to oversee implementation; and short-term planning and budgetary cycles that lead to a focus on short-term implementation objectives. The capacity of M&E leadership is a major determinant of the overall M&E process: effective M&E leadership drives the effective planning and implementation of M&E for the desired outcomes.
Findings and lessons learnt
The review above shows that the implementation of M&E in the UK and Australia is strongly linked to project/programme performance improvement and accountability in both the public and private sectors of their economies, while measures to address the remaining challenges are being seriously pursued. M&E in the UK and Australia has gained considerable acceptance and refinement across many economic sectors, such as construction and health. Based on the above review, therefore, the following lessons can be deduced:
Summary
In this chapter, a review of monitoring and evaluation practice in a developed-country context, namely the United Kingdom (UK) and Australia, was presented. The chapter revealed the nexus of effective communication and leadership in delivering effective monitoring and evaluation in the context of the UK and Australia. The various tools for effective M&E were reviewed, and an overview of both countries’ construction sectors was presented. Further, the philosophical basis and policy regulation guiding M&E in both countries were discussed. Finally, the challenges, findings and lessons learnt were all captured in this chapter. The next chapter focuses on the review of M&E practice in developing countries, namely Kenya and South Africa.
References
Abdul-Rahman, H., Wang, C. & Muhammad, N. B. (2011). Project performance monitoring methods used in Malaysia and perspectives of introducing EVA as a standard approach. Journal of Civil Engineering and Management, 17(3), pp. 445-455, doi: 10.3846/13923730.2011.598331
Ai Group (2015). Australia’s construction industry: Profile and outlook. Economics Research.
Cracknell, B. E. (1994). Monitoring and evaluation of public-sector investment in the UK. Project Appraisal, 9(4), pp. 222-230, doi: 10.1080/02688867.1994.9726955
Crawford, P. & Bryce, P. (2003). Project monitoring and evaluation: A method for enhancing the efficiency and effectiveness of aid project implementation. International Journal of Project Management, 21(5), pp. 363-373, doi: 10.1016/S0263-7863(02)00060-1
Department for Business Innovation & Skills (2013). UK construction: An economic analysis of the sector. Available online at www.bis.gov.uk. Retrieved on 31st August 2018.
Gyorkos, T. W. (2003). Monitoring and evaluation of large scale helminth control programmes. Acta Tropica, 86(2-3), pp. 275-282, doi:10.1016/S0001-706X(03)00048-2.
Hummelbrunner, R. (2010). Beyond logframe: Critique, variations and alternatives. In: Fujita, N. (Ed.), Beyond Logframe: Using Systems Concepts in Evaluation. Presented at the Issues and Prospects of Evaluations for International Development, pp. 1-34. Japan: Foundation for Advanced Studies on International Development.
Kamau, C. G. & Mohamed, H. B. (2015). Efficacy of monitoring and evaluation function in achieving project success in Kenya: A conceptual framework. Science Journal of Business and Management, 3(3), p. 82, doi: 10.11648/j.sjbm.20150303.14
Martinez, D. E. (2011). The logical framework approach in non-governmental organisations. CA: University of Alberta.
Myrick, D. (2013). A logical framework for monitoring and evaluation: A pragmatic approach to M&E. Mediterranean Journal of Social Sciences, doi: 10.5901/mjss.2013.v4n14p423
Rogers, P. (2008). Using programme theory to evaluate complicated and complex aspects of interventions. Evaluation, 14(1), pp. 29-48, doi: 10.1177/1356389007084674
World Economic Situation and Prospects (2014). Country classification: Data sources, country classifications and aggregation methodology.