HOW AND WHERE DOES CBA WORK IN THE REAL WORLD?
The existence of official procedures for undertaking CBA for policy formulation, discussed above, provides a prominent and focal indicator of potential use of economic appraisal. However, the existence of such procedures cannot be taken as an indication that CBA is actually used or that it is influential. To make such claims, further evidence about actual practice must be sought (Hahn and Dudley 2007; Hahn and Tetlock 2008). 'Use', for example, might be equated to actual uptake - that is, its presence in an impact assessment - although this should also involve asking questions about how comprehensive these uses were as well as their quality. Assessing 'influence' on policy outputs and ultimately outcomes is arguably more difficult still, requiring further quantitative and qualitative investigation. In what follows, we comment on a selection of the evidence that appears to throw light on some of these issues.
The Use and Quality of CBA
One sobering reflection on the use of CBA in the World Bank was revealed in a recent assessment by the Independent Evaluation Group (IEG 2011). The most striking headline was that the requirement for CBA formally codified in the Bank's operational procedures (OP10.04) was followed far less in practice (see also Little and Mirrlees 1994). The proportion of World Bank projects using CBA dropped significantly from 1970 to 2000. According to the Independent Evaluation Group (IEG 2011), one (proximate) explanation for this trend was a shift in investment portfolio from policy sectors with a tradition of using CBA (for example, energy, transport and urban development) to those which do not (for example, education, environment and health). Nonetheless, the group's report still found a significant reduction in the use of CBA in traditional sectors in which the World Bank remains heavily committed in terms of its investments (for example, physical infrastructure). Moreover, given the strides made in extending CBA thinking and practice to novel project venues, a question inevitably arises as to why this progress has not been translated into actual appraisal in these new sectors.
How generalizable are such findings? While not straightforward to judge systematically, an earlier report by OECD (2004) states that despite the desirability of CBA, it is not used in many of its member countries because of the difficulties of placing money values on a comprehensive range of costs and benefits. In the US, a review of 74 impact assessments issued by the US EPA from 1982 to 1999 found that while all of the policy actions contained in these assessments monetized at least some costs, only about half monetized some benefits (Hahn and Dudley 2007). Fewer still (about a quarter on average) provided a full monetized range of benefit estimates, although the number doing so increased notably over the sample period. These findings raise important points. Clearly, there is more to do to increase the use of CBA, not least to bring actual practice in line with official guidelines. However, it is not the case that use of economic appraisal is entirely lacking; it is usually present but often partial.
A logical further question is whether, when applied, CBA was any good - in the sense of conforming to good practice, following official guidance that an institution has itself adopted, or being judged as good quality according to some recognized criteria. Some of the indicators assembled by Hahn and Dudley (2007) for the US identify a number of relevant issues. For example, even for those (US EPA) applications which estimated costs and/or benefits, it was relatively uncommon for these estimates to be complete (rather than monetizing a small subset of impacts) and for point estimates to be accompanied by a range (that is, low and high estimates of the value of a given impact). Moreover, the consideration of different options or alternatives, in cost-benefit terms, was also infrequent. More commonly, practice involved simply comparing some (presumably) favoured single option for a policy change with the status quo. A similar finding emerged from a recent study of EU appraisals of environmental projects for which financing was requested under regional assistance schemes (COWI 2011).
Another way in which quality might be assessed is by asking how accurate CBA is in what it attempts to measure. Testing this might involve a mechanical exercise to compare the results of ex ante and ex post CBA studies of the same intervention. An ex ante CBA is essentially a forecast of the future: estimating likely net benefits in order to inform a decision to be made. Ex post CBA - that is, conducting further analysis of costs and benefits of an intervention at a later stage - can be viewed therefore as a 'test' of that forecast. In other words, what can we learn - for example, for future, similar applications or regarding the accuracy with which CBA is undertaken generally - with the benefit of this hindsight?
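The 'test' described above can be sketched in a few lines. This is a minimal illustration with hypothetical figures (the function name and all project numbers are assumptions, not drawn from the studies cited): an ex post CBA reveals the realized cost, which is compared against the ex ante forecast to measure cost escalation.

```python
# Illustrative sketch (hypothetical figures): treating an ex post CBA as a
# "test" of the ex ante forecast by comparing predicted and realized costs.

def cost_escalation(ex_ante_cost: float, ex_post_cost: float) -> float:
    """Percentage by which the realized (ex post) cost exceeds the ex ante forecast."""
    return (ex_post_cost - ex_ante_cost) / ex_ante_cost * 100.0

# Hypothetical infrastructure project: forecast cost 100m, outturn cost 145m.
escalation = cost_escalation(100.0, 145.0)
print(f"Ex post cost escalation: {escalation:.0f}%")  # prints "Ex post cost escalation: 45%"
```

Repeating this comparison across many projects, as in the meta-study discussed next, is what allows systematic biases in ex ante appraisal to be detected.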
Flyvbjerg et al. (2003) provide a meta-study of the ex ante and ex post costs of transport infrastructure investment in Europe, the USA and other countries (from the 1920s to the 1990s). The results are revealing: ex post cost escalation affected 90 per cent of the projects they examined. Nor are cost escalations a thing of the past according to these data. This illustrates one aspect of a broader problem afflicting real world CBA: 'appraisal optimism', that is, offering ex ante estimates of costs that are lower than they turn out to be in reality. In reaction to this, HM Treasury (2003) states that capital cost estimates for UK public appraisal of physical infrastructure investments should be increased in any CBA by about two-thirds. This direction of bias is evident for projects which involve large investment in physical infrastructure. The opposite can be found in the case of policy regulations. For example, MacLeod et al. (2009) find evidence across the EU for lower regulatory costs ex post than predicted ex ante, a finding they attribute to firms affected by these burdens finding more cost-effective ways of complying with policy. For the US, however, Hahn and Tetlock (2008) find no systematic evidence of such bias.
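An optimism-bias uplift of the kind attributed above to HM Treasury (2003) can be sketched as follows. All project figures, the discount rate and the time profile here are illustrative assumptions; only the 'about two-thirds' uplift on capital costs comes from the text.

```python
# Hedged sketch of an optimism-bias adjustment in the spirit of the HM
# Treasury (2003) guidance cited above: uplift the ex ante capital cost
# estimate before computing net present value (NPV). The discount rate and
# all benefit/cost figures are hypothetical.

UPLIFT = 2.0 / 3.0  # "about two-thirds" increase applied to capital costs

def npv(benefits, costs, rate=0.035):
    """Net present value of annual benefit/cost streams, year 0 onwards."""
    return sum((b - c) / (1 + rate) ** t
               for t, (b, c) in enumerate(zip(benefits, costs)))

capital_cost = 100.0
adjusted_capital = capital_cost * (1 + UPLIFT)  # ~166.7 after uplift

benefits  = [0.0] + [40.0] * 5           # benefits accrue in years 1-5
costs_raw = [capital_cost] + [5.0] * 5   # unadjusted ex ante appraisal
costs_adj = [adjusted_capital] + [5.0] * 5

print(f"NPV without uplift: {npv(benefits, costs_raw):.1f}")
print(f"NPV with uplift:    {npv(benefits, costs_adj):.1f}")
```

In this hypothetical case the uplift flips the NPV from positive to negative, which is exactly the point of the adjustment: a project that looks worthwhile on optimistic cost forecasts may not survive a correction for appraisal optimism.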