
PARTICIPATORY ASSESSMENT IN PRACTICE

Unfortunately, there is a lack of systematic evaluation of the policy impact and effectiveness of participatory assessment tools and methods, or of participatory assessment in general. One of the main reasons for this probably relates to the conflicting (and hidden) aims of participation noted above. There is often a discrepancy between how particular tools and methods are applied in practice and how they are prescribed by theory. While this chapter focuses on participatory assessment tools and methods for policy formulation, in practice they are often used to legitimize already decided policy. This has repercussions, for example when different expectations exist with regard to the role and intended impact of a participatory assessment. The policy evaluation literature shows numerous examples of disappointments among participants who have invested energy in participatory assessments only to find out in the end that policymakers did not use - or even abused - their contribution. Processes considered unfair, biased or as pseudo-participation generally do not contribute to public acceptance. A related explanation for the lack of systematic evaluation may be that (hidden) conflicts between the instrumental, substantive and normative aims of participation also undermine the evaluation itself, especially as the (main) goal of gaining acceptance remains implicit or is covered under the veil of substantive or normative aims.

BOX 2.1 PARTICIPATORY ASSESSMENT TOOLS AND METHODS: VENUES AND IMPACTS

Example 1: Consensus Conference (CC)

Consensus Conferences (CCs) have been documented for Denmark (see below), New Zealand (Goven 2003), the UK (Joss 2005; Irwin et al. 2012), Norway (Oughton and Strand 2004), Belgium (Vandenabeele and Goorden 2004), Canada and Australia (Einsiedel et al. 2001), and the Netherlands (Jelsma 2001). Most CCs have been commissioned by government and organized by an external institute, either affiliated to parliament (e.g. the Danish Board of Technology, an NGO since late 2011) or independent. Evaluations of specific CCs show that practice may deviate from theory due to particular contextual and venue-specific factors. For example, limited interaction between citizens and experts was reported for a Belgian and an Austrian CC (Vandenabeele and Goorden 2004; Joss and Bellucci 2002). Too little time for public debate, a lack of transparency and overall mistrust were reported for CCs in the UK and the Netherlands (Joss 2005; Jelsma 2001). In Denmark, where parliament has recognized the CC as an important policymaking tool, CCs have provided a basis for policy directions (Grundahl 1995), but have not had an immediate policy impact (Einsiedel et al. 2001; Vandenabeele and Goorden 2004; Joss and Bellucci 2002). At best, evaluations report learning among the participating citizens and experts.

Example 2: Participatory Backcasting (PB)

Participatory Backcasting (PB) has been applied in a wide range of cases: the future of natural areas in Canada (Tansey et al. 2002; VanWynsberghe et al. 2003); energy futures for the Netherlands (Hisschemoller and Bode 2011; Breukers et al. 2013), the UK (combined with multi-criteria appraisal; Eames and McDowall 2010) and Belgium (Keune and Goorden 2002); sustainable households in five countries (Green and Vergragt 2002); and long-term changes in Swedish city life (Carlsson-Kanyama et al. 2003). Participants are usually stakeholders representing different sectors and groups. There is no evidence of an immediate policy impact of PB. Yet, as Quist (2007) shows, it may encourage higher-order learning among participants as well as follow-up programmes. Venue-specific factors shape how PB works out in practice. An example is provided by the Dutch Hydrogen Dialogue (2004-2008), funded by the Dutch Organization for Scientific Research, which addressed the question of how hydrogen could contribute to a future sustainable energy system (Hisschemoller and Bode 2011).

About 60 stakeholders (from the Netherlands and abroad) participated, including energy companies, small innovative firms, knowledge institutes, vehicle lease and transport companies, NGOs and one association of home owners considering the establishment of a hydrogen-based energy system in their neighbourhood. Since participants placed a high value on the utilization of policy-relevant results, the project team engaged three former Dutch MPs as independent chairs of three dialogue groups. Participants were invited based on the outcome of a Repertory Grid exercise (van de Kerkhof et al. 2009), which distinguished three perspectives on a 'hydrogen economy'. PB was then used to develop (competing) hydrogen pathways.

At a 'Confrontation Workshop', the pathways were reviewed by international keynote speakers, a national Advisory Board including experts and policymakers, and the participants themselves. In this application of PB, creative conflict was a central design issue, intended to stimulate learning through interaction between stakeholders from different (inter)national networks. However, the anticipated learning effect was hampered because the conflict on substance turned into a conflict of interests. Eventually, the participants from the national Energy Research Institute distanced themselves from the entire dialogue report because, in their view, the dialogue facilitators did not sufficiently distinguish energy experts' views from non-expert opinions.

The dialogue did not have an immediate impact on policy. However, a few years later the Dutch national gas infrastructure company Gasunie started implementing the option that had been most controversial throughout the dialogue: adding large quantities of H2 to a local gas infrastructure. The actors who took most advantage of the dialogue were small innovative entrepreneurs seeking like-minded stakeholders with whom to start up transition experiments.

Some authors observe resistance among policymakers and their techno-scientific advisers to participatory exercises (for example, Irwin et al. 2012). Participatory assessments, and public participation in general, increase uncertainty among policymakers with respect to the timing and actual outcome of a policy formulation process (Hisschemoller and Hoppe 2001). Whereas policymakers may like the idea that participatory assessments contribute to the public acceptance of policies, they dislike the idea that successful participatory processes may diminish their control. Hence, they may have no interest in knowing about the impact of participatory assessment tools and methods.

Another explanation for the lack of systematic evaluation relates to difficulties in measuring the impact of such tools and methods. First of all, policy learning is a slow process, as is generally the case with the utilization of research. Evaluating effectiveness is difficult because policy change is a complex process that takes place over periods of a decade or more (Sabatier 1999). Participatory assessments may therefore have an impact only in the longer term, which may be difficult to measure. Difficulties in measuring the impact of participatory assessments also relate to the fact that the impact may not be restricted to changes in governmental policy, but may affect other domains and actors as well. There is evidence that participating stakeholders learn, especially about the different perspectives on the topic (Cuppen 2012b). Academics, companies, innovative entrepreneurs, NGOs and (local) government officials may then initiate follow-up activities beyond the level of (national) government (Quist 2007). Surprisingly, the authors' own participatory assessments on climate and energy have led to technological inventions and to initiatives for collaboration among stakeholders. This further suggests that the impacts of participatory assessments may be especially significant in the longer term.

Interestingly, this suggests that participatory assessment tools and methods are not primarily used in the venues where they were initiated; like participatory processes in general, they can also create venues of their own. Participatory assessment tools and methods are a vehicle for bringing together different actors, exchanging ideas and viewpoints, and mobilizing resources. In other words, they create new networks, most of which start out informally and at some distance from state policy venues. However, over time these venues may expand into new institutions for deliberating on policy objectives, options and strategies, as, for example, Sabatier (1999) shows.

A last point to be mentioned here is that there are many participatory assessments of varying quality, which makes it hard to evaluate their impacts systematically. For some, it is even questionable whether they can legitimately be described as 'participatory'. In an evaluation of the Austrian trans-disciplinary programme, Felt et al. (2012) find that the researchers, on the one hand, strongly convey the participatory discourse but, on the other, tend to protect their privileged position vis-à-vis societal stakeholders.

In conclusion, there is still much work to do in evaluating the real impact and quality of participatory assessment tools and methods, starting with the development of methodologies for categorizing and measuring these impacts. This would help to improve the quality and usefulness of future tools and methods.

 