Democratic innovations are in vogue, and it is thus not surprising that the body of empirical evidence is growing rapidly. However, sufficient data do not yet exist to evaluate all innovations. We have a comprehensive body of data for direct democracy in Switzerland and some US states. These data are copious and detailed and can easily be used for statistical analysis - at least for most aspects of direct democracy (Kriesi). The situation looks very different for deliberative procedures, where case studies still prevail (e.g. Rucht, Talpin). Thus, the methods applied by the authors in this volume reflect the methodological state of the art: research on direct democracy is often quantitative and uses statistical techniques to test hypotheses, whereas research on deliberative procedures is qualitative, case-study based and aimed at generating hypotheses.
Recently, reviews of the literature on democratic innovations have tried to shed more light on existing research gaps by accumulating and comparing existing figures and reports. Some of the chapters in this volume follow this line (e.g. Beetham, Geissel). These chapters draw on the empirical literature, gathering qualitative as well as quantitative evidence. What is still missing is a comprehensive data set that combines the available data systematically. However, even if such a data set were to become available, two main challenges would continue to confront research on participatory innovations: heterogeneity and multicollinearity.
The first major methodological difficulty is heterogeneity, i.e. the great variety of participatory innovations (Geissel). Concerning direct democratic procedures, for example, mediated and unmediated procedures interrelate in different ways with representative democracy and have very different impacts (Budge, Kriesi). Thus, evaluating ‘direct democracy’ as a whole is like measuring the speed of cyclists by taking all kinds of cyclists - from toddlers to racers, in different contexts, from lowlands to the Alps - into account and calculating the arithmetic mean. It is obvious that the arithmetic mean, or any other average value, does not contain much information and gives at most a very vague idea of the impacts of direct democracy (Budge, Kriesi). The same is true for discursive procedures, which show even more variety (e.g. Beetham, Smith). The only solution to this problem is to identify the most significant differences and to take them all into account. Such differentiated and sophisticated analyses are without doubt intricate and time-consuming. They require not only theoretically and empirically meaningful differentiations, but also a large number of case studies to allow insightful and robust conclusions. This work has only just started, and this volume reveals the current state of the art.
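The cycling analogy can be made concrete with a small numerical sketch. The figures below are purely invented for illustration; the point is only that a pooled average over heterogeneous subgroups obscures the subgroup structure that actually carries the information.

```python
# Illustrative sketch with invented numbers: averaging over a
# heterogeneous population hides the subgroup structure.
from statistics import mean, stdev

# Hypothetical speeds in km/h for two very different groups of cyclists.
toddlers = [4, 5, 6, 5]
racers = [38, 42, 40, 44]
pooled = toddlers + racers

print(f"pooled mean:  {mean(pooled):.1f} km/h")   # 23.0 - describes nobody
print(f"pooled stdev: {stdev(pooled):.1f} km/h")  # spread dwarfs within-group variation
print(f"toddler mean: {mean(toddlers):.1f} km/h") # 5.0
print(f"racer mean:   {mean(racers):.1f} km/h")   # 41.0
```

The pooled mean of 23 km/h characterizes neither group; only the differentiated subgroup figures are informative, which is the argument for disaggregating types of participatory innovation before evaluating them.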
Multicollinearity, i.e. strong correlations between explanatory variables, is the second major challenge. In many cases it is impossible to single out and prove that a change in attitudes, policies, skills or anything else is due to a particular innovation. One way to approach this problem would be to compare ex ante and ex post situations. However, ‘before and after’ studies are still rare. For example, political legitimacy before and after a participatory procedure is seldom analyzed - with one exception, namely research on citizens’ preferences. Several studies have scrutinized whether citizens change their preferences and attitudes whilst taking part in a participatory procedure. Fishkin’s research on deliberative polls is the best known and probably also the most advanced work in this field.
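The attribution problem can be sketched with simulated data. In the hypothetical example below, an outcome is driven entirely by one variable (exposure to an innovation), but a second, almost perfectly correlated covariate is included in the regression; least squares then cannot stably apportion the effect between the two, even though their joint effect is estimated well.

```python
# A minimal sketch with simulated data of why multicollinearity
# frustrates causal attribution: two explanatory variables that move
# together cannot be individually distinguished by least squares.
import numpy as np

rng = np.random.default_rng(0)
n = 100
innovation = rng.normal(size=n)                         # hypothetical exposure measure
context = innovation + rng.normal(scale=0.01, size=n)   # almost collinear covariate
y = 1.0 * innovation + rng.normal(scale=0.5, size=n)    # outcome driven by innovation alone

X = np.column_stack([innovation, context])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("correlation:", np.corrcoef(innovation, context)[0, 1])  # close to 1.0
print("coefficients:", coef)  # individually unreliable; only their sum is stable
```

The individual coefficients are essentially arbitrary, while their sum recovers the true joint effect of about 1 - mirroring the situation in which a participatory procedure and its context cannot be disentangled from observational data alone, which is why before-and-after designs matter.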
When starting this book, we hoped to find out whether democratic innovations might cure the current democratic malaise and, if so, which ones. While all the chapters of the book provide enlightening and instructive findings, clear and definitive answers to our questions are few and far between. There is no fast track to evaluating democratic innovations, and much more work remains to be done.