Evaluating a New Governing Institution: An Assessment of the Citizens’ Initiative Review
Democratic communities worldwide have implemented initiative elections as a way to let voters make policy decisions directly, but faced with highly complex and often strategic communication, voters can have difficulty making good choices. In 2010, the Oregon legislature approved the Citizens’ Initiative Review (CIR) in an effort to address these concerns while maintaining the public’s ability to engage in direct democracy. To see whether the process fulfilled its aspirations, our team of researchers evaluated the CIR for the state legislature.
The CIR convenes 24 randomly selected voters, chosen to mirror the electorate in terms of age, ethnicity, education, place of residence, and party identification, to spend five days deliberating about an upcoming ballot measure. During those five days, citizen panelists hear from proponents and opponents of the initiative, as well as public policy experts, and engage in facilitated small and large group discussion. At the end of the week, the panelists collectively write a Citizens’ Review Statement—containing key facts about the initiative and arguments for and against the measure—that appears in Oregon’s State Voters’ Pamphlet, delivered to every household in the state with a registered voter. The process was developed by Healthy Democracy, a non-profit that organizes and implements the week-long events, and is funded by private donations.
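For readers curious how a panel can be drawn to mirror the electorate, the sketch below shows one simple quota-based approach. It is an illustration under our own assumptions; the field names, quotas, and single-dimension matching are hypothetical, and Healthy Democracy’s actual selection procedure balances several demographic dimensions at once.

```python
# A minimal sketch of quota-based random selection, the general idea behind
# drawing a small panel that mirrors the electorate on key demographics.
# The quotas and field names are illustrative assumptions; the actual CIR
# procedure balances age, ethnicity, education, residence, and party
# identification simultaneously.
import random

# Hypothetical quotas for a 24-seat panel, by party identification.
QUOTAS = {"Democrat": 9, "Republican": 8, "Other/Unaffiliated": 7}

def draw_panel(voter_pool, quotas, seed=None):
    """Randomly fill each quota from a pool of voters willing to serve."""
    rng = random.Random(seed)
    panel = []
    for group, seats in quotas.items():
        eligible = [v for v in voter_pool if v["party"] == group]
        panel.extend(rng.sample(eligible, seats))
    return panel

# Example: pool is a list of dicts like {"name": "...", "party": "Democrat"}.
# panel = draw_panel(pool, QUOTAS, seed=2010)
```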
The CIR exemplifies the expansion of deliberation in local, state, national, and international governance. The past few decades have seen a proliferation of deliberative minipublics – highly structured, often face-to-face forums that allow a small, representative group of the public to engage in extensive discussion about a specific public policy issue. When incorporated into wider political discourse, minipublics can improve the quality of public discussion, serve as a decision-making shortcut for voters, and increase the legitimacy of democratic institutions. Meeting these ideals, however, requires that such processes maintain high-quality deliberation.
To test whether the CIR lived up to its deliberative aspirations, my colleagues—John Gastil, Justin Reedy, and Katherine Cramer Walsh—and I conducted an evaluation of the 2010 CIR pilot process. We based our evaluation on three criteria identified as essential to deliberation: analytic rigor, democratic discussion, and just decision making. In short, we looked to see whether the process allowed panelists to thoroughly identify and discuss important information, engage in equal and respectful discussions, and reach their decisions through fair voting mechanisms.
To assess the CIR’s quality on each of these criteria, a team of three researchers attended the first two reviews, held in August 2010. The first review focused on mandatory minimum sentences (Measure 73), and the second examined a system for the production and dispensation of medical marijuana (Measure 74). Our research team developed a real-time evaluation scheme that allowed us to rate the quality of each segment of the CIR, and we compared our assessments throughout the week. In addition, we surveyed the panelists at the end of each day of deliberation and again two months after their panels concluded, asking them to assess the quality of the process.
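To give a concrete sense of what such a real-time rating scheme can look like, here is a minimal sketch. The segment names, the 1–5 scale, and the averaging are assumptions for illustration, not the instrument we actually used.

```python
# Hypothetical sketch of a real-time evaluation scheme: three raters score
# each segment of the review on the study's three criteria, and per-criterion
# scores are averaged across raters for comparison. The segment names and
# 1-5 scale are illustrative assumptions, not the actual CIR instrument.
from statistics import mean

CRITERIA = ("analytic_rigor", "democratic_discussion", "just_decision_making")

# ratings[segment][rater] -> {criterion: score on a 1-5 scale}
ratings = {
    "advocate_testimony": {
        "rater_1": {"analytic_rigor": 4, "democratic_discussion": 5, "just_decision_making": 4},
        "rater_2": {"analytic_rigor": 4, "democratic_discussion": 4, "just_decision_making": 5},
        "rater_3": {"analytic_rigor": 5, "democratic_discussion": 4, "just_decision_making": 4},
    },
}

def segment_summary(segment_ratings):
    """Average each criterion across raters for one segment of the review."""
    return {
        criterion: mean(r[criterion] for r in segment_ratings.values())
        for criterion in CRITERIA
    }

for segment, by_rater in ratings.items():
    print(segment, segment_summary(by_rater))
```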
Our evaluation found the process highly deliberative. The review was structured to allow participants to learn the requisite information through the testimony and questioning of proponents, opponents, and issue experts. Throughout the week, panelists continually identified and honed key information and arguments and clarified lingering questions. The moderators, ground rules, and mixed discussion styles ensured equitable and respectful discussion, and a review of the final statements revealed that every piece of information stemmed from evidence presented during the review. In their survey responses, panelists reported high satisfaction with the process; most said they had learned enough to make a good decision and felt the process was free of bias.
Because the legislature implemented the CIR pilot process with the intent of evaluating its quality and impact, we presented our assessment of the CIR, along with our evaluation of its impact on voters, to the House and Senate rules committees. We used the following scorecard to convey, at a glance, the CIR’s deliberative quality during each review. The scorecard shows the process’s performance on each criterion for deliberation. Our full report, including our estimate of the impact the statement had on voters, is available online.
Criteria for Evaluating Deliberation       | Measure 73 (Sentencing) | Measure 74 (Marijuana)
-------------------------------------------|-------------------------|-----------------------
1. Promote analytic rigor                  |                         |
1a. Learning basic issue information       | B+                      | B+
1b. Examining underlying values            | B-                      | B
1c. Considering a range of alternatives    | A                       | B
1d. Weighing pros/cons of measure          | A                       | A
2. Facilitate a democratic process         |                         |
2a. Equality of opportunity to participate | A                       | A
2b. Comprehension of information           | B+                      | B+
2c. Consideration of different views       | A                       | A
2d. Mutual respect                         | A-                      | A
3. Produce a well-reasoned statement       |                         |
3a. Informed decision making               |                         | A
3b. Non-coercive process                   | A                       | A
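To support the kind of cross-comparison we discuss below, a scorecard like this can also be made machine-readable. The sketch below maps the letter grades onto a conventional 4.0-style point scale and averages them for each measure; the mapping and the averaging are our own assumptions, not part of the report.

```python
# Illustrative only: one way to make the scorecard machine-readable, mapping
# letter grades to a conventional 4.0-style scale so the two reviews can be
# compared numerically. The point mapping is an assumption, not part of the
# original report.
GRADE_POINTS = {"A": 4.0, "A-": 3.7, "B+": 3.3, "B": 3.0, "B-": 2.7}

# (Measure 73 grade, Measure 74 grade) for each scored criterion.
scorecard = {
    "1a. Learning basic issue information":       ("B+", "B+"),
    "1b. Examining underlying values":            ("B-", "B"),
    "1c. Considering a range of alternatives":    ("A",  "B"),
    "1d. Weighing pros/cons of measure":          ("A",  "A"),
    "2a. Equality of opportunity to participate": ("A",  "A"),
    "2b. Comprehension of information":           ("B+", "B+"),
    "2c. Consideration of different views":       ("A",  "A"),
    "2d. Mutual respect":                         ("A-", "A"),
    "3a. Informed decision making":               (None, "A"),  # grade missing from the source table
    "3b. Non-coercive process":                   ("A",  "A"),
}

def average(measure_index):
    """Mean grade points for one measure, skipping missing cells."""
    points = [GRADE_POINTS[grades[measure_index]]
              for grades in scorecard.values()
              if grades[measure_index] is not None]
    return sum(points) / len(points)

print(f"Measure 73 average: {average(0):.2f}")
print(f"Measure 74 average: {average(1):.2f}")
```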
In addition to our report, the legislature heard testimony from the proponents and opponents of the CIR as well as former panelists, and ultimately passed House Bill 2634—which permanently implemented the CIR—with bipartisan support. The CIR took place again in 2012, and Healthy Democracy is working to implement similar processes in other states and contexts.
Aside from confirming the quality of the CIR, our evaluation served two additional purposes. First, it demonstrated a way to evaluate deliberative public processes that we hope will prove useful across events and contexts. Cross-comparisons are necessary to fully understand the quality and value of deliberative projects, and such work will help practitioners develop the most efficient and effective formats for public input and deliberation. We hope that this evaluative scheme, and the scorecard presented above, will provide a means for such comparisons, ultimately aiding the design and development of future processes.
Moreover, this evaluation provided an opportunity to closely study one highly developed process and identify both its essential components and areas for improvement. We found that training in deliberative skills, mixed discussion styles, the presence of moderators, and panelists’ ability to take ownership of witness selection and statement writing all contributed to high-quality deliberation. And though the process was quite deliberative, it still had room for improvement. Advocates were not always prepared for sustained deliberation and were oriented toward winning. Preparing advocates for such discussions and managing their involvement well are crucial to the sustainability of deliberative projects. Similarly, panelists would have benefited from a brief lesson in scientific and statistical interpretation. Though the panelists generally sorted out highly technical information, a brief introduction to how research is produced and reported would have helped them sort through complex and conflicting claims. As the CIR and similar processes expand and adapt to new contexts, we hope that evaluations like this one will ensure their continued improvement.