News & Press: Faculty news

Will Informing Voters Lead to Better Public Services?

Thursday, July 16, 2015  
Posted by: Monica Illes on behalf of James Badham

To find out, Mark Buntaine pursues a social science experiment in Uganda.

Kampala, Uganda — Suppose your local government representatives were not getting the job done. The trash wasn’t being picked up, you didn’t have clean drinking water, your child was receiving a poor education, and potholes pitted local roads. Then suppose an election was coming up. You might vote for a changing of the guard. But if you were in Uganda, you might not. There are several reasons for this, and while tribal loyalty and electoral manipulations, such as bribery, are among them, Bren School assistant professor Mark Buntaine and a team of co-researchers suspect that a simple lack of information might also play a role. They have begun a new project to test their hypothesis as part of the EGAP Metaketa Initiative.

A resident walks past a refuse station in a town in Uganda. Can information help to improve roads, drinking water and other public services?

According to Buntaine, who has worked in Uganda for the past two years, plenty of evidence has been gathered indicating that residents in Uganda, like those in many other places and particularly in developing countries, often don’t know what their local officials are doing and therefore can’t use their vote to reward officials who do the right thing or unseat those who perform poorly. The goal of the research is to see whether information empowers people to engage, mobilize, and advocate for themselves, for instance, by electing local officials who will serve their needs and replacing those who don’t.

Funded by a grant from the Experiments in Governance and Politics (EGAP) network's Metaketa program, an initiative of the UC Berkeley Center on the Politics of Development, the project is intended to answer the question: Will providing people with the right information make them better off? It also reflects a recent trend toward experiment-based research in the social sciences.

“These days, a lot of work in the social sciences is experimental in nature,” says Buntaine. “That’s a real shift, and it’s happening all over. If you look at the top research outlets in the social sciences, you see a big turn toward experimental research.”

Bren assistant professor Mark Buntaine with Philip Thawithemwra, a local colleague on the Metaketa Initiative elections project.

“Instead of just observing the world,” as has been customary in social science, Buntaine explains, in the experimental approach, “You try to find a partner out there in the world to experimentally implement a policy in some places and not in others. Then you compare the treatment group to the control group. Because the two groups in an experiment are similar, you can draw some pretty strong conclusions about the impacts of the policy.”
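The comparison Buntaine describes can be sketched in a few lines. This is a minimal illustration with invented numbers, not project data: the "treatment" villages are those that hypothetically received service-quality information, the outcome is a made-up turnout rate per village, and the estimated effect is simply the difference in group means.

```python
# Invented turnout rates per village, for illustration only.
treatment = [0.62, 0.58, 0.71, 0.66, 0.60]  # villages that received information
control = [0.55, 0.57, 0.52, 0.60, 0.54]    # villages that did not

def mean(xs):
    """Arithmetic mean of a list of numbers."""
    return sum(xs) / len(xs)

# Because treatment was (hypothetically) randomly assigned, the two groups
# are similar on average, so the difference in means estimates the
# intervention's effect.
effect = mean(treatment) - mean(control)
print(f"estimated effect: {effect:.3f}")
```

Real analyses would also report uncertainty (for example, a standard error or confidence interval) before drawing conclusions, but the core logic of the design is this difference in group means.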

This type of research takes time, requiring many months first to find partners, then to design and implement interventions, and finally to see the effects of the various interventions. It can also be a challenge to accumulate knowledge when different groups of researchers work on different policies.

“There are two ways to accumulate knowledge about the impacts of policies, both of which have challenges,” Buntaine says. “One is the building-block approach, which involves accumulating credible results over time and across space from strong individual studies. The other way is to test whether the world generally fits patterns that we expect from theory. The first approach is often limited by a lack of similar studies, and the second is often limited by our inability to identify causation based on observed correlations. Our group is seeking the middle ground by joining a research initiative that will replicate the same information interventions across seven sites.”

The basic idea of this approach is to get research teams that are interested in the same types of policies to coordinate their projects. The teams work together to implement the same interventions — in this case, providing information to voters — in different countries. Ideally, you’ll get consistent results across various sites, making the results more generalizable.

A local enumerator conducts an audit of solid-waste services in Kampala.

Buntaine and co-researchers from four other universities — Temple University, the College of William and Mary, Brigham Young University, and the London School of Economics — will conduct their experiment around Uganda’s 2016 local elections. In parallel, other teams drawn from faculty and graduate students at dozens of universities will implement experiments for six other elections around the world.

Buntaine’s team will work with local organizations that aim to improve governance to select a nationally representative sample of about 25,000 Ugandans. They will then conduct independent audits of basic services that are the responsibility of local officials in the places where they live: the quality of water, roads, and solid waste collection, and educational outcomes for primary-school children. The researchers want to observe whether information of this type prompts voters to reward good politicians and punish bad politicians at the polls.

Once the information is collected, it will be put onto scorecards indicating whether basic services are above or below average. Weeks before the elections, the team will send the information about the relative quality of services to voters via mobile phone, then see whether it – rather than ethnic and tribal affiliations, bribes, and the like – helps to determine voters’ choices and levels of participation.
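The scorecard logic is straightforward to sketch. In this hypothetical example (district names and audit scores are invented, not from the study), each district's water-quality score is labeled relative to the average across all audited districts:

```python
# Invented audit scores for a single service (water quality), 0-10 scale.
water_scores = {"District A": 7.2, "District B": 4.1,
                "District C": 5.8, "District D": 6.3}

# Benchmark: the average score across all audited districts.
avg = sum(water_scores.values()) / len(water_scores)

# Label each district relative to that benchmark, as on a scorecard.
scorecard = {
    district: ("above average" if score >= avg else "below average")
    for district, score in water_scores.items()
}

for district, label in scorecard.items():
    print(f"{district}: {label}")
```

A relative label like this is easy to deliver by text message and easy for voters to interpret, which is presumably why the team favors it over raw audit numbers.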

Mark Buntaine

One ethical concern with this type of social experiment is that the interventions could potentially change election results. “We don’t want to flip election outcomes,” says Buntaine, adding that while their experiment may indeed alter voter behavior, any such shift will be on a small scale, not enough to change the results. “This experiment is about voter-level effects, not election-level effects,” he explains.

The hope is that local organizations, armed with the results of the experiment, can better carry out their own efforts to improve governance. In any case, the project must undergo four separate international and local ethical reviews.

The researchers face numerous challenges in attempting to take knowledge gained in one country and apply it to another. “Suppose we try to decrease corruption in revenue sharing of government funds around national parks in Uganda, an experiment we are actually doing,” says Buntaine. “We can run this transparency experiment and find out that the villages that receive information about revenue sharing are better able to hold local officials accountable. However, based on this, a lot of people will say, ‘That’s a solid conclusion for Uganda, but what about India, Peru, or anywhere else? Do we have any conclusive evidence that we can take what we learned in a single field site and apply it elsewhere?’”

The answer is not always clear, but, as Buntaine explains, most experiments test not only “whether” a policy works, but also “when and why” it works.

“We thought carefully about measuring ‘moderators,’” he says, referring to factors that affect whether having information helps in some cases but not in others. “For example, if you expect an election to be a landslide and don’t expect your vote to matter all that much, you may not change your behavior based on information; however, if the election were close, you might act differently. By testing whether the effect comes about only under certain conditions, we can build theory over time about the effectiveness of a class of policies.”
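A moderator analysis of the kind Buntaine describes amounts to estimating the effect separately within subgroups. The sketch below uses entirely invented voter-level data, with race competitiveness as the moderator:

```python
import statistics

# Invented records: (received_information, close_race, turnout).
data = [
    (True,  True,  0.70), (False, True,  0.55),
    (True,  True,  0.68), (False, True,  0.53),
    (True,  False, 0.56), (False, False, 0.55),
    (True,  False, 0.57), (False, False, 0.54),
]

def effect(close: bool) -> float:
    """Difference in mean turnout, treated vs. control, within one subgroup."""
    treated = [y for t, c, y in data if t and c == close]
    control = [y for t, c, y in data if not t and c == close]
    return statistics.mean(treated) - statistics.mean(control)

print("effect in close races:    ", round(effect(True), 3))
print("effect in landslide races:", round(effect(False), 3))
```

In these made-up numbers the information effect shows up mainly in close races, which is the kind of conditional pattern a moderator analysis is designed to detect.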

Theoretically, he explains, “We want to develop predictions about why policies work. Practically, we use this theory to advise policy-makers on how to design and implement policies.”

Buntaine expects the results to be available by late 2016.
