The past decade has seen an explosion of interest in the use of randomized controlled experiments to test public policies, on issues from health and public safety to agriculture and education. Practitioners are generating hard data and pushing evidence into the government sphere. But they are ruffling the feathers of conventional economists, who have long focused on models and qualitative field data. Some worry that the new focus on randomization could skew the questions that researchers look at, or could produce black-box results that cannot answer crucial questions about why something does or does not work. But more evidence is always a good thing, and there is plenty of room for academics of all stripes.

Despite disagreements, it is hard to dispute the value of the basic goal: to ensure that governments invest their limited resources in programmes that work as advertised, and look for ways to alter or eliminate those that do not. But in the messy space occupied by social scientists, it is not always easy to determine which programmes work, which do not, and why. Enter the randomized controlled trial, in which changes are measured in a selection of individuals or groups who have been randomly assigned to receive an intervention — or not. The medical industry has used such trials to tease out the effects of drugs for decades.
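The core logic of such a trial can be sketched in a few lines: randomly assign units to a treatment or control group, then estimate the intervention's effect as the difference in average outcomes. Everything below (the population, the effect size, the noise level) is hypothetical and purely illustrative.

```python
import random
import statistics

def run_trial(population, intervention_effect, noise=1.0, seed=0):
    """Illustrative sketch of a randomized controlled trial.

    Each unit is randomly assigned to treatment or control; the
    effect is estimated as the difference in group means. All
    numbers here are made up for illustration.
    """
    rng = random.Random(seed)
    treated, control = [], []
    for baseline in population:
        outcome = baseline + rng.gauss(0, noise)   # individual variation
        if rng.random() < 0.5:                     # random assignment is the key step
            treated.append(outcome + intervention_effect)
        else:
            control.append(outcome)
    return statistics.mean(treated) - statistics.mean(control)

# With many units, randomization balances the groups and the estimate
# converges on the true effect (2.0 in this toy example).
estimate = run_trial([10.0] * 10_000, intervention_effect=2.0)
print(round(estimate, 1))
```

Because assignment is random, any systematic difference between the groups can be attributed to the intervention rather than to who happened to receive it.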

As discussed on page 150, development economists have led the way, and are now running hundreds of trials that are designed to improve the effectiveness of international aid and, ultimately, the much larger pool of domestic spending by governments in developing countries. These researchers have already produced valuable insights, and governments are scaling up some of the results. Buses in Kenya are becoming safer thanks to stickers that urge passengers to speak up when they feel unsafe, and residents of the Indian state of Gujarat may soon benefit from the implementation of a new pollution-auditing system for industrial plants.

Evidence-based intervention

Many of the trials focus on human behaviour in very particular circumstances, but researchers are targeting larger questions as well. Some studies have looked at the impact of community-driven development efforts, with mixed results. Others are considering how trials could provide the evidence that governments and aid agencies need to improve the delivery of humanitarian relief. And then there are the long-standing questions about how much aid can accomplish in lifting people out of poverty. In a study published in May, a team of researchers ran randomized trials in six countries looking at whether a package of interventions that included cash, food, health care and training could give a lasting boost to the poor (A. Banerjee et al. Science; 2015). The evidence suggests that the answer is yes, for at least one year after the intervention has ceased (see Nature 521, 269; 2015).

Contrast that with the Millennium Villages Project (MVP), which began in 2004 and has pushed a comprehensive aid package into villages in ten countries in Africa. MVP researchers have begun evaluating how the project has done so far (see page 144). But the research protocol that they published last month acknowledges that it will be difficult to definitively answer questions about how well these villages have fared compared with surrounding villages that did not receive the intervention, largely because the project did not use an experimental approach from the outset. Information on whether the project is effective would have been useful for policymakers who are facing difficult choices about where to invest limited resources.


Many poverty-alleviation programmes focus on providing money with strings attached — only if families keep their children in school and attend health clinics, for instance — but some researchers are now advocating unconditional cash transfers. Paul Niehaus, an economist at the University of California, San Diego, co-founded the non-profit organization GiveDirectly in New York City to do just that. His team points out that many forms of development aid are complex and costly to administer, and suggests judging their effectiveness against that of simply giving money. This would be done in randomized trials, in which the intervention group receives development aid and the control group gets cash.

There is much to be learned from randomized trials, but researchers must acknowledge that such experiments have limitations, not least that they do not necessarily explain why something works or does not. In the case of community-driven development programmes, for instance, a randomized controlled trial can provide basic statistics about whether letting community councils make their own decisions hastens the delivery of basic services, improves the economy and advances the social well-being of women. But do these councils actually promote social cohesion in a meaningful way? The results so far are mixed, and this is where agencies and institutions can benefit from the softer, qualitative social-science data that many researchers have sought to move beyond.

Randomized controlled trials will not be able to provide answers to all of the world’s enduring questions, and they are not the only way to gather solid data. Economists have developed other quasi-experimental approaches, such as difference-in-differences and regression-discontinuity designs, that do not include randomization but can nonetheless provide rigorous statistics with which to judge a programme’s effectiveness.
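One such quasi-experimental method, difference-in-differences, can be sketched simply: compare the change over time in a group that received a programme with the change in a comparison group that did not, netting out trends shared by both. The data below are entirely hypothetical.

```python
import statistics

def diff_in_diff(treated_before, treated_after, control_before, control_after):
    """Illustrative difference-in-differences estimate.

    Subtracting the comparison group's change over time from the
    treated group's change removes trends common to both groups,
    leaving an estimate of the programme's effect. No randomization
    is required, only a plausible comparison group.
    """
    treated_change = statistics.mean(treated_after) - statistics.mean(treated_before)
    control_change = statistics.mean(control_after) - statistics.mean(control_before)
    return treated_change - control_change

# Hypothetical outcomes: both groups improve over time, but the treated
# group improves by 3 units more; the estimator recovers that extra 3.
effect = diff_in_diff(
    treated_before=[10, 11, 9], treated_after=[15, 16, 14],
    control_before=[10, 10, 10], control_after=[12, 12, 12],
)
print(effect)  # 3.0
```

The method's credibility rests on the assumption that, absent the programme, both groups would have followed the same trend, which is why such designs complement rather than replace randomization.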

In combination, these methods are providing policymakers with more information every day. But perhaps more important is what happens next. Politicians, government bureaucrats, activists and philanthropists will all be happy to talk about the value of randomized trials when the results support their policies and programmes. They must also have the courage to do so when the evidence goes the other way.