In Bangladesh, women received nutrition advice through an aid programme, but few were able to use it. Credit: G.M.B. Akash/Panos

Not long ago, ‘research’ was a dirty word in international-development circles. The prevailing view was that the time and money available should be spent implementing aid projects rather than analysing their effects in detail. For most projects, assessment was limited to tracking how much they spent and whether they reached their end points.

That is now beginning to change. In recent months, studies have rigorously assessed aid projects such as farmer-training efforts and intestinal-worm-treatment programmes. These studies reflect a more analytical mindset that has emerged in the development community over the past decade, spurred by the need to assure weary donors that their investments are paying off. Drawing on methods used in clinical studies, the analyses could help to guide policy — but they are also raising fears that programmes could be axed prematurely if initial results are disappointing.

“As the larger and more careful impact studies come out, we will see more and more negative results,” says Macartan Humphreys, an international-development economist at Columbia University in New York.

The Millennium Challenge Corporation (MCC), a US foreign-aid agency, has taken the lead in self-assessment, committing to using scientific methods to analyse the success of 40% of its projects. Its first assessments — of farmer-training activities in five countries including Armenia, El Salvador and Ghana — delivered a mixed verdict.

Published in October 2012, the evaluations showed that in three of the countries, efforts to train farmers in business and agricultural skills helped them to sell more produce, boosting farm incomes. But, contrary to the assumption that greater agricultural production reduces poverty, there was no evidence that the extra cash flowed to the farmers’ households — a finding that the MCC cannot readily explain. “We are pushing back the boundaries of ignorance by doing these studies,” says William Savedoff, an economic and social-development researcher at the Center for Global Development in Washington DC, who was not involved in the evaluation. “They are forcing us to grapple with what we do and don’t know about the links between agricultural extension and poverty.”

Some of the MCC’s farmer-training assessments relied on randomized controlled trials (RCTs), a mainstay of clinical research. In development research, RCTs randomly enrol people in aid projects — equipping households with bed nets to protect against disease-carrying mosquitoes, for example — and then track them along with an equal number of people not benefiting from that aid. This protocol allows researchers to evaluate whether a given development strategy makes a measurable difference to people’s lives. “We think RCTs are very effective but still underused,” says development economist Rachel Glennerster, a director of the Abdul Latif Jameel Poverty Action Lab (J-PAL) at the Massachusetts Institute of Technology in Cambridge.
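To make the protocol concrete, the sketch below shows the simplest form of such an analysis: households are randomly split into a treated group and a control group, and the effect of the intervention is estimated as the difference in average outcomes between the two. All of the household numbers and outcome data here are hypothetical, invented purely for illustration.

```python
# Minimal sketch of the analysis behind a development RCT: households are
# randomly assigned to receive an intervention (e.g. bed nets), outcomes are
# measured for both groups, and the treatment effect is estimated as the
# difference in group means. All data here are hypothetical.
import random
import statistics

random.seed(0)

households = list(range(1000))
random.shuffle(households)
treatment = set(households[:500])  # half the households receive the intervention

# Hypothetical outcome for each household (e.g. income, days of illness, test score)
outcomes = {h: random.gauss(10 + (1.5 if h in treatment else 0), 3) for h in households}

treated = [outcomes[h] for h in households if h in treatment]
control = [outcomes[h] for h in households if h not in treatment]

# Estimated effect = difference in means; simple standard error for a 95% interval
effect = statistics.mean(treated) - statistics.mean(control)
se = (statistics.variance(treated) / len(treated)
      + statistics.variance(control) / len(control)) ** 0.5

print(f"estimated effect: {effect:.2f} +/- {1.96 * se:.2f} (95% CI)")
```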

Glennerster says that J-PAL researchers rely heavily on RCTs in assessing aid projects, but that not everyone regards them as a gold standard. Jeffrey Sachs, a sustainable-development economist at Columbia University, worries that RCTs are not an ethical way to assess development projects, because they withhold the aid intervention from control groups. Still, enough RCTs have been done for researchers to begin systematic reviews of particular interventions — but these meta-analyses are also attracting criticism.

Last year, for example, a systematic review of programmes to treat children in developing countries for intestinal worms found little evidence of nutritional, cognitive or educational benefit (D. C. Taylor-Robinson et al. Cochrane DB Syst. Rev. CD000371; 2012). The study was conducted by the Cochrane Collaboration, based in Oxford, UK, which is best known for its systematic reviews of medical treatments.

A group of prominent development researchers — some of whom, including Glennerster, are involved in deworming projects — argued that the review omitted or discounted key studies that showed benefits to school performance. “We were critical of the review because it just takes a bunch of studies, averages them and then finds there is no effect, when actually if you look at high-quality [primary] studies you do see an impact,” says Glennerster.

David Taylor-Robinson, a population-health scientist at the University of Liverpool, UK, and lead author of the review, stands by its findings. “Our analysis was limited to [RCTs] comparing mass administration with placebo or no treatment,” he says, adding that three studies showing positive outcomes did not meet these criteria.
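For context, the averaging that a systematic review performs is typically an inverse-variance (fixed-effect) pooling of each trial’s estimated effect, so that more precise studies carry more weight. The sketch below illustrates that idea with entirely hypothetical study estimates; it is not the Cochrane analysis itself.

```python
# Minimal sketch of the pooling step in a systematic review: each study's
# effect estimate is combined using inverse-variance (fixed-effect) weighting,
# so more precise studies count for more. The study data below are hypothetical.
studies = [
    # (name, estimated effect, standard error)
    ("trial_A", 0.30, 0.10),
    ("trial_B", 0.05, 0.08),
    ("trial_C", -0.10, 0.15),
]

weights = [1 / se ** 2 for _, _, se in studies]
pooled = sum(w * eff for (_, eff, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"pooled effect: {pooled:.2f} +/- {1.96 * pooled_se:.2f} (95% CI)")
```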

To aid such reviews, the International Initiative for Impact Evaluation (3ie), a non-profit organization based in Washington DC that funds and conducts aid-assessment research, is setting up a database in which researchers can register studies. Expected to launch later this year, the database is intended eventually to provide a complete listing of assessments for various types of aid intervention, says Howard White, executive director of 3ie.

The goal is to help researchers avoid the biases that can creep into systematic reviews of development projects, such as selectively reporting positive results or excluding negative ones. It is not yet clear whether development researchers will be required to register studies before publishing results in academic journals — as is the case for clinical trials in some countries.

Meanwhile, international-development researchers are increasingly applying ‘theory of change’, an analytical method that seeks to understand how a series of events leads to a particular result. “Philosophically, you don’t need to understand the causal mechanisms to say there is a link between a treatment and an outcome,” explains White. “But we would like to understand more about the causal chain to help inform analysis and understand why programmes work in some places and not others.”

In 2005, the World Bank carried out one such analysis, of its programme to reduce malnutrition in Bangladesh. Between 1995 and 2002, the project taught mothers about nutrition — for themselves during pregnancy and for young children. Falling malnutrition rates in programme areas were initially hailed as a success, but an evaluation showed that similar trends had occurred in control areas, suggesting that the programme was not the driving factor.

The analysis found that one of the main reasons the programme failed to make a difference was that fathers tended to be in charge of what food entered their homes — so mothers were unable to implement the nutritional education they received in the programme.

Depressing as such discoveries might be, they are part of an important culture change in development circles, says Humphreys. Negative results are integral to the research process, he argues, and it is important for researchers and donors to become more tolerant of them. If they do not, “there is a fear that when people see negative results, they will stop funding and pull out of research altogether”.