It's not often that George W. Bush takes time out to attack a scientific paper on the day that it's released. But then few papers attract as much attention as the one that claimed that more than half a million people, or 2.5% of the population, had died in Iraq as a result of the 2003 invasion. Published last October in the run-up to the US mid-term elections, the interview-based survey attracted huge press interest and controversy.

Baghdad, February 2007: Iraqi civilians are still in danger from car bombs in their neighbourhoods. Credit: C. AZIZ/REUTERS

The media spotlight has moved on, but interest within the scientific community has not. The paper has been dissected online, graduate classes have been devoted to it and critiques have appeared in the literature, with more in press. So far, the discussion has created more heat than light. Many of the criticisms that dogged the study remain unresolved. For example, Nature has discovered that different authors give conflicting accounts of exactly how the survey was carried out. And although many researchers say the questions hanging over the study are not substantial enough for it to be dismissed, a vocal minority disagrees.

The controversy creates extra interest in the authors' decision, made last week, to release the raw data behind the study. Critics and supporters will finally have access to information that may settle disputes.

On paper, the study seems simple enough. Eight interviewers questioned more than 1,800 households throughout Iraq. By comparing mortality rates before and after the invasion and extrapolating to the total population, the researchers concluded that the conflict had caused 390,000–940,000 excess deaths (G. Burnham, R. Lafta, S. Doocy and L. Roberts Lancet 368, 1421–1428; 2006). That estimate was far higher than those based on media reports or Iraqi government data, which put the death toll at tens of thousands, and the authors, based at Johns Hopkins University in Baltimore, Maryland, and Al Mustansiriya University in Baghdad, have since found their methods under intense scrutiny.
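The arithmetic behind such an estimate is easy to sketch. The figures below are illustrative placeholders of roughly the right magnitude, not the study's exact inputs:

```python
# Minimal sketch of an excess-mortality extrapolation, not the authors' code.
# All figures are illustrative placeholders, not the study's exact inputs.

def excess_deaths(pre_rate, post_rate, population, years):
    """Excess deaths = (post-invasion rate - pre-invasion rate)
    x population x years of exposure; rates are deaths per person-year."""
    return (post_rate - pre_rate) * population * years

pre = 5.5 / 1000         # assumed pre-invasion crude mortality, per person-year
post = 13.0 / 1000       # assumed post-invasion crude mortality, per person-year
population = 26_000_000  # approximate national population
years = 40 / 12          # roughly 40 months between invasion and survey

print(f"{excess_deaths(pre, post, population, years):,.0f} excess deaths")
# About 650,000 with these inputs; the published analysis, using
# survey-derived rates and confidence intervals, gave 390,000-940,000.
```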

Much of the debate has centred on exactly how the survey was run, and finding out exactly what happened in Iraq has not been straightforward. The Johns Hopkins team, which dealt with enquiries from other scientists and the media, was not able to go to the country to supervise the interviews. And accounts of the method given by the US researchers and the Iraqi team do not always match up.

Several researchers, including Madelyn Hicks, a psychiatrist at King's College London, recently published criticisms of the study's methodology in The Lancet (369, 101–105; 2007). One key question is whether the interviews could have been done in the time stated. The October paper implied that the interviewers worked as two teams of four, each conducting 40 interviews a day — a very high number given the need to obtain consent and the sensitive nature of the questions.

The US authors subsequently said that each team split into two pairs, a workload that is “doable”, says Paul Spiegel, an epidemiologist at the United Nations High Commission for Refugees in Geneva, who carried out similar surveys in Kosovo and Ethiopia. After being asked by Nature whether even this system allowed enough time, author Les Roberts of Johns Hopkins said that the four individuals in a team often worked independently. But an Iraqi researcher involved in the data collection, who asked not to be named because he fears that press attention could make him the target of attacks, told Nature this never happened. Roberts later said that he had been referring to the procedure used in a 2004 mortality survey carried out in Iraq with the same team (L. Roberts et al. Lancet 364, 1857–1864; 2004).
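A quick back-of-the-envelope sketch, using only the workload figures quoted above, shows why the question of how the teams split matters:

```python
# Back-of-the-envelope sketch of the contested workload, using only the
# figures quoted in the text; the actual field schedule is not public in
# this level of detail.

interviews_per_team_per_day = 40  # the pace implied by the October paper

for units, label in [(1, "whole team of four together"),
                     (2, "two pairs"),
                     (4, "four interviewers working independently")]:
    print(f"{label}: {interviews_per_team_per_day / units:.0f} "
          "interviews per unit per day")
```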

Other arguments focus on the potential for 'main-street bias', first proposed by Michael Spagat, an expert in conflict studies at Royal Holloway, University of London. In each survey area, the interviewers selected a starting point by randomly choosing a residential street that crossed the main business street. Spagat says this method would have left out residential streets that didn't cross the main road and, as attacks such as car bombs usually take place in busy areas, introduced a bias towards areas likely to have suffered high casualties.

The Iraqi interviewer told Nature that in bigger towns or neighbourhoods, rather than taking the main street, the team picked a business street at random and chose a residential street leading off that, so that peripheral parts of the area would be included. But again, details are unclear. Roberts and Gilbert Burnham, also at Johns Hopkins, say local people were asked to identify pockets of homes away from the centre; the Iraqi interviewer says the team never worked with locals on this issue.
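The two selection rules differ in which residential streets can ever be reached. A toy sketch, with hypothetical street lists standing in for a real neighbourhood, makes the difference concrete:

```python
# Toy sketch of the two street-selection rules described above; the street
# lists and function names are hypothetical, not the survey's instruments.

import random

def start_street_as_published(streets_crossing_main):
    """Rule as described in the paper: randomly pick a residential street
    that crosses the main business street. Residential streets that never
    cross the main street cannot be chosen, which is the basis of the
    'main-street bias' argument."""
    return random.choice(streets_crossing_main)

def start_street_as_described_by_interviewer(business_streets, side_streets):
    """Rule as described by the Iraqi interviewer for bigger areas: pick a
    business street at random, then a residential street leading off it,
    so peripheral parts of the area can also be reached."""
    business = random.choice(business_streets)
    return random.choice(side_streets[business])
```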

Many epidemiologists say such discrepancies are understandable given that Roberts and Burnham could not directly oversee the survey, and do not justify accusations that the process was flawed. For those who disagree, access to the raw data is essential. Although previously reluctant to release them, Roberts and Burnham now say they are removing information that could be used to identify interviewers or respondents and will release the data within the next month to people with appropriate “technical competence”.

One researcher keen to see the numbers is Spagat. The 2004 survey used GPS coordinates, rather than the main-street system, to identify streets to sample. When Spagat used the limited data available so far to compare the two studies for the period immediately following the invasion, he found that the 2006 study turned up twice as many violent deaths, which he argues is a sign that main-street bias may be at work.

Roberts and others question Spagat's methods. But the issue could be checked using the raw data. If main-street bias exists, says Spagat, then death rates will fall as the interviews move away from the main street.
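Once the data are out, that check would take only a few lines of analysis. The sketch below is hypothetical: the column and file names are guesses at how the released data might be coded, not the actual format:

```python
# Hypothetical version of the check Spagat describes: if main-street bias
# exists, violent-death rates should fall as households sit further from
# the main street. Column and file names are assumptions, not the real
# layout of the released data.

import pandas as pd

def deaths_by_distance(df):
    """Violent deaths per household, grouped by how many streets away
    from the main street each household sits."""
    grouped = df.groupby("streets_from_main")["violent_deaths"].agg(
        households="size", deaths="sum")
    grouped["deaths_per_household"] = grouped["deaths"] / grouped["households"]
    return grouped

# Usage once the raw data are released (file name hypothetical):
# households = pd.read_csv("iraq_mortality_2006_households.csv")
# print(deaths_by_distance(households))
```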

The raw data may also help address a fear that some researchers are expressing off the record: that the Iraqi interviewers might have inflated their results for political reasons. That could show up in unusual patterns within the data.
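What such a pattern might look like is left open, but one crude, purely illustrative screen would be to compare reporting across interviewers; the column names here are again hypothetical:

```python
# Purely illustrative screen for uneven reporting, not an established test;
# column names are hypothetical.

import pandas as pd

def deaths_per_interviewer(df):
    """Mean reported deaths per household for each interviewer; sharply
    divergent means would be one kind of 'unusual pattern' worth probing."""
    return df.groupby("interviewer_id")["deaths_reported"].mean()
```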

Roberts and Burnham say they have complete confidence in the Iraqi interviewers, after working with them directly for the 2004 study. And supporters say that criticisms should not detract from the fact that the Iraqi team managed to produce a survey under extremely difficult circumstances. Security threats forced the team to change travel plans and at one point to consider cancelling the survey altogether. Since its completion, one interviewer has been killed and another has left Baghdad, although it is not known whether either case is linked to their involvement in the survey. Either way, the continuing violence in the country is enough for the remaining interviewers to say that they are not willing to repeat the exercise.