Rio de Janeiro

The world’s largest gathering of specialists in research misconduct kicked off on 31 May in Rio de Janeiro, Brazil, shortly after science’s latest scandal broke. On the evening before the start of sessions on how to diagnose and remedy ethical faults in research, delegates to the 4th World Conference on Research Integrity sipped caipirinhas, Brazil’s national cocktail — and swapped views on what could be gleaned from a flawed political-science study.

The paper in question, which claimed to show that short conversations with a canvasser who is gay could encourage voters to support same-sex marriage, made headlines across the world when it was published in Science last December (M. J. LaCour and D. P. Green Science 346, 1366–1369; 2014) — and again when it was retracted last week (Science; 2015). “The case is very much on our minds,” said Melissa Anderson, a co-organizer of the meeting who studies scientific integrity at the University of Minnesota in Minneapolis.

Although the case throws up new instances of misconduct, and of inadequate supervision by senior academics, delegates to the Rio conference felt that, in general, the case illuminated little about the academic system that a steady drip-drip of research misconduct has not already highlighted. The main challenge, said Brian Martinson, a social scientist at the HealthPartners Institute for Education and Research in Minneapolis, is how to create a supportive environment that incentivizes reliable, reproducible research. “A lot of people think the bad stuff in science comes from academics being greedy or narcissistic — but that ignores how the structural arrangements in science, like the decline of funding and stable academic positions in the United States, lead people into bad behaviour,” he said.

In the latest twist in the debacle, co-author Michael LaCour, a graduate student in political science at the University of California, Los Angeles (UCLA), has admitted to misrepresenting his funding sources and the incentives he used to attract people to take part in the study. In a 29 May online reply to researchers who had spotted irregularities in his survey data, LaCour said that he had deleted his raw data for reasons of confidentiality, and admitted that he did not get ethical approval from an institutional review board before he did the work, or before he submitted it to Science. The document did not include convincing evidence that he had conducted the surveys.

LaCour told The New York Times that he stands by his finding — but his co-author Donald Green, a political scientist at Columbia University in New York City, does not: Green requested the paper’s retraction after three outside scientists told him about irregularities in its survey data, and he apologized for not adequately supervising LaCour’s work.

Delegates in Rio broadly agreed that the case highlights the need for better super­vision by senior academics. “Academia should be concerned that its system of checks and balances has problems,” said Nicholas Steneck, who studies research integrity at the University of Michigan in Ann Arbor. “It will never be perfect, but it is far from perfect now.” Sabine Kleinert, a co-organizer of the research-integrity conference and senior executive editor at The Lancet, said: “The wider lessons are still the same as many of these cases throw up — that of the role of the co-authors in taking steps to be accountable for the data, and the role of institutions in safeguarding or having repositories for the data underlying research that is done there.”


On the plus side, the retraction came swiftly after queries were raised about the data, noted Ivan Oransky, a journalist who runs the blog Retraction Watch, which first reported that Green had asked for the study to be retracted. Researchers posted their objections online on 19 May, and Science retracted the study on 28 May. That is in stark contrast to an earlier misconduct case — involving the cancer geneticist Anil Potti — in which whistle-blowers tried for years to quietly raise concerns with Potti’s institution, Duke University in Durham, North Carolina, before papers were finally retracted and Potti resigned.

Mysteries still linger in the LaCour case. In the 23-page reply that he posted on 29 May, LaCour raises statistical objections to the criticisms levelled at him. These “couldn’t possibly be more beside the point”, said Jelte Wicherts, a statistician at Tilburg University in the Netherlands.

LaCour also posted snapshots of an apparent survey set up with the survey firm Qualtrics, but these actually relate to a pilot study that was abandoned, according to Chris Skovron, a political scientist at the University of Michigan, who had worked on the study until LaCour cut off the collaboration.

As to whether canvassing changes voters’ attitudes, Brian Calfano, a political scientist at Missouri State University in Springfield, says that other literature suggests that it can, but that replication or extension of the LaCour–Green work would have to be done to know for certain that it does so in this particular scenario. LaCour wrote in his 29 May document that Calfano had replicated his study, but Calfano says that his own work is only a preliminary finding relating to a different kind of canvassing of voters. He shared the finding with LaCour at an early stage, but is not willing to stand behind it until further tests are completed.

LaCour did not respond to a request for comment. His graduate supervisor, political scientist Lynn Vavreck, says that UCLA has an ongoing inquiry into the issue.