Every conference has its own brand of comedy, and the humour is deliciously dark when the subject is misconduct. James Kroll, who investigates misconduct allegations at the US National Science Foundation, knew he would get a laugh with his classic “excuses for plagiarism” slide, which included one scientist who blamed acid reflux, and another who was “distracted by bird vocalizations”.

But any ‘would-you-believe-that?’ jocularity at the 3rd World Conference on Research Integrity, held last week in Montreal, Canada, was light relief from the serious concerns that attendees had come to tackle. Allegations of misconduct are rising, retractions are on the up and concern is growing that sloppy lab practices are leading to unreliable research.

Experts debate whether the trends represent real increases or simply growing awareness. But attendees at the meeting were brimming with plans to combat problems ranging from out-and-out fraud to selective publication of experiments. Among the potential solutions are spot audits of research data; independent replication of results; requirements for data-sharing; ethics codes and training; enforced accountability for institutions; and greater protection for whistle-blowers. Still, the attendees acknowledged that it is hard to measure whether these strategies work — and hardest of all to provide incentives for change in a system in which scientists are rewarded for speedy success.

“We know that research misconduct is more common than expected,” Nicholas Steneck, an ethicist at the University of Michigan in Ann Arbor, told the meeting of more than 360 attendees from 45 countries, “but we don’t know if it is getting any worse and whether it can be prevented or deterred.”

[Figure 'Questions of integrity': misconduct-related inquiries at national oversight bodies have risen in recent years. Sources: ORI/NSF/PubMed/UKRIO]

In recent years, the number of misconduct-related inquiries has spiked at national oversight and advisory bodies (see ‘Questions of integrity’), and at research journals. Veronique Kiermer, executive editor of Nature Publishing Group in New York, says that anonymous allegations are growing, “taking a toll” on editors who must investigate them, although many turn out to be ill-founded.

Many journals already scan manuscripts with plagiarism-detection software. Bernd Pulverer, chief editor of The EMBO Journal, said at the meeting that his publication began systematic pre-publication examination of the images in papers last November; around 4% contain “serious manipulations”, he said. But more discoveries do not mean that more misconduct is occurring: The Journal of Cell Biology has consistently revoked the acceptance of about 1% of papers for image manipulation over the past decade, with no upward trend.
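
To illustrate what such software does, the sketch below compares two texts by shingling them into overlapping word n-grams and scoring the overlap. The sample texts, n-gram size and flagging threshold are hypothetical, not taken from any real screening tool.

```python
# A sketch of the text screening that plagiarism software performs:
# break each document into overlapping word n-grams ("shingles") and
# score the overlap with Jaccard similarity. Texts, n-gram size and
# threshold here are illustrative only.

def shingles(text, n=5):
    """Return the set of n-word shingles in a text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    """Jaccard similarity: size of intersection over size of union."""
    return len(a & b) / len(a | b) if (a or b) else 0.0

manuscript = "the cells were incubated for 24 hours before imaging began"
prior_paper = "cells were incubated for 24 hours before imaging began too"

score = jaccard(shingles(manuscript), shingles(prior_paper))
if score > 0.2:  # illustrative flagging threshold
    print(f"Possible overlap flagged (similarity {score:.2f})")
```

Real screening pipelines compare each submission against millions of indexed documents, but the core signal is the same: an unusually high proportion of shared shingles.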

Meta-analyses suggest¹ that 1–2% of scientists admit to misconduct in anonymous surveys. A similar proportion admit to plagiarism, and 31% say that they have witnessed plagiarism by others, according to unpublished work by Daniele Fanelli, who studies science policy and misconduct at the University of Edinburgh, UK. “The field needs quantitative research — otherwise, we are just talking around our own assumptions,” he says. He thinks that analyses of the literature, for example to see when data look statistically improbable, will help to spot misconduct.
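
One example of the kind of analysis Fanelli has in mind, sketched below rather than drawn from his actual methods, is a terminal-digit test: the rightmost digits of honestly measured values tend to be roughly uniform, so a marked skew can single out numbers that merit a closer look. The reported values and significance threshold here are hypothetical.

```python
# A terminal-digit test, one flavour of statistical screening. A
# strong skew in last digits is a cue for closer scrutiny, never
# proof of misconduct. Values are hypothetical; real analyses would
# use far more numbers than this toy set.

def terminal_digit_chi2(values):
    """Chi-square statistic for uniformity of last digits (df = 9)."""
    counts = [0] * 10
    for v in values:
        counts[int(v[-1])] += 1
    expected = len(values) / 10
    return sum((c - expected) ** 2 / expected for c in counts)

# Hypothetical means transcribed from a paper's tables.
reported = ["12.4", "9.1", "33.1", "8.1", "27.1", "14.1",
            "5.1", "19.4", "22.1", "7.1", "31.1", "16.4"]

stat = terminal_digit_chi2(reported)
if stat > 16.92:  # chi-square critical value, df = 9, p = 0.05
    print(f"Terminal digits look non-uniform (chi2 = {stat:.1f})")
```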

A new watchdog website, Integru.org, could also ferret out dishonesty. Launched last year to highlight evidence of plagiarism by scientists in Romania (see Nature 488, 264–265; 2012), the site is now widening its focus so that anyone can submit cases in which they suspect plagiarism. The hope is that volunteer academics and the public will cooperate, using tools on the Integru platform to resolve each case.

For all the damage done by outright misconduct, meeting attendees said, lazy or sloppy research is a bigger concern. Institutions are now pouring resources into educating researchers about responsible conduct, but there is little evidence that the training is effective, according to the preliminary results of a systematic review presented at the meeting by Ana Marušić of the University of Split in Croatia.

Independent replication of studies — if it can be funded — could help to make sure that the literature is reliable. This idea has gained ground in the past year, after drug companies such as Bayer HealthCare in Leverkusen, Germany, and Amgen in Thousand Oaks, California, reported that they had trouble replicating published biomedical results². One study-validation mechanism, the Reproducibility Initiative, based in Palo Alto, California, will task third parties with replication. This year, it has secured funding to replicate some 50 key biomedical experiments at a cost of around US$20,000 each, said its founder, Elizabeth Iorns (see Nature 492, 335–343; 2012). Almost 1,900 scientists have volunteered to have their results retested, she said.

Many ideas put forward at the conference — such as independent audits of raw data — are already in practice in clinical research, but not in bench science unless lab heads choose to enforce them. As far back as 1987, Adil Shamoo, a biochemist at the University of Maryland in Baltimore, argued in Nature that scientists would have to expect routine data-auditing³. But the idea has never been accepted by the community, says Shamoo, who is also editor-in-chief of Accountability in Research.

One emerging concern is that modern research increasingly involves international collaborations. Melissa Anderson, who studies scientific integrity at the University of Minnesota in Minneapolis, says that integrity efforts will not be robust unless countries adopt common rules for factors such as ethical conduct and authorship. Simon Godecharle, a biomedical ethicist at the Catholic University of Leuven in Belgium, agrees. “Not one definition of research integrity or misconduct is the same in any two European countries” apart from Denmark and Norway, he says.

“Many countries are just at the start of their oversight integrity policies,” said Anderson. But compared with the first world meeting, six years ago in Lisbon, said Steneck, countries’ awareness that they need to foster integrity is at an all-time high. The difficulty now, he said, is in making sure that integrity codes and training requirements are actually affecting lab practice. “Everything we’re doing is above where the researchers live their day-to-day lives — and the challenge is to reach them.”