Over the past year, the reputation of the biomedical literature has taken a bit of a beating. Controversy has centered on studies by researchers at Bayer Pharmaceuticals and Amgen, which have independently shown that a troublingly high proportion of papers in certain areas of translational research cannot be replicated. Last month, in response to this problem, the Palo Alto, California–based Science Exchange announced the launch of the Reproducibility Initiative (http://www.reproducibilityinitiative.org/). This initiative is one way in which online platforms can facilitate rapid and independent corroboration of published results. But major progress in improving the reproducibility of research will likely require more sweeping changes to the way in which science is published and validated.

Published research findings are often modified or refuted by subsequent evidence. There is nothing unusual about this; it is how scientific knowledge progresses. But there is increasing concern that publication bias toward positive results, rising competition to rush findings into print, an overemphasis on publishing conceptual breakthroughs in high-impact journals and a lack of incentives for academic researchers to retract irreproducible findings may be increasing the incidence of false claims in the literature. All of this has implications for translational research.

Misleading papers result in considerable expenditure of time, money and effort by researchers following false trails. This affects the careers of postdocs and academics. It affects companies and investors, presenting yet another barrier to the translation of academic discoveries into new medicines by diverting funds away from real advances. And most troubling of all, it can result in patients being exposed to drugs on the basis of wrong information. In the past year, two studies have brought into sharp focus just how bad the problem may be.

In September 2011, a team of researchers at Bayer published a retrospective survey of four years of target-validation work in oncology, women's health and cardiovascular disease (Nat. Rev. Drug Discov. 10, 712, 2011). They asked 23 of the company's R&D scientists to tally the papers they had acted upon and whether or not the findings had panned out. The analysis revealed that only 20–25% of the relevant published data could be corroborated internally.

Five months later, a collaboration involving Amgen scientists published the results of their efforts to replicate findings from recent publications in the clinical oncology literature (Nature 483, 531–533, 2012). The data were disturbing. Of 53 papers, only 6 (11%) were reproducible. A particularly troubling aspect was the disclosure that, in return for their cooperation, several of the authors had required the company to sign confidentiality agreements preventing the identities of their papers from being revealed.

Which brings us to the Reproducibility Initiative.

The effort takes advantage of Science Exchange's existing network of >1,000 core facilities and contract research organizations. After authors of an original publication submit their study design, the initiative matches the work to qualified facilities (the identity of which is masked from authors), which then attempt to replicate the studies for a fee. Once the results are returned, the initiative's advisory board determines whether the study has earned a 'certification of reproducibility' (authors cannot appeal the board's decision).

It is entirely the authors' prerogative whether the findings are written up; if they are, they can be published in a special section of PLoS ONE. Nature Publishing Group and Rockefeller University Press have also agreed to link from the original publication to the PLoS ONE paper, and Figshare (http://figshare.com/) will host the data from the verification. Whether authors will elect to publish findings that fail to replicate their original results is unclear; if they don't, then the arrangement clearly fails to address the problem of research reproducibility.

Another big question is who will pay for the work. According to Science Exchange, authors will initially foot the bill. But this is less than optimal, given the competing interest involved and the scarcity of research funding.

A different scenario would be for funding agencies to bankroll the effort. Alternatively, tech transfer offices—or some other, richer part of the author's institution—could support validation work as a means of making academic assets more attractive to potential licensing partners. With the current vogue for 'capital efficiency', perhaps venture capitalists and pharma companies will use the service if it's cheaper than replicating work in-house.

So what about the other papers, for which validation won't find support from investors, companies, institutions or funders?

Online commenting on papers, available on Nature for the past two-and-a-half years but yet to be rolled out to other Nature research journals, remains an easy way for the community to highlight problematic studies. More could also be done during peer review to reject papers in which only 'representative' data are reported or proper statistical analysis is not used (Nat. Neurosci. 14, 1105–1107, 2011).

Journal editors and reviewers can also do more to ensure that the relevant information is captured about experimental protocols and conditions, instrument settings and parameters. Too often, there is more than a little secret sauce in the procedures followed in a laboratory. Detailed descriptions and the publication of full protocols (together with videos depicting experiments) could help.

Perhaps most importantly, different research communities need to come together to address particularly troublesome questions in their fields; in oncology, for example, there is a clear need for guidance on which preclinical cancer models to use in which settings and for which questions.

Clearly, more should be done to increase the quality of published work. The Reproducibility Initiative, and other efforts like it (e.g., http://www.sciencecheck.org/), will help by validating results and ensuring that supporting data are placed in openly accessible repositories. Greater attention also needs to be paid during peer review to the completeness of the experimental protocol disclosed, the supporting data and the robustness of the authors' analysis. Most of all, a change in publication culture is needed. Sometimes replication is as important as discovery.