In 2011, Bayer researchers made a splash with news that they could replicate only 25% of the preclinical academic projects they took on (Nat. Rev. Drug Discov. 10, 712; 2011). Amgen fared even worse when trying to recreate the findings of cancer papers, with just an 11% success rate (Nature 483, 531–533; 2012).

Both surveys galvanized the biomedical community to address the reproducibility crisis, but neither provided raw data or pointed to the specific papers that had been assessed. To get a more transparent and actionable picture of the scope and causes of the problem, the Center for Open Science launched the Reproducibility Project: Cancer Biology in 2013. The project selected 50 papers from Nature, Science and other high-profile publications, and hired independent laboratories to try to replicate their findings.

The first, messy results have now been published in eLife. Of the five completed studies, two substantially reproduced the original results, two yielded uninterpretable results and one could not reproduce the original findings.

Erkki Ruoslahti, an author of the paper that could not be reproduced and a researcher at Sanford Burnham Prebys Medical Discovery Institute in La Jolla, California, USA, says that at least 10 other labs have validated his findings. This discrepancy, along with the uninterpretable results, underscores the many challenges of replication studies. Many papers, for example, have incomplete methods sections, which makes follow-up work time-consuming and inexact.

Because of the unexpectedly high cost of replication studies, the Reproducibility Project has scaled back its ambitions and will now assess only around 30 studies.