Nature | News

Cancer reproducibility project releases first results

An open-science effort to replicate dozens of cancer-biology studies is off to a confusing start.


Stanley Flegler/Visuals Unlimited, Inc./Science Photo Library

Dozens of papers reporting efforts to attack cancer cells are being checked in an open-source project.

Erkki Ruoslahti was on track to launch a drug trial in people with cancer this year, but his plan may now be in jeopardy. A high-profile project designed to gauge the reproducibility of findings from dozens of influential papers on cancer biology publishes results for its first five papers this week, including one by Ruoslahti. And scientists who tried to replicate his findings say that they can’t get his drug to work. For the other four papers, the replication results are less clear.

Ruoslahti, a cancer biologist at the Sanford Burnham Prebys Medical Discovery Institute in La Jolla, California, disputes the verdict on his research. After all, at least ten laboratories in the United States, Europe, China, South Korea and Japan have validated the 2010 paper [1] in which he first reported the value of the drug, a peptide designed to penetrate tumours and enhance the cancer-killing power of other chemotherapy agents. “Have three generations of postdocs in my lab fooled themselves, and all these other people done the same? I have a hard time believing that,” he says.


Reporter Kerri Smith finds out about efforts to repeat high-profile cancer research.


A single failure to replicate results does not prove that initial findings were wrong — and shouldn’t put a stain on individual papers, says Tim Errington, the manager of the reproducibility project, who works at the Center for Open Science in Charlottesville, Virginia. Investigators should take results as information, not condemnation, says Errington. “If we just see someone else’s evidence as making it hard for the person who did the original research, there is something wrong with our culture.”

But Ruoslahti worries that the failure to reproduce his results will weaken his ability to raise money for DrugCendR, a company in La Jolla that he founded to develop his therapy. “I’m sure it will,” he says. “I just don’t know how badly.”

Repeated attempts

The Reproducibility Project: Cancer Biology launched in 2013 as an ambitious effort to scrutinize key findings in 50 cancer papers published in Nature, Science, Cell and other high-impact journals. It aims to determine what fraction of influential cancer biology studies are probably sound — a pressing question for the field. In 2012, researchers at the biotechnology firm Amgen in Thousand Oaks, California, announced that they had failed to replicate 47 of 53 landmark cancer papers [2]. That was widely reported, but Amgen has not identified the studies involved.

The reproducibility project, by contrast, makes all its findings open — hence Ruoslahti’s discomfort. Two years in, the project downsized to 29 papers, citing budget constraints among other factors: the Laura and John Arnold Foundation in Houston, Texas, which funds the project, has committed close to US$2 million for it. Full results should appear by the end of the year. But seven of the replication studies are now complete, and eLife is publishing five fully analysed efforts on 19 January.

These five paint a muddy picture (see ‘Muddy waters’). Although the attempt to replicate Ruoslahti’s results failed [3], two of the other attempts [4,5] “substantially reproduced” research findings — although not all experiments met thresholds of statistical significance, says Sean Morrison, a senior editor at eLife. The remaining two [6,7] yielded “uninterpretable results”, he says: because of problems with these efforts, no clear comparison can be made with the original work.

Muddy waters

Results of the first five replication studies run by the Reproducibility Project: Cancer Biology.

Paper | Conclusion | Focus of key experiment | Replication results | Example problem
Sirota, M. et al. Sci. Transl. Med. 3, 96ra77 (2011) | Public gene expression data can identify unintuitive uses for old drugs | Growth of tumours treated with an anti-ulcer drug | Substantially reproduced | Disagreements over appropriateness of statistical analysis
Sugahara, K. N. et al. Science 328, 1031–1035 (2010) | A tumour-penetrating peptide enhances the effects of cancer drugs | Growth of peptide-treated tumours | Not reproduced | Potential differences in peptide synthesis or solutions
Willingham, S. B. et al. Proc. Natl Acad. Sci. USA 109, 6662–6667 (2012) | Blocking contact between CD47 and another protein inhibits tumour growth | Growth and metastasis of treated tumours | Uninterpretable (treated tumours were larger, but not significantly so) | Some tumours spontaneously regressed
Delmore, J. E. et al. Cell 146, 904–917 (2011) | Blocking a protein sequence damps down pro-cancer genes | Gene expression in treated cells; growth of treated tumours | Substantially reproduced | Bioluminescence/survival for the control groups differed markedly
Berger, M. F. et al. Nature 485, 502–506 (2012) | Sequencing reveals a gene that is frequently mutated in melanoma and accelerates growth | Tumour formation in cells carrying mutations | Uninterpretable | Tumours without mutations grew too fast for any accelerated growth to be detected

“For people keeping score at home, right now it’s kind of two out of three that appear to have been reproduced,” says Morrison, who studies cancer and stem cells at the University of Texas Southwestern Medical Center in Dallas.

Nature spoke to corresponding authors for all of the original reports. Some praised the reproducibility project, but others worried that the project might unfairly discredit their work. “Careers are on the line here if this comes out the wrong way,” says Atul Butte, a computational biologist at the University of California, San Francisco, whose own paper was mostly substantiated by the replication team.

Paul Wellman

Erkki Ruoslahti worries that the reproducibility project’s failure to validate his findings will hamper his plans to launch a cancer drug trial.

The reason for the two “uninterpretable” results, Morrison says, is that things went wrong with tests to measure the growth of tumours in the replication attempts. When this happened, the replication researchers — who were either at contract research labs or at core facilities in academic institutions — were not allowed to deviate from the peer-reviewed protocols that they had agreed at the start of their experiments (in consultation with the original authors). So they simply reported the problem. Doing anything else — such as changing the experimental conditions or restarting the work — would have introduced bias, says Errington.

Such conflicts mean that the replication efforts are not very informative, says Levi Garraway, a cancer biologist at the Dana-Farber Cancer Institute in Boston, Massachusetts. “You can’t distinguish between a trivial reason for a result versus a profound result,” he says. In his study, which identified mutations that accelerate cancer formation, cells that did not carry the mutations grew much faster in the replication effort [7] — perhaps because of changes in cell culture. This meant that the replication couldn’t be compared with the original.

Devil’s in the details

Perhaps the clearest finding from the project is that many papers include too few details about their methods, says Errington. Replication teams spent many hours working with the original authors to chase down protocols and reagents, in many cases because they had been developed by students and postdocs who were no longer with the lab. Even so, the final reports include long lists of reasons why the replication studies might have turned out differently — from laboratory temperatures to tiny variations in how a drug was delivered. If the project helps to bring such confusing details to the surface, it will have performed a great service, Errington says.

Others think that the main value of the project is to encourage scepticism. “Commonly, investigators take published results at face value and move on without reproducing the critical experiments themselves,” says Glenn Begley, an author of the 2012 Amgen report.

That’s not the case for Albrecht Piiper, a liver-cancer researcher at the University Hospital Frankfurt in Germany. Piiper has replicated Ruoslahti’s work in his own lab [8]. Despite the latest result, he says, he has “no doubt” about the validity of Ruoslahti’s paper.



  1. Sugahara, K. N. et al. Science 328, 1031–1035 (2010).

  2. Begley, C. G. & Ellis, L. M. Nature 483, 531–533 (2012).

  3. Mantis, C. et al. eLife 6, e17584 (2017).

  4. Aird, F. et al. eLife 6, e21253 (2017).

  5. Kandela, I. et al. eLife 6, e17044 (2017).

  6. Horrigan, S. K. et al. eLife 6, e18173 (2017).

  7. Horrigan, S. K. et al. eLife 6, e21634 (2017).

  8. Schmithals, C. et al. Cancer Res. 75, 3147–3154 (2015).
