Dozens of papers reporting efforts to attack cancer cells are being checked in an open-source project. Credit: Stanley Flegler/Visuals Unlimited, Inc./Science Photo Library

Erkki Ruoslahti was on track to launch a drug trial in people with cancer this year, but his plan may now be in jeopardy. A high-profile project designed to gauge the reproducibility of findings from dozens of influential papers on cancer biology publishes results for its first five papers this week, including one by Ruoslahti. And scientists who tried to replicate his findings say that they can’t get his drug to work. For the other four papers, the replication results are less clear.

Ruoslahti, a cancer biologist at the Sanford Burnham Prebys Medical Discovery Institute in La Jolla, California, disputes the verdict on his research. After all, at least ten laboratories in the United States, Europe, China, South Korea and Japan have validated the 2010 paper1 in which he first reported the value of the drug, a peptide designed to penetrate tumours and enhance the cancer-killing power of other chemotherapy agents. “Have three generations of postdocs in my lab fooled themselves, and all these other people done the same? I have a hard time believing that,” he says.


A single failure to replicate results does not prove that initial findings were wrong — and shouldn’t put a stain on individual papers, says Tim Errington, the manager of the reproducibility project, who works at the Center for Open Science in Charlottesville, Virginia. Investigators should take results as information, not condemnation, says Errington. “If we just see someone else’s evidence as making it hard for the person who did the original research, there is something wrong with our culture.”

But Ruoslahti worries that the failure to reproduce his results will weaken his ability to raise money for DrugCendR, a company in La Jolla that he founded to develop his therapy. “I’m sure it will,” he says. “I just don’t know how badly.”

Repeated attempts

The Reproducibility Project: Cancer Biology launched in 2013 as an ambitious effort to scrutinize key findings in 50 cancer papers published in Nature, Science, Cell and other high-impact journals. It aims to determine what fraction of influential cancer biology studies are probably sound — a pressing question for the field. In 2012, researchers at the biotechnology firm Amgen in Thousand Oaks, California, announced that they had failed to replicate 47 of 53 landmark cancer papers2. That was widely reported, but Amgen has not identified the studies involved.

The reproducibility project, by contrast, makes all its findings open — hence Ruoslahti’s discomfort. Two years in, the project downsized to 29 papers, citing budget constraints among other factors: the Laura and John Arnold Foundation in Houston, Texas, which funds the project, has committed close to US$2 million for it. Full results should appear by the end of the year. But seven of the replication studies are now complete, and eLife is publishing five fully analysed efforts on 19 January.

These five paint a muddy picture (see ‘Muddy waters’). Although the attempt to replicate Ruoslahti’s results failed3, two of the other attempts4,5 “substantially reproduced” research findings — although not all experiments met thresholds of statistical significance, says Sean Morrison, a senior editor at eLife. The remaining two6,7 yielded “uninterpretable results”, he says: because of problems with these efforts, no clear comparison can be made with the original work.

[Table: Muddy waters]

“For people keeping score at home, right now it’s kind of two out of three that appear to have been reproduced,” says Morrison, who studies cancer and stem cells at the University of Texas Southwestern Medical Center in Dallas.

Nature spoke to corresponding authors for all of the original reports. Some praised the reproducibility project, but others worried that the project might unfairly discredit their work. “Careers are on the line here if this comes out the wrong way,” says Atul Butte, a computational biologist at the University of California, San Francisco, whose own paper was mostly substantiated by the replication team.

Erkki Ruoslahti says he’s worried that the reproducibility project’s inability to validate his findings will affect his ability to launch a cancer drug trial. Credit: Paul Wellman

The reason for the two “uninterpretable” results, Morrison says, is that things went wrong with tests to measure the growth of tumours in the replication attempts. When this happened, the replication researchers — who were either at contract research labs or at core facilities in academic institutions — were not allowed to deviate from the peer-reviewed protocols that they had agreed at the start of their experiments (in consultation with the original authors). So they simply reported the problem. Doing anything else — such as changing the experimental conditions or restarting the work — would have introduced bias, says Errington.

Such conflicts mean that the replication efforts are not very informative, says Levi Garraway, a cancer biologist at the Dana-Farber Cancer Institute in Boston, Massachusetts. “You can’t distinguish between a trivial reason for a result versus a profound result,” he says. In his study, which identified mutations that accelerate cancer formation, cells that did not carry the mutations grew much faster in the replication effort7 — perhaps because of changes in cell culture. This meant that the replication couldn’t be compared to the original.

Devil’s in the details

Perhaps the clearest finding from the project is that many papers include too few details about their methods, says Errington. Replication teams spent many hours working with the original authors to chase down protocols and reagents, often because these had been developed by students and postdocs who had since left the lab. Even so, the final reports include long lists of reasons why the replication studies might have turned out differently — from laboratory temperatures to tiny variations in how a drug was delivered. If the project helps to bring such confusing details to the surface, it will have performed a great service, Errington says.

Others think that the main value of the project is to encourage scepticism. “Commonly, investigators take published results at face value and move on without reproducing the critical experiments themselves,” says Glenn Begley, an author of the 2012 Amgen report.

That’s not the case for Albrecht Piiper, a liver-cancer researcher at the University Hospital Frankfurt in Germany. Piiper has replicated Ruoslahti’s work in his own lab8. Despite the latest result, he says, he has “no doubt” about the validity of Ruoslahti’s paper.