First report from Reproducibility Initiative highlights importance of publishing details of experimental methods.
An initiative that aims to put important biology findings to the test by having them repeated by independent teams has published its first result. It confirmed the validity of a study on the leishmaniasis parasite — but only after a first attempt at replication had appeared to be a partial failure. The consortium of scientists that led the effort says that the experience has underscored the importance of scrupulously describing experimental protocols in research papers.
The report, published on 17 December in PLOS ONE, was a test case for the Reproducibility Initiative, a broader drive to validate preclinical research that was prompted by a stream of studies suggesting that many important experiments in biology cannot easily be reproduced.
The lead author of the original study, Robert McMaster of the University of British Columbia in Vancouver, Canada, volunteered his paper for validation when asked by Elizabeth Iorns, a former biologist who co-founded the Reproducibility Initiative. She is also chief executive of the company Science Exchange in Palo Alto, California.
The work of McMaster and his team, which was published in 2011 in PLOS Neglected Tropical Diseases (ref. 1), had found that peptide hormones made naturally by cattle are efficient killers of the parasite Leishmania major. McMaster was confident that the results would live up to the added scrutiny. “When we publish a result, we’ve probably already tested it out ten times in our own laboratory,” he says. Iorns says that she, too, had expected the attempt to be straightforward.
By October 2013, a first set of replication experiments by researchers at the New York University School of Medicine had found that the peptides did kill the parasitic cells, but only when applied at doses ten times greater than those McMaster’s team had reported. The difference mattered because a higher dose would be harder to maintain in a patient’s bloodstream. McMaster’s group accepted the result, says Iorns, and the Reproducibility Initiative’s board — a team of scientists from around the world — debated whether to declare the replication a success or a failure.
Then the Reproducibility Initiative team realized that McMaster’s paper had not precisely described the molecules involved. His team had attached an amide group to the peptide hormones, mimicking a common modification made to the peptide when it is produced in cattle. But because the original authors hadn’t explicitly noted this in their paper, the replication team had tested unamidated peptides. McMaster’s team had reviewed and approved the validation methods without spotting the difference. “I think we just assumed they were amidated,” McMaster says.
When the replication team repeated the entire experiment with amidated peptides, it got a result much closer to McMaster’s finding.
The experience points to a wider communication problem within science, Iorns says. “There is a huge problem with not accurately recording the steps of a protocol,” she says. McMaster says that in future, he will add more detail to the methods sections of his papers.
The internal debate over what had appeared to be a partially failed effort has prompted the Reproducibility Initiative to change how it runs replications. Originally, the plan was simply for the validation team to have the initiative’s board and the original authors approve the protocol, then collect and analyse the data and declare the study a success or a failure. Now, the initiative publishes peer-reviewed protocols before it undertakes any experiments.
Combine and compare
The first batch of these pre-registered protocols, for the initiative’s forthcoming project to validate cancer-biology studies, was published on 10 December in the journal eLife. The first results should be out in the middle of 2015, and most will be completed by early 2016, according to Tim Errington, a project manager at the Center for Open Science, which is coordinating the effort with Science Exchange.
And rather than declare a single success or failure, the initiative will instead report the statistical significance of the result when the original and replication data sets are combined — much as a meta-analysis combines the results of different studies. The same approach has also been adopted by the Many Labs Replication Project, a sister attempt to validate work in psychology (see 'Psychologists strike a blow for reproducibility').
It is still unclear whether funders will be prepared to bankroll validation attempts, which take money from research budgets. The leishmaniasis test study cost only US$2,000 (and was paid for by a German reagents supplier, Antibodies Online), largely because the laboratory where it was conducted had the specialized expertise and equipment needed for leishmaniasis studies.
The initiative will now validate around 50 findings in cancer biology, for which it obtained $1.3 million — around $25,000 per study — from the Laura and John Arnold Foundation in Houston, Texas. Beyond that, says Iorns, it has obtained funding for only a few further studies, from the US National Institute on Drug Abuse (part of the National Institutes of Health).
As for the particular work on peptides against leishmaniasis, the molecules tested haven’t progressed to human experiments. They are so large that it would be too expensive to synthesize them at the purity that would be required, McMaster says. His team is working instead on shorter-chain peptides as potential treatments.
The motivation behind replication efforts is still unfamiliar to some scientists — at least to judge from Iorns’ experience. Even though the validation team had specifically agreed with PLOS ONE to publish the study there, one referee recommended that the journal not publish the paper, Iorns says ruefully. “The referee rejected it because it was a replication and didn’t contain any novel work.”
Lynn, M. A. et al. PLoS Negl. Trop. Dis. 5, e1141 (2011).
Additional reporting by Ewen Callaway.
Van Noorden, R. Parasite test shows where validation studies can go wrong. Nature (2014). https://doi.org/10.1038/nature.2014.16527