A biotechnology firm is releasing data on three failed efforts to confirm findings in high-profile scientific journals — details that the industry usually keeps secret.
Amgen, headquartered in Thousand Oaks, California, says that it hopes the move will encourage others in industry and academia to describe their own replication attempts, and thus help the scientific community to get to the bottom of work that other labs are having trouble verifying.
The data are posted online at a newly launched channel dedicated to quickly publishing efforts to confirm scientific findings. The 'Preclinical Reproducibility and Robustness' channel is hosted by F1000Research, the publishing platform of London-based publishers Faculty of 1000 (F1000). Scientists who are concerned about the irreproducibility of preclinical research say that they welcome the initiative — but are not sure whether it will gain traction.
Open to criticism
The idea emerged from discussions at a meeting focused on improving scientific integrity, hosted by the US National Academy of Sciences in 2015. Sasha Kamb, who leads research discovery at Amgen, said that his company's scientists have in many instances tried and failed to reproduce academic studies, but that it takes too much time and effort to publish these accounts through conventional peer-review procedures.
Bruce Alberts, a former editor-in-chief of Science who sits on F1000Research’s advisory board, suggested that Kamb try the faster F1000 route — an open-science publishing model in which submitted studies are posted online (for a fee that ranges from US$150 to $1000) before undergoing peer review; submissions are subject to checks by F1000 editors to ensure that data are freely available and that methods and reagents are adequately described.
“The idea is to get the data out and get it critically looked at,” Alberts says. The editors then invite open peer review of the studies. If reviewers recommend the work, it is indexed in databases such as PubMed and Scopus.
F1000, in turn, has created a designated channel for these studies in the hope that they will garner attention, give credit to researchers doing careful confirmatory experiments and provide a place where the original researchers of a study and other scientists can discuss reasons behind different outcomes.
In 2012, Amgen researchers made headlines when they declared that they had been unable to reproduce the findings in 47 of 53 'landmark' cancer papers [1]. Those papers were never identified — partly because of confidentiality concerns — and there are no plans to release details now either, says Kamb, who was not involved with that publication. He says that he prefers to focus on more-recent publications.
The three studies that Amgen has posted deliberately do not make a detailed comparison of their results to previous papers, says Kamb. “We don’t want to make strong conclusions that someone else’s work is wrong with a capital W,” he says.
One study adds to existing criticism of a Science paper that suggested that a cancer drug might be a potential treatment for Alzheimer's disease [2]; a second counters earlier findings (including some by Amgen researchers) connecting a gene to insulin sensitivity in mice [3, 4]; and a third counters a Nature paper reporting that inhibiting one particular protein could enhance degradation of other proteins associated with neurodegenerative diseases [5].
“We believe that interested scientists can look at our methods and results and draw their own conclusions,” Kamb says. Amgen researchers did not contact the original authors when they conducted their studies, he says, but future postings could be collaborative.
Right now, the main way that the scientific community spreads the word about irreproducible research is through innuendo, which is inefficient and unfair to the original researchers, says Ricardo Dolmetsch, global head of neuroscience at Novartis’s Institutes for Biomedical Research in Cambridge, Massachusetts. “Anything we can do to improve the ratio of signal to noise in the literature is very welcome,” he says.
The F1000 initiative is useful, but previous efforts have tried and failed to encourage the reporting of replications and negative results, cautions John Ioannidis, who studies scientific robustness at California's Stanford University. That is because, in general, the scientific community undervalues such work, he says.
But Kamb says that he has spoken with several industry leaders who have expressed support, and he hopes that they will contribute eventually. Roger Perlmutter, head of research and development at pharmaceutical giant Merck, says his colleagues can participate in the channel “at their own discretion”. Morgan Sheng, a vice-president at biotechnology company Genentech in South San Francisco, says he can foresee his company's scientists submitting data to the venture too.
“I believe the main risk of a publication venue like the F1000 channel is that it becomes a place for ‘bashing’ good science, because biological experiments are complex and beset by many variables that are hard to control. Non-replication does not necessarily mean ‘not true’,” Sheng adds. He says the site should be careful to emphasize publication of positive replication data as well.
Academic researchers are unlikely to risk alienating their peers by publishing disconfirming results, predicts Elizabeth Iorns, head of Science Exchange in Palo Alto, California. Her firm provides an online marketplace where scientists can offer to do others’ experiments, which she used to launch a reproducibility initiative in 2012.
But providing industry scientists with a low-barrier way to share their attempts might prove a winning strategy, she says. “Hopefully, the awareness of the reproducibility issue has been raised such that people are no longer afraid to talk about it.”