New policy follows efforts by other journals to bolster standards of data analysis.
The journal Science is adding an extra round of statistical checks to its peer-review process, editor-in-chief Marcia McNutt announced today. The policy follows similar efforts from other journals, after widespread concern that basic mistakes in data analysis are contributing to the irreproducibility of many published research findings.
“Readers must have confidence in the conclusions published in our journal,” writes McNutt in an editorial today [1]. Working with the American Statistical Association, the journal has appointed seven experts to a statistics board of reviewing editors (SBoRE). Manuscripts will be flagged for additional scrutiny by the journal’s internal editors, by its existing Board of Reviewing Editors (more than 100 scientists whom the journal regularly consults on papers) or by outside peer reviewers. The SBoRE panel will then find external statisticians to review these manuscripts.
Asked whether any particular papers had impelled the change, McNutt said: “The creation of the [statistics board] was motivated by concerns broadly with the application of statistics and data analysis in scientific research and is part of Science’s overall drive to increase reproducibility in the research we publish.”
Giovanni Parmigiani, a biostatistician at the Harvard School of Public Health, is a member of the SBoRE group. He says he expects the board to “play primarily an advisory role”. He agreed to join because he “found the foresight behind the establishment of the SBoRE to be novel, unique and likely to have a lasting impact. This impact will not only be through the publications in Science itself, but hopefully through a larger group of publishing venues that may want to model their approach after Science.”
John Ioannidis, a physician who studies research methodology at Stanford University in California, says that the policy is “a most welcome step forward” and “long overdue”. “Most journals are weak in statistical review, and this damages the quality of what they publish. I think that for the majority of scientific papers nowadays statistical review is more essential than expert review,” he says, but he noted that biomedical journals such as Annals of Internal Medicine, the Journal of the American Medical Association and The Lancet pay strong attention to statistical review.
Professional scientists are expected to know how to analyse data, but statistical errors are alarmingly common in published research, according to David Vaux, a cell biologist at the Walter and Eliza Hall Institute of Medical Research in Parkville, Australia. Researchers should improve their standards, he wrote in Nature in 2012, but journals should also take a tougher line, “engaging reviewers who are statistically literate and editors who can verify the process” [2]. Vaux says that Science’s idea to pass some papers to statisticians “has some merit, but a weakness is that it relies on the board of reviewing editors to identify [the papers that need scrutiny] in the first place”.
Journal reform is starting to happen, says Bernd Pulverer, chief editor of the EMBO Journal in Heidelberg, Germany. “We have been discussing the level of statistics in our papers for some time. All too often, data in molecular cell-biology papers are indeed still published with ill-defined, underpowered or plain wrong statistics,” he says. Pulverer adds that the EMBO Journal and other publications are planning to launch checklists of basic statistical information that should be reported in research papers, and that the journal also plans to add statistics experts to its editorial board.
Statistical checklists are emerging as standards after workshops organized by the US National Institutes of Health in 2012 to discuss the problems leading to irreproducible research findings. In April 2013, for example, Nature announced that it had created such a checklist, and that to help improve the statistical robustness of its papers, it would employ statisticians “as consultants on certain papers, at the editors’ discretion and as suggested by referees” [3]. (Nature’s news and comment team is editorially independent of its research editorial team.)
“Nature and Science have shared their experiences of measures to improve their systems,” says Veronique Kiermer, executive editor at Nature. “We welcome their new initiative, just as we welcome any undertakings by publishers to improve statistical analyses.”
References
1. McNutt, M. Science 345, 9 (2014).
2. Vaux, D. L. Nature 492, 180–181 (2012).
3. Anonymous. Nature 496, 398 (2013).
Related links in Nature Research
Scientific method: Statistical errors 2014-Feb-12
Policy: NIH plans to enhance reproducibility 2014-Jan-27
Announcement: Reducing our irreproducibility 2013-Apr-24
Cite this article
Van Noorden, R. Science joins push to screen statistics in papers. Nature (2014). https://doi.org/10.1038/nature.2014.15509