Nature's roundup of the papers and issues gaining traction on social media.
Social-media merriment surfaced recently over a peer-reviewed paper investigating supposed evidence of Bigfoot. Other, more sombre discussions bubbled up in the scientific community after a report in the British Medical Journal (BMJ) found serious shortcomings in aspects of peer review. Researchers also discussed Science's announcement that it will overhaul its review process to improve the evaluation of statistics.
Researchers had some fun with a rare appearance of Bigfoot in the scientific literature. A team led by geneticist Bryan Sykes at the University of Oxford, UK, ran DNA tests on 30 hair samples reputed to come from “anomalous” primates, including Bigfoot and the Himalayan yeti. As it turned out, the origins of the hairs could be explained without invoking any elusive hominins. Malcolm Campbell, a cell biologist at the University of Toronto, summed up the paper in his tweet: “Cows, and horses, and bears, oh my. 'Bigfoot' & 'Sasquatch' samples come from extant mammals.” And plant scientist David Baltrus of the University of Arizona in Tucson tweeted: “That clump of Bigfoot hair you found outside your cabin ... yeah, prolly not Bigfoot.”
Chiming with some scientists' frustrations, a BMJ article looked at the effect of open peer review — in which reviewers' identities are revealed — on the quality of papers about clinical trials. The authors, led by researchers at the University of Oxford, compared original manuscripts with the published reports of 93 clinical trials, concluding that “peer reviewers fail to detect important deficiencies in reporting of the methods and results of randomised trials.” Primary health-care physician Trisha Greenhalgh, who linked to the BMJ paper on Twitter, shared an even harsher verdict: “Peer review is rubbish — official.”
The study found that even flawed papers among the 93 generally passed through peer review relatively unscathed. The changes that were made were often for the better; for example, a few reviewers rightly suggested that manuscripts needed to explain their processes for patient randomization more clearly. But the end results were far from perfect: more than half of the papers failed to adequately report estimated effect size or confidence intervals — statistical measures that are crucial for putting results into perspective. The authors acknowledge that it is not clear from their work whether reviewers take a softer approach because their identities are revealed.
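For readers unfamiliar with the two statistics the paper says are under-reported, a minimal sketch may help: the effect size here is simply the difference in mean outcomes between a treatment and a control group, and the 95% confidence interval expresses the uncertainty around it. The numbers below are purely illustrative, not from any real trial.

```python
import math

# Hypothetical trial data: mean outcome, standard deviation and group size
# for a treatment and a control arm (illustrative numbers only).
mean_treat, sd_treat, n_treat = 12.0, 4.0, 50
mean_ctrl, sd_ctrl, n_ctrl = 10.0, 4.5, 50

# Effect size: the difference in mean outcomes between the two groups.
effect = mean_treat - mean_ctrl

# Standard error of that difference, then a 95% confidence interval
# using the normal approximation (z = 1.96).
se = math.sqrt(sd_treat**2 / n_treat + sd_ctrl**2 / n_ctrl)
ci_low, ci_high = effect - 1.96 * se, effect + 1.96 * se

print(f"effect size: {effect:.2f}")
print(f"95% CI: ({ci_low:.2f}, {ci_high:.2f})")
```

A bare difference of 2.0 says little on its own; the interval shows whether the data are compatible with anything from a negligible to a large effect, which is why omitting it makes a trial report hard to interpret.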
Reached for comment, Greenhalgh, of Barts and the London School of Medicine and Dentistry, says that peer review “isn't always rubbish — it's one of the crucial ways of progressing the scholarly agenda”. The BMJ paper suggests that open peer review isn't perfect, but Greenhalgh believes that anonymous reviews are often worse. “Some reviewers seem to consider the request to review as an opportunity to play power games,” she says.
Earlier this month, Greenhalgh used Twitter to complain about an anonymous reviewer who had called one of her manuscripts “mere opinionating” and “not an academic paper”. According to her Twitter feed, she responded with an 11-page retort because “anonymous reviewers need to be held to account”. Ayelet Kuper, a clinician–scientist at the University of Toronto, Canada, tweeted her support: “Good luck! Reassuring to know you're still fighting for all of us ....”
Days after the BMJ paper was published, Science's editor-in-chief, Marcia McNutt, announced in an editorial that the journal was revamping its own peer-review process. A Statistical Board of Reviewing Editors will now assess the statistics in number-heavy papers, largely relieving other reviewers of that duty. In a phrase that later did the rounds on Twitter, McNutt wrote that “it is not realistic to expect that a technical reviewer, chosen for her or his expertise in the topical subject matter ... will also be an expert in data analysis.” In a similar vein, Nature announced in 2013 that it would start employing experts to consult on statistics on certain papers.
Science's move provoked lively reactions on social media. Cognitive psychologist Daniël Lakens at Eindhoven University of Technology tweeted: “Science gives up on researchers understanding statistics.” To which Thomas Lumley, a statistician at the University of Auckland, responded: “I would have never guessed that medicine was decades ahead. Wow” — referring to the fact that major medical journals, such as The Lancet and the Journal of the American Medical Association, have long given special scrutiny to statistics. Katherine Denby, a plant geneticist at the University of Warwick in Coventry, UK, tweeted “excellent but more journals need to do this”.