Nature's roundup of the papers and issues gaining traction on social media.

Three papers that highlight key controversies in the modern practice of science have set off lively discussions on social media. The return of a once-retracted study raised questions about peer review and journal standards, a massive Facebook experiment stirred a debate about privacy and ethics, and an examination of clinical trials revived the vexing issue of bias.

Based on data from Altmetric.com. Altmetric is supported by Macmillan Science and Education, which owns Nature Publishing Group.

Many scientists have denounced the republication of a retracted study by Gilles-Eric Séralini and colleagues that linked genetically modified (GM) maize (corn) to tumours in rats. Kevin Folta, a horticultural scientist at the University of Florida in Gainesville, tweeted: “Holy lumpy rats! Republished Seralini #GMO paper had no scientific peer review before published in the new journal.” In reply, Mary Mangan, an independent bioinformatics researcher based in Boston, Massachusetts, tweeted: “So it was more like exhumed than republished”.

In an interview, Séralini said that the study needed to re-enter the scientific record so that government agencies assessing GM corn have access to the data. But Steven Novella, a neurologist at Yale University School of Medicine in New Haven, Connecticut, wrote on his Neurologica blog that republishing the paper did not fix the many shortcomings that led to retraction. “The threshold for publication should be higher for controversial topics,” he continued. “If anything, the peer-review process ... should be tightened, not loosened.”

According to Séralini, further peer review wasn't necessary because the scientific audience had already examined and discussed the paper after it was first published in Food and Chemical Toxicology in 2012. Séralini says he has received compliments from thousands of scientists; however, much of the public reaction was highly critical. In a letter to the editor of Food and Chemical Toxicology in 2013, the European Society of Toxicologic Pathology noted that the rats used in the study were highly prone to tumours. It also said that Séralini et al. had failed to follow the normal protocols for testing possible carcinogens in rodents. And, citing the paper's photos of live rats with massive tumours, it accused the researchers of waiting too long to kill the animals.

Séralini said that five separate journals offered to republish the paper, but critics wonder why it was revived at all. In his Code for Life blog, New Zealand-based computational biologist Grant Jacobs wrote that “the paper was republished with no consideration of the criticism offered since the original publication.” Heather Mak, sustainability manager for the Retail Council of Canada, tweeted: “What does this mean for [the] credibility of scientific journal processes?”

Séralini, G.-E. et al. Environ. Sci. Eur. 26, 14 (2014)

A study exploring the spread of emotions among users of Facebook gained modest attention when it first came out in early June. But interest skyrocketed when the conversation shifted to the research methods employed by Facebook. Thousands of observers took to Twitter, blogs and, yes, Facebook to discuss the fact that the site had essentially enlisted nearly 700,000 users into an experiment without warning them or directly obtaining consent.

Alexandre Coninx, a robotics engineer at Imperial College London, posted on Facebook that “in human behaviour research, when you want to lie to subjects or mislead them in any way (even for trivial things) you have to follow specific deontological rules and obtain an agreement from an ethics committee.” Brian Keegan, a computational social scientist at Harvard University in Cambridge, Massachusetts, argued in his blog that the study “demands a re-evaluation of prevailing research ethics, design values, and algorithmic powers in massive networked architectures.” But he also wrote that the “hysteria” surrounding the story “got well ahead of any sober reading of the research.” And he asked: “Is raising online mobs to attack industry researchers conducive to starting dialogues to improve their processes for informed consent?”

Kramer, A., Guillory, J. & Hancock, J. Proc. Natl Acad. Sci. USA 111, 24 (2014)

A paper in PLoS Medicine has rekindled a long-running discussion about deep-seated biases in clinical research. Researchers combined analyses of 3,140 randomized controlled trials to underscore a problem: published studies often differ significantly from the protocols established at the start of the trials. For example, the studies may use different statistical methods from those the researchers originally envisioned. Such shifts sometimes reflect genuine and justifiable changes in the scientific approach, but they can also be a signal that researchers are selectively manipulating their data to make their results more impressive. The researchers conclude that preliminary protocols should always be published, a suggestion that sat well with David Soybel, professor of surgery at Penn State Hershey College of Medicine, who tweeted: “Bias isn't always avoidable but it could be accessible for reviewers to explore.”

Dwan, K. et al. PLoS Med. (2014)