
Nearly a decade after writing a scathing critique of biomedical research, ‘Why Most Published Research Findings Are False’, health-policy researcher John Ioannidis has published a follow-up.

Ioannidis, at Stanford University in California, proposes a blueprint for making scientific results more reliable, including demanding greater statistical certainty of discoveries, giving more weight to negative results and changing how researchers earn kudos¹.

Many commenters chimed in with support, even if they did not believe that change could come easily. Simon Wheeler, a public-health nutritionist at the University of Cambridge, UK, endorsed Ioannidis’s suggestions, tweeting that scientists should be “creating a culture where these are norms and expectations, not just lofty ideals.”

Mick Watson, a computational biologist at the University of Edinburgh’s Roslin Institute, UK, also weighed in on Twitter.

Echoing criticisms raised in his previous paper and by other observers many times over the years, Ioannidis argues that the current system pushes researchers to publish as frequently as possible, but does not encourage more collaborative, cooperative practices such as data sharing or rigorous peer review. He places much of the blame on academia’s publish-or-perish culture, but adds that corporations can also drive science in the wrong direction, for instance when papers in influential journals and invitations to prestigious meetings become part of companies’ marketing strategies. He argues that the calibre of a scientist should be gauged not by grant money, number of publications or academic titles, but by the quality of his or her work.

Ioannidis also questions the statistical practices behind a large number of supposed discoveries. In many fields, he writes, the threshold for calling a result meaningful is so lenient that false positives routinely make it into the literature.

Adam Jacobs, a medical statistician at the medical consulting company Premier Research in Wokingham, UK, tweeted the paper's title along with his opinion of one of its recommendations.

Speaking to Nature, Jacobs says that the standard threshold for statistical significance (a P value of less than 0.05) “is not a very high bar”. Raising that standard would help in many ways, but replication is also vital: “We should almost never believe something on the basis of a single trial with a P of less than 0.05.”
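Jacobs’s point lends itself to a back-of-the-envelope check. The simulation below is a minimal sketch, not from Ioannidis’s paper or Jacobs’s comments: the two-arm trial design, the sample size of 50 per arm and the stricter 0.005 threshold are assumptions chosen purely for illustration. It runs many trials in which the true effect is zero; roughly 5% of them still clear P < 0.05, whereas demanding an independent replication at the same threshold cuts the false-positive rate to about 0.05 × 0.05 = 0.25%.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

N_TRIALS = 50_000  # simulated studies; the true effect in every one is zero
N_PER_ARM = 50     # participants per arm (assumed for illustration)

def false_positive_rate(alpha, require_replication=False):
    """Fraction of null-effect studies that end up declared 'significant'."""
    hits = 0
    for _ in range(N_TRIALS):
        # Both arms are drawn from the same distribution, so any "effect" is noise.
        a = rng.normal(0.0, 1.0, N_PER_ARM)
        b = rng.normal(0.0, 1.0, N_PER_ARM)
        if stats.ttest_ind(a, b).pvalue < alpha:
            if not require_replication:
                hits += 1
            else:
                # An independent replication must also reach significance.
                a2 = rng.normal(0.0, 1.0, N_PER_ARM)
                b2 = rng.normal(0.0, 1.0, N_PER_ARM)
                if stats.ttest_ind(a2, b2).pvalue < alpha:
                    hits += 1
    return hits / N_TRIALS

print(f"P < 0.05, single trial:    {false_positive_rate(0.05):.4f}")   # ~0.0500
print(f"P < 0.005, single trial:   {false_positive_rate(0.005):.4f}")  # ~0.0050
print(f"P < 0.05 plus replication: {false_positive_rate(0.05, require_replication=True):.4f}")  # ~0.0025
```

Under these assumptions, two independent significant results are far stronger evidence than one, which is the substance of Jacobs’s warning about single trials; the multiplication of error rates only holds, though, when the replication is genuinely independent.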

Jacobs strongly supports another of Ioannidis’s prescriptions: public registration of clinical trials. Currently, studies with positive results get published but negative studies can be completely forgotten, he says. If the trial is listed in a public database, “the researchers can’t quietly pretend that the study never happened”.

Some commenters felt that Ioannidis wasn’t exactly breaking new ground. Wheeler says that many of Ioannidis’s recommendations are already followed by what he describes as responsible research organizations, including the UK Medical Research Council, which funds some of his work. But he agrees that there is much room for improvement. For example, many epidemiological studies fail to control adequately for important variables such as smoking, yet “journals seem happy to publish providing the study is big enough and the finding is sufficiently novel”. At the same time, he says, carefully designed studies with negative results may never make it to print. “This is an utterly perverse situation, and Ioannidis quite rightly calls for the research community to get its act in order.”