Masses of experimental results lie unpublished in social scientists' file drawers, potentially skewing the reliability of those that do get into print. Credit: Markus Brunner/Getty

When an experiment fails to produce an interesting effect, researchers often shelve the data and move on to another problem. But withholding null results skews the literature in a field, and is a particular worry for clinical medicine and the social sciences.


Researchers at Stanford University in California have now measured the extent of the problem, finding that most null results in a sample of social-science studies were never published. This publication bias may cause others to waste time repeating the work, or conceal failed attempts to replicate published research. Although already recognized as a problem, “it’s previously been hard to prove because unpublished results are hard to find”, says Stanford political scientist Neil Malhotra, who led the study.

His team investigated the fate of 221 sociological studies conducted between 2002 and 2012, which were recorded by Time-sharing Experiments for the Social Sciences (TESS), a US project that helps social scientists to carry out large-scale surveys of people's views.

Only 48% of the completed studies had been published. So the team contacted the authors of the unpublished studies to find out whether they had written up their results, or submitted them to a journal or conference. They also asked whether the results supported the researchers’ original hypothesis.

Of all the null studies, just 20% had appeared in a journal, and 65% had not even been written up. By contrast, roughly 60% of studies with strong results had been published. Many of the researchers contacted by Malhotra’s team said that they had not written up their null results because they thought that journals would not publish them, or that the findings were neither interesting nor important enough to warrant any further effort.

“When I present this work, people say, ‘These findings are obvious; all you've done is quantify what we knew anecdotally’,” says Malhotra. But social scientists often underestimate the magnitude of the bias, or blame journal editors and peer reviewers for rejecting null studies, he says. His team's findings are published today in Science [1].

Poisoned by success

The problem may be bigger than the TESS sample suggests. Each survey design proposed to TESS is peer-reviewed, to ensure that it has sufficient statistical power to test an interesting hypothesis; weaker studies in these fields would probably have an even lower rate of publication. “It’s very likely that this study underestimates the true extent of the problem,” says Daniele Fanelli, an evolutionary biologist who studies publication bias and misconduct, and is currently a visiting professor at the University of Montreal in Canada.

In 2010, Fanelli surveyed publication bias across a range of disciplines, and found that psychology and psychiatry had the greatest tendency to publish positive results [2]. “But it’s not just a social-science issue — it’s also common in the biomedical sciences,” says Hal Pashler, a psychologist at the University of California, San Diego, in La Jolla. “Both are really poisoned by only hearing about the successes.” (See ‘“Ethical failure” leaves one-quarter of all clinical trials unpublished’.)

Social scientists are already trying to tackle publication bias (see ‘Replication studies: Bad copy’). Malhotra is involved in the Berkeley Initiative for Transparency in the Social Sciences, which advocates a range of strategies to strengthen social-science research. One option is to log all social-science studies in a registry that tracks their outcome — a model that is already used to help ensure that null results from drug trials see the light of day. Meanwhile, Pashler has set up a website, PsychFileDrawer, to capture null results generated by attempts to replicate findings in experimental psychology.

These remedies have not been universally welcomed, however. “There’s been a lot of pushback,” says Malhotra. Some social scientists worry that sticking to a registered-study plan might prevent them from making serendipitous discoveries, such as those arising from unexpected correlations in the data. But most accept the need for change, adds Pashler: “We’re all waking up to this.”