Rewarding negative results keeps science on track

Figure 1 | Publishing replication attempts saves other researchers’ time. Credit: Grove Pashley/Getty

Two research prizes signal a shifting culture. One, announced earlier this month by the European College of Neuropsychopharmacology, offers a €10,000 (US$11,800) award for negative results in preclinical neuroscience: careful experiments that do not confirm an accepted hypothesis or previous result. The other, from the international Organization for Human Brain Mapping, is entering its second year. It awards US$2,000 for the best replication study — successful or not — with implications for human neuroimaging. Winners, to be announced next year, are chosen for both the quality of the study and the importance of the finding being scrutinized.

Research cannot be self-correcting when information is missing. The sorts of information most likely to stay in the shadows come from the negative results and replication studies that these two prizes put into the limelight. Indeed, in many fields, independent replication can be an advance in itself. Biomarkers and drugs, for example, must be tested in different patient populations from those of the initial studies to show that they work broadly and reliably — or, more importantly, that they don’t. Working out why can inform clinical approaches and elucidate the underlying biology.

Then there is all the time wasted when many scientists attempt the same thing. A researcher might not need to explore a particular hypothesis if others have spent months carefully doing so. But in today’s science system, those who toil but find no evidence for a hypothesis, or find similar evidence to others, have scant means or reason to publicize their efforts.

Why do such useful results remain hidden? One barrier is practical: they are often harder to assess, because there are so many reasons why two researchers might get different results. (We at Nature are developing ways to more deliberately support significant replications and refutations of Nature papers.) Another barrier is cultural. Those who publish replication studies that yield different results risk the wrath of the original researchers, who may be more concerned with preserving their status than with understanding why someone else found something different. Those who argue that the entire burden of proof should fall on replicators are putting scientific reputations before science itself.

New kinds of papers that encourage replication are gaining traction, particularly in psychology. For registered replication reports, journals assess proposals to replicate influential studies before any data are collected. Approved studies are then carried out, often by more than a dozen labs working in consultation with the original researchers. Just this week, Perspectives on Psychological Science published such a report: replication studies by 23 labs could not confirm much-cited work claiming that priming people to think of themselves as scholarly improves performance.

These prizes attempt to counteract the risks of publishing negative results and replications (including wasted effort and potential backlash from original researchers) and to boost the rewards. They provide a line on winners’ CVs that hiring and promotion committees can see and value. The hope is that such recognition will encourage researchers to present work that would otherwise languish on their hard drives.

Another tack is to fund researchers to carry out replication studies directly. This July, the government of the Netherlands gave grants to nine projects intended to replicate studies that have become important in public policy. One of these — on whether pupil dilation reflects personal interest in a subject being viewed — is more than 50 years old. So far, €3 million has been allocated for such investigations across three planned funding rounds. Encouragingly, the grants are competitive: the first round attracted 85 applications, of which fewer than 1 in 9 succeeded.

That demand suggests that researchers are keen to do replications when properly supported, whether the results are negative or confirmatory. It may take only small shifts in encouragement for science to realize their value.
