Negative results need airing too

Journal name: Nature
Volume: 470
Page: 39
DOI: 10.1038/470039a

The problem of the invisibility of negative results is underlined by the media storm over a paper supporting extrasensory perception being published in a reputable psychology journal (see The New York Times, 5 January 2011). Although individual reports might be statistically valid in isolation, their conclusions could still be questionable — other test results of the same hypothesis must also be taken into account.

Say a study finds no statistically significant evidence for a hypothesis at the predetermined significance level (P = 0.05, for example) and, like most studies with negative results, it is never published. If 19 other similar studies are conducted, then 20 independent tests at the 0.05 significance level are expected, even under a true null hypothesis, to produce one false positive on average (and the chance of at least one is about 64%). A positive result obtained in one of the 19 studies, viewed in isolation, would then appear statistically valid and so support the hypothesis, and would probably be published.
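
As a sanity check on this arithmetic, here is a minimal sketch in Python (an illustration added here, not part of the original letter): with k independent tests of a true null hypothesis at level alpha, the expected number of false positives is k * alpha, and the probability of at least one is 1 - (1 - alpha)^k.

```python
# Multiple-testing arithmetic behind the letter's example: 20 independent
# tests of a true null hypothesis at the 0.05 significance level.
alpha = 0.05  # per-study significance threshold
k = 20        # total number of parallel studies (1 unpublished + 19 similar)

expected_false_positives = k * alpha    # = 1.0 on average
p_at_least_one = 1 - (1 - alpha) ** k   # ~= 0.64

print(f"Expected false positives among {k} studies: {expected_false_positives:.2f}")
print(f"Probability of at least one false positive: {p_at_least_one:.2f}")
```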

Statistical corrections are routinely made for multiple testing within a study, but they are important across studies too. The difficulty lies in determining the number of parallel investigations of the same hypothesis. Perhaps different disciplinary research societies could help bring these covert experiments to light.
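
As an illustration only (the letter proposes no specific method), here is a hedged sketch of how a standard within-study correction such as Bonferroni or Sidak could in principle be applied across studies, if the number of parallel investigations m were somehow known; the value m = 20 below is purely hypothetical.

```python
# Hypothetical cross-study correction: shrink the per-study significance
# threshold so that the family-wise error rate across m parallel studies
# stays at the nominal alpha. The value of m is an assumption; in practice
# it is exactly the unknown quantity the letter says is hard to determine.
alpha = 0.05  # desired family-wise error rate
m = 20        # assumed number of parallel studies of the same hypothesis

bonferroni_threshold = alpha / m              # = 0.00250
sidak_threshold = 1 - (1 - alpha) ** (1 / m)  # ~= 0.00256

print(f"Bonferroni per-study threshold: {bonferroni_threshold:.5f}")
print(f"Sidak per-study threshold:      {sidak_threshold:.5f}")
```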

Author information

Affiliations

  1. Nitin Gupta & Mark Stopfer, NICHD, National Institutes of Health, USA.


Comments

  1. Michael Brandeis said:

    I fully agree with this view, which addresses only a rather small part of the greater issue of "unpublishable" data that should be published for the benefit of the scientific community. A forum for publishing negative data would help to prevent duplication of work. A similar forum for publishing "scooped" data would not only benefit the scooped scientists who end up empty-handed, but would also add confidence to the data that did get published. When someone publishes a discovery, it is often hard to verify how significant it is; if, however, several groups that have reached similar results were able to publish theirs, that would add much confidence. In the age of internet publishing, creating such a forum should not be too challenging.
    Michael Brandeis, The Hebrew University of Jerusalem, Israel

  2. Kenneth Pimple said:

    I happen to know of one journal of the type mentioned by M. Brandeis – the Journal of Negative Results in Biomedicine (http://www.jnrbm.com/). I'm in no position to comment on its quality, however.

  3. Gerald Pier said:

    This is highly needed. One venue that has worked is PLoS ONE: our study had a good hypothesis and was technically correct, but our findings were mostly negative (increased transcription of a bacterial gene during infection did not identify factors needed to cause infection). It was rejected from other journals because of its "negative findings", but accepted there.

  4. Nitin Gupta said:

    Here is a link to the discussion forum on the New York Times website on this topic: http://www.nytimes.com/roomfordebate/2011/01/06/the-esp-study-when-science-goes-psychic

  5. Armando Remondes said:

    I absolutely agree with all the points. I would add another one, besides the practical needs (avoiding duplication of work, testing the robustness of findings, and so on): "negative results" are indeed positive findings, since knowing that "a" does not affect "b" is at least as important as knowing that it does.

  6. Harold Dibble said:

    As everyone agrees, negative results are still results, but I'm not sure there is a problem in the lack of outlets dedicated to publishing them. First, negative results are published all the time when they contradict current theories or models; that is basically how science advances, through falsification of current hypotheses. Second, when a positive result like the one referred to here is published, you can bet that it will stimulate new research or lead to the publication of earlier work that showed negative results.

  7. Sonia Muenchow said:

    It is always frustrating in the lab to complete a set of experiments only to conclude that a hypothesis is incorrect. Whenever this occurs, I wonder whether anyone else has done the same experiment, or a similar series of experiments, and also obtained negative results. This is wholly inefficient and wastes not only grant money but time as well. I, too, would appreciate a forum where I could see which experiments led nowhere, so that I would know where not to spend resources.

  8. Guillaume Susbielle said:

    As noted by Kenneth Pimple, the Journal of Negative Results in Biomedicine offers a venue for negative results.

    Another open-access journal from the same publisher, BMC Research Notes, also aims to correct publication bias by allowing publication of confirmatory or negative results regardless of their interest or novelty.

    It is interesting that, although publication bias is not a new problem, only a few solutions or attempts to address this issue have been proposed.

    Conflict of interest:
    Guillaume Susbielle is an in-house editor of BMC Research Notes and an employee of BioMed Central.

  9. Yurii Dumin said:

    In fact, negative (or "zero-value") results are published quite often by reputable physics journals dealing with issues such as testing the possible time variability of the fundamental physical constants, searching for violations of fundamental physical principles, and so on. Unfortunately, these "zero-value" results are often not comparable to one another and are therefore poorly verifiable. The point is that such experiments usually measure some non-zero signal that is interpreted as noise or artefacts, and the accuracy with which this noise is eliminated is estimated very differently by different researchers.

    A typical example is the laboratory experiments searching for violations of the "equivalence principle" (between inertial and gravitational mass). Most textbooks on general relativity state that this equivalence has been verified to an accuracy of about 10^{-12}, the authors of other textbooks believe that the reliable limit is only 10^{-11}, and some researchers insist that they have obtained a constraint as tight as 10^{-14}. In fact, there is no reliable way to resolve this dispute, because the experimental designs and the data-processing procedures were not the same. So the verification of negative (or "zero") results is a very different matter from the verification of non-zero ones.

    I am not a specialist in the biomedical sciences, which initiated this discussion of disseminating negative results, but I think the same problem of verification will be severe in any field of science where "negative" results are widely published.
