Nature | Column: World View

Beware the creeping cracks of bias

Alarming cracks are starting to penetrate deep into the scientific edifice. They threaten the status of science and its value to society. And they cannot be blamed on the usual suspects — inadequate funding, misconduct, political interference, an illiterate public. Their cause is bias, and the threat they pose goes to the heart of research.

Bias is an inescapable element of research, especially in fields such as biomedicine that strive to isolate cause–effect relations in complex systems in which relevant variables and phenomena can never be fully identified or characterized. Yet if biases were random, then multiple studies ought to converge on truth. Evidence is mounting that biases are not random. A Comment in Nature in March reported that researchers at Amgen were able to confirm the results of only six of 53 'landmark studies' in preclinical cancer research (C. G. Begley & L. M. Ellis Nature 483, 531–533; 2012). For more than a decade, and with increasing frequency, scientists and journalists have pointed out similar problems.

Early signs of trouble were appearing by the mid-1990s, when researchers began to document systematic positive bias in clinical trials funded by the pharmaceutical industry. Initially these biases seemed easy to address, and in some ways they offered psychological comfort. The problem, after all, was not with science, but with the poison of the profit motive. It could be countered with strict requirements to disclose conflicts of interest and to report all clinical trials.

Yet closer examination showed that the trouble ran deeper. Science's internal controls on bias were failing, and bias and error were trending in the same direction — towards the pervasive over-selection and over-reporting of false positive results. The problem was most provocatively asserted in a now-famous 2005 paper by John Ioannidis, currently at Stanford University in California: 'Why Most Published Research Findings Are False' (J. P. A. Ioannidis PLoS Med. 2, e124; 2005). Evidence of systematic positive bias was turning up in research ranging from basic to clinical, and on subjects ranging from genetic disease markers to testing of traditional Chinese medical practices.
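The arithmetic behind Ioannidis's claim can be sketched with a few lines of code. The numbers below are purely illustrative (they are not taken from the column or from the paper): the chance that a statistically significant finding is actually true depends on the prior odds that the hypothesis is true, the study's power, and the significance threshold.

```python
def ppv(prior_odds, power, alpha):
    """Positive predictive value: probability that a 'significant'
    result reflects a true effect, given prior odds of truth,
    statistical power (1 - beta) and significance level alpha."""
    true_positives = power * prior_odds
    false_positives = alpha  # rate of spurious positives among true nulls
    return true_positives / (true_positives + false_positives)

# Confirmatory setting: 1 true hypothesis for every 2 tested
print(round(ppv(0.5, 0.8, 0.05), 2))   # 0.89 -- most positives are true

# Exploratory setting: 1 true hypothesis for every 100 tested
print(round(ppv(0.01, 0.8, 0.05), 2))  # 0.14 -- most positives are false
```

In fields that test many long-shot hypotheses, even well-powered, honestly conducted studies produce a literature dominated by false positives; any systematic positive bias then compounds the problem.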

How can we explain such pervasive bias? Like a magnetic field that pulls iron filings into alignment, a powerful cultural belief is aligning multiple sources of scientific bias in the same direction. The belief is that progress in science means the continual production of positive findings. All involved benefit from positive results, and from the appearance of progress. Scientists are rewarded both intellectually and professionally, science administrators are empowered and the public desire for a better world is answered. The lack of incentives to report negative results, replicate experiments or recognize inconsistencies, ambiguities and uncertainties is widely appreciated — but the necessary cultural change is incredibly difficult to achieve.

Researchers seek to reduce bias through tightly controlled experimental investigations. In doing so, however, they are also moving farther away from the real-world complexity in which scientific results must be applied to solve problems. The consequences of this strategy have become acutely apparent in mouse-model research. The technology to produce unlimited numbers of identical transgenic mice attracts legions of researchers and abundant funding because it allows for controlled, replicable experiments and rigorous hypothesis-testing — the canonical tenets of 'scientific excellence'. But the findings of such research often turn out to be invalid when applied to humans.

A biased scientific result is no different from a useless one. Neither can be turned into a real-world application. So it is not surprising that the cracks in the edifice are showing up first in the biomedical realm, because research results are constantly put to the practical test of improving human health. Nor is it surprising, even if it is painfully ironic, that some of the most troubling research to document these problems has come from industry, precisely because industry's profits depend on the results of basic biomedical science to help guide drug-development choices.

Scientists rightly extol the capacity of research to self-correct. But the lesson coming from biomedicine is that this self-correction depends not just on competition between researchers, but also on the close ties between science and its application that allow society to push back against biased and useless results.

It would therefore be naive to believe that systematic error is a problem for biomedicine alone. It is likely to be prevalent in any field that seeks to predict the behaviour of complex systems — economics, ecology, environmental science, epidemiology and so on. The cracks will be there, they are just harder to spot because it is harder to test research results through direct technological applications (such as drugs) and straightforward indicators of desired outcomes (such as reduced morbidity and mortality).

Nothing will corrode public trust more than a creeping awareness that scientists are unable to live up to the standards that they have set for themselves. Useful steps to deal with this threat may range from reducing the hype from universities and journals about specific projects, to strengthening collaborations between those involved in fundamental research and those who will put the results to use in the real world. There are no easy solutions. The first step is to face up to the problem — before the cracks undermine the very foundations of science.

Journal name: Nature
Volume: 485
Pages: 149
DOI: 10.1038/485149a

Author information

Affiliations

  1. Daniel Sarewitz is co-director of the Consortium for Science, Policy and Outcomes at Arizona State University, and is based in Washington DC.


Comments

20 comments

  1. Christopher Yau

    Systematic bias may also be compounded in large, complex projects where the main body of work is done by graduate students and postdoctoral researchers. Mistakes made by junior researchers early in the project cycle may not be detected until much later on (if at all) and in some cases after the original investigator has left the research group. It is therefore important that senior investigators work hard to manage complex group dynamics, build mechanisms to ensure continuity and provide strong support for junior researchers.

  2. Boris Cvek

    I have read Daniel Sarewitz's column "Beware the creeping cracks of bias" with great interest. Especially important, in my view, are his words about "real-world complexity in which scientific results must be applied to solve problems". We are not able to translate our knowledge from the lab to the bedside. It reminds me of a comment by Francis Collins, the Director of the NIH, published in the June issue of Nature Reviews Drug Discovery, "Mining for therapeutic gold". Collins criticizes the current translational process, which is, according to him, "fraught with frustration", and offers a new direction called "drug repurposing". Simply put, if a patient is taking a drug for her illness A, the drug may also accidentally cure another illness (illness B) in the same patient. By building a new type of pharmacovigilance (cf. Boguski et al. Science 2009) we can monitor the "positive adverse effects" of all currently used drugs. This kind of clinical evidence represents the "real-world complexity" Sarewitz writes about. Basically, we don't know the precise mechanism of action of those repurposed drugs in the new diseases they would be used for. But we can test their efficacy in clinical trials even without repatenting them and, in the case of success, dramatically lower the cost of our health-care systems (cf. Cvek Drug Discovery Today 2012).

  3. Willy Billy

    While I would applaud Dr. Sarewitz for addressing this issue, I'm a little dismayed that the most obvious source of this creeping bias is completely ignored. The problem here isn't with science, it is with politics. Our funding is controlled politically. It is increasingly subject to the whims of any innuendo-slinging lunatics who may wish to vault to prominence on the political stage. Add to this the fact that there are so few tenure-track positions that most PhDs feel they have no hope of obtaining a coveted position unless they publish in the "best" journals, and you have a recipe for disaster.

    We are supposed to be "objective" scientists, yet we don't hesitate, when search committees are formed, to rely on completely arbitrary criteria to winnow the field. We all bemoan how ridiculous the impact factor is, yet it is, without doubt, the overriding criterion for all scientific funding. The history of science is replete with stories of discoveries that would later lead to Nobel prizes being dismissed by the "top" journals for their "lack of novelty" or any number of equally ridiculous excuses for why legitimate, groundbreaking science should be relegated to less prominence.

    There are a number of simple solutions here. While it would be impractical to expect to divorce scientific funding from politics completely, we need to eliminate the pretense that once the money has been allocated, it is free of such influence. It is unscientific, unobjective political criteria that drive science funding throughout the grant-review process. In the life sciences, if you can't convince the reviewer you may be able to provide direct insight into the components of a disease process, you might as well save the time you would spend writing. Perhaps no greater example of the inanity of the current system exists than the likely universally acceptable statement that many of the orphan GPCRs are of undoubted biological importance, yet the ability to get a grant funded to study one would depend on first linking it to a "biologically relevant" problem.

    Any objective scientist can, I think, immediately recognize that this is not the way science should work. We cannot predict what will be most important in the future based on what we already know. If we could, science would find the end-point of all potential human knowledge pretty quickly. We might arguably have no human genome sequence at this point in history if the scientists of the 60's and 70's had their funding tied to the same ridiculous, arbitrary political criteria that exist today.

    Finally, I would like to point out that diversity in the approach to scientific questions is achieved by the employment of a diversity of researchers. The problems outlined above create a snowball effect by continually winnowing the field of scientists only to those who manage to fit the increasingly stringent, increasingly arbitrary constraints imposed by the politicization of scientific funding. If you want to discover the culprit for the problem of creeping bias, perhaps no more obvious source could be named.

    The solution is to enshrine into law a requirement that once funding is allocated to the duly appointed agencies, the scientific community as a whole will be the sole arbiter of how best to allocate the available funds. This will require many of those at the heights of the current scientific bureaucracy to prove that they are indeed still worthy of being labeled objective scientists, as opposed to, say, politicians. Furthermore, I would argue that a significant majority of the available funding should be allocated first to funding people. The number of permanent, tenured positions should be dramatically increased. Each tenured position should be able to rely on an adequate, if modest, level of funding allowing the researcher to pursue any course of research they consider worthy. The remainder of the available funding could then be apportioned, in a grant-review process similar to the current regime, to projects requiring excess funds that are found suitably worthy by a scientific consensus: a scientific consensus free of the whims of the political elite and of those for whose votes they feel the need to pander.

  4. Biswapriya Mishra

    The majority of 'significant scientific' discoveries made to date came before the 1950s, and what is presently being done is finding their applications [usually roaming around their peripheries]: this is an established fact and claim. Thereby, 'findings' lingering around 'known facts' are safer, while those with the power of 'funding, networks, positions' tend to 'claim big' and 'sensationalize' their discoveries and stories [media, press, podcasts and so on!]. It is easy to see that a few PIs [and their labs] tend to go on a 'publishing rampage' while many tend to 'hold back' or 'hide' their research. It is not uncommon to find scientists, students and PIs calculating and citing the impact factor of their so-called publications to '5 decimal places' and boasting of being the greater researcher. Even a science-career starter [for example, myself] will notice that there are institutions, in both the 'developed' and 'developing' world, that work 'mechanically day and night' generating 'data' to satisfy funding agencies rather than making a conscious effort to 'rationalize the taxpayers' hard-earned money'. Many PIs have in due course changed their field of research, are unaware of developments in the new area, no longer work at the bench, and depend on the system in place to guide them through [basically unaware of bench-level goof-ups by a doctoral student, MS student or postdoc]. Collaborative efforts and communication within laboratories [and among lab members] are often non-existent, and lack of communication between laboratories and departments working on similar areas is rife.
    Retractions based on falsification of data, plagiarism and manipulation are seldom pursued, and when they are, we ignore them, especially the whole network of people who have already cited this work in their publications or climbed the academic ladder on the back of these scientific discoveries and publications. Bureaucracy, hierarchical systems, an unaware public, top-tier personnel in funding agencies, perks based on publications and patents leading to promotions, jobs and positions! No wonder 'corporate houses, business firms and industries' tend to be addressing more 'relevant and worthy' research than 'academia'. Further relevant questions also apply to the distribution of funding to appropriate and worthy institutions. For example, with the mushrooming of research institutions, universities and researchers, where is the QC, or a panel to decide which projects [the more realistic ones] to fund, and which not at all?
    We have all come across graduate students 'presuming' that when they treat with substance 'x' the M effect will be there, whereupon all positive results lead towards validation of the claim, and negative results towards the 'synthesis of a new hypothesis' altogether. How many times do we see funding agencies sponsoring grant proposals on the 'Effect of hexavalent Mo ions on the root hairs of primary roots of Japonica rice under N,P,K-depleted conditions in a rainy season!' or, for that matter, 'Exploration of biodiversity in the tropical rain forest of Borneo'? How much potential do these two studies hold in terms of deliverables? No wonder they deliver 'known facts' based on reclamation of previously established findings. Questions also relate to the lack of guidance and mentorship from 'clean, intellectual, knowledgeable' mentors, or to the reflection of fraudulence in the present, younger generation of researchers.
    Of course my opinion reflects wider issues than just the 'scope of discussion' in the above comments, but they are intricately related as 'some of the potential reasons' leading to this misery in R, D & T. Moreover, though we cannot tar everyone with the same brush, the tendency towards 'rationalization of this bias and of negative, useless results' is growing alarmingly, and is a bigger social stigma that we all have to suffer through at some point in our lives. But in spite of these alarms, it is heartening to believe that there are numerous 'good, clean, thorough' researchers with 'morals and ethics' worldwide, that the majority of our experiments DO NOT fail, and that the system is still in place!

  5. Yvan Dutil

    @Dilip G. Banhatti Your point about astrophysicists assuming that the mass is concentrated at one point in galaxies is completely ridiculous. The mass distribution is in fact calculated from the observed star distribution. And this distribution happens to be strongly peaked at the core, but it is not point-like. Moreover, dark matter is also observed in elliptical galaxies, galaxy clusters and large-scale structures. Using this example as an indication of scientific bias is silly, to say the least.

  6. Jeremy Fox

    I appreciate the forceful discussion of bias, but the notion that "useless" research is "no different" than biased research because neither can be applied is a serious mistake. One might as well say that a newborn baby is no different than an ill and bedridden adult, because neither can hold a full-time job. Or that applied research is no different than biased research because neither improves our fundamental understanding of nature.

    So if a use is discovered for research previously thought useless, does that research suddenly become unbiased?

    The author has previously argued for a greater emphasis on what he sees as relevant research. He should stick to offering good argument for such research, as he has in the past, rather than spuriously conflating basic research with "biased" research.

  7. David Tyler

    "How can we explain such pervasive bias? Like a magnetic field that pulls iron filings into alignment, a powerful cultural belief is aligning multiple sources of scientific bias in the same direction."

    Surely the analysis of Thomas Kuhn, helpfully articulated in a recent review by David Kaiser
    (Nature, 12 April 2012, http://www.nature.com/nature/journal/v484/n7393/full/484164a.html) is relevant here. Most researchers are practicing "normal science" and are building on a consensual paradigm. They have a model of incremental progress and they think deductively that all "positives" must advance the paradigm. They are not thinking about false positives. This is the real "cultural belief" that steers the way research is done.
    Somehow, we need to avoid appeals to scientific "consensus" that close down or confine discourse. Science thrives when the appeal is not to consensus but to evidence. Why can't the "multiple working hypotheses" approach be more widely adopted?

  8. Paul Matthews

    It's interesting that Daniel Sarewitz does not mention the most obvious example of bias and hype: climate science. Perhaps writing anything daring to question climate science in Nature would lead to instant dismissal from the writing team.

  9. Donald Forsdyke

    Quote: "Scientists rightly extol the capacity of research to self-correct. But the lesson coming from biomedicine is that this self-correction depends not just on competition between researchers, but also on the close ties between science and its application that allow society to push back against biased and useless results." Wow!



    1. There can be long time-lags before self-correction can occur (e.g. the neglect of Mendel's work for 35 years).
    2. Competition often leads to the opposite of self-correction. One group seizes the high ground and keeps its competitors at bay by controlling publication access, funding, etc.
    3. How on earth can society push back against biased and useless results? This is something only experts can determine. What society can do is push for reforms, such as those I have suggested in my book Tomorrow's Cures Today (2000).

  10. Ad Lagendijk

    Very interesting. Thank you. The problem is not restricted to science but relates to the whole society. And can only be solved when this broader context is addressed. Society wants positive results. Society wants promises, like a quantum computer. Scientists want positive results. When a theoretical physicist discovers his theory is not in agreement with experiment he will "improve" his theory until it does agree. It is never the other way around: when the theory agrees no scientist will improve his theory until it does not agree. When an experimentalist discovers an outlier in his data set he will find arguments to disregard it. When in contrast a data point fits the theory its value is not discussed. Only in the exceptional cases in physics of widely tested theories, like the three laws of thermodynamics and the two forms of relativity theory could the focus be on outliers only.

    The solution lies in "unhyping" science, which requires unhyping society. The excessive focus on "Is there any news?" in science should be abandoned. The very concept of a press release on a scientific discovery denies the nature of science. It is amusing to quote a famous German anecdote. When in 1845 Friedrich Wilhelm IV, King of Prussia, asked an astronomer "Gibt es Neues am Himmel?" ("Is there anything new in the sky?") he got as an answer: "Kennen Majestät schon das Alte?" ("Does your majesty already know the old stuff?")

  11. Thomas Germe

    The increasing pressure to publish will inherently worsen the bias towards false positive results. It seems that the scientific community doesn't have clear assessment criteria to evaluate individual scientists other than those relying on publications. Being a good scientist often involves hard work to clearly demonstrate the unfeasibility of an approach. Yet, this work doesn't appear in your track record and can even prove a hindrance to your career.

  12. Suresh VR

    Besides the points mentioned in this article, there are a few other things to watch out for that are on the rise. Here in India, the research scenario, like other scenarios, is highly politicized and crony politics is rampant. Getting one's research proposals funded is a matter of persistence, luck and, above all, contacts in the right funding agencies. Some of these senior scientists already have a network that helps them publish their poorly designed and interpreted research in good journals, but I won't speak of that bias now.

    I want to point out the bias that one has to watch out for in papers/patents that seem to report studies with all the necessary technical information but with some research errors and misinterpretations that sometimes, some editors/reviewers are willing to overlook. Wishing to encourage research, the governmental funding agencies promote collaborative research between academic and industrial partners.

    In their desire to tap into these funds and promote their own careers, many academic and industrial scientists are putting together projects based on flimsy data, carrying out studies that are heavily biased towards their own desired outcomes, and seeking to publish and patent these results by conveniently sweeping negative and undesired results under the carpet. Reading a report from such a project may seem rather straightforward, with the required in vitro and in vivo data.

    But beware of the experience of the investigators and of any signs of things looking too good to be true, because they usually are. Since the funding agencies do not care about the actual research done, and the institutes and universities employing these scientists won't want to address issues that tarnish their names, this kind of fraud can go on for a long time.

    Another problem on the rise in India is that because there are deep pockets to tap into in the biomedical fields, many well-connected investigators, who are actually not trained in these life sciences, will submit proposals to carry out biomedical research, (or they may be trained in one area, say, plant biotechnology and will apply for funding in cancer biology), get the funding, outsource the work to companies or other scientists who will carry out those experiments for some fee, and then report these data in their papers and reports to the funding agencies.

    Neither party will actually take responsibility for the work being done, and the company or scientist (if not a collaborator) won't even appear as an author (sometimes they are not even acknowledged). So when looking at a given body of data it would be wise to do what the funding agencies should ideally be doing: look at whether the authors have had prior experience in that area or are at least trained in it, email the corresponding author(s) with questions about their research if you have any doubts at all, and see what kind of answers you get. Unfortunately, as always, these money-grabbing careerists give well-trained, sincere scientists a bad name.

  13. Igor Litvinyuk

    A biased scientific result is not "no different from a useless one". While neither can indeed be turned into a real-world application, no one would even attempt such a conversion with a useless one. The biased result, on the other hand, can divert valuable resources into hopeless translational efforts, resources which could be expended more productively elsewhere. That opportunity cost may well be the most harmful result of the bias. Imagine the effort (and expense) the Amgen researchers had to go to just to bring this bias to light, and how much more productively that effort could have been applied.

  14. Michael Lardelli

    This is an excellent and very timely article. I think we need to recognise that much of the bias in scientific research rests within the minds of the researchers themselves. Quite simply, in some areas there is a herd mentality combined with a lack of objectivity and an unwillingness to accept research results that appear to contradict dearly held ideas about how systems work. An excellent example is the field of Alzheimer's disease research, where a great deal of research effort (over 60,000 papers!) has resulted in only modest advances and still no coherent understanding of the disease mechanism. I know one quite eminent researcher from outside the field who in recent years has obtained results relevant to Alzheimer's disease genetics that contradict much conventional thinking in the area. He has had astonishing difficulty getting the results published: one paper with very interesting results was submitted to 11 different journals before finding a home!
    Even the editorial staff at Nature display this bias and lack of objectivity: many commentaries in Nature blindly accept ideas based on economic "principles" that are demonstrably false. Why do most Nature commentaries assume that economic growth will continue in the longer term? Why do they assume that higher energy and other resource prices must (must!) successfully result in the development of alternatives that will allow growth and "progress" to continue? Why did it take so many years for Nature to publish a commentary on peak oil when conventional oil production peaked back in 2005/6? Why did Nature have more confidence in the economists who dismiss peak-oil ideas than in the scientists who raised concerns about oil production and have published peer-reviewed research papers in that area? What other areas of research are being suppressed on subjective grounds because we do not want to believe the results are real, or find them embarrassing? (Hint: for one possible candidate watch this online lecture http://www.youtube.com/watch?v=EtweR_qGHEc ) Scientific progress requires an open mind, but once scientific "elites" become established in any area they defend their positions against objective reality. After all, research results that contradict widely held beliefs are controversial, and excessive controversy and inconvenient ideas scare away the money on which commercial enterprises such as Nature, and the highly paid administrators who run modern universities, are so focused.

  15. Dilip G. Banhatti

    I entered essentially the same comment in different words once again! However, it may not be a bad idea to let both remain.

  16. Dilip G. Banhatti

    A systematically wrong result in astrophysics that is yet to be widely enough recognized as such is the inference of a dark-matter halo around a disk galaxy from its rotation curve. Newtonian dynamics and gravity fully model disk-galaxy rotation curves without any need for dark matter. The error arose from applying a solar-system model to disk galaxies, although disks have distributed mass while solar-system planets orbit what is essentially a single point mass at the Sun. Halo models needlessly include additional parameters instead of inverting the disk rotation curve to obtain the disk mass density.

  17. Dilip G. Banhatti

    An example of a systematically wrong result in astrophysics that is yet to be widely enough recognized as such is the inference of dark-matter haloes around disk galaxies from their rotation curves. If done correctly, using minimal additional hypotheses and parameters, it turns out that disk-galaxy rotation curves are fully modelled within Newtonian dynamics and gravity. The error made in the mid-seventies was to implicitly assume Keplerian rotation as the right model for disk galaxies, although it actually applies only to rotation around a point mass, whereas disk galaxies have very distributed mass, not at all concentrated at the centre. No halo is actually needed if one simply inverts the rotation curve to get the disk mass density. I will be glad to provide reports of the calculations and theory in which this has been brought out clearly over more than a decade.

  18. Peter Gerard Beninger

    Some of the bias against negative results may be due to the near-infinite number of negative results to report! I can think of one study in which it was reported that a certain animal did not eat a certain plant offered as food. There are thousands of plants that could be offered, and thousands of negative results that could be published; what journal will actually do this? My point is that the number of negative or null results will always far outnumber the positive results, so there has to be some selection to filter out the least justified studies, but here again one could cry 'bias'...

  19. Yvan Dutil

    Personally, I think the problem is much less serious in the physical sciences because there is an incentive to disprove theory. When facing an unexpected observation, physicists will tend to make another one with better sensitivity and less systematic bias. A good example is the claim that neutrinos were travelling faster than the speed of light: experimental challenges to this result were rapidly carried out. In addition, in physics, upper limits bring scientific understanding too.

    The main problem I see when reading articles in other research fields is the low statistical confidence of these experiments. Small samples, fewer than 30, can easily produce false positive results. Nevertheless, a large fraction of studies seems to be done on such small samples. In addition, extra variables are often used to subdivide the original sample into multiple subgroups. This may make sense, but authors seem to be unaware that if you divide your sample into 4 parts, your 95% confidence level has a probability of around 20% of being crossed by chance alone. Many studies of the health effects of low-level radiation and electromagnetic waves share this problem. In those circumstances, even the slightest reporting bias will have a dramatic impact.
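The subgroup arithmetic in this comment is easy to check. A minimal sketch, assuming the four subgroup tests are independent and every null hypothesis is true:

```python
alpha = 0.05   # per-test significance level
subgroups = 4  # number of subgroups tested separately

# Probability that at least one of the 4 tests comes out "significant"
# purely by chance (the family-wise error rate):
family_wise_error = 1 - (1 - alpha) ** subgroups
print(round(family_wise_error, 3))  # 0.185, close to the ~20% quoted
```

With correlated tests the inflation is smaller, but the direction of the effect is the same: every untracked subdivision of the sample raises the odds of a spurious positive.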

  20. Chris Said

    Great article, Daniel. In my own field (neuroscience), a bias away from null results is clearly distorting the research. There have been several well-intentioned attempts to solve this problem, such as journals dedicated to null results. None of them have caught on because the underlying incentive structure remains the same. As I argue here, a change in the incentive structure must come from the granting agencies.
