Science needs reason to be trusted

That we now live in the grip of post-factualism would seem naturally repellent to most physicists. But in championing theory without demanding empirical evidence, we're guilty of ignoring the facts ourselves.

I'm a theoretical particle physicist and I doubt the value of theoretical particle physics. That's awkward already, I know, but it gets even worse. I'm afraid the public has good reasons to mistrust scientists and — sad but true — I myself find it increasingly hard to trust them too.

In recent years, trust in science has been severely challenged by the reproducibility crisis [1]. This problem has predominantly fallen on the life sciences where, it turns out, many peer-reviewed findings can't be independently reproduced. Attempts to solve this have focused on improving the current measures for statistical reliability and their practical implementation. Changes like this were made to increase scientific objectivity or — more bluntly — to prevent scientists from lying to themselves and each other. They were made to re-establish trust.

The reproducibility crisis is a problem, but at least it's a problem that has been recognized and is being addressed. From where I sit, however, in a research area that can be roughly summarized as the foundations of physics — cosmology, physics beyond the standard model, the foundations of quantum mechanics — I have a front-row seat to a much bigger problem.

I work in theory development. Our task, loosely speaking, is to come up with new — somehow better — explanations for already existing observations, and then make predictions to test these ideas. We have no reproducibility crisis because we have no data to begin with — all presently available observations can be explained by well-established theories (namely, the standard model of particle physics and the cosmological concordance model).

But we have a crisis of an entirely different sort: we produce a huge number of new theories and yet none of them is ever empirically confirmed. Let's call it the overproduction crisis. We use the approved methods of our field, see that they don't work, yet fail to draw the consequences. Like a fly hitting the window pane, we repeat ourselves over and over again, expecting different results.

Some of my colleagues will disagree that we have a crisis. They'll tell you that we have made great progress in the past few decades (despite nothing coming out of it), and that it's normal for progress to slow down as a field matures — this isn't the eighteenth century, and finding fundamentally new physics today isn't as simple as it used to be. Fair enough. But my issue isn't the snail's pace of progress per se; it's that the current practices in theory development signal a failure of the scientific method.

Let me illustrate what I mean.

In December 2015, the LHC collaborations CMS and ATLAS presented evidence for a deviation from standard-model physics at a resonant mass of approximately 750 GeV [2,3]. The excess appeared in the two-photon decay channel and had a low statistical significance. It didn't look like anything anybody had ever predicted. By August 2016, new data had revealed that the excess was merely a statistical fluctuation. But before this happened, high-energy physicists produced more than 600 papers to explain the supposed signal. Many of these papers were published in the field's top journals. None of them describes reality.

Now, the particle physics community has always been subject to fads and fashions. Though this case was extreme both in the number of participants and in their haste, there have been many similar cases before [4]. In particle physics, jumping on a hot topic in the hope of collecting citations is so common it even has a name: 'ambulance chasing', referring to the (presumably apocryphal) practice of lawyers following ambulances in the hope of finding new clients.

One could argue that even if all the proposed explanations for the 750 GeV bump were wrong, they were still good exercise for the brain, a kind of fire drill for the real deal. I'm not convinced this is time well spent, but either way, ambulance chasing isn't what worries me. What worries me is that this flood of papers is a stunning demonstration of how useless the current quality criteria are. If it takes but a few months to produce several hundred 'explanations' for a statistical fluke, then what are these explanations good for?

And it's not only theoretical high-energy physics. You also see this in cosmology, where models for inflation abound. Theorists introduce one or several new fields and potentials that drive the Universe's dynamics before decaying into normal matter. Current observational data can't distinguish the different models. And even if new data comes in, there will still be infinitely many models left to write papers about. By my estimate, the literature presently contains several hundred of these [5].

For each choice of inflation fields and potentials one can calculate observables, and then move on to the next fields and potentials. The likelihood that any of these models describes reality is vanishingly small — it's roulette on an infinitely large table. But according to current quality criteria, that's first-rate science.

This syndrome of behaviour also arises in astrophysics, where theoreticians conjure up fields to explain the cosmological constant (which is well explained by it being a constant) and suggest more and more complicated 'hidden sectors' of particles that may or may not make up dark matter.

It isn't my intention to indiscriminately dismiss all this research as useless. In each of these cases there are good reasons why the topic is worth investigating and may lead to new insights — reasons I don't have the space to go into here. But in the absence of good quality measures, the ideas that catch on are the most fruitful ones, even though there is no evidence that a theory's fruitfulness correlates with its correctness. Let me emphasize that this doesn't necessarily mean any one individual scientist modifies behaviour to please peers. It merely means that the tactics that survive are those that reproduce [6].

Many of my colleagues believe this forest of theories will eventually be chopped down by data. But in the foundations of physics it has become extremely rare for any model to be ruled out. The accepted practice is instead to adjust the model so that it continues to agree with the lack of empirical support.

The dominance of fruitful and easily amendable hypotheses has consequences. Because experiments probing the foundations of physics have become so costly and take such a long time to build, we have to carefully consider which experiments are likely to reveal new phenomena. In this assessment, theorists' convictions about which models are likely to be correct play a big role. Of course experimentalists push their own agenda, but theory should inform the commissioning of experiments. This, however, means that if theorists get lost, experiments become less likely to deliver new results, theorists are less likely to get new data, and the circle closes.

It's not hard to see how we got into this situation. We're judged by our publication count — or at least that's what we think we're being judged by — and stricter quality measures in theory development would cut back productivity. But the complaint that publication pressure rewards quantity to the detriment of quality has been made many times before, and I don't want to add yet another complaint about ill-conceived measures of scientific success. Evidently, such complaints don't make a difference.

Complaints about publication pressure don't help because this pressure is merely a symptom, not the disease. The underlying problem is that science, like any other collective human activity, is subject to social dynamics. Unlike participants in most other collective human activities, however, scientists should acknowledge threats to their objective judgment and find ways to avoid them. But this doesn't happen.

If scientists are selectively exposed to information from likeminded peers, if they are punished for not attracting enough attention, if they face hurdles to leave a research area when its promise declines, they can't be counted on to be objective. That's the situation we're in today — and we have accepted it.

To me, our inability — or maybe even unwillingness — to limit the influence of social and cognitive biases in scientific communities is a serious systemic failure. We don't protect the values of our discipline. The only responses I see are attempts to blame others: funding agencies, higher-education administrators or policy makers. But none of these parties is interested in wasting money on useless research. They rely on us, the scientists, to tell them how science works.

I have offered examples of this missing self-correction from my own discipline. It seems reasonable that social dynamics is more influential in areas starved of data, so the foundations of physics are probably an extreme case. But at its root, the problem affects all scientific communities. Last year, the Brexit campaign and the US presidential campaign showed us what post-factual politics looks like — a development that must be utterly disturbing for anyone with a background in science. Ignoring facts is futile. But we too are ignoring the facts: there's no evidence that intelligence provides immunity against social and cognitive biases [7], so their presence must be our default assumption. And just as we have guidelines to avoid systematic bias in data analysis, we should also have guidelines to avoid systematic bias stemming from the way human brains process information.

This means, for example, that we shouldn't punish researchers for working in unpopular fields, shouldn't filter information through friends' recommendations, and shouldn't allow marketing tactics; instead, we should counteract loss aversion with incentives to switch fields and give more space to knowledge that isn't already widely shared (to prevent the 'shared information bias'). Above all, we should start taking the problem seriously.

Why hasn't it been taken seriously so far? Because scientists trust science. It's always worked, and most scientists are optimistic it will continue to work — without requiring their action. But this isn't the eighteenth century. Scientific communities have changed dramatically in the past few decades. There are more of us, we collaborate more, and we share more information than ever before. All this amplifies social feedback, and it's naive to believe that when our communities change we don't have to update our methods too.

How can we blame the public for being misinformed because they live in social bubbles if we're guilty of it too?


References

1. Nature 533, 452–454 (2016).
2. CMS Collaboration, Phys. Rev. Lett. 117, 051802 (2016).
3. ATLAS Collaboration, J. High Energ. Phys. 2016, 001 (2016).
4. Preprint at (2016).
5. J. Cosmol. Astropart. Phys. 2014, 039 (2014).
6. R. Soc. Open Sci. 3, 160384 (2016).
7. J. Pers. Soc. Psychol. 94, 672–695 (2008).


Author information


Sabine Hossenfelder is at the Frankfurt Institute for Advanced Studies, Ruth-Moufang-Straße 1, 60438 Frankfurt am Main, Germany.

Corresponding author

Correspondence to Sabine Hossenfelder.