Q&A: Four false beliefs that are thwarting trustworthy research

Science doesn't self-correct – at least not fast enough.

13 September 2019

Gemma Conroy

Credit: leszekglasner/Getty

Rules, regulations and peer review are important measures for safeguarding research integrity, but are they enough to improve the trustworthiness of science?

According to Mark Yarborough, such measures do little to tackle the root cause of the problem, particularly in biomedical research.

“I was struck by how little conversation I heard among my colleagues about quality concerns in their research,” says Yarborough, a bioethicist at the University of California, Davis.

With his colleagues Robert Nadon at McGill University in Canada and independent researcher David Karlin, Yarborough has pinpointed four false beliefs that hinder trustworthy research:

  1. Questioning the trustworthiness of research questions the integrity of researchers.
  2. Most of the problems in research are due to a few bad actors.
  3. Science self-corrects.
  4. Compliance with regulations can solve the problems that gave rise to the regulations themselves.

Their views are published in a feature article in eLife.

Mark Yarborough. Credit: BIH/Thomas Rafalzyk

Robert Nadon. Credit: BIH/Thomas Rafalzyk

Nature Index spoke to Yarborough and Nadon about how these problems erode public trust in science.

Why is it important to emphasise, “It’s about the science, not the scientist”?

Mark Yarborough: I noticed that whenever I’d give a presentation about these topics, people tended to think I was personally criticizing them and their colleagues.

I thought I was being careful to talk about the research as opposed to the researchers, though it’s clearly difficult to speak about one without speaking about the other.

A typical slide I use in my talks shows a jet aircraft with the caption, “This is about the plane, not the pilot.” It’s the same with science.

Robert Nadon: One of the issues in biomedicine is that there isn’t as much formal training in study design and statistics as in other disciplines. For instance, blinding is a widely accepted procedure in many fields and a requirement in clinical studies, but it isn’t as widely accepted in biomedicine.

If I want to talk about blinding issues, it’s not intended to be personal. But from a biomedical scientist’s point of view, I can understand how it can be taken personally if they’ve been doing studies for their entire career without blinding, and then someone comes along and says that it’s really necessary for rigorous science.

What does “We need to focus on the health of the orchard, not just the bad apples” mean?

MY: I’ve noticed that many researchers think that the problems are due to a couple of bad apples that reflect poorly on the entire community. Of course, this isn’t true, because blatant misconduct is the least of our worries.

I’ve always been frustrated that many people seem to use that as a kind of defence mechanism. It’s as if they think they don’t really need to worry about these quality concerns and challenges because they’re not one of the bad apples.

RN: The problem is that the really egregious instances of misconduct tend to make the headlines in the media, and it’s understandable why we talk about them, as some bad actors have crossed lines that you wouldn’t imagine a scientist would cross.

But the collective damage of ‘less serious’ behaviours can actually cause more problems than a few bad actors.

A lack of rigorous design and proper statistical analysis, cherry-picking results, removing data points you don't like in order to get significant p-values – these kinds of behaviours don't cross into the really egregious territory that we see in the headlines.

Nonetheless, they are not conducive to trustworthy research and don’t encourage the public to trust the research.

Why do our beliefs about self-correcting science need self-correcting?

MY: The notion that science self-corrects is the folklore of science, and people tend to put too much faith in this belief.

We’re all taught this from a very young age, but the historical record is absolutely clear that science does not self-correct, at least not nearly quickly enough if we're concerned with the quality and reliability of publications in the biomedical literature.

RN: Part of the problem is that doing replications and self-correcting in biomedicine is very expensive and time-consuming. There's no incentive to do this, except in some very rare circumstances with particularly ground-breaking work.

You say that following regulations does not guarantee trustworthiness. Why is this so?

MY: It’s not enough, because in most cases, these rules simply don’t work. Many regulations are targeted at small pieces of the problem, like plagiarism.

A lot of people in the research community also delegate their responsibilities to regulatory officers. If the officers approve, then everything must be fine, right? But it’s not always the case.

RN: Sometimes when people try to follow the rules, they’re not well-informed enough to know how to follow them properly. For example, say there's a rule that randomisation should be routinely used in experiments, and some journals are encouraging the use of randomisation.

But as a reviewer, when I probe down and ask for more information on how the randomisation was done, very often the authors aren’t doing it properly. They might be haphazardly choosing the mice with their eyes closed or doing something else where bias can creep in. There are a lot of honest mistakes.
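To illustrate the distinction Nadon is drawing – a documented, reproducible randomisation procedure rather than an ad hoc pick – here is a minimal sketch in Python. It is not from the interview or the eLife article, and the animal IDs, group names and seed are hypothetical.

    import random

    def randomise_allocation(animal_ids, groups, seed=20190913):
        """Assign animals to treatment groups via a reproducible random permutation."""
        rng = random.Random(seed)     # fixed, reported seed so the allocation can be audited
        shuffled = list(animal_ids)
        rng.shuffle(shuffled)         # genuine randomisation, not a haphazard 'eyes closed' pick
        # Deal the shuffled animals round-robin into the groups
        return {group: shuffled[i::len(groups)] for i, group in enumerate(groups)}

    # Hypothetical example: 12 mice split between a treatment and a control group
    allocation = randomise_allocation([f"mouse_{n:02d}" for n in range(1, 13)],
                                      ["treatment", "control"])
    print(allocation)

Reporting the seed and the procedure is what allows a reviewer who probes down, as Nadon describes, to verify how the randomisation was actually done.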

The physicist Richard Feynman pointed out that ‘leaning over backwards’ is the proper way to do science. In other words, researchers should do their studies and report them in such a way that all the information is in the open, so journal editors, reviewers and readers can fully evaluate what’s been done.

Given the hyper-competitive environment of biomedical research, it is very difficult to get researchers to report their studies in a way that would lower their chances of getting their research published.

The majority of honest and diligent researchers are trying to balance those concerns – they want to be trustworthy and do good research, but they need to continue to receive funding so that they can continue to do good research.

This interview has been edited for clarity and length.
