News feature

"It’s not a replication crisis. It’s an innovation opportunity"

The meaning of failure in science.

  • Jon Brock

Detecting cancer in human tissues.
Credit: Aamir Ahmed, Jane Pendjiky and Michael Millar.

28 October 2019

It takes me a few moments to recognize the name. I’ve arrived early at the National Health and Medical Research Council’s Research Translation Symposium at the University of Sydney and I’m sitting, coffee in hand, planning my route through the day’s program.

A gentleman, quietly spoken, in a sharp blue jacket, asks to share my table. His conference badge identifies him as C. Glenn Begley. We’re shaking hands when I remember his significance.

Six years ago, Begley published an article in the journal Nature that was, in hindsight, a seminal moment in science. In three tersely worded pages, it ushered in what’s become known as the “replication crisis” — a growing recognition that many published scientific findings cannot be reproduced by independent scientists and may, therefore, be untrue.

There’s a rueful smile when I mention this, a slight rolling of the eyes. But he’s soon telling me the back story.

It begins in 2002 when Begley left Australia for California to become Vice President of Hematology and Oncology at biopharmaceutical multinational Amgen. He’d made his name as a researcher at Melbourne’s Walter and Eliza Hall Institute where, amongst other achievements, his team discovered the human protein G-CSF. Found in bone marrow, it can be used to hasten recovery after chemotherapy.

The move from academia to industry was, Begley tells me, a natural progression. “I saw it as an opportunity to have a real impact for patients,” he says.

Begley’s new team was soon put to work, scouring the scientific journals for interesting 'preclinical' research — studies conducted in test tubes and Petri dishes or with laboratory mice that might ultimately lead to new treatments. For each such finding, the first step was always replication.

The Amgen scientists would carefully follow the methods described in the journal article to see if they could achieve the same result. If they failed, they’d contact the original researchers. “Sometimes,” Begley explains, “there are tricks to an experiment that are not fully described in the papers.”

If they still couldn’t replicate the finding, they would arrange to visit the original researchers and observe them attempting to replicate their own experiments in their own labs. If that also failed, they would quietly abandon the project and move on to the next potential lead.

The turning point, Begley recalls, came one year at the American Association for Cancer Research conference. Lee Ellis, a prominent scientist and patient advocate, accused Amgen and other industry players of failing patients by making only incremental advances in treatment.

"Lee was right," Begley says. “But we weren’t doing it deliberately. We weren’t setting out to develop drugs that were only 2% better than existing drugs.”

Stung by the criticism, Begley revisited the Amgen records, selecting 53 ‘landmark’ projects. “These were major, seminal findings,” he says. “The most famous labs, the most famous researchers.” The records showed that 47 of those projects had been discontinued because the original finding could not be replicated. “I went back to Lee Ellis,” Begley tells me, “and I said, ‘This is part of the problem.’”

The big red flag

Co-written with Ellis, Begley’s Nature article presented a devastating critique of preclinical cancer research. Entire fields of research, they argued, had arisen from findings that could not themselves be replicated. Patients had undergone clinical trials of treatments that had no prospect of success.

The response from the research community was immediate. But it wasn’t what Begley or Ellis had hoped for.

“At conferences people would get in my face,” Begley tells me. “They would say Amgen scientists were hopeless, stupid, incompetent and so on without recognizing that mostly it was the original investigators themselves who could not reproduce their own work. It was very unpleasant.”

Then there was the hate mail. “The emails were so offensive that I just deleted them,” he says. “I never showed my wife.”

I ask Begley what happened to those 47 studies that didn’t replicate. Has the scientific community been notified? He explains that, in order to gain access to the labs, Amgen had to sign confidentiality agreements. Only the original scientists could disclose the replication failures and, to his knowledge, none have done so.

Instead, Begley’s workaround has been to describe, without identifying individual studies, the characteristics of those that did not replicate. Often, they lacked important checks and control measures to rule out alternative interpretations. They’d report only a single study, with no follow-up to confirm the results. And they’d provide only select highlights of the data.

But the biggest red flag for Begley was a lack of blinding — the scientists performing the experiments knew, for example, which condition the different samples belonged to, allowing their observations to be influenced by what they hoped or expected to find. “If I was king of the world,” he says, “the one thing I’d insist is that all experiments are performed blinded.”

Begley’s crusade for better science has focused on medical research. But the problems he identified have become increasingly apparent in other areas of science. As he and Ellis were writing their Nature article, psychologists were awakening to ‘questionable research practices’ in their own field and beginning a number of large-scale replication attempts.

One project, published in 2015, repeated 100 studies from top psychology journals. Where 97 of the original studies reported a statistically significant effect, just 35 of the replications were ‘successful’. Like Begley, the psychology replicators have faced accusations of incompetence and malicious intent.

But as more replications have failed, the issue has become difficult to ignore. The so-called ‘hard’ sciences have not been immune either. In a survey of 1,500 scientists, 77% of biologists and 87% of chemists said that they had tried and failed to replicate published research findings. More than half of respondents agreed that science is facing a “significant crisis”.

'Sexy' results vs rigorous methods

By the time Nature published his article, Begley had already moved on from Amgen. For five years, he continued working for biotech companies in the US. But in 2017 he was enticed back to Australia to lead BioCurate, a joint initiative of Melbourne and Monash Universities.

The company’s aim, Begley explains, is to improve success rates in translating basic research into commercially viable products that reach patients and consumers. He describes his role, however, as “managing failure”.

The harsh reality is that most projects won’t reach market. And while there are many things that can be done to avoid failure, it’s also an inevitable part of the scientific process. “Sometimes, we just don’t understand biology deeply enough,” he says. “That’s a perfectly acceptable reason for failure.”

The problem, Begley tells me, is that we load scientific results with emotional terms.

“If the result is the one we like, we call it a positive study. If it’s one we don’t like, we call it a negative study. But the only truly failed study is one that’s uninterpretable. What we’re trying to do in this business is improve human health. A negative study, if it’s well done, might tell us that that whole area of research should stop. It’s a really valuable contribution.”

Begley leans forward, adjusts his glasses. “I sometimes think that people deliberately try and misunderstand what I’m saying. I’m not saying that experiments shouldn’t fail. But an experiment that actually failed shouldn’t be presented as a success.” This, he argues, is the crux of the problem — academia’s focus on “sexy results” over rigorous methods.

“The incentives are to get a paper published in whatever journal so that I become famous, so that I get my next grant, so that I get elected to the academy. That’s the system we have in academia. That’s the cycle that is perpetuated. We’re talking about human behaviour. You provide people with an incentive and they’ll try and achieve what is required to get the reward. It’s like rats in a cage.”

At this point, I’m nodding in agreement. “It’s pretty depressing,” I say.

Begley counters immediately. “I’m not depressed,” he replies. “I’m actually incredibly optimistic about science. The advances that we’ve made in the last decades were unimaginable when I was in medical school. We’ve got treatments for rheumatoid arthritis, ankylosing spondylitis [a form of arthritis affecting the spine], inflammatory bowel disease, cancer, hepatitis C.” He shakes his head incredulously.

“The world now has the opportunity to eradicate hepatitis C. Unimaginable! So one can’t be negative about science. And, if you go more broadly, the decrease in infant mortality, the increase in longevity both in western countries and in third world countries. Phenomenal! So you can’t be depressed.”

“What you can say,” he continues, “is that we could do better. We’ve made enormous advances but we’ve wasted a lot of money. In the US the estimate is that it’s costing $28 billion a year for sloppy science. So there is a lot of lost opportunity. But the advances we’ve made are incredible. You’ve got to hold that in tension. That’s why I’ve never described this as a replication crisis. I’ve never used that word. It’s an opportunity. Because if we can take the funding from the lazy scientists and give it to the really good scientists, we’ll do even better. It really is an innovation opportunity.”

Around us, the venue is filling up. Delegates are queueing for coffee. Others are pinning their research posters to the felt-covered boards that line the walls. They have titles such as “Reducing waste by choosing the right systematic review type” and “Breaking down barriers to using research to guide clinical practice”.

The theme this year is “Ensuring Value in Research” and the program includes sessions on improving data sharing and access to scientific publications, working with stakeholders to set research agendas, and communicating research to patients and health professionals. Later this morning there’s a panel on “Reimagining Failure”.

And though he disapproves of the term, Begley is one of four speakers addressing “The Replication Crisis”. Signs, I suggest, that the problems he identified are finally being taken seriously.

“It’s so gratifying,” Begley says, “to see how the mood has changed, how much the world has moved on in the last six years. It’s wonderful.”

This article features in the Best Australian Science Writing 2019 anthology.

This article was originally published by Medium.