A correlation between error rate and success undermines the promise of stem-cell trials.
When it comes to stem-cell therapies, the stakes are high — but not as high as the hopes of people who are severely ill. Over the past few years, dozens of small, early-phase clinical trials have tested the value of adult stem cells in treating debilitating or life-threatening heart disease. Results have been mixed, but most peer-reviewed academic reports have hinted that patients may be helped. This has, understandably, encouraged clinicians to move potential therapies into large and expensive phase III trials to establish whether the treatments can fulfil their promise.
Now comes a shocking reality check, revealed this week in the British Medical Journal (BMJ). As we report on page 15, a London-based team has scrutinized reports of all the randomized trials of bone-marrow stem-cell treatments for heart disease they could find.
The authors searched for discrepancies that might undermine the results and found plenty — errors such as numbers not adding up, or individual patients reported variously as male and female, dead and alive. In fact, the researchers found a linear relationship between the number of discrepancies and the claimed effect size. The small number of trials that they identified as unflawed showed an effect size of zero. In other words, the scientists declare this stem-cell emperor to have no clothes.
The multitude of discrepancies may not necessarily invalidate the conclusions of an individual trial — the authors point out that all too often the clinical data are not available, leaving them unable to check whether the discrepancies are real errors or just the result of sloppy reporting.
But, at the very least, the BMJ report should raise the question of whether the data are really strong enough to support the big step of moving to a phase III trial, particularly given that in the case of adult stem cells the results of animal studies have been ambiguous. Initially, researchers suggested that these cells became specialized to the target organ and replaced damaged tissue, but this idea has since been rejected. Many clinicians now think that the cells instead act to heal the surrounding tissue, releasing molecules that cause inflammation and the growth of oxygen-bearing small blood vessels, processes important to repair.
The findings of the BMJ study raise another worrying question: why did the clinical journals concerned fail to notice the discrepancies, given that many of the errors seem, in hindsight at least, to be startlingly visible? If a table claims to describe n clinical events, for example, but in its columns refers to n + 2 events, is that really so hard to catch?
This, in turn, raises more queries about the process. Who should take responsibility for fact-checking a paper for internal consistency? Is it the notoriously busy clinical experts who act as referees? Or the editors, many of whom also have a full schedule of clinical duties? Few of the journals that published the papers scrutinized in this case have professional editors or significant numbers of in-house editing staff. Pressure to review and publish quickly is high. The two sides of the equation don’t balance, and the problems identified in the study suggest something of a crisis.
To address this, the publishers of clinical journals must do more to ensure that someone takes responsibility for the fact-checking. That could involve asking authors to guarantee that they have checked figures, tables, text and abstracts for internal consistency. Publishers could require authors to make available suitably anonymized data on each patient as metadata to the study, so that readers can trace the source of any discrepancy that might creep through. Or the publishers could reach into their pockets and provide more in-house resources to perform the necessary checking. What is not acceptable is for the situation to continue as it is, with responsibilities undefined and inexact publishing distorting clinical messages.
The problem seems to run deeper than the heart and stem-cell studies checked in this case. For years, analyses have highlighted a bias towards publishing clinical trials that show a positive outcome. (A similar trend has also been found with scientific results.)
Translational medicine is one of the buzz-phrases of the twenty-first century. In a way, it should be a surprise that it has taken so long for the idea to catch on. What use is medicine that is stuck in the scientific laboratory? But as the curious case of adult stem cells demonstrates, the right checks and balances are not brakes on progress, but an essential foundation for that progress. Fools rush in. So do those who have not done their homework.
False positives. Nature 509, 7–8 (2014). https://doi.org/10.1038/509007b