Self-help and business books are replete with advice for learning from failures. The biomedical community must do just that if it is to ease the burden of intractable conditions such as Alzheimer’s disease.
It can take 20 years or more to get a drug to market, from testing compounds in animals to running late-stage (phase III) clinical trials in thousands of subjects. More than 80% of drugs that are tested in humans fail to demonstrate safety and efficacy1 (see ‘High failure rate’); the rate for Alzheimer’s treatments is estimated at more than 99%2 (see ‘Alzheimer’s drug attrition’).
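Attrition compounds across development phases, which is why overall failure rates are so high. The sketch below multiplies per-phase success probabilities to get a cumulative rate; the individual phase figures are illustrative assumptions chosen only to show the arithmetic, not numbers from this article.

```python
# Hypothetical per-phase success probabilities for a drug candidate
# entering human testing. These values are illustrative assumptions,
# not figures reported in the article.
phase_success = {
    "phase I": 0.6,
    "phase II": 0.35,
    "phase III": 0.6,
    "regulatory approval": 0.85,
}

overall = 1.0
for phase, p in phase_success.items():
    overall *= p
    print(f"probability of surviving through {phase}: {overall:.3f}")

# Even with moderately optimistic per-phase rates, the cumulative
# success probability falls well below 20%.
print(f"overall failure rate: {1 - overall:.1%}")
```

With these assumed rates, only about one candidate in ten that enters human testing reaches the market, which is consistent in spirit with the greater-than-80% failure rate cited above.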
Yet the data behind these failures are generally not seen by regulators, or considered deeply by anyone outside the company sponsoring the trial. Without this information, learning is unlikely.
In 2015, the European Medicines Agency (EMA), where we work, invited drug companies to discuss confidential information about all their Alzheimer’s disease programmes. An important result of this data-sharing initiative was new recommendations for designing clinical trials and assessing patients’ outcomes, as consolidated in EMA’s revised guideline for clinical investigations of Alzheimer’s disease treatments3. We believe that what the companies learnt (indirectly) from one another will lead to faster, more-informative clinical trials. In our view, if this information had been put together sooner, decision-making after early-stage trials could have been improved.
Practices that enable earlier, more-thorough analyses of failed drug-development programmes should be extended to treatments for other challenging diseases, and should be part of regulators’ responsibilities. This will ensure that clinical research evaluates treatments faster and with more certainty.
Initiatives for private companies to share biomedical data and ideas have expanded in the past decade. Some, such as the Biomarkers Consortium and the Structural Genomics Consortium, bring together many companies and academics to design experiments for the benefit of the community, such as identifying disease markers or characterizing tool compounds to understand how target proteins work. Others ask companies and academic groups to pool data in a common repository. For instance, the Project Data Sphere Initiative is a platform to share de-identified data from people who were enrolled in the control, placebo or even experimental arms of more than 180 cancer trials.
More data are also being put into the public domain from individual trials. The International Committee of Medical Journal Editors has advocated for the release of large quantities of data from trials that have had results submitted for publication4. For its part, EMA has started publishing all clinical-study reports for medicines after regulatory review is completed, together with its assessment of the preclinical and clinical evidence5. Although these data are useful, they do not encompass information for drug candidates that fail to make it to regulatory submission.
At best, some research leading to negative results in clinical trials will appear on clinical-trial registries or, perhaps, in publications, but without the context of how these compounds performed in preclinical or early-stage programmes. Moreover, the time lag between the generation of data and any eventual accessibility is usually very long, hampering efforts to learn.
The whole story
Information that is not shared is arguably the most important: data that failed to meet drug developers’ hopes are most likely to help progress. Large clinical trials are multimillion-dollar experiments to validate a hypothesis that an experimental drug will be effective and safe. Results that go against these expectations must be made available to refine hypotheses and to elaborate alternative ones.
Data from negative research can reveal whether a trial adequately tested the intended hypothesis. For example, in cardiovascular disease, three clinical trials of inhibitors of cholesteryl ester transfer protein (CETP) showed no effect and led to questions over whether CETP was an appropriate target. When a fourth trial of a CETP inhibitor found that it modestly reduced the risk of a coronary event, such as a heart attack or unstable angina, the result led to speculation that the target was indeed promising. The problem arose because of the way in which molecules were tested, and because it was difficult to find molecules that inhibited CETP enough to make a measurable difference. (The company running the fourth trial elected not to pursue that product further.) We have this insight because the CETP cardiovascular trials were all large and disclosed6.
Going back to the bench to elaborate a new hypothesis for treating a disease is likely to delay drug discovery by a decade or more, so it is crucial to assess whether there are other ways forward. We in the scientific community wanted to know what could be learnt from earlier, undisclosed work on Alzheimer’s.
Alzheimer’s disease is perhaps the therapeutic area best positioned to encourage this level of cooperation. In the past 10 years, more than 30 drugs have entered phase III clinical trials for Alzheimer’s disease. So far, none of these experimental treatments have shown therapeutic benefit or even met trial objectives, such as halting or reversing the decline in a person’s cognition or ability to perform everyday activities. There is evidence that, in some cases, these trials were not preceded by adequate exploratory research. This led to a high rate of failures, increased the risk of researchers missing therapeutic potential even if it existed (for example, by selecting a wrong dose or inappropriate target population), and created a near-certainty of obtaining results that are difficult to interpret.
Many development programmes for Alzheimer’s treatments have announced disappointing results: starting in 2012, large, highly anticipated trials sponsored by Merck, Pfizer, Johnson & Johnson, Eli Lilly and Roche all failed to show therapeutic benefits.
Health ministers from countries in the Group of 8 (G8) published a policy paper on dementia in 2013 that was intended to stimulate action from all players. In 2015, the World Health Organization included “increasing collective efforts in dementia research and fostering collaboration” in its global call for action on dementia. These initiatives, together with the previous failures, meant that drug companies faced significant public pressure to demonstrate that they had taken action towards solutions. Working voluntarily with regulators offered a good way to do so.
Following the G8 call to action, we at EMA invited drug companies to present their research to us confidentially and individually — detailing what drug targets they investigated, what populations they thought their interventions might treat, and how they intended to test this in their trial designs. Seven companies agreed to take part. Their presentations to us covered data on 14 discontinued or ongoing trials, including efficacy trials that collectively covered more than 12,000 participants.
We did not ask them to give us their files so that we could do our own analyses. Instead, we invited them to walk us through their logic and the evidence that led, in most cases, to disappointing results in large clinical trials. The point was not to combine data to perform more powerful statistical analyses, but to review the entirety of data that each company provided and then to consider common issues.
We looked at the landscape of research and development (R&D) plans with the knowledge that pivotal clinical trials had been negative. We considered the hypotheses that the companies put forward originally, how they set up studies to test their hypotheses, how they developed in vitro assays and animal models, and how they interpreted signals in early clinical work. We considered what we could learn from one company’s studies in light of another’s. We tried to understand when ideas about potential therapies went wrong.
The information shared with our teams was more up-to-date, broader and more in-depth than what is commonly published in the literature, included in trial registries or given in mandated public summaries. Details on data generated before phase III trials — including preclinical and early clinical research — were crucial to frame the failure’s significance in terms of which hypotheses it falsified. Such information also helped to avoid unwarranted negative conclusions about uninformative generic terms such as ‘β-amyloid hypothesis’ (the theory that accumulation of the peptide amyloid-β in the brain is what causes the disease), which could encompass multiple molecular targets and strategies.
These insights improved our understanding of the disease, how it progresses and, importantly, how modulating the supposed mechanism of disease might have a clinically detectable effect (unpublished results). For example, how strongly must a potential drug molecule bind to a target protein to alter physiology? What fraction of the molecule must penetrate the blood–brain barrier to have an effect? These parameters become clearer with data from multiple, diverse programmes.
Although we are legally limited as to which data we can present publicly, our work helped us to revise the EMA guideline3 that we think will aid the design of more-informative Alzheimer’s trials and better R&D programmes. For example, people enrolled in trials should be assessed both for their symptoms and for evidence of amyloid-β pathology. This makes the progression of disease more predictable and enhances the power of the trial. This exercise also allowed us to develop recommendations for how to consider ‘intercurrent events’, such as a stroke or change in medication regimen, that some older participants in a long trial will inevitably experience, and which complicate the interpretation of results.
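The statistical logic behind enrolling participants with confirmed amyloid-β pathology can be sketched with a back-of-the-envelope power calculation. Every number below (effect sizes, standard deviations, sample size) is a hypothetical assumption for illustration; only the direction of the result reflects the reasoning described above — a more homogeneous, biomarker-confirmed population yields a larger, less diluted standardized effect and hence more power.

```python
from statistics import NormalDist

norm = NormalDist()

def power_two_sample(delta, sigma, n_per_arm, alpha=0.05):
    """Approximate power of a two-arm trial comparing mean outcomes,
    using a two-sided z-test under a normal approximation."""
    z_alpha = norm.inv_cdf(1 - alpha / 2)
    z_effect = delta / (sigma * (2 / n_per_arm) ** 0.5)
    return norm.cdf(z_effect - z_alpha)

# Hypothetical scenario: a 1.5-point true treatment effect on a
# cognitive scale among people with amyloid-beta pathology. In an
# unselected population, amyloid-negative participants dilute the
# average effect (here, to 1.0 points) and add outcome variability.
n = 400  # participants per arm (assumed)
print(f"unselected population:        power = {power_two_sample(1.0, 7.0, n):.2f}")
print(f"amyloid-confirmed population: power = {power_two_sample(1.5, 6.0, n):.2f}")
```

Under these assumptions, the same trial size goes from roughly a coin-flip chance of detecting the effect to better than 90% power, which is the sense in which biomarker confirmation "enhances the power of the trial".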
We also uncovered problems with outcome measurements. Some of the previously used and best-known instruments have proved inadequate to study Alzheimer’s disease in its early stages7,8. To overcome this issue, most trials combine single items from various outcome measures, gauging specific aspects of cognitive performance and function in daily activities. Although this strategy has merit, the interpretability of results should take precedence over a purely statistical approach. In addition, different practices across trials limit efforts to compare results. We think that more-standardized approaches to measuring outcomes in our revised guideline will lead to more-informative trials.
Data feed good science
How were we able to do this? Because regulators routinely work with commercially confidential information, companies can be willing to share data with regulators that they would be reluctant to put in the public domain.
For this project, EMA worked with the regulatory agencies of Canada, Japan and the United States to align requirements as much as possible9. Pharmaceutical companies often claim that different regulatory requirements impede global development. Although this is still a challenge, our focused multilateral effort identified areas for convergence — such as selecting the study population and assessing patient outcomes — that can aid clinical investigations. Such convergence is reflected in the US Food and Drug Administration’s revised industry guidance on Alzheimer’s disease, which was published shortly before EMA’s latest guideline3 and contains similar recommendations.
It is too early to assess whether Alzheimer’s drug-development programmes led by the new EMA guideline will yield positive results. Nonetheless, we think our efforts demonstrate that the gains for overall progress in pharmaceutical research should outweigh any individual company’s hesitation to disclose data. Furthermore, they show that regulators can act as enablers of more-effective R&D. To speed up progress, companies must be more forthcoming with their data and thinking, and regulators must find ways to help them with this. The ultimate goal is to allow broader access to data from drug-development programmes and to enable faster learning by the entire research community.
We hope that this project leads to similar efforts in other diseases that are difficult to treat. We owe it to the public and to patients to ensure that R&D efforts continue to move towards greater transparency.
Nature 563, 317-319 (2018)