Comment

The paradox of precision medicine

Nature Reviews Clinical Oncology volume 15, pages 341–342 (2018)

According to the paradigm of precision medicine, the administration of agents targeting the molecular alteration detected in a particular patient’s tumour reduces uncertainty in the clinical management of that patient. We describe how approaches to precision medicine can lead, paradoxically, to increased levels of uncertainty. We offer recommendations for how physicians can better navigate new uncertainties in precision medicine.

A goal of cancer precision medicine is to characterize the responses of patients to treatments targeting the particular alterations that define patient strata, thus reducing the uncertainty that surrounds the clinical management of patients with cancer. For some malignancies, this strategy has borne fruit. Less widely appreciated are the ways in which precision medicine is amplifying uncertainty in clinical decision-making.

Pressures on sample size

In precision medicine, large, heterogeneous patient populations are divided into smaller, more-homogeneous strata, with the aim of reducing variance in the response to treatment. Improvements in gene-sequencing techniques have increased the sensitivity with which target genes can be detected in tumours, and these advances encourage the division of patient populations into ever finer strata. Yet, such stratification makes the conduct of large, adequately powered clinical trials that provide precise estimates of treatment effects virtually impossible. Small sample sizes also create pressure to use surrogate primary end points, such as tumour response, which provide greater statistical power because the relevant events accumulate soon after a trial starts, rather than primary end points that require time for events to occur or that reflect long-term benefits, such as overall survival and its quality. If the trend towards greater patient stratification persists, precision medicine is likely to depend increasingly on low-grade evidence, such as that obtained from studies with small cohorts using surrogate end points, case reports, or preclinical experiments. For example, in a study published in 2016 reporting the results of molecular profiling of paediatric solid tumours from 100 patients1, 31% of analyses led to what the authors described as clinically actionable recommendations. Yet, only three of these recommendations were made on the basis of evidence from clinical trials; for approximately 90% of the patients with actionable findings, recommendations were made using preclinical evidence or “consensus opinion” (REF. 1).
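
The pressure that stratification exerts on statistical power can be made concrete with a back-of-the-envelope calculation. The sketch below is illustrative only: the response rates, significance level, power and biomarker prevalence are hypothetical values chosen to show how a rare molecular stratum inflates the number of patients who must be screened to complete even a modest randomized comparison; none of the figures refer to an actual trial.

    # Illustrative sketch (hypothetical numbers): approximate per-arm sample size
    # for comparing two response rates with the standard normal-approximation
    # formula, followed by the number of patients who would need to be screened
    # when only a small fraction of them belong to the eligible molecular stratum.
    from scipy.stats import norm

    def per_arm_n(p_control, p_experimental, alpha=0.05, power=0.80):
        """Approximate per-arm sample size for a two-proportion comparison."""
        z_alpha = norm.ppf(1 - alpha / 2)
        z_beta = norm.ppf(power)
        p_bar = (p_control + p_experimental) / 2
        numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                     + z_beta * (p_control * (1 - p_control)
                                 + p_experimental * (1 - p_experimental)) ** 0.5) ** 2
        return numerator / (p_control - p_experimental) ** 2

    n = per_arm_n(0.20, 0.40)   # detect an increase in response rate from 20% to 40%
    prevalence = 0.02           # hypothetical frequency of the targeted alteration
    print(f"patients per arm:    {n:.0f}")                   # roughly 80 per arm
    print(f"patients randomized: {2 * n:.0f}")               # roughly 160 in total
    print(f"patients screened:   {2 * n / prevalence:.0f}")  # more than 8,000

Under these assumptions, a randomized trial of roughly 160 patients would require screening more than 8,000; for strata defined by rarer alterations, or for end points such as overall survival that require prolonged follow-up, the numbers quickly become prohibitive.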

Incentives only for selected trials

In precision medicine, there is limited incentive to perform large, adequately controlled trials to confirm the efficacy of antitumour agents after a drug is approved. Instead, pharmaceutical companies have strong incentives to maximize their ability to detect antitumour efficacy by enriching the patient populations involved in preapproval trials, because such a measure can reduce the time or trial enrolment needed to obtain FDA approval. However, these incentives diminish after the therapeutic agent is approved: pharmaceutical companies have little reason to fund trials that might show that a new therapeutic agent is effective only for a very limited subset of patients. Also, once they are approved, agents can be used off-label without having to demonstrate efficacy in large clinical trials.

A factor limiting the generation of high-quality evidence in precision medicine is that the diagnostic methods used to define patient strata generally do not return a large profit; nor are they usually required to undergo evaluation in large randomized trials before receiving marketing approval2. The same principle is true for proprietary algorithms that are used to recommend treatment strategies. Consequently, limited incentives are provided to conduct research aimed at the rigorous evaluation of diagnostic methods.

Pharmaceutical companies have strong incentives to fund clinical trials designed to explore the efficacy of approved agents in newly defined patient strata, but these incentives do not extend to undertaking the large, long-duration trials necessary to demonstrate that such agents are effective in different diseases. Off-label treatments are often recommended in clinical practice guidelines on the basis of low-level evidence3. Unless new sources of funding become available for appropriately designed trials, or new regulatory or reimbursement pressures emerge, the imbalance between the supply of low-level, exploratory evidence and the uptake of such findings into trials producing high-level evidence will increase.

The ‘ragged edge’ of precision medicine

Precision medicine seeks to dichotomize patient populations into those who might benefit from a treatment and those for whom benefit is improbable. Defining cut-off points and criteria for such dichotomies is difficult, leading to uncertainties that one commentator has described as “the ragged edge of personalized medicine” (REF. 4). For example, trastuzumab was approved in 1998 for the treatment of patients with HER2-positive breast cancer, yet determining that women with breast cancers with low levels of HER2 amplification do not benefit from trastuzumab took an additional 20 years5.

The availability of multiple techniques to test a patient’s tumour is a challenge. For example, the main diagnostic tool for matching patients with non-small-cell lung cancer to treatment with crizotinib is fluorescence in situ hybridization (FISH) to detect chromosomal rearrangements involving ALK. However, case reports describe patients with a response to crizotinib who were initially reported not to harbour ALK rearrangements according to FISH, but were later found to harbour such alterations when analysed using next-generation sequencing or immunohistochemistry6. Therefore, applying evidence from a trial conducted using one diagnostic technique to clinical decision-making in a medical centre in which a different technique is used routinely introduces new uncertainties.

Treatment algorithms

Treatment recommendations are often generated using algorithms based on individual somatic (and occasionally germ-line) genotypic alterations. However, tumours often harbour multiple driver mutations (owing to intratumoural and intertumoural heterogeneity), and physicians therefore need to combine different streams of evidence to prioritize their choice of treatment. Algorithms can be tested in clinical trials, but the discovery of new mutations and the emergence of new therapeutic agents create the possibility of such algorithms rapidly becoming outdated. In the SHIVA trial, patients were randomly assigned to receive either treatment matched to the molecular profile of their tumour, independent of tumour type, or treatment selected by their physicians. Subsequent findings raised doubts about the reliability of the matching algorithm, which might have contributed to the negative findings of this trial7. The requirement for treatment algorithms places demands on the decision-making capabilities of cancer centres and individual physicians, and adds to the complexity and uncertainty of precision medicine.

Measures to reduce uncertainty

The implementation of precision medicine relies on a fragmented landscape of evidence and a proliferation of decision points across the disease course, as physicians select among different diagnostic methods and treatment options. How, then, can oncologists best manage the expanding uncertainties of precision medicine? Improving the quality of the evidence used to make decisions about treatment can reduce uncertainties. Biomarker analysis and preclinical cancer research are rife with irregularities in experimental design and reporting of results, sometimes providing more ‘noise’ than signal. One way to reduce this noise would be to establish standards of experimental design and reporting that certify the reproducibility of the evidence generated by each study. Reports used to generate treatment algorithms or guidelines should be based on evidence from studies in which the hypotheses were prespecified and registered in a public database, the analyses were conducted rigorously, and the results have been shown to be reproducible.

A second approach to navigating the uncertainties described above is to improve the decision-making processes of physicians. Human minds have a limited capacity to process information and often rely on simple rules, known as heuristics, to reduce the cognitive burden associated with decision-making; heuristics, however, can lead to biased decisions. A problem relevant to precision medicine is ‘base-rate neglect’: the tendency to ignore a given event’s rate of occurrence within a relevant sample or population when making predictions8. In precision medicine, base-rate neglect could manifest as a tendency to render judgements on the basis of the molecular characteristics of a patient’s tumour rather than judgements that take the broader clinical context into account (for example, that a patient with advanced-stage cancer has exhausted the treatment options supported by level one evidence). Precision medicine might need to draw on specific tools for improving decision-making, such as decision aids, debiasing techniques, or feedback training9.
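
Base-rate neglect can be illustrated with a simple application of Bayes’ theorem. The numbers in the sketch below are hypothetical, chosen only to show that even a molecular alteration strongly associated with response predicts benefit poorly when the underlying probability of response in a given clinical setting is low.

    # Illustrative sketch (hypothetical numbers): the predictive value of a
    # molecular alteration depends on the base rate of response, not only on how
    # strongly the alteration is associated with response.
    sensitivity = 0.90   # P(alteration detected | patient responds)   -- assumed
    specificity = 0.80   # P(no alteration | patient does not respond) -- assumed
    base_rate = 0.10     # P(response) in this clinical setting        -- assumed

    # Bayes' theorem: P(response | alteration detected)
    p_alteration = sensitivity * base_rate + (1 - specificity) * (1 - base_rate)
    ppv = sensitivity * base_rate / p_alteration
    print(f"P(response | alteration detected) = {ppv:.2f}")  # prints 0.33

In this hypothetical scenario, two of every three patients whose tumours carry the alteration would still not respond, which is precisely the kind of contextual information that base-rate neglect leads physicians to overlook.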

In summary, we encourage champions of precision medicine to adopt a realistic evaluation of its impact on patients and health-care systems. Many cancer centres advertise themselves as providing patients with a ‘personalized’ approach to cancer care. Such approaches may be appropriate as long as physicians are prepared to explain that this personalization is often accompanied by uncertainty regarding the level and interpretation of the evidence used to guide decisions.

References

  1. Harris, M. H. et al. Multicenter feasibility study of tumor molecular profiling to inform therapeutic decisions in advanced pediatric solid tumors: the individualized cancer therapy (iCat) study. JAMA Oncol. 2, 608–615 (2016).

  2. Hayes, D. F. et al. Breaking a vicious cycle. Sci. Transl. Med. 5, 196cm6 (2013).

  3. Poonacha, T. K. & Go, R. S. Level of scientific evidence underlying recommendations arising from the National Comprehensive Cancer Network clinical practice guidelines. J. Clin. Oncol. 29, 186–191 (2011).

  4. Fleck, L. M. Personalized medicine’s ragged edge. Hastings Cent. Rep. 40, 16–18 (2010).

  5. Fehrenbacher, L. et al. in 2017 San Antonio Breast Cancer Symposium GS1-02 (San Antonio, TX, USA, 2017).

  6. Camidge, D. R. et al. Activity and safety of crizotinib in patients with ALK-positive non-small-cell lung cancer: updated results from a phase 1 study. Lancet Oncol. 13, 1011–1019 (2012).

  7. Le Tourneau, C., Kamal, M. & Bièche, I. The SHIVA01 trial: what have we learned? Pharmacogenomics 18, 831–834 (2017).

  8. Bar-Hillel, M. & Fischhoff, B. When do base rates affect predictions? J. Pers. Soc. Psychol. 41, 671–680 (1981).

  9. Kahneman, D. Thinking, Fast and Slow (Farrar, Straus and Giroux, New York, 2011).


Acknowledgements

J.K. is funded by Genome Canada/Genome Quebec (PACEOMICS).

Author information

Affiliations

  1. Biomedical Ethics Unit, McGill University, Montreal, Quebec, Canada

    • Jonathan Kimmelman
  2. Department of Social Studies of Medicine, McGill University, Montreal, Quebec, Canada

    • Jonathan Kimmelman
  3. Division of Medical Oncology, Princess Margaret Cancer Centre, Toronto, Ontario, Canada

    • Ian Tannock

Competing interests

The authors declare no competing interests.

Corresponding author

Correspondence to Jonathan Kimmelman.

About this article

DOI

https://doi.org/10.1038/s41571-018-0016-0
