Cancer screening, diagnosis and care stand to benefit greatly from advances in artificial intelligence (AI). Researchers, developers and deployers must ensure that AI applications avoid known racial and gender biases so that these tools advance health care for all.
Ethics declarations
Competing interests
The authors declare no competing interests.
Cite this article
Ghassemi, M. & Gusev, A. Limiting bias in AI models for improved and equitable cancer care. Nat. Rev. Cancer (2024). https://doi.org/10.1038/s41568-024-00739-x