The use of artificial intelligence (AI) in medicine, and in urology specifically, has increased over the past few years, enabling optimization of patient workflow, improved diagnostic accuracy and enhanced computer analysis of radiological and pathological images. However, before AI is adopted more widely, the ethical issues it raises need to be evaluated, both to improve understanding of the technology and to protect patients and providers. Ethical issues that require consideration when applying AI in clinical practice include patient safety, cybersecurity, transparency and interpretability of the data, inclusivity and equity, the fostering of responsibility and accountability, and the preservation of providers' decision-making and autonomy. Ethical principles for the application of AI to health care, and to urology in particular, are proposed to help urologists, patients and regulators use AI technologies appropriately and to guide policy-making.
I.S.G. is an unpaid advisor for Steba and has equity in OneLine Health. A.J.H. is a paid advisor for Intuitive Surgical. A.C. and G.E.C. declare no competing interests.
Peer review information
Nature Reviews Urology thanks Alfredo Vellido, Alejandro Granados Martinez and Juan Gomez Rivas for their contribution to the peer review of this work.
Cite this article
Cacciamani, G.E., Chen, A., Gill, I.S. et al. Artificial intelligence and urology: ethical considerations for urologists and patients. Nat Rev Urol (2023). https://doi.org/10.1038/s41585-023-00796-1