Abstract
Erectile dysfunction (ED) is a disorder that can cause distress and shame for the men who suffer from it. Men with ED often turn to online support and chat groups to ask intimate questions about their health. ChatGPT is an artificial intelligence (AI)-based chatbot trained to engage in conversation with human input. We sought to assess the accuracy, readability, and reproducibility of ChatGPT’s responses to frequently asked questions regarding the diagnosis, management, and care of patients with ED. Questions pertaining to ED were derived from clinic encounters with patients as well as online chat forums and were entered into the free ChatGPT version 3.5 during August 2023. Questions were asked on two separate days from unique accounts and computers to prevent the software from memorizing responses linked to a specific user. A total of 35 questions were asked. Outcomes measured were accuracy, graded by board-certified urologists; readability, scored with the Gunning Fog Index; and reproducibility, assessed by comparing responses between days. For epidemiology of disease, 100% of responses across both days were graded as “comprehensive” or “correct but inadequate,” with fair reproducibility and a median readability of 15.9 (IQR 2.5). For treatment and prevention, 78.9% of responses were graded as “comprehensive” or “correct but inadequate,” with poor reproducibility and a median readability of 14.5 (IQR 4.0). For risks of treatment and for counseling, 100% of questions were graded as “comprehensive” or “correct but inadequate,” with good reproducibility in both domains and median readability of 13.9 (IQR 1.1) for risks of treatment and 13.8 (IQR 0.5) for counseling.
ChatGPT provides accurate answers to common patient questions pertaining to ED, although its understanding of treatment options is incomplete and its responses are written at a reading level too advanced for the average patient.
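For context, the Gunning Fog Index used to score readability estimates the years of formal education a reader needs to understand a text on first reading: 0.4 × (average sentence length + percentage of "complex" words, i.e., words of three or more syllables). A minimal Python sketch of this formula, using a naive vowel-group syllable counter, is shown below; the study itself likely used a validated readability tool, so this is illustrative only.

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: count runs of vowels as syllables.
    # Real syllable counting requires a pronunciation dictionary.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def gunning_fog(text: str) -> float:
    # Gunning Fog Index:
    #   0.4 * (words/sentences + 100 * complex_words/words)
    # where complex words have >= 3 syllables.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    complex_words = [w for w in words if count_syllables(w) >= 3]
    return 0.4 * (len(words) / len(sentences)
                  + 100 * len(complex_words) / len(words))
```

A score of roughly 14-16, as reported for ChatGPT's responses, corresponds to college-level reading, well above the sixth-to-eighth-grade level generally recommended for patient-facing material.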
Data availability
All data generated or analyzed during this study are included in this published article and its Supplementary Files, and the responses can be regenerated using the freely available ChatGPT software.
Acknowledgements
The authors have no acknowledgements.
Author information
Contributions
SR: conceptualization, data curation, formal analysis, original draft writing, review and editing. ARS: conceptualization, data curation, formal analysis, review and editing. YB: review and editing. MS: review and editing. RJV: conceptualization, review and editing, supervision.
Ethics declarations
Competing interests
The authors declare no competing interests.
Ethical approval and consent to participate
Not applicable.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Razdan, S., Siegal, A.R., Brewer, Y. et al. Assessing ChatGPT’s ability to answer questions pertaining to erectile dysfunction: can our patients trust it? Int J Impot Res (2023). https://doi.org/10.1038/s41443-023-00797-z
This article is cited by
- Comment on: Assessing ChatGPT’s ability to answer questions pertaining to erectile dysfunction. International Journal of Impotence Research (2024)
- Response to commentary on: Assessing ChatGPT’s ability to answer questions pertaining to erectile dysfunction: can our patients trust it? International Journal of Impotence Research (2024)