Large language models (LLMs) can respond to free-text queries without task-specific training, prompting both excitement and concern about their use in healthcare settings. ChatGPT is a generative artificial intelligence (AI) chatbot produced through sophisticated fine-tuning of an LLM, and other tools are emerging through similar developmental processes. Here we outline how LLM applications such as ChatGPT are developed, and we discuss how they are being leveraged in clinical settings. We consider the strengths and limitations of LLMs and their potential to improve the efficiency and effectiveness of clinical, educational and research work in medicine. LLM chatbots have already been deployed in a range of biomedical contexts, with impressive but mixed results. This review acts as a primer for interested clinicians, who will determine if and how LLM technology is used in healthcare for the benefit of patients and practitioners.
D.S.W.T. is supported by the National Medical Research Council, Singapore (NMCR/HSRG/0087/2018, MOH-000655-00 and MOH-001014-00), the Duke-NUS Medical School (Duke-NUS/RSF/2021/0018 and 05/FY2020/EX/15-A58) and the Agency for Science, Technology and Research (A20H4g2141 and H20C6a0032). These funders were not involved in the conception, execution or reporting of this review.
D.S.W.T. holds a patent on a deep learning system for the detection of retinal diseases. The other authors declare no conflicts of interest.
Peer review information
Nature Medicine thanks Melissa McCradden, Pranav Rajpurkar and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Primary handling editor: Karen O’Leary, in collaboration with the Nature Medicine team.
Cite this article
Thirunavukarasu, A.J., Ting, D.S.J., Elangovan, K. et al. Large language models in medicine. Nat Med 29, 1930–1940 (2023). https://doi.org/10.1038/s41591-023-02448-8