
  • Comment

Presentation matters for AI-generated clinical advice

If mistakes are made in clinical settings, patients suffer. Artificial intelligence (AI) generally — and large language models specifically — are increasingly used in health settings, but the way that physicians use AI tools in this high-stakes environment depends on how information is delivered. AI toolmakers have a responsibility to present information in a way that minimizes harm.


Fig. 1: Presentation of health AI output is underevaluated.


Acknowledgements

M.G. is a CIFAR AI Chair, CIFAR Azrieli Global Scholar, Herman L. F. von Helmholtz Career Development Professor, and Jameel Clinic Affiliate, and acknowledges support from these programmes.

Author information


Correspondence to Marzyeh Ghassemi.

Ethics declarations

Competing interests

The author declares no competing interests.

Peer review

Peer reviewer information

Nature Human Behaviour thanks Sanmi Koyejo, and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.


About this article


Cite this article

Ghassemi, M. Presentation matters for AI-generated clinical advice. Nat Hum Behav 7, 1833–1835 (2023). https://doi.org/10.1038/s41562-023-01721-7

