
Perspective

The health risks of generative AI-based wellness apps

Abstract

Artificial intelligence (AI)-enabled chatbots are increasingly being used to help people manage their mental health. Chatbots for mental health, and ‘wellness’ applications in particular, currently exist in a regulatory ‘gray area’: most generative AI-powered wellness apps will not be reviewed by health regulators at all. However, recent findings suggest that users sometimes share mental health problems with these apps, and even seek support from them during crises, and that the apps sometimes respond in a manner that increases the risk of harm to the user, a challenge that the current US regulatory structure is not well equipped to address. In this Perspective, we discuss the regulatory landscape and potential health risks of AI-enabled wellness apps. Although we focus on the United States, regulators across the globe face similar challenges. We discuss the problems that arise when AI-based wellness apps cross into medical territory, the implications for app developers and regulatory bodies, and the outstanding priorities for the field.


Fig. 1: Generative AI multiplies edge cases.
Fig. 2: The continuum from constrained to unconstrained solutions.
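Fig. 2 describes a continuum from constrained (fully scripted) to unconstrained (fully generative) chatbot designs. As a purely illustrative sketch, not drawn from the article itself, the snippet below shows one hybrid point on that continuum: a deterministic guardrail intercepts crisis language and returns a vetted scripted reply, while all other messages fall through to a generative model. All identifiers (detect_crisis, CRISIS_RESPONSE, generate_reply) are hypothetical.

```python
# Illustrative sketch only: a hybrid design between the constrained and
# unconstrained poles of Fig. 2. All identifiers are hypothetical.

CRISIS_TERMS = {"suicide", "kill myself", "end my life", "self-harm"}

# Vetted, fixed response (the constrained pole): no text generation involved.
CRISIS_RESPONSE = (
    "It sounds like you may be going through a crisis. Please reach out to "
    "a professional, for example the 988 Suicide & Crisis Lifeline (US)."
)

def detect_crisis(message: str) -> bool:
    """Deterministic keyword screen; a real system might use a classifier."""
    text = message.lower()
    return any(term in text for term in CRISIS_TERMS)

def respond(message: str, generate_reply) -> str:
    """Route crisis messages to the scripted response; otherwise defer to
    the generative model (the unconstrained pole)."""
    if detect_crisis(message):
        return CRISIS_RESPONSE
    return generate_reply(message)
```

The trade-off mirrors the figure: the more traffic the scripted branch handles, the more predictable, but less flexible, the app becomes; routing everything to the generative branch maximizes flexibility while multiplying unreviewed edge cases.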



Acknowledgements

I.G.C. was supported in part by a Novo Nordisk Foundation grant for a scientifically independent International Collaborative Bioscience Innovation & Law Programme (Inter-CeBIL Programme, grant no. NNF23SA0087056).

Author information


Corresponding author

Correspondence to I. Glenn Cohen.

Ethics declarations

Competing interests

I.G.C. serves on the bioethics advisory board of Illumina and on the bioethics council of Bayer and is an advisor to World Class Health. He was also compensated for speaking at events organized by Philips with the Washington Post and attending the Transformational Therapeutics Leadership Forum organized by Galen/Atlantica, and was retained as an expert in health privacy, reproductive technology and gender-affirming care lawsuits.

Peer review

Peer review information

Nature Medicine thanks Simon Goldberg and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Primary Handling Editor: Karen O’Leary, in collaboration with the Nature Medicine team.

Additional information

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

De Freitas, J., Cohen, I.G. The health risks of generative AI-based wellness apps. Nat Med (2024). https://doi.org/10.1038/s41591-024-02943-6

