
Comparing the value of perceived human versus AI-generated empathy

Abstract

Artificial intelligence (AI), and specifically large language models, demonstrates remarkable social–emotional abilities, which may improve human–AI interactions and AI’s emotional support capabilities. However, it remains unclear whether empathy, encompassing understanding, ‘feeling with’ and caring, is perceived differently when attributed to AI versus humans. We conducted nine studies (n = 6,282) in which AI-generated empathic responses to participants’ emotional situations were labelled as provided by either humans or AI. Human-attributed responses were rated as more empathic and supportive, and elicited more positive and fewer negative emotions, than AI-attributed ones. Moreover, participants’ own uninstructed belief that AI had aided the human-attributed responses reduced perceived empathy and support. These effects were replicated across varying response lengths, delays, iterations and large language models, and were driven primarily by responses emphasizing emotional sharing and care. Additionally, people consistently chose human interaction over AI when seeking emotional engagement. These findings advance our understanding of empathy in general, and of human–AI empathic interactions in particular.


Fig. 1: Effects of condition on perceived empathy and positivity resonance.
Fig. 2: Effects of condition on emotions, authenticity and support.
Fig. 3: Effects of assumed aid from the other source on empathy in the response and perceived support.
Fig. 4: Differences in perceived empathy in a multi-turn interaction.
Fig. 5: Linear model reveals condition-dependent differences in perceived empathy by response type.


Data availability

All preprocessed data, excluding participants’ personal experiences when they did not provide consent to share them, are available via OSF at https://osf.io/w4hkd/?view_only=52a2324d36bc4c03ad9f1d90ba75ab7b.

Code availability

All analysis files are available via OSF at https://osf.io/w4hkd/?view_only=52a2324d36bc4c03ad9f1d90ba75ab7b.


Acknowledgements

This work was supported in part by grants from the Mind and Life Institute and the Azrieli Israel Center for Addiction and Mental Health to A.P., and a fellowship from the Azrieli Israel Center for Addiction and Mental Health to M.R. The funders had no role in study design, data collection and analysis, decision to publish or preparation of the manuscript.

Author information

Contributions

M.R., J.L., F.Z., A.G. and A.P. were involved in the experimental design and project planning. M.R., D.C.O., A.G. and A.P. contributed to the data analyses. M.R., A.G. and A.P. wrote the paper. All authors reviewed and edited the paper.

Corresponding authors

Correspondence to Matan Rubin or Anat Perry.

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature Human Behaviour thanks Corina Pelau and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.

Additional information

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Rubin, M., Li, J.Z., Zimmerman, F. et al. Comparing the value of perceived human versus AI-generated empathy. Nat Hum Behav (2025). https://doi.org/10.1038/s41562-025-02247-w

