
Bad machines corrupt good morals

Abstract

As machines powered by artificial intelligence (AI) influence humans’ behaviour in ways that are both like and unlike the ways humans influence each other, worry emerges about the corrupting power of AI agents. To estimate the empirical validity of these fears, we review the available evidence from behavioural science, human–computer interaction and AI research. We propose four main social roles through which both humans and machines can influence ethical behaviour. These are: role model, advisor, partner and delegate. When AI agents become influencers (role models or advisors), their corrupting power may not exceed the corrupting power of humans (yet). However, AI agents acting as enablers of unethical behaviour (partners or delegates) have many characteristics that may let people reap unethical benefits while feeling good about themselves, a potentially perilous interaction. On the basis of these insights, we outline a research agenda to gain behavioural insights for better AI oversight.


Fig. 1: Four main roles in which AI agents and humans influence ethical behaviour.


Acknowledgements

We thank A. Bouza da Costa for designing the illustrations, and M. Leib and L. Karim for valuable comments on the manuscript. J.-F.B. acknowledges support from the Institute for Advanced Study in Toulouse, grant ANR-19-PI3A-0004 from the Artificial and Natural Intelligence Toulouse Institute and grant ANR-17-EURE-0010 from Investissements d’Avenir.

Author information


Corresponding author

Correspondence to Nils Köbis.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Peer review information Nature Human Behaviour thanks Thilo Hagendorff and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Köbis, N., Bonnefon, J.-F. & Rahwan, I. Bad machines corrupt good morals. Nat. Hum. Behav. 5, 679–685 (2021). https://doi.org/10.1038/s41562-021-01128-2

