Recent advances in artificial intelligence and deep learning have made it possible for bots to pass as humans, as is the case with the recent Google Duplex—an automated voice assistant capable of generating realistic speech that can fool humans into thinking they are talking to another human. Such technologies have drawn sharp criticism due to their ethical implications, and have fueled a push towards transparency in human–machine interactions. Despite the legitimacy of these concerns, it remains unclear whether bots would compromise their efficiency by disclosing their true nature. Here, we conduct a behavioural experiment with participants playing a repeated prisoner’s dilemma game with a human or a bot, after being given either true or false information about the nature of their associate. We find that bots do better than humans at inducing cooperation, but that disclosing their true nature negates this superior efficiency. Human participants do not recover from their prior bias against bots despite experiencing cooperative attitudes exhibited by bots over time. These results highlight the need to set standards for the efficiency cost we are willing to pay in order for machines to be transparent about their non-human nature.
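The repeated prisoner's dilemma at the core of the experiment can be sketched as a small simulation. This is a minimal illustration only: the payoff values (T=5, R=3, P=1, S=0) and the bot's tit-for-tat strategy are standard textbook assumptions, not the actual bot algorithm or payoffs used in the study.

```python
# Minimal repeated prisoner's dilemma simulation (illustrative sketch only).
# The payoff matrix and the tit-for-tat bot are assumptions for this example;
# the study itself used a more sophisticated bot strategy.

PAYOFFS = {  # (my_move, partner_move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(history):
    """Cooperate on the first round, then copy the partner's last move."""
    return "C" if not history else history[-1][1]

def play(rounds, bot, human_moves):
    """Play the repeated game; return (bot_score, human_score)."""
    history = []  # list of (bot_move, human_move) pairs
    bot_score = human_score = 0
    for r in range(rounds):
        b = bot(history)
        h = human_moves[r]
        bot_score += PAYOFFS[(b, h)]
        human_score += PAYOFFS[(h, b)]
        history.append((b, h))
    return bot_score, human_score

# Example: a participant who defects on round 1 out of distrust of the bot,
# then cooperates. An early defection costs both sides relative to
# mutual cooperation from the start.
print(play(5, tit_for_tat, ["D", "C", "C", "C", "C"]))  # -> (14, 14)
print(play(5, tit_for_tat, ["C", "C", "C", "C", "C"]))  # -> (15, 15)
```

The contrast between the two runs mirrors the paper's point: prior bias against bots (the initial defection) reduces the surplus that a cooperative bot can generate.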
The data that support the findings of this study have been deposited in the Open Science Framework (https://doi.org/10.17605/OSF.IO/AK3TF).
The software and all code used to generate the findings of this study have been deposited in the Open Science Framework (https://doi.org/10.17605/OSF.IO/AK3TF).
We thank E. Awad for his help running the experiments on MTurk. J.-F.B. acknowledges support from the ANR-Labex Institute for Advanced Study in Toulouse, the ANR-3IA Artificial and Natural Intelligence Toulouse Institute, and the grant ANR-17-EURE-0010 Investissements d’Avenir.
The authors declare no competing interests.
Cite this article
Ishowo-Oloko, F., Bonnefon, JF., Soroye, Z. et al. Behavioural evidence for a transparency–efficiency tradeoff in human–machine cooperation. Nat Mach Intell 1, 517–521 (2019). https://doi.org/10.1038/s42256-019-0113-5