
Behavioural evidence for a transparency–efficiency tradeoff in human–machine cooperation


Recent advances in artificial intelligence and deep learning have made it possible for bots to pass as humans, as is the case with the recent Google Duplex—an automated voice assistant capable of generating realistic speech that can fool humans into thinking they are talking to another human. Such technologies have drawn sharp criticism due to their ethical implications, and have fueled a push towards transparency in human–machine interactions. Despite the legitimacy of these concerns, it remains unclear whether bots would compromise their efficiency by disclosing their true nature. Here, we conduct a behavioural experiment with participants playing a repeated prisoner’s dilemma game with a human or a bot, after being given either true or false information about the nature of their associate. We find that bots do better than humans at inducing cooperation, but that disclosing their true nature negates this superior efficiency. Human participants do not recover from their prior bias against bots despite experiencing cooperative attitudes exhibited by bots over time. These results highlight the need to set standards for the efficiency cost we are willing to pay in order for machines to be transparent about their non-human nature.
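The experiment above is built on the repeated prisoner's dilemma. As a minimal illustration of that game structure, the sketch below pits a fixed sequence of human moves against a bot playing tit-for-tat. The payoff values and the tit-for-tat strategy are illustrative assumptions only; the study's actual bot algorithm and stakes are described in the paper's methods, not here.

```python
# Minimal repeated prisoner's dilemma sketch. The payoff matrix and the
# tit-for-tat bot strategy are illustrative assumptions, not the paper's
# actual bot algorithm or incentive scheme.

PAYOFFS = {  # (my_move, their_move) -> my payoff; standard T > R > P > S
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def play_repeated_pd(human_moves, bot_start="C"):
    """Play a tit-for-tat bot against a fixed sequence of human moves."""
    bot_move = bot_start
    human_score = bot_score = 0
    history = []
    for human_move in human_moves:
        human_score += PAYOFFS[(human_move, bot_move)]
        bot_score += PAYOFFS[(bot_move, human_move)]
        history.append((human_move, bot_move))
        bot_move = human_move  # tit-for-tat: copy the partner's last move
    return human_score, bot_score, history

# Sustained mutual cooperation dominates early defection over enough rounds,
# which is why a human's prior bias against a cooperative bot is costly.
h, b, _ = play_repeated_pd(["C", "C", "C", "C", "C"])  # -> (15, 15, ...)
```

Under these illustrative payoffs, a participant who defects in the early rounds out of distrust, then switches to cooperation, still ends up behind a participant who cooperated throughout, mirroring the efficiency cost the paper attributes to disclosed bot identity.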


Fig. 1: Prejudice against purported bots early in the game.
Fig. 2: The tradeoff between efficiency and transparency.
Fig. 3: Bots learn to expect less from humans, especially when they are transparent.

Data availability

The data that support the findings of this study have been deposited in the Open Science Framework.

Code availability

The software and all code used to generate the findings of this study have been deposited in the Open Science Framework.





We thank E. Awad for his help running the experiments on MTurk. J.-F.B. acknowledges support from the ANR-Labex Institute for Advanced Study in Toulouse, the ANR-3IA Artificial and Natural Intelligence Toulouse Institute, and the grant ANR-17-EURE-0010 Investissements d’Avenir.

Author information




All authors conceived and designed the experiments. F.I.-O. and Z.S. conducted the experiments. F.I.-O. and J.-F.B. analysed the data and produced the figures and tables. F.I.-O., J.-F.B., J.C., I.R. and T.R. wrote the manuscript.

Corresponding authors

Correspondence to Iyad Rahwan or Talal Rahwan.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.



About this article


Cite this article

Ishowo-Oloko, F., Bonnefon, JF., Soroye, Z. et al. Behavioural evidence for a transparency–efficiency tradeoff in human–machine cooperation. Nat Mach Intell 1, 517–521 (2019).

