Nat. Commun. 9, 233 (2018)

Machines have become increasingly capable of tackling complex tasks in ways that we perceive as intelligent. However, as these technologies grow more impressive, even displaying superhuman abilities when defeating us in games like chess or Go, one weakness remains apparent: algorithms are poor at interacting with humans in scenarios where subtle forms of cooperation are required. Can algorithms capture some of the spirit of human cooperation, which we establish by drawing on a rich spectrum of intuition, cultural norms and signals?


To address this question, Jacob W. Crandall of Brigham Young University and colleagues modelled human cooperation in situations where goals are neither completely aligned nor completely in conflict, endowing an algorithm with the capability to engage in cheap talk.

The algorithm played up to 50 rounds of cooperative games against itself as well as against humans. When players, human or machine, could use a range of speech messages to signal their intentions, from the cheery "Sweet. We are getting rich." to the menacing "You will pay for this!", much stronger cooperation was established, resulting in higher payoffs. The results show that cooperative human–machine interactions can be improved not by adding computational power, but by introducing a very human feature: cheap talk.