We trained an artificial intelligence (AI) system to recommend different interactions and connections between humans playing a group game together. Through trial and error, the AI system learned to take an encouraging approach to uncooperative individuals, keeping them engaged with the group and boosting cooperation levels for everyone.
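The "trial and error" learning described above can be illustrated with a toy sketch. Everything below is hypothetical and greatly simplified: the real study used deep reinforcement learning over social-network structure, whereas this sketch uses a two-armed epsilon-greedy bandit choosing between rewiring policies in a made-up group game, with reward equal to the group's cooperation level.

```python
import random

# Hypothetical toy model (NOT the authors' environment): players cooperate
# more often when the group around them cooperates. A simple epsilon-greedy
# bandit plays the recommender's role, learning by trial and error which
# connection policy raises group-wide cooperation.

random.seed(0)

N = 12  # number of players
ACTIONS = ["pair_defector_with_cooperator", "rewire_randomly"]

def play_round(coop_prob):
    """Each player independently cooperates with their current probability."""
    return [random.random() < p for p in coop_prob]

def step(coop_prob, action):
    """Apply a connection policy, then nudge cooperation probabilities.

    Pairing defectors with cooperators gives defectors a larger upward
    nudge, mimicking the 'encouraging' effect described in the summary.
    """
    choices = play_round(coop_prob)
    mean = sum(choices) / N  # this round's cooperation level
    updated = []
    for p, cooperated in zip(coop_prob, choices):
        if action == "pair_defector_with_cooperator" and not cooperated:
            p = min(1.0, p + 0.15 * (mean + 0.2))  # encouraged defector
        else:
            p = min(1.0, p + 0.05 * (mean - p))    # drift toward the mean
        updated.append(p)
    return updated, mean  # reward = group cooperation level

q = {a: 0.0 for a in ACTIONS}       # estimated value of each policy
counts = {a: 0 for a in ACTIONS}
coop_prob = [0.3] * N               # everyone starts mostly uncooperative
eps, episodes = 0.1, 400

for _ in range(episodes):
    # Explore occasionally; otherwise exploit the best-looking policy.
    a = random.choice(ACTIONS) if random.random() < eps else max(q, key=q.get)
    coop_prob, reward = step(coop_prob, a)
    counts[a] += 1
    q[a] += (reward - q[a]) / counts[a]  # incremental running-mean update

print({a: round(v, 2) for a, v in q.items()})
print("final cooperation level:", round(sum(coop_prob) / N, 2))
```

Under these made-up dynamics, the bandit tends to favour the encouraging policy because it yields higher cooperation rewards, echoing the behaviour the trained system discovered at much larger scale.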
Meserole, C. How do recommender systems work on digital platforms? Brookings Institution (21 September 2022). This commentary provides an accessible explanation of how recommender systems work.
Stray, J. et al. Building human values into recommender systems: an interdisciplinary synthesis. Preprint at arXiv, https://doi.org/10.48550/arXiv.2207.10192 (2022). This preprint describes positive and negative effects of recommender systems, exploring various values that could be used to guide the behaviour of recommender systems.
Dafoe, A. et al. Open problems in cooperative AI. Preprint at arXiv, https://doi.org/10.48550/arXiv.2012.08630 (2020). This preprint discusses the application of modern AI to long-standing cooperation challenges.
Rand, D. G., Arbesman, S. & Christakis, N. A. Dynamic social networks promote cooperation in experiments with humans. Proc. Natl Acad. Sci. USA 108, 19193–19198 (2011). This empirical article presents formative experimental evidence on the effects of network structure on group cooperation.
Shirado, H. & Christakis, N. A. Network engineering using autonomous agents increases cooperation in human groups. iScience 23, 1–11 (2020). This empirical article demonstrates the effectiveness of simple, rule-based algorithms at supporting cooperation in human groups.
This is a summary of: McKee, K. R. et al. Scaffolding cooperation in human groups with deep reinforcement learning. Nat. Hum. Behav., https://doi.org/10.1038/s41562-023-01686-7 (2023).
AI learns to encourage group cooperation by making new connections. Nat Hum Behav 7, 1618–1619 (2023). https://doi.org/10.1038/s41562-023-01699-2