In cooperative games, humans are biased against AI systems, even when those systems behave more cooperatively than human partners. This raises a question: should AI systems ever be allowed to conceal their true nature and deceive us for our own benefit?
Cite this article
Rovatsos, M. We may not cooperate with friendly machines. Nat Mach Intell 1, 497–498 (2019). https://doi.org/10.1038/s42256-019-0117-1