Correspondence

Decision-making

Be wary of 'ethical' artificial intelligence

Nature volume 540, page 525 (22 December 2016)

Jim Davies's suggestion that we programme ethics into artificial intelligence meta-systems as a safeguard could well backfire — by compromising our ability to judge ethical implications (Nature 538, 291; 2016).

In an earlier version of the future, robot lawnmowers and kitchen appliances promised us more leisure time. We now face the spectre of mass human displacement from a consumption-based economy by equipment that can do things much more efficiently than people can.

The 'age of information' promised global connectivity, but this has wrought distraction to a point at which only lurid excesses can focus our undivided attention on the society to which we all belong.

And as computer-generated imagery colonizes our imaginations, many are barely swayed by real violence (the wanton destruction of Syrian cities comes to mind). There is even evidence that video gaming driven by computer-generated imagery can alter a player's perception of acceleration and gravity (see, for example, A. B. Ortiz de Gortari and M. D. Griffiths Int. J. Hum. Comput. Interact. 30, 95–105; 2014) — compromising their decision-making skills in a world where real physics is the law. Such trends don't bode well for 'ethical' computers.

Author information

Affiliations

  1. Ocean Conservation Research, Lagunitas, California, USA.

    • Michael Stocker

Corresponding author

Correspondence to Michael Stocker.

About this article

DOI: https://doi.org/10.1038/540525b
