‘Moral machine’ experiment is no basis for policymaking

Carnegie Mellon University, Pittsburgh, Pennsylvania, USA.

Carnegie Mellon University, Pittsburgh, Pennsylvania, USA.

Lund University, Lund, Sweden.

The ‘moral machine’ experiment for autonomous vehicles devised by Edmond Awad and colleagues is not a sound starting place for incorporating public concerns into policymaking (Nature 563, 59–64; 2018).

The experiment presents participants with stylized moral dilemmas that are intended to resemble choices facing designers and regulators. For example, participants must choose between a crash that kills three elderly pedestrians and one that kills three non-elderly occupants of an autonomous vehicle.

The study would have benefited from a premise common to philosophy and psychology: namely, that stylized dilemmas are a means rather than an end. They are meant to pose questions rather than answer them, and to inform public discourse rather than attempt to resolve it (B. Fischhoff Science 350, aaa6516; 2015).

Philosophers use stylized tasks to analyse the complex and uncertain situations in which moral choices are actually made. Dilemmas have no meaning outside such discourse. Although survey responses might stimulate enquiry, taking them literally is antithetical to philosophical practice.

Psychologists use stylized tasks to test individuals’ sensitivity to cues that could help them to decide between options. A single representation of a dilemma cannot stand alone: it reveals little without knowledge of how participants interpret it, how they respond to alternative wordings and how they view the ethics of a society guided by survey responses (see D. Medin et al. Nature Hum. Behav. 1, 0088; 2017).

Nature 567, 31 (2019)

doi: 10.1038/d41586-019-00766-x