The call by Iyad Rahwan and colleagues for a science of “machine behaviour” that empirically studies artificial intelligence (AI) “in the wild” (Nature 568, 477–486; 2019) is an example of ‘columbusing’. That is, what they claim to have discovered is, in fact, an existing field of study that has produced vibrant, engaged research for decades. Cybernetics, the science of communication and automatic control systems in machines and living things, has been flourishing since the 1940s.
In our view, this prior art exposes serious ethical and scientific problems with the authors’ proposal. Studying AI agents as if they are animate moves responsibility for the behaviour of machines away from their designers, thereby undermining efforts to establish professional ethics codes for AI practitioners.
The authors’ idea that those who create machine-learning systems and study their behaviour cannot anticipate the systems’ “downstream societal effects” is false. Sociologists and anthropologists have long contributed to research on AI. For example, social scientists have described how AI can embed human intentions in material infrastructures (W. E. Bijker et al. (eds) The Social Construction of Technological Systems; 2012). Most could foresee the societal outcomes of AI agents.
Columbusing fails to give due credit. It rides roughshod over long-fought struggles to centre the ethical implications of science and technology on crucial issues such as inclusivity and diversity. All too often, those struggles have been fought by women and people of colour, who have laid much of the overlooked intellectual foundation of their disciplines.
Nature 574, 176 (2019)