CORRESPONDENCE

AI surveillance studies need ethics review

University of Sydney, Australia.

Scientists who develop algorithms based on user data face a moral dilemma: if their work is subverted to manipulate democracy or to prop up oppressive regimes, they become complicit in something they would not knowingly endorse.

In common with other fields, research on artificial intelligence (AI) and machine learning should be subject to approval by institutional review boards and to compliance with data-protection regulations. In my view, journals should demand this as a condition of publication. Data scientists in industry must also adhere to professional guidelines from organizations such as the IEEE (see go.nature.com/2vt6ngr). And any combination of academic and corporate interests in such research should be disclosed, as in other fields.

As surveillance is combined with intelligent forms of behaviour-change technology, a new social contract around data is needed (see also H. Shah Nature 556, 7; 2018). Unlike the media companies, governments and organizations that use surveillance data, those who are monitored do not benefit. Nor do they have any control over how their data are used (a form of uninformed consent), as was poignantly illustrated by the Cambridge Analytica scandal (see Nature 555, 559–560; 2018).

Nature 557, 31 (2018)

doi: 10.1038/d41586-018-05015-1