Over the past 18 months, a number of universities and companies have been removing online data sets containing thousands — or even millions — of photographs of faces used to improve facial-recognition algorithms.
In most cases, researchers scraped these images from the Internet. The pictures were classified as public data, and their collection did not seem to alarm institutional review boards (IRBs) or other research-ethics bodies. But none of the people in the photos had been asked for permission, and some were unhappy about the way their faces had been used.
The problem has been brought to prominence by the work of Berlin-based artist and researcher Adam Harvey, who highlighted how companies use public data sets to hone surveillance-linked technology, and by the journalists who reported on his work. Many researchers in computer science and artificial intelligence (AI), and those responsible for the relevant institutional ethics-review processes, saw no harm in using public data without consent. But that is starting to change. It is one of many debates needed about how facial-recognition research, and many other kinds of AI work, can be conducted more responsibly.
As Nature reports in a series of Features on facial recognition this week, many in the field are rightly worried about how the technology is being used. They know that their work enables people to be easily identified, and therefore targeted, on an unprecedented scale. Some scientists are analysing the inaccuracies and biases inherent in facial-recognition technology and warning of discrimination. Others have joined campaigners in calling for stronger regulation, greater transparency and consultation with the communities being monitored by cameras, and for use of the technology to be suspended while lawmakers reconsider where and how it should be deployed. The technology might well have benefits, but these need to be weighed against the risks, which is why its use must be regulated properly and carefully.
Some scientists are urging a rethink of ethics in the field of facial-recognition research, too. They argue, for example, that scientists should not be doing certain types of study at all. Many are angry about academic studies that examined the faces of people from vulnerable groups, such as the Uyghur population in China, whom the government has subjected to mass surveillance and detention. Others have condemned papers that sought to classify faces according to scientifically and ethically dubious measures, such as criminality.
Nature conducted a survey to better understand researchers’ views on the ethics of facial-recognition technology and research. Many respondents said that they wanted conferences to introduce mandatory ethics reviews for biometrics studies. This is starting to happen. Next month’s NeurIPS (Neural Information Processing Systems) conference will, for the first time, require that scientists address certain ethical concerns and potential negative outcomes of their work. And the journal Nature Machine Intelligence has begun to ask researchers to write a statement describing the impact of certain types of AI research. These are important first steps, but just that; there is more that journals, funders and institutions could be doing.
For example, researchers are calling for more guidance from their institutions on what research is and isn’t acceptable. Nature’s survey of 480 researchers working in the fields of facial recognition, AI and computer science revealed widespread worry — but also disagreement — about the ethics of facial-recognition studies, and concern that IRBs might not be equipped to provide sufficient guidance. One researcher called for the creation of international guidelines on the ethics of facial-recognition work.
General ethical guidance for AI already exists. And some US and European funders have supported efforts to study the challenges of biometrics research, recommending a rethink of what counts as ‘public’ data and urging scientists to consider a study’s potential harm to society. Ultimately, biometrics research involves people, and scientists shouldn’t gather and analyse personal data simply because they can. Public consultation is key: scientists should consult those whom the data describe. If this is impossible because of a data set’s size, or because individuals cannot be contacted, researchers should convene a panel of representatives who can speak for those affected by the work: voices that count, rather than a box-ticking exercise. Scientists should be careful about sharing data widely, and should consider how their work might benefit or harm people and societies.
One problem is that AI guidance tends to consist of principles that aren’t easily translated into practice. Last year, the philosopher Brent Mittelstadt at the University of Oxford, UK, noted that at least 84 AI-ethics initiatives had produced high-level principles on the ethical development and deployment of AI (B. Mittelstadt Nature Mach. Intell. 1, 501–507; 2019). These tended to converge on classical medical-ethics concepts, such as respect for human autonomy, the prevention of harm, fairness and explicability (or transparency). But Mittelstadt pointed out that different cultures disagree fundamentally about what principles such as ‘fairness’ or ‘respect for autonomy’ mean in practice. Medicine has internationally agreed norms for preventing harm to patients, and robust accountability mechanisms; AI lacks these, Mittelstadt noted. Specific case studies and worked examples would do much more to stop ethics guidance from becoming little more than window-dressing.
A second concern is that many researchers depend on companies for their funding and data sets. And although most firms say they are concerned by ethical questions about the way biometrics technology is studied and used, their views are likely to be conflicted, because their bottom line depends on selling products.
Researchers alone can’t stop companies and governments using facial-recognition technology and analysis tools unethically. But they can argue loudly against such uses, and campaign for stronger governance, transparency and stricter standards. They can also reflect more deeply on why they’re doing their own work; how they’ve sourced their data sets; whether the communities they expect their studies to benefit actually want the research done; and what the potential negative consequences might be.