WORLD VIEW

Policies designed for drugs won’t work for AI

Health authorities are overlooking risks to systems and society in their evaluations of new digital technologies, says Melanie Smallman.
Melanie Smallman co-directs University College London’s Responsible Research and Innovation Hub.

In many ways, the new code of conduct for artificial-intelligence (AI) systems in health care rolled out last month by the UK government is timely and necessary. The principles, laid out by the Department of Health and Social Care, aim to protect patient data and “ensure that only the best and safest data-driven technologies are used” (see go.nature.com/2gqri5g). The projects that the code covers include efforts by Alphabet-backed AI company DeepMind, which has been working with London’s Moorfields Eye Hospital to crunch through more than one million eye scans to design an algorithm that could detect macular degeneration, and a partnership between Ultromics and John Radcliffe Hospital in Oxford, UK, that is using AI to improve early detection of heart disease and lung cancer.

Yet I fear that the guidelines, likely to become a global benchmark, herald a deluge of inadequate policies to regulate AI. The code reveals a lack of appreciation for how AI is changing health care, the community and society, instead adhering to conventional assessments of medical interventions’ impacts on individual privacy, safety and efficacy.

The impact of AI is more akin to that of automobiles or personal computers than of medicine. Medicines are prescribed to patients, their use tied to individual need. But cars have shaped all our lives, cities and industries — even for individuals who do not drive.

Policy around innovation and technology largely neglects tech’s potential to worsen inequalities, even as examples mount. Political scientist Virginia Eubanks of the University at Albany, State University of New York, coined the phrase ‘digital poorhouse’ to describe the effects of AI and automation on low-income households and communities. For example, the city of Los Angeles, California, uses a program to match homeless people with appropriate housing; to gain shelter, individuals are encouraged to disclose their names, whether they have had unprotected sex with a stranger or have considered self-harm, and how often they have accessed crisis services for sexual assault or domestic abuse. Middle-class communities would not tolerate this level of intrusion. When these data are shared among police and public services, the potential for unfair treatment multiplies.

I study the relationship between science and society at University College London, and I am on a team considering data ethics and AI in health care at London’s Alan Turing Institute. The risk of widening inequality is not an unintended side effect to be reined in with regulation; it is embedded in the technologies themselves. For instance, most digital firms succeed by producing and selling goods with negligible manufacturing and distribution costs, so value concentrates in a small, highly skilled workforce. This boosts salaries for high-skill workers while reducing demand, pay and conditions for everyone else.

We can already glean how these technologies are changing health-care systems. In late 2017, Babylon Health in London launched a smartphone app that provides physician consultations. The Royal College of General Practitioners criticized Babylon for “cherry-picking patients, leaving traditional GP services to deal with the most complex patients, without sufficient resources to do so”. Radiation oncologist Anthony Zietman at Massachusetts General Hospital in Boston has described how the costs of proton-beam therapy distort US health-care markets and channel funds away from other areas, such as conventional radiotherapy. My colleagues at King’s College London have found that investment in surgical robotics draws funds from other treatments and centralizes care in large teaching hospitals, requiring many patients to travel longer distances or forgo care.

The public understands that the pros and cons of technologies are often inextricably linked, that evaluating technologies means deciding whether the benefits outweigh the downsides, and that doing so depends on how both are distributed. Over more than a decade of using focus groups and participatory exercises to gauge public opinion — on topics from stem cells to nanoscience — I have seen consistently sophisticated assessments of how effects are felt at multiple, interacting scales, from the individual to the societal. People worry about the kind of world that technologies will create, not just about harm to individuals. Our policies must show similar sophistication.

To me, the UK code is a missed opportunity to start things off right, to anticipate wider, inevitable problems and to keep the health system affordable and effective. It is thanks to the comprehensive National Health Service that the United Kingdom has more than seven decades of data — crucial for developing AI for health care. But these same data also warn that social inequality, through increased stress, is detrimental to everyone’s physical and mental health, and that poverty has documented biological effects on the brain and body.

Technologies can improve health care, speed up diagnoses and reduce costs. But fulfilling that potential will require us to broaden the lens through which we evaluate them, and soon.

It won’t be simple. As with the advent of the car, many serious implications will be emergent, and the harshest effects borne by communities with the least powerful voices. We need to move our gaze from individuals to systems to communities, and back again. We must bring together diverse expertise, including workers and citizens, to develop a framework that health systems can use to anticipate and address issues. This framework needs an explicit mandate to consider and anticipate the social consequences of AI — and to keep watch over its effects. That is the best way to ensure that health technologies meet the needs of all, and not just those in Silicon Valley.

Nature 567, 7 (2019)

doi: 10.1038/d41586-019-00737-2