
BOOK REVIEW

AI — the people and places that make, use and manage it

A man sorts tin ore from sand in Indonesia. Many of the metals needed for semiconductors are mined at great human and environmental cost. Credit: Beawiharta/Reuters

The Alignment Problem: Machine Learning and Human Values Brian Christian W. W. Norton (2020)

Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence Kate Crawford Yale Univ. Press (2021)

Artificial intelligence (AI) permeates our lives. It determines what we read and buy, whether we get a job, loan, mortgage, subsidies or parole. It diagnoses diseases and underlies — and undermines — democratic processes. Two new books offer complementary visions of how society is being reshaped by those who build, use and manage AI.

In The Alignment Problem, writer Brian Christian gives an intimate view of the people making AI technology — their aims, expectations, hopes, challenges and desolations. Starting with Walter Pitts’s work on a logical representation of neuron activity in the mid-twentieth century, he recounts the ideas, aims, successes and failures of researchers and practitioners in fields from cognitive science to engineering. Atlas of AI, from the influential scholar Kate Crawford, deals with how, practically, AI gets into and plays out in our lives. It shows that AI is an extractive industry, exploiting resources from materials to labour and information.

Both books parse how the power of, and over, the digital world is shifting politics and social relations. They nod to alternative approaches to these imbalances — state control in China, or the regulatory efforts of the European Union — but focus on the North American story. Together, they beg for a sequel: on ways forward.

From prophecy to ubiquity

Christian tracks the evolution of AI technology, from prophecy to ubiquity. He shows how researchers try to get AI to interpret human values such as fairness, transparency, curiosity and uncertainty, and the challenges that stand in their way. Largely based on encounters with researchers and practitioners, the book charts a slow, steady, complex progress, with many lows and some incredible highs.

We meet people such as Rich Caruana, now senior principal researcher at Microsoft in Redmond, Washington, who was asked as a graduate student to glance at something that led to his life’s work — optimizing data clustering and compression to make models that are both intelligible and accurate. And we walk along the beach with Marc Bellemare, who pioneered reinforcement learning while working with games for the Atari console and is now at Google Research in Montreal, Canada.

Christian shows researchers’ growing realization that AI developments are affected by societal values — and, more importantly, affect them. They come with a cost and can have profound impacts on communities. At its core, The Alignment Problem asks how we can ensure that AI systems capture our norms and values, understand what we mean and do what we want. We all have different conceptions of and requirements for what such systems should do. As mathematician Norbert Wiener put it in 1960: “We had better be quite sure that the purpose put into the machine is the purpose which we really desire.”

Behind the scenes of the big-data economy: an Amazon fulfilment centre in New Jersey. Credit: Bess Adler/Bloomberg/Getty

Crawford’s book exposes the dark side of AI success. It traverses the globe, exploring the places that make up AI’s infrastructure and their impact on people and environments. From Nevada and Indonesia, where the lithium and tin central to semiconductors are mined at great human and environmental cost, we travel to an Amazon warehouse in New Jersey. Here, labourers bend their bodies to the will of robots and production lines, instead of the automation adapting to human tempo. In an uneasy reminder of Charlie Chaplin’s 1936 film Modern Times, we witness the hardship of “fauxtomation” — supposedly automated systems that rely heavily on human labour, such as that of workers paid below minimum wage in data-labelling farms.

Crawford concludes with the arresting reminder that AI is not objective, neutral or universal. Instead, it is deeply embedded in the culture and economic reality of those who make it — mostly, the white, wealthy men of California’s Silicon Valley.

Both books are strong in their exposition of the challenges and dangers of current use and development of AI, and what sets it apart from ‘classic’ computing. Reading them side by side highlights three core issues: over-reliance on data-driven, stochastic predictions; automated decisions; and concentration of power.

Data dominance

As AI researcher Joy Buolamwini comments in the 2020 documentary Coded Bias, algorithmic decision-making is undoing decades of progress towards equal rights, reifying the very prejudices it reveals to be still deeply rooted. Why? Because using data to inform automated decisions often ignores the contexts, emotions and relationships that are core to human choices.

Data are not raw materials. They are always about the past, and they reflect the beliefs, practices and biases of those who create and collect them. Yet current application of automated decision-making is informed more by efficiency and economic benefits than by its effects on people.

Worse, most approaches to AI empower those who have the data and the computational capability to process and manage them. Increasingly, these are big tech corporations — private entities outside democratic processes and participatory control. Governments and individuals are the users, not the leaders. The consequences of this shift are enormous and have the potential to alter society.

So, what now? Besides efforts to debias the data and explain decisions algorithms make, we need to address the source of the bias. This will be done not through technological fixes, but by education and social change. At the same time, research is needed to address the field’s perverse dependence on correlations in data. Current AI identifies patterns, not meaning.

Meticulously researched and superbly written, these books ultimately hold up a mirror. They show that the responsible — ethical, legal and beneficial — development and use of AI is not about technology. It is about us: how we want our world to be; how we prioritize human rights and ethical principles; who comprises this ‘we’. This discussion can wait no longer. But the key question is: how can all have a voice?

Nature 593, 499-500 (2021)

doi: https://doi.org/10.1038/d41586-021-01397-x

Competing Interests

The author declares no competing interests.
