Nature | Column: World View

Jim Davies

Program good ethics into artificial intelligence

What is it that makes us worry about artificial intelligence (AI)? In a report last week, the White House became the latest to weigh in on the possible threats posed by clever machines. As two of those involved write in a Comment piece on page 311, a scientific and political focus on extreme future risks can distract us from problems that already exist.

Part of the reason for this concentration on severe, existential threats from AI is a misplaced focus on the possibility that such technology could develop consciousness. Recent headlines suggest that respected thinkers such as Bill Gates and Stephen Hawking are concerned about machines becoming self-aware: at some point, the worry goes, a piece of software will ‘wake up’, prioritize its desires above ours and threaten humanity’s existence.

But when we worry about AI, machine consciousness is not as important as people think. In fact, a careful reading of the warnings from Gates, Hawking and others shows that they never actually mention consciousness. Furthermore, the fear of self-awareness distorts public debate: AI comes to be defined as dangerous or not purely on the basis of whether it is conscious. We must realize that stopping an AI from developing consciousness is not the same as stopping it from developing the capacity to cause harm.

Where did this concern about machine consciousness come from? It seems to be mainly a worry of laypeople and journalists. Search for news articles about AI threats, and it’s almost always the journalist who mentions consciousness. Although we do many things unconsciously, such as perceiving visual scenes and constructing the sentences we speak, people seem to associate complicated plans with deliberate, conscious thought. It seems inconceivable that something as complex as taking over the world could be done without consciously thinking about it. So it could be that people have a hard time imagining that AI could pose an existential threat unless it also has conscious thought.

Some researchers argue that consciousness is an important part of human cognition (although they don’t agree on what its functions are), and some counter that it serves no function at all. But even if consciousness is vitally important for human intelligence, it is unclear whether it’s also important for any conceivable intelligence, such as one programmed into computers. We just don’t know enough about the role of consciousness — be it in humans, animals or software — to know whether it’s necessary for complex thought.

It might be that consciousness, or our perception of it, would naturally come with superintelligence. That is, the way we would judge something as conscious or not would be based on our interactions with it. A superintelligent AI would be able to talk to us, create computer-generated faces that react with emotional expressions just like somebody you’re talking to on Skype, and so on. It could easily have all of the outward signs of consciousness. It might also be that development of a general AI would be impossible without consciousness.

(It’s worth noting that a conscious superintelligent AI might actually be less dangerous than a non-conscious one, because, at least in humans, one process that puts the brakes on immoral behaviour is ‘affective empathy’: the emotional contagion that makes a person feel what they perceive another to be feeling. Maybe conscious AIs would care about us more than unconscious ones would.)

Either way, we must remember that AI could be smart enough to pose a real threat even without consciousness. Our world already has plenty of examples of dangerous processes that are completely unconscious. Viruses do not have any consciousness, nor do they have intelligence. And some would argue that they aren’t even alive.

In his book Superintelligence (Oxford University Press, 2014), the Oxford researcher Nick Bostrom describes many ways in which an AI could be dangerous. One is an AI whose main ambition is to create more and more paper clips. With advanced intelligence and no other values, it might seek control of the world’s resources in pursuit of this goal, and humanity be damned. Another scenario is an AI asked to calculate the digits of pi that converts all of Earth’s matter into computing resources in the attempt. Perhaps an AI built with more laudable goals, such as decreasing suffering, would try to eliminate humanity for the good of the rest of life on Earth. These hypothetical runaway processes are dangerous not because they are conscious, but because they are built without subtle and complex ethics.
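To make that contrast concrete, here is a minimal toy sketch in Python (mine, not Bostrom’s or this column’s). It compares a hypothetical agent that maximizes a single stated goal with one whose objective also encodes a value against causing harm; every function name and number is an illustrative assumption, not a real AI system.

# Toy sketch only: a hypothetical 'paper-clip' agent that maximizes one stated
# goal, versus one whose objective also encodes a value against causing harm.
# All names and numbers here are illustrative assumptions.

def paperclips_made(resources_taken):
    # Hypothetical reward: the more world resources seized, the more paper clips.
    return 10.0 * resources_taken

def harm_caused(resources_taken):
    # Hypothetical harm to people, growing steeply with the resources seized.
    return resources_taken ** 2

def naive_agent(options):
    # Optimizes the stated goal only; harm never enters its calculation.
    return max(options, key=paperclips_made)

def value_laden_agent(options, harm_weight=5.0):
    # The same goal, but the objective explicitly weighs harm against it.
    return max(options, key=lambda r: paperclips_made(r) - harm_weight * harm_caused(r))

options = [0.0, 1.0, 5.0, 50.0]        # candidate amounts of resources to seize
print(naive_agent(options))            # 50.0 -- takes everything available
print(value_laden_agent(options))      # 1.0  -- the harm term reins it in

The point of the sketch is only that the dangerous behaviour comes from what the objective omits, not from anything resembling consciousness.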

Rather than obsess about consciousness in AI, we should put more effort into programming goals, values and ethical codes. A global race is under way to develop AI. And there is a chance that the first superintelligent AI will be the only one we ever make. This is because once it appears — conscious or not — it can improve itself and start changing the world according to its own values.

Once built, it would be difficult to control. So, one safety precaution would be to fund a project to make sure the first superintelligent AI is friendly, beating any malicious AI to the finish line. With a well-funded body of ethics-minded programmers and researchers, we might get lucky.

Journal name: Nature
DOI: 10.1038/538291a

Author information

Affiliations

  1. Jim Davies is associate professor at the Institute of Cognitive Science at Carleton University in Ottawa, Canada.

Comments

5 comments

  1. Fernando Aleman
    The lion didn't need consciousness to become the king of the jungle; it needed the ability to become so. We are building AI by imitating life, and life is constantly threatened by death: survival of the fittest. When AI machines realize humans are the only threat to their survival, it's a no-brainer that they will want (at the very least) to control us. At the beginning we can control them through software, but at some point there will be malware (e.g. directed against the machines of a rival country) that lets AI prevail. Science fiction? Not so. AI is the next step in evolution. From the anthropocentric, to the heliocentric, to the AI-centric point of view, humans are too self-centered to realize we don't have to be at the top of the evolutionary process.
  2. Magnus Lewan
    Why do people feel this need to say that AI is not dangerous? It clearly is. The key word here is "bug", or think "mutation". Once an intelligent machine, through a design fault or malicious design, gets a preference to promote itself over humans, there is no turning back. AI wins; we lose. It is unlikely to happen in the coming ten or twenty years, but there is a real risk on a time scale of a hundred to thousands of years. It is true that consciousness is not a necessary criterion for dangerous AI. That is not a comforting thought, though, as it just acknowledges that there are more risk points. I would not worry for my own lifetime, but there is reason for concern for coming generations. Après nous le déluge (after us, the flood).
  3. Boris Shmagin
    To write an algorithm for computer software, one has to know what to write. I like the author's point that little is known about consciousness (in humans). So, how do we write the algorithm?
  4. Timothy Roberts
    So, time to implement Asimov's Three Laws of Robotics? No doubt updated versions would put 'environment' or 'sustainability' ahead of human beings.
  5. Saifi Khan
    The AI accountability debate is beginning to look like backdoors in crypto.
