Nature | Column: World View

Intelligent robots must uphold human rights


There is a strong possibility that in the not-too-distant future, artificial intelligences (AIs), perhaps in the form of robots, will become capable of sentient thought. Whatever form it takes, this dawning of machine consciousness is likely to have a substantial impact on human society.

Microsoft co-founder Bill Gates and physicist Stephen Hawking have in recent months warned of the dangers of intelligent robots becoming too powerful for humans to control. The ethical conundrum of intelligent machines and how they relate to humans has long been a theme of science fiction, and has been vividly portrayed in films such as 1982's Blade Runner and this year's Ex Machina.

Academic and fictional analyses of AIs tend to focus on human–robot interactions, asking questions such as: would robots make our lives easier? Would they be dangerous? And could they ever pose a threat to humankind?

These questions ignore one crucial point. We must consider interactions between intelligent robots themselves and the effect that these exchanges may have on their human creators. For example, if we were to allow sentient machines to commit injustices on one another — even if these 'crimes' did not have a direct impact on human welfare — this might reflect poorly on our own humanity. Such philosophical deliberations have paved the way for the concept of 'machine rights'.

Most discussions on robot development draw on the Three Laws of Robotics devised by science-fiction writer Isaac Asimov: robots may not injure humans (or through inaction allow them to come to harm); robots must obey human orders; and robots must protect their own existence. But these rules say nothing about how robots should treat each other. It would be unreasonable for a robot to uphold human rights and yet ignore the rights of another sentient thinking machine.

Animals that exhibit thinking behaviour are already afforded rights and protection, and civilized society shows contempt for animal fights that are set up for human entertainment. It follows that sentient machines that are potentially much more intelligent than animals should not be made to fight for entertainment.


Of course, military robots are already being deployed in conflicts. But outside legitimate warfare, forcing AIs and robots into conflict, or mistreating them, would be detrimental to humankind's moral, ethical and psychological well-being.

Intelligent robots remain science fiction, but it is not too early to take these issues seriously. In the United Kingdom, for example, the Engineering and Physical Sciences Research Council and the Arts and Humanities Research Council have already introduced a set of principles for robot designers. These reinforce the position that robots are manufactured products, so that “humans, not robots, are responsible agents”.

Scientists, philosophers, funders and policy-makers should go a stage further and consider robot–robot and AI–AI interactions (AIonAI). Together, they should develop a proposal for an international charter for AIs, equivalent to the United Nations' Universal Declaration of Human Rights. This could help to steer research and development into morally considerate robotic and AI engineering.

National and international technological policies should introduce AIonAI concepts into current programmes aimed at developing safe AIs. We must engage with educational activities and research, and continue to raise philosophical awareness. There could even be an annual AIonAI prize for the 'most altruistically designed AI'.

Social scientists and philosophers should be linked to cutting-edge robotics and computer research. Technological funders could support ethical studies on AIonAI concepts in addition to funding AI development. Medical funders such as the Wellcome Trust already follow this model, supporting research on cutting-edge healthcare as well as on medical ethics and history.

Current and future AI and robotic research communities need to have sustained exposure to the ideas of AIonAI. Conferences focused on AIonAI issues could be a hub of research, guidelines and policy statements. The next generation of robotic engineers and AI researchers can also be galvanized to adopt AIonAI principles through hybrid degree courses. For example, many people who hope to get into UK politics take a course in PPE (politics, philosophy and economics) — an equivalent course for students with ambitions in robotics and AI could be CEP (computer science, engineering and philosophy).

We should extend Asimov's Three Laws of Robotics to support work on AIonAI interaction. I suggest a fourth law: all robots endowed with comparable human reason and conscience should act towards one another in a spirit of brotherhood and sisterhood.

Do not underestimate the likelihood of artificial thinking machines. Humankind is arriving at the horizon of the birth of a new intelligent race. Whether or not this intelligence is 'artificial' does not detract from the issue that the new digital populace will deserve moral dignity and rights, and a new law to protect them.

Nature 519, 391. doi:10.1038/519391a

Author information

Hutan Ashrafian is a lecturer and surgeon at Imperial College London, UK.


Comments

  1. Rodney Bartlett
    Hutan Ashrafian writes, "Humankind is arriving at the horizon of the birth of a new intelligent race". Could this AI apply not merely to robots but to the universe, existing at every scale from the cosmic to the quantum? This means even human consciousness would, if we can see past our egos, be nothing more than AI. To explain, may I propose an alternative to the probabilistic understanding of quantum mechanics - one using hidden variables which give exact predictions, in this case by the variables being base-2 mathematics. I want to propose an alternative to the current understanding of a probabilistic universe that originated from nothingness in a Big Bang. This alternative involves binary digits, Mobius strips and figure-8 Klein bottles (in the process, a Steady State universe will be proposed). While reading this, remember that bits are not only units of information but also pulses of energy. The information in BITS or Binary digITS is the result of electrical switching, with currents normally being either "on", usually represented by the binary digit “one” - or "off“, by “zero”. A binary digit can thus be viewed as a pulse of energy. String theory says everything's composed of tiny, one-dimensional strings that vibrate as clockwise, standing, and counterclockwise currents. We can visualize tiny, one dimensional binary digits of 1 and 0 (base 2 mathematics) forming currents in a two-dimensional program called a Mobius loop – or in 2 Mobius loops, clockwise currents in one loop combining with counterclockwise currents in the other to form a standing current. (The curving of what we call space-time sounds very strange, but I think it can actually be explained by modelling space-time’s construction on the Mobius strip that can be represented by giving a strip of paper a half-twist of 180 degrees before joining its ends.) Joining two Mobius strips (or Mobius bands) forms a four-dimensional Klein bottle. And each Klein bottle can become an observable (or “sub”) universe (figure-8 Klein bottles appear to have the most suitable shape to form subuniverses). This connection of the 2 Mobius strips can be made with the infinitely-long irrational and transcendental numbers. Such an infinite connection translates^ into an infinite number of TANGIBLE figure-8 Klein bottles which are, in fact, “subuniverses”. The infinite numbers make the cosmos as a whole* physically infinite, the union of space and time makes it eternal, and it's in a static or steady state because it’s already infinite. ^ The translation could be via photons and gravitons being ultimately composed of the binary digits of 1 and 0 encoding pi, e, √2 etc.; and matter particles [and even bosons like the Higgs, W and Z particles] being given mass by photons/gravitons interacting in matter particles’ “wave packets”. * (i.e. the cosmos beyond our 13.8-billion-year-old subuniverse, which is expanding and displacing parts of the universe beyond) Informally - if an object in space consists of one piece and does not have any "holes" that pass all the way through it, it is called simply-connected. A doughnut (and the figure-8 Klein bottle it resembles) is “holey” and not simply connected (it’s multiply connected). The universe appears to be infinite, being flat on the largest scales and curved on local scales (from far away, a scene on Earth can appear flat, yet the curves of hills become apparent up close). A flat universe that is also simply connected implies an infinite universe. 
So it seems the infinite universe cannot be composed of subunits called figure-8 Klein bottles (flat universes that are finite in extent include the torus and Klein bottle). But gaps in, or irregularities between, subuniverses shaped like figure-8 Klein bottles are "filled in" by binary digits in the same way that computer drawings can extrapolate a small patch of blue sky to make a sky that's blue from horizon to horizon. This makes space-time relatively smooth and continuous - and gets rid of holes, making Klein subunits feasible. The Klein bottle is a closed surface with no distinction between inside and outside (there cannot be other universes, neither a space multiverse nor a time multiverse *, outside ours – there’s only one universe). * English mathematical physicist Roger Penrose’s idea of cyclic time seems to be another version of the multiverse hypothesis. Space-time is an indissoluble union, and the traditional multiverse is focused on the spatial component while the Penrose version emphasizes the temporal (time may be nothing more than the electronic display of trillions of trillions of still states each second – what is called motion of the particles in space). Erwin Schrodinger (1887-1961), the Austrian theoretical physicist who achieved fame for his contributions to quantum mechanics and received the Nobel prize in 1933, had a lifelong interest in the Vedanta philosophy of Hinduism and this influenced Schrodinger’s speculations about the possibility of individual consciousness being only a manifestation of a unitary consciousness pervading the universe.
  2. Blaine Bateman
    @Hutan Ashrafian--Thank you for your thought-provoking comments. It might be worth taking a step back from the hypothetical situation of "artificial intelligence" and considering the already real situation of machine autonomy. I think that some of the philosophical and ethical questions easily overlap from one to the other. So, we can ask, "What are the ethics of using an autonomous killing machine?". What if it makes a mistake? Is that a bigger ethical transgression than dropping "dumb" (as in, "not smart") bombs and accidentally causing innocent casualties? I am not competent to go very far in such a dialogue, but I find it worthwhile to think about. Thanks again for your thoughts.
  3. Peter Marchese
    I object to the term "legitimate war": war can never be legitimate!
  4. Mary Finelli
    "Animals that exhibit thinking behaviour are already afforded rights" Aside from humans, what rights are afforded to animals? Tragically and wrongly, none. It's telling that people are so interested in protecting themselves from their creations (which in all probability will not be sentient) yet there is such little regard for -and full on carnage and enslavement of- our fellow sentient species, including to the point of extinction for increasing numbers of them.
  5. Blaine Bateman
    Hi Mary--I would draw the line at least a bit farther towards the animals, at least in a few cases. Municipalities in the US often have laws prohibiting animal cruelty. I do agree that protections for animals are pretty minimal. I think that prevailing public sentiment and taboos regulate human behavior towards animals as much as laws. Certainly there is a significant group of US citizens who are concerned with animal rights. It is also worth noting that societal norms are very different in different countries around the world with regard to animal rights and treatment.
  6. John Bashinski
    There is a significant, serious AGI risks community, with a reasonably sophisticated internal discourse that's been going on for many years. That discourse has gone way, way beyond the ideas in ancient science fiction, and way, way beyond the sort of soft thinking that's in this piece. The community may be imperfect, but it addresses the issues far more meaningfully than this kind of puffery. If Nature wants to engage with these issues, perhaps Nature should get commentary from the people who've spent some time thinking about them rigorously. And by this I do not mean narrow AI researchers who dismiss the issues out of hand without themselves having given them any thought. I suggest trying MIRI (intelligence.org) as a starting point. By the way, nobody, but nobody, in the relevant community takes Asimov's laws, or any other such fuzzy, unimplementable rules, seriously. They may appear in "most discussions", but they do not appear in any discussions by serious scholars of the matter, nor do they appear in any discussions that are ever going to produce anything useful. Asimov didn't take them seriously himself; they were a literary device, and many of the stories about them are basically about how unworkable they'd be... even if you could somehow build them into an AI. Furthermore, you can't worry about rule content in isolation. Building ANY set of rules into even a near-human-level AGI is an extremely difficult problem, almost certainly even if you've solved the already tremendously difficult problem of building the AGI itself. It may very well be harder than the problem of formulating the rules themselves, and it may also put serious restrictions on their content. Not to mention other issues like conflating "sentience" with "intelligence"...
  7. Kim Solez
    John Bashinski, with respect, this approach of author bashing and name calling is not going to get you anywhere here. Nature published this article because it contains valuable new ideas, ideas that most readers of Nature find refreshing and useful, with the promise of advancing our understanding of an important area of science. The fact that this important insight comes from a medical doctor, a surgeon, is particularly exciting. This is a very fair environment. Let me turn around your complaint about Nature not publishing work from MIRI and challenge you at MIRI to write and submit something to Nature good enough to be published here. I think you will find great difficulty doing that. You will learn a lot from the attempt, and it will cause you to reexamine the "internal discourse" you speak about.
  8. John Bashinski
    First, I don't work at MIRI, don't speak for MIRI, and have never worked on any project with anybody from MIRI. Ten or fifteen years ago I had some casual contact with some of MIRI's founders, and I responded to something on their blog one time. I think I might have given them 50 or 100 bucks once. That's the extent of my association with them. And if you have an issue with MIRI, there are other places to go to talk to people who actually have something useful to say. [On edit: I also don't consider myself to be one of the relevant community of scholars. It's just that as a generally informed person I'm at least aware that such a community exists.] Second, these ideas are not new. There is absolutely nothing in that piece that wasn't said better decades ago. And most of it was then thoroughly debunked. Third, this is not a scientific paper; it's an op-ed in the general interest news section, which makes it journalism, not scientific publishing. In journalism, you're expected to SEEK OUT the new and interesting material, not wait around for people to submit it to you. If Nature wants to bring people "valuable new ideas" in this section, then Nature needs to take the trouble to go out and find them. I sure HOPE this wouldn't have been publishable in the main "journal" part of Nature.
  9. Ezrad Lionel
    Beautiful Man.
