
A president arguing that his nation isn't at war because his forces are using only robotic weapons. An arms-control meeting forlornly trying to ban the development of armed autonomous robots. Criminals using tiny robotic helicopters in a jewellery heist. These are not tales from an Isaac Asimov novel; they are real events that happened within the past year.

From gunpowder to the atomic bomb to robots, history is full of weapons technologies so disruptive that they change the rules. These deadly applications, or 'killer apps', often begin in the military sector but have ripple effects beyond their intended uses. The Manhattan Project to develop the first atomic bomb was at its core a military-funded experiment to bundle the greatest explosive power into the smallest delivery package possible. But that research opened up entirely new areas of physics, revolutionized the energy industry and transformed world politics.

An RQ-4 Global Hawk unmanned aerial vehicle before a mission in southwest Asia in November 2010. Credit: A. KIN/US AIR FORCE

What is different today is the speed with which our technology can outpace our ethical and policy responses to it. Astounding advances grab the headlines so frequently that the public has become numb to their significance — whether it is robotic planes, directed-energy weapons such as high-energy lasers, or 'electric skin', tiny sensors that are applied to the body like tattoos.

We are “giants” when it comes to technology, but “ethical infants” when it comes to understanding its consequences, as US Army general Omar Bradley remarked in 1948. Bradley was referring to nuclear research, but as the pace of technological change takes off, that gulf — between our sophisticated inventions and our crude grasp of their consequences — continues to widen. We need to start bridging it.

I, Robot

Robotics is an excellent case study of this gulf. Over the past ten years, the United States and 45 other nations have gone from looking at robots as mere science fiction to using them in their military forces. For example, the US military used only a handful of unmanned aerial systems in the 2003 invasion of Iraq, but now has more than 7,000 unmanned aerial systems and 12,000 unmanned ground systems in its inventory. As a sign of things to come, the US Air Force now trains more unmanned-systems operators than fighter and bomber pilots combined.

Left, new legs and eyes: a mock-up of the new BigDog, intended to carry equipment for ground troops; right, a US soldier prepares an RQ-11 Raven in Iraq, 2006. Credit: BOSTON DYNAMICS/WENN.COM/NEWSCOM (LEFT); C. SCHULZE/DEUTSCHE PRESSE-AGENTUR/NEWSCOM

The effect of this shift goes beyond pilots' lives saved. US President Barack Obama recently argued that he did not need congressional approval for military operations in Libya because they were carried out by unmanned aerial systems such as the MQ-1 Predator and the MQ-9 Reaper. In Pakistan, US unmanned systems have made more than 250 strikes against suspected terrorists since 2004. Notably, these strikes are carried out by CIA drones rather than military ones, meaning even less oversight. The number of US drone strikes last year alone was several times larger than it was in the opening round of the Kosovo war, but — unlike that war — there has been no congressional authorization and little public debate.


The growth in non-military uses of robotics, especially those developed originally for the military, also raises ethical issues. Police departments in cities such as Miami, Florida, and Ogden, Utah, have sought special licences to operate unmanned aerial surveillance systems. This past spring, Congress legislated that US civilian airspace should be opened to allow more widespread use of such systems by 2015. This will mean a boom for the robotics industry, but it will also raise new challenges to legal concepts such as privacy or probable cause for search or arrest. Police once needed warrants if they wanted to peek over citizens' fences; now they have the technology to do it from above, over an entire city. As one federal district court judge told me, this is “a Supreme Court case waiting to happen”.

History shows us that neglecting to address these issues of law and ethics can have immense consequences. Using a submarine to attack shipping, for example, was once science fiction. When it became reality, the dispute over 'fair use' of such technology drew the United States into the First World War, ultimately leading to the nation's rise as a superpower.

Code of ethics

Today, the US Air Force has argued that its unmanned spy planes, if targeted by radar, have the same right to defend themselves with weapons fire as its pilots do. This conferral on unmanned systems of the right to pre-emptive 'self'-defence makes sense from one perspective, but it could also be the makings of a legal dispute that escalates into an international crisis, as well as a huge (and probably unintentional) first step for the cause of robots' rights.

The importance and urgency of such complex challenges demands cross-disciplinary discussion — among technology researchers and manufacturers, customers and users, regulators and policy-makers, social scientists and philosophers. But traversing the boundaries between those sectors still feels like crossing between foreign lands.

A major reason for this is insularity. Academic journals in each field focus inward, professional conferences are attended only by the like-minded, and those who attempt to straddle disciplines or engage the public are viewed as 'less serious'. In robotics, a striking example of this disconnect comes from a survey of the 25 stakeholders who most shape the field, conducted by the field's professional trade group, the Association for Unmanned Vehicle Systems International, based in Arlington, Virginia. Asked whether they foresaw that the continued development of unmanned systems might bring 'any social, ethical, or moral problems', 60% of these leaders answered with a simple 'No'. I experienced this head-in-the-sand attitude when a professor sent me an angry e-mail after a talk I gave at a leading engineering school. He chastised me for “troubling” his students “by asking them to think about the ethics of their work”.

Clockwise from left: a device that fits in a backpack; an MQ-1 Predator; information gathering in Iraq; TALON robots can be used for bomb disarming or combat. Credit: US ARMY; J. LOCK/410TH AIR EXPEDITIONARY WING/ NEWSCOM; J. Howe/WPN/UPPA/PHOTOSHOT; M. Contreras/US NAVY

In turn, our policy leaders are ill-prepared for the questions and debates that inevitably follow technological developments. Those responsible for funding and deployment decisions often fail to understand even the basics of the technology they're considering. I witnessed this when a senior adviser to the US defence secretary expressed surprise to me that the United States was using “so many” robotic systems (even though he drove the budget that paid for them), and then told me how he thought a three-dimensional version of the Internet might be possible “one day”. He spoke about virtual worlds as if they were an exotic concept like time travel, apparently unaware that they already exist.

Similarly, when I gave a talk last year to the strategy office at the US Pentagon on some of the military, policy, legal and ethical ramifications of the growing use of robotics, one senior officer asked me: “Who is thinking about all this stuff?” I replied: “Everyone thinks it's you!”

Brave new world

It doesn't have to be this way. Our academic training still follows the specialized model. Top researchers in artificial intelligence may go through their entire university educations without taking a single class on ethics, history or law.

In turn, there are public-policy undergraduates, international-law professors and philosophy doctoral students writing essays, articles and dissertations on military drones without having seen one, learned how it works or even interviewed anyone who has operated one.

No future scientist or policy-maker should graduate so ill-equipped. We can and must start training students to engage with complex multidisciplinary problems, by requiring those in the sciences to take courses in the humanities and vice versa.

At the public-policy level, we need a new approach to handling major programmes of technology research, one that always includes an exploration of their broader ramifications outside the lab. If environmental-impact surveys are mandatory before construction can begin on new laboratory buildings, why are no similar 'ethical, legal, and social implications' (ELSI) studies required of the research that will go on inside them? A better model is the one used by the Human Genome Project, which set aside up to 5% of its annual budget for ELSI studies.

We have to be realistic about what such studies can achieve. They don't solve all the tough problems, but they can provoke debates that will help us identify the true issues. Today, for example, scientists recognize that their work in genetic testing has implications in areas such as health care or privacy, and policy-makers are aware that the field is potentially powerful. But no one is wasting time on unrealistic arguments about, for example, cloning Super Soldiers. The debates on the implications of genetic testing are not always resolved, but the tenor and content of the discussion — in both the lab and the policy spheres — are much improved. Yet genetics is the exception to the rule.

By comparison, those working on killer apps in robotics and other cutting-edge research fields should be asking themselves questions such as: from whom is it ethical to accept research and development money? What attributes, such as weaponization, autonomy or intelligence, should I design into my technology? Which organizations and individuals should be allowed to buy and use my technology? Who should own or be able to access information gathered by my technology? If someone is harmed in association with the technology, who is responsible, and how is this determined?

Yet, unlike future medical professionals, researchers seeking answers to these questions receive little training in ethics at graduate school and have no professional code or support structure to turn to. Policy-makers and legislators should also be better prepared to deal with the issues posed by taking a killer app beyond the lab.

We must get cracking. More killer apps are coming, and they'll bring a host of grand possibilities and perils with them. Mathematician-turned-satirist Tom Lehrer once wrote: “'Once the rockets are up, who cares where they come down? That's not my department,' says Wernher von Braun.”

Until we start learning how to wrestle with the implications of our technologies, the joke will be on the rest of us.