WORLD VIEW

AI researchers should help with some military work

The ethical choice for Google’s artificial-intelligence engineers is to engage in select national-security projects, not to shun them all, says Gregory C. Allen.
Gregory C. Allen is an adjunct fellow at the Center for a New American Security in Washington DC.

In January, Google chief executive Sundar Pichai said that artificial intelligence (AI) would have a “more profound” impact than even electricity. He was following a long tradition of corporate leaders claiming their technologies are both revolutionary and wonderful.

The trouble is that revolutionary technologies can also revolutionize military power. AI is no exception. On 1 June, Google announced that it would not renew its contract supporting a US military initiative called Project Maven. This project is the military’s first operationally deployed ‘deep-learning’ AI system, which uses layers of processing to transform data into abstract representations — in this case, to classify images in footage collected by military drones. The company’s decision to withdraw came after roughly 4,000 of Google’s 85,000 employees signed a petition demanding that the company never build “warfare technology”.

Such recusals create a moral hazard of their own. Incorporating advanced AI technology into the military is as inevitable as incorporating electricity once was, and this transition is fraught with ethical and technological risks. It will take input from talented AI researchers, including those at companies such as Google, to help the military to stay on the right side of ethical lines.

Last year, I led a study on behalf of the US Intelligence Community, showing that AI’s transformative impacts will cover the full spectrum of national security. Military robotics, cybersecurity, surveillance and propaganda are all vulnerable to AI-enabled disruption. The United States, Russia and China all expect AI to underlie future military power, and the monopoly enjoyed by the United States and its allies on key military technologies, such as stealth aircraft and precision-guided weapons, is nearing an end.

I sympathize with researchers, both academic and corporate, who are confronted with the difficult question of whether to assist the armed forces. On the one hand, the mission of keeping the United States and its allies safe, free and strong enough to deter potential threats is as vital as ever. Helping the military to incorporate new technology can simultaneously reduce dangers to soldiers and civilians trapped in combat zones and bolster national security.

On the other hand, researchers who help the military sometimes regret it. Some of the scientists who worked on the Manhattan Project, which developed the nuclear bombs used in the Second World War, later concluded that the world would have been better off without that research. Many uses of AI are ethically and legally dubious — witness the troubles with software used to aid policing and sentencing.

Fortunately, US AI researchers are free to choose their projects, and can influence their employers. This is not the case for their counterparts in China, where the government can compel companies or individuals to work on national-security efforts.

Even if researchers decline to participate in a project, however, they cannot truly opt out of the national-security consequences. Many hobby drone manufacturers were aghast to learn that their products were being used by the Islamist terrorist group ISIS to drop explosives on US troops. No doubt many researchers working on driverless cars have not fully considered the implications of this technology for driverless tanks or car bombs. But ignoring potential applications won’t keep them from happening.

Moreover, AI scientists publish much of their research openly; published algorithms, code libraries and training data sets are building blocks available to every military, and benign projects can enable malign applications. Blanket refusals by technology companies to work with US national-security organizations are counterproductive, even when other firms step in to do the work. The country’s AI researchers need to hear from the military about the security consequences of their technologies, and the military needs broad expert advice to apply AI ethically and effectively.

This is not to say that AI researchers should blithely support every project the US military imagines. Some proposals will be unethical. Some will be stupid. Some will be both. Where they see such proposals, researchers should oppose them.

But there will be AI projects that genuinely advance national security and that do so legally and ethically. Take, for example, work by the US Defense Advanced Research Projects Agency on countering video and audio forgeries enabled by AI. The AI research community should consider working on such projects, or at least refrain from demonizing those who do.

It pays to remember bacteriologist Theodor Rosebury, who researched bioweapons for the US military in the 1940s. After the Second World War, Rosebury confined his work to defensive research and argued that defence should be the sole aim of US policy on biological weapons. That position was eventually enshrined in the Biological Weapons Convention of 1972.

Which brings us back to Google and Project Maven.

For years, I have been among those advocating for the US military to increase its use of advanced AI technologies and to do so in a cautious and ethically conscious manner. Project Maven, which performs a non-safety-critical task that is not directly connected to the use of force, is exactly what I had hoped for. The system uses AI computer vision to automate the most mundane aspect of drone-video analysis: counting the people, vehicles and buildings that appear in footage. The companies involved deserve credit, not criticism, for their support.

The all-or-nothing stance is a dangerous oversimplification. Corporate and academic AI experts have a unique and vital opportunity to help the military incorporate AI technology in a way that ethically bolsters both national and international security. They should take it.

doi: 10.1038/d41586-018-05364-x