Perspective: Killer robots

Coming Soon to a Battlefield: Robots That Can Kill

Published 3 September 2019

A Marine Corps program called Sea Mob aims to develop cutting-edge technology that would allow vessels to undertake lethal assaults without a direct human hand at the helm.

Last October, a secretive exercise—part of a Marine Corps program called Sea Mob—was meant to demonstrate that vessels equipped with cutting-edge technology could soon undertake lethal assaults without a direct human hand at the helm. It was successful: Sources familiar with the test described it as a major milestone in the development of a new wave of artificially intelligent weapons systems soon to make their way to the battlefield.

Zachary Fryer-Biggs writes in The Atlantic that lethal, largely autonomous weaponry isn’t entirely new: A handful of such systems have been deployed for decades, though only in limited, defensive roles, such as shooting down missiles hurtling toward ships. But with the development of AI-infused systems, the military is now on the verge of fielding machines capable of going on the offensive, picking out targets and taking lethal action without direct human input.

“So far, U.S. military officials haven’t given machines full control, and they say there are no firm plans to do so,” Fryer-Biggs writes. “Many officers—schooled for years in the importance of controlling the battlefield—remain deeply skeptical about handing such authority to a robot. Critics, both inside and outside of the military, worry about not being able to predict or understand decisions made by artificially intelligent machines, about computer instructions that are badly written or hacked, and about machines somehow straying outside the parameters created by their inventors. Some also argue that allowing weapons to decide to kill violates the ethical and legal norms governing the use of force on the battlefield since the horrors of World War II.”

But if the drawbacks of using artificially intelligent war machines are obvious, so are the advantages. Fryer-Biggs notes that humans generally take about a quarter of a second to react to something we see—think of a batter deciding whether to swing at a baseball pitch. The machines we have built now surpass us, at least in processing speed. Earlier this year, for example, researchers at Nanyang Technological University, in Singapore, focused a computer network on a data set of 1.2 million images; the computer then tried to identify all the pictured objects in just 90 seconds, or 0.000075 seconds an image.
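The arithmetic behind that comparison is straightforward to check. A minimal sketch, using only the figures quoted above (the variable names and the rough quarter-second reaction figure are illustrative):

```python
# Back-of-the-envelope check of the figures cited above: a human takes
# roughly a quarter of a second to react to something seen, while the
# Nanyang Technological University network processed 1.2 million images
# in about 90 seconds.

HUMAN_REACTION_S = 0.25      # rough human visual reaction time (s)
TOTAL_IMAGES = 1_200_000     # size of the image data set
TOTAL_TIME_S = 90.0          # total processing time for the network (s)

per_image_s = TOTAL_TIME_S / TOTAL_IMAGES
print(f"Per-image time: {per_image_s:.6f} s")        # 0.000075 s
print(f"About {HUMAN_REACTION_S / per_image_s:,.0f}x faster than "
      f"a single human reaction")                    # ~3,333x
```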