What Killer Robots Mean for the Future of War

By Jonathan Erskine and Miranda Mowbray

Published 10 January 2023

As AI weapons become increasingly sophisticated, public concern is growing over fears about lack of accountability and the risk of technical failure.

You might have heard of killer robots, slaughterbots or terminators – officially called lethal autonomous weapons (LAWs) – from films and books. The idea of super-intelligent weapons running rampant is still science fiction, but as AI weapons become increasingly sophisticated, public concern is growing over the lack of accountability and the risk of technical failure.

Already we have seen how supposedly neutral AI has produced sexist algorithms and inept content moderation systems, largely because its creators did not understand the technology. But in war, these kinds of misunderstandings could kill civilians or wreck negotiations.

For example, a target recognition algorithm could be trained to identify tanks from satellite imagery. But what if all of the images used to train the system featured soldiers in formation around the tank? It might mistake a civilian vehicle passing through a military blockade for a target.
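The failure mode described above is a spurious correlation: if soldiers appear in every training image of a tank, a model can learn "soldiers nearby" as its cue instead of the tank itself. The toy sketch below (hypothetical data and a deliberately simplistic single-feature rule learner, not any real targeting system) shows how such a shortcut can be learned and then misfire on a civilian vehicle at a manned blockade.

```python
# Toy illustration of a spurious correlation in biased training data.
# Each "image" is reduced to two binary features: whether it contains
# a tank, and whether soldiers are standing nearby. All data is invented.

def train_single_feature_rule(samples):
    """Pick the single feature that best predicts the 'is_target' label.

    This stands in for what a larger model may implicitly learn when a
    spurious feature is perfectly correlated with the label. Ties are
    broken by feature order, so nothing guarantees the causal feature
    ("tank") wins over the spurious one ("soldiers").
    """
    features = samples[0]["features"].keys()
    best_feature, best_accuracy = None, -1.0
    for f in features:
        correct = sum(1 for s in samples
                      if s["features"][f] == s["is_target"])
        accuracy = correct / len(samples)
        if accuracy > best_accuracy:
            best_feature, best_accuracy = f, accuracy
    return best_feature

# Biased training set: every tank image also shows soldiers in formation,
# so both features predict the label perfectly.
training = [
    {"features": {"soldiers": True,  "tank": True},  "is_target": True},
    {"features": {"soldiers": True,  "tank": True},  "is_target": True},
    {"features": {"soldiers": False, "tank": False}, "is_target": False},
    {"features": {"soldiers": False, "tank": False}, "is_target": False},
]

rule = train_single_feature_rule(training)

# A civilian car passing a manned blockade: soldiers present, no tank.
civilian_car_at_blockade = {"soldiers": True, "tank": False}

print("learned rule:", rule)                              # "soldiers"
print("flags civilian car:", civilian_car_at_blockade[rule])  # True
```

Because both features score 100% on the biased data, the learner settles on whichever it sees first; here that is "soldiers", and the civilian car is wrongly flagged. Real deep-learning models fail the same way, just less visibly, which is why the composition of training data matters so much.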

Why Do We Need Autonomous Weapons?

Civilians in many countries (such as Vietnam, Afghanistan and Yemen) have suffered because of the way global superpowers build and use increasingly advanced weapons. Many people would argue they have done more harm than good, most recently pointing to the Russian invasion of Ukraine early in 2022.

In the other camp are people who say a country must be able to defend itself, which means keeping up with other nations’ military technology. AI can already outsmart humans at chess and poker, and it outperforms humans in the real world too. For example, Microsoft claims its speech recognition software has an error rate of 1%, compared to the human error rate of around 6%. So it is hardly surprising that armies are slowly handing algorithms the reins.

But how do we avoid adding killer robots to the long list of things we wish we had never invented? First of all: know thy enemy.

What Are Lethal Autonomous Weapons (LAWs)?

The US Department of Defense defines an autonomous weapon system as: “A weapon system that, once activated, can select and engage targets without further intervention by a human operator.”

Many combat systems already fit this criterion. The computers on drones and modern missiles have algorithms that can detect targets and fire at them with far more precision than a human operator. Israel’s Iron Dome is one of several active defense systems that can engage targets without human supervision.