What Killer Robots Mean for the Future of War

Although it is designed for missile defense, the Iron Dome could still kill people by accident. But that risk is seen as acceptable in international politics because the system has a reliable record of protecting civilian lives.

There are also AI-enabled weapons designed to attack people, from robot sentries to the loitering kamikaze drones used in the Ukraine war. LAWs are already here. So, if we want to influence how LAWs are used, we need to understand the history of modern weapons.

The Rules of War
International agreements, such as the Geneva Conventions, establish rules of conduct for the treatment of prisoners of war and civilians during conflict. They are one of the few tools we have to control how wars are fought. Unfortunately, the use of chemical weapons by the US in Vietnam, and by Russia in Afghanistan, is proof that these measures aren't always successful.

Worse is when key players refuse to sign up. The International Campaign to Ban Landmines (ICBL) has been lobbying politicians since 1992 to ban mines and cluster munitions (which randomly scatter small bombs over a wide area). In 1997 the Ottawa Treaty included a ban on these weapons, which 122 countries signed. But the US, China and Russia didn't buy in.

Landmines have injured and killed at least 5,000 soldiers and civilians per year since 2015, and as many as 9,440 people in 2017. The Landmine and Cluster Munition Monitor 2022 report said:

Casualties…have been disturbingly high for the past seven years, following more than a decade of historic reductions. The year 2021 was no exception. This trend is largely the result of increased conflict and contamination by improvised mines observed since 2015. Civilians represented most of the victims recorded, half of whom were children.

Despite the best efforts of the ICBL, there is evidence that both Russia and Ukraine (a signatory to the Ottawa Treaty) have used landmines during the Russian invasion of Ukraine. Ukraine has also relied on drones to guide artillery strikes and, more recently, to carry out "kamikaze attacks" on Russian infrastructure.

Our Future
But what about more advanced AI-enabled weapons? The Campaign to Stop Killer Robots lists nine key problems with LAWs, focusing on the lack of accountability and the inherent dehumanization of killing that comes with it.

While this criticism is valid, a full ban of LAWs is unrealistic for two reasons. First, much like mines, Pandora's box has already been opened. Second, the lines between autonomous weapons, LAWs and killer robots are so blurred that it's difficult to distinguish between them. Military leaders would always be able to find a loophole in the wording of a ban and sneak killer robots into service as defensive autonomous weapons. They might even do so unknowingly.

We will almost certainly see more AI-enabled weapons in the future. But this doesn't mean we have to look the other way. More specific and nuanced prohibitions would help keep our politicians, data scientists and engineers accountable.

For example, by banning:

· black box AI: systems where the user has no information about the algorithm beyond its inputs and outputs

· unreliable AI: systems that have been poorly tested (such as in the military blockade example mentioned previously).

And you don’t have to be an expert in AI to have a view on LAWs. Stay aware of new military AI developments. When you read or hear about AI being used in combat, ask yourself: is it justified? Is it preserving civilian life? If not, engage with the communities that are working to control these systems. Together, we stand a chance of preventing AI from doing more harm than good.

Jonathan Erskine is a PhD Student in Interactive AI, University of Bristol. Miranda Mowbray is Lecturer in Interactive AI, University of Bristol. This article is published courtesy of The Conversation.