Robokillers: Lethal Autonomous Weapons May Soon Make Life-and-Death Decisions – on Their Own

Published 14 August 2020

With drone technology, surveillance software, and threat-predicting algorithms, future conflicts could computerize life and death. “It’s a big question – what does it mean to hand over some of the decision making around violence to machines, and everybody on the planet will have a stake in what happens on this front,” says one expert.

An armed weapons system capable of making decisions sounds like it’s straight out of a Terminator movie. But once lethal autonomous weapons are out in the world, there could be no turning back.

“It’s very possible that we can’t put the genie back in the bottle with lethal autonomous weapon systems,” says Dr. Michael Richardson, who was recently named a Top 5 Humanities Researcher by the ABC for his work on drone technologies.

Richardson, who researches political violence and emerging technologies, says development of lethal autonomous weapons is accelerating rapidly. Although these systems are not yet deployed operationally, he says there are several reasons why we should all be concerned.

“Most of the major militaries around the world are developing lethal autonomous weapons of different kinds, sometimes even in partnership with big tech companies,” says the Senior Research Fellow from UNSW Arts & Social Sciences.

“It’s a big question – what does it mean to hand over some of the decision making around violence to machines, and everybody on the planet will have a stake in what happens on this front.”

Military technologies that are close to being autonomous are already in the field. With the help of drones, distant battles can increasingly be fought from control stations full of screens and interfaces, he says.

The real transformation, however, is in ‘predicting’ threats – and eliminating them – before they arise.

“With the U.S. military, in particular, lethal drone strikes are carried out based on a determination of whether someone or some group of people might become a threat – that is, they might do [harm] in the future,” he says.

“If your threat is someone pointing a gun in your face, the threat is pretty imminent. But someone driving a car around in rural Afghanistan with a rice cooker in the back and a bag of nails … you might have some data points that would suggest the person is going to create an improvised explosive device that could put troops at risk.

“In that instance, the threat is several steps removed from what you’re observing. The person might also just have a new rice cooker and a broken fence to fix. So, you have the potential to kill someone based on a prediction about what might come about, rather than based on anything they’re specifically doing at the time.”