Lethal Autonomous Weapons May Soon Make Life-and-Death Decisions – on Their Own

While militaries are turning to machine-learning algorithms to identify threats before they emerge, they have yet to hand over control to the machines. But Richardson says that isn’t too far a leap.

“The move towards killing that is intensely predictive is certainly happening, and it’s a very scary development,” he says. “We would have the technological capacity in many instances to take human decision making out of the process and to push those predictions to the forefront.

“We haven’t necessarily granted decision-making power to those technologies to fire a weapon, but the threshold that needs to be crossed is not so much a technological one at this point as a moral or strategic one,” he says.

Regulating Lethal Autonomous Weapons
Richardson says global governance and arms control are among several measures we need to take to restrict lethal autonomous weapons.

“While that didn’t stop the development of nuclear weapons, widespread public opposition could help to halt the development of lethal autonomous weapon systems, or at least slow the production of the technology,” he says.

“But at the moment, lethal autonomous weapon systems are not regulated under the Convention on Certain Conventional Weapons. In fact, there’s not even an agreed definition of what a lethal autonomous weapon system is.”

He says Australia is ‘infrastructurally complicit’ in the development of such technologies through its partnerships with the US military, and could push other countries to be more accountable.

“We might claim that there’s little we can do to influence the main players in this space, like the US, China and Russia. Nevertheless, Australia is part of that world system.

“While it’s happening ‘over there’ in terms of kinetic violence, we’re an integral part of this,” he says. “The kinds of methods of surveillance and control [that] start in military spaces often expand elsewhere, and we’ve certainly seen that since 9/11 with surveillance in particular.”

Richardson believes the widespread public opposition to nuclear weapons, and more recently to facial recognition software, shows that it is still possible to slow the development of lethal autonomous weapons.

“We’ve gone from widespread adoption and development of facial recognition technologies to major tech companies placing a moratorium on their development. That’s a result of sustained public pressure, but also critical pressure from academics and from advocacy groups, who have shown … they’re deeply problematic and result in greater injustice rather than more justice and safety.

“If we can create that push around facial recognition, perhaps we can also push against lethal autonomous weapons. They rely on similar technologies of automatic detection and recognition, but the choices they make are far more disastrous than whether a person gets a job or is allowed inside a venue.

“They might be choices over life and death.”