Lethal Autonomous Weapons and World War III: It’s Not Too Late to Stop the Rise of “Killer Robots”

In July 1945, a group of physicists secretly petitioned the US government with a warning:

If after this war a situation is allowed to develop in the world which permits rival powers to be in uncontrolled possession of these new means of destruction, the cities of the United States as well as the cities of other nations will be in continuous danger of sudden annihilation. All the resources of the United States, moral and material, may have to be mobilized to prevent the advent of such a world situation …

Billions of dollars have since been spent on nuclear arsenals that maintain the threat of mutually assured destruction, the “continuous danger of sudden annihilation” that the physicists warned about in July 1945.

A Warning to the World
Six years ago, thousands of my colleagues issued a similar warning about a new threat. Only this time, the petition wasn’t secret. The world wasn’t at war. And the technologies weren’t being developed in secret. Nevertheless, they pose a similar threat to global stability.

The threat comes this time from artificial intelligence, and in particular the development of lethal autonomous weapons: weapons that can identify, track and destroy targets without human intervention. The media often like to call them “killer robots”.

Our open letter to the UN carried a stark warning.

The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable. The endpoint of such a technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow.

Strategically, autonomous weapons are a military dream. They let a military scale its operations unhindered by manpower constraints. One programmer can command hundreds of autonomous weapons. An army can take on the riskiest of missions without endangering its own soldiers.

Nightmare Swarms
There are many reasons, however, why the military’s dream of lethal autonomous weapons will turn into a nightmare. First and foremost, there is a strong moral argument against killer robots. We give up an essential part of our humanity if we hand to a machine the decision of whether a person should live or die.

Beyond the moral arguments, there are many technical and legal reasons to be concerned about killer robots. One of the strongest is that they will revolutionize warfare. Autonomous weapons will be weapons of immense destruction.

Previously, if you wanted to do harm, you had to have an army of soldiers to wage war. You had to persuade this army to follow your orders. You had to train them, feed them and pay them. Now just one programmer could control hundreds of weapons.

In some ways lethal autonomous weapons are even more troubling than nuclear weapons. To build a nuclear bomb requires considerable technical sophistication. You need the resources of a nation state, skilled physicists and engineers, and access to scarce raw materials such as uranium and plutonium. As a result, nuclear weapons have not proliferated greatly.

Autonomous weapons require none of this, and if produced they will likely become cheap and plentiful. They will be perfect weapons of terror.

Can you imagine how terrifying it will be to be chased by a swarm of autonomous drones? Can you imagine such drones in the hands of terrorists and rogue states with no qualms about turning them on civilians? They will be an ideal weapon with which to suppress a civilian population. Unlike humans, they will not hesitate to commit atrocities, even genocide.

Time for a Treaty
We stand at a crossroads on this issue. It needs to be seen as morally unacceptable for machines to decide who lives and who dies. And we need the diplomats at the UN to negotiate a treaty limiting the use of lethal autonomous weapons, just as we have treaties limiting chemical, biological and other weapons. In this way, we may be able to save ourselves and our children from this terrible future.

Toby Walsh is Professor of AI and Research Group Leader at UNSW.

This article is published courtesy of The Conversation.