“Killer Robots” Are Coming, and U.N. Is Worried
Human rights specialist lays out legal, ethical problems of military weapons systems that attack without human guidance.
Long the stuff of science fiction, autonomous weapons systems, known as “killer robots,” are poised to become a reality, thanks to the rapid development of artificial intelligence.
In response, international organizations have been intensifying calls for limits or even outright bans on their use. The U.N. General Assembly in November adopted the first-ever resolution on these weapons systems, which can select and attack targets without human intervention.
To shed light on the legal and ethical concerns they raise, the Gazette interviewed Bonnie Docherty, lecturer on law at Harvard Law School’s International Human Rights Clinic (IHRC), who attended some of the U.N. meetings. Docherty is also a senior researcher in the Arms Division of Human Rights Watch. This interview has been condensed and edited for length and clarity.
What exactly are killer robots? To what extent are they a reality?
Killer robots, or autonomous weapons systems to use the more technical term, are systems that choose a target and fire on it based on sensor inputs rather than human inputs. They have been under development for a while but are rapidly becoming a reality. We are increasingly concerned about them because weapons systems with significant autonomy over the use of force are already being used on the battlefield.
What are those? Where have they been used?
There’s a fine line between what counts as a killer robot and what doesn’t. Some systems that were used in Libya and others that have been used in [the ethnic and territorial conflict between Armenia and Azerbaijan over] Nagorno-Karabakh show significant autonomy in the sense that they can operate on their own to identify a target and to attack.
They’re called loitering munitions, and they are increasingly using autonomy that allows them to hover above the battlefield and wait to attack until they sense a target. Whether systems are considered killer robots depends on specific factors, such as the degree of human control, but these weapons show the dangers of autonomy in military technology.
What are the ethical concerns posed by killer robots?
The ethical concerns are very serious. Delegating life-and-death decisions to machines crosses a red line for many people. It would dehumanize violence and boil down humans to numerical values.