ARGUMENT: Adding AI to Autonomous Weapons Increases Risks to Civilians in Armed Conflict

Published 26 March 2021

Earlier this month, a high-level, congressionally mandated commission released its long-awaited recommendations for how the United States should approach artificial intelligence (AI) for national security. The recommendations were part of a nearly 800-page report from the National Security Commission on AI (NSCAI) that advocated for the use of AI but also highlighted important conclusions on key risks posed by AI-enabled and autonomous weapons, particularly the dangers of unintended escalation of conflict. Neil Davison and Jonathan Horowitz write in Just Security that the commission identified these risks as stemming from several factors, including system failures, unknown interactions between these systems in armed conflict, challenges in human-machine interaction, as well as an increasing speed of warfare that reduces the time and space for de-escalation.

They add:

These same factors also contribute to the inherent unpredictability in autonomous weapons, whether AI-enabled or not. From a humanitarian and legal perspective, the NSCAI could have explored in more depth the risks such unpredictability poses to civilians in conflict zones and to international law. Autonomous weapons are generally understood, including by the United States and the ICRC, as those that select and strike targets without human intervention; in other words, they fire themselves. This means the user of an autonomous weapon does not choose a specific target and so they do not know exactly where (or when) a strike will occur, or even specifically who (or what) will be killed, injured or destroyed.

AI-enabled autonomous weapons — particularly those that would “learn” what to target — complicate matters even further. Developers may not be able to predict, understand, or explain what happens within the machine learning “black box.” So how would users of the weapon verify how it will function in practice, or assess when it might not function as intended? This challenge is not unique to the United States or the types of technologies it is pursuing. It is a challenge fundamental to the international debate on AI-enabled and autonomous weapons.

Davison and Horowitz note that the unpredictability of autonomous weapons at worst undermines the human decision-making process and at best complicates it, including by potentially speeding up decisions beyond human control.

The NSCAI recommends that the United States exclude the use of autonomous nuclear weapons. Almost everyone agrees on this, but the question remains: What other constraints on autonomous weapons are needed to address humanitarian, legal, and ethical concerns?

Finding these answers is becoming urgent as autonomous weapons are being rapidly developed and militaries are seeking to deploy them in armed conflicts.

….

Essentially, strict limits are needed on the types of autonomous weapons and the situations in which they are used, as well as requirements for humans to supervise them, intervene, and be able to switch them off.