Needed: Ground Rules for the Age of AI Warfare

Published 6 June 2023

The time has arrived for an international agreement on autonomous weapons. Lauren Kahn writes in Foreign Affairs that AI is at an inflection point: the technology is maturing and is increasingly suitable for military use, while the exact outlines of future AI military systems, and the degree of disruption they will cause, remain uncertain and can therefore still be shaped, at least in part.

This is a summary of an article originally published by Foreign Affairs.

·  Traditional military systems and technologies come from a world where humans make onsite, or at least real-time, decisions over life and death. AI-enabled systems are less dependent on this human element; future autonomous systems may lack it entirely. This prospect not only raises thorny questions of accountability but also means there are no established protocols for when things go wrong. … When the inevitable happens, and a partially or fully autonomous system is involved in an accident, states will need a mechanism they can turn to — a framework to guide the involved parties and provide them with potential off-ramps to avert unwanted conflict.

·  In the 1970s, U.S. and Soviet leaders calmed rising tensions between their navies by setting rules for unplanned encounters on the high seas. Governments today should take a similar route through the uncharted waters of AI-driven warfare. They should agree on basic guidelines now, along with protocols to maximize transparency and minimize the risk of fatal miscalculation and miscommunication.

·  The time for an Autonomous Incidents Agreement is ripe, given that AI is at an inflection point.

·  On the one hand, the technology is maturing and increasingly suitable for military use, whether as part of wargaming exercises or in combat, such as in Ukraine.

·  On the other hand, the exact outlines of future AI military systems — and the degree of disruption they will cause — remain uncertain and, by extension, somewhat malleable.

·  States willing to take the initiative could build on existing momentum for stricter rules. The private sector appears willing to at least somewhat self-regulate its AI development. And in response to member state requests, the International Civil Aviation Organization is working on a model regulatory framework for uncrewed aircraft systems and has encouraged states to share existing regulations and best practices.

·  An Autonomous Incidents Agreement would put these nascent efforts on solid footing. The need for clearer norms, for a baseline mechanism of responsibility and accountability, is as great as it is urgent. So is the need for a protocol for handling interstate skirmishes involving these cutting-edge systems. States should start preparing now, since the real question regarding such incidents is not whether they will occur, but when.

Lauren Kahn is a Research Fellow at the Council on Foreign Relations. The summary, originally published in Russia Matters, is published here courtesy of the Harvard Kennedy School’s Russia Matters. Read the full article at Foreign Affairs.