ARGUMENT: AI & NUCLEAR WEAPONS

AI Nuclear Weapons Catastrophe Can Be Avoided

Published 2 March 2023

There is growing concern that emerging artificial intelligence (AI) capabilities will increase the potential for disaster by making semiautonomous or fully autonomous nuclear weapons possible. Noah Greene writes that “As the Soviet-era Col. Petrov case kindly taught us, without a human firmly in control of the nuclear command-and-control structure, the odds of disaster creep slowly toward an unintended or uncontrolled nuclear exchange.”

In October 2022, the Pentagon released its National Defense Strategy, which included a Nuclear Posture Review. Notably, the department committed to always maintain human control over nuclear weapons: “In all cases, the United States will maintain a human ‘in the loop’ for all actions critical to informing and executing decisions by the President to initiate and terminate nuclear weapon employment.” 

Writing in Lawfare, Greene argues that this commitment is a valuable first step that other nuclear powers should follow. “Still, it is not enough. Commitments like these are time and circumstance dependent. The U.S. military does not currently feel the need to produce and deploy such weapons, in part because it does not see other nuclear powers engaging in similar behavior.”

The threat of an AI-enabled arms race is not currently a high-level concern for military planners, but emerging AI capabilities will only increase the potential for disaster by opening the door to semiautonomous or fully autonomous nuclear weapons.

Greene continues:

The absence of a firm agreement [on lethal autonomous weapons systems, or LAWS]… also provides a key insight into the perceptions of U.N. member states: A crisis that involves LAWS-related systems is considered to be an issue for the future, not today. 

However, autonomous weapons in this vein are far from abstract. During the Cold War, Soviet military planners developed and placed into use a semiautonomous nuclear system known as Perimeter. In the event of nuclear war, Perimeter was designed to launch the Soviet Union’s vast missile arsenal without express guidance from central command. In theory, after a human activated the system, network sensors then determined whether the country had been attacked. If the system determined that the country had been attacked, it would check with leaders at the top of the command-and-control structure to confirm. If no response was given, the onus to deploy the missiles fell on a designated official. This was essentially an attempt to ensure mutually assured destruction even in the event of the decapitation of a central government or a “dead hand” scenario. 

A lack of urgency in banning such weapons is due to concerns regarding long-term international security implications. At its core, states don’t want to make a commitment that could negate a first-mover advantage in adopting certain AI systems, nor do they want to lock themselves out of the market for becoming an early adopter should their enemies decide to utilize these systems. AI-enabled nuclear weapons are particularly concerning due to their civilization-destroying nature. As James Johnson highlighted in War on the Rocks last year, the question of AI technology being integrated into nuclear mechanisms is not a question of if, but “by whom, when, and to what degree.” If viewed along a spectrum, the most extreme degree of AI involvement would be a nuclear weapons system capable of identifying targets and firing on those targets without human approval. The second most extreme example would be a nuclear weapons system capable of firing on a target independently, after a human has locked the target into the system. While neither of these specific systems is known to exist, the future environment for more risky research in this area is far from certain. And both scenarios could be catastrophic. They would also increase the chances of a “broken arrow” incident, in which a nuclear weapon is released accidentally. To at least better humanity’s odds of survival, initiating a total ban on these weapons through a P5-led agreement would be a substantial step forward.
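For readers who want to see the shape of the logic, the Perimeter sequence Greene describes above can be sketched as a short decision routine. This is a conceptual illustration only, assuming exactly the steps in the quoted passage; the function and type names are hypothetical, and nothing here reflects the real system’s implementation.

from enum import Enum, auto

class Decision(Enum):
    STAND_DOWN = auto()
    DEFER_TO_DUTY_OFFICER = auto()  # the final call rests with a human

def perimeter_style_check(
    activated_by_human: bool,
    sensors_indicate_attack: bool,
    leadership_responds: bool,
) -> Decision:
    """Walk the sequence described in the quoted passage above.

    1. A human must first activate the system.
    2. Network sensors assess whether the country has been attacked.
    3. The system checks with central command for confirmation.
    4. Only if command is silent does authority pass to a designated
       official -- a human, not the machine, makes the launch decision.
    """
    if not activated_by_human:
        return Decision.STAND_DOWN          # never armed in the first place
    if not sensors_indicate_attack:
        return Decision.STAND_DOWN          # no attack detected
    if leadership_responds:
        return Decision.STAND_DOWN          # normal command and control applies
    return Decision.DEFER_TO_DUTY_OFFICER   # a human official decides, not the system

What the sketch makes concrete is Greene’s central point: even the most notorious “dead hand” design terminated in a human decision, which is precisely what the Pentagon’s “human in the loop” commitment seeks to preserve.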

Greene concludes:

As the Soviet-era Col. Petrov case kindly taught us, without a human firmly in control of the nuclear command-and-control structure, the odds of disaster creep slowly toward an unintended or uncontrolled nuclear exchange. An agreement between nuclear powers on this issue led by P5 states would be an important step toward recreating a patchwork of nuclear treaties that has dissolved over the past two decades. To do otherwise would be to flirt with an AI-enabled nuclear arms race.