ARGUMENT: AI & NUCLEAR BRINKMANSHIP

Nuclear Brinkmanship in AI-Enabled Warfare: A Dangerous Algorithmic Game of Chicken

Published 2 October 2023

Russian nuclear saber-rattling and coercion have loomed large throughout the Russo-Ukrainian War. James Johnson writes in War on the Rocks that this dangerous rhetoric has been amplified and radicalized by AI-powered technology — “false-flag” cyber operations, fake news, and deepfakes. “Rapid AI technological maturity raises the issue of delegating the launch authority of nuclear weapons to AI (or non–human-in-the-loop nuclear command and control systems), viewed simultaneously as dangerous and potentially stabilizing,” he writes.

Johnson continues:

Throughout the war, both sides have invoked the specter of nuclear catastrophe, including false Russian claims that Ukraine was building a “dirty bomb” and President Volodymyr Zelensky’s allegation that Russia had planted explosives to cause a nuclear disaster at a Ukrainian power plant. The world is once again forced to grapple with the psychological effects of the most destructive weapons the world has ever known in a new era of nuclear brinkmanship. 

Rapid AI technological maturity raises the issue of delegating the launch authority of nuclear weapons to AI (or non–human-in-the-loop nuclear command and control systems), viewed simultaneously as dangerous and potentially stabilizing. This potential delegation is dangerous because weapons could be launched accidentally. It is potentially stabilizing because of the lower likelihood that a nuclear strike would be contemplated if retaliation was known to benefit from autonomy, machine speed, and precision. For now, at least, there is a consensus amongst nuclear-armed powers that the devastating outcome of an accidental nuclear exchange obviates any potential benefits of automating the retaliatory launch of nuclear weapons.

Regardless, it is important to grapple with a question: How might AI-enabled warfare affect human psychology during nuclear crises? Thomas Schelling’s theory of the “threat that leaves something to chance” (i.e., the risk that military escalation cannot be entirely controlled) helps analysts understand how and why nuclear-armed states can manipulate risk to achieve competitive advantage in bargaining situations and how this contest of nerves, resolve, and credibility can lead states to stumble inadvertently into war. How might the dynamics of the age of AI affect Schelling’s theory? Schelling’s insights on crisis stability between nuclear-armed rivals in the age of AI-enabling technology, contextualized with the broader information ecosystem, offer fresh perspectives on the “AI-nuclear dilemma” — the intersection of technological change, strategic thinking, and nuclear risk.
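
To make Schelling’s logic concrete, consider a stylized expected-value model. This is an editorial illustration, not a model from Johnson’s article, and every payoff number in it is hypothetical: a coercer deliberately generates a probability p that the crisis slips out of anyone’s control, and the target concedes only when the expected cost of standing firm under that risk exceeds the value of the stakes.

    # A minimal sketch of a "threat that leaves something to chance".
    # All payoffs are hypothetical, chosen only to expose the mechanism.
    DISASTER = -1000.0  # shared payoff if the crisis blunders into nuclear war
    STAKES = 100.0      # value of the contested issue to the target

    def ev_stand_firm(p: float) -> float:
        """Target's expected payoff for defying a threat that carries
        probability p of uncontrolled escalation."""
        return p * DISASTER + (1.0 - p) * STAKES

    for p in (0.01, 0.05, 0.10, 0.20):
        concedes = ev_stand_firm(p) < 0.0  # conceding is normalized to zero
        print(f"p = {p:.2f}  EV(stand firm) = {ev_stand_firm(p):8.1f}  concedes: {concedes}")

Under these invented numbers the threat becomes coercive at roughly p ≈ 0.09. The crucial point is that the coercer runs the same probability of disaster, which is why Schelling treated brinkmanship as a competition in risk-taking rather than a costless bluff.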

In the digital age, the confluence of increased speed, truncated decision-making, dual-use technology, reduced levels of human agency, critical network vulnerabilities, and dis/misinformation injects more randomness, uncertainty, and chance into crises. This creates new pathways for unintentional (accidental, inadvertent, and catalytic) escalation to a nuclear level of conflict. New vulnerabilities and threats (perceived or otherwise) to states’ nuclear deterrence architecture in the digital era will become novel generators of accidental risk — mechanical failure, human error, false alarms, and unauthorized launches. 

These vulnerabilities will make current and future crises (Russia-Ukraine, India-Pakistan, the Taiwan Strait, the Korean Peninsula, the South China Sea, etc.) resemble a multiplayer game of chicken, where the confluence of Schelling’s “something to chance” coalesces with contingency, uncertainty, luck, and the fallacy of control, under the nuclear shadow. In this dangerous game, either side can increase the risk that a crisis unintentionally blunders into nuclear war. Put simply, the risks of nuclear-armed states leveraging Schelling’s “something to chance” in AI-enabled warfare preclude any likely bargaining benefits in brinkmanship.
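
One way to see why this game is so unforgiving is to simulate it. The sketch below is another editorial illustration with invented parameters, not Johnson’s model: two rivals keep ratcheting up pressure, and each escalation step carries a small, independent chance that events slip out of control before anyone backs down.

    import random

    # Stylized game of chicken under the nuclear shadow. STEP_RISK and
    # MAX_STEPS are assumptions for illustration, not empirical estimates.
    STEP_RISK = 0.03   # chance per escalation step that control is lost
    MAX_STEPS = 10     # escalation steps before one side is assumed to back down

    def run_crisis(rng: random.Random) -> bool:
        """Return True if the crisis ends in accidental war."""
        return any(rng.random() < STEP_RISK for _ in range(MAX_STEPS))

    rng = random.Random(0)
    trials = 100_000
    wars = sum(run_crisis(rng) for _ in range(trials))
    # Analytically: 1 - (1 - 0.03) ** 10, roughly 0.26
    print(f"crises ending in accidental war: {wars / trials:.1%}")

Ten individually modest 3 percent gambles compound into roughly a 26 percent chance of catastrophe. That compounding is the arithmetic behind the claim that manipulating “something to chance” in AI-accelerated crises precludes any likely bargaining benefit.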

Johnson concludes:

Because of the limited empirical evidence available on nuclear escalation, threats, bluffs, and war termination, the arguments presented (much like Schelling’s own) are mostly deductive. In other words, conclusions are inferred from various plausible (and contested) theoretical laws and statistical reasoning rather than derived empirically. Robust falsifiable counterfactuals that offer imaginative scenarios to challenge conventional wisdom, assumptions, and human bias (hindsight bias, heuristics, availability bias, etc.) can help fill this empirical gap. Counterfactual thinking can also avoid the trap of historical and diplomatic telos that retrospectively constructs a path-dependent causal chain, one that often neglects or rejects the role of uncertainty, chance, luck, overconfidence, the “illusion of control,” and cognitive bias.

Furthermore, AI machine-learning techniques (modeling, simulation, and analysis) can complement counterfactuals and low-tech table-top wargaming simulations to identify contingencies under which “perfect storms” might form — not to predict them, but to challenge conventional wisdom, expose bias and inertia, and, ideally, mitigate the conditions that produce them. American philosopher William James wrote: “Concepts, first employed to make things intelligible, are clung to often when they make them unintelligible.”
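
As a closing illustration of what such machine-assisted exploration might look like in miniature, the sketch below runs a Monte Carlo sweep over a toy crisis model. The model, its parameters (false-alarm rate, decision window, misinformation level), and the escalation pathway it assumes are all invented for illustration; the point is the exploratory sweep that flags where small weaknesses interact badly, not the numbers themselves.

    import itertools
    import random

    def escalation_prob(false_alarm: float, decision_window: int,
                        misinfo: float, rng: random.Random,
                        trials: int = 5_000) -> float:
        """Estimate P(escalation) for one toy 'perfect storm' pathway:
        a spurious warning that is acted on because verification time is
        short or because disinformation is believed."""
        wars = 0
        for _ in range(trials):
            alarm = rng.random() < false_alarm             # false warning fires
            rushed = rng.random() < 1.0 / decision_window  # no time to verify
            misled = rng.random() < misinfo                # deepfake believed
            if alarm and (rushed or misled):               # assumed pathway
                wars += 1
        return wars / trials

    rng = random.Random(42)
    for fa, dw, mi in itertools.product((0.01, 0.05, 0.10),  # false-alarm rate
                                        (2, 5, 10),          # minutes to decide
                                        (0.1, 0.3, 0.5)):    # misinformation level
        p = escalation_prob(fa, dw, mi, rng)
        flag = "  <-- perfect storm" if p > 0.05 else ""
        print(f"false_alarm={fa:.2f} window={dw:2d} misinfo={mi:.1f} P={p:.3f}{flag}")

The output predicts nothing. It marks regions of an assumed parameter space where individually tolerable weaknesses compound, which is exactly the assumption-stressing, bias-exposing use of simulation the passage describes.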