Strangelove Redux: U.S. Experts Propose Having AI Control Nuclear Weapons

Published 4 September 2019

In an article in War on the Rocks titled, ominously, “America Needs a ‘Dead Hand,’” U.S. deterrence experts Adam Lowther and Curtis McGiffin propose a nuclear command, control, and communications setup with some eerie similarities to the Soviet system referenced in the title of their piece. The Dead Hand was a semiautomated system developed to launch the Soviet Union’s nuclear arsenal under certain conditions, including, particularly, the loss of the national leaders who could do so on their own. Given the increasing time pressure Lowther and McGiffin say U.S. nuclear decision makers are under, “[I]t may be necessary to develop a system based on artificial intelligence, with predetermined response decisions, that detects, decides, and directs strategic forces with such speed that the attack-time compression challenge does not place the United States in an impossible position.”

Matt Field writes in the Bulletin of the Atomic Scientists that we should think long and hard before considering the Dead Hand idea. History is replete with instances in which it seems, in retrospect, that nuclear war could have started were it not for some flesh-and-blood human refusing to begin Armageddon. Perhaps the most famous such hero was Stanislav Petrov, a Soviet lieutenant colonel, who was the officer on duty in charge of the Soviet Union’s missile-launch detection system when it registered five inbound missiles on Sept. 26, 1983. Petrov decided the signal was in error and reported it as a false alarm. It was. Whether an artificial intelligence would have reached the same decision is, at the least, uncertain.

One of the risks of incorporating more artificial intelligence into the nuclear command, control, and communications system involves the phenomenon known as automation bias. Studies have shown that people tend to trust what an automated system tells them. In one study, pilots who told researchers that they wouldn’t trust an automated system that reported an engine fire unless there was corroborating evidence nonetheless did just that in simulations. (Furthermore, they told experimenters that there had in fact been corroborating information, when there hadn’t.)

University of Pennsylvania political science professor and Bulletin columnist Michael Horowitz, who researches military innovation, counts automation bias as a strike against building an artificial intelligence-based nuclear command, control, and communications system. “A risk in a world of automation bias is that the Petrov of the future doesn’t use his judgment,” Horowitz told Field, “or that there is no Petrov.”