Perspective: How Artificial Intelligence Could Make Nuclear War More Likely

Published 26 September 2019

If you are a millennial, computers have been trying to get you killed since the day you were born.

On 26 September 1983, the satellites and computers of the Soviet Air Defense Forces, tasked with using data to determine whether the United States was launching a nuclear attack, told the humans in charge that exactly that was happening: five U.S. ballistic missiles were incoming, and the time for the USSR to prepare a retaliatory launch was now.

Chris Roberts writes in the Observer that the reason you are alive today to read this item is that the human involved, then-Lt. Col. Stanislav Petrov, believed that the computer was wrong. He was right, and thus a nuclear war of the kind depicted the following year in the BBC made-for-TV movie Threads did not happen. If the computers had been in charge, it likely would have, and civilization as we know it would be over. These are all objective statements.

Of course, both computing power and sophistication have grown by leaps and bounds since Ronald Reagan’s first term. Today’s average consumer smartphone is almost unfathomably more powerful than Cold War-era nuclear command-and-control technology. Over the next year, the Pentagon will spend $1 billion to develop artificial intelligence (AI) technology that will “compete, deter and, if necessary, fight and win the wars of the future,” including, presumably, an apocalyptic scenario of the kind Petrov, a human, averted.

Among the jobs that could be outsourced to decision-making computers are those of modern-day Petrovs and other humans tasked with deciding if it’s time to end humanity with a nuclear strike. In fact, this outsourcing of command and control of the nuclear arsenal must happen, some policy wonks have recently argued, because both nuclear capabilities and computing power have advanced so far that the timeframe required to assess whether a retaliatory second strike is necessary, and then launch it, has decreased from the 20 or so minutes in Petrov’s time to perhaps the length of a Lil Nas X song.

Roberts writes: “This is what the think tanks, government-funded nuclear labs and college professors who influence U.S. nuclear policy and strategy are currently pondering. Is it a good idea?”