Belief in AI as a 'Great Machine' Could Weaken National Security Crisis Responses: Study
The idea of the death ray caught on in the early 20th century, inspiring both science fiction stories and real-life attempts to build such a weapon during World War I and the interwar period. It wasn’t until a few years before World War II, though, that scientists found a practical use for radio waves: radar.
Society now faces the same dilemma with AI, Whyte said. He describes it as a “general purpose” technology that could either help or harm society, and one that has already dramatically changed how some people think about the world and their place in it.
“It does so many different things that you really do have this emergent area of replacement mentalities,” he said. “As in, the world of tomorrow will look completely different, and my place in it simply won’t exist because [AI] will fundamentally change everything.”
That line of thinking could pose problems for national security professionals as the new technology upends how they think about their own abilities and changes how they respond to emergency situations.
“That is the kind of psychological condition where we unfortunately end up having to throw out the rulebook on what we know is going to combat bias or uncertainty,” Whyte said.
Combating “Skynet”-Level Threats
To study how AI affects professionals’ decision-making abilities, Whyte recruited almost 700 emergency management and homeland security professionals from the United States, Germany, the United Kingdom and Slovenia to participate in a simulation game.
During the experiment, the professionals faced a typical national security threat: a foreign government interfering in an election in their country. They were then assigned to one of three scenarios: a control scenario, in which the threat involved only human hackers; a scenario with light, “tactical” AI involvement, in which hackers were assisted by AI; and a scenario with heavy AI involvement, in which participants were told the threat was orchestrated by a “strategic” AI program.
When confronted with a strategic AI-based threat — what Whyte calls a “Skynet”-level threat, referencing the “Terminator” movie franchise — the professionals tended to doubt their training and were hesitant to act. They were also more likely to ask for additional intelligence information compared with their colleagues in the other two groups, who generally responded to the situation according to their training.
In contrast, the participants who thought about AI as a “Great Machine” that could completely replace them acted without restraint and made decisions that contradicted their training.
And while experience and education helped moderate how the professionals responded to the AI-assisted attacks, they didn’t affect how the professionals reacted to the “Skynet”-level threat. That could become a problem as AI-driven attacks grow more common, Whyte said.
“People have variable views on whether AI is about augmentation, or whether it really is something that’s going to replace them,” Whyte said. “And that meaningfully changes how people will react in a crisis.”
Madeline Reinsel is a Science and Research Public Relations Specialist at Virginia Commonwealth University. This article was originally published on the website of Virginia Commonwealth University.