Explainable AI: A Must for Nuclear Nonproliferation, National Security

“It can be difficult to incorporate a new and disruptive technology like AI into current scientific approaches. One approach is to build new ways for humans to work more effectively with AI,” said Sheffield. “We must create tools that help developers understand how these sophisticated techniques work so that we can take full advantage of them.”

Scarce Data Make Explainable AI Essential
The most common technique for training an AI system is to present it with reams of data. With near-limitless photos of faces available, for example, a facial recognition system learns the nuances of eyes, noses, and mouths well enough to decide whether a glance from you should unlock your phone. The system depends on that huge volume of data to classify information correctly.
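The pattern is easy to see in miniature. The sketch below is illustrative only: the synthetic dataset, the model, and every parameter are assumptions, not anything PNNL uses. It trains the same simple classifier on progressively larger slices of the data, and accuracy climbs with data volume:

```python
# Illustrative only: a generic classifier improves as training data grow.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a data-rich task such as face recognition.
X, y = make_classification(n_samples=20_000, n_features=40,
                           n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

# Train the same model on progressively larger slices of the data.
for n in (50, 500, 5000):
    clf = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    acc = accuracy_score(y_test, clf.predict(X_test))
    print(f"{n:>5} training examples -> test accuracy {acc:.3f}")
```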

But data are, thankfully, much sparser when it comes to nuclear explosions or weapons development. That good news complicates the challenge of using AI in the national security space: it makes AI training less reliable and amplifies the need to understand every step of the process.

“We’re working to understand why systems give the answers they do,” said Mark Greaves, a PNNL scientist involved with the research. “We can’t directly use the same AI technologies that Amazon uses to decide that I am prepared to buy a lawn mower, to decide whether a nation is prepared to create a nuclear weapon. Amazon’s available data are massive, and a mistaken lawn mower recommendation isn’t a big problem. 

“But if an AI system yields a mistaken probability about whether a nation possesses a nuclear weapon, that’s a problem of a different scale entirely. So our system must at least produce explanations so that humans can check its conclusions and use their own expertise to correct for AI training gaps caused by the sparsity of data,” Greaves added. “We are inspired by the huge advances that AI is continuing to make, and we are working to develop new and specialized AI techniques that can give the United States an additional window into nuclear proliferation activity.”
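One way to make a model's conclusions checkable is to report not just a probability but each input's contribution to it. The sketch below illustrates that idea with simple linear attribution; the feature names, data, and model are hypothetical assumptions for this example, not PNNL's system:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical indicators; a real system would use different, richer inputs.
features = ["isotope_ratio", "procurement_activity", "seismic_signal"]

# A deliberately tiny training set, mimicking the sparsity of the domain.
X = np.array([[0.9, 0.8, 0.7],
              [0.2, 0.1, 0.0],
              [0.8, 0.9, 0.1],
              [0.1, 0.3, 0.2]])
y = np.array([1, 0, 1, 0])  # 1 = concerning, 0 = benign

clf = LogisticRegression().fit(X, y)

case = np.array([[0.7, 0.6, 0.1]])  # a new observation to assess
prob = clf.predict_proba(case)[0, 1]
print(f"estimated probability: {prob:.2f}")

# Explanation: each feature's contribution to the log-odds, so an
# analyst can see *why* the model leans one way and apply expertise.
for name, coef, val in zip(features, clf.coef_[0], case[0]):
    print(f"  {name:22s} contributes {coef * val:+.2f} to the log-odds")
```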

A Pinch of AI, a Dash of Domain Knowledge
Sheffield notes that PNNL’s strengths spring from two sources. One is significant experience in AI: PNNL scientists are frequent presenters at conferences that also feature researchers from Google, Microsoft, and Apple. The other is domain knowledge: technical details understood by PNNL staff about issues such as how plutonium is processed, the types of signals unique to nuclear weapons development, and the ratios of isotopes such materials produce.
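The pairing can be sketched in code: expert-derived features are computed first, then handed to a learned model. Everything below is an illustrative assumption (the field names, data, and model are invented for this sketch) except the roughly 7 percent Pu-240 level, which is a widely published rule of thumb for weapons-grade plutonium:

```python
from dataclasses import dataclass
from sklearn.ensemble import RandomForestClassifier

@dataclass
class Sample:
    pu240_fraction: float   # fraction of Pu-240 in the plutonium
    signal_strength: float  # strength of a monitored emission signal

def domain_features(s: Sample) -> list[float]:
    """Encode expert knowledge as model inputs: below ~7% Pu-240 is a
    widely published rule of thumb for weapons-grade plutonium."""
    weapons_grade_flag = 1.0 if s.pu240_fraction < 0.07 else 0.0
    return [s.pu240_fraction, s.signal_strength, weapons_grade_flag]

# Hypothetical labeled examples (1 = concerning, 0 = benign).
samples = [Sample(0.05, 0.9), Sample(0.24, 0.1),
           Sample(0.06, 0.7), Sample(0.22, 0.3)]
labels = [1, 0, 1, 0]

clf = RandomForestClassifier(random_state=0)
clf.fit([domain_features(s) for s in samples], labels)
print(clf.predict([domain_features(Sample(0.055, 0.8))]))
```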

The combination of data science, artificial intelligence, and national security experience gives PNNL a unique role in protecting the nation where AI and national security intersect. It combines the raw scientific power of AI with the no-nonsense street smarts of a nuclear sleuth.

“It takes a special set of knowledge, skills, and technical ability to advance the state of the art in national security,” Sheffield said. “The consequences of what we do are very high, and we must go far beyond standard practice to be responsible.”