ARGUMENT: AI & WMD

Generative AI and Weapons of Mass Destruction: Will AI Lead to Proliferation?

Published 23 December 2023

Large Language Models (LLMs) caught popular attention in 2023 through their ability to generate text based on prompts entered by the user. Ian J. Stewart writes that “some have raised concerns about the ability of LLMs to contribute to nuclear, chemical and biological weapons proliferation (CBRN). Put simply, could a person learn enough through an interaction with an LLM to produce a weapon? And if so, would this differ from what the individual could learn by scouring the internet?”

Large Language Models (LLMs) caught popular attention in 2023 through their ability to generate text based on prompts entered by the user. LLMs have also proven capable of generating code, summarizing text, and adding structure to unstructured text, among other tasks. Questions remain about the real-world usefulness of LLMs in many domains, particularly given the difficulty of resolving limitations such as hallucination.

Ian J. Stewart writes on Medium that

Nonetheless, some have raised concerns about the ability of LLMs to contribute to nuclear, chemical and biological weapons proliferation (CBRN). Put simply, could a person learn enough through an interaction with an LLM to produce a weapon? And if so, would this differ from what the individual could learn by scouring the internet?

Stewart examines the question of whether generative AI can contribute to proliferation, and identifies four pathways of possible concern: knowledge retrieval; technical troubleshooting; mathematical, biological, or engineering design; and production of physical goods. He notes that additional pathways may also come to light.

The four pathways:

Knowledge retrieval: The most obvious role for LLMs is in knowledge retrieval. That is, the LLM answers questions posed by the user through prompts. Here, the LLM draws on inferences from its training data, which is primarily based on information available on the internet. However, an LLM may also be trained on domain-specific or non-public data, and its knowledge may be augmented by access to custom data, including through Retrieval-Augmented Generation (RAG). It is foreseeable that LLMs will in the future be capable of providing highly specific knowledge related to each category of weapons of mass destruction, including each stage of their production. Questions remain about whether LLMs can facilitate the transfer of tacit knowledge rather than only explicit knowledge; answering that question will require domain-specific research into the potential contribution of LLMs.