BIOSECURITY

AI Tools Can Enhance U.S. Biosecurity; Monitoring and Mitigation Will Be Needed to Protect Against Misuse

Published 1 April 2025

A new report recommends ways for the U.S. to reap the benefits of artificial intelligence in biotechnology while minimizing risks that AI may be misused to develop harmful biological agents.

A new report from the National Academies of Sciences, Engineering, and Medicine recommends ways for the U.S. to reap the benefits of artificial intelligence in biotechnology while minimizing risks that AI may be misused to develop harmful biological agents.

AI is being used for an array of beneficial applications in health care, including drug discovery. AI models can analyze large amounts of data to help design medical countermeasures to prevent, treat, and mitigate health threats. However, concerns have been raised that AI-enabled biological tools could also be misused for harmful applications — such as designing a new biological agent with pandemic potential, or modifying an existing virus or bacterium to be more harmful or transmissible.

The report assesses the degree to which AI can amplify the benefits or risks of applying biological tools. At present, no AI-enabled biological tools are capable of designing a completely new virus, and their ability to modify an existing infectious agent with the potential for epidemic- or pandemic-scale consequences is limited, the report says. But in view of rapid advances in AI technologies, the report recommends an approach to monitor for the development of datasets and related AI capabilities that could pose risks. Given how critical data are for training AI models, the report also urges strategic collection of AI-ready biological datasets with an emphasis on ensuring data provenance. Building new national data resources and other forms of infrastructure to support AI should be a research priority for the United States in order to maintain scientific competitiveness and innovation.

“In light of how quickly AI is advancing, agencies should continuously assess and mitigate the risks that AI-enabled biological tools will be misused, and our report offers an approach for doing so,” said Lynda Stuart, former executive director of the Institute for Protein Design at the University of Washington School of Medicine, and co-chair of the committee that wrote the report.

Capabilities of Existing AI Tools
The report provides a technical review of the current capabilities of AI-enabled biological tools to enable either beneficial or harmful applications. The limitations of current AI tools include insufficient biological knowledge and datasets to train AI models for designing novel or modified viruses with specific characteristics that raise national security concerns. In addition, physical production of AI-enabled designs remains a significant barrier, the report says.

The committee weighed in on whether existing AI-enabled biological tools are currently capable of three types of harmful applications:

Enabling the design of biomolecules, such as toxins. Available AI-enabled biological tools are capable of designing and redesigning toxins using different amino acid building blocks. The scale of potential threats would likely be limited to the local level rather than elevated to the epidemic or pandemic level.

Enabling the modification of existing pathogens to make them more virulent. Available AI-enabled biological tools may be capable of modeling specific features that predict traits linked to virulence. However, limitations such as insufficient datasets and the challenge of modeling biological complexity constrain model performance.

Enabling the design of a completely new virus. No available AI-enabled biological tool currently possesses the capability to design a novel virus. Furthermore, no biological datasets that can be used to train such models are known to exist.

AI Tools to Improve Biosecurity
AI-enabled biological tools can improve biosecurity and mitigate biological threats by enhancing prediction, detection, prevention, and response, the report says. This includes improving biosurveillance and accelerating the development of medical countermeasures in response to both intentional biological threats and naturally occurring infectious disease outbreaks.

The report offers recommendations urging the U.S. departments of Defense, Health and Human Services, Energy, and other federal agencies to continue to invest in research, data infrastructure, and high-performance computing to drive advances in AI and also monitor for potential risks.

“AI tools have the potential to help protect us by enhancing biosecurity and developing medical countermeasures,” said committee co-chair Michael Imperiale, Arthur F. Thurnau Professor Emeritus in the Department of Microbiology and Immunology at the University of Michigan. “At the same time, a risk-benefit assessment may be useful to carefully balance the need to invest in critical research intended to protect against harm, with the potential for misuse of the information or tools by malicious actors with harmful intent.”

The study — undertaken by the Committee on Assessing and Navigating Biosecurity Concerns and Benefits of Artificial Intelligence Use in the Life Sciences — was sponsored by the U.S. Department of Defense.