AI & WMD

Does AI Enable and Enhance Biorisks?
The diversity of the biorisk landscape highlights the need to clearly identify which scenarios and actors are of concern. It is also important to consider AI-enhanced risk within the current biorisk landscape, in which both experts and non-experts can cause biological harm without AI tools; this reality calls for layered safeguards throughout the biorisk chain.
A growing number of government directives, international conferences, and media headlines reflect concern that artificial intelligence could exacerbate biological threats. AI tools are cited as enablers and enhancers of biorisk because they can lower information barriers, enhance novel biothreat design, or otherwise increase a malicious actor's capabilities.
Georgetown University's Center for Security and Emerging Technology (CSET) has just published a new report by Steph Batalis on AI and biorisk. Here are the report's Executive Summary and Concluding Thoughts:
Executive Summary
Recent government directives, international conferences, and media headlines reflect growing concern that artificial intelligence could exacerbate biological threats. When it comes to biorisk, AI tools are cited as enablers that lower information barriers, enhance novel biothreat design, or otherwise increase a malicious actor’s capabilities.
It is important to evaluate AI’s impact within the existing biorisk landscape to assess the relationship between AI-agnostic and AI-enhanced risks. While AI can alter the potential for biological misuse, focusing attention solely on AI may detract from existing, foundational biosecurity gaps that could be addressed with more comprehensive oversight.
Policies that effectively mitigate biorisks will also need to account for the varied risk landscape, because safeguards that work in one case are unlikely to be effective for all actors and scenarios. In this explainer, we outline the AI-agnostic and AI-enhanced biorisk landscape to inform targeted policies that mitigate real scenarios of risk without overly inhibiting AI’s potential to accelerate cutting-edge biotechnology.
Our Key Takeaways regarding AI and biorisk are:
1. Biorisk is already possible without AI, even for non-experts. AI tools are not needed to access the foundational information and resources to cause biological harm. This highlights the need for layered safeguards throughout the process, from monitoring certain physical materials to bolstering biosafety and biosecurity training for researchers. The recent Executive Order on AI, which requires DNA synthesis screening for federally funded research, is one example of a barrier to material acquisition.
2. The biorisk landscape is not uniform, and specific scenarios and actors should be assessed individually. Distinct combinations of users and AI tools shape both the potential for harm and the policy solutions most likely to be effective. Future strategies should identify clearly defined scenarios of concern and design policies that target them.
3. Existing biosecurity and biosafety oversight policies need to be clarified and strengthened. AI-enabled biological designs are digital predictions that do not cause physical harm until they are produced in the real world. Carrying out such a design in the laboratory would amount to gain-of-function research, which modifies pathogens to be more dangerous and is already the target of existing policies. However, these policies do not adequately define what characteristics constitute research of concern, making them difficult to interpret and implement. These policies are currently under review, and could be strengthened by establishing a standard framework of acceptable and unacceptable risk applicable to both AI-enhanced and AI-agnostic biological experimentation.
….
Concluding Thoughts
The diversity of the biorisk landscape highlights the need to clearly identify which scenarios and actors are of concern. If this step is skipped, future policies may fail to address precisely the scenarios and actors of most concern. For example, biosecurity initiatives that use federal research funding as the policy lever address only one type of potentially risky actor. This approach can mitigate unintentional risk during the course of federally funded biological research, but it is unlikely to prevent deliberate bioweapon production by lone malicious actors, who are unlikely to seek federal funding to support bioweapon development. If lone malicious actors are of concern, they will need to be targeted with different policy tools.
It will also be important to consider AI-enhanced risk within the current biorisk landscape. Both experts and non-experts can cause biological harm without AI tools, underscoring the need for layered safeguards throughout the biorisk chain. Strategies that evaluate both AI-enhanced and AI-agnostic risks can differentiate between pre-existing risks and novel ones. This differentiation will be critical for building an effective foundation of biosecurity and biosafety oversight and for designing more targeted measures to safeguard against AI-enabled risk.
As the United States revisits its biosecurity and biosafety oversight frameworks, a comprehensive review of the biorisk landscape could help to avoid ineffective policies that do not address the scenario of concern, or overbearing policies that hinder beneficial applications. By clearly defining the threats of concern and developing targeted mitigation measures, future policy can safeguard against the next generation of emerging biothreats.