ARGUMENT: AI-DESIGNED BIOWEAPONS LOOM

Are We Ready for a ‘DeepSeek for Bioweapons’?

Published 6 June 2025

Anthropic’s Claude 4 is a warning sign: AI that can help build bioweapons is coming, and could be widely available soon. Steven Adler writes that we need to be prepared for the consequences: “like a freely downloadable ‘DeepSeek for bioweapons,’ available across the internet, loadable to the computer of any amateur scientist who wishes to cause mass harm. With Anthropic’s Claude Opus 4 having finally triggered this level of safety risk, the clock is now ticking.”

The announcement of a powerful new artificial intelligence (AI) model is a leading indicator that many similar AI models are close behind. Steven Adler writes in Lawfare that the January 2025 release from the Chinese company DeepSeek is an example of the small gap between when an AI ability is first demonstrated and when others can match it: Only four months earlier, OpenAI had previewed its then-leading o1 “reasoning model,” which used a new approach to getting the model to think harder. Within months, the much smaller DeepSeek had roughly matched OpenAI’s results, and in doing so indicated that Chinese AI companies may not be far behind those in the U.S.

Adler writes:

In that case, matching o1’s abilities posed little specific risk, even though DeepSeek took a different approach to safety than did the leading Western companies (for instance, DeepSeek’s model is freely downloadable by anyone, and so has fewer protections against misuse). The replicated abilities were general reasoning skills, not something outright dangerous. In contrast, the abilities feared by the leading AI companies tend to be more specific, like helping people to cause harm with bioweapons.

But, as of last week, we have a leading indicator of widespread models with dangerous capabilities. Specifically, Anthropic’s recent model release—Claude Opus 4—sounded a warning bell: It is the first AI model to demonstrate a certain level of capability related to bioweapons. In particular, Anthropic can’t rule out if the model can “significantly help” relatively ordinary people “create/obtain and deploy” bioweapons: “uplifting” their abilities beyond what they can achieve with other technologies (like a search engine and the internet). These dangerous capability evaluations have been conceived of as an “early warning system” for catastrophic AI capabilities, and the system has now been triggered.

As the researcher who led this kind of dangerous-capabilities work at OpenAI and designed some of the industry’s first evaluations for bioweapons capabilities, I am confident that we can no longer count on AI systems being too weak to be dangerous. Are we ready for these abilities to become more commonplace, perhaps while also lacking safety mitigations? In other words, what happens when there is a widely available “DeepSeek for bioweapons”?