As AI Worsens WMD Threat, Australia Must Lead Response
Forty years ago, Australia proactively responded to the proliferation of chemical weapons by convening the highly successful Australia Group. That same proactive leadership is needed now to counter emerging AI-enabled chemical, biological, radiological and nuclear (CBRN) threats.
The first meetings of what became the Australia Group followed Iraq’s use of chemical weapons in 1984. At that critical moment, Australia identified that inconsistent and uncoordinated export controls had, in part, enabled a disastrous outcome involving this kind of weapon of mass destruction (WMD). Australia’s response to that regulatory gap became a key legacy of its diplomacy. Australia should face today’s challenges with the same leadership and initiative.
When dealing with AI-enabled CBRN threats, we cannot afford to wait until the first catastrophic incident occurs. AI companies have acknowledged that frontier models have capabilities that, without adequate safeguards, could enable novices to create biological and chemical weapons.
While some are voluntarily putting preventative measures in place, a recent assessment of AI industry leaders found that ‘none have robust, reliable plans for ensuring the public remains safe from their products’. Relying solely on private actors would be foolhardy. Open-weight AI models, which are publicly available to anyone with an internet connection, are also more vulnerable to circumvention of safeguards and ripe for misuse.
This lowers the barriers to chemical and biological weapons development, including by terrorist groups, lone actors and doomsday cults.
AI could also disrupt the precarious stability between nuclear-armed powers. The integration of AI within nuclear command and control systems could further complicate critical decision-making processes in a nuclear crisis. AI-enhanced cyberattacks also introduce new risk vectors for nuclear facilities.
Other jurisdictions are already ahead of Australia in identifying these threats and investing in responses. The United States’ AI Action Plan, issued on 10 July, includes screening and customer verification requirements for DNA synthesis providers. New Zealand is also considering additional regulation of such services. Britain updated its Biological Security Strategy in 2023 and established a ‘standing capability to evaluate how advanced AI could assist chemical and biological misuse’ in its AI Security Institute. The European Union has recognized the systemic risk of AI enabling chemical and biological attacks and accidents in its General-Purpose AI Code of Practice.
To lead internationally, we must first catch up domestically. Australia’s regulatory response to a world rapidly changed by AI remains at an early stage. This creates gaps in our understanding of the risk and leaves potentially dangerous vulnerabilities unaddressed.
At a minimum, Australia needs to move forward with mandatory guardrails for high-risk and general-purpose AI, completing a process that was underway at the end of 2024. These guardrails need to set clear minimum standards for AI developers and deployers in Australia, ensuring we are not at the mercy of the weakest or least scrupulous link in the AI supply chain. They should also clarify legal responsibility and liability to incentivize developers and deployers to ensure their systems cannot be misused.
The guardrails should be accompanied by a robust monitoring and incident reporting mechanism. Such mechanisms are foundational in many other industries, from aviation to healthcare. Given the number of existing regulators grappling with AI risk, effective coordination of responses to systemic incidents across sectors would help provide regulatory certainty to the private sector and protect the Australian public. Such a mechanism would also provide the government with better visibility of harms and vulnerabilities, establishing the evidentiary basis for further regulatory interventions before bad actors can exploit these weaknesses.
Going further, Australia should assess relevant regulatory settings to identify areas for reform. This could include areas such as DNA synthesis or controls on novel chemical molecules and compounds.
Finally, international cooperation and coordination on these issues will be key. Discussions in the Australia Group have already included consideration of the dual-use concerns arising from AI. This work should be prioritized to ensure the group continues to play a vital role in global counter-proliferation efforts. Australia should also consider the implications of AI for the work done under the Biological Weapons Convention and Chemical Weapons Convention, as well as in discussions between member states of the Treaty on the Non-Proliferation of Nuclear Weapons (particularly as part of its 2026 Review Conference).
AI-enabled CBRN threats present a real and growing risk, and Australia should respond through available policy options. Acting now will protect Australia’s national security, safeguard public safety and position Australia once again as a global leader in managing the security challenges of the next forty years.
Devon Whittle is the Australia Director of Global Shield Australia, an independent non-profit organization focused on effective and pragmatic policy action to reduce global catastrophic risk, including from advanced AI. This article is published courtesy of the Australian Strategic Policy Institute (ASPI).