Walking the Artificial Intelligence and National Security Tightrope

By Jack Goldsmith

Published 21 September 2023

Artificial intelligence (AI) presents Australia’s security with as many challenges as opportunities. While it could be used to create mass-produced malware, lethal autonomous weapons systems, or engineered pathogens, AI solutions could also prove the counter to these threats. Regulating AI to maximize Australia’s national security capabilities and minimize the risks it presents will require focus, caution and intent.

One of Australia’s first major public forays into AI regulation is the Department of Industry, Science and Resources’ (DISR) recently released discussion paper on responsibly supporting AI. The paper notes AI’s numerous positive use cases if it’s adopted responsibly, including improvements in the medical imaging, engineering and services sectors, but also recognizes its enormous risks, such as the spread of disinformation and the harms of AI-enabled cyberbullying.

While national security is beyond the scope of DISR’s paper, any general regulation of AI would affect its use in national security contexts. National security is a battleground comprising multiple political, economic, social and strategic fronts, and any whole-of-government approach to regulating AI must recognize this.

Specific opportunities for AI in national security include enhanced electronic warfare, cyber offense and defense, as well as improvements in defense logistics. One risk is that Australia’s adversaries will possess these same capabilities; another is that AI could be misused or perform unreliably in life-or-death national security situations. Inaccurate AI-generated intelligence, for instance, could undermine Australia’s ability to deliver effective and timely interventions, with few systems of accountability currently in place for when AI contributes to mistakes.

Australia’s adversaries will not let us take our time pontificating, however. Indeed, ASPI’s Critical Technologies Tracker has identified China’s primacy in several key AI technologies, including machine learning and data analytics, the bedrock of modern and emerging AI systems. Ensuring that AI technologies are auditable, for instance, may come at a strategic disadvantage: many so-called ‘glass box’ models, though capable of tracing the sequencing of their decision-making algorithms, are often inefficient compared to ‘black box’ options with inscrutable inner workings. The race for AI supremacy will continue apace regardless of how Australia regulates it, and those actors less burdened by ethical considerations could gain a lead over their competitors.

Equally, though, fears of China’s technological superiority should not lead to cutting corners and blind acceleration. This would exponentially increase the likelihood of AI-induced disasters over time. It could also trigger an AI arms race, adding to global strategic tension.

Regulation should therefore adequately safeguard against the risks of AI whilst not hampering our ability to employ it for national security.

This will be tough and may overlap with or contradict other regulatory efforts around the world. While their behavior often raises eyebrows, big American tech companies’ hold over most major advances in AI is at the core of strategic relationships such as AUKUS. If governments ‘trust-bust’, fragment or restrict these companies, they must also account for how a more diffuse market could contend with China’s ‘command economy’.

As with many complex national security challenges, walking this tightrope will take a concerted effort from government, industry, academia, civil society and the broader public. AI technologies can be managed, implemented and used safely, efficiently and securely if regulators strike a balance between sluggish adoption and rash acceleration. If they pull it off, it would be the circus act of the century.

Jack Goldsmith is a visiting fellow at the Australian National University’s School of Regulation and Global Governance (RegNet). This article is published courtesy of the Australian Strategic Policy Institute (ASPI).