ARGUMENT: AI Cyber Vulnerabilities

Managing the Cybersecurity Vulnerabilities of Artificial Intelligence

Published 17 November 2021

Last week, Andy Grotto and Jim Dempsey published a new working paper on policy responses to the risk that artificial intelligence (AI) systems, especially those dependent on machine learning (ML), are vulnerable to intentional attack. As the National Security Commission on Artificial Intelligence found, “While we are on the front edge of this phenomenon, commercial firms and researchers have documented attacks that involve evasion, data poisoning, model replication, and exploiting traditional software flaws to deceive, manipulate, compromise, and render AI systems ineffective.”
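To make one item in that taxonomy concrete, the sketch below shows a minimal label-flipping data poisoning attack: an adversary who can tamper with training data quietly reassigns a fraction of labels so that any model trained on the set inherits the corruption. This is an illustrative example only; the function and parameter names are hypothetical and not drawn from the working paper.

```python
# Minimal sketch of label-flipping data poisoning: an adversary with
# write access to the training set reassigns a small fraction of labels,
# so a model trained on the poisoned data systematically misclassifies.
import random

def flip_labels(dataset, fraction=0.05, target_label=0, seed=0):
    """Return a poisoned copy of a list of (features, label) pairs."""
    rng = random.Random(seed)
    poisoned = list(dataset)
    for i in rng.sample(range(len(poisoned)), k=int(fraction * len(poisoned))):
        features, _ = poisoned[i]
        poisoned[i] = (features, target_label)  # silently corrupt the label
    return poisoned
```

Because only a small fraction of examples change, the tampering is easy to miss on casual inspection of the data, which is part of what makes poisoning attacks hard to detect.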

Dempsey writes in Lawfare:

The demonstrations of vulnerability are remarkable: In the speech recognition domain, research has shown it is possible to generate audio that sounds like speech to ML algorithms but not to humans. There are multiple examples of tricking image recognition systems to misidentify objects using perturbations that are imperceptible to humans, including in safety critical contexts (such as road signs). One team of researchers fooled three different deep neural networks by changing just one pixel per image. Attacks can be successful even when an adversary has no access to either the model or the data used to train it. Perhaps scariest of all: An exploit developed on one AI model may work across multiple models.

As AI becomes woven into commercial and governmental functions, the consequences of the technology’s fragility are momentous. As Lt. Gen. Mary O’Brien, the Air Force’s deputy chief of staff for intelligence, surveillance, reconnaissance and cyber effects operations, said recently, “if our adversary injects uncertainty into any part of that [AI-based] process, we’re kind of dead in the water on what we wanted the AI to do for us.”

Research is underway to develop more robust AI systems, but there is no silver bullet. The effort to build more resilient AI-based systems involves many strategies, both technological and political, and may require deciding not to deploy AI at all in a highly risky context.
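The image-perturbation attacks Dempsey describes can be strikingly simple to mount. As a purely illustrative sketch, the code below implements the fast gradient sign method (FGSM), one of the best-known evasion techniques, assuming a PyTorch image classifier; the function name and the epsilon value are hypothetical choices of this summary, not drawn from the paper or the article.

```python
# Minimal sketch of the fast gradient sign method (FGSM), a basic evasion
# attack: nudge each pixel slightly in the direction that increases the
# classifier's loss, yielding a perturbation that is imperceptible to
# humans but can flip the model's prediction.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Return an adversarial copy of a batched `image` tensor."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # epsilon bounds the per-pixel change, keeping it visually negligible
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

Hardening a model against exactly this kind of perturbation, for instance by training on both clean and perturbed inputs (adversarial training), is one of the technological strategies alluded to above, though, as the passage notes, no single defense is a silver bullet.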

He adds:

In assembling a toolkit to deal with AI vulnerabilities, insights and approaches may be derived from the field of cybersecurity. Indeed, vulnerabilities in AI-enabled information systems are, in key ways, a subset of cyber vulnerabilities. After all, AI models are software programs.

Consequently, policies and programs to improve cybersecurity should expressly address the unique vulnerabilities of AI-based systems; policies and structures for AI governance should expressly include a cybersecurity component.