AI Networks Are More Vulnerable to Malicious Attacks Than Previously Thought

Published 13 December 2023

Artificial intelligence tools hold promise for applications ranging from autonomous vehicles to the interpretation of medical images. However, a new study finds these AI tools are more vulnerable than previously thought to targeted attacks that effectively force AI systems to make bad decisions.

At issue are so-called “adversarial attacks,” in which someone manipulates the data being fed into an AI system in order to confuse it. For example, someone might know that putting a specific type of sticker at a specific spot on a stop sign could effectively make the stop sign invisible to an AI system. Or a hacker could install code on an X-ray machine that alters the image data in a way that causes an AI system to make inaccurate diagnoses.
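
To make the idea concrete, here is a minimal sketch of one well-known adversarial technique, the fast gradient sign method (FGSM, Goodfellow et al., 2014), written in PyTorch. It illustrates the general concept of an adversarial perturbation; it is not the attack method developed in the new study.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Fast Gradient Sign Method: nudge every pixel a small step
    (epsilon) in the direction that increases the model's loss.
    `image` is a batched tensor in [0, 1]; `label` holds the true
    class indices."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # The perturbation is tiny per pixel but systematic, which is
    # what lets it flip the model's prediction.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```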

“For the most part, you can make all sorts of changes to a stop sign, and an AI that has been trained to identify stop signs will still know it’s a stop sign,” says Tianfu Wu, co-author of a paper on the new work and an associate professor of electrical and computer engineering at North Carolina State University. “However, if the AI has a vulnerability, and an attacker knows the vulnerability, the attacker could take advantage of the vulnerability and cause an accident.”

The new study from Wu and his collaborators focused on determining how common these sorts of adversarial vulnerabilities are in deep neural networks. They found that the vulnerabilities are much more common than previously thought.

“What’s more, we found that attackers can take advantage of these vulnerabilities to force the AI to interpret the data to be whatever they want,” Wu says. “Using the stop sign example, you could make the AI system think the stop sign is a mailbox, or a speed limit sign, or a green light, and so on, simply by using slightly different stickers – or whatever the vulnerability is.

“This is incredibly important, because if an AI system is not robust against these sorts of attacks, you don’t want to put the system into practical use – particularly for applications that can affect human lives.”
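
The targeted behavior Wu describes, steering the model to a specific wrong answer rather than just any wrong answer, can be sketched as a small variant of the FGSM example above. Here `target_label` is a hypothetical tensor holding the class the attacker wants the model to output (the index for "mailbox", say); again, this is an illustration, not the study's method.

```python
import torch.nn.functional as F

def targeted_attack(model, image, target_label, epsilon=0.03):
    """Targeted variant of FGSM: step *down* the loss with respect
    to an attacker-chosen class, so the model's confidence in that
    class goes up."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), target_label)
    loss.backward()
    # Subtracting the gradient sign moves the input toward the
    # target class instead of merely away from the true one.
    adversarial = image - epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```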

To test the vulnerability of deep neural networks to these adversarial attacks, the researchers developed a piece of software called QuadAttacK, which can be used to test any deep neural network for adversarial vulnerabilities.
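
The article does not describe QuadAttacK's interface, but vulnerability testing of this kind generally amounts to running an attack over a labeled test set and measuring how often the model's prediction flips. A rough, hypothetical harness (reusing the `fgsm_attack` sketch above as the attack) might look like this:

```python
import torch

def vulnerability_rate(model, loader, attack_fn, epsilon=0.03):
    """Fraction of examples on which `attack_fn` flips the model's
    prediction. `loader` yields (image, label) batches; `attack_fn`
    is any function with the signature of `fgsm_attack` above."""
    fooled, total = 0, 0
    for image, label in loader:
        adv = attack_fn(model, image, label, epsilon)
        with torch.no_grad():
            pred = model(adv).argmax(dim=1)
        fooled += (pred != label).sum().item()
        total += label.numel()
    return fooled / total

# Example use: vulnerability_rate(model, test_loader, fgsm_attack)
```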