Protecting Computer Vision from Adversarial Attacks

Published 12 July 2022

Advances in computer vision and machine learning have made it possible for a wide range of technologies to perform sophisticated tasks with little or no human supervision — from autonomous drones and self-driving cars to medical imaging and product manufacturing. Engineers are developing methods to keep these autonomous machines and devices from being hacked.

From autonomous drones and self-driving cars to medical imaging and product manufacturing, many computer applications and robots use visual information to make critical decisions. Cities increasingly rely on these automated technologies for public safety and infrastructure maintenance.

However, compared to humans, computers see with a kind of tunnel vision that leaves them vulnerable to attacks with potentially catastrophic results. For example, a human driver, seeing graffiti covering a stop sign, will still recognize it and stop the car at an intersection. The graffiti might cause a self-driving car, on the other hand, to miss the stop sign and plow through the intersection. And, while human minds can filter out all sorts of unusual or extraneous visual information when making a decision, computers get hung up on tiny deviations from expected data.
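As a rough illustration of how small those deviations can be, the sketch below uses the widely known fast gradient sign method (FGSM) in PyTorch, not necessarily the techniques studied at UC Riverside. The pixel changes are bounded by a tiny epsilon that a person would not notice, yet they can be enough to flip a classifier's prediction; the model, image tensor, and epsilon value here are placeholder assumptions.

```python
# Illustrative sketch only: a minimal FGSM-style perturbation in PyTorch.
# `model` is any image classifier returning logits; `image` is a [1, 3, H, W]
# tensor with values in [0, 1]; `label` is the true class index. All of these
# are hypothetical placeholders, not part of the UCR research.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Return a copy of `image` nudged by at most `epsilon` per pixel."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel slightly in the direction that increases the loss,
    # so the change is imperceptible to a human but may flip the prediction.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```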

This is because the human brain is enormously complex and can weigh vast amounts of sensory data and past experience at once, arriving almost instantly at a decision appropriate to the situation. Computers, by contrast, rely on mathematical algorithms trained on datasets. Their creativity and cognition are constrained by the limits of technology, math, and human foresight.

Malicious actors can exploit this vulnerability by changing how a computer sees an object, either by altering the object itself or some aspect of the software involved in the vision technology. Other attacks can manipulate the decisions the computer makes about what it sees. Either approach could spell calamity for individuals, cities, or companies. 
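For attacks that alter the object itself, much like graffiti on a sign, one widely studied digital analogue is an adversarial patch. The sketch below is again only an illustrative assumption, not the researchers' technique: it optimizes a small patch that pushes a classifier toward an attacker-chosen label, with the model, data loader, target label, and patch size all hypothetical.

```python
# Illustrative sketch only: optimizing a small adversarial "patch" that,
# when pasted onto images (loosely analogous to graffiti on a sign), pushes
# a classifier toward an attacker-chosen label. The model, data loader, and
# target label are hypothetical placeholders.
import torch
import torch.nn.functional as F

def train_adversarial_patch(model, data_loader, target_label,
                            size=32, steps=50, lr=0.05):
    patch = torch.rand(3, size, size, requires_grad=True)  # start from noise
    optimizer = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        for images, _ in data_loader:
            patched = images.clone()
            # Paste the patch into a fixed corner of every image.
            patched[:, :, :size, :size] = patch.clamp(0.0, 1.0)
            target = torch.full((images.size(0),), target_label,
                                dtype=torch.long)
            # Minimizing this loss steers predictions toward `target_label`.
            loss = F.cross_entropy(model(patched), target)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return patch.clamp(0.0, 1.0).detach()
```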

A team of researchers at UC Riverside’s Bourns College of Engineering is working on ways to foil attacks on computer vision systems. To do that, Salman Asif, Srikanth Krishnamurthy, Amit Roy-Chowdhury, and Chengyu Song are first figuring out which attacks work.

“People would want to do these attacks because there are lots of places where machines are interpreting data to make decisions,” said Roy-Chowdhury, the principal investigator on a recently concluded DARPA AI Explorations program called Techniques for Machine Vision Disruption. “It might be in the interest of an adversary to manipulate the data on which the machine is making a decision. How does an adversary attack a data stream so the decisions are wrong?”