Protecting Yourself against Facial Recognition Software

Published 6 August 2020

The rapid rise of facial recognition systems has placed the technology into many facets of our daily lives, whether we know it or not. What might seem innocuous when Facebook identifies a friend in an uploaded photo grows more ominous in enterprises such as Clearview AI, a private company that trained its facial recognition system on billions of images scraped without consent from social media and the internet.

But thus far, people have had few protections against this use of their images—apart from not sharing photos publicly at all.

A new research project from the University of Chicago Department of Computer Science provides a powerful new protection mechanism. Named Fawkes, the software tool “cloaks” photos to trick the deep learning models that power facial recognition, without changes visible to the human eye. With enough cloaked photos in circulation, a computer observer will be unable to identify a person even from an unaltered image, protecting individual privacy from unauthorized and malicious intrusions. The tool targets unauthorized use of personal images, and has no effect on models built with legitimately obtained images, such as those used by law enforcement.
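To make the idea concrete, the sketch below shows what cloaking amounts to in code: a small, bounded perturbation is optimized so that a photo’s feature representation drifts toward a different identity while its pixels barely change. The feature extractor, loss, budget, and step count here are illustrative assumptions, not the released Fawkes implementation, which uses a more careful, perceptually bounded optimization against chosen target identities.

```python
# Simplified sketch of "cloaking": nudge a photo's pixels so a feature
# extractor maps it near a different identity's features, while keeping the
# change too small to notice. Illustrative toy only, not the Fawkes algorithm.
import torch
import torch.nn.functional as F
from torchvision import models

# Stand-in feature extractor (assumption: a real system would use a face model).
extractor = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
extractor.fc = torch.nn.Identity()   # drop the classifier, keep the features
extractor.eval()

def cloak(photo: torch.Tensor, target_photo: torch.Tensor,
          budget: float = 0.03, steps: int = 100, lr: float = 0.01) -> torch.Tensor:
    """Return a cloaked copy of `photo` (tensors shaped 1x3xHxW, values in [0, 1])."""
    with torch.no_grad():
        target_features = extractor(target_photo)
    delta = torch.zeros_like(photo, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        # Pull the cloaked photo's features toward the target identity's features.
        loss = F.mse_loss(extractor(photo + delta), target_features)
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            delta.clamp_(-budget, budget)   # keep the per-pixel change imperceptible
    return (photo + delta).clamp(0, 1).detach()
```

The essential tension is the budget: the perturbation must move the image far enough in feature space to mislead a model trained on cloaked photos, yet stay small enough in pixel space that the photo looks unchanged to people.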

“It’s about giving individuals agency,” said Emily Wenger, a third-year PhD student and co-leader of the project with first-year PhD student Shawn Shan. “We’re not under any delusions that this will solve all privacy violations, and there are probably both technical and legal solutions to help push back on the abuse of this technology. But the purpose of Fawkes is to provide individuals with some power to fight back themselves, because right now, nothing like that exists.”

UChicago notes that the technique builds on the fact that machines “see” images differently than humans do. To a machine learning model, an image is simply an array of numbers representing each pixel, which systems known as neural networks mathematically organize into features they use to distinguish between objects or individuals. Given enough different photos of a person, these models learn the unique features that identify that person in new photos, a technique used for security systems, smartphones, and, increasingly, law enforcement, advertising, and other controversial applications.
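A rough sketch of that pixels-to-features pipeline is below. A generic pretrained CNN stands in for a real face-embedding network (an assumption for illustration; production systems use models trained specifically on faces), and the matching threshold is arbitrary.

```python
# Minimal sketch of how a recognition pipeline "sees" a face: pixels become
# numbers, a neural network turns them into a feature vector, and identities
# are compared by the similarity of those vectors.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),          # image -> tensor of pixel values in [0, 1]
])

# Pretrained CNN with its classification layer removed, used as a feature extractor.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

def embed(path: str) -> torch.Tensor:
    """Map a photo to a normalized 512-dimensional feature vector."""
    pixels = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return F.normalize(backbone(pixels), dim=1)

# Two photos are judged to show the same person if their features are close
# enough; the 0.8 cutoff is purely illustrative.
similarity = F.cosine_similarity(embed("photo_a.jpg"), embed("photo_b.jpg")).item()
print("match" if similarity > 0.8 else "no match")
```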