How Deepfakes Are Being Used
There are YouTube channels devoted to deepfaked celebrities doing everyday things, like one of Keanu Reeves hyping himself up to make a phone call. Deepfake technologies have also been used extensively in the dubbing process to match an actor's lip movements to the audio track, creating a more seamless experience across languages. The Center for Innovation in Teaching and Learning at Illinois has been exploring educational applications for deepfakes by helping faculty create AI avatars with HeyGen.
At their worst, deepfakes undermine performers' livelihoods and well-being. The voice-acting industry has been particularly hard hit because companies are using AI-generated narration instead of hiring actors. Even worse, actresses and Twitch streamers are finding that their likenesses have been used to create pornographic content without their consent. While deepfakes are opening up new creative possibilities, these abuses undermine artists' ability to work in the entertainment industry on their own terms.
What do you expect to see regarding the use of political deepfakes during this presidential election cycle?
Voters across the political spectrum need to be especially vigilant this election season when they encounter images, video and sound recordings of candidates. We've already seen two prominent examples of deepfakes being used to misinform and mislead voters. During the primaries, a deepfaked Joe Biden voice robocalled New Hampshire residents to discourage them from voting, and the BBC reported that AI-generated images of Donald Trump were being used by political advocacy groups to court Black voters. Social media provides an international platform to share deepfaked content quickly and anonymously, so be especially wary of the content you encounter on Facebook, X, TikTok and Instagram.
At the same time, we’re seeing legislators adopt AI to promote conversations about accessibility and inclusion. Rep. Jennifer Wexton of Virginia delivered an address using an AI-generated version of her voice to foreground how adaptive technologies support people with neurological disorders like hers. I think that we’re going to see more candidates address AI and suggest policies to address its impact on political discourse, art and privacy.
Is there any regulation of deepfake images or videos? What are some of the issues with regulating them?
There's not yet a unified approach to addressing the spread of deepfaked content, but we're seeing increased momentum within the EU and the U.S. to label images and video created with generative AI tools. So far, this movement has been led by Meta through Instagram and Facebook, as well as prominent news agencies. However, it's difficult to create firm guidelines for labeling content consistently because photo manipulation tools like Photoshop are so common.
To complicate matters further, training AI on copyrighted content is generally legal. Courts have tended to characterize it as fair use because the training process transforms the content into an algorithmic representation of patterns. Access to content can still be limited by licensing agreements and terms of service.
If you are a content creator, it’s especially important for you to review a website’s terms of service to see if they permit data mining. If the site does, then anything you post can be used to train generative AI tools. Some sites like DeviantArt and Facebook allow you to opt out. We’re also seeing the rise of tools to protect artists and their work. Programs like Glaze and AntiFake introduce noise into images and audio to disrupt the deep learning process, making protected content more difficult to deepfake.
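The core idea behind these protective tools can be shown with a toy sketch. This is not Glaze's or AntiFake's actual algorithm — those craft perturbations optimized against specific models — but bounded random noise, applied here with NumPy, illustrates the scale involved: the change stays far below what the eye notices while still altering the raw pixel values a model trains on.

```python
import numpy as np

def add_protective_noise(image, epsilon=2.0, seed=0):
    """Add a small, visually imperceptible perturbation to an image array.

    Toy illustration only: real tools like Glaze and AntiFake optimize the
    perturbation to disrupt a model's feature extraction. Here, uniform
    random noise bounded by `epsilon` (out of a 0-255 pixel range) simply
    shows how little change is needed at the pixel level.
    """
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-epsilon, epsilon, size=image.shape)
    # Clip to the valid pixel range before converting back to 8-bit.
    perturbed = np.clip(image.astype(np.float64) + noise, 0, 255)
    return perturbed.astype(np.uint8)

# A synthetic 64x64 grayscale "image" of uniform mid-gray pixels.
image = np.full((64, 64), 128, dtype=np.uint8)
protected = add_protective_noise(image)

# The per-pixel change is bounded by epsilon, invisible to a viewer
# but present in the data a scraper would feed to a training pipeline.
max_change = int(np.abs(protected.astype(int) - image.astype(int)).max())
print(max_change)
```

The key design point is the bound: because every pixel moves by at most a couple of values out of 255, the protected copy is indistinguishable from the original to a human, which is what lets artists keep posting their work publicly.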
How can someone detect a fake image or video?
The best detector is you! Automated detection is difficult because these tools rely on the same pattern recognition techniques that are used to create deepfakes in the first place. I recommend using the SILL method to evaluate content: stop, investigate, look and listen.
1. Stop: Deepfakes are designed to make you react. Pausing before you re-share or act on content gives you time to evaluate it.
2. Investigate the source: Using reverse image searches like Google Lens or TinEye can help you identify the original source of an image.
3. Look for imperfections: Deepfakes tend to be too perfect, so check for natural imperfections like blur, camera movement and shadows. Limited use of hand gestures or an unexpected number of fingers can also be signs of an AI-generated image.
4. Listen: Deepfaked voices often sound monotone, lacking the rise and fall in pitch of human speech.
If you’re curious to learn more about how deepfakes are made (and to test your deepfake spotting skills), check out MIT’s “In Event of Moon Disaster.”
Jodi Heckel is the arts and humanities editor for the News Bureau at the University of Illinois at Urbana-Champaign. The article was originally posted to the university's website.