DEEPFAKES & THE ELECTION

Voters: Here’s How to Spot AI “Deepfakes” That Spread Election-Related Misinformation

By Emma Folts

Published 18 October 2024

For years, people have spread misinformation by manipulating photos and videos with tools such as Adobe Photoshop. Those fakes are comparatively easy to recognize, however, and harder for bad actors to replicate at scale. Generative AI systems let users create convincing content quickly and easily, and domestic and foreign adversaries can use deepfakes and other AI-generated material to spread false information about a politician’s platform or doctor their speeches.

As you scroll through social media, you come across a video of a presidential candidate. The video depicts the candidate saying something outrageous, in their own voice, with realistic mannerisms. Shocked, you share the video with your friends and family.

The video, however, was fake. The candidate never made the inflammatory remark. The content was created with generative artificial intelligence (AI) systems, which have skyrocketed in popularity ahead of the 2024 presidential election. While these tools can be used harmlessly, they allow bad actors to create misinformation more quickly and realistically than before, potentially increasing their influence on voters. 

Generative AI systems, such as ChatGPT, are trained on large datasets to create written, visual or audio content in response to prompts. When fed real images, some algorithms can produce fake photos and videos known as “deepfakes.”

Domestic and foreign adversaries can use deepfakes and other forms of generative AI to spread false information about a politician’s platform or doctor their speeches, said Thomas Scanlon, principal researcher at Carnegie Mellon University’s Software Engineering Institute and an adjunct professor at its Heinz College of Information Systems and Public Policy. 

“The concern with deepfakes is how believable they can be, and how problematic it is to discern them from authentic footage,” Scanlon said. 

Voters have seen more ridiculous AI-generated content, such as a photo of Trump appearing to ride a lion, than an onslaught of hyper-realistic deepfakes full of falsehoods, according to the Associated Press. Still, Scanlon is concerned that voters will be exposed to more harmful generative content on or shortly before Election Day, such as videos depicting poll workers saying an open voting location is closed.

That sort of misinformation, he said, could prevent voters from casting their ballots because there will be little time to correct the false information. Overall, AI-generated deceit could further erode voters’ trust in the country’s democratic institutions and elected officials, according to the university’s Block Center for Technology and Society, housed in Heinz College. 

“People are just constantly being bombarded with information, and it’s up to the consumer to determine: What is the value of it, but also, what is their confidence in it? And I think that’s really where individuals may struggle,” said Randall Trzeciak, director of the Heinz College master’s program in information security policy and management.