AI and Election Integrity

By Rehan Mirza

Published 22 February 2024

We don’t yet know the full impact of artificial intelligence-generated deepfake videos on misinforming the electorate. And it may be the narrative around them — rather than the deepfakes themselves — that most undermines election integrity.

Last month, a robocall impersonating U.S. President Joe Biden went out to New Hampshire voters, advising them not to vote in the state's presidential primary election. The voice, generated by artificial intelligence, sounded quite real.

“Save your vote for the November election,” the voice stated, falsely claiming that voting in the primary would prevent voters from participating in the November general election.

The robocall incident reflects a growing concern that generative AI will make it cheaper and easier to spread misinformation and run disinformation campaigns. Last week, the Federal Communications Commission issued a ruling declaring AI-generated voices in robocalls illegal.

Deepfakes have already affected other elections around the globe. In recent elections in Slovakia, for example, AI-generated audio recordings circulated on Facebook, impersonating a liberal candidate discussing plans to raise alcohol prices and rig the election. During the February 2023 Nigerian elections, an AI-manipulated audio clip falsely implicated a presidential candidate in plans to manipulate ballots. With elections this year in over 50 countries, involving half the globe's population, there are fears deepfakes could seriously undermine their integrity.

Media outlets including the BBC and the New York Times sounded the alarm on deepfakes as far back as 2018. However, in past elections, including the 2022 U.S. midterms, the technology did not produce believable fakes and was not accessible enough, in terms of both affordability and ease of use, to be “weaponized for political disinformation.” Instead, those looking to manipulate media narratives relied on simpler and cheaper ways to spread disinformation, including mislabeling or misrepresenting authentic videos, text-based disinformation campaigns, or just plain old lying on air.

As Henry Ajder, a researcher on AI and synthetic media, writes in a 2022 Atlantic piece, “It’s far more effective to use a cruder form of media manipulation, which can be done quickly and by less sophisticated actors, than to release an expensive, hard-to-create deepfake, which actually isn’t going to be as good a quality as you had hoped.”