States Rush to Combat AI Threat to Elections

On March 27, Oregon became the latest state — after Wisconsin, New Mexico, Indiana and Utah — to enact a law on AI-generated election disinformation. Florida and Idaho lawmakers have passed their own measures, which are currently on the desks of those states’ governors.

Arizona, Georgia, Iowa and Hawaii, meanwhile, have all passed at least one bill — in the case of Arizona, two — through one chamber.

As that list of states makes clear, red, blue and purple states have all devoted attention to the issue.

States Urged to Act
Meanwhile, on March 25, the NewDEAL Forum, a progressive advocacy group, released a report on how to combat the AI threat to elections, drawing on input from four Democratic secretaries of state.

“[G]enerative AI has the ability to drastically increase the spread of election mis- and disinformation and cause confusion among voters,” the report warned. “For instance, ‘deepfakes’ [AI-generated images, voices or videos] could be used to portray a candidate saying or doing things that never happened.”

The NewDEAL Forum report urges states to take several steps to respond to the threat, including requiring that certain kinds of AI-generated campaign material be clearly labeled; conducting role-playing exercises to help anticipate the problems that AI could cause; creating rapid-response systems for communicating with voters and the media, in order to knock down AI-generated disinformation; and educating the public ahead of time.

Secretaries of State Steve Simon of Minnesota, Jocelyn Benson of Michigan, Maggie Toulouse Oliver of New Mexico and Adrian Fontes of Arizona provided input for the report. All four are actively working to prepare their states on the issue.

Loopholes Seen
Despite the flurry of activity by lawmakers, officials and outside experts, several of the measures examined in the Voting Rights Lab analysis contain weaknesses or loopholes that raise questions about how effectively they will protect voters from AI-driven deception.

Most of the bills require that creators add a disclaimer to any AI-generated content, noting the use of AI, as the NewDEAL Forum report recommends.

But the new Wisconsin law, for instance, requires the disclaimer only for content created by campaigns, meaning deepfakes produced by outside groups but intended to influence an election — hardly an unlikely scenario — would be unaffected.

In addition, the measure is limited to content produced by generative AI, even though experts say synthetic content made without AI, such as images doctored in Photoshop or scenes rendered with CGI, sometimes referred to as “cheap fakes,” can be just as effective at fooling viewers or listeners and can be easier to produce.

For that reason, the NewDEAL Forum report recommends that state laws cover all synthetic content, not just content that uses AI.

The Indiana, Utah and Wisconsin laws also contain no criminal penalties; violations are punishable only by a $1,000 fine, raising questions about whether they will work as a deterrent.

The Arizona and Florida bills do include criminal penalties. But Arizona’s two bills apply only to digital impersonation of a candidate, meaning plenty of other forms of AI-generated deception — impersonating a news anchor reporting a story, for instance — would remain legal.

And one of the Arizona bills, as well as New Mexico’s law, applies only in the 90 days before an election, even though AI-generated content that appears before that window could still affect the vote.

Experts say the shortcomings exist in large part because, since the threat is so new, states don’t yet have a clear sense of exactly what form it will take.

“The legislative bodies are trying to figure out the best approach, and they’re working off of examples that they’ve already seen,” said Bellamy, pointing to the examples of the Slovakian audio and the Biden robocalls.

“They’re just not sure what direction this is coming from, but feeling the need to do something.”

Bellamy added: “I think that we will see the solutions evolve. The danger of that is that AI-generated content and what it can do is also likely to evolve at the same time. So hopefully we can keep up.”

Zachary Roth is the National Democracy Reporter for States Newsroom. This article originally appeared in Stateline.