State Lawmakers Eye Promise, Pitfalls of AI Ahead of November Elections

“Election propaganda and disinformation has been part of the zeitgeist for the existence of humanity,” said Euer, who chairs the Senate Judiciary Committee. “Now, we have high-tech tools to do it.”

Connecticut state Sen. James Maroney, a Democrat, agreed that concerns about AI’s effects on elections are legitimate. But he emphasized that most deepfakes target women with digitally generated nonconsensual intimate images or revenge porn. Research firm Sensity AI has tracked online deepfake videos for years, finding 90% of them are nonconsensual porn, mostly targeting women.

Maroney sponsored legislation this year that would have regulated artificial intelligence and criminalized deepfake porn and false political messaging. That bill passed the state Senate, but not the House. Democratic Gov. Ned Lamont opposed the measure, saying it was premature and potentially harmful to the state’s technology industry.

While Maroney has concerns about AI, he said the upsides far outweigh the risks. For example, AI can help lawmakers communicate with constituents through chatbots or translate messaging into other languages.

Top election officials on AI

During one session in Louisville, New Hampshire Republican Secretary of State David Scanlan said AI could improve election administration by making it easier to organize election statistics or get official messaging out to the public.

Still, New Hampshire experienced firsthand some of the downsides of the new technology earlier this year, when voters received robocalls that used artificial intelligence to imitate President Joe Biden’s voice to discourage participation in a January primary.

Prosecutors charged the political operative who allegedly organized the fake calls with more than a dozen crimes, including voter suppression, and the Federal Communications Commission proposed a $6 million fine against him.

While the technology may be new, Scanlan said election officials have always had to keep a close eye on misinformation about elections and extreme tactics by candidates or their supporters and opponents.

“You might call them dirty tricks, but it has always been in candidates’ arsenals, and this really was a form of that as well,” he said. “It’s just more complex.”

The way state officials responded, by quickly identifying the calls as fake and investigating their origins, serves as a playbook for other states ahead of November’s elections, said Cait Conley, a senior adviser at the federal Cybersecurity and Infrastructure Security Agency focused on election security.

“What we saw New Hampshire do is best practice,” she said during the presentation. “They came out quickly and clearly and provided guidance, and they really just checked the disinformation that was out there.”

Kentucky Republican Secretary of State Michael Adams told Stateline that AI could prove challenging for swing states in the presidential election. But he said it may still be too new of a technology to cause widespread problems for most states.

“Of the 99 things that we chew our nails over, it’s not in the top 10 or 20,” he said in an interview. “I don’t know that it’s at a maturity level that it’ll be utilized everywhere.”

Adams this year received the John F. Kennedy Profile in Courage Award for championing the integrity of elections despite pushback from fellow Republicans. He said AI is yet another obstacle facing election officials who already must combat challenges including disinformation and foreign influence.

More bills coming

In the absence of congressional action, states have increasingly sought to regulate the quickly evolving world of AI on their own.

NCSL this year tracked AI bills in at least 40 states, Puerto Rico, the Virgin Islands and Washington, D.C.

As states examine the issue, many are looking at Colorado, which this year became the first state to create a sweeping regulatory framework for artificial intelligence. Technology companies opposed the measure, worried it would stifle innovation in a new industry.

Colorado Senate Majority Leader Robert Rodriguez, a Democrat who sponsored the bill, said lawmakers modeled much of their language on European Union regulations to avoid creating mismatched rules for companies using AI. Still, the law will be examined by a legislative task force before going into effect in 2026.

“It’s a first-in-the-nation bill, and I’m under no illusion that it’s perfect and ready to go,” he said. “We’ve got two years.”

When Texas lawmakers reconvene next January, state Rep. Giovanni Capriglione expects to see a flurry of AI bills.

A Republican and co-chair of a state artificial intelligence advisory council, Capriglione said he’s worried about how generative AI may influence how people vote — or even if they vote — in both local and national elections.

“Without a doubt, artificial intelligence is being used to sow disinformation and misinformation,” he said, “and I think as we get closer to the election, we’ll see a lot more cases of it being used.”

Kevin Hardy covers business, labor and rural issues for Stateline from the Midwest. This article originally appeared in Stateline.