AI and the Spread of Fake News Sites: Experts Explain How to Counteract Them

“Despite these challenges, there is a global recognition that something needs to be done,” said Myers, an expert at Virginia Tech. “This is vitally important given that the U.S., U.K., India, and the E.U. all have important elections in 2024, which will likely see a host of disinformation posted across social media.

“Easy access to AI means that disinformation, particularly deepfakes, is easier to create and disseminate, and the law will have a tough time catching up. Legal accountability for deepfake content presents real logistical problems, as many of the individuals creating the content may never be identified or caught. Some of these creators live outside the country where their content is posted, which makes it harder to hold them accountable.

“Technological developments such as Sora show why so many people are concerned about the connection between AI and disinformation. While Sora has not yet been released to the public, it demonstrates that users will face ever fewer barriers to creating high-quality AI-generated content. Generative AI video and images are so good that they cannot be distinguished from actual photos and footage of real events. Even watermarking and disclosures may not be enough, because they can be altered or removed. As a result, politicians, campaigns, and voters are entering a new political reality in which disinformation will be higher quality and more prolific.

“In the U.S., under Section 230 of the Communications Decency Act, social media sites that host political disinformation, including deepfakes, are legally immune from liability. Combating election disinformation therefore falls largely to the platforms’ self-imposed terms of use, which have drawn criticism and allegations of unfair bias.

“There have been calls to hold AI platforms legally responsible for disinformation, an approach that may result in internal guardrails against creating disinformation. However, AI platforms are still developing and proliferating, so a foolproof structure that prevents AI from creating disinformation is not in place and likely would be impossible to create,” Myers said.

Julia Feerrar, an associate professor with the University Libraries at Virginia Tech and head of its Digital Literacy Initiative, on how you can guard against disinformation:

“AI-generated and other false or misleading online content can look very much like quality content,” said Feerrar, a librarian and digital literacy educator. “As AI continues to evolve and improve, we need strategies for detecting fake articles, videos, and images that don’t rely solely on how the content looks.

“One of the most powerful things you can do to identify misinformation, whether AI-generated or not, is to look at where it’s coming from. Is it from a reputable, professional news organization or from a website or account you don’t recognize? If you’re even a little unsure, open a new browser tab and do a quick Google search for the name of the website. The goal is to find a description that isn’t from the original source itself — for example, many organizations will have a Wikipedia article that describes them.

“Experts refer to this process as lateral reading: searching beyond the content itself to find out more about what you’re looking at. Another way to read laterally is to see if other trusted news outlets are reporting on the same headline you’re seeing,” said Feerrar.  

More tips from Feerrar for evaluating news articles:

·  Fake news content is often designed to appeal to our emotions, so it’s important to pause when something online sparks a big emotional reaction.

·  Verify headlines and image content by adding “fact-check” to your Google search.

·  Very generic website titles can be a red flag for AI-generated news. 

·  Some AI-generated articles contain leftover error text, along the lines of being ‘unable to fulfill this request’ because creating the article violated the AI tool’s usage policy. Sites with little human oversight may fail to delete these messages.

·  Current red flags for AI-generated images include an overall hyper-real or otherwise strange appearance and unnatural-looking hands and feet.