AI and the Spread of Fake News Sites: Experts Explain How to Counteract Them
With national elections looming in the United States, concerns about misinformation are sharper than ever, and advances in artificial intelligence (AI) have made distinguishing genuine news sites from fake ones even more challenging. AI programs, especially large language models (LLMs), which are trained on vast data sets to produce fluent, natural-reading text, have automated many aspects of fake news generation. The new text-to-video generator Sora, which produces highly detailed, Hollywood-quality clips, further raises concerns about the easy spread of fake footage.
Virginia Tech experts explore three different facets of the AI-fueled spread of fake news sites and the efforts to combat them.
Walid Saad, a professor in the Bradley Department of Electrical and Computer Engineering at Virginia Tech, on how technology helps generate, and identify, fake news
“The ability to create websites that host fake news or fake information has been around since the inception of the Internet; such sites pre-date the AI revolution,” said Walid Saad, engineering and machine learning expert at Virginia Tech. “With the advent of AI, it became easier to sift through large amounts of information and create ‘believable’ stories and articles. Specifically, LLMs made it easier for bad actors to generate what appears to be accurate information. This AI-assisted refinement of how information is presented makes such fake sites more dangerous.
“The websites keep operating as long as people are feeding from them. If misinformation is being widely shared on social networks, the individuals behind the fake sites will be motivated to continue spreading the misinformation.
“Addressing this challenge requires collaboration between human users and technology. While LLMs have contributed to the proliferation of fake news, they also offer potential tools to detect and weed out misinformation. Human input, whether from readers, administrators, or other users, is indispensable. Users and news agencies bear the responsibility not to amplify or share false information, and users who report potential misinformation help refine AI-based detection tools, speeding the identification process.
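Saad’s description of a human-AI feedback loop, in which user reports sharpen automated detection, can be illustrated with a small sketch. The following Python example is purely hypothetical and is not a tool described by Saad or Virginia Tech: it assumes a toy scikit-learn text classifier, invented placeholder articles and labels, and a made-up handle_user_report helper that folds moderator-confirmed reports back into the training data.

```python
# Hypothetical sketch of the feedback loop described above: user reports of
# suspected misinformation become new labeled examples that refine a text
# classifier. All articles, labels, and helper names here are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny seed corpus with placeholder labels: 1 = misinformation, 0 = legitimate.
articles = [
    "Miracle cure suppressed by doctors, share before it is deleted",
    "City council approves budget for new public library branch",
    "Secret memo proves the election results were fabricated",
    "University study finds moderate exercise improves sleep quality",
]
labels = [1, 0, 1, 0]

# TF-IDF features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(articles, labels)

def handle_user_report(text: str, confirmed_fake: bool) -> None:
    """Fold a moderator-confirmed user report back into the training set."""
    articles.append(text)
    labels.append(1 if confirmed_fake else 0)
    model.fit(articles, labels)  # retrain on the expanded corpus

# A new report arrives and, once verified, sharpens the detector.
handle_user_report("Shocking footage shows candidate admitting fraud", True)
print(model.predict(["Leaked video proves massive fraud, spread this now"]))
```

In a real deployment the classifier, data pipeline, and verification workflow would be far more elaborate; the point of the sketch is only that each confirmed report becomes a new labeled example, which is how user feedback can speed identification.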
“Crucially, while these measures aim to help users distinguish authentic news from fake news, they must align with the principles of the First Amendment and refrain from censoring free speech,” Saad said.
Cayce Myers, a professor of public relations and director of graduate studies in the School of Communication at Virginia Tech, on what legal measures can and can’t do
“Regulating disinformation in political campaigns presents a multitude of practical and legal issues,” said Cayce Myers, communications policy expert at Virginia Tech.