Social Media Manipulation in the Era of AI

His paper had a touch of science fiction to it when it appeared in a Chinese national defense journal in 2019. Then, three years later, an AI model known as ChatGPT made its public debut. And everything changed.

ChatGPT and other AI systems like it are known as large language models (LLMs). They ingest huge amounts of text—around 10 trillion words, in the case of GPT-4—and learn to mimic human speech. They are “very good at saying what might be said,” RAND researchers wrote, “based on what has been said before.”

You could, for example, ask an LLM to write a tweet in southern-accented American English about its favorite NASCAR driver. And it could respond: “Can’t wait to see my boy Kyle Busch tearing up the asphalt at Bristol Motor Speedway. He’s a true legend. #RowdyNation.”

LLMs can respond to jokes and cultural references. They can engage users in back-and-forth debates. Some multimodal models can generate photo-quality images and, increasingly, audio and video. If a country like China wanted to build the kind of social media manipulation system Li Bicheng described, a multimodal LLM would be the way to do it.

“The evidence suggests that parts of the Chinese government are interested in this,” said Nathan Beauchamp-Mustafaga, a China expert and senior policy researcher at RAND. The rise of LLMs, he added, “doesn’t necessarily make it more likely that China will try to interfere in the 2024 U.S. elections. But if Beijing does decide to get involved, it would very likely make any potential interference much more effective.”

China is not the only U.S. adversary exploring the potential propaganda gold mine that AI has opened. Earlier this summer, investigators took down a sophisticated Russian “bot farm” that was using AI to create fake accounts on X, the social media platform formerly known as Twitter. Those accounts had individual biographies and profile pictures and could post content, comment on other posts, and build up followers. The programmers behind the effort called them “souls.” Their purpose, law enforcement officials said, was to “assist Russia in exacerbating discord and trying to alter public opinion.”

But China provides a useful case study, in part because its disinformation efforts seem to be getting bolder. U.S. officials believe fake Chinese accounts tried to sway a handful of congressional races in the 2022 midterms. Taiwanese officials have also accused China of producing a flurry of fake news videos just before Taiwan’s presidential election this year. Some featured AI-generated hosts—including, in one strange case, Santa Claus.

Pro-China accounts have spread AI images of world leaders screaming and crying. Last year, they claimed that the U.S. had started a devastating wildfire in Hawaii by testing a “weather weapon,” and they drew attention to those posts with AI-generated photos showing a hurricane of fire and smoke bearing down on houses and high-rises. Another meme, from a suspected Chinese account, showed the Statue of Liberty with a torch in one hand and a rifle in the other. But that image, which dates from 2023, was easier to spot as a fake than more recent AI creations: the statue had seven fingers on its right hand.

The most recent U.S. threat assessment notes that China is demonstrating a “higher degree of sophistication” in its influence operations. And it warns: “The PRC (People’s Republic of China) may attempt to influence the U.S. elections in 2024 at some level.”


“AI is soon going to be everywhere,” Beauchamp-Mustafaga said. “The Chinese government has not publicly embraced Li Bicheng’s vision, of course; it denies doing anything like this at all. But we have to assume that AI manipulation is ubiquitous, it’s proliferating, and we’re going to have to learn to live with it. That’s a really scary thing.”

Social media platforms like Facebook and X should redouble their efforts to identify, attribute, and remove fake accounts, the RAND researchers concluded. Media companies and other legitimate content creators should develop digital watermarks or other ways to show that their pictures and videos are real. Federal regulators should at least weigh the pros and cons of requiring social media companies to verify their users’ identities, much as banks do.

But all of those steps are going to take time and require trade-offs. Getting them right will require an open and informed public conversation. That needs to start now, the researchers wrote, not after “another foreign (or domestic) attack on the U.S. democratic process in the 2024 election.” In the meantime, they added, the best defense is likely going to be a heavy dose of skepticism from anyone who ventures onto social media.

“Human beings have spent hundreds of thousands of years interacting with our environment through our senses,” Marcellino said. “Now those senses can be fooled.”

“If you get steamed up over something,” he added, “if you see it and just get immediately outraged, you should probably stop and ask yourself, ‘Am I maybe taking the bait?’”

Doug Irving is a communications analyst at RAND. This article is published courtesy of RAND.