TRUTH DECAY
Social Media Manipulation in the Era of AI

By Doug Irving

Published 7 September 2024

China is not the only U.S. adversary exploring the potential propaganda gold mine that AI has opened. But China provides a useful case study, in part because its disinformation efforts seem to be getting bolder.

Li Bicheng never would have aroused the interest of RAND researchers in his early career. He was a Chinese academic, a computer scientist. He held patents for an online pornography blocker. Then, in 2019, he published a paper that should have raised alarms worldwide.

In it, he sketched out a plan for using artificial intelligence to flood the internet with fake social media accounts. They would look real. They would sound real. And they could nudge public opinion without anyone really noticing. His coauthor was a member of the Chinese military’s political warfare unit.

Li’s vision provides a glimpse of what the future of social media manipulation might look like. In a recent paper, RAND researchers argue it would pose a direct threat to democratic societies around the world. There is no evidence that China has acted on Li’s proposal, they noted, but that should not give anyone any comfort.

“If they do a good enough job,” said William Marcellino, a senior behavioral scientist at RAND, “I’m not sure we would know about it.”

China has never really been known for the sophistication of its online disinformation efforts. It has an army of internet trolls working across a vast network of fake social media accounts. Their posts are often easy to spot. They sometimes appear in the middle of the night in the United States—working hours in China. They know the raw-nerve issues to touch, but they often use phrases that no native English speaker would use. One recent post about abortion called for legal protections for all “preborn children.”

Li Bicheng saw a way to fix all of that. In his 2019 paper, he described an AI system that would create not just posts, but personas. Accounts generated by such a system might spend most of the time posting about fake jobs, hobbies, or families, researchers warned. But every once in a while, they could slip in a reference to Taiwan or to the social wrongs of the United States. They would not require an army of paid trolls. They would not make mistakes. And little by little, they could seek to bend public opinion on issues that matter to China.

In a nation as hyperpolarized as the United States, the demand for authentic-sounding memes and posts supporting one controversial side or another will always be high. Li’s system would provide a virtually never-ending supply.