How AI Bots Spread Misinformation Online and Undermine Democratic Politics

By Sophia Melanson Ricciardone

Published 27 July 2024

As we navigate this era of digital discourse, awareness of blind spots in our social psychology is our best defense. Understanding how cues or triggers affect us can reduce their influence over time. The more aware we are of bots and how they work, the more able we are to protect ourselves from misleading rhetoric, ensuring our democratic processes remain robust and inclusive.

Consider a typical morning routine: coffee in hand, you peruse Twitter (now rebranded as X) to catch up on the news. Headlines appear among a flurry of tweets on everything from memes about political leaders to cultural Marxism, free speech, making America great again and draining the swamp.

Before your day has even begun, a burst of disparate ideas coalesces in your mind in response to the appearance of a single word or catchphrase. It’s a scenario repeated daily, in which snippets of information graft themselves onto our existing views and biases, influencing how we interpret online discourse and those who engage in it.

In the heated space of contemporary politics, popularized words and catchphrases wield a lot of influence. Controversial rallying cries like “build the wall” and “Trudeau must go” regularly appear on social media, punctuating debate and discourse with palpable emotional fervor.

These phrases are more than mere words; they are ideological shorthand that seeks to galvanize people and spark outrage online like never before.

But, in our increasingly digitized world, how do we know whether the accounts we interact with online are other human beings or bots? And given the powerful influence this kind of rhetoric can have, what impact do these bots have on our decision-making and democratic processes?

AI Bots
My PhD research focused on the rise of “botaganda” — online content circulated by automated accounts, or bots, for electioneering purposes.

Bots are automated accounts on social media that can post tweets, like and share content, or follow users without a person doing any of it manually.
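To see how little manual effort this automation requires, consider a minimal sketch in Python using the third-party tweepy library for the Twitter/X API. This is an illustrative example, not code from my research; the credentials, message text, and numeric IDs are placeholders.

```python
# A minimal sketch of an automated account ("bot") acting through the
# Twitter/X API v2 via the tweepy library. All credential values and
# IDs below are placeholders.
import tweepy

client = tweepy.Client(
    consumer_key="YOUR_CONSUMER_KEY",
    consumer_secret="YOUR_CONSUMER_SECRET",
    access_token="YOUR_ACCESS_TOKEN",
    access_token_secret="YOUR_ACCESS_TOKEN_SECRET",
)

# Post a tweet with no human at the keyboard.
client.create_tweet(text="An example catchphrase, posted automatically")

# Amplify an existing post by liking and retweeting it (placeholder ID).
client.like(tweet_id=1234567890)
client.retweet(tweet_id=1234567890)

# Follow another account by its user ID (placeholder ID).
client.follow_user(target_user_id=987654321)
```

Wrapped in a scheduler and replicated across many accounts, a script this short can post, amplify and follow around the clock, which is what gives bot-driven content its scale.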

Scholars have highlighted how bots “could be used to covertly exploit weaknesses in [a person’s] character and persuade them to take action against their own best interest.”

The advent of artificial intelligence and machine learning has certainly equipped us with several advantages in contemporary life, but it has also made independent political thought much harder to achieve. By increasing the prevalence of digital misinformation, it demands that we exercise vigilance to ensure we can make informed decisions.

Understanding the social psychology that makes us susceptible to catchphrases like “drain the swamp” is integral to combating the impact of misinformation circulated online. Our social brains are susceptible to these kinds of linguistic triggers in three important ways: