• Digital Authoritarianism: Finding Our Way Out of the Darkness

    From Chinese government surveillance in Hong Kong and Xinjiang to Russia’s sovereign internet law and concerns about foreign operatives hacking the 2020 elections, digital technologies are changing global politics — and the United States is not ready to compete, Naazneen Barma, Brent Durbin, and Andrea Kendall-Taylor write. The United States and like-minded countries must thus develop a new strategic framework to combat the rise of high-tech illiberalism, but “as a first step, U.S. government officials need to understand how authoritarian regimes are using these tools to control their populations and disrupt democratic societies around the world.”

  • Bioweapons, Secret Labs, and the CIA: Pro-Kremlin Actors Blame the U.S. for Coronavirus Outbreak

    The Russian (earlier: Soviet) practice of spreading disinformation about public health threats is nothing new. During the Cold War, for example, a Soviet disinformation campaign blamed the United States for creating the AIDS virus. While epidemiologists work to identify the exact source of the Wuhan 2019-nCoV outbreak, pro-Kremlin actors are already blaming the United States for supposedly using bioweapons to disseminate the virus.

  • QAnon-ers’ Magic Cure for Coronavirus: Just Drink Bleach!

    QAnon, a fervently pro-Trump conspiracy theory that started with a series of online posts in October 2017 from an anonymous figure called “Q,” imagines a world in which Donald Trump is engaged in a secret and noble war with a cabal of pedophile-cannibals in the Democratic Party, the finance industry, Hollywood, and the “deep state.” As the global death toll from an alarming new coronavirus surged this week, Will Sommer writes, promoters of the QAnon conspiracy theory were urging their fans to ward off the illness by purchasing and drinking dangerous bleach.

  • Is There a Targeted Troll Campaign Against Lisa Page? A Bot Sentinel Investigation

    “Homewrecker.” “Traitor.” “Tramp.” These are just some of the insults flung at Lisa Page—the former FBI lawyer whom President Trump has targeted for her text messages critical of him during the 2016 election—in the almost 4,000 responses to a tweet she posted on 18 January. Public figures often receive online abuse, after all. “But the replies to Page’s tweet stand out. They likely represent a targeted trollbot attack—one that nobody has reported on until now,” Christopher Bouzy, the founder and CEO of Bot Sentinel, writes. The troll attack on Page “looks a lot like the coordinated campaigns we witnessed during the 2016 election, when a swarm of accounts would suddenly begin tweeting the same toxic messaging. All this raises a question: Who is behind the apparent trollbot activity against Page?”

  • Artificial Intelligence and the Manufacturing of Reality

    The belief in conspiracy theories highlights the flaws humans carry with them in deciding what is or is not real. The internet and other technologies have made it easier to weaponize and exploit these flaws, beguiling more people faster and more compellingly than ever before. It is likely artificial intelligence will be used to exploit the weaknesses inherent in human nature at a scale, speed, and level of effectiveness previously unseen. Adversaries like Russia could pursue goals for using these manipulations to subtly reshape how targets view the world around them, effectively manufacturing their reality. If even some of our predictions are accurate, all governance reliant on public opinion, mass perception, or citizen participation is at risk.

  • How Russia May Have Used Twitter to Seize Crimea

    Online discourse by users of social media can provide important clues about the political dispositions of communities. New research suggests that real-time social media data may have served the Kremlin, and potentially other governments, as a source of military intelligence for estimating the prospective casualties and costs of occupying foreign territory.

  • “Like” at Your Own Risk

    New “Chameleon” Attack Can Secretly Modify Content on Facebook, Twitter, or LinkedIn: That video or picture you “liked” on social media, of a cute dog, your favorite team, or a political candidate, can actually be altered in a cyberattack into something completely different, detrimental, and potentially criminal.

  • Researcher Tests “Vaccine” Against Hate

    Amid a spike in violent extremism around the world, a communications researcher is experimenting with a novel idea: whether people can be “inoculated” against hate with a little exposure to extremist propaganda, in the same manner vaccines enable human bodies to fight disease.

  • "Redirect Method": Countering Online Extremism

    In recent years, deadly white supremacist violence at houses of worship in Pittsburgh, Christchurch, and Poway demonstrated the clear line from violent hate speech and radicalization online to in-person violence. With perpetrators of violence taking inspiration from online forums, leveraging the anonymity and connectivity of the internet, and developing sophisticated strategies to spread their messages, the stakes couldn’t be higher in tackling online extremism. Researchers have developed the Redirect Method to counter white supremacist and jihadist activity online.

  • YouTube’s Algorithms Might Radicalize People – but the Real Problem Is We’ve No Idea How They Work

    Does YouTube create extremists? It’s hard to argue that YouTube doesn’t play a role in radicalization, Chico Camargo writes. “In fact, maximizing watchtime is the whole point of YouTube’s algorithms, and this encourages video creators to fight for attention in any way possible.” Society must insist on using algorithm auditing, even though it is a difficult and costly process. “But it’s important, because the alternative is worse. If algorithms go unchecked and unregulated, we could see a gradual creep of conspiracy theorists and extremists into our media, and our attention controlled by whoever can produce the most profitable content.”

  • Chinese Communist Party’s Media Influence Expands Worldwide

    Over the past decade, Chinese Communist Party (CCP) leaders have overseen a dramatic expansion in the regime’s ability to shape media content and narratives about China around the world, affecting every region and multiple languages, according to a new report. This trend has accelerated since 2017, with the emergence of new and more brazen tactics by Chinese diplomats, state-owned news outlets, and CCP proxies.

  • Combating the Latest Technological Threat to Democracy: A Comparison of Facebook and Twitter’s Deepfake Policies

    Twitter and Facebook have both recently announced policies for handling synthetic and manipulated media content on their platforms. A side-by-side comparison and analysis of Twitter and Facebook’s policies highlights that Facebook focuses on a narrow, technical type of manipulation, while Twitter’s approach contemplates the broader context and impact of manipulated media.

  • Countering Hate Speech by Detecting, Highlighting “Help Speech”

    Researchers have developed a system that leverages artificial intelligence to rapidly analyze hundreds of thousands of comments on social media and identify the fraction that defend or sympathize with disenfranchised minorities such as the Rohingya community. Human social media moderators, who couldn’t possibly manually sift through so many comments, would then have the option to highlight this “help speech” in comment sections.

  • Why the Jeffrey Epstein Saga Was the Russian Government-Funded Media’s Top Story of 2019

    In a year featuring a presidential impeachment, Brexit, mass protests in Hong Kong, and widespread geopolitical turmoil, few topics dominated the Russian government-funded media landscape quite like the arrest and subsequent suicide of billionaire financier and serial sex offender Jeffrey Epstein. Given the lack of any notable connection between Epstein and Russian interests, the focus on Epstein highlights the Kremlin’s clear prioritization of content meant to paint a negative image of the West rather than a positive image of Russia.

  • New AI-Based Tool Flags Fake News for Media Fact-Checkers

    A new artificial intelligence (AI) tool could help social media networks and news organizations weed out false stories. The tool uses deep-learning AI algorithms to determine if claims made in posts or stories are supported by other posts and stories on the same subject.