• Extremism | Researcher Tests “Vaccine” Against Hate

    By Masood Farivar

    Amid a spike in violent extremism around the world, a communications researcher is experimenting with a novel idea: whether people can be “inoculated” against hate with a little exposure to extremist propaganda, in the same manner vaccines enable human bodies to fight disease.

  • Extremism"Redirect Method": Countering Online Extremism

    In recent years, deadly white supremacist violence at houses of worship in Pittsburgh, Christchurch, and Poway demonstrated the clear line from violent hate speech and radicalization online to in-person violence. With perpetrators of violence taking inspiration from online forums, leveraging the anonymity and connectivity of the internet, and developing sophisticated strategies to spread their messages, the stakes couldn’t be higher in tackling online extremism. Researchers have developed the Redirect Method to counter white supremacist and jihadist activity online.

  • Perspective | YouTube’s Algorithms Might Radicalize People – but the Real Problem Is We’ve No Idea How They Work

    Does YouTube create extremists? It’s hard to argue that YouTube doesn’t play a role in radicalization, Chico Camargo writes. “In fact, maximizing watchtime is the whole point of YouTube’s algorithms, and this encourages video creators to fight for attention in any way possible.” Society must insist on using algorithm auditing, even though it is a difficult and costly process. “But it’s important, because the alternative is worse. If algorithms go unchecked and unregulated, we could see a gradual creep of conspiracy theorists and extremists into our media, and our attention controlled by whoever can produce the most profitable content.”

  • China syndrome | Chinese Communist Party’s Media Influence Expands Worldwide

    Over the past decade, Chinese Communist Party (CCP) leaders have overseen a dramatic expansion in the regime’s ability to shape media content and narratives about China around the world, affecting every region and multiple languages, according to a new report. This trend has accelerated since 2017, with the emergence of new and more brazen tactics by Chinese diplomats, state-owned news outlets, and CCP proxies.

  • Truth decay | Combating the Latest Technological Threat to Democracy: A Comparison of Facebook and Twitter’s Deepfake Policies

    By Amber Frankland and Lindsay Gorman

    Twitter and Facebook have both recently announced policies for handling synthetic and manipulated media content on their platforms. A side-by-side comparison of the two policies highlights that Facebook focuses on a narrow, technical type of manipulation, while Twitter’s approach considers the broader context and impact of manipulated media.

  • Help speech | Countering Hate Speech by Detecting, Highlighting “Help Speech”

    Researchers have developed a system that leverages artificial intelligence to rapidly analyze hundreds of thousands of comments on social media and identify the fraction that defend or sympathize with disenfranchised minorities such as the Rohingya community. Human social media moderators, who couldn’t possibly manually sift through so many comments, would then have the option to highlight this “help speech” in comment sections.
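    The core idea, automatically flagging the small fraction of supportive comments so human moderators can highlight them, can be sketched in a few lines. The researchers’ system uses trained AI models; the keyword heuristic, marker list, and function names below are hypothetical stand-ins for illustration only.

```python
# Illustrative sketch: surface comments that defend or sympathize with a
# targeted minority so moderators can highlight them. The real system
# uses trained language models; this keyword heuristic is a stand-in.

HELP_MARKERS = {"support", "welcome", "stand with", "protect"}

def is_help_speech(comment: str) -> bool:
    """Crude stand-in for a trained help-speech classifier."""
    text = comment.lower()
    return any(marker in text for marker in HELP_MARKERS)

def flag_help_speech(comments: list[str]) -> list[str]:
    """Return the subset of comments a moderator might highlight."""
    return [c for c in comments if is_help_speech(c)]

comments = [
    "The Rohingya deserve our support and safety.",
    "Send them all back.",
    "We stand with the refugees.",
]
highlighted = flag_help_speech(comments)
```

    In practice the classifier would be a model trained on labeled comments, but the moderation workflow (filter first, then hand a short list to humans) is the same.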

  • Truth decay | Why the Jeffrey Epstein Saga Was the Russian Government-Funded Media’s Top Story of 2019

    By Bret Schafer

    In a year featuring a presidential impeachment, Brexit, mass protests in Hong Kong, and widespread geopolitical turmoil, few topics dominated the Russian government-funded media landscape quite like the arrest and subsequent suicide of billionaire financier and serial sex offender Jeffrey Epstein. Given the lack of any notable connection between Epstein and Russian interests, the focus on Epstein highlights the Kremlin’s clear prioritization of content meant to paint a negative image of the West rather than a positive image of Russia.

  • Truth decay | New AI-Based Tool Flags Fake News for Media Fact-Checkers

    A new artificial intelligence (AI) tool could help social media networks and news organizations weed out false stories. The tool uses deep-learning AI algorithms to determine if claims made in posts or stories are supported by other posts and stories on the same subject.
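    The underlying approach, checking whether a claim agrees with other posts on the same subject, can be sketched with a simple similarity measure. The actual tool uses deep-learning models; the Jaccard word-overlap similarity, threshold, and example corpus below are simplified assumptions, not the tool’s method.

```python
# Illustrative sketch: a claim counts as "supported" if enough other
# posts on the subject are sufficiently similar to it. The real tool
# uses deep-learning models; Jaccard overlap is a simplified stand-in.

def similarity(a: str, b: str) -> float:
    """Jaccard word overlap between two texts, in [0, 1]."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def is_supported(claim: str, corpus: list[str], threshold: float = 0.5) -> bool:
    """True if at least one post agrees with the claim above the threshold."""
    return any(similarity(claim, post) >= threshold for post in corpus)

corpus = [
    "the mayor opened the new bridge on monday",
    "city officials confirmed the bridge opened monday",
]
supported = is_supported("the mayor opened the new bridge on monday", corpus)
```

    A production system would replace the overlap score with learned stance detection, since a very similar post can also be contradicting the claim.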

  • Truth decay | Enhanced Deepfakes Capabilities for Less-Skilled Threat Actors Mean More Misinformation

    The ability to create manipulated content is not new. But what has changed with advances in artificial intelligence is that a very convincing deepfake can now be built without any technical expertise. This “democratization” of deepfakes will increase the quantity of misinformation and disinformation aimed at weakening and undermining evidence-based discourse.

  • The Russia connection | The United States Should Not Act as If It’s the Only Country Facing Foreign Interference

    By Sydney Simon

    “Right now, Russia’s security services and their proxies have geared up to repeat their interference in the 2020 election. We are running out of time to stop them.” This stark warning from former National Security Council official Fiona Hill serves as a sharp reminder of the threat to democracy posed by foreign interference and disinformation. Russia’s ongoing interference in U.S. affairs is just a small piece on a big chessboard. A key foreign policy goal of the Kremlin is to discredit, undermine, and embarrass what it sees as a liberal international order intent on keeping Russia down and out. Russia’s systematic attack on U.S. democracy in 2016 was unprecedented, but its playbook is not unique.

  • Online hate | Containing Online Hate Speech as If It Were a Computer Virus

    Artificial intelligence is being developed that will allow advisory “quarantining” of hate speech in a manner akin to malware filters, offering users a way to control their exposure to hateful content without resorting to censorship.
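    The malware-quarantine analogy amounts to scoring content and holding anything above a threshold behind a warning, leaving the viewing decision to the user rather than deleting the post. A minimal sketch, assuming a hypothetical `hate_score` produced by some classifier (the scoring model itself is not shown):

```python
# Illustrative sketch of advisory "quarantining": content a classifier
# scores as likely hateful is held behind a warning and the user decides
# whether to view it -- nothing is removed outright. The hate_score
# field is a hypothetical stand-in for the AI model's output.

from dataclasses import dataclass

@dataclass
class Message:
    text: str
    hate_score: float  # 0.0 (benign) .. 1.0 (clearly hateful)

def deliver(msg: Message, threshold: float = 0.8) -> str:
    """Quarantine above the threshold instead of deleting."""
    if msg.hate_score >= threshold:
        return f"[quarantined: potentially hateful, {msg.hate_score:.0%} confidence]"
    return msg.text

inbox = [
    Message("Lovely weather today", 0.05),
    Message("<slur-laden abuse>", 0.95),
]
shown = [deliver(m) for m in inbox]
```

    Because the quarantine is advisory, a user who clicks through still sees the original text, which is what distinguishes this design from automated takedowns.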

  • Perspective | Seizure-Triggering Attack Is Stark Example of How Social Media Can Be Weaponized

    Followers of the Epilepsy Foundation’s Twitter handle were targeted last month with posts containing strobe light GIFs and videos which could have caused seizures for people with epilepsy, the foundation announced Monday. “While this kind of activity may not bear the hallmarks of a cyberattack, which can trick users into clicking malicious links or knock a website offline by flooding it with junk traffic, this attack shows that platforms can have even their normal functions weaponized in order to cause physical harm,” Shannon Vavra writes.

  • Perspective: Online killer market | Click Here to Kill

    The idea of an online assassination market was advanced long before it was possible to build one, and long before there was anything resembling the dark web. Susan Choi writes that a threshold has been crossed: advances in encryption and cryptocurrency have made this dark vision a reality. On 12 March 2019, journalists at BBC News Russia confirmed the first known case of a murder ordered on the dark web and carried out by hired assassins. The FBI and DHS are worried.

  • Social media | Authoritarian Regimes Employ New Twitter Tactics to Quash Dissent

    When protesters use social media to attract attention and unify, people in power may respond with tweeting tactics designed to distract and confuse, according to a new study. Authoritarian regimes appear to be growing more savvy in their use of social media to help suppress mass movements.

  • Truth decay | Facebook’s Ad Delivery System Deepens the U.S. Political Divide

    Facebook is wielding significant power over political discourse in the United States, thanks to an ad delivery system that reinforces political polarization among users, according to new research. The study shows for the first time that Facebook delivers political ads to its users based on the content of those ads and the information the media company has on its users—and not necessarily based on the audience intended by the advertiser.