  • Countering Hate Speech by Detecting, Highlighting “Help Speech”

    Researchers have developed a system that leverages artificial intelligence to rapidly analyze hundreds of thousands of comments on social media and identify the fraction that defend or sympathize with disenfranchised minorities such as the Rohingya community. Human social media moderators, who couldn’t possibly manually sift through so many comments, would then have the option to highlight this “help speech” in comment sections.
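    The article does not describe the system’s model, but the underlying task is text classification. Below is a minimal sketch of how such a help-speech detector might be prototyped, assuming a small hand-labeled training set; the example comments, labels, and model choice are illustrative and not drawn from the actual system.

    # Illustrative sketch of a help-speech classifier (hypothetical data and model;
    # the actual system's architecture is not described in the article).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hand-labeled examples: 1 = help speech (defends or sympathizes), 0 = other.
    train_comments = [
        "The Rohingya deserve safety and our support",
        "We should welcome these refugees into our community",
        "Send them all back where they came from",
        "This is just more propaganda about migrants",
    ]
    train_labels = [1, 1, 0, 0]

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(train_comments, train_labels)

    # Score a stream of new comments and surface the most supportive ones
    # so moderators can choose which to highlight.
    new_comments = ["Stand with the Rohingya community", "They do not belong here"]
    scores = model.predict_proba(new_comments)[:, 1]
    for comment, score in sorted(zip(new_comments, scores), key=lambda x: -x[1]):
        print(f"{score:.2f}  {comment}")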

  • Why the Jeffrey Epstein Saga Was the Russian Government-Funded Media’s Top Story of 2019

    In a year featuring a presidential impeachment, Brexit, mass protests in Hong Kong, and widespread geopolitical turmoil, few topics dominated the Russian government-funded media landscape quite like the arrest and subsequent suicide of billionaire financier and serial sex offender Jeffrey Epstein. Given the lack of any notable connection between Epstein and Russian interests, the focus on Epstein highlights the Kremlin’s clear prioritization of content meant to paint a negative image of the West rather than a positive image of Russia.

  • New AI-Based Tool Flags Fake News for Media Fact-Checkers

    A new artificial intelligence (AI) tool could help social media networks and news organizations weed out false stories. The tool uses deep-learning AI algorithms to determine if claims made in posts or stories are supported by other posts and stories on the same subject.
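    The article does not detail the tool’s architecture, but the core idea — checking whether a claim is corroborated by other posts on the same subject — can be sketched with off-the-shelf sentence embeddings. The model name, threshold, and example texts below are assumptions for illustration, not the published tool’s method.

    # Rough sketch: embed a claim and related posts, and flag the claim for
    # fact-checkers if no post is sufficiently similar to count as support.
    # (Illustrative only; not the actual tool's algorithm.)
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed off-the-shelf encoder

    claim = "City officials confirmed the bridge closure on Friday."
    related_posts = [
        "The mayor's office announced the bridge will close Friday for repairs.",
        "Traffic was heavy downtown this morning.",
    ]

    claim_vec = model.encode(claim, convert_to_tensor=True)
    post_vecs = model.encode(related_posts, convert_to_tensor=True)
    similarities = util.cos_sim(claim_vec, post_vecs)[0]

    SUPPORT_THRESHOLD = 0.6  # illustrative cutoff
    if (similarities > SUPPORT_THRESHOLD).any():
        print("claim appears corroborated by related posts")
    else:
        print("claim lacks corroboration - flag for fact-checkers")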

  • Enhanced Deepfakes Capabilities for Less-Skilled Threat Actors Mean More Misinformation

    The ability to create manipulated content is not new. What has changed with advances in artificial intelligence is that a very convincing deepfake can now be built without any expertise in the technology. This “democratization” of deepfakes will increase the volume of misinformation and disinformation aimed at weakening and undermining evidence-based discourse.

  • The United States Should Not Act as If It's the Only Country Facing Foreign Interference

    “Right now, Russia’s security services and their proxies have geared up to repeat their interference in the 2020 election. We are running out of time to stop them.” This stark warning from former National Security Council official Fiona Hill serves as a sharp reminder of the threat to democracy posed by foreign interference and disinformation. Russia’s ongoing interference in U.S. affairs is just a small piece on a big chessboard. A key foreign policy goal of the Kremlin is to discredit, undermine, and embarrass what it sees as a liberal international order intent on keeping Russia down and out. Russia’s systematic attack on U.S. democracy in 2016 was unprecedented, but its playbook is not unique.

  • Containing Online Hate Speech as If It Were a Computer Virus

    Artificial intelligence is being developed that would allow advisory “quarantining” of hate speech in a manner akin to malware filters – offering users a way to control their exposure to “hateful content” without resorting to censorship.
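    One way to picture the approach: rather than deleting a message a classifier scores as likely hateful, the platform holds it behind an advisory notice the recipient can choose to open, much as a malware filter quarantines a suspicious attachment. The sketch below assumes an upstream classifier supplies a confidence score; the threshold and interface are placeholders, not the researchers’ prototype.

    # Advisory quarantine sketch: hold likely-hateful messages behind a notice
    # the user can opt to reveal, instead of removing them outright.
    # (The scoring model and 0.8 cutoff are assumptions for illustration.)
    from dataclasses import dataclass

    QUARANTINE_THRESHOLD = 0.8

    @dataclass
    class Message:
        text: str
        hate_score: float  # confidence from an upstream hate-speech classifier

    def render(message: Message, reveal: bool = False) -> str:
        if message.hate_score >= QUARANTINE_THRESHOLD and not reveal:
            return (f"[Quarantined: scored {message.hate_score:.0%} likely hateful. "
                    f"Open at your own discretion.]")
        return message.text

    print(render(Message("example abusive text", 0.93)))        # quarantined
    print(render(Message("example abusive text", 0.93), True))  # revealed on request
    print(render(Message("ordinary comment", 0.05)))            # shown normally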

  • Seizure-Triggering Attack Is Stark Example of How Social Media Can Be Weaponized

    Followers of the Epilepsy Foundation’s Twitter handle were targeted last month with posts containing strobe light GIFs and videos which could have caused seizures for people with epilepsy, the foundation announced Monday. “While this kind of activity may not bear the hallmarks of a cyberattack, which can trick users into clicking malicious links or knock a website offline by flooding it with junk traffic, this attack shows that platforms can have even their normal functions weaponized in order to cause physical harm,” Shannon Vavra writes.

  • Click Here to Kill

    The idea of an online assassination market was advanced long before it was possible to build one, and long before there was anything resembling the dark web. Susan Choi writes that a threshold has now been crossed: advances in encryption and cryptocurrency have made this dark vision a reality. On 12 March 2019, journalists at BBC News Russia confirmed the first known case of a murder ordered on the dark web and carried out by hired assassins. The FBI and DHS are worried.

  • Authoritarian Regimes Employ New Twitter Tactics to Quash Dissent

    When protesters use social media to attract attention and unify, people in power may respond with tweeting tactics designed to distract and confuse, according to a new study. Authoritarian regimes appear to be growing more savvy in their use of social media to help suppress mass movements.

  • Facebook's Ad Delivery System Deepens the U.S. Political Divide

    Facebook is wielding significant power over political discourse in the United States, thanks to an ad delivery system that reinforces political polarization among users, according to new research. The study shows for the first time that Facebook delivers political ads to its users based on the content of those ads and the information the media company has on its users—and not necessarily based on the audience intended by the advertiser.

  • Samoa Has Become a Case Study for “Anti-Vax” Success

    In Samoa, Facebook is the main source of information. Michael Gerson writes that it is thus not surprising that anti-vaccination propaganda, much of it generated in the United States, has arrived through social media and discourages Samoan parents from vaccinating their children. “This type of import has helped turn Samoa into a case study of ‘anti-vax’ success — and increased the demand for tiny coffins decorated with flowers and butterflies,” he writes, adding: “Samoa is a reminder of a pre-vaccine past and the dystopian vision of a post-vaccine future.”

  • Social Media Vetting of Visa Applicants Violates the First Amendment

    Beginning in May, the State Department has required almost every applicant for a U.S. visa—more than fourteen million people each year—to register every social media handle they’ve used over the past five years on any of twenty platforms. “There is no evidence that the social media registration requirement serves the government’s professed goals” of “strengthen[ing]” the processes for “vetting applicants and confirming their identity,” Carrie DeCell and Harsha Panduranga write, adding: “The registration requirement chills the free speech of millions of prospective visitors to the United States, to their detriment and to ours.”

  • Telegram: The Latest Safe Haven for White Supremacists

    Telegram, the online social networking service, may not be as popular in the U.S. as Twitter or Facebook, but with more than 200 million users, it has a significant audience. And it is gaining popularity. The ADL reports that Telegram has become a popular online gathering place for the international white supremacist community and other extremist groups who have been displaced or banned from more popular sites.

  • New Research Center Will Fight Misinformation

    On 3 December, the University of Washington launched the Center for an Informed Public (CIP). The CIP, an interdisciplinary center housed in UW’s Information School, will use applied research to engage with the public through community partners such as libraries to confront the misinformation epidemic. “If we care about common goals — things like safe communities, justice, equal opportunity — we have to care also about facts, truth and accuracy,” UW President Ana Mari Cauce said. “Misinformation can be weaponized. It has been weaponized to divide us and to weaken us.”

  • The Dark Psychology of Social Networks

    Every communication technology brings with it different constructive and destructive effects. Jonathan Haidt and Tobias Rose-Stockwell write that it typically takes some time to find and improve the balance between these negative and positive effects. They note that as social media has aged, the initial optimism which welcomed the new technology’s introduction has been replaced by a growing awareness of the technology’s deleterious effects – especially on the quality and purpose of political discussion.