• White Supremacy Has Triggered a Terrorism Panic

    Our collective response to terrorism seems to swing on a pendulum between rank complacency and terrified myth-making. In January 2014, U.S. President Barack Obama dismissed the Islamic State as al Qaeda’s “JV team.” But by September of that year, after the group had captured Mosul in Iraq and launched a genocidal campaign of slaughter against the Yazidis, he started bombing it. A similar dynamic can be observed in the case of white supremacy today. This is not “to suggest that the threat of white supremacy is not real or that we should be complacent about it,” Simon Cottee writes. “Of course it is real, and of course we need to indict and seriously punish those who have committed or are plotting to commit terrorist atrocities in the name of white supremacy.” But we should resist the urge to treat white supremacy as “a mythical monster against which to signal our moral virtue”: “White supremacy is not a monolith endangering our children and societies, but we might just make it into one by overinflating it into precisely this.”

  • Digital Menace: Using Social Media to Manufacture Consensus, Automate Suppression, and Undermine Trust

    Over the past three years, the Project on Computational Propaganda at Oxford University has monitored the global organization of social media manipulation by governments and political parties. The Project’s 2019 report analyzes trends in computational propaganda and the evolving tools, capacities, strategies, and resources devoted to this activity.

  • The Global Disinformation Order: Excerpts from a New Report

    Around the world, government actors are using social media to manufacture consensus, automate suppression, and undermine trust in the liberal international order. Social media, which was once heralded as a force for freedom and democracy, has come under increasing scrutiny for its role in amplifying disinformation, inciting violence, and lowering levels of trust in media and democratic institutions.

  • Tech Fight against Online Extremism Gets Overhaul

    Facebook fulfilled a long-standing demand from policymakers and advocacy groups this week when Chief Operating Officer Sheryl Sandberg announced that a coalition of the country’s most powerful tech corporations will be formalizing its counterterrorism efforts into an independent organization with a dedicated staff. As the companies face ramped-up criticism from regulators and lawmakers worldwide, they are expanding the Global Internet Forum to Counter Terrorism (GIFCT), which they originally formed in 2017 to deal with Islamic terrorism online. The founding members were Facebook, Twitter, YouTube, and Microsoft.

  • Innocent Users Have the Most to Lose in the Rush to Address Extremist Speech Online

    Big online platforms tend to brag about their ability to filter out violent and extremist content at scale, but those same platforms refuse to provide even basic information about the substance of those removals. How do these platforms define terrorist content? What safeguards do they put in place to ensure that they don’t over-censor innocent people in the process? Again and again, social media companies are unable or unwilling to answer these questions. Facebook Head of Global Policy Management Monika Bickert claimed that more than 99 percent of terrorist content posted on Facebook is deleted by the platform’s automated tools, but the company has consistently failed to say how it determines what constitutes a terrorist — or what types of speech constitute terrorist speech.
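    The “automated tools” at issue commonly work by matching uploads against a shared database of fingerprints of already-identified terrorist content, the approach behind GIFCT’s hash-sharing database. A minimal sketch of that idea, using a plain cryptographic hash for illustration (production systems use perceptual hashes that survive re-encoding, and the banned-hash set here is invented):

```python
import hashlib

# Hypothetical shared database of hashes of known terrorist content.
# The single entry below is the SHA-256 of b"test", used only for illustration.
BANNED_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def content_hash(data: bytes) -> str:
    """Fingerprint the raw bytes of an upload.

    Real systems use perceptual hashes so that cropped or re-encoded
    copies still match; an exact hash like SHA-256 is the simplest analog.
    """
    return hashlib.sha256(data).hexdigest()

def should_block(data: bytes) -> bool:
    """Block an upload if its fingerprint appears in the shared database."""
    return content_hash(data) in BANNED_HASHES
```

    Note what this design cannot do: it only recognizes copies of content someone has already classified, which is exactly why the definitional questions above (who decides what counts as terrorist content, and how) matter so much.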

  • Information and Democracy—A Perilous Relationship

    In the 1997 James Bond film “Tomorrow Never Dies,” the villain is Elliot Carver, head of a media conglomerate who has come to believe that information is a more powerful weapon than military force. He blackmails senior British leaders and ultimately tries to spark a war between China and Britain to bring his ally to power in Beijing. At one point in the film, Carver stands underneath massive television screens in the headquarters of his media empire, addressing Bond: “We’re both men of action,” he tells Bond, “but your era…is passing. Words are the new weapons, satellites the new artillery…Caesar had his legions, Napoleon had his armies. I have my divisions—TV, news, magazines.” Fast-forward twenty years, and this scenario appears to be becoming reality. Using techniques far more advanced than those available to Bond villains in the 1990s, today’s practitioners of what a new RAND report terms “hostile social manipulation” employ targeted social media campaigns, sophisticated forgeries, cyberbullying and harassment of individuals, distribution of rumors and conspiracy theories, and other tools and approaches to cause damage to the target state.

  • Women’s March Votes Out Board Member for Anti-Semitic Tweets

    Zahra Billoo, the executive director of the San Francisco Bay Area chapter of the Council on American-Islamic Relations, was dismissed from the board of directors of the Women’s March on Wednesday, only two days after she was appointed to the board on Monday. “We found some of her public statements incompatible with the values and mission of the organization,” the board said. Billoo has called herself a “proud anti-Zionist” and said that she does not believe Israel has a right to exist. She also has accused Israel of committing war crimes “as a hobby,” and wrote that “the Israeli Defense Forces, or the IDF, are no better than ISIS. They are both genocidal terrorist organizations,” and that “racist Zionists who support Apartheid Israel” scare her more than “the mentally ill young people the #FBI recruits to join ISIS.”

  • The Complicated Truth of Countering Disinformation

    Social media’s unprecedented ability to spread disinformation succeeds in part because of vulnerabilities in the way people process and evaluate information. In an information environment characterized by an oversaturation of content and algorithms designed to increase views and shares, narratives (true or not) can quickly go viral by appealing to our biases. This new, decentralized world of content creation and consumption is ripe for exploitation by nefarious actors who seek to spread doubt and untruths. To counter modern disinformation, then, we cannot focus solely on social media platforms or current technologies — we should also understand the psychological factors that underpin our identities and perceptions of the truth.

  • I Researched Uighur Society in China for 8 Years and Watched How Technology Opened New Opportunities – Then Became a Trap

    The Uighurs, a Muslim minority ethnic group of around 12 million in northwest China, are required by the police to carry their smartphones and IDs listing their ethnicity. As they pass through one of the thousands of newly built digital media and face surveillance checkpoints located at jurisdictional boundaries, entrances to religious spaces, and transportation hubs, the image on their ID is matched to their face. If they try to pass without these items, a digital device scanner alerts the police. The Chinese state authorities have described the intrusive surveillance as a necessary tool against the “extremification” of the Uighur population. Through this surveillance process, around 1.5 million Uighurs and other Muslims were deemed “untrustworthy” and have been forcibly sent to detention and reeducation in a massive internment camp system. Since more than 10 percent of the adult population has been removed to these camps, hundreds of thousands of children have been separated from their parents. Many children throughout the region are now held in boarding schools or orphanages run by non-Muslim state workers.

  • Determining the Who, Why, and How Behind Manipulated Media

    The threat of manipulated multi-modal media – which includes audio, images, video, and text – is increasing as automated manipulation technologies become more accessible, and social media continues to provide a ripe environment for viral content sharing. The creators of convincing media manipulations are no longer limited to groups with significant resources and expertise. Today, an individual content creator has access to capabilities that could enable the development of an altered media asset that creates a believable, but falsified, interaction or scene. A new program seeks to develop technologies capable of automating the detection, attribution, and characterization of falsified media assets.

  • AI Startups to Fight Against Online Disinformation

    On both sides of the Atlantic, governments, foundations, and companies are looking at how to solve the problem of online dis/misinformation. Some emphasize the demand side of the problem, believing it important to focus on consumer behavior and the use of media literacy and fact-checking. Some focus on legal remedies such as platform-liability and hate-speech laws as well as privacy protections. Others try to raise the quality of journalism in the hope of creating more reliable content. There is another kind of fix, offered by small companies in the information ecosystem: using natural language processing as well as human intelligence to identify and, in some cases, block false or inflammatory content online.
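    As a toy illustration of the natural-language-processing side of that approach (the lexicon and threshold below are invented for the example; real startups use trained classifiers, not keyword lists), a flagging pipeline might score text against phrases associated with known false narratives and route high-scoring items to human reviewers rather than auto-blocking them:

```python
# Hypothetical lexicon of phrases associated with known false narratives.
FLAG_PHRASES = {
    "miracle cure",
    "they don't want you to know",
    "rigged",
}

def score(text: str) -> int:
    """Count how many lexicon phrases appear in the text (case-insensitive)."""
    lowered = text.lower()
    return sum(1 for phrase in FLAG_PHRASES if phrase in lowered)

def triage(text: str, threshold: int = 1) -> str:
    """Route items at or above the threshold to human review.

    Combining machine scoring with human judgment mirrors the
    "natural language processing as well as human intelligence"
    model described above.
    """
    return "review" if score(text) >= threshold else "pass"
```

    The deliberately conservative design, machines flag and humans decide, reflects the trade-off running through this digest: fully automated blocking scales, but it also over-censors.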

  • How to Act against Domestic Terrorists — and Their Foreign Supporters

    The United States faces a surging domestic terrorism threat in the homeland. In the aftermath of the El Paso and Dayton shootings in the first weekend of August, more than 40 people were arrested for threats to commit mass attacks by the end of that month. The GW Program on Extremism suggests two ways to achieve a more effective and coordinated multisector response to the domestic terrorism threat. First, specific criminal statutes for domestic terrorism offenses need to be enacted, penalizing the commission of specific violent crimes. Acknowledging concerns that new criminal statutes related to property damage may stifle legitimate protest, new criminal statutes could be limited to violence against persons and providing material support to terrorists. Second, the list of proscribed foreign terrorist organizations (FTOs) should include far-right actors outside of the United States.

  • How Disinformation Could Sway the 2020 Election

    In 2016, Russian operatives used Facebook, Twitter and YouTube to sow division among American voters and boost Donald Trump’s presidential campaign. What the Russians used to accomplish this is called “disinformation,” which is false or misleading content intended to deceive or promote discord. Now, with the first presidential primary vote only five months away, the public should be aware of the sources and types of online disinformation likely to surface during the 2020 election.

  • Foreign Interference Threat Bigger than Terrorism, Warns Spymaster

    Foreign interference and hostile state espionage are a bigger threat to Australia’s security than terrorism, one of the country’s top spy chiefs has warned. Duncan Lewis, the outgoing head of the Australian Security Intelligence Organisation (ASIO), identified three security challenges confronting Australia: terrorism; cyber warfare; and foreign interference and espionage. But the latter, he told a Lowy Institute forum in Sydney, was on a “growth trajectory” and posed a greater threat than terrorism.

  • Lega Nord’s Bedfellows: Russians Offering Illicit Funding to Italian Far-Right Party Identified

    In the last four years, the Kremlin has engaged in a broad, systematic campaign – consisting of hacking, a vast social media disinformation effort, and illicit funding – to weaken the West by helping far-right, populist, pro-Russian politicians and movements reach power. One of their successes was in Italy, where the far-right, anti-EU, anti-immigrant Northern League and the eclectic, anti-establishment 5 Star Movement won enough seats in the March 2018 election to form a coalition government (which collapsed last week, after more than a year in power). Prosecutors in Milan have launched an investigation of The League after recordings emerged of meetings between League leaders and Kremlin emissaries, in which a scheme to secure funding for The League in the upcoming European parliament elections was discussed. The funding – in the millions of euros – was to be funneled via artificially underpriced Russian oil export transactions.