  • White supremacists use social media to aid, abet terror

    Before carrying out mass shooting attacks in Pittsburgh and New Zealand, white supremacist terrorists Robert Bowers and Brenton Tarrant frequented fringe social networking sites which, according to a new study, serve as echo chambers for the most virulent forms of anti-Semitism and racism, and active recruiting grounds for potential terrorists.

  • Sri Lanka attacks: government’s social media ban may hide the truth about what is happening

    By Meera Selva

    Sri Lanka has temporarily banned social media and messaging apps in the wake of the coordinated Easter Sunday attacks on churches and hotels across the country, which killed at least 290 people. The ban is ostensibly meant to stop the spread of misinformation, but in Sri Lanka, Facebook and social media platforms generally have created a positive space for public conversation that did not exist before. By shutting down social media and leaving citizens reliant on state messaging and a weak, beaten-down press, the government risks preventing Sri Lankans from finding out the truth about what is happening in their fragile and delicately balanced country.

  • Can artificial intelligence help end fake news?

    By Tom Cassauwers

    Fake news has already fanned the flames of distrust towards media, politics and established institutions around the world. And while new technologies like artificial intelligence (AI) might make things even worse, they can also be used to combat misinformation.

  • Social media networks aid, abet white supremacist terrorism: Study

    A new study reveals how fringe social media sites such as Gab, 4chan, and 8chan act as virtual “round-the-clock white supremacist rallies” where hateful ideas about Jews and other minorities are openly espoused and closely tied to violence as a solution.

  • Russia targeted Sanders supporters, pushing them to vote for Trump

    As part of Russia’s broad effort to ensure Donald Trump’s victory in the 2016 presidential election, Russian operatives targeted supporters of Senator Bernie Sanders (I-Vermont) after his primary loss, trying to push them to vote for Trump instead of Democratic nominee Hillary Clinton. Daren Linvill, the Clemson University researcher who studied the Russian campaign, said the Russians saw Sanders as “just a tool.” “He is a wedge to drive into the Democratic Party,” resulting in lower turnout for Clinton, he said.

  • 2019 a record year for measles infections since the disease was eliminated in 2000

    The number of measles cases confirmed in the United States since the first of the year grew by 90 in the last week, raising the total to 555, making it likely that 2019 will see the most measles cases in the United States since the disease was declared eliminated there in 2000. Measles is highly contagious and can be deadly: the World Health Organization (WHO) attributed 110,000 deaths to the virus in 2017. “The disease is almost entirely preventable through two doses of a safe and effective vaccine. For several years, however, global coverage with the first dose of measles vaccine has stalled at 85 percent,” the WHO said. Coverage needs to reach 95 percent to prevent outbreaks.
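
    For context, a back-of-envelope sketch (not from the article): the 95 percent figure matches the standard herd-immunity threshold. For a pathogen with basic reproduction number R₀, outbreaks are prevented once the immune share of the population exceeds 1 − 1/R₀, and measles’ R₀ is commonly estimated at 12 to 18 (a commonly cited range, not a number given above).

    ```latex
    % Herd-immunity threshold p_c for measles. The R_0 range 12--18 is a
    % commonly cited estimate, not a figure taken from the article above.
    \[
      p_c = 1 - \frac{1}{R_0}, \qquad
      R_0 = 12 \Rightarrow p_c \approx 0.92, \qquad
      R_0 = 18 \Rightarrow p_c \approx 0.94
    \]
    ```

    At the top of that range the threshold is about 94 percent, so the WHO’s 95 percent coverage target leaves only a small margin.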

  • Weapons of mass distraction

    A sobering new report from the U.S. Department of State’s Global Engagement Center details the reach, scope, and effectiveness of Russia’s disinformation campaigns to undermine and weaken Western societies. “The messages conveyed through disinformation range from biased half-truths to conspiracy theories to outright lies. The intent is to manipulate popular opinion to sway policy or inhibit action by creating division and blurring the truth among the target population,” write the authors of the report.

  • Hate incidents are underreported. Now, there’s an app for that

    Although the FBI recorded an all-time high in hate-motivated incidents in 2017 (the most recent year for which statistics are available), the true number is likely much higher. Underreporting by victims to police and inconsistent reporting from police to federal authorities have created a massive gap in how we understand hate in America. Researchers from the University of Utah want to fill that gap with an app.

  • April Fools hoax stories may offer clues to help identify “fake news”

    Studying April Fools hoax news stories could offer clues to spotting “fake news” articles, new research reveals. Researchers interested in deception have compared the language used in written April Fools hoaxes with the language used in fake news stories.
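
    As a rough illustration of how such a language comparison can be operationalized (a minimal sketch assuming a labeled corpus; the tiny dataset, features, and model below are invented for illustration and are not the researchers’ actual method), one can fit a linear text classifier and inspect which terms most strongly separate the two classes:

    ```python
    # Illustrative sketch: fit a linear classifier on word n-grams and list
    # the terms that most strongly separate deceptive from genuine text.
    # The inline corpus is invented; a real study would use labeled archives
    # of April Fools hoaxes, fake news stories, and genuine articles.
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    texts = [
        "Breaking: scientists reveal the moon is made entirely of cheese",
        "You won't believe what this celebrity said about alien visitors",
        "Airline announces April Fools flights direct to the sun",
        "Miracle cure that doctors don't want you to know about",
        "Council approves budget for road repairs in the next fiscal year",
        "Study finds a modest link between diet and sleep quality",
        "Local library extends its weekend opening hours from May",
        "Researchers publish peer-reviewed results on regional crop yields",
    ]
    labels = np.array([1, 1, 1, 1, 0, 0, 0, 0])  # 1 = hoax/fake, 0 = genuine

    vec = TfidfVectorizer(ngram_range=(1, 2))  # word uni- and bigrams
    X = vec.fit_transform(texts)

    clf = LogisticRegression().fit(X, labels)

    # Largest positive weights lean toward the deceptive class; largest
    # negative weights lean toward genuine reporting.
    terms = np.array(vec.get_feature_names_out())
    order = np.argsort(clf.coef_[0])
    print("genuine-leaning terms:  ", terms[order[:5]])
    print("deceptive-leaning terms:", terms[order[-5:]])
    ```

    On a real corpus, the top-weighted n-grams give an inspectable picture of how hoax language differs from genuine reporting, which is the kind of comparison the study describes.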

  • In disasters, Twitter users with large networks get out-tweeted

    A new study shows that when it comes to sharing emergency information during natural disasters, timing is everything. The study of Twitter use during hurricanes, floods, and tornadoes offers potentially life-saving data about how information is disseminated in emergency situations, and by whom. Unlikely heroes often emerge in disasters, and the same is true on social media.

  • Why the next terror manifesto could be even harder to track

    By Megan Squire

    Just before his shooting spree at two Christchurch, New Zealand mosques, the alleged mass murderer posted a hate-filled manifesto on several file-sharing sites. Soon, the widespread adoption of artificial intelligence on platforms, along with decentralized tools like IPFS, will change the online hate landscape. Combating online extremism in the future may be less about “meme wars” and user banning, or “de-platforming,” and could instead look like the attack-and-defend, cat-and-mouse technical one-upmanship that has defined the cybersecurity industry since the 1980s. No matter what technical challenges come up, one fact never changes: the world will always need more good, smart people working to counter hate than there are promoting it.

  • Social media create a spectacle society that makes it easier for terrorists to achieve notoriety

    By Stuart M. Bender

    The shocking mass shooting in Christchurch last Friday is notable for its use of livestreaming video technology to broadcast horrific first-person footage of the attack on social media. The use of social media and livestreaming marks the attack as different from many other terrorist incidents: it is a form of violent “performance crime.” That is, the video streaming is a central component of the violence itself; it is not incidental to the crime, or a disgusting trophy for the perpetrator to re-watch later. In an era of social media, which is driven in large part by spectacle, we all have a role to play in ensuring that terrorists aren’t rewarded for their crimes with our clicks.

  • Russian trolls, bots spread false vaccine information on Twitter

    A study found that Russian trolls and bots have been spreading false information about vaccination in support of the anti-vaccination movement. The false information was generated by propaganda and disinformation specialists at the Kremlin-affiliated, St. Petersburg-based Internet Research Agency (IRA). The Kremlin employed the IRA to conduct a broad social media disinformation campaign to sow discord and deepen divisions in the United States, and to help Donald Trump win the 2016 presidential election.

  • Studying how hate and extremism spread on social media

    The Anti-Defamation League (ADL) and the Network Contagion Research Institute will partner to produce a series of reports that take an in-depth look at how extremism and hate spread on social media, and provide recommendations on how to combat both.

  • Four ways social media platforms could stop the spread of hateful content in aftermath of terror attacks

    By Bertie Vidgen

    Monitoring hateful content is always difficult, and even the most advanced systems accidentally miss some of it. But during terrorist attacks the big platforms face particularly significant challenges. As research has shown, terrorist attacks precipitate huge spikes in online hate that overrun platforms’ reporting systems. Many of the people who upload and share this content also know how to deceive the platforms and get round their existing checks. So what can platforms do to take down extremist and hateful content immediately after terrorist attacks? I propose four special measures needed to specifically target the short-term influx of hate.