-
Weapons of mass distraction
A sobering new report from the U.S. Department of State’s Global Engagement Center details the reach, scope, and effectiveness of Russia’s disinformation campaigns to undermine and weaken Western societies. “The messages conveyed through disinformation range from biased half-truths to conspiracy theories to outright lies. The intent is to manipulate popular opinion to sway policy or inhibit action by creating division and blurring the truth among the target population,” write the authors of the report.
-
-
Hate incidents are underreported. Now, there’s an app for that
Although the FBI recorded an all-time high in hate-motivated incidents in 2017 (the most recent year for which statistics are available), the true number is likely much higher. Low reporting from victims to police and inconsistent reporting from police to federal authorities have created a massive gap in how we understand hate in America. Researchers from the University of Utah want to fill that gap with an app.
-
-
April Fools hoax stories may offer clues to help identify “fake news”
Studying April Fools hoax news stories could offer clues to spotting "fake news" articles, new research reveals. Researchers interested in deception compared the language used in written April Fools hoaxes with that of fake news stories.
-
-
In disasters, Twitter users with large networks get out-tweeted
New study shows that when it comes to sharing emergency information during natural disasters, timing is everything. The study on Twitter use during hurricanes, floods and tornadoes offers potentially life-saving data about how information is disseminated in emergency situations, and by whom. Unlikely heroes often emerge in disasters, and the same is true on social media.
-
-
Why the next terror manifesto could be even harder to track
Just before his shooting spree at two Christchurch, New Zealand mosques, the alleged mass murderer posted a hate-filled manifesto on several file-sharing sites. Soon, the widespread adoption of artificial intelligence on platforms and of decentralized tools like IPFS will mean that the online hate landscape will change. Combating online extremism in the future may be less about "meme wars" and user-banning, or "de-platforming," and could instead look like the attack-and-defend, cat-and-mouse technical one-upmanship that has defined the cybersecurity industry since the 1980s. No matter what technical challenges come up, one fact never changes: The world will always need more good, smart people working to counter hate than there are promoting it.
-
-
Social media create a spectacle society that makes it easier for terrorists to achieve notoriety
The shocking mass shooting in Christchurch last Friday is notable for the perpetrator's use of livestreaming video technology to broadcast horrific first-person footage of the shooting on social media. The use of social media technology and livestreaming marks the attack as different from many other terrorist incidents. It is a form of violent "performance crime." That is, the video streaming is a central component of the violence itself; it is not somehow incidental to the crime, nor merely a disgusting trophy for the perpetrator to re-watch later. In an era of social media, which is driven in large part by spectacle, we all have a role to play in ensuring that terrorists are not rewarded for their crimes with our clicks.
-
-
Russian trolls, bots spread false vaccine information on Twitter
A study found that Russian trolls and bots have been spreading false information about vaccination in support of the anti-vaccination movement. The false information was generated by propaganda and disinformation specialists at the Kremlin-affiliated, St. Petersburg-based Internet Research Agency (IRA). The Kremlin employed the IRA to conduct a broad social media disinformation campaign to sow discord and deepen divisions in the United States, and to help Donald Trump win the 2016 presidential election.
-
-
Studying how hate and extremism spread on social media
The Anti-Defamation League (ADL) and the Network Contagion Research Institute will partner to produce a series of reports that take an in-depth look at how extremism and hate spread on social media, and that provide recommendations on how to combat both.
-
-
Four ways social media platforms could stop the spread of hateful content in aftermath of terror attacks
Monitoring hateful content is always difficult, and even the most advanced systems accidentally miss some. But during terrorist attacks the big platforms face particularly significant challenges. As research has shown, terrorist attacks precipitate huge spikes in online hate, overrunning platforms' reporting systems. Many of the people who upload and share this content also know how to deceive the platforms and get around their existing checks. So what can platforms do to take down extremist and hateful content immediately after terrorist attacks? I propose four special measures to specifically target the short-term influx of hate.
-
-
Fraudulent news, disinformation become “new normal” political tactics
A new report warns of the risk that fraudulent news and online disinformation will become a normalized part of U.S. political discourse. The report sounds an alarm that these tactics, which distort public discourse, erode faith in journalism, and skew voting decisions, are entering the toolbox of hotly contested modern campaigns.
-
-
Information literacy must be improved to stop spread of “fake news”
Verifying whether a new piece of information is accurate is not difficult; even so, most people, regardless of age, social class, or gender, do not take that step before sharing it on social media, a new study has found.
-
-
White supremacist propaganda and events soared in 2018
White supremacists dramatically stepped up their propaganda efforts targeting neighborhoods and campuses in 2018, far exceeding any previous annual distribution count for the United States. The surge shows how these extremist groups are finding ways to spread hateful messages while hiding the identities of individual members.
-
-
U.S. Cyber Command cut Russian troll factory’s access to the internet
The U.S. Cyber Command blocked the internet access of the St. Petersburg-based Internet Research Agency (IRA), a Russian disinformation and propaganda outfit contracted by the Kremlin to orchestrate the social media disinformation campaign to help Donald Trump win the 2016 presidential election. The IRA's access to the internet was blocked on the day of the 2018 midterm elections and for a few days afterward.
-
-
Telegram used by ISIS to spread propaganda globally
The Counter Extremism Project (CEP) this week reported on a Telegram channel that called for lone-actor terrorist attacks in London, alongside other websites that host ISIS videos and propaganda. The encrypted messaging app is the platform of choice for the terrorist group's calls for violence.
-
-
U.S. hate groups hit record number last year amid increased violence
American hate groups had a bumper year in 2018, as a surge in black and white nationalist groups lifted their number to a new record high, the Southern Poverty Law Center (SPLC) said in a report issued Wednesday. The number of white nationalist groups jumped from 100 to 148, while the number of black nationalist groups (typically anti-Semitic, anti-LGBTQ, and anti-white) rose from 233 to 264. Some conservative groups have accused the SPLC of unfairly labeling them as "hate groups," and last month the Center for Immigration Studies sued the SPLC for "falsely designating" it as a hate group in 2016, saying the SPLC has produced no evidence that the group maligns immigrants as a class.
-