-
Why the next terror manifesto could be even harder to track
Just before his shooting spree at two Christchurch, New Zealand mosques, the alleged mass murderer posted a hate-filled manifesto on several file-sharing sites. The widespread adoption of artificial intelligence on platforms, along with decentralized tools like IPFS, will soon change the online hate landscape. Combating online extremism in the future may be less about “meme wars” and user-banning, or “de-platforming,” and could instead look like the attack-and-defend, cat-and-mouse technical one-upmanship that has defined the cybersecurity industry since the 1980s. No matter what technical challenges come up, one fact never changes: The world will always need more good, smart people working to counter hate than there are promoting it.
-
-
Social media create a spectacle society that makes it easier for terrorists to achieve notoriety
The shocking mass shooting in Christchurch last Friday is notable for using livestreaming video technology to broadcast horrific first-person footage of the shooting on social media. The use of social media technology and livestreaming marks the attack as different from many other terrorist incidents. It is a form of violent “performance crime.” That is, the video streaming is a central component of the violence itself; it is not incidental to the crime, nor merely a disgusting trophy for the perpetrator to re-watch later. In an era of social media, which is driven in large part by spectacle, we all have a role to play in ensuring that terrorists aren’t rewarded for their crimes with our clicks.
-
-
Russian trolls, bots spread false vaccine information on Twitter
A study found that Russian trolls and bots have been spreading false information about vaccination, in support of the anti-vaccination movement. The false information was generated by propaganda and disinformation specialists at the Kremlin-affiliated, St. Petersburg-based Internet Research Agency (IRA). The Kremlin employed the IRA to conduct a broad social media disinformation campaign to sow discord and deepen divisions in the United States, and to help Donald Trump win the 2016 presidential election.
-
-
Studying how hate and extremism spread on social media
The ADL and the Network Contagion Research Institute will partner to produce a series of reports that take an in-depth look into how extremism and hate spread on social media – and provide recommendations on how to combat both.
-
-
Four ways social media platforms could stop the spread of hateful content in aftermath of terror attacks
Monitoring hateful content is always difficult, and even the most advanced systems accidentally miss some. But during terrorist attacks the big platforms face particularly significant challenges. As research has shown, terrorist attacks precipitate huge spikes in online hate, overrunning platforms’ reporting systems. Many of the people who upload and share this content also know how to deceive the platforms and get around their existing checks. So what can platforms do to take down extremist and hateful content immediately after terrorist attacks? I propose four special measures specifically targeting the short-term influx of hate.
-
-
Fraudulent news, disinformation become “new normal” political tactics
A new report warns that fraudulent news and online disinformation risk becoming a normalized part of U.S. political discourse. It sounds an alarm that these tactics, which distort public discourse, erode faith in journalism, and skew voting decisions, are becoming part of the toolbox of hotly contested modern campaigns.
-
-
Information literacy must be improved to stop spread of “fake news”
It is not difficult to verify whether a new piece of information is accurate; however, most people do not take that step before sharing it on social media, regardless of age, social class or gender, a new study has found.
-
-
White supremacist propaganda and events soared in 2018
White supremacists dramatically stepped up their propaganda efforts targeting neighborhoods and campuses in 2018, far exceeding any previous annual distribution count for the United States and showing how these extremist groups are finding ways to share hateful messages while hiding the identity of individual members.
-
-
U.S. Cyber Command cut Russian troll factory’s access to the internet
The U.S. Cyber Command blocked the internet access of the St. Petersburg-based Internet Research Agency (IRA), a Russian disinformation and propaganda outfit contracted by the Kremlin to orchestrate the social media disinformation campaign to help Donald Trump win the 2016 presidential election. The IRA’s access to the internet was blocked on midterms Election Day, and for a few days following the election.
-
-
Telegram used by ISIS to spread propaganda globally
The Counter Extremism Project (CEP) this week reported on a Telegram channel that called for lone-actor terrorist attacks in London, alongside other websites that host ISIS videos and propaganda online. The encrypted messaging app is the platform of choice for terrorist groups to call for violence.
-
-
U.S. hate groups hit record number last year amid increased violence
American hate groups had a bumper year in 2018 as a surge in black and white nationalist groups lifted their number to a new record high, the Southern Poverty Law Center said in a report issued Wednesday. The increase was driven by growth in both black and white nationalist groups, the SPLC said. The number of white nationalist groups jumped from 100 to 148, while the number of black nationalist groups — typically anti-Semitic, anti-LGBTQ and anti-white — rose from 233 to 264. Some conservative groups have accused the SPLC of unfairly labeling them as “hate groups,” and last month, the Center for Immigration Studies sued the SPLC for “falsely designating” it as a hate group in 2016, saying the SPLC has produced no evidence that the group maligns immigrants as a class.
-
-
Putting data privacy in the hands of users
In today’s world of cloud computing, users of mobile apps and web services store personal data on remote data center servers. Services often aggregate multiple users’ data across servers to gain insights on, say, consumer shopping patterns to help recommend new items to specific users, or may share data with advertisers. Traditionally, however, users haven’t had the power to restrict how their data are processed and shared. New platform acts as a gatekeeper to ensure web services adhere to a user’s custom data restrictions.
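The gatekeeper idea described above can be illustrated with a minimal sketch: data releases are checked against the owner's declared restrictions before any service operation proceeds. All names here (`Policy`, `Gatekeeper`, the operation labels) are hypothetical illustrations, not the actual platform's API.

```python
# Hypothetical sketch of a user-policy "gatekeeper": a service must pass
# each requested operation through the gatekeeper, which releases data
# only when the owner's declared restrictions allow it.
from dataclasses import dataclass, field

@dataclass
class Policy:
    allowed_ops: set = field(default_factory=set)  # operations the user permits
    allow_aggregation: bool = False                # may data be pooled with other users'?

class Gatekeeper:
    def __init__(self):
        self._data = {}      # user_id -> stored data
        self._policies = {}  # user_id -> Policy

    def store(self, user_id, data, policy):
        self._data[user_id] = data
        self._policies[user_id] = policy

    def request(self, user_id, op, aggregate=False):
        """Release data only if the operation satisfies the owner's policy."""
        policy = self._policies[user_id]
        if op not in policy.allowed_ops:
            raise PermissionError(f"{op!r} not permitted by {user_id}'s policy")
        if aggregate and not policy.allow_aggregation:
            raise PermissionError(f"{user_id} forbids cross-user aggregation")
        return self._data[user_id]

gk = Gatekeeper()
gk.store("alice", {"purchases": ["book"]},
         Policy(allowed_ops={"recommend"}, allow_aggregation=False))
gk.request("alice", "recommend")  # permitted: returns alice's data
try:
    gk.request("alice", "recommend", aggregate=True)  # blocked by policy
except PermissionError as err:
    print("blocked:", err)
```

The design point is that the policy check sits between storage and every consumer, so a service cannot aggregate or share data paths the user never authorized.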
-
-
Don’t be fooled by fake images and videos online
Advances in artificial intelligence have made it easier to create compelling and sophisticated fake images, videos and audio recordings. Meanwhile, misinformation proliferates on social media, and a polarized public may have become accustomed to being fed news that conforms to their worldview. All of this contributes to a climate in which it is increasingly difficult to believe what you see and hear online. As the author of the upcoming book Fake Photos, to be published in August, I’d like to offer a few tips to protect yourself from falling for a hoax.
-
-
Are Russian trolls saving measles from extinction?
Scientific researchers say Russian social-media trolls who spread discord before the 2016 U.S. presidential election may also have contributed to the 2018 outbreak of measles in Europe that killed 72 people and infected more than 82,000 — mostly in Eastern and Southeastern European countries known to have been targeted by Russia-based disinformation campaigns. Experts in the United States and Europe are now working on ways to gauge the impact that Russian troll and bot campaigns have had on the spread of the disease by distributing medical misinformation and raising public doubts about vaccinations.
-
-
Russia is attacking the U.S. system from within
A new court filing submitted last Wednesday by Special Counsel Robert Mueller shows that a Russian troll farm currently locked in a legal battle over its alleged interference in the 2016 election appeared to wage yet another disinformation campaign late last year—this time targeting Mueller himself. Concord Management and Consulting is accused of funding the troll farm, known as the Internet Research Agency. But someone connected to Concord allegedly manipulated documents obtained from prosecutors during the discovery process and leaked them to reporters, hoping the documents would make people think that Mueller’s evidence against the troll farm and its owners was flimsy. Natasha Bertrand writes that “The tactic didn’t seem to convince anyone, but it appeared to mark yet another example of Russia exploiting the U.S. justice system to undercut its rivals abroad.”
-