-
How Russia May Have Used Twitter to Seize Crimea
Online discourse by users of social media can provide important clues about the political dispositions of communities. New research suggests it can even serve as a source of military intelligence, allowing governments to estimate the prospective casualties and costs of occupying foreign territory; real-time social media data may have played exactly this role for the Kremlin, and potentially for other governments.
-
-
“Like” at Your Own Risk
New “Chameleon” Attack Can Secretly Modify Content on Facebook, Twitter, or LinkedIn: That video or picture of a cute dog, your favorite team, or a political candidate that you “liked” on social media can be altered by a cyberattack into something completely different, detrimental, and potentially criminal.
-
-
Researcher Tests “Vaccine” Against Hate
Amid a spike in violent extremism around the world, a communications researcher is experimenting with a novel idea: whether people can be “inoculated” against hate with a little exposure to extremist propaganda, in the same manner that vaccines enable human bodies to fight disease.
-
-
"Redirect Method": Countering Online Extremism
In recent years, deadly white supremacist violence at houses of worship in Pittsburgh, Christchurch, and Poway demonstrated the clear line from violent hate speech and radicalization online to in-person violence. With perpetrators of violence taking inspiration from online forums, leveraging the anonymity and connectivity of the internet, and developing sophisticated strategies to spread their messages, the stakes couldn’t be higher in tackling online extremism. Researchers have developed the Redirect Method to counter white supremacist and jihadist activity online.
-
-
YouTube’s Algorithms Might Radicalize People – but the Real Problem Is We’ve No Idea How They Work
Does YouTube create extremists? It’s hard to argue that YouTube doesn’t play a role in radicalization, Chico Camargo writes. “In fact, maximizing watchtime is the whole point of YouTube’s algorithms, and this encourages video creators to fight for attention in any way possible.” Society must insist on using algorithm auditing, even though it is a difficult and costly process. “But it’s important, because the alternative is worse. If algorithms go unchecked and unregulated, we could see a gradual creep of conspiracy theorists and extremists into our media, and our attention controlled by whoever can produce the most profitable content.”
-
-
Chinese Communist Party’s Media Influence Expands Worldwide
Over the past decade, Chinese Communist Party (CCP) leaders have overseen a dramatic expansion in the regime’s ability to shape media content and narratives about China around the world, affecting every region and multiple languages, according to a new report. This trend has accelerated since 2017, with the emergence of new and more brazen tactics by Chinese diplomats, state-owned news outlets, and CCP proxies.
-
-
Combating the Latest Technological Threat to Democracy: A Comparison of Facebook and Twitter’s Deepfake Policies
Twitter and Facebook have both recently announced policies for handling synthetic and manipulated media content on their platforms. Side-by-side comparison and analysis of Twitter and Facebook’s policies highlights that Facebook focuses on a narrow, technical type of manipulation, while Twitter’s approach contemplates the broader context and impact of manipulated media.
-
-
Countering Hate Speech by Detecting, Highlighting “Help Speech”
Researchers have developed a system that leverages artificial intelligence to rapidly analyze hundreds of thousands of comments on social media and identify the fraction that defend or sympathize with disenfranchised minorities such as the Rohingya community. Human social media moderators, who couldn’t possibly manually sift through so many comments, would then have the option to highlight this “help speech” in comment sections.
-
-
Why the Jeffrey Epstein Saga Was the Russian Government-Funded Media’s Top Story of 2019
In a year featuring a presidential impeachment, Brexit, mass protests in Hong Kong, and widespread geopolitical turmoil, few topics dominated the Russian government-funded media landscape quite like the arrest and subsequent suicide of billionaire financier and serial sex offender Jeffrey Epstein. Given the lack of any notable connection between Epstein and Russian interests, the focus on Epstein highlights the Kremlin’s clear prioritization of content meant to paint a negative image of the West rather than a positive image of Russia.
-
-
New AI-Based Tool Flags Fake News for Media Fact-Checkers
A new artificial intelligence (AI) tool could help social media networks and news organizations weed out false stories. The tool uses deep-learning AI algorithms to determine if claims made in posts or stories are supported by other posts and stories on the same subject.
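The article does not detail the tool’s internals; purely as an illustration of the general “is this claim supported by other coverage?” approach, here is a minimal Python sketch in which the agreement model, function names, and threshold are all assumptions rather than the tool’s actual design:

# Hypothetical sketch of support-based fake-news flagging; not the actual tool.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Verdict:
    claim: str
    support: float   # mean agreement with related stories, in [-1, 1]
    flagged: bool    # True when related coverage fails to back the claim

def check_claim(claim: str,
                related_stories: List[str],
                agreement: Callable[[str, str], float],  # stand-in for a deep model
                threshold: float = 0.0) -> Verdict:
    """Flag a claim when, on average, stories on the same subject do not support it."""
    if not related_stories:
        return Verdict(claim, 0.0, flagged=True)  # nothing corroborates it
    scores = [agreement(claim, story) for story in related_stories]
    mean = sum(scores) / len(scores)
    return Verdict(claim, mean, flagged=mean < threshold)

# Toy stand-in for the learned agreement model: scaled keyword overlap.
def toy_agreement(claim: str, story: str) -> float:
    c, s = set(claim.lower().split()), set(story.lower().split())
    return 2 * len(c & s) / max(len(c), 1) - 1

print(check_claim("city floods after storm",
                  ["storm floods city center", "officials deny any flooding"],
                  toy_agreement))

In practice the agreement function would be a trained deep-learning model (for example, a natural-language-inference classifier), and the threshold would be tuned against labeled data.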
-
-
Enhanced Deepfakes Capabilities for Less-Skilled Threat Actors Mean More Misinformation
The ability to create manipulated content is not new. But what has changed with the advances in artificial intelligence is that you can now build a very convincing deepfake without being an expert in technology. This “democratization” of deepfakes will increase the quantity of misinformation and disinformation aimed at weakening and undermining evidence-based discourse.
-
-
The United States Should Not Act as If It’s the Only Country Facing Foreign Interference
“Right now, Russia’s security services and their proxies have geared up to repeat their interference in the 2020 election. We are running out of time to stop them.” This stark warning from former National Security Council official Fiona Hill serves as a sharp reminder of the threat to democracy posed by foreign interference and disinformation. Russia’s ongoing interference in U.S. affairs is just a small piece on a big chessboard. A key foreign policy goal of the Kremlin is to discredit, undermine, and embarrass what it sees as a liberal international order intent on keeping Russia down and out. Russia’s systematic attack on U.S. democracy in 2016 was unprecedented, but its playbook is not unique.
-
-
Containing Online Hate Speech as If It Were a Computer Virus
Artificial intelligence is being developed that will allow advisory “quarantining” of hate speech in a manner akin to malware filters – offering users a way to control their exposure to “hateful content” without resorting to censorship.
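The underlying system is not described in detail; the following minimal Python sketch (the scoring function, threshold, and prompt are illustrative assumptions) shows the advisory-quarantine flow, in which flagged content is held behind a warning and the user, not the platform, makes the final call:

# Hypothetical sketch of advisory quarantine for hateful content.
# Like a spam or malware filter, it holds suspect posts rather than deleting them.
from typing import Callable, Optional

def ask_user(warning: str) -> bool:
    """Advisory prompt: exposure is the user's choice, not the platform's."""
    return input(f"{warning} Show anyway? [y/N] ").strip().lower() == "y"

def deliver(post: str,
            hate_score: Callable[[str], float],  # stand-in for a trained classifier
            sensitivity: float = 0.3             # user-set threshold in [0, 1]
            ) -> Optional[str]:
    score = hate_score(post)
    if score < sensitivity:
        return post  # passes straight through to the timeline
    if ask_user(f"Quarantined: likely hateful content (score {score:.2f})."):
        return post  # user chose to view it
    return None      # remains quarantined -- filtered, not censored

# Toy stand-in classifier: fraction of words on a tiny blocklist.
BLOCKLIST = {"vermin", "subhuman"}
def toy_score(post: str) -> float:
    words = post.lower().split()
    return sum(w in BLOCKLIST for w in words) / max(len(words), 1)

print(deliver("hello world", toy_score))  # benign post is delivered untouched

Raising or lowering the sensitivity lets each user decide how aggressive the filter should be, which is what distinguishes advisory quarantining from outright removal.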
-
-
Seizure-Triggering Attack Is Stark Example of How Social Media Can Be Weaponized
Followers of the Epilepsy Foundation’s Twitter handle were targeted last month with posts containing strobe-light GIFs and videos that could have caused seizures in people with epilepsy, the foundation announced Monday. “While this kind of activity may not bear the hallmarks of a cyberattack, which can trick users into clicking malicious links or knock a website offline by flooding it with junk traffic, this attack shows that platforms can have even their normal functions weaponized in order to cause physical harm,” Shannon Vavra writes.
-
-
Click Here to Kill
The idea of an online assassination market was advanced long before it was possible to build one, and long before there was anything resembling the dark web. Susan Choi writes that a threshold has been crossed: advances in encryption and cryptocurrency have made this dark vision a reality. On 12 March 2019, journalists at BBC News Russia confirmed the first known case of a murder ordered on the dark web and successfully carried out by hired assassins. The FBI and DHS are worried.
-