Government Regulation Can Effectively Curb Social-Media Dangers
Social media posts that promote terrorism and hate, spread medical misinformation, encourage dangerous challenges that put teens' lives at risk, or glamorize suicide pose a significant threat to society. New EU rules require social media platforms to take down flagged posts within 24 hours – and modelling shows that is fast enough to have a dramatic effect on the spread of harmful content.
-
-
Conspiracy Theories: How Social Media Can Help Them Spread and Even Spark Violence
Conspiracy theory beliefs and (more generally) misinformation may be groundless, but they can have a range of harmful real-world consequences, including spreading lies, undermining trust in media and government institutions and inciting violent or even extremist behaviors.
-
-
Fighting Fake “Facts” with Two Little Words: Grounding a Large Language Model's Answers in Reality
Asking ChatGPT for answers comes with a risk: it may offer you entirely made-up “facts” that sound legitimate. Despite having been trained on vast amounts of factual data, large language models, or LLMs, are prone to generating false information known as hallucinations. Inspired by a phrase commonly used in journalism, researchers studied the impact of incorporating the words “according to” in LLM queries.
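The technique the study describes amounts to a small change in how the prompt is built before it reaches the model. A minimal sketch, assuming a hypothetical `ground_query` helper (the function name and the choice of "Wikipedia" as the named source are illustrative, not from the study):

```python
def ground_query(question: str, source: str = "Wikipedia") -> str:
    """Prepend an 'according to' grounding phrase, nudging the model
    to recall text from its training data rather than improvise."""
    # Lower-case the first letter so the question reads naturally
    # after the grounding clause.
    return f"According to {source}, {question[0].lower()}{question[1:]}"

# The grounded prompt would be sent to the LLM in place of the raw question.
prompt = ground_query("What year was the Eiffel Tower completed?")
print(prompt)
# → According to Wikipedia, what year was the Eiffel Tower completed?
```

The grounding phrase costs nothing to add and requires no change to the model itself, which is what makes the finding notable.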
-
-
Fact-Checking Found to Influence Recommender Algorithms
Researchers have shown that urging individuals to actively participate in the news they consume – for example, by fact-checking it – can reduce the spread of falsehoods. “We don’t have to think of ourselves as captive to tech platforms and algorithms,” said a researcher.
-
-
Fighting Fake News: Using Machine Learning, Blockchain to Counter Misinformation
False information can lead to harmful consequences. How can content creators focus their efforts on areas where the misinformation is likely to do the most public harm? Research offers possible solutions through a proposed machine learning framework, as well as expanded use of blockchain technology.
-
-
Hateful Usernames in Online Multiplayer Games
The online games industry continues to fall short in protecting players from hate and extremist content in games. Usernames are a basic part of any online experience. A new report focuses on hateful usernames, which should be the easiest content for companies to moderate.
-
-
China’s Cyber Interference and Transnational Crime Groups in Southeast Asia
The Chinese Communist Party has a long history of engaging criminal organizations and proxies to achieve its strategic objectives. This activity includes influence and disinformation campaigns that use fake personas and inauthentic social media accounts linked to transnational criminal organizations.
-
-
The Promise—and Pitfalls—of Researching Extremism Online
While online spaces are key enablers for extremist movements, social media research hasn’t provided many answers to fundamental questions. How big of a problem is extremism, in the United States or around the world? Is it getting worse? Are social media platforms responsible, or did the internet simply reveal existing trends? Why do some people become violent?
-
-
Six Things to Watch Following Meta's Threads Launch
Meta’s ‘Twitter killer,’ Threads, launched on July 6 to media fanfare. With another already politically charged U.S. election on the horizon, online hate and harassment at record highs, and a rise in antisemitism and extremist incidents both on- and offline, a new social media product of this scale will present serious challenges.
-
-
Fact Check: Why Do We Believe Fake News?
Fake news has become a real threat to society. Some internet users are more likely than others to accept misinformation and fake news as true. How do psychological and social factors influence whether we fall for fake news? And what can we do about it?
-
-
Preliminary Injunction Limiting Government Communications with Platforms Tackles Illegal “Jawboning,” but Fails to Provide Guidance on What’s Unconstitutional
Government “jawboning” of social media platforms is a serious issue deserving serious attention and judicial scrutiny. A July 4 preliminary injunction issued by a federal judge in Louisiana, limiting government contacts with social media platforms, is notable as the first court order to hold the government accountable for unconstitutional jawboning, but it is not the serious examination of the issue that is sorely needed. The court did not distinguish between unconstitutional and constitutional interactions or provide guideposts for telling them apart in the future.
-
-
Muting Trump’s “Megaphone” Easier Said Than Done
How do you cover Donald Trump? He is going to give a lot of speeches, and parts of his message will be provably false, reflect intolerance, and promote anti-democratic ideas. Political experts suggest ways the media can blunt the former president’s skillful manipulation of coverage to disseminate falsehoods and spread messages that are often sharply divisive and periodically dangerous.
-
-
The ‘Truther Playbook’: Tactics That Explain Vaccine Conspiracy Theorist RFK Jr’s Presidential Momentum
Polls show that Robert F. Kennedy Jr., a promoter of anti-vaccine conspiracy theories, has been drawing surprising early support in his campaign for the Democratic presidential nomination. Kennedy is using the “truther playbook” – promising identity and belonging, revealing “true” knowledge, providing meaning and purpose, and promising leadership and guidance – which proves appealing in our current post-truth era, in which opinions often triumph over facts, and in which charlatans can achieve authority by framing their opponents as corrupt and evil and claiming to expose this corruption. These rhetorical techniques can be used to promote populist politics just as much as anti-vaccine content.
-
-
Six Pressing Questions We Must Ask About Generative AI
The past twenty-five years have demonstrated that waiting until after harms occur to implement internet safeguards fails to protect users. The emergence of Generative Artificial Intelligence (GAI) lends an unprecedented urgency to these concerns as this technology outpaces what regulations we have in place to keep the internet safe.
-
-
Research Shows How Terrorism Affects Our Language and Voting Patterns
Following the series of terrorist attacks between 2015 and 2017, German Twitter users shifted their language toward that of the far-right AfD party. Voters subsequently rewarded the party at the 2017 election.
-
More headlines
The long view
Study Highlights Challenges in Detecting Violent Speech Aimed at Asian Communities
A study of language detection software found that algorithms struggle to differentiate anti-Asian violence-provoking speech from general hate speech. Left unchecked, threats of violence online can go unnoticed and turn into real-world attacks.
App Helps Users Transition from Doom-Scrolling to Mindfulness
Do you find yourself doom-scrolling (spending more time than you should consuming negative news on the internet and social media) and want to stop? A new app unites principles from art and technology to encourage mindfulness on the go.
AI-Powered Massive Deepfake Detector to Safeguard Elections from Deepfake Threats
Israeli startup Revealense has introduced its illuminator Massive Deepfake Detector, an AI-powered solution designed to combat the growing threat of deepfakes in electoral processes. Dov Donin, CEO of Revealense, said: “Our system is already used by several governments globally.”