-
Extremist Communities Continue to Rely on YouTube for Hosting, but Most Videos Are Viewed Off-Site, Research Finds
After the 2016 U.S. presidential election, YouTube was so heavily criticized for radicalizing users by recommending increasingly extremist and fringe content that it changed its recommendation algorithm. Research four years later found that while extremist content remained on YouTube, it was subscriptions and external referrals, not the recommendation algorithm, that drove disaffected users to that content.
-
-
Can Wikipedia-like Citations on YouTube Curb Misinformation?
Videos can be dense with information: text, audio, and image after image. Yet each of these layers presents a potential source of error or deceit. And when people search for videos directly on a site like YouTube, sussing out which videos are credible sources can be tricky.
-
-
Users Seek Out Echo Chambers on Social Media
Users are inclined to favor popular opinion; lack of exposure to dissent contributes to polarization.
-
-
Truth and Reality with Chinese Characteristics
The Chinese Communist Party seeks to maintain total control over the information environment within China, while simultaneously working to extend its influence abroad to reshape the global information ecosystem. That includes not only controlling media and communications platforms outside China, but also ensuring that Chinese technologies and companies become the foundational layer for the future of information and data exchange worldwide.
-
-
Banning TikTok Won’t Solve Social Media’s Foreign Influence, Teen Harm and Data Privacy Problems
Concerns about TikTok are not unfounded, but they are also not unique. Each threat posed by TikTok has also been posed by U.S.-based social media for over a decade. Lawmakers should take action to address harms caused by U.S. companies seeking profit as well as by foreign companies perpetrating espionage. Protecting Americans cannot be accomplished by banning a single app. To truly protect their constituents, lawmakers would need to enact broad, far-reaching regulation.
-
-
Exploring New Ideas for Countering Disinformation
The rise of social media has connected people to one another and to information more rapidly and directly than ever before, but this fast-moving digital information landscape has also turbocharged the spread of misinformation and disinformation. From COVID-19 to climate change, coordinated social media efforts to disseminate intentionally false or misleading information are sowing distrust in science and in public institutions, and causing real harms to individuals and society more broadly.
-
-
TikTok Ban Feared, Antisemitic Conspiracy Theories Follow
Soon after news broke on 13 March 2024 that the House had passed a bill that could lead to a nationwide ban of the popular social media platform TikTok, influencers and extremists from across the political spectrum began framing the bill as an outright ban and speculating that it was a product of Jewish or Zionist influence, calling it an effort to infringe on free speech by limiting the reach of pro-Palestinian content.
-
-
Anti-Vaccine Conspiracies Fuel Divisive Political Discourse
Heightened use of social media during the coronavirus pandemic brought with it an unprecedented surge in the spread of misinformation. Of particular significance were conspiracy theories surrounding the virus and the vaccines developed to combat it. New analysis shows how such conspiracy theories gain political weight through social media.
-
-
AI and the Spread of Fake News Sites: Experts Explain How to Counteract Them
With national elections looming in the United States, concerns about misinformation are sharper than ever, and advances in artificial intelligence (AI) have made distinguishing genuine news sites from fake ones even more challenging.
-
-
Disinformation Threatens Global Elections – Here’s How to Fight Back
With over half the world’s population heading to the polls in 2024, disinformation season is upon us, and the warnings are dire. Many efforts have focused on fact-checking and debunking false beliefs after they spread. “Prebunking,” in contrast, aims to prevent false beliefs from forming in the first place, much as vaccination protects against highly infectious diseases such as polio before exposure. Our challenge now is to build herd immunity to the tricks of disinformers and propagandists. The future of our democracy may depend on it.
-
-
X Provides Premium Perks to Hezbollah, Other U.S.-Sanctioned Groups
The U.S. imposes sanctions on individuals, groups, and countries deemed to be a threat to national security. Elon Musk’s X appears to be selling premium service to some of them. An investigation identified more than a dozen X accounts for U.S.-sanctioned entities that had a blue checkmark, which requires the purchase of a premium subscription. Along with the checkmarks, which are intended to confer legitimacy, X promises a variety of perks for premium accounts, including the ability to post longer text and videos and greater visibility for some posts.
-
-
Social Media Posts Have Power, and So Do You
In a healthy democracy, having accurate information is crucial for making informed decisions about voting and civic engagement. False and misleading information can lead to knowledge that is inaccurate, incomplete, or manipulated. Such knowledge can erode trust in democratic institutions and contribute to divisions within society. Fortunately, the ability to identify and resist false and misleading information is not static, because this ability relies on skills that can be learned.
-
-
Using AI to Monitor the Internet for Terror Content Is Inescapable – but Also Fraught with Pitfalls
The vast ocean of online material needs to be constantly monitored for harmful or illegal content, such as material promoting terrorism and violence. The sheer volume of content means that it is not possible for people to inspect and check all of it manually, which is why automated tools, including artificial intelligence (AI), are essential. But such tools also have their limitations.
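To make the scale argument concrete, here is a minimal, purely illustrative sketch of an automated triage step, in which a cheap scoring function flags likely-harmful posts for human review rather than removing them outright. The watchlist, threshold, and function names are hypothetical, and real systems rely on trained classifiers rather than keyword matching; this is not any platform's actual pipeline.

```python
from dataclasses import dataclass

# Hypothetical watchlist standing in for a trained model's vocabulary of concern.
FLAGGED_TERMS = {"attack", "bomb", "kill"}

@dataclass
class Post:
    post_id: str
    text: str

def harm_score(text: str) -> float:
    """Toy stand-in for a classifier: fraction of watchlist terms present in the post."""
    words = set(text.lower().split())
    return len(words & FLAGGED_TERMS) / max(len(FLAGGED_TERMS), 1)

def triage(posts: list[Post], threshold: float = 0.3) -> list[Post]:
    """Return posts whose score meets the threshold, queued for human review."""
    return [p for p in posts if harm_score(p.text) >= threshold]

if __name__ == "__main__":
    sample = [Post("1", "planning an attack tonight"), Post("2", "cute cat video")]
    for post in triage(sample):
        print(f"Queued for review: {post.post_id}")
```

Even this toy version illustrates the limitation the article raises: a fixed threshold inevitably trades false positives against missed content, which is why human reviewers remain in the loop.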
-
-
Campus Antisemitism Online: The Proliferation of Hate on Sidechat
Antisemitism has soared in the wake of the Hamas assault on Israel on October 7th with an intensity that has shocked many. Jewish students and campus organizations, such as Hillel, report that anti-Jewish and anti-Israel sentiments are often spread via campus messaging apps like Yik Yak and Sidechat, where hate can easily be masked behind a cloak of anonymity. Jewish students have reported death threats, verbal and physical assaults, and levels of intimidation that have left some afraid to attend classes or, in some instances, even to venture outside their dorm rooms.
-
-
New Russian Disinformation Campaigns Prove the Past Is Prequel
Since 2016, conversations about disinformation have focused on the role of technology, from chatbots to deepfakes. Persuasion, however, is a fundamentally human-centered endeavor, and humans haven't changed. Darren Linvill and Patrick Warren write that the fundamentals of covert influence haven't changed either.
-
The long view
Study Highlights Challenges in Detecting Violent Speech Aimed at Asian Communities
A study of language-detection software found that algorithms struggle to differentiate anti-Asian violence-provoking speech from general hate speech. When such speech goes undetected, online threats of violence can escalate into real-world attacks.
App Helps Users Transition from Doom-Scrolling to Mindfulness
Do you find yourself doom-scrolling, spending more time than you should consuming negative news on the internet and social media, and wanting to stop? A new app unites principles from art and technology to encourage mindfulness on the go.
AI-Powered Massive Deepfake Detector to Safeguard Elections from Deepfake Threats
Israeli startup Revealense has introduced its illuminator Massive Deepfake Detector, an AI-powered solution designed to combat the growing threat of deepfakes in electoral processes. Dov Donin, CEO of Revealense, said: “Our system is already used by several governments globally.”