-
Disinformation Threatens Global Elections – Here’s How to Fight Back
With over half the world’s population heading to the polls in 2024, disinformation season is upon us — and the warnings are dire. Many efforts have focused on fact-checking and debunking false beliefs. In contrast, “prebunking” is a newer approach that aims to prevent false beliefs from forming in the first place. Just as vaccination and herd immunity all but eradicated polio, a highly infectious disease, our challenge now is to build herd immunity to the tricks of disinformers and propagandists. The future of our democracy may depend on it.
-
-
Feds Deliver Stark Warnings to State Election Officials Ahead of November
Federal law enforcement and cybersecurity officials are warning the nation’s state election administrators that they face serious threats ahead of November’s presidential election, as AI, ransomware attacks, and malicious mail could disrupt voting.
-
-
Using AI to Develop Enhanced Cybersecurity Measures
By using artificial intelligence to address several critical shortcomings in large-scale malware analysis, researchers are making significant advances in the classification of Microsoft Windows malware and paving the way for enhanced cybersecurity measures.
-
-
X Provides Premium Perks to Hezbollah, Other U.S.-Sanctioned Groups
The U.S. imposes sanctions on individuals, groups, and countries deemed to be a threat to national security. Elon Musk’s X appears to be selling premium service to some of them. An investigation identified more than a dozen X accounts for U.S.-sanctioned entities that had a blue checkmark, which requires the purchase of a premium subscription. Along with the checkmarks, which are intended to confer legitimacy, X promises a variety of perks for premium accounts, including the ability to post longer text and videos and greater visibility for some posts.
-
-
Social Media Posts Have Power, and So Do You
In a healthy democracy, accurate information is crucial for making informed decisions about voting and civic engagement. False and misleading information can leave people with an understanding that is inaccurate, incomplete, or manipulated. Such a distorted understanding can erode trust in democratic institutions and deepen divisions within society. Fortunately, the ability to identify and resist false and misleading information is not fixed: it rests on skills that can be learned.
-
-
Using AI to Monitor the Internet for Terror Content Is Inescapable – but Also Fraught with Pitfalls
This vast ocean of online material must be constantly monitored for harmful or illegal content, such as material promoting terrorism and violence. The sheer volume means it is impossible for people to inspect and check all of it manually, which is why automated tools, including artificial intelligence (AI), are essential. But such tools also have their limitations.
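As a rough illustration of how automated triage works — and where it falls short — here is a minimal keyword-scoring sketch. The term list, weights, and thresholds are invented for illustration; real moderation systems rely on trained models and hash-matching of known material:

```python
# Minimal keyword-scoring triage (illustrative only: the term list,
# weights, and thresholds are hypothetical; production systems use
# trained classifiers and perceptual hashes of known extremist media).
import re

TERM_WEIGHTS = {"attack": 1.0, "bomb": 2.0, "join us": 1.5}  # hypothetical list

def score(text: str) -> float:
    """Sum the weights of flagged terms appearing as whole words."""
    t = text.lower()
    return sum(w for term, w in TERM_WEIGHTS.items()
               if re.search(r"\b" + re.escape(term) + r"\b", t))

def triage(text: str, remove_at: float = 3.0, review_at: float = 1.5) -> str:
    """Auto-remove only at high scores; route borderline cases to humans."""
    s = score(text)
    if s >= remove_at:
        return "remove"
    if s >= review_at:
        return "human_review"
    return "allow"
```

Even this toy exposes the pitfalls the article raises: innocuous uses of flagged words (an "attack" in a chess column) inflate the score, which is why routing borderline cases to human review, rather than auto-removing them, remains part of the loop.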
-
-
U.S. Disrupts Botnet China Used to Conceal Hacking of Critical Infrastructure
In December 2023, the FBI disrupted a botnet of hundreds of U.S.-based small office/home office (SOHO) routers hijacked by People’s Republic of China (PRC) state-sponsored hackers. The Chinese government hackers used privately owned SOHO routers infected with the “KV Botnet” malware to conceal the PRC origin of further hacking activities directed against U.S. critical infrastructure and the critical infrastructure of other foreign victims.
-
-
Campus Antisemitism Online: The Proliferation of Hate on Sidechat
Antisemitism has soared in the wake of the Hamas assault on Israel on October 7th with an intensity that has shocked many. Jewish students and campus organizations, such as Hillel, report that anti-Jewish and anti-Israel sentiments are often being spread via campus messaging apps like Yik Yak and Sidechat, where hate can easily be masked behind a cloak of anonymity. Jewish students have reported death threats, verbal and physical assaults, and levels of intimidation that have made some afraid to attend classes or, in some instances, even to venture outside their dorm rooms.
-
-
New Russian Disinformation Campaigns Prove the Past Is Prequel
Since 2016, conversations about disinformation have focused on the role of technology—from chatbots to deepfakes. Persuasion, however, is a fundamentally human-centered endeavor, and humans haven’t changed. Darren Linvill and Patrick Warren write that the fundamentals of covert influence haven’t changed either.
-
-
Fake News: Who's Better at Detecting It?
More than 2 billion voters in 50 countries are set to go to the polls in 2024 — a record-breaking year for elections. But 2024 is also the year when artificial intelligence (AI) could flood our screens with fake news like never before. With the U.S. in election mode, a study finds Republicans are less likely to spot fake news than Democrats. Gender and education are important factors, too.
-
-
$2.6 Million NSF Grant for FAU’s CyberCorps Student Scholarship Program
A $2.6 million grant from the National Science Foundation (NSF) will allow FAU to establish a scholarship program in the rapidly growing field of cybersecurity. The program is managed by the NSF and the Department of Homeland Security (DHS). Designed to increase the size and strength of the nation’s cybersecurity workforce, the program provides full scholarships and stipends to students pursuing studies at the intersection of cybersecurity and AI.
-
-
Slow the Scroll: Users Less Vigilant About Misinformation on Mobile Phones
Mobile phones pack a lot of information into pocket-sized devices, which is why users may want to slow down the next time they’re scrolling through social media or checking email on a mobile app. Habitual mobile phone users process information less deeply and are more likely to fall for misinformation on their phones than on personal computers, researchers find.
-
-
Truth Decay and National Security
The line between fact and opinion in public discourse has been eroding, and with it the public’s ability to have arguments and find common ground based in fact. Two core drivers of Truth Decay are political polarization and the spread of misinformation—and these are particularly intertwined in the national security arena. Exposure to misinformation leads to increased polarization, and increased polarization decreases the impact of factual information. Individuals, institutions, and the nation as a whole are vulnerable to this vicious cycle.
-
-
Jan. 6 Was an Example of Networked Incitement − a Media and Disinformation Expert Explains the Danger of Political Violence Orchestrated Over Social Media
The shocking events of Jan. 6, 2021 were an example of a new phenomenon: influential figures inciting large-scale political violence via social media, and insurgents communicating across multiple platforms to command and coordinate mobilized social movements in the moment of action. We call this phenomenon “networked incitement.” The use of social media for networked incitement foreshadows a dark future for democracies. Rulers could well come to power by manipulating mass social movements via social media, directing a movement’s members to serve as the leaders’ shock troops, online and off.
-
-
Identifying Types of Cyberattacks That Manipulate Behavior of AI Systems
AI systems can malfunction when exposed to untrustworthy data — a vulnerability studied in the field of “adversarial machine learning” — and attackers are exploiting it. New guidance documents the types of these attacks, along with mitigation approaches. No foolproof method yet exists for protecting AI from misdirection, and AI developers and users should be wary of anyone who claims otherwise.
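To make the attack class concrete, here is a toy evasion attack on a linear classifier. The model, weights, and inputs are invented; this is the textbook FGSM-style perturbation applied to a linear score, not an exploit drawn from the guidance itself:

```python
# Toy evasion attack on a linear classifier (illustrative of the
# adversarial-ML attack class, with invented weights and data).
# For a linear model, perturbing x' = x - eps * sign(w) pushes the
# decision score toward the opposite class.

def predict(w: list[float], b: float, x: list[float]) -> float:
    """Linear score; > 0 means the positive class (e.g. 'spam')."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def evade(w: list[float], x: list[float], eps: float) -> list[float]:
    """Shift each feature by eps against the sign of its weight."""
    sign = lambda v: (v > 0) - (v < 0)
    return [xi - eps * sign(wi) for wi, xi in zip(w, x)]

w, b = [2.0, -1.0, 0.5], -0.5
x = [1.0, 0.2, 1.0]            # clean score: 2.0 - 0.2 + 0.5 - 0.5 = 1.8
x_adv = evade(w, x, eps=0.6)   # score drops by eps * sum(|w|) = 2.1 -> -0.3
```

The perturbation is small and spread across every feature, which is exactly why such manipulations are hard to spot and why, as the guidance stresses, no foolproof defense exists yet.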
-
More headlines
The long view
States Rush to Combat AI Threat to Elections
By Zachary Roth
This year’s presidential election will be the first since generative AI became widely available. That’s raising fears that millions of voters could be deceived by a barrage of political deepfakes. Congress has done little to address the issue, but states are moving aggressively to respond — though questions remain about how effective any new measures to combat AI-created disinformation will be.
Ransomware Attacks: Death Threats, Endangered Patients and Millions of Dollars in Damages
By Dino Jahic
A ransomware attack on Change Healthcare, a company that processes 15 billion health care transactions annually and deals with 1 in 3 patient records in the United States, is continuing to cause massive disruptions nearly three weeks later. The incident, which started on February 21, has been called the “most significant cyberattack on the U.S. health care system” by the American Hospital Association. It is just the latest example of an increasing trend.
Chinese Government Hackers Targeted Critics of China, U.S. Businesses and Politicians
An indictment was unsealed Monday charging seven nationals of the People’s Republic of China (PRC) with conspiracy to commit computer intrusions and conspiracy to commit wire fraud for their involvement in a PRC-based hacking group that spent approximately 14 years targeting U.S. and foreign critics, businesses, and political officials in furtherance of the PRC’s economic espionage and foreign intelligence objectives.
Autonomous Vehicle Technology Vulnerable to Road Object Spoofing and Vanishing Attacks
Researchers have demonstrated potentially hazardous vulnerabilities in LiDAR (Light Detection and Ranging), the technology many autonomous vehicles use to navigate streets, roads, and highways. The researchers have shown how to use lasers to fool LiDAR into “seeing” objects that are not present and into missing objects that are — deficiencies that can cause unwarranted and unsafe braking or collisions.
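A toy simulation conveys the two failure modes. The geometry, thresholds, and braking rule below are assumptions made for illustration, not the researchers' actual setup:

```python
# Toy model of LiDAR spoofing/vanishing attacks (assumed geometry and
# thresholds, not the researchers' experimental setup): an emergency-
# braking rule fires when any return lies in the vehicle's lane closer
# than a threshold distance.

def should_brake(points, lane_halfwidth=1.5, brake_dist=10.0):
    """points: (x_forward, y_lateral) LiDAR returns in meters."""
    return any(abs(y) <= lane_halfwidth and 0 < x <= brake_dist
               for x, y in points)

clear_road = [(25.0, 0.3), (12.0, 4.0)]   # nothing close in the lane
obstacle   = clear_road + [(7.0, 0.2)]    # a real object 7 m ahead

phantom  = clear_road + [(6.0, 0.0)]              # injected points: fake object
vanished = [p for p in obstacle if p[0] > 10.0]   # removed points: real object hidden
```

The phantom scene triggers unwarranted braking on an empty road, while the vanished scene lets a real obstacle through undetected — the two deficiencies the researchers demonstrated with laser injection and removal.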
Tantalizing Method to Study Cyberdeterrence
By Trina West
Tantalus is unlike most war games because it is experimental rather than experiential. The immersive game combines scientific rigor and quantitative assessment methods drawn from the experimental sciences, and this experimental approach to war gaming yields insightful data about real-world cyberattacks.