-
$2.6 Million NSF Grant for FAU’s CyberCorps Student Scholarship Program
A $2.6 million grant from the National Science Foundation (NSF) will allow FAU to establish a scholarship program in the rapidly growing field of cybersecurity. The program is managed by the NSF and the Department of Homeland Security (DHS). Designed to increase the volume and strength of the nation’s cybersecurity workforce, the program provides full scholarships and stipends to students pursuing studies at the intersection of cybersecurity and AI.
-
-
Slow the Scroll: Users Less Vigilant About Misinformation on Mobile Phones
Mobile phones pack a lot of information into pocket-sized devices, which is why users may want to slow down the next time they’re scrolling through social media or checking email on a mobile app. Habitual mobile phone users engage in less information processing and are more likely to fall for misinformation on their phones than on personal computers, researchers find.
-
-
Truth Decay and National Security
The line between fact and opinion in public discourse has been eroding, and with it the public’s ability to have arguments and find common ground based in fact. Two core drivers of Truth Decay are political polarization and the spread of misinformation—and these are particularly intertwined in the national security arena. Exposure to misinformation leads to increased polarization, and increased polarization decreases the impact of factual information. Individuals, institutions, and the nation as a whole are vulnerable to this vicious cycle.
-
-
Jan. 6 Was an Example of Networked Incitement − a Media and Disinformation Expert Explains the Danger of Political Violence Orchestrated Over Social Media
The shocking events of Jan. 6, 2021, were an example of a new phenomenon: influential figures inciting large-scale political violence via social media, and insurgents communicating across multiple platforms to command and coordinate mobilized social movements in the moment of action. We call this phenomenon “networked incitement.” The use of social media for networked incitement foreshadows a dark future for democracies. Rulers could well come to power by manipulating mass social movements via social media, directing a movement’s members to serve as the leaders’ shock troops, online and off.
-
-
Identifying Types of Cyberattacks That Manipulate Behavior of AI Systems
AI systems can malfunction when exposed to untrustworthy data, a problem known as “adversarial machine learning,” and attackers are exploiting it. New guidance documents the types of these attacks, along with mitigation approaches. No foolproof method yet exists for protecting AI from misdirection, and AI developers and users should be wary of anyone who claims otherwise.
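The core scenario behind such attacks, an evasion attack that nudges an input just enough to flip a model’s decision, can be illustrated with a toy fast-gradient-sign sketch. Everything here (the logistic model, its weights, and the perturbation budget `eps`) is invented for illustration; real attacks target far larger models, but the mechanics are the same:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """Probability the toy logistic classifier assigns to the 'positive' class."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm_perturb(w, b, x, y, eps):
    """Shift each feature by eps in the direction that increases the loss.

    For logistic loss, the gradient of the loss w.r.t. the input x
    is (p - y) * w, so we only need the sign of each component.
    """
    p = predict(w, b, x)
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# Hypothetical trained model and a correctly classified input.
w, b = [2.0, -1.0, 0.5], 0.1
x = [0.8, 0.2, 0.5]

clean = predict(w, b, x)                      # confidently above 0.5
adv = fgsm_perturb(w, b, x, y=1.0, eps=0.6)
attacked = predict(w, b, adv)                 # pushed below 0.5
```

A small, targeted shift in each feature is enough to flip the classification, which is why the guidance stresses that clean-data accuracy says little about robustness under attack.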
-
-
Fighting European Threats to Encryption: 2023 Year in Review
Private communication is a fundamental human right. In the online world, the best tool we have to defend this right is end-to-end encryption. Yet throughout 2023, politicians across Europe attempted to undermine encryption, seeking to access and scan our private messages and pictures.
-
-
How Verified Accounts on X Thrive While Spreading Misinformation About the Israel-Hamas Conflict
With the gutting of content moderation initiatives at X, accounts with blue checks, once a sign of authenticity, are disseminating debunked claims and gaining more followers. Community Notes, X’s fact-checking system, hasn’t scaled sufficiently. “The blue check is flipped now. Instead of a sign of authenticity, it’s a sign of suspicion, at least for those of us who study this enough,” said one expert.
-
-
Evaluating the Truthfulness of Fake News Through Online Searches Increases the Chances of Believing Misinformation
Conventional wisdom suggests that searching online to evaluate the veracity of misinformation would reduce belief in it. But a new study by a team of researchers shows the opposite occurs: Searching to evaluate the truthfulness of false news articles actually increases the probability of believing misinformation.
-
-
Shadow Play: A Pro-China and Anti-U.S. Influence Operation Thrives on YouTube
Experts have recently observed a coordinated inauthentic influence campaign originating on YouTube that’s promoting pro-China and anti-U.S. narratives in an apparent effort to shift English-speaking audiences’ views of those countries’ roles in international politics, the global economy and strategic technology competition.
-
-
Hidden Fortunes and Surprising Overestimations in Cybercrime Revenue
The extent to which methodological limitations and incomplete data distort revenue estimates for cybercriminal groups that use the Bitcoin blockchain was largely unknown. A new study challenges existing figures for cybercriminals’ Bitcoin earnings to date, revealing the true scale of the financial impact of cybercriminal activity.
-
-
The Cross-Platform Evasion Toolbox of Islamic State Supporters
Extremists exploiting platforms for their own ends, and learning as they go, is a tale as old as the internet, and one that has become even more pronounced in the era of ubiquitous access to social media. Moustafa Ayad writes that over the past three years, a set of exploitation and evasion tactics has become central for Islamic State supporters online, and those tactics are only getting more elaborate.
-
-
Why Federal Efforts to Protect Schools from Cybersecurity Threats Fall Short
In August 2023, the White House announced a plan to bolster cybersecurity in K-12 schools – and with good reason. Between 2018 and mid-September 2023, there were 386 recorded cyberattacks in the U.S. education sector, costing those schools $35.1 billion. K-12 schools were the primary target. While the steps taken by the White House are positive, as someone who teaches and conducts research about cybersecurity, I don’t believe the proposed measures are enough to protect schools from cyberthreats.
-
-
AI Networks Are More Vulnerable to Malicious Attacks Than Previously Thought
Artificial intelligence tools hold promise for applications ranging from autonomous vehicles to the interpretation of medical images. However, a new study finds these AI tools are more vulnerable than previously thought to targeted attacks that effectively force AI systems to make bad decisions.
-
-
Interference-Free Elections? How Quaint!
There are three major elections taking place in 2024: in Taiwan, the United States, and Russia. So, what are the chances that we’ll see cyber-enabled disruption campaigns targeting each of these polls? Tom Uren writes that in the case of the upcoming U.S. election, it seems inevitable.
-
-
New CPU Vulnerability Makes Virtual Machine Environments Vulnerable
Researchers have identified a security vulnerability that could allow data on virtual machines with AMD processors to fall under the control of attackers.
-
More headlines
The long view
States Rush to Combat AI Threat to Elections
This year’s presidential election will be the first since generative AI became widely available. That’s raising fears that millions of voters could be deceived by a barrage of political deepfakes. Congress has done little to address the issue, but states are moving aggressively to respond — though questions remain about how effective any new measures to combat AI-created disinformation will be.
Ransomware Attacks: Death Threats, Endangered Patients and Millions of Dollars in Damages
A ransomware attack on Change Healthcare, a company that processes 15 billion health care transactions annually and deals with 1 in 3 patient records in the United States, is continuing to cause massive disruptions nearly three weeks later. The incident, which started on February 21, has been called the “most significant cyberattack on the U.S. health care system” by the American Hospital Association. It is just the latest example of an increasing trend.
Chinese Government Hackers Targeted Critics of China, U.S. Businesses and Politicians
An indictment was unsealed Monday charging seven nationals of the People’s Republic of China (PRC) with conspiracy to commit computer intrusions and conspiracy to commit wire fraud for their involvement in a PRC-based hacking group that spent approximately 14 years targeting U.S. and foreign critics, businesses, and political officials in furtherance of the PRC’s economic espionage and foreign intelligence objectives.
Autonomous Vehicle Technology Vulnerable to Road Object Spoofing and Vanishing Attacks
Researchers have demonstrated the potentially hazardous vulnerabilities associated with the technology called LiDAR, or Light Detection and Ranging, which many autonomous vehicles use to navigate streets, roads and highways. The researchers have shown how to use lasers to fool LiDAR into “seeing” objects that are not present and into missing objects that are, deficiencies that can cause unwarranted and unsafe braking or collisions.
Tantalizing Method to Study Cyberdeterrence
Tantalus is unlike most war games because it is experimental rather than purely experiential: the immersive game combines scientific rigor and quantitative assessment methods drawn from the experimental sciences, and this experimental approach to war gaming yields insightful data about real-world cyberattacks.