Disaster at FEMA | Neo-Nazi “Fitness Clubs” Surge in U.S. | Pentagon Snaps Up Ownership Stake in America's Only Rare Earths Mine | Toward an Abundance National Security Agenda, and more

Anti-Government Militia Targets Weather Radars: What to Know  (Emma Marsden, Newsweek)
An “anti-government militia” called Veterans on Patrol has declared that it is targeting weather radar installations in Oklahoma. In an interview with News 9 on Tuesday, Michael Lewis Arthur Meyer, the founder of VOP, which the Southern Poverty Law Center describes as an anti-government militia, confirmed the group’s intentions. When asked whether they were targeting the radars, Meyer replied, “Absolutely.”

Bibliography: Counter-terrorism Cooperation  (Perspectives on Terrorism)
This bibliography contains journal articles, book chapters, books, edited volumes, theses, grey literature, bibliographies and other resources on counterterrorism cooperation. It covers contributions on collaborative efforts by multiple actors in the field of countering terrorism and violent extremism at the international, national, and regional levels. The bibliography focuses on recent publications (up to May 2025) and should not be considered exhaustive. The literature has been retrieved by manually browsing more than 200 core and periphery sources in the field of Terrorism Studies. Additionally, full-text as well as reference retrieval systems have been employed to broaden the search.

Elon Musk Updated Grok. Guess What It Said?  (Matteo Wong, The Atlantic)
After praising Hitler earlier this week, the chatbot is now listing the “good races.”

Animal Liberation Front Claims Responsibility for Releasing Mink from Farm in Stark County  (The Repository)
The Counter Extremism Project describes the Animal Liberation Front as “a far-left extremist group focused on animal rights” that was formed in the 1970s in the United Kingdom. According to the project, it now operates in 40 countries and has “claimed responsibility for arson and vandalism against animal research facilities, farms, restaurants, and other businesses.”

THE LONG VIEW

Tech Giants Warn Window to Monitor AI Reasoning Is Closing, Urge Action  (Paul Arnold, Phys.org)
Artificial intelligence is advancing at a dizzying speed. Like many new technologies, it offers significant benefits but also poses safety risks. Recognizing the potential dangers, leading researchers from Google DeepMind, OpenAI, Meta, Anthropic and a coalition of companies and nonprofit groups have come together to call for more to be done to monitor how AI systems “think.”
In a joint paper published earlier this week and endorsed by prominent industry figures, including Geoffrey Hinton (widely regarded as the “godfather of AI”) and OpenAI co-founder Ilya Sutskever, the scientists argue that a brief window to monitor AI reasoning may soon close.

Uncovering the Foibles of the KGB and the CIA  (Economist)
Three new books look at the blind spots of the intelligence services.

Toward an Abundance National Security Agenda  (Kathleen H. Hicks and Wendy R. Anderson, National Interest)
The central aims of the abundance agenda must include national security innovation to secure enduring prosperity.

For Trump, Domestic Adversaries Are Not Just Wrong, They Are “Evil”  (Peter Baker, New York Times)
The president’s vilification of political opponents and journalists seeds the ground for threats of prosecution, imprisonment and deportation unlike any made by a modern president.

When the Threat Is Inside the White House  (Tim Weiner, Foreign Policy)
What CIA insiders make of the MAGA moles and toadies now in charge of U.S. national security.

Pentagon Snaps Up Ownership Stake in America’s Only Rare Earths Mine  (Brandon Vigliarolo, The Register)
Rare earth metals are vital to electronics, and most of them are mined in China.

Datacenters Feeling the Heat as Climate Risk Boils Over  (Dan Robinson, The Register)
A warmer world will affect bit barn resilience, warn consultants.

Disinformation 2.0: Deepfakes Hit the Frontlines of Global Influence Ops  (Tom Sefton-Collins, HSToday)
State-backed actors and disinformation-for-hire networks are already using deepfakes in real operations. The tools are public, the threat is active and we are not ready.

MORE PICKS

Hackers Are Finding New Ways to Hide Malware in DNS Records  (Dan Goodin, Ars Technica / Wired)
Newly published research shows that the domain name system—a fundamental part of the web—can be exploited to hide malicious code and prompt injection attacks against chatbots.

Security Experts Are “Losing Their Minds” Over an FAA Proposal  (Isaac Stanley-Becker, The Atlantic)
The Trump administration is considering hiring foreigners as air traffic controllers.

Modernizing Department of Defense Civilian Human Resources  (Brandon Crosby, Nathan Thompson, Lisa M. Harrington, Daniel B. Ginsberg, RAND)
Harnessing AI for transformative change.

ICE Is Getting Unprecedented Access to Medicaid Data  (Leah Feiger, Makena Kelly, Vittoria Elliott, and Matt Giles, Wired)
A new agreement viewed by WIRED gives ICE direct access to a federal database containing sensitive medical data on tens of millions of Americans, with the goal of locating immigrants.

Disaster at FEMA  (David A. Graham, The Atlantic)
It’s getting harder for Americans to find relief under Trump’s vision of government.

The Trump Administration Is Violating the First Rule of Disasters  (Zoë Schlanger, The Atlantic)
Good disaster management is premised on preparation.

DHS Tells Police That Common Protest Activities Are “Violent Tactics”  (Dell Cameron, Wired)
DHS is urging law enforcement to treat even skateboarding and livestreaming as signs of violent intent during a protest, turning everyday behavior into a pretext for police action.

The Conversations Doctors Are Having About Vaccination Now  (Katherine J. Wu, The Atlantic)
Pediatricians’ advice on vaccination hasn’t changed. What happens when the government’s does?

Many Texas Communities Are Dangerously Unprepared for Floods: Lack of Funding Plays a Big Role  (Ivis García, Jaimie Hicks Masterson, and Shannon Van Zandt, The Conversation)
The devastating flash floods that swept through Texas Hill Country in July 2025 highlight a troubling reality: Despite years of warnings and recent improvements in flood planning, Texas communities remain dangerously vulnerable to flood damage.
The tragedy wasn’t caused just by heavy rainfall. It was made worse by a lack of money for early warning systems, by drainage systems and emergency communication networks that haven’t been updated to handle more intense storms or growing populations, and by the many older buildings in harm’s way.
A 2024 state report estimated the cost of flood mitigation and management projects needed statewide at US$54.5 billion. But in Texas, most of that work is left to local governments.

We study disaster planning at Texas A&M University and see several ways the state and Texas communities can improve safety for everyone.

Tinker Tailor Soldier MAGA  (Tom Nichols, The Atlantic)
Tulsi Gabbard and Kash Patel are turning their agencies against their own staff.

AI-powered Translation: How AI Tools Could Shape a New Frontier of IS Propaganda Dissemination  (Alessandro Bolpagni and Eleonora Ristuccia, GNET)
Terrorists have always adopted and employed emerging technologies to organize acts of terror, radicalize, and produce propaganda to mobilize recruits. Among them, Salafi-Jihadi groups and, in particular, the Islamic State (IS), have proved to be consistently ahead of the curve in adopting new, cutting-edge technology, creating a consistent and persistent online presence. Despite theological quibbles, Artificial Intelligence (AI) is among the technologies that IS supporters are employing to produce and disseminate propaganda. By analyzing previous uses of AI within the IS online ecosystem, this Insight will discuss a case in which a pro-IS user adopted an open-source AI tool to spread English-language translations of IS propaganda. The Insight will also explore how these models can facilitate and promote the dissemination of propaganda materials.
IS sympathizers’ and supporters’ interest in AI systems became apparent in 2023, when a pro-IS non-institutional media house released a guide on using these tools while protecting users’ privacy. In the same year, Islamic State Khorasan Province (ISKP) issued AI-powered courses to train propagandists. Then, in March 2024, the first widespread use of AI was detected on the pro-IS server TechHaven, on Rocket.Chat: following the Crocus City Hall attack in Moscow, a video news bulletin with AI-generated characters was published on the platform. Subsequently, on 17 May 2024, ISKP produced a propaganda bulletin generated with AI tools to claim responsibility for the public market attack in the Afghan Bamiyan Province. Later that month, another AI video announcing a bombing in the city of Kandahar was released. Finally, in April 2025, the non-institutional media house Qimam Electronic Foundation (QEF) shared a guide to AI tools and their associated security and privacy risks, also on Rocket.Chat.