OUR PICKS: Insiders’ Threat to US Nuclear Arsenal | The Far Right Is Splintering | Volt Typhoon’s Hacking Tactics, and more

Published 30 May 2023

·  The National Counterterrorism Center Must Expand to Better Fight Domestic Terrorists
The threat has changed, and the NCTC should be reconfigured to fight domestic terrorists as well

·  Agencies Warn of State-Sponsored Volt Typhoon’s Hacking Tactics
Malware infiltrates private networks by blending in with normal Windows system activities to avoid detection

·  The Far Right Is Splintering
In his trial, the Oath Keepers leader Stewart Rhodes turned against other extremists

·  U.S. Tech Mogul Bankrolls Pro-Russia, Pro-China News Network
Neville Singham’s vast dark money network has fueled BreakThrough News and a raft of other online outlets pushing Moscow and Beijing’s favorite narratives

·  Unmonitored Networks Put US Nuclear Arsenal at Risk, GAO Finds
Preventing insider threats to the nation’s nuclear arsenal

·  Another Warning from Industry Leaders on Dangers Posed by AI
Leading AI figures say that AI poses dangers to the world similar to those posed by nuclear weapons

The National Counterterrorism Center Must Expand to Better Fight Domestic Terrorists  (Bruce Hoffman and Jacob Ware, Defense One)
Broadening the NCTC’s mission to include homegrown terrorism will require bold executive leadership and congressional action.

Agencies Warn of State-Sponsored Volt Typhoon’s Hacking Tactics  (Alexandra Kelley, Nextgov)
In collaboration with international and private sector partners, CISA released a new advisory warning network defenders of PRC-linked Volt Typhoon’s infiltration tactics.

The Far Right Is Splintering  (Juliette Kayyem, The Atlantic)
At his sentencing, Rhodes was unrepentant. In a 20-minute speech before the court, Rhodes also unwittingly revealed deepening fissures in the far-right movement that, two years ago, resorted to violence to keep Donald Trump in the White House. The defendant used some of his time to distance himself from the Proud Boys, another extremist organization, with whom he had met in the days before the insurrection. “Unlike other groups like the Proud Boys, who seek conflict and seek to street-fight,” Rhodes explained, “we deter.” I’ve been misunderstood, he was telling the court; the Proud Boys are the ones you want.
Rhodes, it seems, is not entirely in sync with his radical brethren. A unified extremist front is a threat to our democracy—but the story is different when extremists start pointing fingers at one another in the criminal-justice system.
Violent, noxious ideologies do not just vanish with a tough sentence. Success against them can’t be measured by whether bad people see the light, but whether they are able to expand their ranks. Raising money and organizing large-scale collective actions becomes more difficult if seemingly like-minded groups are at war with each other. Far-right groups make noise about left-wing conspiracies, but they are under attack from within their own cause.
Rhodes will have 18 years to contemplate the violence and stew in his resentment of the Proud Boys. In the meantime, let the infighting continue.

U.S. Tech Mogul Bankrolls Pro-Russia, Pro-China News Network  (William Bredderman, Daily Beast)

Neville “Roy” Singham has been described as “a Marxist with a massive software company.” The tech billionaire has been funding several pro-Russia, pro-China, and anti-U.S. social media channels.

Unmonitored Networks Put US Nuclear Arsenal at Risk, GAO Finds  (Edward Graham, Nextgov)
A Government Accountability Office report found that the Energy Department cannot effectively monitor potential insider threats to U.S. nuclear security because department staff “have not identified the total number of DOE’s stand-alone classified networks.”

Another Warning from Industry Leaders on Dangers Posed by AI  (Sara Goudarzi, Bulletin of the Atomic Scientists)
In a one-sentence statement, industry professionals issued yet another warning regarding the dangers posed by artificial intelligence. The statement, which read “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” was signed by OpenAI CEO Sam Altman; Demis Hassabis, chief executive of Google DeepMind; Geoffrey Hinton, Emeritus Professor of Computer Science, University of Toronto (also known as a godfather of AI); and more than 350 researchers, executives, and other professionals.
In March, more than 1,000 researchers and tech leaders signed an open letter urging AI labs to pause the training of systems more powerful than GPT-4 for six months, citing “profound risks to society and humanity.”
Since the release of OpenAI’s ChatGPT last November, there’s been growing concern about large language and image models. The concerns range from obvious effects—such as spreading misinformation and disinformation, amplifying biases and inequities, copyright issues, plagiarism, and influencing politics—to more hypothetical, science-fiction-like possibilities, such as the systems developing human-like capabilities and using them for malign ends.
The latter concerns are often floated by those creating the technology, which raises the question: Why release, and continue to improve, a tech that is cause for such grave fears? Artificial intelligence isn’t a natural disaster, like a tsunami, over which humans have little control. If AI is causing existential worry, then maybe it’s time to put the brakes on.
Or perhaps the voices that are the loudest in this arena are not the ones describing the technology’s current abilities with the most clarity and transparency.