OUR PICKS: U.S. Cyber Operations in Ukraine | China Hacks Telecom Firms | AI-Trained ‘Hate Speech Machine’, and more
· U.S. Confirms Military Hackers Have Conducted Cyber Operations in Support of Ukraine
· 15 Ways Social Media Fuels Violent Extremism
· AI Trained on 4Chan Becomes ‘Hate Speech Machine’
· AI Model Trained on 4chan Dupes Message Board
· Senate Homeland Security Committee Holds Hearing on Racially Motivated Extremist Violence
· Racially Motivated Violent Extremism Is a Cancer and Connecticut Has It, Too
· Misunderstandings of the First Amendment Hobble Content Moderation
· Chinese Hackers Breach ‘Major’ Telecoms Firms, U.S. Says
U.S. Confirms Military Hackers Have Conducted Cyber Operations in Support of Ukraine (Sean Lyngaas, CNN)
After American officials revealed that U.S. military hackers have carried out offensive cyber operations in support of Ukraine, Russia warned that cyberattacks against its infrastructure could result in a military confrontation. The exact nature of U.S. operations against Russia remains unclear, but U.S. willingness to carry out such attacks, while refraining from a broader military intervention, suggests that American officials are not particularly concerned about the type of reprisals Russian officials are now warning of. American officials have cautioned that Russia may carry out retaliatory cyberattacks against the United States for its support of Ukraine, but, so far, such attacks have not materialized.
15 Ways Social Media Fuels Violent Extremism (Bridget Johnson, HSToday)
Social media is used not just as a venting space for unpopular or hateful opinions but as a tool for extremists to grow, inspire, share information, and promote violence.
AI Trained on 4Chan Becomes ‘Hate Speech Machine’ (Matthew Gault, Vice)
After 24 hours, the nine bots running on 4chan had posted 15,000 times.
AI Model Trained on 4chan Dupes Message Board (Elias Groll, Brookings)
Over the course of 24 hours, a bot built by the YouTuber and AI researcher Yannic Kilcher ran rampant on 4chan, the notorious online forum that serves as an incubator for a variety of extremist ideologies. To the unsuspecting eye, messages from Kilcher’s bot accounts were exactly what would be expected from 4chan posts: insulting, conspiratorial, misogynistic, anti-Semitic and racist. But to the hundreds of 4chan users who interacted with them, the bot accounts, which posted messages from a large language model built by Kilcher, seemed the same as every other anonymous user contributing to the site’s toxic discourse. 4chan’s users were being hoodwinked by a racist, anti-Semitic, degenerate AI. (Cont.)