U.S. Cyber Operations in Ukraine | China Hacks Telecom Firms | AI-Trained ‘Hate Speech Machine’, and more

Kilcher built the language model in question by training an AI on a dataset of posts scraped from 4chan’s infamous /pol/ board. He dubbed the model gpt-4chan, and for anyone who has spent time on the site that inspired it, the model is uncannily good at reproducing 4chan’s brand of toxic rhetoric. When I asked the model its opinion of women’s role in society, it described women as a “different species” and “not human.” When I asked its opinion of Black people, its answers were uniformly racist and featured the n-word. Remarkably, Kilcher allowed this bot to post freely on 4chan for 24 hours, running nine instances of the bot on the site simultaneously. While his bot accounts were up and running, Kilcher claims, they contributed 10% of the posts on /pol/. The high volume of messages posted by Kilcher’s bots attracted the attention of 4chan users, who began to speculate about the identity of this anonymous poster. Some correctly identified it as a bot; others found the bots’ messages to bear the hallmarks of human language; still others thought it was a police operation. At no point were 4chan users informed that they were interacting with an AI.
Kilcher’s experiment illustrates the profound ethical questions raised by the proliferation of this technology. The deployment of large language models poses a variety of potential harms, ranging from automated disinformation to spam, fraud, and astroturfing, and gpt-4chan encapsulates many of these concerns. Kilcher’s bots spammed 4chan and exposed users to hateful, racist, anti-Semitic messages, reinforcing users’ impression of how prevalent such language is on the platform. Trained on /pol/’s toxic language, gpt-4chan reproduced that language with enough originality and authenticity to spark an intense debate on the platform about its identity. And the bot’s references to itself, as well as to its girlfriend, were cited by users as evidence that the poster had to be a human, or perhaps a team of users working together.

Senate Homeland Security Committee Holds Hearing on Racially Motivated Extremist Violence  (AP, PBS)
The Senate Homeland Security Committee held a hearing Thursday on white supremacist violence in the aftermath of the racially motivated massacre in Buffalo, New York. Committee Chair Gary Peters, a Michigan Democrat, opened the hearing by citing data showing that a majority of extremist violence is committed by far-right and white supremacist extremists. Between 2012 and 2021, white supremacists committed the majority of murders carried out by extremists, according to the Anti-Defamation League, Peters said. Peters also spoke about the Buffalo shooter’s connections to white supremacy, noting that in the days before the attack, the shooter posted a screed hundreds of pages long online in which he referred to the Great Replacement Theory – the racist idea that non-white people are working to replace white Americans. “This disgusting belief is at the center of some of the most horrific terrorist attacks that we have seen in recent years,” Peters said, also citing the 2017 “Unite the Right” rally in Charlottesville, Virginia, and the 2018 shooting at the Tree of Life Synagogue in Pittsburgh, in which 11 people were killed. “Once relegated to the fringes of our society, these extreme and abhorrent beliefs are now a constant presence in our nation’s mainstream,” Peters said.

Racially Motivated Violent Extremism Is a Cancer and Connecticut Has It, Too  (Susan Campbell, CT NewsJunkie)
According to the nonprofit Counter Extremism Project, the New England Nationalist Social Club (NSC) began in Massachusetts, and it is metastasizing. Its members are the Klan, but armed with social media accounts, and countering their hatred will take more than police action. It will take all of us speaking up and speaking out when we see even the slightest evidence of hatred creeping up the streets, because we know that hateful words beget violence, and violence begets pain. Sunlight really is powerful medicine. A few years ago, a man tried to burn down a mosque in Joplin, Mo. He was unsuccessful, but he returned a few weeks later, managed to set the roof on fire, and the mosque was lost. You might not expect a mosque to exist in that troubled land, though it might not be surprising to picture a mosque burning there. I am a native. I grew up in a sundown town, where people of anything other than European heritage were encouraged to leave the premises before dark, or … well, no one had to finish that sentence. Only here’s what happened next: People who knew nothing about Islam began turning out, starting with the casserole brigade (church ladies stepping over debris to bring sustenance to the wounded). A student at a local Christian college organized a heavily attended fundraiser that helped the Muslims rebuild.

Misunderstandings of the First Amendment Hobble Content Moderation  (Aileen Nielsen, Brookings)
The contours of acceptable online speech, and the appropriate mechanisms to ensure meaningful online communities, are among the most contentious policy debates in America today. Moderating content that is not per se illegal but that likely creates significant harm has proven particularly divisive. Many on the left insist digital platforms haven’t done enough to combat hate speech, misinformation, and other potentially harmful material, while many on the right argue that platforms are doing far too much—to the point where “Big Tech” is censoring legitimate speech and effectively infringing on Americans’ fundamental rights. As Congress weighs new regulation for digital platforms, and as states like Texas and Florida create social media legislation of their own, the importance and urgency of the issue are only set to grow.
Yet unfortunately the debate over free speech online is also being shaped by fundamentally incorrect understandings of the First Amendment. As the law stands, platforms are private entities with their own speech rights; hosting content is not a traditional government role that makes a private actor subject to First Amendment constraints. Nonetheless, many Americans erroneously believe that the content-moderation decisions of digital platforms violate ordinary people’s constitutionally guaranteed speech rights. With policymakers at all levels of government working to address a diverse set of harms associated with platforms, the electorate’s mistaken beliefs about the First Amendment could add to the political and economic challenges of building better online speech governance.

Chinese Hackers Breach ‘Major’ Telecoms Firms, U.S. Says  (Sean Lyngaas, CNN)
U.S. security agencies warned that they have observed Chinese government-backed hackers targeting American telecommunications firms using publicly disclosed cybersecurity vulnerabilities. By carrying out attacks with known vulnerabilities rather than unique tools, Chinese hackers may be attempting to give their attacks greater deniability or to mask their origin altogether.