• The Russian connection // Russian-operated bots posted millions of social media posts, fake stories during Brexit referendum

    More than 156,000 Twitter accounts operated by Russian government disinformation specialists posted nearly 45,000 messages in support of the “Leave” campaign, urging British voters to back Brexit – that is, for Britain to leave the European Union. Researchers compared 28.6 million Russian tweets in support of Brexit with ~181.6 million Russian tweets in support of the Trump campaign, and found close similarity in tone and tactics between the Russian government’s U.K. and U.S. efforts. In both cases, the Russian accounts posted divisive, polarizing messages and fake stories aimed at raising fears about Muslims and immigrants. The goal was to sow discord; intensify rancor and animosity along racial, ethnic, and religious lines; and deepen political polarization – not only to create a public climate more receptive to the populist, protectionist, nationalist, and anti-Muslim thrust of both the Brexit and Trump campaigns, but also to deepen societal and cultural fault lines in the United Kingdom and the United States, thus weakening both societies from within.

  • Considered opinion // Anatomy of a fake news scandal

    By Amanda Robb

    On 1 December 2016, Alex Jones – the InfoWars host, peddler of conspiracy theories, and fervent Trump booster – was reporting the false claim that Hillary Clinton was sexually abusing children in satanic rituals in the basement of a Washington, D.C., pizza restaurant. How was this fake story fabricated and disseminated? “We found ordinary people, online activists, bots, foreign agents and domestic political operatives,” Reveal’s researchers say. “Many of them were associates of the Trump campaign. Others had ties with Russia. Working together – though often unwittingly – they flourished in a new ‘post-truth’ information ecosystem, a space where false claims are defended as absolute facts. What’s different about Pizzagate, says Samuel Woolley, a leading expert in computational propaganda, is that it was ‘retweeted and picked up by some of the most powerful faces of American politics’.”

  • Disasters & social media // During a crisis, exposure to conflicting information on social media is linked to stress

    Exposure to high rates of conflicting information during an emergency is linked to increased levels of stress, and those who rely on text messages or social media reports from unofficial sources are more frequently exposed to rumors and experience greater distress, according to new research.

  • Pandemics // App-based citizen science experiment to help predict future pandemics

    There are flu outbreaks every year, but over the last 100 years there have been four pandemics of a particularly deadly flu, including the 1918 Spanish Influenza outbreak, which killed up to 100 million people worldwide. Nearly a century later, a catastrophic flu pandemic still tops the U.K. government’s Risk Register of threats to the United Kingdom. A new app gives U.K. residents the chance to take part in an ambitious citizen science experiment that could save lives.

  • Cybersecurity // ASU’s BullyBlocker app tackles cyberbullying

    Researchers say that more than half of adolescents have been bullied online. Faculty and students at ASU’s New College of Interdisciplinary Arts and Sciences last month announced the public availability of BullyBlocker, a smartphone application that allows parents and victims of cyberbullying to monitor, predict and hopefully prevent incidents of online bullying.

  • The Russian connection // DOD wants to be able to detect the online presence of social bots

    Russian government operatives used social bots in the run-up to the 2016 presidential election to sow discord and dissension, discredit political institutions, and send targeted messages to voters to help Donald Trump win. DARPA is funding research to detect the online presence of such social bots.

  • The Russian connection // Reddit examined for “coordinated” Russian effort to distribute false news

    A spokesperson for Senator Mark Warner (D-Virginia), the ranking Democrat on the Senate intelligence committee, said that Reddit could join Facebook and Twitter as a target for federal investigators exploring the Russian government’s campaign to help Donald Trump win the 2016 presidential election. Oxford University experts examining patterns of news dissemination on Reddit said they found “coordinated information campaigns” and “patterns on the site which suggested a deliberate effort to distribute false news.”

  • Terrorism // Anwar al-Awlaki’s sermons, lectures still accessible on YouTube

    Anwar al-Awlaki, the U.S.-born leader of external operations for al-Qaeda in the Arabian Peninsula (AQAP), was targeted and killed by a U.S. drone strike on 30 September 2011. Yet, six years later, Awlaki continues to radicalize and inspire Westerners to terror, due to the ongoing presence and availability of his lectures online, including on YouTube. As of 30 August 2017, a search for Anwar al-Awlaki on YouTube yielded more than 70,000 results, including his most incendiary lectures.

  • Extremists & social media // Can taking down websites really stop terrorists and hate groups?

    By Thomas Holt, Joshua D. Freilich, and Steven Chermak

    Racists, terrorists, and many other extremists have used the internet for decades and adapted as technology evolved, shifting from text-only discussion forums to elaborate and interactive websites, custom-built secure messaging systems, and even entire social media platforms. Recent efforts to deny these groups online platforms will not kick hate groups or hate speech off the web. In fact, some scholars theorize that attempts to shut down hate speech online may cause a backlash, worsening the problem and making hate groups more attractive to marginalized and stigmatized people, groups, and movements. The tech industry, law enforcement, and policymakers must develop a more measured and coordinated approach to the removal of extremist and terrorist content online. The only way to really eliminate this kind of online content is to decrease the number of people who support it.

  • Considered opinion // Russia’s fake Americans

    By the New York Times editorial board

    It is commonly believed that Russia’s interference in the 2016 presidential campaign consisted mainly of the hacking and leaking of Democratic emails and unfavorable stories circulated abroad about Hillary Clinton. A startling new report by the New York Times, and new research by the cybersecurity firm FireEye, now reveal that the Kremlin’s stealth intrusion into the election was far broader and more complex, involving a cyber-army of bloggers posing as Americans and spreading propaganda and disinformation to the American electorate on Facebook, Twitter, and other platforms. The Russian social media scheming is further evidence of what amounted to an unprecedented foreign invasion of American democracy. If President Trump and Congress are not outraged by this, American voters should ask why.

  • Hate speech // What is the online equivalent of a burning cross?

    By Jessie Daniels

    White supremacy is woven into the tapestry of American culture, online and off. Addressing white supremacy is going to take much more than toppling a handful of Robert E. Lee statues or shutting down a few white nationalist websites, as technology companies have started to do. We must wrestle with what freedom of speech really means, what types of speech go too far, and what kinds of limitations on speech we can endorse. In 2003, the Supreme Court ruled, in Virginia v. Black, that “cross burning done with the intent to intimidate has a long and pernicious history as a signal of impending violence.” In other words, there is no First Amendment protection because a burning cross is meant to intimidate, not start a dialogue. But what constitutes a burning cross in the digital era?

  • Hate speech // Managing extreme speech on social media

    Extreme speech on social media – foul language, threats, and overtly sexist and racist language – has been in the spotlight. While such language is not new, recent increases in extreme and offensive posts on social media have led politicians, celebrities, and pundits to call for social media platforms to do more to curb such speech, opening new debates about free speech in the digital age. A new study shows that while people tend to dislike extreme speech on social media, there is less support for outright censorship. Instead, people believe sites need to do a better job of promoting healthy discourse online.

  • Quick takes // By Ben Frankel // Google’s assault on privacy: a reminder

    “On its best day, with every ounce of technology the U.S. government could muster, it could not know a fraction as much about any of us as Google does now” (Shelly Palmer, technology analyst).

  • Terrorists & social media // Islamic State’s Twitter network is decimated, but other extremists face much less disruption

    By Suraj Lakhani and Maura Conway

    The use of social media by a diversity of violent extremists and terrorists and their supporters has been a matter of concern for law enforcement and politicians for some time. While it appears that Twitter is now severely disrupting pro-IS accounts on its platform, our research found that other jihadists were not subject to the same levels of takedown. The migration of the pro-IS social media community from Twitter to the messaging service Telegram particularly bears watching. Telegram currently has a lower profile than Twitter, with a smaller user base and higher barriers to entry: users must provide a mobile phone number to create an account. While this means that fewer people are being exposed to IS’s online content via Telegram, and so fewer are in a position to be radicalized by it, it may also mean that Telegram’s pro-IS community is more committed and therefore poses a greater security risk than its Twitter variant.

  • Hate groups // How online hate infiltrates social media and politics

    By Adam G. Klein

    In late February, an anti-Semitic website known as the Daily Stormer – which receives more than 2.8 million monthly visitors – announced, “Jews Destroy Another One of Their Own Graveyards to Blame Trump.” The story was inspired by the recent desecration of a Jewish cemetery in Philadelphia. To whom, and to how many, this example of conspiracy-mongering may travel is, in part, the story of “fake news,” the phenomenon in which biased propaganda is disseminated as if it were objective journalism in an attempt to corrupt public opinion. Looking at the most-visited websites of what were once diminished movements – white supremacists, xenophobic militants, and Holocaust deniers, to name a few – reveals a much-revitalized online culture. When he was asked about the Philadelphia vandalism, President Trump told the Pennsylvania attorney general the incident was “reprehensible.” But he then went on to speculate that it might have been committed “to make others look bad.” That feeds the very doubt that extremist groups thrive on. And the cycle continues.