  • Spotting Russian bots trying to influence politics

    A team of researchers has isolated the characteristics of bots on Twitter through an examination of bot activity related to Russian political discussions. The team’s findings provide new insights into how Russian actors use bots (automated social media accounts) and trolls (accounts that aim to provoke or disrupt) to influence online exchanges. “There is a great deal of interest in understanding how regimes and political actors use bots in order to influence politics,” explains one researcher. “Russia has been at the forefront of trying to shape the online conversation using tools like bots and trolls, so a first step to understanding what Russian bots are doing is to be able to identify them.”
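
    Bot-detection studies of this kind typically train a classifier on account-level behavioral features. The sketch below is a minimal illustration of that general approach, not the team’s actual pipeline; the features, labels, and numbers are invented for the example.

    ```python
    # Illustrative account-level bot classification. The feature values and
    # labels below are hypothetical; a real study would use thousands of
    # hand-labeled accounts and many more signals.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Hypothetical per-account features:
    # [tweets per day, follower/friend ratio, share of retweets, account age in days]
    X_train = np.array([
        [450.0, 0.01, 0.98, 30],    # bot-like: high volume, few followers, mostly retweets
        [520.0, 0.02, 0.95, 12],
        [6.0,   1.30, 0.20, 2400],  # human-like: low volume, older account
        [3.5,   0.90, 0.35, 1800],
    ])
    y_train = np.array([1, 1, 0, 0])  # 1 = bot, 0 = human

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)

    new_account = np.array([[300.0, 0.05, 0.90, 45]])
    print("Estimated P(bot):", clf.predict_proba(new_account)[0, 1])
    ```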

  • Social media trends can predict vaccine scares tipping points

    Analyzing trends on Twitter and Google can help predict vaccine scares that can lead to disease outbreaks, according to a new study. Researchers examined Google searches and geocoded tweets with the help of artificial intelligence and a mathematical model. The resulting data enabled them to analyze public perceptions of the value of getting vaccinated and to determine when a population was getting close to a tipping point.
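
    One widely used way to operationalize “getting close to a tipping point” is to watch a sentiment time series for critical slowing down, i.e., rising variance and rising lag-1 autocorrelation. The sketch below illustrates that general idea on synthetic data; it is not the study’s actual model, and the window sizes and data are invented.

    ```python
    # Early-warning indicators on a synthetic daily vaccine-sentiment series.
    # Rising rolling variance and lag-1 autocorrelation are treated as signs
    # that the system is approaching a tipping point. All numbers are made up.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    # Hypothetical daily net sentiment (pro- minus anti-vaccine), drifting downward
    sentiment = pd.Series(np.cumsum(rng.normal(-0.02, 0.5, 365)))

    window = 60
    rolling_var = sentiment.rolling(window).var()
    rolling_ac1 = sentiment.rolling(window).apply(lambda w: pd.Series(w).autocorr(lag=1))

    # Flag days where both indicators have risen over the past month
    warning = (rolling_var.diff(30) > 0) & (rolling_ac1.diff(30) > 0)
    print("Days flagged as approaching a tipping point:", int(warning.sum()))
    ```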

  • Why the president’s anti-Muslim tweets could increase tensions

    By Michael Pasek and Jonathan Cook

    Last week, President Trump retweeted to his nearly 44 million followers a series of videos purporting to show Muslims assaulting people and destroying Christian statues. These videos, originally shared by an extremist anti-Muslim group in the U.K., were shown to be inaccurate and misleading. Our research may shed light on why President Trump shared anti-Muslim videos with his followers. As the White House press secretary said, his decision was a direct response to a perceived threat posed by Muslims. However, religious threat is not a one-way street. Attacking Muslims is not likely to stop religious conflict; instead, it is likely to increase religious tension by fostering a spiraling tit-for-tat of religious threat and prejudice that grows in severity over time. This type of cyclical process has long been documented by scholars. If people who feel discriminated against because of their religion retaliate by discriminating against other religions, religious intolerance will only snowball. If President Trump really wants to stop religious violence, social psychology suggests he should refrain from stoking religious threat himself.

  • Russian-operated bots posted millions of social media posts, fake stories during Brexit referendum

    More than 156,000 Twitter accounts operated by Russian government disinformation specialists posted nearly 45,000 messages in support of the “Leave” campaign, urging British voters to vote for Brexit – that is, for Britain to leave the European Union. Researchers compared 28.6 million Russian tweets in support of Brexit to ~181.6 million Russian tweets in support of the Trump campaign, and found close similarity in tone and tactics between the Russian government’s U.K. and U.S. efforts. In both cases, the Russian accounts posted divisive, polarizing messages and fake stories aiming to raise fears about Muslims and immigrants. The goal was to sow discord, intensify rancor and animosity along racial, ethnic, and religious lines, and deepen political polarization. The aim was not only to create a public climate more receptive to the populist, protectionist, nationalist, and anti-Muslim thrust of both the Brexit and Trump campaigns, but also to deepen societal and cultural fault lines and fractures in the United Kingdom and the United States, thus weakening both societies from within.

  • Anatomy of a fake news scandal

    By Amanda Robb

    On 1 December 2016, Alex Jones, the InfoWars host, conspiracy-theory peddler, and fervent Trump booster, was reporting that Hillary Clinton was sexually abusing children in satanic rituals in the basement of a Washington, D.C., pizza restaurant. How was this fake story fabricated and disseminated? “We found ordinary people, online activists, bots, foreign agents and domestic political operatives,” Reveal’s researchers say. “Many of them were associates of the Trump campaign. Others had ties with Russia. Working together – though often unwittingly – they flourished in a new ‘post-truth’ information ecosystem, a space where false claims are defended as absolute facts. What’s different about Pizzagate, says Samuel Woolley, a leading expert in computational propaganda, is it was ‘retweeted and picked up by some of the most powerful faces of American politics’.”

  • During crisis, exposure to social media’s conflicting information is linked to stress

    Exposure to high rates of conflicting information during an emergency is linked to increased levels of stress, and those who rely on text messages or social media reports from unofficial sources are more frequently exposed to rumors and experience greater distress, according to new research.

  • App-based citizen science experiment to help predict future pandemics

    There are flu outbreaks every year, but in the last 100 years there have been four pandemics of a particularly deadly flu, including the Spanish influenza outbreak of 1918, which killed up to 100 million people worldwide. Nearly a century later, a catastrophic flu pandemic still tops the U.K. government’s Risk Register of threats to the United Kingdom. A new app gives U.K. residents the chance to take part in an ambitious science experiment that could save lives.
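
    Pandemic-prediction experiments of this kind feed contact and movement data into epidemic models. As a point of reference, the sketch below runs a bare-bones SIR (susceptible-infected-recovered) simulation, the simplest member of that model family; the parameters are illustrative, not values from the actual experiment.

    ```python
    # Minimal SIR epidemic simulation. beta (transmission) and gamma (recovery)
    # are hypothetical; in a real study they would be calibrated from data such
    # as the contact patterns the app collects.
    N = 66_000_000            # rough U.K. population
    beta, gamma = 0.30, 0.10  # illustrative daily rates (R0 = beta/gamma = 3)
    S, I, R = N - 1.0, 1.0, 0.0

    peak_I, peak_day = 0.0, 0
    for day in range(365):
        new_infections = beta * S * I / N
        new_recoveries = gamma * I
        S -= new_infections
        I += new_infections - new_recoveries
        R += new_recoveries
        if I > peak_I:
            peak_I, peak_day = I, day

    print(f"Peak of ~{peak_I:,.0f} simultaneous infections on day {peak_day}")
    ```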

  • BullyBlocker app tackles cyberbullying

    Researchers say that more than half of adolescents have been bullied online. Faculty and students at ASU’s New College of Interdisciplinary Arts and Sciences last month announced the public availability of BullyBlocker, a smartphone application that allows parents and victims of cyberbullying to monitor, predict and hopefully prevent incidents of online bullying.
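
    To make the “monitor and predict” idea concrete, the sketch below computes a simple weighted risk score from monitored signals. The factors, weights, and threshold are invented for illustration and are not BullyBlocker’s actual model.

    ```python
    # Hypothetical weighted "bullying risk" score of the sort a monitoring app
    # might compute. All signals, weights, and the alert threshold are invented.
    signals = {
        "offensive_keyword_rate": 0.6,   # share of incoming posts with abusive terms
        "sudden_contact_spike": 1.0,     # burst of messages from a single sender
        "negative_sentiment_trend": 0.4, # sentiment of posts mentioning the teen
    }
    weights = {
        "offensive_keyword_rate": 0.5,
        "sudden_contact_spike": 0.3,
        "negative_sentiment_trend": 0.2,
    }

    risk = sum(weights[k] * signals[k] for k in signals)
    if risk > 0.5:  # illustrative alert threshold
        print(f"Alert parent: estimated bullying risk {risk:.2f}")
    ```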

  • DOD wants to be able to detect the online presence of social bots

    Russian government operatives used social bots in the run-up to the 2016 presidential election to sow discord and dissension, discredit political institutions, and send targeted messages to voters to help Donald Trump win. DARPA is funding research to detect the online presence of social bots.

  • Reddit examined for “coordinated” Russian effort to distribute false news

    A spokesperson for Senator Mark Warner (D-Virginia), the ranking Democrat on the Senate intelligence committee, said that Reddit could join Facebook and Twitter as a target for federal investigators exploring the Russian government’s campaign to help Donald Trump win the 2016 presidential election. Oxford University experts examining patterns of news dissemination on Reddit said they found “coordinated information campaigns” and “patterns on the site which suggested a deliberate effort to distribute false news.”
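
    One simple signature of coordination in such data is many distinct accounts sharing the same link within a narrow time window. The toy sketch below flags that pattern; the records and thresholds are invented, and the Oxford team’s actual methods were more sophisticated.

    ```python
    # Toy detector for tightly clustered link-sharing across distinct accounts.
    # Posts, account names, and thresholds are all hypothetical.
    from collections import defaultdict

    posts = [  # (account, url, timestamp in minutes)
        ("acct1", "http://fake.example/story", 0),
        ("acct2", "http://fake.example/story", 3),
        ("acct3", "http://fake.example/story", 5),
        ("acct4", "http://real.example/news", 0),
        ("acct5", "http://real.example/news", 600),
    ]

    shares = defaultdict(list)
    for account, url, t in posts:
        shares[url].append((t, account))

    for url, events in shares.items():
        events.sort()
        accounts = {a for _, a in events}
        span = events[-1][0] - events[0][0]
        if len(accounts) >= 3 and span <= 10:  # many accounts, tight window
            print(f"Possible coordination: {url} shared by {len(accounts)} accounts in {span} min")
    ```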

  • Anwar al-Awlaki’s sermons, lectures still accessible on YouTube

    Anwar al-Awlaki, the U.S.-born leader of external operations for al-Qaeda in the Arabian Peninsula (AQAP), was targeted and killed by a U.S. drone strike on 30 September 2011. Yet, six years later, Awlaki continues to radicalize and inspire Westerners to terror, due to the ongoing presence and availability of his lectures online, including on YouTube. As of 30 August 2017, a search for Anwar al-Awlaki on YouTube yielded more than 70,000 results, including his most incendiary lectures.

  • Can taking down websites really stop terrorists and hate groups?

    By Thomas Holt, Joshua D. Freilich, and Steven Chermak

    Racists, terrorists, and many other extremists have used the internet for decades, adapting as technology evolved and shifting from text-only discussion forums to elaborate and interactive websites, custom-built secure messaging systems, and even entire social media platforms. Recent efforts to deny these groups online platforms will not drive hate groups or hate speech off the web. In fact, some scholars theorize that attempts to shut down hate speech online may cause a backlash, worsening the problem and making hate groups more attractive to marginalized and stigmatized people, groups, and movements. The tech industry, law enforcement, and policymakers must develop a more measured and coordinated approach to the removal of extremist and terrorist content online. The only way to really eliminate this kind of online content is to decrease the number of people who support it.

  • Russia’s fake Americans

    By the New York Times editorial board

    It is commonly believed that Russia’s interference in the 2016 presidential campaign consisted mainly of the hacking and leaking of Democratic emails and unfavorable stories circulated abroad about Hillary Clinton. A startling new report by the New York Times, and new research by the cybersecurity firm FireEye, now reveal that the Kremlin’s stealth intrusion into the election was far broader and more complex, involving a cyber-army of bloggers posing as Americans and spreading propaganda and disinformation to an American electorate on Facebook, Twitter, and other platforms. The Russian social media scheming is further evidence of what amounted to an unprecedented foreign invasion of American democracy. If President Trump and Congress are not outraged by this, American voters should ask why.

  • What is the online equivalent of a burning cross?

    By Jessie Daniels

    White supremacy is woven into the tapestry of American culture, online and off. Addressing white supremacy is going to take much more than toppling a handful of Robert E. Lee statues or shutting down a few white nationalist websites, as technology companies have started to do. We must wrestle with what freedom of speech really means, what types of speech go too far, and what kinds of limitations on speech we can endorse. In 2003, the Supreme Court ruled, in Virginia v. Black, that “cross burning done with the intent to intimidate has a long and pernicious history as a signal of impending violence.” In other words, there’s no First Amendment protection because a burning cross is meant to intimidate, not start a dialogue. But what constitutes a burning cross in the digital era?

  • Managing extreme speech on social media

    Extreme speech on social media—foul language, threats, and overtly sexist and racist language—has been in the spotlight. While such language is not new, recent increases in extreme and offensive posts on social media have led politicians, celebrities, and pundits to call for social media platforms to do more to curb such speech, opening new debates about free speech in the digital age. A new study shows that while people tend to dislike extreme speech on social media, there is less support for outright censorship. Instead, people believe sites need to do a better job of promoting healthy discourse online.