• DOD wants to be able to detect the online presence of social bots

    Russian government operatives used social bots in the run-up to the 2016 presidential campaign to sow discord and dissension, discredit political institutions, and send targeted messages to voters to help Donald Trump win the election. DARPA is funding research to detect the online presence of social bots.

  • Reddit examined for “coordinated” Russian effort to distribute false news

    A spokesperson for Senator Mark Warner (D-Virginia), the ranking Democrat on the Senate intelligence committee, said that Reddit could join Facebook and Twitter as a target for federal investigators exploring the Russian government’s campaign to help Donald Trump win the 2016 presidential election. Oxford University experts examining patterns of news dissemination on Reddit said they found “coordinated information campaigns” and “patterns on the site which suggested a deliberate effort to distribute false news.”

  • Anwar al-Awlaki’s sermons, lectures still accessible on YouTube

    Anwar al-Awlaki, the U.S.-born leader of external operations for al-Qaeda in the Arabian Peninsula (AQAP), was targeted and killed by a U.S. drone strike on 30 September 2011. Yet, six years later, Awlaki continues to radicalize and inspire Westerners to terror, due to the ongoing presence and availability of his lectures online, including on YouTube. As of 30 August 2017, a search for Anwar al-Awlaki on YouTube yielded more than 70,000 results, including his most incendiary lectures.

  • Can taking down websites really stop terrorists and hate groups?

    Racists and terrorists, and many other extremists, have used the internet for decades and adapted as technology evolved, shifting from text-only discussion forums to elaborate and interactive websites, custom-built secure messaging systems, and even entire social media platforms. Recent efforts to deny these groups online platforms will not kick hate groups, or hate speech, off the web. In fact, some scholars theorize that attempts to shut down hate speech online may cause a backlash, worsening the problem and making hate groups more attractive to marginalized and stigmatized people, groups, and movements. The tech industry, law enforcement, and policymakers must develop a more measured and coordinated approach to the removal of extremist and terrorist content online. The only way to really eliminate this kind of online content is to decrease the number of people who support it.

  • Russia’s fake Americans

    It is commonly believed that Russia’s interference in the 2016 presidential campaign consisted mainly of the hacking and leaking of Democratic emails and unfavorable stories circulated abroad about Hillary Clinton. A startling new report by the New York Times, and new research by the cybersecurity firm FireEye, now reveal that the Kremlin’s stealth intrusion into the election was far broader and more complex, involving a cyber-army of bloggers posing as Americans and spreading propaganda and disinformation to an American electorate on Facebook, Twitter, and other platforms. The Russian social media scheming is further evidence of what amounted to an unprecedented foreign invasion of American democracy. If President Trump and Congress are not outraged by this, American voters should ask why.

  • What is the online equivalent of a burning cross?

    White supremacy is woven into the tapestry of American culture, online and off. Addressing white supremacy is going to take much more than toppling a handful of Robert E. Lee statues or shutting down a few white nationalist websites, as technology companies have started to do. We must wrestle with what freedom of speech really means, what types of speech go too far, and what kinds of limitations on speech we can endorse. In 2003, the Supreme Court ruled, in Virginia v. Black, that “cross burning done with the intent to intimidate has a long and pernicious history as a signal of impending violence.” In other words, there’s no First Amendment protection because a burning cross is meant to intimidate, not start a dialogue. But what constitutes a burning cross in the digital era?

  • Managing extreme speech on social media

    Extreme speech on social media—foul language, threats, and overtly sexist and racist language—has been in the spotlight. While such language is not new, recent increases in extreme and offensive posts on social media have led politicians, celebrities, and pundits to call for social media platforms to do more to curb such speech, opening new debates about free speech in the digital age. A new study shows that while people tend to dislike extreme speech on social media, there is less support for outright censorship. Instead, people believe sites need to do a better job of promoting healthy discourse online.

  • Google’s assault on privacy: a reminder

    “On its best day, with every ounce of technology the U.S. government could muster, it could not know a fraction as much about any of us as Google does now” (Shelly Palmer, technology analyst).

  • Islamic State’s Twitter network is decimated, but other extremists face much less disruption

    The use of social media by a wide range of violent extremists and terrorists and their supporters has been a matter of concern for law enforcement and politicians for some time. While it appears that Twitter is now severely disrupting pro-IS accounts on its platform, our research found that other jihadists were not subject to the same levels of takedown. The migration of the pro-IS social media community from Twitter to the messaging service Telegram particularly bears watching. Telegram currently has a lower profile than Twitter, with a smaller user base and higher barriers to entry: users are required to provide a mobile phone number to create an account. While this means that fewer people are being exposed to IS’s online content via Telegram, and are thereby in a position to be radicalized by it, it may also mean that Telegram’s pro-IS community is more committed, and therefore poses a greater security risk, than its Twitter variant.

  • How online hate infiltrates social media and politics

    In late February, an anti-Semitic website known as the Daily Stormer — which receives more than 2.8 million monthly visitors — announced, “Jews Destroy Another One of Their Own Graveyards to Blame Trump.” The story was inspired by the recent desecration of a Jewish cemetery in Philadelphia. To whom, and to how many, this example of conspiracy-mongering may travel is, in part, the story of “fake news,” the phenomenon in which biased propaganda is disseminated as if it were objective journalism in an attempt to corrupt public opinion. Looking at the most-visited websites of what were once diminished movements – white supremacists, xenophobic militants, and Holocaust deniers, to name a few – reveals a much-revitalized online culture. When he was asked about the Philadelphia vandalism, President Trump told the Pennsylvania attorney general the incident was “reprehensible.” But he then went on to speculate that it might have been committed “to make others look bad.” That feeds the very doubt that extremist groups thrive on. And the cycle continues.

  • U.S. weapons main source of trade in illegal arms on the Dark Web

    A new report, based on a first-ever study, looks at the size and scope of the illegal arms trade on the dark web. European purchases of weapons on the dark web generate estimated revenues five times higher than U.S. purchases. The dark web’s potential to anonymously arm criminals and terrorists, as well as vulnerable and fixated individuals, is “the most dangerous aspect.”

  • “Stalking software”: Surveillance made simpler

    The controversial Snap Map app enables Snapchat users to track their friends. The app makes it possible for users to monitor their friends’ movements and determine – in real time – exactly where their posts are coming from (down to the address). Many social media users expressed their indignation, referring to the app as “stalking software.” This is the latest in a series of monitoring tools built on social media platforms. A new study assesses the benefits and risks associated with their use.

  • The real costs of cheap surveillance

    Surveillance used to be expensive. Even just a few years ago, tailing a person’s movements around the clock required rotating shifts of personnel devoted full-time to the task. Not anymore, though. Governments can track the movements of massive numbers of people by positioning cameras to read license plates, or by setting up facial recognition systems. Private companies’ tracking of our lives has become easy and cheap, too. Advertising network systems let data brokers track nearly every page you visit on the web and associate it with an individual profile. It is worth thinking about all of this more deeply. U.S. firms – unless they’re managed or regulated in socially beneficial ways – have both the incentive and the opportunity to use information about us in undesirable ways. We need to talk about the government enacting rules to constrain that activity. After all, leaving those decisions to the people who make money selling our data is unlikely to result in our getting the rules we want.

  • “Social media triangulation” to help emergency responders

    During emergency situations like severe weather or terrorist attacks, local officials and first responders have an urgent need for accessible, reliable and real-time data. Researchers are working to address this need by introducing a new method for identifying local social media users and collecting the information they post during emergencies.

  • To curb hate speech on social media, we need to look beyond Facebook, Twitter: Experts

    Germany has passed a controversial new law which requires social media companies to delete hate speech quickly or face heavy fines. The debate over the new law has focused on the most common social media platforms, such as Facebook, Twitter, and YouTube. Experts say that placing Facebook, Twitter, and YouTube at the center of the debate over hate speech on social media is understandable, but it could undermine the monitoring of less widely known social media players. Some of these smaller players may present more problematic hate speech issues than their bigger rivals.