• Can taking down websites really stop terrorists and hate groups?

    Racists, terrorists, and many other extremists have used the internet for decades, adapting as technology evolved: shifting from text-only discussion forums to elaborate and interactive websites, custom-built secure messaging systems, and even entire social media platforms. Recent efforts to deny these groups online platforms will not drive hate groups, or hate speech, off the web. In fact, some scholars theorize that attempts to shut down hate speech online may cause a backlash, worsening the problem and making hate groups more attractive to marginalized and stigmatized people, groups, and movements. The tech industry, law enforcement, and policymakers must develop a more measured and coordinated approach to the removal of extremist and terrorist content online. The only way to truly eliminate this kind of online content is to decrease the number of people who support it.

  • Russia’s fake Americans

    It is commonly believed that Russia’s interference in the 2016 presidential campaign consisted mainly of the hacking and leaking of Democratic emails and unfavorable stories circulated abroad about Hillary Clinton. A startling new report by the New York Times, and new research by the cybersecurity firm FireEye, now reveal that the Kremlin’s stealth intrusion into the election was far broader and more complex, involving a cyber-army of bloggers posing as Americans and spreading propaganda and disinformation to an American electorate on Facebook, Twitter, and other platforms. The Russian social media scheming is further evidence of what amounted to an unprecedented foreign invasion of American democracy. If President Trump and Congress are not outraged by this, American voters should ask why.

  • What is the online equivalent of a burning cross?

    White supremacy is woven into the tapestry of American culture, online and off. Addressing white supremacy is going to take much more than toppling a handful of Robert E. Lee statues or shutting down a few white nationalist websites, as technology companies have started to do. We must wrestle with what freedom of speech really means, what types of speech go too far, and what kinds of limitations on speech we can endorse. In 2003, the Supreme Court ruled, in Virginia v. Black, that “cross burning done with the intent to intimidate has a long and pernicious history as a signal of impending violence.” In other words, there’s no First Amendment protection because a burning cross is meant to intimidate, not to start a dialogue. But what constitutes a burning cross in the digital era?

  • Managing extreme speech on social media

    Extreme speech on social media (foul language, threats, and overtly sexist and racist language) has been in the spotlight. While such language is not new, recent increases in extreme and offensive posts on social media have led politicians, celebrities, and pundits to call for social media platforms to do more to curb such speech, opening new debates about free speech in the digital age. A new study shows that while people tend to dislike extreme speech on social media, there is less support for outright censorship. Instead, people believe sites need to do a better job of promoting healthy discourse online.

  • Google’s assault on privacy: a reminder

    “On its best day, with every ounce of technology the U.S. government could muster, it could not know a fraction as much about any of us as Google does now” (Shelly Palmer, technology analyst).

  • Islamic State’s Twitter network is decimated, but other extremists face much less disruption

    The use of social media by a diversity of violent extremists and terrorists, and their supporters, has been a matter of concern for law enforcement and politicians for some time. While it appears that Twitter is now severely disrupting pro-IS accounts on its platform, our research found that other jihadists were not subject to the same levels of takedown. The migration of the pro-IS social media community from Twitter to the messaging service Telegram particularly bears watching. Telegram currently has a lower profile than Twitter, with a smaller user base and higher barriers to entry: users must provide a mobile phone number to create an account. While this means that fewer people are being exposed to IS’s online content via Telegram, and are thereby in a position to be radicalized by it, it may also mean that Telegram’s pro-IS community is more committed, and therefore poses a greater security risk than its Twitter variant.

  • How online hate infiltrates social media and politics

    In late February, an anti-Semitic website known as the Daily Stormer (which receives more than 2.8 million monthly visitors) announced, “Jews Destroy Another One of Their Own Graveyards to Blame Trump.” The story was inspired by the recent desecration of a Jewish cemetery in Philadelphia. To whom, and to how many, this example of conspiracy-mongering may travel is, in part, the story of “fake news,” the phenomenon in which biased propaganda is disseminated as if it were objective journalism in an attempt to corrupt public opinion. Looking at the most-visited websites of what were once diminished movements (white supremacists, xenophobic militants, and Holocaust deniers, to name a few) reveals a much-revitalized online culture. When he was asked about the Philadelphia vandalism, President Trump told the Pennsylvania attorney general the incident was “reprehensible.” But he then went on to speculate that it might have been committed “to make others look bad.” That feeds the very doubt that extremist groups thrive on. And the cycle continues.

  • U.S. weapons main source of trade in illegal arms on the Dark Web

    A new report, based on the first-ever study of its kind, looks at the size and scope of the illegal arms trade on the dark web. European purchases of weapons on the dark web generate estimated revenues five times higher than U.S. purchases. The dark web’s potential to anonymously arm criminals and terrorists, as well as vulnerable and fixated individuals, is “the most dangerous aspect.”

  • “Stalking software”: Surveillance made simpler

    The controversial Snap Map app enables Snapchat users to track their friends. The app makes it possible for users to monitor their friends’ movements and determine, in real time, exactly where their posts are coming from (down to the address). Many social media users expressed their indignation, referring to the app as “stalking software.” This is the latest in a series of monitoring tools built on social media platforms. A new study assesses the benefits and risks associated with their use.

  • The real costs of cheap surveillance

    Surveillance used to be expensive. Even just a few years ago, tailing a person’s movements around the clock required rotating shifts of personnel devoted full-time to the task. Not anymore. Governments can track the movements of massive numbers of people by positioning cameras to read license plates or by setting up facial recognition systems. Private companies’ tracking of our lives has become easy and cheap, too. Advertising networks let data brokers track nearly every page you visit on the web and associate it with an individual profile. It is worth thinking about all of this more deeply. U.S. firms, unless they are managed or regulated in socially beneficial ways, have both the incentive and the opportunity to use information about us in undesirable ways. We need to talk about whether the government should enact rules constraining that activity. After all, leaving those decisions to the people who make money selling our data is unlikely to result in our getting the rules we want.
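
    How that ad-network tracking works can be sketched in a few lines: every site that embeds the same tracker reports each page load along with the same browser cookie, letting a broker stitch visits across unrelated sites into one profile. A minimal sketch in Python follows; the names and data structure are illustrative assumptions, not any real network’s design.

        # Toy model of third-party tracking: the same cookie ID shows up on
        # every site that embeds the tracker, so visits can be linked.
        from collections import defaultdict
        from datetime import datetime, timezone

        profiles = defaultdict(list)  # cookie_id -> list of (site, timestamp)

        def tracker_hit(cookie_id: str, site: str) -> None:
            """Record a page load on any site that embeds the tracker."""
            profiles[cookie_id].append((site, datetime.now(timezone.utc)))

        # One browser (one cookie) visits three unrelated sites:
        for site in ["news.example", "shop.example", "health.example"]:
            tracker_hit("cookie-abc123", site)

        # The broker now holds a cross-site browsing history for one profile:
        print([site for site, _ in profiles["cookie-abc123"]])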

  • “Social media triangulation” to help emergency responders

    During emergency situations like severe weather or terrorist attacks, local officials and first responders have an urgent need for accessible, reliable and real-time data. Researchers are working to address this need by introducing a new method for identifying local social media users and collecting the information they post during emergencies.
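
    The article does not spell out the algorithm, but the “triangulation” idea, corroborating several independent location signals before treating a user as local, can be sketched. Below is a minimal sketch in Python; the Post fields and the two-signal threshold are illustrative assumptions, not the researchers’ actual method.

        # Cross-check three location signals on a post: a geotag, the
        # profile's free-text location, and place names mentioned in the text.
        from dataclasses import dataclass
        from typing import Callable, Optional

        @dataclass
        class Post:
            text: str
            geotag: Optional[tuple]     # (lat, lon), if the post is geotagged
            profile_location: str       # free-text location from the user profile
            mentioned_places: list      # place names extracted from the text

        def is_probably_local(post: Post, area_names: set,
                              in_area: Callable[[tuple], bool]) -> bool:
            """Require at least two independent location signals to agree."""
            signals = 0
            if post.geotag is not None and in_area(post.geotag):
                signals += 1
            if any(n.lower() in post.profile_location.lower() for n in area_names):
                signals += 1
            if any(place in area_names for place in post.mentioned_places):
                signals += 1
            return signals >= 2

        # Example: flag posts from the affected area during a storm.
        area = {"Houston", "Harris County"}
        in_box = lambda ll: 29.5 <= ll[0] <= 30.1 and -95.8 <= ll[1] <= -95.0
        p = Post("Flooding on my street in Houston", (29.76, -95.37),
                 "Houston, TX", ["Houston"])
        print(is_probably_local(p, area, in_box))  # True: all three signals agree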

  • To curb hate speech on social media, we need to look beyond Facebook, Twitter: Experts

    Germany has passed a controversial new law that requires social media companies to delete hate speech quickly or face heavy fines. The debate over the new law has focused on the most prominent social media platforms: Facebook, Twitter, and YouTube. Experts say that placing Facebook, Twitter, and YouTube at the center of the debate over hate speech on social media is understandable, but it could undermine the monitoring of less widely known social media players. Some of these smaller players may present more problematic hate speech issues than their bigger rivals.

  • Fake news: Studying cyber propaganda and false information campaigns

    Dr. Nitin Agarwal of the University of Arkansas at Little Rock will use a $1.5 million grant from the Office of Naval Research to study the sources of false information on the Internet, how it is spread through social media, and how people and groups strategically use this false information to conduct cyber propaganda campaigns.

  • Can the world ever really keep terrorists off the internet?

    After London’s most recent terror attacks, British Prime Minister Theresa May called on countries to collaborate on internet regulation to prevent terrorist plotting online. May criticized online spaces that allow such ideas to breed, and the companies that host them. Internet companies and other commentators, however, have pushed back against the suggestion that more government regulation is needed, saying that weakening everyone’s encryption poses public dangers of its own. Many have also questioned whether some regulation, like banning encryption, is even possible. As a law professor who studies the impact of the internet on society, I believe the goal of international collaboration is incredibly complicated, given global history.

  • New tool spots fake online profiles

    People who use fake profiles online could be more easily identified, thanks to a new tool developed by computer scientists. Researchers have trained computer models to spot social media users who make up information about themselves, known as catfishes. The system is designed to identify users who are dishonest about their age or gender. Researchers believe it could help ensure the safety of social networks.
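
    The article does not describe the models themselves, but the general shape of such a detector, predicting a demographic attribute from a user’s posts and flagging accounts whose stated attribute disagrees, can be sketched with off-the-shelf tools. Below is a minimal sketch using scikit-learn; the toy data, labels, and decision rule are illustrative assumptions, not the actual system.

        # Train a text classifier to predict an age group from posts, then
        # flag accounts whose claimed group contradicts the prediction.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        # Toy training data: posts by users whose age group is trusted.
        posts = ["homework due tomorrow lol", "my pension statement arrived",
                 "exam week is brutal", "grandkids visiting this weekend"]
        age_groups = ["under30", "over50", "under30", "over50"]

        model = make_pipeline(TfidfVectorizer(), LogisticRegression())
        model.fit(posts, age_groups)

        def flag_if_suspicious(post_text: str, claimed_group: str) -> bool:
            """Flag the account when the prediction contradicts the claim."""
            predicted = model.predict([post_text])[0]
            return predicted != claimed_group

        # A post that reads as "over50" from an account claiming "under30":
        print(flag_if_suspicious("my pension statement arrived", "under30"))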