• Cambridge Analytica: the data analytics industry is already in full swing

    Revelations about Cambridge Analytica have laid bare the seeming lack of control that we have over our own data. Suddenly, with all the talk of “psychographics” and voter manipulation, the power of data analytics has become a source of real concern. But the risk is that if we look at the case of Cambridge Analytica in isolation, we might preclude a much wider debate about the use and control of our data. By focusing on reports of extreme practices, we might miss the many everyday ways that data analytics now shape our lives.

  • Cambridge Analytica’s abuse of Facebook user data shows “profound impact of technology on democracy”

    Facebook has suspended Cambridge Analytica from its platform for violating its guidelines on the use of user data. The Center for Democracy and Technology (CDT) says that a weekend New York Times article further illuminated the scale of Cambridge Analytica’s efforts and showed how the company used personal information about users to conduct targeted political outreach. “These revelations illustrate the profound impact internet platforms can have on democracy,” CDT says.

  • New U.S. sanctions on Russia for election interference, infrastructure cyberattacks, NotPetya

    The U.S. Treasury has issued its strongest sanctions yet against Russia in response to what it called “ongoing nefarious attacks.” The move targets five entities and nineteen individuals. Among the institutions targeted in the new sanctions for election meddling were Russia’s top intelligence services, the Federal Security Service (FSB) and the Main Intelligence Directorate (GRU), the two organizations whose hackers, disinformation specialists, and outside contractors such as the Internet Research Agency (IRA) troll farm were behind — and are still engaged in — a broad and sustained campaign to undermine U.S. democracy.

  • To stop fake news, internet platforms should choose quality over quantity: Study

    “Fake news” has made headlines and dominated social media chatter since the 2016 presidential election. It appears to be everywhere, and researchers are still determining the scale of the problem. A new study examines fake news and its prevalence and impact across Google, Facebook, and Twitter. The authors offer recommendations for stemming the flow and influence of fake news, and in particular call for more interdisciplinary research—including more collaboration between internet platforms and academia — “to reduce the spread of fake news and to address the underlying pathologies it has revealed.”

  • Study: On Twitter, false news travels faster than true stories

    A new study by three MIT scholars has found that false news spreads more rapidly on the social network Twitter than real news does — and by a substantial margin. “We found that falsehood diffuses significantly farther, faster, deeper, and more broadly than the truth, in all categories of information, and in many cases by an order of magnitude,” says one researcher. “These findings shed new light on fundamental aspects of our online communication ecosystem,” says another researcher, adding that the researchers were “somewhere between surprised and stunned” at the different trajectories of true and false news on Twitter.
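    The study’s measures of diffusion can be made concrete. The sketch below, a simplified illustration and not the researchers’ own code, computes two of them for a retweet cascade: depth (the longest retweet chain from the original tweet) and breadth (the widest single level of the cascade). The data model — a list of (parent, child) retweet pairs — is an assumption for illustration.

    ```python
    from collections import defaultdict

    def cascade_metrics(edges, root):
        """Compute depth (longest retweet chain) and breadth (widest level)
        of a retweet cascade via breadth-first traversal.
        `edges` is a list of (parent, child) retweet pairs; `root` is the
        original tweet. Illustrative only, not the study's methodology."""
        children = defaultdict(list)
        for parent, child in edges:
            children[parent].append(child)
        depth, breadth = 0, 1
        level = [root]
        while level:
            breadth = max(breadth, len(level))
            # Gather the next "hop" of retweets.
            nxt = [c for node in level for c in children[node]]
            if nxt:
                depth += 1
            level = nxt
        return depth, breadth

    # Toy cascade: tweet A is retweeted by B and C; C is retweeted by D and E.
    edges = [("A", "B"), ("A", "C"), ("C", "D"), ("C", "E")]
    print(cascade_metrics(edges, "A"))  # → (2, 2)
    ```

    On these metrics, the study found false-news cascades reached greater depth and breadth than true ones.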

  • Large-scale scientific investigation required to combat fake news: Researcher

    The indictment of 13 Russians in the operation of a troll farm that spread false information related to the 2016 U.S. presidential election has renewed the spotlight on the power of fake news to influence public opinion. Researchers call for a coordinated investigation into the underlying social, psychological and technological forces behind fake news. This is necessary to counteract the phenomenon’s negative influence on society, the researchers say.

  • Extremists exploit gun control debate to promote hatred of Jews

    White supremacists are attempting to exploit the tragic mass shooting in Parkland, Florida, and the ensuing debate over gun control to push an anti-Semitic agenda. Many of these white supremacists are publicly framing the battle over gun control as a struggle between beleaguered whites who want to preserve their traditions in the face of a Jewish onslaught. The ADL says that white supremacists’ anti-Semitic attacks intensified in the wake of NRA head Wayne LaPierre’s 22 February speech to CPAC. LaPierre, perhaps unknowingly, used terms which are buzzwords white supremacists associate with Jews, such as “European-style socialists.” LaPierre said, “A tidal wave of new European-style socialists [has seized] control of the Democratic party.” The only people LaPierre mentioned as examples of people using “social engineering” to try to take away the guns and freedoms of Americans were two Jewish businessmen, Michael Bloomberg and George Soros.

  • Atomwaffen, extremist group whose members have been charged in five murders, loses some of its platforms

    At least four technology companies have taken steps to bar Atomwaffen Division, a violent neo-Nazi organization, from using their online services and platforms to spread its message or fund its operations. The action comes after ProPublica reports detailing the organization’s terrorist ambitions and revealing that the California man charged with murdering Blaze Bernstein, a 19-year-old college student found buried in an Orange County park earlier this year, was an Atomwaffen member.

  • Russia used social media extensively to influence U.S. energy markets: Congressional panel

    The U.S. House Science, Space, and Technology Committee last week released a staff report uncovering Russia’s extensive efforts to influence U.S. energy markets through divisive and inflammatory posts on social media platforms. The report details Russia’s motives in interfering with U.S. energy markets and influencing domestic energy policy and its manipulation of Americans via social media propaganda. The report includes examples of Russian-propagated social media posts.

  • New challenge for first responders: Fake News

    First responders must find ways to address a new challenge: Not only do they have to deal with floods, storms, fires, earthquakes, active shooter events, and other natural and manmade crises – now they also have to find ways to deal with fake news. Social media may disseminate valuable and helpful information during disasters and extreme events – but it may also be used to spread fake news: disinformation and misinformation about the scope, nature, sources, and location of a disaster or extreme incident. Such misinformation may confuse not only victims and potential victims, but also the first responders who rush to their rescue.

  • Deep Fakes: A looming crisis for national security, democracy and privacy?

    Events in the last few years, such as Russia’s broad disinformation campaign to undermine Western democracies, including the American democratic system, have offered a compelling demonstration of truth decay: how false claims — even preposterous ones — can be disseminated with unprecedented effectiveness today thanks to a combination of the ubiquity and virality of social media, cognitive biases, filter bubbles, and group polarization. Robert Chesney and Danielle Citron write in Lawfare that the resulting harms are significant for individuals, businesses, and democracy – but that the problem may soon take a significant turn for the worse thanks to deep fakes. They urge us to get used to hearing that phrase. “It refers to digital manipulation of sound, images, or video to impersonate someone or make it appear that a person did something—and to do so in a manner that is increasingly realistic, to the point that the unaided observer cannot detect the fake. Think of it as a destructive variation of the Turing test: imitation designed to mislead and deceive rather than to emulate and iterate.”

  • Corporations can benefit from altruism during a crisis

    Altruism – and social media – can help corporations cultivate trust with consumers on mobile devices during and after natural disasters, such as hurricanes. “Companies that engage in corporate social responsibility efforts during and after a disaster can build strong relationships with consumers,” says one researcher. “This is particularly true if companies are communicating their efforts through social media aimed at mobile device users – but only if their efforts appear altruistic.”

  • Fake news “vaccine”: online game may “inoculate” by simulating propaganda tactics

    A new experiment, just launched online, aims to help “inoculate” against disinformation by providing a small dose of perspective from a “fake news tycoon.” The game encourages players to stoke anger, mistrust, and fear in the public by manipulating digital news and social media within the simulation. Players build audiences for their fake news sites by publishing polarizing falsehoods, deploying Twitter bots, photoshopping evidence, and inciting conspiracy theories in the wake of public tragedy – all while maintaining a “credibility score” to remain as persuasive as possible. The psychological theory behind the research is called “inoculation”: “A biological vaccine administers a small dose of the disease to build immunity. Similarly, inoculation theory suggests that exposure to a weak or demystified version of an argument makes it easier to refute when confronted with more persuasive claims,” says a researcher.

  • Social media is helping Putin kill our democracy

    There are few more important issues confronting the West today than what to do about social media companies, which thanks to their ubiquity possess vast riches and daunting influence over our democracies. The Russians have been spreading lies for decades. Active Measures, including fake reports, forged documents, and dastardly conspiracies invented out of thin air, were created by the KGB to smear Western governments. Social media made Moscow’s clandestine work much easier and more profitable. Although the lies currently emanating from the Kremlin resemble Cold War Active Measures in overall form and content, they are now disseminated so quickly, and through so many fronts, trolls, and bots, that Western governments are severely challenged to even keep up with these weaponized lies, much less push back. For this, we have the Internet to thank. While none can deny the countless benefits of the online age, this is one of its most pernicious side effects. It’s time the West seriously addressed the problem, and quickly, since this Kremlin spy game isn’t going away.

  • Using AI, machine learning to understand extent of online hate

    The Anti-Defamation League’s (ADL) Center for Technology and Society (CTS) announced preliminary results from an innovative project that uses artificial intelligence, machine learning, and social science to study what is and what isn’t hate speech online. The project’s goal is to help the tech industry better understand the growing amount of hate online. CTS has collaborated with the University of California at Berkeley’s D-Lab since April 2017 to develop the Online Hate Index. ADL and the D-Lab have created an algorithm that has begun to learn the difference between hate speech and non-hate speech. The project has completed its first phase and its early findings are described in a report released today. In a very promising finding, ADL and the D-Lab found the learning model identified hate speech reliably between 78 percent and 85 percent of the time.
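    To give a sense of what such a text classifier does, here is a minimal bag-of-words Naive Bayes sketch. The Online Hate Index’s actual model, features, and training data are not described in this summary, so this is a generic illustration of supervised text classification, not ADL’s or the D-Lab’s algorithm; the training phrases are invented placeholders.

    ```python
    import math
    from collections import Counter

    class NaiveBayesText:
        """Minimal multinomial Naive Bayes over bag-of-words features.
        Illustrative sketch of supervised text classification; not the
        Online Hate Index model."""

        def fit(self, texts, labels):
            self.class_counts = Counter(labels)
            self.word_counts = {c: Counter() for c in self.class_counts}
            for text, label in zip(texts, labels):
                self.word_counts[label].update(text.lower().split())
            self.vocab = {w for counts in self.word_counts.values() for w in counts}
            return self

        def predict(self, text):
            total_docs = sum(self.class_counts.values())
            scores = {}
            for c, n_docs in self.class_counts.items():
                score = math.log(n_docs / total_docs)  # class prior
                total_words = sum(self.word_counts[c].values())
                for w in text.lower().split():
                    # Laplace smoothing so unseen words don't zero out a class.
                    score += math.log(
                        (self.word_counts[c][w] + 1) / (total_words + len(self.vocab))
                    )
                scores[c] = score
            return max(scores, key=scores.get)

    # Toy labeled data (placeholder phrases, not real annotated speech).
    texts = ["you people are vermin", "go back where you came from",
             "lovely weather today", "see you at the game tonight"]
    labels = ["hate", "hate", "not_hate", "not_hate"]
    clf = NaiveBayesText().fit(texts, labels)
    print(clf.predict("what a lovely game"))  # → not_hate
    ```

    A production system like the one the report describes would be trained on large volumes of human-annotated comments and evaluated against annotator judgments, which is what the 78–85 percent reliability figure refers to.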