• EU develops legislation to tackle online terrorism-promoting content

    The EU is planning legal measures to control online content that supports and promotes terrorism. The EU Security Commissioner, Julian King, said the voluntary agreements currently in place had not given European citizens enough protection against exposure to terrorism-promoting content.

  • We researched Russian trolls and figured out exactly how they neutralize certain news

    Russian “troll factories” have been making headlines for some time. First, as the Kremlin’s digital guardians in the Russian blogosphere. Then, as subversive cyber-squads meddling with U.S. elections. A few statistical analyses of large samples of trolling posts also show that institutionalized political trolling and the use of bots have become a consolidated practice that significantly affects the online public sphere. What has been shrouded in mystery so far, however, is how institutionalized, industrialized political trolling works on a daily basis. We have also lacked a proper understanding of how it affects the state’s relations with society generally, and security processes in particular.

  • Curbing fake news

    Falsified information, in the form of provocative and doctored content, can travel over social media platforms unmonitored. Well-crafted content is potent enough for opinion engineering. The problem is more worrisome for mature economies, whose citizens are likely to consume more fake news than genuine information by 2022, according to Gartner research. As interest in fake news and other illicit content grows, its implications for society and the individual are grim. In the quest for an immediate solution, social media giants are experimenting with Artificial Intelligence (AI), which for decades has been used to curb spam emails.
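    The spam-email analogy can be made concrete with a minimal sketch: the same Naive Bayes text classification long used in spam filters, applied to labeling short posts. This is an illustration only — the training examples, labels, and word lists below are invented, and real platform systems are far more sophisticated.

    ```python
    # Minimal Naive Bayes sketch, in the spirit of classic email spam
    # filters, here labeling short posts as "fake" or "real".
    # All training samples and labels are made up for illustration.
    import math
    from collections import Counter, defaultdict

    def train(samples):
        """samples: list of (text, label). Returns per-label word counts."""
        counts = defaultdict(Counter)
        totals = Counter()
        for text, label in samples:
            counts[label].update(text.lower().split())
            totals[label] += 1
        return counts, totals

    def classify(text, counts, totals):
        """Pick the label with the highest log-probability score."""
        vocab = {w for c in counts.values() for w in c}
        best_label, best_score = None, float("-inf")
        for label in counts:
            # log prior + log likelihoods with add-one smoothing
            score = math.log(totals[label] / sum(totals.values()))
            denom = sum(counts[label].values()) + len(vocab)
            for w in text.lower().split():
                score += math.log((counts[label][w] + 1) / denom)
            if score > best_score:
                best_label, best_score = label, score
        return best_label

    training = [
        ("shocking secret cure doctors hate", "fake"),
        ("you will not believe this miracle trick", "fake"),
        ("council approves new transit budget", "real"),
        ("study finds modest gains in test scores", "real"),
    ]
    counts, totals = train(training)
    print(classify("miracle cure trick doctors hate", counts, totals))
    ```

    The same statistical machinery that flags "miracle cure" emails can flag sensationalist post wording, which is why spam filtering is the natural starting point for AI-based content moderation.
    
    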

  • Russia’s influence campaign can “wreak havoc in our society and in our elections”

    On Wednesday, 1 August, the U.S. Senate Intelligence Committee convened an open hearing on foreign influence operations and their use of social media platforms. “Twenty-one months after the 2016 election – and only three months before the 2018 elections – Russian-backed operatives continue to infiltrate and manipulate social media to hijack the national conversation and set Americans against each other. They were doing it in 2016. They are still doing it today,” Senator Mark Warner (D-Virginia), vice-chairman of the committee, said. “These active measures have two things in common: They are effective. And they are cheap. For just pennies on the dollar, they can wreak havoc in our society and in our elections. I’m concerned that even after 18 months of study, we are still only scratching the surface when it comes to Russia’s information warfare.”

  • Scorecard on hate crimes in 57 OSCE nations released

    Against a backdrop of rising reports of hate crimes, Human Rights First and the Anti-Defamation League (ADL) on Wednesday released their annual analysis of hate crime reporting by the 57 participating states of the Organization for Security and Cooperation in Europe (OSCE), a security- and human rights-focused intergovernmental organization comprising governments from North America, Europe, and Central Asia. The report notes that many OSCE governments remain unwilling or unable to meet even basic standards concerning the reporting of hate crimes.

  • Facebook IDs new fake influence campaign

    As the U.S. midterm election nears, the Kremlin is intensifying its disinformation and hacking campaign to help bring an outcome in the November election which would be favorable to Russia – as it did in the 2016 presidential election. Facebook on Tuesday announced it has identified a new ongoing political influence campaign and has removed more than thirty fake accounts and pages.

  • How the Russian government used disinformation and cyber warfare in 2016 election – an ethical hacker explains

    The Soviet Union and now Russia under Vladimir Putin have waged a political power struggle against the West for nearly a century. Spreading false and distorted information – dezinformatsiya in Russian, the origin of the word “disinformation” – is an age-old strategy for coordinated and sustained influence campaigns that have interrupted the possibility of level-headed political discourse. Emerging reports that Russian hackers targeted a Democratic senator’s 2018 reelection campaign suggest that what happened in the lead-up to the 2016 presidential election may be set to recur.

  • Social media manipulation rising globally: Report

    The manipulation of public opinion over social media platforms has emerged as a critical threat to public life. Around the world, government agencies and political parties are exploiting social media platforms to spread junk news and disinformation, exercise censorship and control, and undermine trust in media, public institutions and science.

  • Make tech companies liable for "harmful and misleading material" on their platforms

    In a withering report on its 18-month investigation into fake news and the use of data and “dark ads” in elections, the U.K. Parliament’s Digital, Culture, Media and Sport Committee (DCMS) says that Facebook’s egregious indifference to its corporate responsibility has led to a massive failure with far-reaching consequences. The DCMS charges that Facebook “obfuscated” and refused to investigate how its platform was abused by the Russian government until forced by pressure from the U.S. Senate Intelligence Committee. In the most damning section of the report, the DCMS offers evidence that Facebook’s indifference aided and abetted the incitement and persecution of the Rohingya ethnic group in Myanmar, causing large-scale death and the flight of hundreds of thousands of Rohingya from Myanmar to Bangladesh.

  • EU law enforcement, Google take on terrorist online propaganda

    Europol, the European law enforcement agency, convened a two-day gathering of European intelligence and law enforcement services, attended by representatives from Google, to improve the tracking and removal of online terrorist propaganda disseminated on various Google platforms.

  • Improving disaster response through Twitter data

    Twitter data could give disaster relief teams real-time information to provide aid and save lives, thanks to a new algorithm developed by an international team of researchers. “The best source to get timely information during a disaster is social media, particularly microblogs like Twitter,” said one researcher. “Newspapers have yet to print and blogs have yet to publish, so Twitter allows for a near real-time view of an event from those impacted by it.”
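    The real-time triage the researchers describe can be sketched at its simplest as keyword-based filtering of an incoming message stream into aid requests and resource offers. The term lists and sample tweets below are invented for illustration and are not the researchers’ actual algorithm.

    ```python
    # Illustrative sketch only: keyword-based triage of short messages,
    # loosely in the spirit of mining microblogs during a disaster.
    # The keyword sets and sample tweets are invented for this example.
    NEED_TERMS = {"trapped", "injured", "need", "help", "stranded"}
    RESOURCE_TERMS = {"shelter", "water", "supplies", "volunteers", "open"}

    def triage(message):
        """Label a message as an aid request, a resource offer, or other."""
        words = set(message.lower().replace(",", " ").split())
        if words & NEED_TERMS:
            return "request"
        if words & RESOURCE_TERMS:
            return "offer"
        return "other"

    stream = [
        "Family trapped on roof near 5th street, need rescue",
        "Shelter open at the high school, water and supplies available",
        "Thoughts and prayers for everyone affected",
    ]
    for msg in stream:
        print(triage(msg), "->", msg)
    ```

    A production system would add geolocation, deduplication, and learned classifiers, but even this crude split shows why microblog streams can beat newspapers and blogs on timeliness.
    
    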

  • White supremacist propaganda on U.S. college campuses on the rise

    White supremacist groups continued to escalate their propaganda campaign targeting U.S. college campuses, with incidents increasing by 77 percent during the 2017-2018 academic year, according to new data released today by the Anti-Defamation League (ADL). “The alt-right segment of the white supremacist movement remains a driving force behind this activity,” says the ADL’s Center on Extremism.

  • California’s strict internet privacy law has far-reaching implications

    California’s new internet privacy law, which takes effect in 2020 and is deemed one of the strictest so far in the United States, gives residents the right to know what data companies like Google and Facebook collect about them and to request that their information not be sold to third parties. It could also prompt business strategies that offer discounts in exchange for user data.

  • From Nord Stream to Novichok: Kremlin propaganda on Google’s front page

    On 24 May, an international team of investigators announced that a Russian anti-aircraft missile was directly responsible for the downing of Malaysia Airlines Flight 17 (MH17). Initial analysis of social media reactions to the announcement indicated that Kremlin outlets were struggling to effectively counter the new evidence implicating Moscow in the downing of MH17. However, over the next week, conspiracy theories and disinformation narratives from Russian propaganda outlets found a foothold on an influential and unlikely medium: Google’s front page.

  • Was there a connection between Russian Facebook propaganda and a foiled terrorist attack in Kansas City?

    On 18 April, a jury convicted three Kansas men of conspiring to use “weapons of mass destruction” against an apartment complex where many of the residents were Somali refugees. They were arrested before they were able to carry out their bomb plot in 2016. All three were known to be very active on Facebook, where they called themselves “Crusaders.” Experts wonder whether the divisive and polarizing ads which Russian disinformation specialists ran on Facebook during 2016 motivated the three to plan the attack.