Whistleblower: Facebook deceived public on extent of extremist content removal

Published 16 May 2019

According to a whistleblower’s complaint to the Securities and Exchange Commission (SEC), recently revealed in an AP investigation, Facebook has been misleading the public and its shareholders about the efficacy of its content moderation efforts. CEO Mark Zuckerberg testified before the U.S. Congress in April 2018 that Facebook was “[taking] down 99 percent of the Al Qaeda and ISIS-related content in our system before someone, a human even flags it to us.” The whistleblower, however, alleges that Facebook’s actual removal rate for extremist content was just 38 percent.

Over a five-month period in 2018, the whistleblower monitored the Facebook friends of openly extremist profiles and found that “less than 30 percent of the profiles of these Friends had been removed by Facebook and just 38 percent of the Friends who were displaying symbols of terrorist groups had been removed. This directly contradicted the assurance by Facebook … that it is removing 99 percent of such content.” Facebook’s artificial intelligence (AI) tools reportedly target only ISIS, al-Qaeda and their affiliates, and even then the tools often fail to catch permutations of the groups’ names.

“The whistleblower’s complaint proves, once again, that Facebook is willing to lie to lawmakers, its shareholders and the public to disrupt serious conversations about regulating the tech industry,” said Counter Extremism Project (CEP) Executive Director David Ibsen. “Claiming to remove 99 percent of extremist content from your platform, and subsequently revising that number down to be anywhere from 83 to 90 [percent], was always suspect and defied logic. Now, we have evidence that the actual number was as low as 38 percent. The fact that the tech giant is willing to fabricate data is emblematic of Facebook’s arrogance and lack of seriousness when it comes to combatting extremism and terrorism online. Investors and regulators at the SEC must act decisively and demonstrate that dishonesty will not be tolerated.”

CEP says it has long questioned Facebook’s content moderation claims as overly optimistic and impossible to independently verify. In fact, while Zuckerberg was testifying before Congress a year ago, CEP found ISIS content on the platform in real time, a finding Congresswoman Susan Brooks cited in an exchange with the Facebook CEO. Since then, CEP has published a further report, Spiders of the Caliphate, which supports the whistleblower’s findings that Facebook’s own technology, whether the “suggested friends” feature or the auto-generation of videos and pages, often contributes to the problem. The report found that ISIS followers exploit the platform to host meetings, link to terrorist propaganda and organize.