Nearing the Tipping Point Needed to Reform Facebook, Other Social Media?

One expert who is well acquainted with Facebook’s pattern of quashing research that tarnishes its image is ADL Belfer Fellow Laura Edelson, a computer scientist at New York University. On 3 August 2021, Facebook cut off her access to the platform because she was studying how misinformation spreads through political ads there. Edelson will soon testify before the House Committee on Science, Space, and Technology at the hearing “The Disinformation Black Box: Researching Social Media Data.”

This hearing could not come at a more pivotal moment.

The debate over what content (and whose voices) should stay online and what should be removed, labeled, or de-amplified is not simple and will not be moved forward effectively by overly simplistic remedies that can have unintended consequences for free speech. Nor will Congress’s current cynical propensity to weaponize proposed reforms for partisan warfare advance any worthwhile goals. But careful legislation, regulation, third-party research, and market pressure are necessary if we are to meaningfully diminish the prevalence and impact of hate and disinformation online. 

One promising avenue for requiring more transparency from platforms is under consideration in the California State Legislature, where lawmakers will consider, and should pass, AB 587. This social media transparency bill would require regular reporting from major social media platforms on their terms of service, enforcement, and content moderation practices.

We need greater clarity about how Facebook and other companies administer their content moderation policies. Consider the programs exposed by the WSJ series, such as Facebook’s “XCheck,” a shadow policy apparatus that allowed a “VIP” tier of politicians, celebrities, athletes, and other high-profile people to get away with violating the platform’s rules. Laws like AB 587 would impose penalties on platforms for spreading egregiously harmful content.

ADL experts have studied and documented the impact of online hate and abuse, particularly on Facebook. Last year, as part of a coalition, ADL launched the Stop Hate for Profit campaign, in which advertisers, celebrities, and sports figures joined together to demand that Facebook be held accountable for its role in amplifying hate and, in particular, racism and anti-Semitism. ADL has also reported how Facebook’s transparency reports obscure the full impact of hate speech and are shielded from independent third-party verification. ADL testified before Congress on the dangers posed by social media companies and crafted its REPAIR Plan to counteract them. Social media platforms act as megaphones for domestic terrorists looking to broadcast vile messages of racially or ethnically motivated extremism and anti-Semitism far and wide to radicalize people and normalize extremism.

ADL’s 2021 Online Hate and Harassment survey found that 75 percent of a nationally representative sample of respondents who had experienced online harassment reported that at least some of that abuse occurred on Facebook. Both ADL’s Holocaust denial and anti-Semitism report cards documented the platform’s anemic, often negligent, response to user-submitted reports of hateful anti-Jewish posts that violated Facebook’s and other platforms’ terms of use and community standards. These dismaying results came after nearly a decade of ADL advocacy to treat Holocaust denial as hate speech. While Facebook finally reversed its longstanding tolerance of Holocaust denial last year, violative content is still easy to find on the site.

The avalanche of information we now have on the harm caused by Facebook content and its refusal or inability to effectively and equitably enforce its own content rules must lead to a serious reckoning. But if years of research and blockbuster findings from trusted institutions and hard-hitting journalism from respected media outlets have not yielded meaningful action, one reason may be found in the words of Adam Mosseri, the head of Instagram. Responding to criticism leveled at Facebook in the wake of the WSJ’s investigation, Mosseri said: “We know that more people die than would otherwise because of car accidents, but by and large, cars create way more value in the world than they destroy. And I think social media is similar.”

In other words: back off because we still create more value than harm, even though we intentionally engineer our platforms and business models to seek out and amplify that harm. 

Mosseri’s analogy doesn’t work even on its own terms. The automotive industry is subject to significant regulation and independent review by outside watchdogs. This oversight has vastly increased the safety of cars and saved countless lives. (Such regulation was originally met with great resistance from the industry, which falsely claimed it would put carmakers out of business.) As a result, cars, while still causing harm (and climate change), nevertheless have far more safeguards for drivers, passengers, and passers-by than Facebook has for harmful content. Emissions requirements, anti-lock brakes, seat belt and crash testing, speed limits, driver licensing, and transparency reports are just a few such measures. If Facebook wants to compare itself to the auto industry, that is all the more reason to start looking at some analogous public and private guardrails.

Every day, Facebook makes consequential decisions about how to deliver and amplify the most engaging content to keep users coming back, as well as how to apply its content moderation rules. These decisions affect billions of people, from teenage girls suffering from eating disorders to Muslim minorities who are the victims of genocidal campaigns waged on the platform. But Facebook is hardly alone in failing to enforce its rules. Twitter grants political officials what it calls a “public interest” exception, allowing them to engage in “foreign policy saber-rattling,” inciting or implying violence that would lead ordinary users to be banned, suspended, or at the least have their content removed. In May 2020, YouTube failed to remove a conspiracy theory video that became a major source of COVID-19 misinformation before it amassed over seven million views, an egregious instance of content seeded by fringe groups like QAnon and then rapidly amplified on social media.

One particularly disturbing takeaway from the WSJ’s series is Facebook’s longstanding policy of acknowledging harm only when absolutely forced to do so by a veritable tsunami of evidence. When faced with government action, company officials including CEO Mark Zuckerberg finally apologize and commit under the harsh light of scrutiny to again do more and better. It’s abundantly clear by now, however, that this strategy is a deflection, one it appears Facebook is now abandoning in favor of insulating Zuckerberg from continuing scandals and going on the offensive to serve users positive stories about the platform.

No one said reining in social media was going to be easy. But the harm caused by social media is simply too great for us to fail to act. We shouldn’t need additional proof that platform self-regulation to combat disinformation and hate online doesn’t work, whether at Facebook or many other companies. If the WSJ’s comprehensive series isn’t the tipping point needed to reform the tech sector, what will be?

The article is published courtesy of the Anti-Defamation League (ADL).