TRUTH DECAY
In Times of Crisis, States Have Few Tools to Fight Misinformation

By Matt Vasilogambros

Published 24 January 2025

While officials in Southern California fought fire and falsehoods, Meta, the parent company of Facebook and Instagram, announced it would eliminate its fact-checking program in the name of free expression. As social media companies push back against efforts to crack down on falsehoods, questions have arisen about what, if anything, state governments can do to stop the spread of harmful lies and rumors that proliferate on social media.

As deadly wildfires raged in Los Angeles this month, local officials were forced to address a slew of lies and falsehoods spreading quickly online.

From artificial intelligence-generated images of the famous Hollywood sign surrounded by fire to baseless rumors that firefighters were using women’s handbags full of water to douse the flames, misinformation has been rampant. While officials in Southern California fought fire and falsehoods, Meta — the parent company of Facebook and Instagram — announced it would eliminate its fact-checking program in the name of free expression.

That has some wondering what, if anything, state governments can do to stop the spread of harmful lies and rumors that proliferate on social media. Emergency first responders are now experiencing what election officials have had to contend with in recent years, as falsehoods about election fraud — stemming from President Donald Trump’s refusal to acknowledge his 2020 loss — have proliferated.

One California law, which passed along party lines last year, requires online platforms to remove posts containing deceptive or fake AI-generated content related to the state’s elections within 72 hours of a user’s complaint.

The measure allows California politicians and election officials harmed by the content to sue social media companies and force compliance. However, federal statute, Section 230 of the Communications Decency Act, broadly shields social media companies from lawsuits and from being found liable for content their users post.

“Meta’s recent announcement that they were going to follow the X model of relying on a community forum rather than experts goes to show why the bill was needed and why voluntary commitments are not sufficient,” Democratic Assemblymember Marc Berman, who introduced the measure, wrote in an email to Stateline.

X, the company formerly known as Twitter, sued California in November over the measure, likening the law to state-sponsored censorship.

“Rather than allow covered platforms to make their own decisions about moderation of the content at issue here, it authorizes the government to substitute its judgment for those of the platforms,” the company wrote in the suit.

The suit argues that the law clearly violates the First Amendment. Further hearings on the lawsuit are likely to come this summer. Berman said he’s confident the law will prevail in the courts, since it’s narrowly tailored to protect the integrity of elections.

California’s measure was the first of its kind in the nation. Depending on how it plays out in the courts, it could inspire legislation in other states, Berman said.