Perspective
How Data Privacy Laws Can Fight Fake News

Published 15 August 2019

Governments from Russia to Iran have exploited social media’s connectivity, openness, and polarization to influence elections, sow discord, and drown out dissent. While responses have begun to proliferate, more are still needed to reduce democracies’ inherent vulnerability to such tactics. Recent data privacy laws may offer one such answer by limiting how social media uses personal information to micro-target content: Fake news becomes a lot less scary if it can’t choose its readers.

Alex Campbell writes in Just Security that current efforts to combat online disinformation fall broadly into one of three categories: content control, transparency, or punishment. Content control covers takedowns and algorithmic de-ranking of pages, posts, and user accounts, as well as preventing known purveyors of disinformation from using platforms. Transparency includes fact-checking, ad archives, and media literacy efforts, the last of which fosters general transparency by increasing user awareness. Punishment, the rarest category, involves sanctions, doxxing (outing responsible individuals), and other tactics that impose direct consequences on the originators of disinformation. All these initiatives show promise and deserve continued development. Ultimately, online disinformation is like cancer, a family of ills rather than a single disease, and therefore must be met with a similarly diverse host of treatments.

However, none of the above techniques fundamentally alter the most pernicious aspect of online disinformation, which is the ability to micro-target messaging at the exact audience where it will have the greatest impact. Content control and punishment are reactive—no matter their success in the moment, the bigger picture is a never-ending game of whack-a-mole as new tactics and operations crop up. Transparency doesn’t actively impede online disinformation but just lessens the blow, betting that more aware audiences will engage less with false or inflammatory content.

Data privacy may offer a more precise solution. Data privacy laws like the European Union’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) are not intended to address harmful speech. Their main goal is giving users greater control over their personal data, allowing people to check what data has been stored, opt out of data sharing, or erase their data entirely. Personal data generally includes information directly or indirectly linking accounts to real-life individuals, like demographic characteristics, political beliefs, or biometric data.