What Is the Online Safety Act and Why Have Riots in the U.K. Reopened Debates About It?

At present, social media platforms such as TikTok, X, Facebook and YouTube use recommendation algorithms designed to optimize user engagement, and safety concerns are not typically weighted within these systems. X, for example, uses separate algorithms for content moderation and content recommendation.

As a result, harmful content can be recommended by one algorithm before another has identified it as needing moderation.
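
To make this separation concrete, the sketch below (purely illustrative, and not based on any platform's actual code) contrasts a ranking function that scores posts only on predicted engagement with one that also down-weights an estimate of harm. The `Post` fields, the `predicted_harm` score and the `harm_penalty` weight are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_engagement: float  # e.g. estimated probability of a click or share
    predicted_harm: float        # e.g. a classifier's estimate that the post is harmful (0-1)

def rank_by_engagement(posts):
    """Engagement-only ranking: safety plays no role in what gets surfaced."""
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

def rank_with_safety_weighting(posts, harm_penalty=2.0):
    """Illustrative safety-aware ranking: likely-harmful posts are pushed down
    even before a separate moderation pipeline has reviewed them."""
    return sorted(
        posts,
        key=lambda p: p.predicted_engagement - harm_penalty * p.predicted_harm,
        reverse=True,
    )

if __name__ == "__main__":
    feed = [
        Post("benign", predicted_engagement=0.40, predicted_harm=0.05),
        Post("inflammatory", predicted_engagement=0.90, predicted_harm=0.80),
    ]
    print([p.post_id for p in rank_by_engagement(feed)])          # inflammatory first
    print([p.post_id for p in rank_with_safety_weighting(feed)])  # benign first
```

In the engagement-only ranking, the inflammatory post is surfaced first and may only be removed later by a separate moderation process; the safety-weighted ranking pushes it down before it is widely recommended.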

The Online Safety Act aims to address this challenge by requiring platforms to test the safety implications of their recommendation algorithms. That is, when services change their recommendation algorithms, they will be encouraged to collect safety metrics so they can assess whether those changes are likely to increase users’ exposure to illegal content.
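
The act does not prescribe what those metrics should be, but the sketch below illustrates one simple possibility under assumed names: comparing the rate at which recommendations are later confirmed to be illegal content between the current algorithm (control) and a proposed change (treatment). The log format, function names and 10% tolerance threshold are hypothetical.

```python
def illegal_exposure_rate(recommendation_log):
    """Fraction of recommended items later confirmed to be illegal content.

    `recommendation_log` is a list of (item_id, confirmed_illegal) pairs,
    e.g. produced by joining recommendation records against moderation outcomes.
    """
    if not recommendation_log:
        return 0.0
    flagged = sum(1 for _, confirmed_illegal in recommendation_log if confirmed_illegal)
    return flagged / len(recommendation_log)

def assess_algorithm_change(control_log, treatment_log, max_relative_increase=0.10):
    """Compare exposure under the current algorithm (control) and a proposed
    change (treatment); flag the change if exposure rises by more than the
    chosen tolerance (10% here, an arbitrary illustrative threshold)."""
    control_rate = illegal_exposure_rate(control_log)
    treatment_rate = illegal_exposure_rate(treatment_log)
    return {
        "control_rate": control_rate,
        "treatment_rate": treatment_rate,
        "safety_concern": treatment_rate > control_rate * (1 + max_relative_increase),
    }

if __name__ == "__main__":
    control = [("a", False), ("b", False), ("c", True), ("d", False)]
    treatment = [("e", True), ("f", True), ("g", False), ("h", False)]
    print(assess_algorithm_change(control, treatment))
    # {'control_rate': 0.25, 'treatment_rate': 0.5, 'safety_concern': True}
```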

By incorporating these safety considerations when designing and refining content recommendation algorithms, it is hoped that fewer individuals will be exposed to harmful content before content moderation teams have had the opportunity to remove it.

Neutral Oversight
One of the primary challenges around the regulation of online content is the unwillingness of platform providers to be seen as “arbiters of truth”. For example, X recently renamed its Trust and Safety team to simply Safety, with its owner, Elon Musk, stating: “Any organization that puts ‘Trust’ in their name cannot be trusted as that is obviously a euphemism for censorship.”

Mark Zuckerberg, CEO of Meta, made a similar argument in 2020, stating that Facebook “shouldn’t be the arbiter of truth of everything that people say online”.

However, as recent events have shown, this has not stopped Musk himself from promoting particular narratives about the UK riots and adding fuel to an already inflamed discourse.

The act addresses this challenge by giving the independent regulator, Ofcom, responsibility for regulating online content and algorithms and for enforcing the rules. While the law was passed by the UK parliament, the government does not have the power to determine what content is allowed and what is disallowed – thus securing political neutrality in the long-term implementation of the act.

Prevailing Challenges
At present, the Online Safety Act does not include any provisions on misinformation and disinformation. This appears to be why the mayor of London, Sadiq Khan, suggested that in its current form, the act does not go far enough.

The challenge posed by misinformation was brought into sharp focus by the murders that preceded the riots, with content falsely claiming that the Southport attacker was a Muslim migrant trending across several social networking platforms in the aftermath of the attack.

The home secretary, Yvette Cooper, claimed that social networking platforms “put rocket boosters” under the spread of this content, and there has been much debate as to whether it helped fuel the violence seen on many city streets.

This leaves some observers concerned that, until the act fully comes into force, we are in a legal purgatory over what online activity can and cannot be pursued through the courts.

However, we won’t really know how effective the Online Safety Act can be until it is fully in force and has been tested by another situation like the recent riots.

Olivia Brown is Associate Professor, University of Bath. Alicia Cork is Postdoctoral Researcher, University of Bath. This article is published courtesy of The Conversation.