Extremists & social media

New Zealand, France leading an effort to ban terrorists from social media

Published 24 April 2019

New Zealand and France will host a meeting with technology companies and world leaders to develop a strategy to block terrorists from social media. The meeting comes in the wake of the March shootings at two mosques in Christchurch.

The plan was announced Wednesday by New Zealand Prime Minister Jacinda Ardern.

The New York Times reports that she and French President Emmanuel Macron will chair a meeting with world leaders and tech companies in May, her office said, and attempt to get them to agree to a pledge called the “Christchurch Call.”

Wednesday’s announcement is the latest in a series of actions Ardern has taken in the wake of the 15 March Islamophobic terrorist attacks in Christchurch, which killed fifty people. The perpetrator livestreamed the attack on social media.

“The March 15 terrorist attacks saw social media used in an unprecedented way as a tool to promote an act of terrorism and hate. We are asking for a show of leadership to ensure social media cannot be used again the way it was in the March 15 terrorist attack,” Ardern said in a statement.

“We all need to act, and that includes social media providers taking more responsibility for the content that is on their platforms, and taking action so that violent extremist content cannot be published and shared.”

She and Macron want to stop terrorists from using social media to organize and promote terrorism and violent extremism, or distribute images of violence.

Talking with reporters on Wednesday, Ardern said that she and Macron were preparing to formulate policies that would refer specifically to terrorist activity.

“This isn’t about freedom of expression; this is about preventing violent extremism and terrorism online,” Ms. Ardern said. “I don’t think anyone would argue that the terrorist had a right to livestream the murder of 50 people.”

The Times notes that New Zealand has taken a hard-line approach to people sharing footage of the Christchurch mosque shootings, which spread rapidly through social media despite attempts to stamp it out.

The meeting was also announced by the Élysée, which said it would take place at the second convening of the “Tech for Good” conference that Macron initiated last year.

Analysts cautioned that any agreement that did not outline specific consequences for failing to halt extremist content would be unlikely to significantly alter tech companies’ behavior.

“It does need to be determined what the prime minister really wants,” Robyn Caplan, a researcher at Data & Society, a research institute in New York, and a doctoral candidate at Rutgers University, told the Times. “Without some sort of incentives or disincentives, I’m not quite certain what change will happen.”

New Zealand, with a population of only 4.8 million people, suffers from a “small market problem,” Caplan said, comparing it to other countries, like Canada, that have also tried to bolster regulation of social media platforms.

“The companies, depending on what the regulation is, might just pull out rather than comply,” she said, referring to Google’s decision to ban political advertising ahead of the Canadian elections after new transparency laws were introduced.

Caplan also pointed out that extremists were often radicalized — as the man accused of the Christchurch shootings claimed he had been — outside the largest social media platforms, including in WhatsApp messaging groups and on message boards like 8Chan.

The Times reports that France has already taken action on its own. It announced in November that it would embed regulators at Facebook for the first six months of 2019 to determine whether its processes for removing hate-fueled content could be improved.

In May, French lawmakers will discuss updating the country’s online hate speech law in an effort to compel social media platforms to take more responsibility for taking down heinous content. Under the legislation, the companies could be fined up to 4 percent of their global revenues if they fail to withdraw extremist content within twenty-four hours.