
Finally: Facebook is banning white nationalist and separatist content

The site promises to block praise, support and representation of white nationalism and separatism

Facebook has announced that as of next week, it will ban all white nationalist and separatist speech and content from its platforms. The news comes in the wake of the terrorist attacks on two mosques in Christchurch, New Zealand, which were live-streamed on Facebook by the attacker. The social network has since come under mounting pressure to act.

The platform has been heavily criticised for allowing white supremacist and racist groups and ideologies to gain traction online – a Motherboard investigation last year revealed that Facebook’s policy did not class concerning examples of white nationalism as offensive enough to ban. Users were able to vocally support “white-only nations” on Facebook without consequence.

With this policy change, Facebook has promised to improve how it identifies and blocks content from terrorist groups, and any users searching for hate-related words and phrases will be shown information from Life After Hate, a charity founded by former white supremacists that fights extremism and helps people leave racist hate groups.
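
Facebook hasn’t described how this redirect is implemented; as a purely illustrative sketch, a keyword-based intercept (with a hypothetical term list and a stand-in search backend) might look something like this:

```python
# Hypothetical blocklist; Facebook has not published its actual list
# of flagged terms or its matching logic.
FLAGGED_TERMS = {"white nationalism", "white separatism"}

LIFE_AFTER_HATE_URL = "https://www.lifeafterhate.org"

def run_normal_search(query: str) -> str:
    return f"results for: {query}"  # stand-in for the real search backend

def search(query: str) -> str:
    """Intercept flagged queries and surface a support resource instead."""
    if any(term in query.lower() for term in FLAGGED_TERMS):
        return f"Looking to leave a hate group? Get help at {LIFE_AFTER_HATE_URL}"
    return run_normal_search(query)

print(search("white nationalism groups near me"))  # redirected to Life After Hate
```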

The ban will also apply to Instagram, as it’s a Facebook-owned platform.

In a blog post published on Wednesday (March 27), Facebook explained that its previous policy had treated white nationalism as acceptable in the same way as “American pride and Basque separatism, which are an important part of people’s identity”. Following consultation with “members of civil society and academics”, it agreed that white nationalism could not be “meaningfully separated” from white supremacy or organised hate groups.

Footage of the New Zealand massacre, which killed 50 people and injured dozens more, was viewed over 4,000 times on Facebook before it was removed. Facebook said that it deleted around 300,000 copies of the footage and blocked a further 1.2 million at the point of upload within the first 24 hours. Facebook and YouTube are currently being sued by the French Council of the Muslim Faith, a group representing French Muslims, for hosting the video on their sites.

Deplatforming hate groups has been shown to work, according to experts. Now, it’s a case of implementing a strategy that can immediately combat both overt messages of hate and more implicit, coded messaging. While a statement like “I am a white nationalist” would be picked up instantly, more underlying expressions of hatred towards groups like Muslims, the Jewish community, and LGBTQ people (insidious elements of white nationalism) may take longer to detect or could be missed entirely.

According to Motherboard, Facebook will use a strategy similar to the one it uses against Isis and related terror content – algorithms, machine learning, and artificial intelligence that detect and automatically delete images and content matching previously deleted hate material.
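
Neither report details the matching pipeline. A minimal sketch of the general idea – fingerprinting removed material and checking new uploads against that index – assuming a plain cryptographic hash where production systems would likely use perceptual hashing or trained classifiers to catch altered copies:

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Hash raw file bytes. A cryptographic hash only matches
    byte-identical copies; real systems are thought to use perceptual
    hashes or classifiers so re-encoded or edited copies still match."""
    return hashlib.sha256(content).hexdigest()

# Hypothetical index built from previously deleted hate material.
removed_index = {fingerprint(b"<bytes of previously removed video>")}

def should_block(upload: bytes) -> bool:
    """Delete-at-upload check: block anything matching known removed material."""
    return fingerprint(upload) in removed_index

assert should_block(b"<bytes of previously removed video>")  # exact re-upload caught
assert not should_block(b"<slightly re-encoded copy>")       # altered copies slip past this sketch
```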

Advocacy groups have praised the new policy, but stress that it has come far too late, pointing specifically to the online organising around 2017’s Unite the Right rally in Charlottesville, where one anti-racist protester was killed. Facebook did not remove the event page for the rally until the day before it took place. Anti-hate organisations have called for more clarity on how the platform will handle less overt, more implicit hate messaging.

Back in September, Twitter launched expanded rules against hateful conduct, with a moderation policy banning “dehumanising speech” on the grounds that “language that makes someone less than human can have repercussions off the service, including normalising serious violence”.