The ‘dehumanising speech’ policy could work, if properly enforced
Twitter is to neo-Nazis as Facebook is to your auntie and her bad memes – it’s a cesspit for hate, racism and sexist abuse. Now the social network is rolling out a policy change, but it’s asking for your say first.
Twitter yesterday (September 25) announced the latest expansion of its rules against hateful conduct: a new moderation policy that bans “dehumanising speech”. A post by Twitter execs Del Harvey and Vijaya Gadde read: “language that makes someone less than human can have repercussions off the service, including normalising serious violence.”
For a long time, people of colour, women, and minority groups have been asking Twitter for more stringent protections from harassment and abuse. Some have cited major inconsistencies in how Twitter handles reports of abuse, noting that the platform has at times punished the very people targeted by trolls and neo-Nazis.
The social network’s new policy should come into effect later this year, with the site rules reading: “You may not dehumanise anyone based on membership in an identifiable group, as this speech can lead to offline harm.” This expands on current rules which ban users from using threatening language against someone because of their race, gender, or religious group.
Twitter details it as follows: “Dehumanisation: Language that treats others as less than human. Dehumanisation can occur when others are denied of human qualities (animalistic dehumanisation) or when others are denied of human nature (mechanistic dehumanisation). Examples can include comparing groups to animals and viruses (animalistic), or reducing groups to their genitalia (mechanistic).
“Identifiable group: Any group of people that can be distinguished by their shared characteristics such as their race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, serious disease, occupation, political beliefs, location, or social practices.”
As Wired reports, under the previous rules a comment like “all women are scum and should die” would only be addressed if it targeted a specific individual; the new policy covers such abuse regardless of target. Enforcing something like this will be hard, as language and context are difficult to pin down – plus, what will Twitter do when Donald Trump eventually breaks the new rule?
Its other efforts to rid the site of hate and abuse continue – in the last few months, more than 140,000 third-party apps that violated its policies were blocked, bots were deleted en masse, and it began hiding ‘potentially harmful’ Tweets from users. Nevertheless, Twitter caught a lot of flak for allowing right-wing Infowars pundit Alex Jones to continue tweeting long after other platforms had banned him and his site.
Users have two weeks to give Twitter feedback on the proposed policy, which is available in English, Arabic, Japanese and Spanish. The comment period is open until October 9, after which Twitter will consult its own ‘Trust and Safety Council’ on the feedback.