YouTube bans insults aimed at race, gender, or sexual orientation

Like other social media platforms, YouTube is trying to shape healthier online conversation, but why did it take almost 15 years to tackle racism and homophobia?

Earlier this week (December 11), YouTube announced in an update to its harassment policy that it will “no longer allow content that maliciously insults someone based on protected attributes such as their race, gender expression, or sexual orientation”.

This move builds on the platform’s prohibition of “supremacist” and conspiracy theory content back in June, but it also raises the question… shouldn’t racism and homophobia have been banned already?

The update goes on to explain the consequences for crossing or repeatedly “brushing up against” the new boundaries, including removal from the YouTube Partner Program (meaning the creator won’t be able to earn money from their videos directly), removal of content, and ultimately the termination of the offending channel.

Critics complain that the new policy is an infringement on freedom of speech, and it’s undeniable that YouTube has made censorship errors in the past: its broad-strokes algorithm has often led to innocent channels being caught in the crossfire, sometimes for responding to the kind of discrimination the platform is supposed to be cracking down on.

Outside influences are also a troubling factor in YouTube’s censorship policy. The Met Police, for example, have previously had a hand in deleting over 100 drill music videos coming out of London, despite there being no evidence of a causal relationship between the music and violent crime.

However, for the average YouTube user, it’s also easy to miss a whole ecosystem of hate speech that the relatively unregulated platform has allowed to develop over its almost 15 years of operation.

Across all social media, a trend towards extreme views has been fuelled by the increased engagement that it brings (whether that engagement is positive or negative is pretty irrelevant when you’re getting paid for it either way). This isn’t helped by the fact that YouTube has arguably incentivised shock content in the past, leading to controversies like Logan Paul filming a dead body in Japan’s “suicide forest”.

Companies like Instagram (and, by extension, its parent Facebook) have tried to address this trend by hiding like counts in their apps. But Facebook, along with Twitter and YouTube, has also been criticised in the past for acting too slowly to remove extreme views, and even videos of racially motivated attacks, once they’ve started to gain traction among users.

Meanwhile, the creator of the retweet button has expressed his regret, suggesting it’s become a tool for the casual proliferation of hate speech.

On YouTube, it’s even easier to fall down a rabbit hole of such extreme conversations, following links to similar channels recommended by the algorithm. Add in targeted hate speech, and it’s easy to see how you could come out the other end with a skewed perspective, especially when any single creator’s opinions are often backed up by hundreds of commenters.

In fact, the problem YouTube faces was particularly on show in a live broadcast of Friday’s protests in London; the comments section contained a constant stream of – often copy-pasted – blatant white supremacy, direct discriminatory attacks, and calls for violent right-wing intervention. (Obviously, that’s only anecdotal evidence taken from one stream, but it’s an indicator of a larger problem evident in the comments of plenty of other videos across the platform.)

YouTube also suggests that its new harassment policy will combat this by addressing and removing what it considers “toxic comments”, though it’s unclear how that would work in real time, or how precise the decisions about which comments to delete will be.

In any case, the question remains for social media titans (Google-owned YouTube as much as Facebook and its subsidiaries): when it comes to correcting the course of online conversations, are they simply doing too little, too late?