Could two Supreme Court cases really rewrite the rules of the internet?

Google, Twitter, and Facebook are at the centre of a debate about Section 230, the law that underpins free speech online

Text: Thom Waite. Illustration: Callum Abbott

Right now, if someone shares illegal content or says something morally dubious on social media, the blame ultimately lands on the user themselves. From harmful conspiracy theories shared by TikTok detectives, to never-ending fake news from Twitter’s finest, platforms themselves are protected from liability in the US, thanks to Section 230 of the Communications Decency Act. However, two cases that reached the Supreme Court this week have the potential to bring about a radical change.

YouTube, Twitter, and Facebook have all found themselves in the crosshairs, as the lawsuits are examined by the highest legal authority in the US. The first case, which had its first hearing on February 21, will decide whether YouTube (and, by extension, Google, which owns the video platform) should be held accountable for promoting videos that helped terrorist groups recruit new members. The second targets Twitter, Facebook, and Google with similar accusations.

How can a couple of court cases have implications for the entire internet? Well, since reaching the Supreme Court, they’ve been touted as the most credible challenge to Section 230 in decades, while tech companies and civil rights groups warn of irreparable damage to free speech if the statute is overturned. Below, we explore the cases at the forefront of the Section 230 battle, and whether we could actually be looking at the end of (relatively) uncensored social media.

WHAT IS SECTION 230, EXACTLY?

Passed in 1996 (which might explain why some people think it needs to change), Section 230 protects platforms such as Facebook, Twitter, TikTok, and YouTube from legal consequences for content posted by their users. The thing is, the internet has changed considerably since the mid-90s. Most importantly, the majority of social media platforms now recommend content via opaque algorithms based on users’ interests – does this mean that they’re more responsible for recommending the dangerous, radical content that crops up on some users’ feeds?

GONZALEZ V GOOGLE

The family of Nohemi Gonzalez, a 23-year-old US citizen who was killed in Paris in 2015 during coordinated attacks by the Islamic State, claim that YouTube’s parent company, Google, played a part in promoting ISIS recruitment videos on its platform. These “inflammatory videos”, they say, encouraged the extremist organisation’s followers to commit terror attacks on their home soil. In a lawsuit against the company, the family seeks to appeal a ruling that maintained YouTube was protected under Section 230.

TWITTER V TAAMNEH

This is a similar case, concerning a Jordanian citizen named Nawras Alassaf, who was killed in a terror attack on an Istanbul nightclub in 2017. Relatives of Alassaf filed a lawsuit claiming that Twitter, alongside Facebook and Google, should be held accountable for “aiding and abetting” the attack. The case differs, however, in that it doesn’t target Section 230 directly, but questions whether the claim can be argued under an anti-terrorism law. Right now, it’s still unclear whether the social media companies would remain protected by Section 230.

WHY DOES IT MATTER?

The reasons to question whether social media companies should be held accountable for promoting harmful or extremist content are clear. Nevertheless, a range of big tech companies, internet experts, and even human rights organisations have urged the Supreme Court to uphold Section 230.

Why? Well, for one, sifting through the vast quantities of data on the internet would be practically impossible without recommendation algorithms, as Twitter points out. There are also concerns that free speech could be restricted if the law were to change. If tech companies are held liable for everything their users say, then the safest thing to do is to censor any potentially controversial conversations – that could include conversations about topics such as abortion, mental and sexual health, political action, police brutality... the list goes on.

HOW LIKELY IS IT THAT WE’LL LOSE FREE SPEECH ON SOCIAL MEDIA?

Following initial hearings on Gonzalez v Google and Twitter v Taamneh, it doesn’t seem particularly likely that tech platforms’ legal shield is going to disappear anytime soon. If nothing else, the Supreme Court justices don’t seem clear enough on Section 230 to take action either way, with Justice Elena Kagan admitting that they are “not the nine greatest experts on the internet”.

Given the impact of changing a rule like Section 230 – which has been described as one of the most important legal provisions in the history of the internet – there are some understandable worries about placing responsibility for its future in the hands of nine people, with a collective age of 566, who don’t really know how it works.

Could we see some dramatic changes to the law in the future, though? As we mentioned earlier, Section 230 is decades out of date, failing to account for the new, algorithm-driven internet. In fact, both Joe Biden and Donald Trump have moved to repeal the law during their time in office. Mark Zuckerberg has also called for an overhaul in the past (though this comes with its own problems, since Meta has vast resources for content moderation, putting it at a distinct advantage if changes were to be made). It does seem likely that we’ll see some significant changes to Section 230 somewhere down the line, but – like FOSTA-SESTA before it – they’ll undoubtedly come with their own fair share of controversy.
