The new plans could mark the most significant crackdown on harmful online content yet, but critics say they go against free speech
Under new proposals revealed today (April 8), social media companies would be legally required to protect users from harmful content on their platforms. It marks a major crackdown on how tech companies operate, and could completely change the face of the internet in the UK.
“Tech companies will take your money, harvest your data, sell it to advertisers – but protect you from harm, they say ‘no can do’,” Home Secretary Sajid Javid said at the launch of the proposals on ‘online harms’, brought by the Home Office and the Department for Digital, Culture, Media and Sport. The Online Harms White Paper proposes multiple measures to combat harmful content, covering everything from terrorist propaganda and child abuse to fake news, harassment, cyberbullying, and the promotion of self-harm and suicide.
Here’s what you need to know about the significant proposal:
WHAT ACTUALLY COUNTS AS AN ‘ONLINE HARM’
These measures come as pressure mounts on the government and tech companies to act on growing concerns surrounding mental health online, the spread of misinformation, and the circulation of violent or inappropriate material. A recent report compiled by MPs suggests that social media addiction should be considered a disease. Much of the conversation has centred on content that promotes self-harm and suicide, following the death by suicide of 14-year-old Molly Russell. After her death, her family found images relating to depression and suicide on her Instagram, and said they blame social media in part for her death.
The plans define ‘online harms’ very, very broadly. Some are already defined in law – like child sex abuse, revenge porn, hate crime, harassment, terrorist propaganda, and illegal trade. Others covered aren’t enshrined in law, like fake news and misinformation, cyberbullying, and trolling. At a time when the far-right are poisoning people on Facebook, children are confronted by self-harm images on Instagram, and women continue to be relentlessly harassed by trolls on Twitter, the dialogue continues IRL and URL.
THE ONLINE HARMS WHITE PAPER SUGGESTS A ‘CODE OF PRACTICE’ AND MORE REGULATIONS
Within the paper, the government calls for the establishment of an independent regulator – like Ofcom, which looks out for TV and radio, or the ASA, which keeps an eye on advertising. It’s not totally clear yet whether this would be a newly formed body, or whether the task would fall to an existing regulator like those mentioned – either way, it would be funded by the tech industry.
This regulator would write a ‘code of practice’ for internet companies and social media networks to abide by. The main goal is for platforms to take more responsibility for the safety and welfare of users, and to actively tackle the harmful content and activity their platforms can facilitate. The ‘code of practice’ would apply to any online company whose platform lets users share or engage with user-generated content or interact with other users – so platforms like Facebook, Twitter, and Instagram, as well as Google, Snapchat, and WhatsApp.
Annual transparency reports would highlight what each platform does to address online harms. There have been suggestions that the code of practice could include guidelines for promoting legitimate news sources to combat fake news, and call for social platforms to employ fact checkers.
SOCIAL MEDIA PLATFORMS WILL FACE CONSEQUENCES FOR NOT FULFILLING THEIR PART OF THE DEAL
The proposals bind tech companies to a ‘duty of care’ for their users. Failing that, they’ll be hit with hefty fines, named and shamed in public notices, face search engine blocking in the UK, and have senior management held personally liable.
WHAT CRITICS SAY
TechUK, a group that represents the UK’s technology industry, told the BBC that the government must be “clear about how trade-offs are balanced between harm prevention and fundamental rights”.
Some claim there needs to be a wider approach – while the proposal focuses on young people and children, vulnerable people on the internet span age, gender, race, education, and background. Huffington Post journalist Sophie Gallagher called out the lack of action on ‘cyberflashing’ (sending explicit images to strangers via AirDrop), an abhorrent digital phenomenon that mostly affects women.
The proposal also outlines plans for media literacy training in schools, something that would be in step with the changing digital sphere and how young people interact with it. However, it has been proposed as an attachment to sex and relationships education – media literacy is a subject vast enough to stand on its own.
It also isn’t clear whether fines for tech companies would be proportionate to size, meaning only the wealthiest platforms could afford regulation. And, as previously mentioned, the onus will be on tech companies to remove harmful content, but the laws as they stand don’t cover all the harms outlined in the proposal. There is no stated intention to legislate on all ‘harmful’ speech, yet the proposal needs a legally binding definition of what counts as ‘harmful’ in order to work.
Another issue is that the code of practice must clearly define how internet companies should operate. Examples could include limiting the reach of false content, or making political advertising more transparent for users – but companies could argue that they fulfill their ‘duty of care’ elsewhere.
This proposal could also foreshadow age verification (which we’re already seeing with the government crackdown on access to porn sites), and some critics see the move away from self-regulation as opening the door to harsher laws and mass censorship. There’s a worry that the threat of fines would lead tech companies to restrict content to an intense, unnecessary degree and legitimise censorship, pushing filters that stop users from even uploading content in order to avoid liability.
“These things are always justified as being for good, kind and worthy objectives, but ultimately it’s giving power to a state regulator to decide what can and cannot be shown on the internet,” Victoria Hewson of the Institute for Economic Affairs think tank said in an interview with the BBC. “Maybe the authorities should be trying to stop these things at source.”
Index on Censorship, the non-profit advocating for freedom of speech and expression, warns against the proposal’s wide definition of ‘online harms’, highlighting the Ofcom report that found “45 per cent of adult internet users had experienced some form of online harm”. A closer look at those stats shows that a large number of those surveyed described these harms as just “moderately annoying”.
Joy Hyvarinen, head of advocacy at Index, said in a statement: “The online harms white paper will set the direction for future internet regulation. Index is concerned that protecting freedom of expression is less important than the government wanting to be seen as ‘doing something’ in response to public pressure. Internet regulation needs a calm, evidence-based approach that safeguards freedom of expression rather than undermining it.”
1/2 This morning off to meet @sajidjavid to talk about the government's new online harms white paper. I want to know - since the govt committed to looking at the issue of cyberflashing and how to tackle it, why was it not included? The only mention of sexting is for U18s...— Sophie Gallagher (@SCFGallagher) April 8, 2019
PLATFORMS LIKE FACEBOOK SAY THEY’RE WILLING TO REGULATE
Social media platforms like Facebook and Twitter already put out reports similar to those suggested by the proposal.
“I’m giving tech companies a message that they cannot ignore. I warned you and you didn't do enough. It's no longer a matter of choice. I will accept nothing else,” Javid told press this morning.
Speaking at today’s launch, Refuge – the UK’s largest provider of services for survivors of violence and domestic abuse – highlighted tech companies’ “patchy” approach to helping victims of revenge porn or threats online.
A statement from Rebecca Stimson, Facebook’s head of UK policy, said: “New regulations are needed so that we have a standardised approach across platforms and private companies aren’t making so many important decisions alone.
“New rules for the internet should protect society from harm while also supporting innovation, the digital economy and freedom of speech.”
Twitter’s head of UK public policy Katy Minshall also said: “We look forward to engaging in the next steps of the process, and working to strike an appropriate balance between keeping users safe and preserving the open, free nature of the internet.”
YOU CAN HAVE YOUR SAY
The government will now hold a public consultation on the plans for the next 12 weeks. You can find out more, engage, and have your say here.