Life & Culture / Opinion
The era of internet age verification is here – so what now?
People in the UK will now be asked to submit proof of age when trying to view NSFW content online, to make the internet safer for children. Will it work?
July 28, 2025
Text: Eli Cugini

Last week, millions of UK users opened platforms like Reddit, Discord and Bluesky and were greeted with a new requirement: age verification. If you don’t submit a picture of a government ID, or have one of your selfies algorithmically verified as belonging to an adult, then 18+ content on many platforms becomes inaccessible.

This sudden change is actually years in the making: it is an Ofcom-mandated implementation of the 2023 Online Safety Act (OSA), which aims to ‘protect children and adults online’ from illegal, harmful, and age-inappropriate content. But the repercussions of the Act appear potentially disastrous, with abounding data and safety risks. We even risk losing access to Wikipedia: Wikimedia, the nonprofit behind Wikipedia, is currently challenging the OSA in court, arguing that ID requirements threaten its anonymous editors’ safety and would allow bad actors to prevent unverified users from fixing their edits, slowing editors’ ability to tackle misinformation.

The OSA is a very wide-ranging piece of legislation, but one of its main focuses is stopping children from accessing ‘Primary Priority Content’ – namely pornography, as well as content that encourages eating disorders, self-harm, and suicide. Under the OSA, platforms that children can access and that host adult content must implement age checks to stop children seeing that content, or else risk massive fines (up to £18 million, or up to 10 per cent of worldwide revenue, whichever is higher).

July 25 was Ofcom’s compliance deadline, and many massive platforms barely made it; I checked Pornhub’s UK access on July 24 and the age check hadn’t been implemented yet. X (Twitter), meanwhile, has rendered NSFW content on my account unavailable ‘until we can verify your age’ (which it has not yet offered me a way to do).

The regulation of online content is an important component of a safe and free internet. A lack of effective moderation within platforms, and a lack of serious scrutiny from outside, can produce terrible results: radicalised and violent communities, circulation of terrifying material, harassment of minorities. (X, and its worsened content moderation post-Musk takeover, is a good example of this in action.)

But these age checks, hastily implemented and usually outsourced to third-party providers like Persona and k-ID, are concerning for various reasons. As many have noted, they can be circumvented by entirely legal VPNs: as I write this, free VPNs make up five of the top ten free apps on Apple’s App Store. Various users have also reported finding creative ways around the checks, such as using the game Death Stranding’s photo mode to pass the selfie check on Discord. Those who do go through the checks may fall foul of their inaccuracy rate, judging by Discord assigning my 30-year-old partner to the ‘teen’ age group yesterday. Plus, attaching an ID and/or photo to a sensitive account – and giving said information to non-UK companies whose data policies and vulnerability to hackers are difficult to discern – is an unwelcome prospect for most.
The safety of our sensitive data is an ever-present issue online: the ‘Tea’ app data leak this week saw thousands of women have their IDs and photos made public, which has already led to instances of doxxing and public harassment.

Public debates about the OSA have often centred on end-to-end encryption (where only the sender and receiver can see messages), and on the sharing of illegal content – particularly child sexual abuse material – in encrypted messages, where the company cannot view and prosecute it. Jemimah Steinfeld in Prospect argues that Ofcom’s treatment of encryption as a ‘risk’ factor, and its potential encouragement of companies to weaken encryption to meet safeguarding standards, is misguided; any attempt to exert greater control over ‘bad guys’ by weakening encryption will instead create openings for ‘even more bad guys’, whose malicious misuse of data could present an equal – or greater – safeguarding risk for both children and adults.

So, we’re caught at an impasse: the OSA’s implementation so far is clearly limited in effectiveness and riddled with issues, issues that threaten to exceed even our high tolerance for data gathering. But the OSA is deeply important to the government as a bipartisan push for greater child safeguarding: who doesn’t want children to be safer? (Reform says it would repeal the OSA, with Farage calling it ‘dystopian’ and promising to figure out a vaguely imagined alternative to safeguard children. The fear of censorship and government surveillance is common among people opposed to the government – what differs is the content they aim to protect.)

The difficulty here is that child safeguarding is so emotive that criticising how it is imagined, or implemented, leaves you open to accusations that you don’t care about children’s wellbeing. Creating a safe internet for children is a difficult and necessary task, but one that is often approached through the idea of the ‘crackdown’: increased surveillance and control over children, harsher policing of offenders, and intensified cultural fears about the internet and its horrors. I understand this impulse – I don’t want children to be exposed to harmful content, and I want those who seek to harm children to be prevented from doing so. But I think this approach also betrays a fundamental disrespect for children as people with agency.

Children are going to explore their world; they are going to, at times, seek out ‘age-inappropriate’ content, either out of curiosity or necessity (16- and 17-year-olds, for instance, are in a strange limbo where they can legally have sex, but cannot legally be given access to videos of sex). Adult things routinely happen to, and around, children. We can lessen online harms through content moderation and platform regulations, but we cannot truly age-segregate the internet, and we can’t make children seek out only ‘child-friendly’ things. What we can do is create an attentive, open, shame-free attitude to children’s sexuality and their exploration of ideas, where children who do end up venturing into unsafe territory can retreat easily, talk to adults about it openly, and report content effectively, without staying silent about that sexual conversation or that scary video for fear that they’ll be shamed and punished.
I fear that the state cannot imagine treating children this way, and will continue trying to use policing and surveillance to fix children’s vulnerabilities – even the vulnerabilities that are created, and exacerbated, by a punishment- and surveillance-heavy sexual culture.