
Is Britain prepared to face the ‘existential threat’ of AI?

The UK has declared its aim to lead the world in AI, one of five ‘technologies of tomorrow’ – but what is the government actually doing to maximise the benefits and curb the risks of new tech?

Introducing Horror Nation?, a new season from Dazed about the current state of the UK from the perspective of the young people who live here. Over the course of this week, we will be celebrating the good that is happening all across the country – the culture and the creativity, the artists and the activists, the positive forces for change. But we will also be confronting the reality that life is getting increasingly challenging for British youth, and that Britishness itself is in flux, or even crisis. Stay with us as we lift the lid on modern Britain and ask whether this really is a horror nation.

Remember the metaverse? As late as last year, “tech-savvy” politicians would have had us believe that online virtual worlds were the technology that would define the future of the UK, making giddy pronouncements about their revolutionary effects on education, retail, work, industry, entertainment, and our social lives. Admittedly, we’d just spent months confined to our houses by a global pandemic – it was kind of logical that we’d be looking for alternatives to interacting in the real world. In mid-2023, though, the idea of a government-sponsored Second Life already seems laughable and out-of-touch (see also: Rishi Sunak’s plans to patch up our crumbling little island by... minting an NFT). Either the technology isn’t anywhere near good enough yet, or Brits just don’t care enough to go to the trouble of logging on and crafting an ugly little avatar.

Either way, it doesn’t matter: the UK government has a new tech obsession, which has all but buried the metaverse conversation. Yes, we’re talking about AI. Artificial intelligence. If you’re already sick of hearing these two words, then bad luck – a couple of years after AI went mainstream, it’s clear that its staying power goes way beyond previous tech fads, and for good reason. For better or worse, AI really could revolutionise our lives, and in some cases it’s already happening.

On May 24, 2023, Rishi Sunak met with the chief executives of three leading AI companies: OpenAI, Google DeepMind, and Anthropic. Both OpenAI and Anthropic are based in California. DeepMind is headquartered in London and led by British researcher Demis Hassabis, but it became a wholly owned subsidiary of Google’s parent company, Alphabet Inc, in 2015. (Alphabet Inc is based – guess where – in California.) Anyway, in a joint statement published by the prime minister and the CEOs, they explained that they met to discuss “the risks of the technology, ranging from disinformation and national security, to existential threats”.

“The PM made clear that AI is the defining technology of our time, with the potential to positively transform humanity,” their statement adds. “But the success of this technology is founded on having the right guardrails in place, so that the public can have confidence that AI is used in a safe and responsible way.”

Obviously, it’s reassuring to know that leaders are considering guardrails, especially if the technology does in fact have the potential to “transform humanity”, whether “positively” or otherwise. (The road to Hell is paved with good intentions, and so on.) Even better, Sunak announced on June 12 that OpenAI, DeepMind, and Anthropic would grant the UK early access to their models so it could understand any potential risks. For the UK’s leadership, though, this statement marked a bit of a U-turn. Previously, in a March 29 white paper, the government had proposed a light-touch approach to AI governance, with an emphasis on innovation rather than regulation. While Elon Musk and other AI experts across the Atlantic warned that we “risk loss of control of our civilisation”, the UK Department for Science, Innovation and Technology was tweeting: “Think AI is scary? It doesn’t have to be!”

Why was the government talking up this “pro-innovation” angle as recently as a few months ago? Well, a cynic might suggest that the country is playing catch-up. In the last few years, AI development in the UK has lagged behind counterparts like the US, China and the EU, despite the country’s impressive history of technological innovation. “All of these countries have increased the number [of patents filed] each year, but the US and China are increasing faster,” Daniel Castro, director of the Center for Data Innovation, tells Dazed. “This likely reflects the scale of investment, both from government and the private sector.”

“Post Brexit, one of the things the UK said was, ‘We’re really going to push tech development,’ but it feels like a difficult battle to be fighting,” adds Huw Roberts, who researches AI policy at the Oxford Internet Institute and previously worked for the government’s Centre for Data Ethics and Innovation. He points out that the UK often boasts that it’s the third most influential country for AI investment, ranked behind only the US and China. “But the second you take away DeepMind, now [a] US-owned company, from the equation, we fall right back down.”

In terms of the causes behind the UK’s slow AI uptake, Roberts agrees with Castro that investment is a major issue. “It’s a really expensive business to start up,” he says. “Naturally, if you’re in Silicon Valley, or nearby, you’re going to have more access to investment. If you look at OpenAI, or Anthropic – the two recent big players – they’ve both been heavily seed funded by various entities. We haven’t seen an equivalent in the UK.” Of course, DeepMind has made significant strides with the help of billionaire entrepreneurs, from Peter Thiel to Elon Musk, but that has ultimately involved transferring ownership outside the UK.

UK tech development isn’t only hampered by a lack of investment, either. Back in March, Rishi Sunak was warned that, if the UK doesn’t commit more funding to domestic microchip manufacturers – chips being a vital hardware component that must grow ever more powerful to sustain new technologies – it could become a “tech colony” of the US and China. (That’s if Alphabet’s acquisition of DeepMind didn’t kickstart this process already.) In the US and China, the chip industry is at the centre of a growing political battle, though Castro suggests that the issue is less important than manufacturers make out. “Access to chips will be important for the tech sector in any country,” he argues, “but not every country needs to be a chip maker.”

In any case, it’s easy to see why the UK would want to exploit tightening regulation elsewhere – which slows its economic competitors down – to draw level with them, even if it does mean putting real human beings at risk of unemployment, offensively bad art, epistemological despair, or even extinction (when has this stopped the Tories in the past?). So why, in May 2023, did Rishi Sunak change his mind? Why, this June, did he fly to Washington to discuss with Joe Biden the formation of a global regulatory body, similar to the one that restricts the use of nuclear weapons? Why has he promised that the UK will play a “leading role” in regulation, rather than forging ahead, consequences be damned? Well, maybe Demis Hassabis, Sam Altman, and Dario Amodei genuinely scared him into taking action. Or maybe it’s a way of establishing a different kind of leadership role for the UK, in an industry where it can’t keep up the pace of innovation.

In truth, Sunak might not have had a choice – he might have been influenced by the EU, which is “generally pretty good at regulation”, Roberts explains, citing the General Data Protection Regulation (GDPR) as an example. With GDPR: “The EU said, ‘You companies need to protect data better’, and then Facebook etc. were like, ‘Okay, we’re gonna follow the EU’s rules’. Then, if any jurisdiction deviated from [those rules] they received a big pushback from these big companies, because from a compliance perspective it’s a pain in the arse, if you’re trying to follow a million different rules.”

“Companies are really pushing back and threatening to leave the UK. They wouldn’t leave the EU, but I’m not sure I’d be gambling if I was in UK government...” – Huw Roberts

Essentially, the EU is big enough to get platforms to stick to its guidelines, because being locked out of EU markets would have a devastating economic impact (although, under Elon Musk’s ownership, Twitter has started pushing the boundaries). We can’t necessarily say the same for a post-Brexit UK. “If the UK says something radically different, I think it will struggle,” says Roberts. We’re seeing that at the moment with the Online Safety Bill, as companies like Signal and WhatsApp unite to protest policies that could undermine essential privacy features like end-to-end encryption. “[These companies] are really pushing back and threatening to leave the UK. They wouldn’t leave the EU, but I’m not sure I’d be gambling if I was in UK government... Do you ruin your whole business model, or do you listen to a country of, what, 69 million people?” he adds. “I know what I would do.” (For context, WhatsApp is reported to have more than two billion monthly users across the globe – of these, its 30 million UK users are a relatively small share.) By leaving the EU, Britain essentially diluted its power to negotiate its own tech policies, and now mostly has to go along with guidelines set out by the institution it was supposed to be distancing itself from.

Whatever the reasons behind the government’s U-turn when it comes to AI regulation, it represents a “welcome shift in tone” from the prime minister, says Francine Bennett, Interim Director at the Ada Lovelace Institute, which seeks to ensure that data and AI enhance individual and social wellbeing and that their benefits are equitably distributed. But Bennett expresses concern that the specifics are still hazy: “The white paper doesn’t propose any new legal powers, rules or obligations, at least initially – in marked contrast to the EU, which is in the latter stages of passing new, comprehensive AI legislation.”

Castro disagrees that the UK’s approach is too slack, dismissing the fears about AI harms that are widely shared by international experts. “It is important to remember this is not the first time that people have claimed AI is an existential risk. They have been wrong before, and they are likely wrong again,” he says. “There is no real evidence that superintelligent AI systems are just around the corner, and that even if they do materialize, they present such a cataclysmic risk.” According to Castro, we should focus on the potential benefits – “accelerating drug discovery, improving education, and making transportation safer” – rather than “chasing speculative harms”. As a technology, AI is still relatively new, he adds. (This much is true: new use cases, gaps in our knowledge, and unpredictable side-effects seem to emerge every week.) “Critics have rightly lampooned businesses who ‘move fast and break things’, but that same criticism should apply to governments.”

Taking a less hardline approach, Roberts acknowledges that there are “both good things and bad things” about current UK policy. A bad thing: it’s too light on specifics. “I’m not convinced people, even within government, know what they want [policy] to look like,” he says. “It feels to me that if the government were taking this seriously, it would do a little bit more.” A good thing: this lack of concrete guidelines actually allows for more flexibility when responding to such a fast-moving technology, including unforeseen dangers.

“It feels to me that if the government were taking this seriously, it would do a little bit more” – Huw Roberts

By comparison, says Roberts, the EU is struggling to adapt quickly, since its strict regulations weren’t built to deal with the kinds of innovations that are suddenly everywhere, including foundation models such as OpenAI’s GPT-4. Alas, another bad thing: without adequate funding for regulators, it will be impossible to provide protections for individuals, or clarity for developers. And yes, the government has recently pledged £100 million for an “expert taskforce” to help build safe AI, but that’s a paltry sum compared to the resources of the companies it’s up against. In 2023, OpenAI got $10 billion from Microsoft alone; Anthropic raised almost half a billion dollars; and, lest we forget, DeepMind is owned by Alphabet, one of the richest companies on Earth.

Speaking of money: does the British public even want its taxes going toward AI research? Do regular people know enough to make a decision? According to Bennett, the answer is yes. “Recent AI products like ChatGPT have offered everyday people a first-hand experience with more powerful kinds of AI systems, which have moved at a surprisingly fast pace,” she says. “As more people become aware of the ways these systems can impact their lives, it is no surprise there is strong public demand [that] these technologies are safe and well-regulated.” This growing awareness of new technologies is captured in the Ada Lovelace Institute’s recent research on public attitudes toward AI. According to the nationally representative survey, 62 per cent of the public are in favour of laws and regulations guiding the use of AI technologies, with a majority preferring the responsibility to fall on an independent regulator. Among the top concerns are advanced robotics, autonomous weapons, and the replacement of human decision-making, while the main perceived benefits relate to health, science and security.

As Bennett points out, the UK public largely holds these opinions because it has first-hand experience. Thanks to the revolutionary, easy-to-use interfaces of tools like ChatGPT or Midjourney – or even the increasing integration of AI into existing apps like Bing, Snapchat, and TikTok – the powers of AI aren’t exclusive to developers and researchers. For an early-stage technology, this is quite rare. But does it mean that our attention is disproportionately weighted toward AI, while we remain blinkered to other technological developments? After all, AI is just one of the five “technologies of tomorrow” that are supposed to revolutionise life in the UK and beyond: namely, AI, quantum tech, engineering biology, semiconductors, and future telecoms. That’s not to mention the metaverse, cryptocurrencies, and other briefly hyped technologies that are bound to re-enter our lives somewhere down the line.

Again, Castro plays down the “panic” surrounding AI. “For better or worse, there seems to be something about AI that grabs people’s imaginations,” he says. “Policymakers are not talking about ‘ethical 6G’ or ‘trustworthy quantum computing’.” (Maybe because it’s unclear how an unconscious wireless network could actually be unethical.) “We need rational policy discourse around AI as well,” he adds. “That means leaving behind the hype and hyperbole and avoiding reactionary policy.”

On the other hand, Bennett says that the UK’s current focus on AI makes sense, since it’s a “cross-cutting technology” that has already shown a significant impact on many parts of our lives, “and is intertwined with other technologies of tomorrow”.

On the innovation side, says Roberts, it’s a question of where the UK can actually make a difference. In terms of developing hardware, for example, the nation is already at a disadvantage, especially as companies like Arm (a well-established chip company) shift their allegiances away from UK markets. Thanks to its leading role in sectors like cybersecurity and life sciences, though, the UK has some opportunities to use AI in creative new ways. When it comes to regulation, he adds, the alarm bells make sense, to some extent. “When my mum starts asking me about ChatGPT, it shows that these are the technologies being used. We should be thinking about regulating them, because ChatGPT is the fastest-growing consumer application of all time, and we have no idea what to do with it.” If that sounds scary, then maybe that’s a good thing. If there’s one thing we’ve learned from experience, it’s that the government only takes action when the problem is staring it directly in the face.
