Still from 2001: A Space Odyssey, via IMDb

Can the world’s first AI safety summit help avert the apocalypse?

Elon Musk, OpenAI CEO Sam Altman and more have gathered in the UK to strike an historic AI safety agreement – here’s what you need to know

Not so long ago, it was unlikely that any random person you stopped on the street would have a strong opinion on artificial intelligence. In the last couple of years, though, AI has quickly come to dominate international headlines, spurred on by the rollout of user-friendly tools like ChatGPT and DALL-E 3. The public reactions are a mix of excitement and fear, but things are changing at a higher level, as well.

This week (November 1 and 2), around 100 of the most important policymakers, engineers, and businesspeople from across the globe are gathered at the UK’s Bletchley Park to discuss the technology and how to make it safe, at the world’s first global AI safety summit. Those in attendance include Sam Altman and Demis Hassabis, the respective CEOs of OpenAI and DeepMind, and Meta figures such as Prof Yann LeCun and Nick Clegg, as well as US vice president Kamala Harris and political representatives from the EU and China. UK prime minister Rishi Sunak is even set to sit down with Elon Musk for a live interview. AI slaughterbots have the opportunity to do the funniest thing of all time.

How did we get here so quickly? Well, it’s partly the result of lengthy and vocal campaigning by AI safety experts, who believe that the technology presents unprecedented risks and may even lead to humanity’s extinction. But still, the conversation has blown up very fast, considering it was mostly restricted to a handful of nerds on internet forums just a few years ago – and that’s because it has had to keep up with the technology itself, which has developed at a shocking pace, often with unexpected results. Just look at text-to-image generators, for example: in 2018, their output was little more than a vague smudge, while today they have us second-guessing every image we see online.

Many have expressed relief that the world is finally taking action in the face of these emerging AI technologies, drawing comparisons to the worldwide debates that helped stem the proliferation of nuclear weapons. At the same time, critics say that proposed measures don’t go far enough, or that big AI firms are too involved in writing the rules they’ll have to play by. Below, we sum up the ongoing debate and what we can expect from the UK’s AI safety summit.

WHY BLETCHLEY PARK?

As the once-secret HQ of code-breakers in WWII, Bletchley Park is regarded as the birthplace of modern computing. No better place, then, to stage discussions about the future of AI, a computer technology that has been hailed as the most important invention since we harnessed electricity.

There’s also a political dimension, though. For a while now, the UK has been trying to cement itself as a world leader in AI. With US-owned companies doing most of the actual development, the Bletchley summit could be seen as an attempt by the UK to stake its claim via the safety angle. Of course, it wouldn’t be the first time Rishi’s government leveraged public fear to push its elusive “levelling up” scheme and insist on its continued relevance to world politics – only this time, the fears have some actual weight behind them.

FOR SOME REASON, KING CHARLES SHARED HIS THOUGHTS ON AI

King Charles is 74 years old and famously can’t use a pen. Does he have any clue what Midjourney is, or how to type a prompt into ChatGPT? Probably not! Is that going to stop him talking about AI? Of course not! In a recorded address at the summit, the monarch reiterated that the development of advanced AI is “no less important than the discovery of electricity”. He also said that its risks and “unintended consequences” – including its impact on our economic, psychological, and democratic health – need to be tackled with “a sense of urgency, unity and collective strength”, just like the climate crisis (which is going great, after all). At least he’s trying x

WHAT ARE THEY ACTUALLY DOING FOR AI SAFETY?

The UK government has stated that the purpose of the summit is to consider the risks of AI and potential paths forward to make the tech less dangerous. The end goal is to reach an international consensus on the future of AI (which, like it or not, is well on its way).

More specifically, there’s a concern that “frontier AI” models – meaning the latest and most powerful systems that stretch the limits of the technology – pose devastating risks alongside their “enormous benefits”, and have to be safely developed to stop the worst doomsday scenarios coming true. Some have criticised the summit, however, for its focus on existential risks over more immediate concerns, like job losses, misinformation, and AI’s effects on human relationships. Even Turing Award winner Yoshua Bengio, who’s been outspoken about the long-term risks of AI, has said that the two-day event is better suited to developing “small steps that can be implemented quickly”.

OPPOSING RESEARCHERS HAVE CALLED OUT AI ‘FEARMONGERING’

The AI community is basically split into two broad groups: the ones who think that the tech presents a genuine extinction risk (but continue to develop it anyway, claiming that the only way to mitigate this risk is to get there first), and those who think the fears are overblown, or that AI will unlock so many benefits that it’s worth taking the plunge.

Yann LeCun, Meta’s chief AI scientist, is one of the latter. As the event began at Bletchley, he joined 70 other experts in signing an open letter that urges governments and researchers to “embrace openness, transparency, and broad access” when it comes to AI. He also called out the likes of Altman and Hassabis in a tweet last weekend, saying: “If your fear-mongering campaigns succeed, they will *inevitably* result in what you and I would identify as a catastrophe: a small number of companies will control AI.”

Admittedly, a single company or government gaining a monopoly on superintelligence sounds like a nightmare scenario, but many have pointed out that LeCun’s dreams of open-sourcing superintelligence are equally likely to end in catastrophe. Often, this plan is compared to fixing the problem of nuclear stockpiles by... giving everyone in the world a nuclear bomb.

THINGS KICKED OFF WITH A LANDMARK AGREEMENT

In an unusual display of unity, China joined with the US and the EU to sign a “world’s first” agreement on managing the risks of AI. In total, 28 countries agreed to the Bletchley Declaration, which acknowledges the urgent need to understand and collectively manage the risks of frontier AI, and to steer the technology in a direction that benefits humanity. Notably, this follows an executive order signed by Joe Biden earlier this week, which introduces new safety standards for the development and use of AI in the US.

WHAT HAPPENS NEXT?

On Thursday, after the AI summit is officially over, Elon Musk is set to sit down with Rishi Sunak for a live-streamed conversation on AI. Some have questioned this decision, since Musk’s “based” AI company is hardly competing with the big players (at least in the public eye) right now, though he did co-found OpenAI and sit on its initial board, alongside Altman and other influential researchers.

Beyond that, a second summit will be held virtually in six months, hosted by the Republic of Korea, with another in-person event scheduled in France a year from now. It remains to be seen how much is actually accomplished in the interim... hopefully, we’ve not been wiped off the face of planet Earth by then.
