Depending on who you ask, the future of humanity in a world populated by extremely intelligent machines looks very different. On one hand, you have techno-utopias lifted straight out of a “Society if...” meme, where AI has solved all of humanity’s most difficult problems, from the climate crisis, to interstellar travel, and even death. On the other, you have scenes from a Terminator-style timeline: what remains of the scorched Earth is cleared to make room for vast swathes of solar panels and data centres, while humans scrabble about in the ruins, trying to avoid enslavement by their robot overlords, or being wiped off the planet altogether.

In other words, as one tech founder recently tweeted: “Within [three] years you will either be dead or have a god as a servant.” Naturally, these visions of the future aren’t very compatible. In fact, many suspect that they’re at the heart of a theatrical conflict within OpenAI – the creator of ChatGPT – that played out over the weekend, which saw the company cycle through three CEOs only for the original, Sam Altman, to land back at the helm.

In case you’re not caught up with the drama, Sam Altman was fired by OpenAI’s board on Friday (November 17) in a move that shocked the tech world, and seemingly Altman himself. Those doing the firing – including influential chief scientist Ilya Sutskever – claimed that he hadn’t been “consistently candid in his communications with the board”, and also removed Greg Brockman as the board’s chairman. Although Brockman was technically allowed to stay on as OpenAI’s president, he quickly quit that role as well.

Over the course of the next few days, it seemed like a majority of OpenAI’s staff would follow suit, with an open letter threatening mass defection to a new AI research team at Microsoft, run by Altman and Brockman. It was co-signed by Sutskever – who claimed to “deeply regret” his involvement in the firing – as well as Mira Murati, who had been appointed interim CEO in the immediate aftermath of Altman’s exit, only to be replaced by former Twitch boss Emmett Shear on Sunday night. Shear didn’t last long either: by Wednesday (November 22) Altman had returned, with a brand new board in place at the top.

Notably, OpenAI’s nonprofit board has the power to fire the company’s leaders without notice – if it sees something that might harm humanity, it’s allowed to make whatever leadership changes it deems necessary to contain the threat.

That isn’t necessarily to say that Altman had stumbled on a path to the singularity before he was kicked out. But it’s easy to see how the developments have reignited the debate about the future of AI, with “doomers” at one end of the spectrum and believers in “effective accelerationism” at the other, preaching a version of AI utopianism. But what exactly does each side believe? That’s what we’re here to find out.

WHAT IS AN AI DOOMER, EXACTLY?

There’s a lot of jargon thrown around about AI research and development online, partly because it’s rooted in the nerdy niches of the tech industry, and partly because we’re dealing with some unprecedented ideas. Maybe one of the most important terms, though, is “p(doom)”. To break that down, the “p” stands for “probability” and the “doom” part means exactly what it sounds like. Together, they’re a kind of dark joke, expressing someone’s best guess at the likelihood that AI will destroy our chances of living a meaningful life – a p(doom) of 0.1, for instance, means you reckon there’s a ten per cent chance of catastrophe.

Unsurprisingly, “doomer” is a label for people with a high p(doom) – in other words, a pessimistic outlook on the effects of AGI (artificial general intelligence), an AI that can operate at a human level or higher. As a result, they often advocate slowing down AI development, or join calls to put it on hold.

What does this have to do with OpenAI? Well, Emmett Shear, for one, dubbed himself a “doomer” as recently as August this year, just months before taking on the OpenAI top job (however briefly). Members of the board who were instrumental in the recent shake-up have also expressed deep concerns about the future of the technology, which set the stage for the supposed conflict.

SO WHO ARE THE AI ACCELERATIONISTS?

With doomers hoping for the deceleration of AGI development, it’s not hard to guess what accelerationists believe. Borrowing language from the radical political ideas of groups like the Cybernetic Culture Research Unit (CCRU) and controversial thinkers like Nick Land, followers of effective accelerationism – AKA “e/acc”, a play on the similarly dubious philosophy of Effective Altruism – believe that we should develop and integrate powerful AI systems as fast as possible. The idea has been around for a while, with the label often traced back to a group including the pseudonymous Twitter user Beff Jezos, and touted by influential entrepreneurs like Marc Andreessen.

The movement isn’t monolithic, of course, with some major differences in accelerationists’ fundamental beliefs. Some, for example, believe that it’s important to achieve AGI as soon as possible because it will usher in a post-scarcity society, radically improving people’s living conditions across the globe and, at its core, reducing humanity’s net suffering.

Others make the much rarer (but much more glamorous) argument that it’s not about reducing human suffering at all: in fact, our only responsibility is to build superior beings that can take our place, either gradually or all at once, and spread their superintelligence throughout the universe. In this scenario, our survival is irrelevant. It won’t come as a shock that these thinkers aren’t very popular.

CAN’T WE ALL JUST GET ALONG?

In the face of an alien intelligence, which may be getting closer by the day, you’d think that humanity could find some common ground. In reality, though, the opposite has happened, with experts’ varying perspectives and p(doom) predictions for post-AI society causing a schism across the tech sector.

To some extent, this makes sense. If you truly believe that AI can right all of humanity’s wrongs, find cures for diseases, save us from climate catastrophe, and bring about an era of abundance – as the most ardent accelerationists do – then it’s basically a moral imperative to make sure it happens as soon as possible. Anyone standing in the way would, hypothetically, have millions of deaths on their hands.

On the flipside, of course, there’s the belief that sufficiently intelligent machines will be the cause of widespread death and destruction, either through a lack of alignment with humanity’s goals, or by falling into the hands of bad actors. (See: the opinion of hundreds of industry leaders.) If you believe this, then the critical mission is to stop development, or at least slow it down until we can work out how to do it safely.

OpenAI reinstating Sam Altman is considered by many to be a failure of this mission, since it appears to override the original aims of the company’s board – to protect humanity from the worst consequences of a rushed AI system. Then again, Altman himself has issued vocal warnings about AI in the past, and claims to be motivated by safety concerns too.

THERE IS (KIND OF) A MIDDLE GROUND

Whatever the range of p(doom) predictions across the tech industry and the wider world, very few people actually believe that AI should be shut down entirely, or that it even can be. As DeepMind co-founder Mustafa Suleyman told Dazed earlier this year, technological progress has been an inevitable part of human existence for as long as we can remember: “From our earliest origins we haven’t been separate from our tools – from fire, to stone tools, to weaving, they were part of what made us become Homo sapiens itself.” 

It may be the case that the development of AI – and, by extension, AGI or an AI superintelligence – is similarly inevitable. But what’s the point of the doomers vs accelerationists debate, if it’s going to happen either way? Well, maybe the important focus isn’t on the actual invention of AGI, but on how it’s brought into being. After all, most people aren’t on the extremes of the debate – they’re somewhere in the middle. Hopefully, they can take some of the arguments from both sides and work out how to get the best out of the technology while limiting the damage it might cause, through measures like industry regulation and international safety deals.

One thing’s for sure: p(doom) appears pretty far from zero right now. In fact, it often feels like the only certainty about the future is more uncertainty. Hopefully, though, the goal – for most of humanity, at least – is to keep the probability of building Skynet as low as possible as AI evolves.