Mustafa Suleyman, DeepMind co-founder and author of The Coming Wave, tells Dazed how to walk the tightrope between technological dystopias
In 1965, Intel co-founder Gordon Moore made a prediction that held up for (give or take) the next six decades: that the number of transistors on a microchip would double every two years, resulting in an exponential growth in speed and capability, even as machines got cheaper. This came to be known as Moore’s Law. Then, decades later, as the prospect of truly intelligent computers grew into a legitimate concern, AI researcher Eliezer Yudkowsky offered a dystopian reinterpretation, which he dubbed Moore’s Law of Mad Science: “Every 18 months, the minimum IQ to destroy the world drops by one point.”
For obvious reasons, this is a controversial claim – among tech experts, Yudkowsky is variously regarded as a doomsayer, creating an unnecessary wave of fear, or as one of the few sensible voices calling for a complete shutdown of AI technologies. Either way, it’s easy to see where he gets his concerns. In the last couple of hundred years alone, we’ve gone from muskets, to automatic weapons, to nuclear bombs and remote drone strikes – now, wars are literally being fought with Xbox controllers.
Then, there’s AI. Unlike guns or bombs, this is a technology that has the power to unlock better and brighter futures, revolutionising everything from climate crisis prevention to drug development. However, as Yudkowsky (and many, many others) warn us, it could also render humans obsolete, and at worst wipe us out completely. And even if the machines don’t develop their own thoughts and goals, there’s a fear that people could ask hyper-intelligent chatbots – like the ones floating about on the dark web – for novel ways to make bombs or engineer pandemics, sidestepping the expertise required to unleash them on the world. More and more, people feel as if they stand on some sort of precipice.
As a co-founder of the leading AI company DeepMind and machine-learning startup Inflection AI, Mustafa Suleyman has already played an important role in bringing a future populated by intelligent machines to life, whether it turns out for better or worse. In his new book with Michael Bhaskar, The Coming Wave, he himself identifies this as a “critical threshold” for the future of our species, alongside another emerging technology: synthetic biology, i.e. the design and engineering of new living things, or modification of existing biological systems.
A central idea of the book is that these kinds of technologies come in waves, with “profound societal implications”. Take, for example, steam power, which drove the first Industrial Revolution, or the combustion engine, which reshaped our lifestyles and surroundings over the course of the last century. (Without the car, we wouldn’t have suburbs, drive-through restaurants, or a number of petrolhead subcultures, and that’s just a ripple compared to the underlying technology that powers “everything from lawnmowers to container ships”.) The more generalised the effects of these technologies, he adds, the more likely they are to change our lives on a vast scale, becoming so embedded that they eventually turn “invisible”, like language, agriculture and writing.
“When I first started work on AI, what we were doing seemed impossibly ambitious to many people... Now it’s much clearer [that] this is all too real and will change the world” – Mustafa Suleyman
This is why synthetic biology and AI represent such a tipping point, Suleyman tells Dazed: dealing with life and intelligence, respectively, these are the “most general purpose technologies imaginable”. Their consequences will be far-reaching, and may even rewrite what it means to be human as they wash over us, accelerated by other innovations like quantum computing. That’s why we need to start thinking about them now, in the hopes of aligning them with humanity’s goals, and to stop them from spinning out of control.
In The Coming Wave, Suleyman doesn’t just identify the problem, but outlines a possible solution: a process of “containment” that brings together AI developers, governments, regulators, and more to shape the coming wave of technologies from the ground up. It’s an ambitious plan, requiring mass participation and significant changes to the fabric of society.
Suleyman himself compares it to walking a tightrope or “narrow path” for the rest of time, with disaster looming on either side. But it might also be our only hope – much better, at least, than turning a blind eye and resigning ourselves to the future that people like Yudkowsky warn us about, all paperclips and factory-farmed humans. Then, if this does turn out to be the wave that wipes out Homo technologicus, at least we can say we tried.

In The Coming Wave, you refer to humanity as Homo technologicus, or the ‘technological animal’. Why use this label, instead of the more common Homo sapiens?
Mustafa Suleyman: From our earliest origins we haven’t been separate from our tools – from fire, to stone tools, to weaving, they were part of what made us Homo sapiens in the first place. We have always been users and builders at a fundamental level. You can’t separate humanity from technology, and that’s what this is meant to convey: that we are an intrinsically technological species. And when we look to the future it’s vital to keep this intimate connection in mind.
Every generation seems to believe that it exists at the precipice of its own annihilation. Why do you think that the claims are more credible this time around?
Mustafa Suleyman: The critical turning point here was the invention of nuclear weapons. At that point it went beyond believing: the possibility of annihilation became a blunt fact. This is what has changed. We are now building technologies of immense world-changing power. These are developing at incredible speeds and with capabilities we can still only guess at. This isn’t science fiction but an active program of research. However, with things like AI, it’s not that we are standing on the precipice right now; it’s that they could, eventually, lead us toward that, and we have to be ready. The core challenge is to ensure that it doesn’t happen.
Why are AI and synthetic biology the two main drivers of the coming wave?
Mustafa Suleyman: They are the most general-purpose technologies imaginable. They address the two great fundamentals of our species: intelligence and life. Everything in our human world flows from these two properties, and so anything that gets to their heart, and allows control of them at a new level, is bound to be among the most significant inventions in history. As soon as you begin to examine the trajectories and range of use cases of these two technologies, it’s clear that they will shape the future in a profound way.
How do you communicate the magnitude of these technologies to people who aren’t aware of their potential impact?
Mustafa Suleyman: It can definitely be difficult to picture. Our most advanced AI models have grown about five billion times bigger in the last ten years. That is a speed and size [that is] difficult to comprehend. Now project out another ten years…
One way of understanding what this means is to think about how it will change your life. AI is going to give you the world’s best chief of staff, lawyer, accountant, strategist, coach, counsellor, assistant, and all-round team in your pocket, capable of not just talking to you but doing things on your behalf. Whether organising a birthday party, or devising and executing a sophisticated business strategy. That’s just one example. These are huge changes and need to be understood in multiple different ways. Even those working on them don’t always have a grasp of the implications.
The projected risks of AI are clear and much talked about. What are the risks of not developing AI?
Mustafa Suleyman: AI is going to make this next decade or two the most productive in human history, unleashing a massive productivity and economic boom. It will unlock new discoveries in basic science and medicine, facilitating much cheaper and more reliable healthcare. It will help find the materials and tools needed for clean energy generation and meeting the challenge of climate change. Everywhere you look – from ageing and sick populations, to slowing economies, to environmental degradation – the world is beset with challenges, and AI can be a vital part of meeting them. Even on a smaller level, I think AI can make our lives easier, happier – that’s what we’ve tried to do for example with Pi, the chatbot from my company Inflection AI.
“These are huge changes... even those working on them don’t always have a grasp of the implications” – Mustafa Suleyman
The UK government recently announced an international AI safety summit. How effective do you think these kinds of coordination efforts will be in solving the problems you raise?
Mustafa Suleyman: They can certainly help. We need leaders and states engaging with this problem at a high level, putting in the necessary time and energy to understand and respond to the issues. Equally, on its own, this isn’t going to be enough. No single summit or piece of legislation is sufficient for what I call containment, the task of keeping control of frontier technologies like AI. With something like the AI safety summit, we should applaud and support it whilst also recognising that no event or measure will ever amount to ‘job done’. Only a constant effort at every level, from individuals in companies, up to international treaties and global movements, can really address the problems, and even then it’s not a neat, lasting solution, but something that constantly needs work. Safety here isn’t a destination, it’s a path we must keep walking. The safety summit is a start.
Yuval Noah Harari writes that ‘social media was the first contact between AI and humanity, and humanity lost’. Do you agree? And what can we learn from our past failures?
Mustafa Suleyman: It’s easy to highlight all the problems of social media and entirely forget all the good that comes from it, all the memories shared, the knowledge discovered, the connections forged. I don’t think any simple accounting can do justice to the complexity of what’s going on, and it’s exactly the same with AI. Saying it is ‘good’ or ‘bad’ misses how far-reaching and nuanced the technology really is. What we can learn from social media, and indeed the history of technology more widely, is that technologies always proliferate far and wide. People want these tools; they drive use, development, and economies of scale that make them more accessible, spurring more use, and on and on. This is a very ingrained pattern with very few counter-examples. It suggests we can’t stop this new wave, just as we haven’t stopped previous waves, but it also highlights how we must try and shape it from the outset.
“We can’t stop this new wave, just as we haven’t stopped previous waves... we must try and shape it from the outset” – Mustafa Suleyman
How has your opinion on AI changed, from the founding of DeepMind, to the decision to write The Coming Wave? What caused this change?
Mustafa Suleyman: When I first started work on AI, what we were doing seemed impossibly ambitious to many people. AI was still a fringe academic niche and few took it seriously. Now it’s much clearer [that] this is all too real and will change the world. My opinion throughout has been fairly consistent, that it presents both immense benefits and risks. What’s changed is that I am speaking more publicly about both.
Do you think that it was a mistake to accelerate AI research with DeepMind? Is there anything you would do differently, if you were to go back in time?
Mustafa Suleyman: I think the work that DeepMind was doing then and the work they are doing now is incredibly important.
Technologies like AI or synthetic biology have many critics, but not so many ideas when it comes to solutions. Why do you think it’s so difficult to imagine a more positive future for new technologies?
Mustafa Suleyman: It would have seemed strange in the nineteenth century or the mid-twentieth century to hold only a negative view of technology. What changed comes down to the growing awareness of the downsides that come with new technologies, a newfound maturity and understanding of how their effects play out in chains of unintended consequences. Broadly this is a good thing. It’s absolutely right that technologists take responsibility, and that mitigations are put in place from the start. But, as you say, identifying problems is easier than finding solutions, and that is where we are right now. My proposals for containment are hopefully helpful, but very much only a start. Getting this right is a generational challenge that needs massive, society-wide involvement.
As for imagining a positive future, it’s close to my heart. In The Coming Wave I explore a lot of the risks, but having done so I am now keen to start presenting that positive case alongside it.
The Coming Wave is out now.