Prepare for the rise of the super-machines

Watching for when the robots turn evil with the director of Oxford’s Future of Humanity Institute

Never mind asteroids or global warming; this century’s biggest existential risks will arise from potential technological breakthroughs. At least that’s what philosopher Nick Bostrom, director of Oxford University’s Future of Humanity Institute, believes. Currently topping the list of future threats to the survival of mankind is artificial intelligence (AI). Although in the public’s imagination AI is still largely the preserve of science-fiction blockbusters, scientists are making real, if fragmented, progress in its development. But what if the robots don’t have our best interests at heart? Here Bostrom gives us the facts we need for when the machines begin to mobilise.

The AI we have today is mostly the AI that can play chess really well. We have another AI that can search through billions of documents and retrieve the one most relevant to the user: Google. We have a third AI that can drive a car. But each can only really do that one thing. They don’t yet have a very high degree of general intelligence or the “common sense” we humans pride ourselves on. But eventually AI will creep towards a more general-purpose capability.

If AI became very powerful it would be potentially dangerous, because it might be able to reshape parts of the world according to its own preferences. If those aren’t benevolent preferences it might be bad for the other things in its sphere of influence, like us. If someone was building a parking lot, for example, there might have been an ant colony there that was exterminated. Not because of any hatred for the ants; the people who built the parking lot simply didn’t care. They wanted a parking lot, so that piece of the world was transformed.

The situation might quickly shift into one where what happens to us is simply in the AIs’ hands. If they surpass us radically in intelligence, then it might not be a matter of how we choose to treat them, but how they choose to treat us. A one-sided relationship, much like ours with gorillas: they have very little say over what we humans do, but it’s really up to us to protect them.

How can you load human-friendly values into an AI? How can you control the outcome of a potential intelligence explosion, where you go from something below human to something superhuman in a brief period of time? These are questions that have to be solved by the time we actually get the ability to build this kind of artificial intelligence. Since we don’t know how long that will be, it seems wise to get cracking as soon as possible. 

The first step is to realise how likely an existential catastrophe is, given the number of different ways in which things can go wrong. Then you can progress towards solving it. The problem is how to have some influence over the outcome of the transition to machine intelligence.

We have our work cut out for us, but I think that the best-case scenarios will be enormously good. Pretty much any human sphere where we’re not currently living the best kind of life could, in principle, be improved by a superintelligence that has our interests at heart.

Interview Ruth Saxelby