The OpenAI co-founder is among thousands of experts calling for a pause on AI systems that pose a ‘profound risk’ to humanity
When even Elon Musk – the man who blasted one of his sports cars into space on a SpaceX rocket, wants to trial monkey-killing brain chips on human subjects, and “disrupted” travel by inventing the radical concept of... trains – is warning about technological folly, then it’s probably a warning worth listening to. So listen up: Musk, alongside more than a thousand tech experts, has signed an open letter calling for a pause on AI development, citing “profound risks to society and humanity”.
Published by the tech-focused non-profit the Future of Life Institute, the open letter calls for a pause of at least six months on training artificially intelligent systems more powerful than the recently-launched GPT-4 – or, failing that, a government-imposed moratorium. Why? Because the level of planning and management that early AI experts deemed necessary, to keep the tools from spiralling out of our control, is not being adhered to, it says.
“Contemporary AI systems are now becoming human-competitive at general tasks,” the letter reads, and urges innovators to take a step back and ask some important questions about the technology’s future. Namely: should we let machines flood our information networks with propaganda? Should we automate jobs out of existence? Should we develop non-human minds that will inevitably outnumber and outsmart us, and “risk loss of control of our civilisation”?
These questions have some scary (though fairly self-evident) implications and, as noted in the letter, we can’t rely on unelected tech leaders to provide the answers. The pause, it adds, needs to be public and verifiable, and include all “key actors”. Of course, it’s unclear how the pause will actually be policed. Even if, for example, the US government did intervene, how would it stop developers in other countries from taking advantage to gain a technological and economic edge? (Worryingly, some have suggested international treaties and military action, similar to efforts to police the nuclear arms race.)
Regardless of the practicalities, the issues raised by the Future of Life Institute are undeniably important, echoing the concerns of many tech leaders amid the rise of AI systems like ChatGPT, Bing, DALL-E... the list goes on. Alongside Elon Musk, the new letter is signed by the likes of Apple co-founder Steve Wozniak, as well as the head of the Doomsday Clock organisation, which seems like... a bad sign.
Musk himself co-founded OpenAI (the creator of GPT-4) in 2015, but resigned his board seat in 2018, having repeatedly warned about AI’s existential threat. Sam Altman, the current CEO, has spoken at length about the importance of developing the company’s AI tools in public, to help align them with human interests, though these plans don’t seem to include slowing down.
“At some point, the balance between the upsides and downsides of deployments (such as empowering malicious actors, creating social and economic disruptions, and accelerating an unsafe race) could shift,” OpenAI wrote in a recent statement. “In which case we would significantly change our plans around continuous deployment.” In response, Future of Life says: “We agree. That point is now.”
Read the full open letter here.