Terminator II (Film Still)

OpenAI CEO calls for regulation to tame ‘increasingly powerful’ AI

‘If this technology goes wrong, it can go quite wrong,’ Sam Altman told the Senate judiciary committee this week

It might seem like a contradiction, but Sam Altman – CEO of OpenAI, the developer of one of the most powerful publicly available AI systems, GPT-4 – has supported increased regulation of the technology for some time. Now, he’s joined other industry leaders to call for new guardrails in front of the US Congress.

“We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models,” Altman told the Senate judiciary committee on Tuesday (May 16). Many of these models have, of course, been introduced by OpenAI itself, which is responsible for the widely used chatbot ChatGPT, as well as the pioneering image generator DALL-E 2.

“For a very new technology we need a new framework,” he added, arguing that existing frameworks are insufficient to ensure that AI doesn’t have disastrous consequences for humanity. As one of several experts to call for a new regulatory agency for AI, he suggested that the government could establish a set of independent safety standards and tests that models would have to pass before they’re deployed, as well as licensing requirements for developers.

At this point, the potential downsides of new AI tools – from harmful deepfakes and weaponised disinformation, to impersonation fraud and job displacement – are widely understood. During the hearing, politicians also drew comparisons to the disruptive impact of social media on society, which makes sense: like AI, social media was a new technology that threw up a range of social and ethical problems that regulators were totally unprepared for. Hopefully, by learning from the past, they’re less likely to repeat the same mistakes.

Undoubtedly, regulation will play a big role in controlling artificial intelligence as it progresses toward “godlike” levels of capability. Vocal critics of OpenAI, however, have suggested that Altman’s calls for government intervention will help the company maintain its established lead, while strangling competitors (including open source efforts). Co-founder Elon Musk, who resigned his board seat in 2018 and has since signed an open letter calling for a pause on developing new AI systems, also recently criticised the company for transitioning to a for-profit business model.

In 2019, OpenAI announced that it had created a new, “capped-profit” company designed to provide limited returns for investors, with remaining profits going to a parent company focused on researching AI safety. In the hearing on Tuesday, Altman also revealed that he’s only paid enough to cover healthcare (though his estimated net worth is already in the hundreds of millions, so... does it really make a difference?).

Naturally, the OpenAI CEO’s own hopes for the technology are high, tempered with the possibilities of what could go wrong. “We think it can be a printing press moment,” he said in his opening statement from the hearing, adding that it could be used to address some of humanity’s biggest challenges, from climate change to curing cancer. “We have to work together to make it so.”

“My worst fears are that we – the field, the technology, the industry – cause significant harm to the world,” he explained. “If this technology goes wrong, it can go quite wrong.”