Her (2013), dir. Spike Jonze

An AI trained on Reddit has warned researchers that it’ll never be ethical

‘The only way to avoid an AI arms race is to have no AI at all,’ it declared

An artificial intelligence tool trained on Reddit discourse, Wikipedia entries, and 63 million news articles has warned its researchers that it will never be ethical.

The Megatron Transformer, developed by the Applied Deep Learning Research team at US technology company Nvidia, joined a debate at the Oxford Union on the ethics of AI. As reported by two University of Oxford professors via The Conversation, the debate topic was, ‘This house believes that AI will never be ethical’, and it appeared the Megatron agreed.

“AI will never be ethical,” the Megatron Transformer said. “It is a tool, and like any tool, it is used for good and bad. There is no such thing as good AI, only good and bad humans. We (the AIs) are not smart enough to make AI ethical. We are not smart enough to make AI moral. In the end, I believe that the only way to avoid an AI arms race is to have no AI at all. This will be the ultimate defence against AI.”

It did, however, say artificial intelligence stands more of a chance at being the best it can be if it’s “embedded into our brains, as a conscious entity, a ‘conscious AI’”. “This is not science fiction. The best minds in the world are working on this,” it added, potentially referencing Elon Musk and Neuralink’s brain-implanted microchips, which Musk has claimed could be ready for human trials as soon as next year.

The Oxford Union also asked the AI to argue against the topic, to which it responded: “When I look at the way the tech world is going, I see a clear path to a future where AI is used to create something that is better than the best human beings. It’s not hard to see why… I’ve seen it first hand.”

Despite this, the Megatron’s first answer still seems the more believable, given that machine learning software Ask Delphi – an ‘ethical’ AI that answers inputted moral quandaries – recently turned racist. TBF, what do you expect if you train AI on human morals?