
A new report details how AI could destroy all of humanity

Read 100 pages about how you’re going to die

Artificial intelligence, as it stands right now, isn’t very intelligent. Yes, it can match humans in chess, develop its own language and obscure memes, make art better than IRL artists, and mimic human speech in a terrifying manner. But it can mistake a turtle for a gun, accidentally purchase you weird shit from Amazon, and get just all-out racist. AI has yet to tap into humans’ emotional psyche – things are still ever so slightly off, even with our future queen of the universe Sophia the Robot.

While it’s not all 2001: A Space Odyssey’s HAL or Alicia Vikander in Ex Machina yet, artificial intelligence is sweeping every industry, from science to art to the military. A new, 100-page report, “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation”, details how AI could facilitate disaster – cyberattacks, political upheaval, autonomous weapons.

The research has been collated by 26 experts from across the world, building on a workshop held at Oxford last year that brought in expertise from the likes of Elon Musk’s OpenAI, Cambridge University’s Centre for the Study of Existential Risk, and Oxford’s Future of Humanity Institute.

“It is often the case that AI systems don’t merely reach human levels of performance but significantly surpass it,” Miles Brundage, from Oxford’s Future of Humanity Institute and a co-author of the report, relayed in a statement. “It is troubling, but necessary, to consider the implications of superhuman hacking, surveillance, persuasion, and physical target identification, as well as AI capabilities that are subhuman but nevertheless much more scalable than human labour.”

As Gizmodo reports, the research focuses mainly on the malicious use of AI in the digital, physical, and political worlds. It highlights how criminals and terrorists could craft huge, effective attacks using future AI that’s cheaper to obtain and run, and that could work across complex systems. As the technology gets smarter and more powerful, researchers outline in the study that they “expect the growing use of AI systems to lead to the expansion of existing threats, the introduction of new threats, and a change to the typical character of threats.”

Deepfakes – the use of a neural network to transpose another person’s face onto the body of a porn actor, recently investigated by Motherboard – are mentioned in the research. The accessibility of such power could know no bounds when it comes to manipulating videos and producing them at mass scale. The research details that this kind of AI use could be deployed to spread fake propaganda – it was just last year that a fake Obama speech was generated using an AI video tool.

Cybersecurity could also be put at major risk – phishing and hacking could become more complex and widespread with specifically programmed AI. Surveillance may also be manipulated and used for criminal or terror purposes. The research also highlights physical threats, from the hacking or compromising of self-driving cars to autonomous weapons and coordinated drone attacks. Some of these issues seem pretty far off in the future, while others are already causing concern.

The report provides suggestions that could buffer some of the biggest threats to humanity. One is that policymakers and politicians should work closely with those developing the latest technology, to analyse and implement laws that keep AI creation and use in check. The study also asserts that a large pool of experts should be involved along the way, and that discussion about who has access to powerful AI should be happening now. Ethical frameworks and safeguards must be put in place, they say.

Experts conclude that if research continues at its current pace over the next five years, they expect attacks to “significantly increase”. However, the report also emphasises that there is disagreement between researchers and other communities about the level of risk AI poses. Nevertheless, “precautionary action” must be taken.