
What is WormGPT? The new AI behind the recent wave of cyberattacks

An evil ChatGPT-like AI model is spreading across the dark web and enabling hackers to perform cyberattacks on a never-before-seen scale

In more AI-related doomsayer news, a ChatGPT-style AI tool is taking off across cybercrime forums on the dark web. Called WormGPT, the “sophisticated AI model” is designed to produce human-like text that can be used in hacking campaigns, enabling hackers to perform attacks on a never-before-seen scale. 

“This tool presents itself as a blackhat alternative to GPT models, designed specifically for malicious activities,” security researcher Daniel Kelley wrote on the cybersecurity site SlashNext. “WormGPT was allegedly trained on a diverse array of data sources, particularly concentrating on malware-related data.”

What does this mean for the rest of us? Essentially, it boils down to speed and scale: a language model can churn out convincing scam messages faster, and in far greater numbers, than any human could. This makes cyberattacks such as phishing campaigns particularly easy to replicate, even in the hands of a novice cybercriminal.

To make matters worse, cyber threat actors are promoting “jailbreaks” for ChatGPT – specially engineered prompts designed to manipulate the model into disclosing sensitive information, producing inappropriate content, or generating harmful code.

“Generative AI can create emails with impeccable grammar, making them seem legitimate and reducing the likelihood of being flagged as suspicious,” Kelley said. “The use of generative AI democratises the execution of sophisticated BEC [business email compromise] attacks. Even attackers with limited skills can use this technology, making it an accessible tool for a broader spectrum of cybercriminals.”

Meanwhile, companies like OpenAI (maker of ChatGPT) and Google (maker of Bard) are increasingly taking steps to combat the abuse of large language models (LLMs), though a recent report by Check Point says: “Bard’s anti-abuse restrictors in the realm of cybersecurity are significantly lower compared to those of ChatGPT. Consequently, it is much easier to generate malicious content using Bard’s capabilities.”

The introduction of WormGPT across the dark web comes as researchers from Mithril Security ‘surgically’ modified an existing open-source AI model to make it spread disinformation, dubbing the result PoisonGPT. The repercussions of such technology are yet to be seen, but given the already very concerning capabilities of AI – to generate dis- and misinformation, shift public opinion, and even sway political campaigns – the risks for unsuspecting users are endless, especially when you add bootleg AI models to the mix.
