On X, 2026 began with a disturbing trend. Under photos shared by people (largely young women) on the app, users asked Grok, the AI chatbot built into the Elon Musk-owned social platform, to remove the poster’s clothes or edit them into sexualised poses. “@grok, put her in a string bikini.” “@grok turn her around and cover her in glue.” Even more worryingly, Grok was only too happy to oblige. According to Bloomberg, it was generating thousands of “undressed” images per hour over a 24-hour period from January 5 to January 6. These reportedly included sexualised images of children. And this is just what’s happening on public AI platforms.

In the past, Grok has been billed as a “based” AI chatbot, designed with fewer guardrails to combat its more heavily restricted “woke” alternatives. Addressing the controversy around the non-consensual undressing of X users, and the claims that it’s being used to create child sexual abuse material (CSAM), Musk has stated: “Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content.” However, he himself has made light of the trend. He’s even generated an image of himself in a bikini, replying: “Perfect 👌.”

The UK government and communications regulator Ofcom are less inclined to see the funny side. On January 5, Ofcom said it had made “urgent contact” with X and xAI about “undressed images of people and sexualised images of children”. Its aim? To understand what steps the companies had taken to protect users’ rights in the UK, and to “undertake a swift assessment” of whether further action needed to be taken. Users responded to the statement by putting a bikini on the Ofcom logo, naturally.

The government’s response, meanwhile, came via technology secretary Liz Kendall. “We cannot and will not allow the proliferation of these demeaning and degrading images, which are disproportionately aimed at women and girls,” she said on January 6. “Make no mistake, the UK will not tolerate the endless proliferation of disgusting and abusive material online. We must all come together to stamp it out.”

In the meantime, though, thousands upon thousands of images are still likely to be produced every day. These include both non-consensual images of unsuspecting users who just wanted to upload a selfie, and images requested by OnlyFans creators openly embracing the trend (after all, if Grok is going to turn you around and put you in a “tiny bikini”, you might as well capitalise on it yourself). Either way, the recent explosion of non-consensual AI edits has raised some important questions (and anxieties) about our online lives.

Is it still possible to share personal content on the internet without being undressed by AI? Is there a way to take action if you do find yourself on the receiving end of a perverse Grok prompt? And why do we even need to ask these questions in the first place? Below, Professor Clare McGlynn, a leading expert in online abuse and cyberflashing from Durham University, helps shed some light on the situation.

HOW DID WE GET HERE?

The AI “undressing” trend is relatively recent, but it didn’t come out of nowhere. In fact, it was “entirely predictable” according to McGlynn. “Even the most basic risk assessment by X would have identified the real risk that Grok would be used to create non-consensual intimate imagery or CSAM.”

If it was so easy to predict, though, then why was it allowed to happen? “It was not avoided because Elon Musk and X deliberately chose to have a chatbot with fewer guardrails than their competitors,” she adds. “They are on record as this being their aim. Therefore, what we have seen is a clear choice: misogyny by design.”

Elon Musk and X deliberately chose to have a chatbot with fewer guardrails than their competitors... What we have seen is a clear choice: misogyny by design.

Unfortunately, the misogynistic application of tools like Grok is also no big surprise. Women and girls are disproportionately targeted with online sexual harassment and abuse, including instances that make use of new and emerging technologies, like deepfakes. The current surge in Grok abuse is no different, McGlynn says: “The main prompts to Grok are about undressing women and sexualising their images, such as putting semen-like images on their bodies.” When it comes to image-based sexual abuse, she adds, those identifying as LGBTQ+ are also commonly targeted.

WHAT CAN YOU DO TO AVOID BEING UNDRESSED BY AI?

Is there anything you can do, as an individual, to avoid being sexualised or undressed by an app like Grok? Unfortunately, the answer to this question is: not really. “Anyone with an image online can be targeted,” McGlynn explains. “Unless we are to say that we should delete every single image of us online, and never post an image again, we cannot be protected from this form of abuse. This is why it’s so threatening and insidious.”

WHO IS TO BLAME?

The proliferation of undressed images on X has sparked a large backlash. However, a counter-backlash has seen many users blame the women involved for sharing images in the first place, especially if they promote adult material like a private OnlyFans. This echoes statements from Grok itself, which appears to claim that users give up rights to their images when they sign up to the platform. “This is patriarchy in action,” says McGlynn. “Why should women have to ‘give up their rights’ to avoid being harassed and abused by men? It is the tech platforms, and the men using these tools, who need to change their behaviour.”

In this case, she adds, there’s a very obvious place we should be pointing the finger: “Elon Musk, X and the developers of Grok. If they had effective controls in place over how Grok can be used this would never have happened.”

Why should women have to ‘give up their rights’ to avoid being harassed and abused by men?

DON’T CALL IT ‘PORN’

Image-based sexual abuse, a key area of McGlynn’s research, used to be known as “revenge porn”, but there’s a good reason for the change in terminology. “Porn generally suggests consensual sexual activity. That is not what this is about,” she explains. “This is about non-consensual abuse images. They are sexual, but not pornographic.”

Likewise, calling Grok a “porn machine” – as it’s been called online – can feed into this victim-blaming. “Terms like ‘revenge porn’ and ‘deepfake porn’ are not used by many organisations and survivors, as it risks minimising their experiences,” she adds. “Victims have spoken to me about how they find terms like ‘revenge porn’ traumatising as it obscures the nature of the harms they experience.”

GROK’S IMAGES SHOULD ALREADY BE ILLEGAL

In recent years, the UK has introduced wide-ranging laws (including the Online Safety Act) designed to make the internet safer. These have often been controversial, resulting in things like mandatory online age verification and increased precarity for sex workers. However, they also outlaw the sharing of non-consensual intimate images. In the US, the Take It Down Act serves a similar purpose.

This means that we don’t necessarily need to push for new laws to combat the wave of sexualised images generated via Grok. Instead, says McGlynn: “The current law needs to be enforced. The Online Safety Act has powers to ensure this does not happen, we just need to see swift and effective enforcement.” Pressure to make this happen is already building, via both government statements and public petitions. Some X users have also advised keeping a detailed record of requests for non-consensual material.