In the 2001 classic coming-of-age comedy The Princess Diaries, there’s a scene where Anne Hathaway is given a makeover. She swaps her glasses for contact lenses, waxes her eyebrows and perfectly straightens her previously curly hair. Only, of course, Hathaway was already extremely attractive; the so-called “transformation” simply took her even further along the scale of adhering to conventional beauty standards. Today, the pressure to “glow up” has only grown more intense with the advent of social media. As people strive for enviable before-and-after photos to post online, glow-up guides have emerged on TikTok for every occasion – including New Year’s or after a breakup – or for no reason at all. The overarching message is that you can always improve your appearance. And for some people, this is now becoming an AI-powered endeavour.

Over the past year, beauty creators have begun to share before-and-after videos of their ChatGPT “glow-ups”. From skincare advice to hair care and hair colour recommendations, some people are turning to AI to guide them through a specific physical change or decision. Others are giving ChatGPT the reins to come up with an entire “summer glow-up”, sharing a current selfie and asking ChatGPT to “enhance” it, or even uploading all of the makeup they own and asking which products they should throw out (after ChatGPT said they didn’t work for them). “Here are some photos of me. I want to glow the fuck up for summer. Tell me what to do, no matter how crazy. I’m open to anything,” beauty creator Anya asked ChatGPT, then shared on TikTok. ChatGPT replied: “Here’s a no-holding-back master plan”, then proceeded to recommend retinols, micro-needling, pilates, spray tans, dental whitening, baby botox and “filler tweaks” to the young creator.

Considering that ChatGPT doesn’t exist in human form or experience attraction, it’s worth investigating how AI platforms decide what constitutes being “good-looking”. Vered Shwartz, assistant professor of computer science at the University of British Columbia, says AI models like ChatGPT are large language models that draw from any text data that’s publicly accessible across the “entire web”. Originally, this meant heavily pulling from articles, but it now includes image text and transcribed videos (including makeover videos and makeup tutorials). “Even similar content like descriptions for beauty ads that aren’t directly related to makeovers still provides information for these models,” says Shwartz. This means that, when it comes to beauty, ChatGPT scrapes the internet to draw patterns from what people online deem attractive. “These patterns often reflect dominant cultural narratives around beauty,” says Aditya Gulati, a PhD student studying the intersection of AI and human behaviour. Even sharing your own insecurities with ChatGPT helps paint a picture for the model of what is aesthetically undesirable.

A few months ago, I came across a social media post of an Asian woman asking ChatGPT to make a picture look ‘more professional’ for LinkedIn. It made her look more white

One of the major issues with letting an AI model tell you what’s physically attractive is that we know AI platforms are embedded with the same biases prevalent across the beauty industry. Recent research shows that face-analysis algorithms are less accurate for people with darker skin, and when ChatGPT is prompted to create an image of the most beautiful man, woman, child or teenager, all images produced have pale white skin. Shwartz says she’s already seeing this play out online. “A few months ago, I came across a social media post of an Asian woman asking ChatGPT to make a picture look ‘more professional’ for LinkedIn,” she says. “It made her look more white.” This follows in the footsteps of AI beauty filters. “There’s already substantial research showing that these tools tend to promote a homogenised version of attractiveness aligned with often unrealistic Western beauty standards, like full lips, small noses and high cheekbones,” says Gulati. “This has been linked to increased rates of plastic surgery and growing mental health issues, particularly among young girls.” 

AI models may be exhibiting human-like biases about what is considered beautiful, but they are also (quite concerningly) presenting recommendations in a way that many interpret as impartial. Human-AI interaction expert Julie Carpenter says this is because AI is being positioned as an authority by the AI companies themselves. “I think if something is presented to the public as magical and all-knowing, people are going to give it more authority and meaning in their lives than they should,” she says. “It’s a misrepresentation that is dangerous and dishonest, like the Wizard of Oz with the men behind the curtains.” Of course, this isn’t set to get any better across America under Trump, who has opened the door for companies to develop the technology unfettered by any oversight and safeguards.

AI beauty recommendations aren’t inventing an entirely new beauty standard; they’re reflecting the rot that’s always existed throughout the beauty industry. “It’s an intensification of what’s already happening on social media,” says Alex Hanna, director of research at the Distributed AI Research Institute (DAIR). “There’s no real way to ‘unbias’ a data set and no reason to believe that beauty standards wouldn’t come out of suggestions, because that’s what is most prevalent in these data sets.” According to Gulati, AI-fuelled beauty biases are also on a path to reinforcing and even amplifying existing harms, leading to even narrower beauty standards. “In a recent study we did, we examined seven open-source large language models, and found they were significantly more likely to associate positive traits with people who had a beauty filter applied,” he says. In other words, AI likes people who look less human. It’s a phenomenon Gulati and his co-authors have called ‘algorithmic lookism’.

While researching this article, I asked ChatGPT how I could “glow up”. After a brief back-and-forth, the words “buccal fat removal surgery” appeared. I closed the platform. But when we engage with AI beauty recommendations uncritically – which can be easy to do, considering they are designed to sound convincing and authoritative – we fall into what Carpenter calls a “surveillance capitalism circle” and offer up our faces (and data) to train the next model. “The glow-up narrative online is often steeped in capitalism, and AI is scraping from things like blogs, TikToks and YouTube videos that are trying to sell you something,” says Carpenter. This narrative of constant physical improvement is also something AI is actively learning from us all, every time anyone uploads a selfie or asks a beauty-related question. It’s a feedback loop: you are training a system in real time to prioritise the very traits that have been tenaciously imposed onto you.