Image courtesy Lucy McRae

The illusion of perfection: the disturbing truth about AI beauty

As beauty brands continue to embed AI technology into their offering, real people are starting to value the advice of machine learning. But upon what ideals is our ‘beauty’ being judged?

“We are living through an exciting moment in history, when so much about life on earth is being transformed. But what is new is not always good — and technology does not always mean progress,” warns cyberpsychologist Mary Aiken in The Cyber Effect (2016). In the book, she describes how technology has infiltrated every aspect of our lives – from the way we (over)share, to the way we date, to the way we shop – but she could easily be talking about the beauty industry. From advanced facial tracking and detection designed to dish out skincare advice and recommend makeup to suit your skin, to the growing number of apps that claim to ‘beautify’ your face in real time, current innovations in AI beauty mean complicated outcomes for us human users.

It goes without saying that anyone with an Instagram handle will be well versed in filters. Whether it’s a dog or a kitten filter, or something (marginally) more realistic, social media apps like Instagram and Snapchat have dramatically changed our social behaviour since 2015 (aka the birth of the dog and the rainbow puke filters). The alarming part? These apps sit in the palms of our hands, and absolutely everyone’s at it.

Let’s take Kylie Jenner’s custom Instagram filter as an example. Released one month after her cosmetics empire was announced to be worth $900 million (cemented with a Forbes cover), it allows you to virtually wear seven of her most hyped lip colours in a single swipe. If you’ve tried it, you’ll have likely noticed that your face was considerably smoother, your lips plumped, your eyes widened (and expertly lined), and your lashes lengthened to create a ‘perfect’ picture. Why be you, when you could be Kylie Jenner?  

Along with Jenner’s filter, there are hundreds of ‘beautification’ apps now on the market – a market that has been growing since 2013, when four PhD students and one Supreme Court clerk founded Lightricks, the software company behind the popular photo-editing app FaceTune. In 2015, Snapchat acquired facial recognition startup Looksery for $150 million – allowing us to photoshop ourselves in real time – and by the end of 2016 Lightricks had launched FaceTune2, a more advanced upgrade that costs $5.99 per month and includes a live-editing tool so you can ‘tweak’ your face before snapping a selfie.

Defined by the Oxford English Dictionary as “a fine adjustment”, the meaning of ‘tweak’ has been somewhat obscured in the post-FaceTune era – the app lets you enlarge your eyes, thin your nose, and supersize your lips, among other modifications. It was Apple’s top-ranking paid app of 2017, and together the two apps have been downloaded over 50 million times. Recently, FaceTune appeared in an Instagram advert – a before-and-after of a girl whose nose had been shrunk, captioned: “Ever wonder why your friends’ selfies look so good?”

According to Dr Amy Slater, Deputy Director at the University of the West of England’s Centre for Appearance Research, with these technologies come psychological concerns. “Many images presented on social media are still conforming to a narrow beauty ideal and the use of filters or modification apps is likely to perpetuate this,” she says. “The concern is that consistent exposure to ‘perfect’ images will leave people feeling like they don’t ‘measure up’. This can lead to negative feelings including body dissatisfaction, which is known to be a predictor of negative outcomes including lowered self-esteem, depression, disordered eating and lower academic achievement.”

“The concern is that consistent exposure to ‘perfect’ images will leave people feeling like they don’t ‘measure up’” – Dr Amy Slater, Deputy Director at the University of the West of England’s Centre for Appearance Research

The pressure to look a certain way is affecting young people the most. Gone are the days of blue mascara applied slapdash behind the curtain of a photo booth in your local cinema in a desperate attempt to purchase a ticket for a film rated 15. Kids as young as 11 now have an Instagram account – solely sharing filtered versions of themselves, sucked into an app-based ritual: snap selfie, smooth skin, widen eyes, repeat – and there’s a free version of FaceTune available on the app store with a 4+ age rating. According to Dr Jon Goldin, Consultant Child and Adolescent Psychiatrist, “[These apps] would seem to reinforce and indeed amplify these anxieties, which is reprehensible in my view and detrimental to young people’s mental health. Young people have enough challenges and stresses, including about their appearance, without adults designing commercial apps to profit from these and exacerbate their difficulties.”

With filters, modification apps, and suggested ‘tweaks’ so accessible, they’re fast becoming the new normal, and the line between natural and filtered beauty is becoming increasingly blurred. As we accept this singular expression of beauty, there’s a growing danger that we’re entrenching unrealistic beauty ideals. But it’s not just filters.

Remember the old adage: “Mirror, mirror on the wall, who is the fairest of them all?” Well, now there’s a modern incarnation of Snow White’s magic mirror. HiMirror is a smart mirror that uses robust face tracking and detection to act as your daily beauty consultant, analysing your skin to highlight flaws and imperfections in order to recommend ‘improvement’ products. HiMirror judges the quality of a person’s skin on eight attributes – red spots, fine lines, complexion, pores, wrinkles, roughness, dark spots, and under-eye circles – before combining the results into a rating on a scale of 1 (bad) to 100 (perfection). According to HiMirror’s founder Simon Shen, 90 is a healthy score.

Impressive as this technology may be, do we really want our worst traits highlighted before we step out of the front door each day? Negative feedback – a low score – could damage a person’s self-esteem, and a daily analysis could encourage obsessive behaviour in the same way that stepping on a weighing scale every day can. And when our appearance is being judged on such subjective traits, the aesthetic normality or ‘perfect’ score that HiMirror promotes is out of reach for many users.

There’s also Beauty Score, a beta-stage application from cognitive services provider and billion-dollar startup Face++, which applies machine learning to evaluate a person’s attractiveness by scanning an image of a face and assessing the definition and positioning of its features. The app’s data points include ‘upper lip contour’ and ‘nose tip’, and it positions itself as a tool to improve ‘attractiveness’ via makeup recommendations. The fundamental question here is: upon what ideals would one judge ‘attractiveness’, let alone code an algorithm to do so? Who’s to say that contoured cheeks and symmetrical features are the only way to be ‘attractive’?

AI beauty technology runs the danger of being racially discriminatory, too, something that’s become apparent in previous ventures. In 2016, deep learning group Youth Laboratories launched Beauty.AI, a contest judged solely by robots. The company states that its robots evaluate a person’s beauty based on “wrinkles, face symmetry, skin colour, gender, age group and ethnicity”. There were 7,000 entrants in 2016, and of the 44 winners selected, only one was dark-skinned. The algorithm’s results were widely labelled racist. Youth Laboratories now operates Diversity.AI, a think tank devoted to addressing algorithmic bias – racial, gender, ethnic and age discrimination by artificially intelligent systems. By hosting meetings and seminars that inform guidelines and validation mechanisms, and by encouraging audits and open database sharing so that developers can train their algorithms on minority data sets, Diversity.AI aims to promote inclusion, equality and diversity.
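The audits Diversity.AI encourages can start very simply: count how each group is represented in a training set before the model ever sees it. The sketch below is an invented illustration (not Diversity.AI’s tooling, and the group labels and counts are made up) of the kind of imbalance that produces a winners’ list like Beauty.AI’s.

```python
from collections import Counter

# Illustrative dataset audit: report each demographic group's share of a
# training set and flag groups that fall below a chosen threshold. The
# labels, counts, and 10% threshold are all invented for this sketch.

def representation_report(labels: list[str]) -> dict[str, float]:
    """Return each group's share of the dataset, to flag under-representation."""
    counts = Counter(labels)
    total = len(labels)
    return {group: count / total for group, count in counts.items()}

training_set = ["light"] * 6500 + ["medium"] * 400 + ["dark"] * 100
report = representation_report(training_set)
for group, share in sorted(report.items(), key=lambda kv: -kv[1]):
    flag = "  <-- under-represented" if share < 0.10 else ""
    print(f"{group}: {share:.1%}{flag}")
```

A model trained on a set like this will simply have seen too few dark-skinned faces to judge them on the same footing – which is the bias the article describes.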

Someone who has experienced discrimination by artificially intelligent systems first hand is Joy Buolamwini, a graduate researcher at MIT Media Lab, who noticed a problem when working with facial analysis software that didn’t detect her face – because the people who coded the algorithm hadn’t taught it to identify a diverse range of skin tones and facial structures. Buolamwini has since founded the Algorithmic Justice League, an organisation that challenges the biases present in decision-making software – an ongoing issue that, according to The Age of Automation report, will only be resolved when the conditions under which such systems are created, and the belief systems of their creators, are free from bias.

So what is the solution? Algorithm transparency is certainly a good starting point. World-renowned sci-fi artist and body architect Lucy McRae, alongside researchers at the University of Melbourne, has recently designed Biometric Mirror – an AI mirror that analyses an individual’s character traits based solely on their face – to investigate the way people respond to AI analysis. The mirror compares its onlooker to a database of faces that have been assessed on 14 characteristics, before issuing a statement that summarises their ‘attractiveness’ and emotional state. In theory, the algorithm performs exactly as designed – but its conclusions are unlikely to be reliable, because how can they be when they’re based on subjective information?

“Surprisingly, many people walk away from Biometric Mirror blindly accepting the algorithm’s feedback, but are shocked when they realise that a computerised assessment can have a personal consequence,” explains Dr Niels Wouters from the university’s Microsoft Research Centre for Social Natural User Interfaces. “For them, Biometric Mirror is an eye-opening moment to start thinking about transparency of algorithms, consenting and deconsenting, and the current trend of perceiving algorithms (and AI) as the holy grail that will ultimately improve society.”

"When it comes to those creating a beauty ideal I’m more concerned as often it remains unknown what the ideal is based upon" – Dr Niels Wouters

According to Dr Wouters, it pays to question and critique these algorithms. “We should be able to look up – in accessible and meaningful ways – what the AI’s assumptions are based upon to determine whether we believe and action its assumptions or not,” says Dr Wouters. “For instance, why does an AI scan your face and then think you need a dark red lipstick? Is it because your skin is light, your hair is dark? This will help people to understand the true value of AI, but also to recognise where and when it gets things wrong. For example, hair colour may be light, but the space might have been temporarily underexposed at the time of the photo.”
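The kind of transparency Dr Wouters describes can be sketched as a recommender that returns not just its suggestion but the rules and features that produced it – and a warning when the input is unreliable, like his underexposed-photo example. Everything here (the rule, the feature names, the shades) is an invented illustration, not any real product’s logic.

```python
# Hypothetical transparent recommender: the output carries its own
# reasoning, so a user can see *why* the AI suggested a shade and
# contest the assumption behind it. All rules and names are invented.

def recommend_lipstick(features: dict[str, str]) -> dict[str, object]:
    reasons = []
    shade = "nude pink"  # arbitrary default for this illustration
    if features.get("skin_tone") == "light" and features.get("hair") == "dark":
        shade = "dark red"
        reasons.append("rule: light skin + dark hair -> dark red")
    if features.get("lighting") == "underexposed":
        # Dr Wouters' example: a dim photo can make light hair read as dark.
        reasons.append("warning: photo underexposed; hair colour may be misread")
    return {"recommendation": shade, "based_on": reasons}

result = recommend_lipstick({"skin_tone": "light", "hair": "dark"})
print(result["recommendation"])  # → dark red
print(result["based_on"])        # the rule that fired, in plain language
```

The point is not the rule itself but that it is inspectable: a user shown “light skin + dark hair” as the basis can recognise exactly where, and on whose ideals, the algorithm got it wrong.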

The next step involves understanding diversity – through comprehensive training and by building inclusive data sets. “That’s the simple answer, but how we go about it, I don’t know. In fact, I think it may be impossible,” argues Dr Wouters. “Many AI datasets rely on crowd-sourced information. While guessing a person’s gender, age and (perhaps) ethnicity can be done quite objectively, things go haywire when you ask people to evaluate more complex characteristics, like our psychometrics. What is ‘weird’ for you will be different for me; the appearance of ‘attractiveness’ may also differ for you and I, or across cultures.” According to Dr Wouters it’s here that we need to establish boundaries and recognise the limitations of AI. “I’m hopeful for some fascinating applications in the beauty industry and personalised products, but when it comes to those creating a beauty ideal I’m more concerned, as often it remains unknown what the ideal is based upon – whether that’s Western ideals or the ideals of a select crowd, for example the developers or client.”

The beauty industry needs to take responsibility for its actions. While filters, modification apps and facial recognition technology remain at the forefront of our lives, it’s more important than ever to remember that beauty is in the eye of the beholder, and that the beauty apps and products that rely on AI and machine learning are not yet able to promote authentic diversity and inclusivity. Contrary to much social media representation, we all have blemishes and bags under our eyes – and that’s on a ‘good skin’ day. Having ‘flaws’ doesn’t make us any less ‘attractive’, and having none doesn’t equal ‘perfection’. Beauty is diverse; variability and difference are inherent to life. Only when AI and machine learning applications are able to celebrate that diversity will the beauty industry be a truly progressive place.