“We are surrounded by technology that decides our fate in every instance of our lives. These technologies are often invisible, they are these mysterious spells that work in the background... but you can make them visible, and as artists we have a duty to make these things visible; in the case of AI, to show people that this is nothing to be scared of. That it can produce beauty, something unseen, a new aesthetic.” – Lukas Rudig, Beauty_GAN
There are many ways we interact with AI in the realm of beauty. Most will be subliminal. When we watch a beauty tutorial on YouTube, a learning system gathers information and cross-references it with metadata, attempting to figure out which contour clip we’ll want to view next. When we shop for cosmetics, our cookies follow, working out what to market to us. Beauty brands are angling themselves towards machine learning when it comes to selling us personalised products, too; AI bots scour data from thousands of product descriptors, ingredient labels and online reviews to tailor our choices in skincare or haircare – sometimes with added medical value, if we have, say, acne.
When it comes to AI in beauty, most of its applications are driven by a rapidly expanding and increasingly saturated global beauty industry (valued at over $400 billion in 2017), the key question being how intelligent computers can encourage consumers to fill their online baskets.
But two design studios – Selam X in Berlin and ART404 in New York – have teamed up to initiate a different application of AI in the realm of beauty. Comprising computer scientists, art directors, coders and designers, the group is a global constellation of engineers and creatives who have created Beauty_GAN, a type of artificial intelligence algorithm that uses machine learning to produce imagery. In this case, beauty imagery, and specifically, the images of Kylie Jenner’s face in this publication.
The tech is not complicated, at least on the surface. The AI starts out with a data set: 17,000 images pulled from Instagram by the Beauty_GAN team. Those responsible gathered the most popular and relevant beauty looks they could find, imagery as diverse and colourful as possible, with specs like ‘full face in shot’. They then sorted the imagery into categories and fed it into what is called a discriminator network, where the algorithm begins to learn the stereotypical features of the images. It learns to distinguish an eye with make-up from an eye without, for example, or a smiling face from a frowning one. Eventually, the computer gets so good at telling the categories apart that it can assign them itself – distinguishing a beauty selfie from, say, a picture of a dog.
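To give a rough sense of what “learning to assign categories” means, here is a minimal sketch – emphatically not the Beauty_GAN code itself – of a classifier learning to separate two groups. Real images are replaced by synthetic 16-number feature vectors, and the model is a single-layer logistic regression trained by gradient descent; every name and number here is illustrative.

```python
# Toy sketch of the "discriminator as classifier" idea: NOT Beauty_GAN's
# code. Images are replaced by synthetic 16-dimensional feature vectors.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 200 "beauty selfie" vectors clustered around +1,
# 200 "something else" vectors clustered around -1.
selfies = rng.normal(loc=1.0, scale=1.0, size=(200, 16))
others = rng.normal(loc=-1.0, scale=1.0, size=(200, 16))
X = np.vstack([selfies, others])
y = np.concatenate([np.ones(200), np.zeros(200)])  # 1 = selfie, 0 = other

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train a single-layer classifier by gradient descent on cross-entropy loss.
w = np.zeros(16)
b = 0.0
lr = 0.1
for _ in range(500):
    p = sigmoid(X @ w + b)          # predicted probability of "selfie"
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"training accuracy: {accuracy:.2f}")
```

After a few hundred updates, the classifier assigns the right category to almost every vector – the toy equivalent of the network learning to tell a selfie from a dog.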
But ‘GAN’ – generative adversarial network – systems are made up of two parts. Alongside the discriminator network there is a generator network, whose job is to spit out images of its own. Each image it produces is fed into the discriminator network, which decides whether or not it is a beauty image. The generator uses this feedback to produce image after image until, after millions of tries, it manages to fool the discriminator into accepting one as an authentic beauty image, like those from the original data input. The computer is now able to create beauty looks without the help of a human.
“In other words, the two components train one another,” explains Lukas Rudig, a member of the Beauty_GAN group. “Imagine a counterfeiter and a police officer producing fake money. The police officer evaluates the fake money, the counterfeiter produces better fake money, and so on, until they get it right. That’s how this technology becomes good at creating aspects of the human appearance as though it were photorealistic.”
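Lukas’s counterfeiter-and-police-officer metaphor can be sketched in code. The toy below is not Beauty_GAN – images are replaced by single numbers so the whole adversarial loop fits on one page, and every function and constant is an illustrative assumption. “Real” data are samples near 4.0; the generator starts out producing samples near 0.0 and is nudged toward the real distribution by the discriminator’s feedback.

```python
# Toy adversarial loop: a "counterfeiter" (generator) and a "police officer"
# (discriminator) training each other. NOT Beauty_GAN's code -- images are
# replaced by single numbers so the whole game fits in a few lines of NumPy.
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Discriminator D(x) = sigmoid(w*x + c): probability that x is "real".
w, c = 0.0, 0.0
# Generator G(z) = a*z + b: turns random noise z into a fake sample.
a, b = 1.0, 0.0
lr, batch = 0.05, 128

for _ in range(2000):
    real = rng.normal(4.0, 0.5, batch)     # the "authentic" data
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b                       # the counterfeits

    # Police officer's turn: learn to score real high and fake low.
    p_real, p_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * (np.mean((1 - p_real) * real) - np.mean(p_fake * fake))
    c += lr * (np.mean(1 - p_real) - np.mean(p_fake))

    # Counterfeiter's turn: adjust (a, b) so the fakes score higher.
    p_fake = sigmoid(w * fake + c)
    grad = (1 - p_fake) * w                # push fakes toward higher D scores
    a += lr * np.mean(grad * z)
    b += lr * np.mean(grad)

samples = a * rng.normal(0.0, 1.0, 1000) + b
print(f"mean of generated samples: {np.mean(samples):.2f}")
```

By the end of training the generator’s output has drifted away from 0 toward the real data – the same dynamic that, scaled up to 17,000 Instagram images and deep networks, lets Beauty_GAN invent plausible beauty looks.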
Of course, because the input is selected by humans, Beauty_GAN ultimately learns from a human, and it only has so much source material to draw from. Those 17,000 images are its entire world; tweak the data input, and you tweak the outcome. While machines like this are good at distinguishing between certain factors, other aspects of an image can be harder to teach them. “Gender, race, mean nothing to them,” says Lukas, “even though we input an ethnically diverse data set.” And as Marius Tetlie from Beauty_GAN points out, “The look is dependent on who is collecting the data. If someone else did it, it would be different.” But the machine’s bias or lack thereof is both its blessing and its curse: “It’s catching these images and saying something new with them,” says Marius. “It lacks so much of the general ideas we have around what is and isn’t beautiful.”
Beauty_GAN is not the only technology of its kind. Its creative director Sebastian Zimmerhackl cites German artist Mario Klingemann as an influence. Klingemann calls his work “neurography”, whereby he trains a machine to create images without a camera; his system can produce thousands of images a day. In the world of fashion, artist Robbie Barrat created an AI that ingested all of Balenciaga’s previous collections via their campaigns, lookbooks and catwalk imagery – his AI randomly generates hypothetical (but pretty plausible) Balenciaga designs.
As with Beauty_GAN, there is an uncanniness that occurs, as well as a weird, trippy aesthetic – the aesthetic of an alternate reality where machines dictate the way that we present ourselves. “It doesn’t look perfect,” acknowledges Sebastian of Beauty_GAN’s output, “because it’s an experiment. But in the future it is the artistic mistakes that we will remember.”
While we can debate the quality of these machines’ outputs, another question looms: what is the point of it all? In its current form, Beauty_GAN is a living artwork born of technology, intended to challenge our ideas of creativity and originality – a smart algorithm attuned to new definitions of beauty, with an output of images, video and, soon, an augmented reality filter. But its meaning runs deeper. As Moises Sanabria, a creative technologist from Beauty_GAN, puts it: “You have this industry of computer scientists and academics changing the game on what beauty and artificial intelligence mean, but how does that translate to people reading the news every day, to people reading Dazed Beauty? The idea is to warn the everyday person not dealing with AI that it will get implemented in more and more casual ways.”
For the images in this issue, we asked Daniel Sannwald to photograph Kylie Jenner in almost no make-up, like a blank canvas. We then painted her face with the images created by Beauty_GAN. In other words, she is wearing AI-generated make-up. To choose Kylie, of all subjects, is not without irony or importance. The young beauty mogul has an Instagram account with 124 million followers (and counting), once tried to trademark her own first name and has grown her brand Kylie Cosmetics into an estimated $800 million company in just a couple of years. She is the person the whole world holds up as the poster girl of beauty, the face that we try to replicate. Kylie gets lip fillers? We get lip fillers. Kylie contours? We contour. Kylie endorses a product? We buy it. There is even an Instagram app that allows us to apply Kylie’s face, mask-like, onto our own. She is the face that spawned a thousand selfies.
One could argue that, of all the beauty imagery we see on Instagram today, Kylie Jenner’s face, her aesthetic, holds the most influence. Every time someone copies her contour or lip liner, that aesthetic proliferates a little further. She influences what we think of as beautiful, what exists on Instagram. The Beauty_GAN project sees this inputted into a machine, and then lets the machine take over; the machine creates what it thinks is beauty imagery, and then paints it back onto Kylie’s face. And so, the feedback loop closes.
“Who else could do it? You need someone like Kylie, who stands for contemporary beauty photography,” says Lukas. “It’s a collaboration: what the machine does to her is paint her face in the way it thinks it should be in a beauty selfie,” he concludes. “To put it in a really easy metaphor, Beauty_GAN is like a mirror of popular culture, but the reflection staring back at you might not be what you expected. We teach a machine to see us and what it shows us back is not always what we see ourselves.”
Words: Amelia Abraham
Photography: Daniel Sannwald at Management Artists
Make-up: Mary Philips at Blended Strategy
Hair: Cesar De Leon Ramirez at crowdMGMT
Styling: Rita Zebdi
Photo Assistants: Guillaume Blondiau, Kaveh Malek
Styling Assistant: Mackenzie Grandquist
Digital Tech: Brandon Kalpin
DP: Lane Stewart
Retouching: Studio Private
Producers: Carolina Takagi and Dario Callegher at Pink Production
Production Assistants: Greg Bonnet, Peter Cacciopoli
Artists: Selam X & ART 404
Creative Director: Sebastian Zimmerhackl
Art Director: Lukas Rudig
Computer Scientist: Jens Wischnewsky
Creative Technologist: Moises Sanabria
Designer: Marius Tetlie
Dataset Manager: Neneh Opheim
Developers: Artur Neufeld, Tim Pulver and Eduardo Maluf de Campos
Philosopher: Benedikt Fischer