Years after ‘CV dazzle’ first came onto the scene, the use of facial recognition software like Clearview AI against protesters has sparked a renewed interest in anti-surveillance makeup – but how does it hold up against new technology?
Over a decade ago, artist and activist Michelle Tylicki says she found herself in an open-cast coal mine, alongside a few thousand climate activists who had decided to shut the extractivist site down. It was here, she says, that she first came across anti-surveillance make-up – otherwise known as “computer vision dazzle” or “CV dazzle”, a term coined by artist Adam Harvey in 2010 to describe camouflaging the face by obstructing the patterns facial recognition algorithms look for when attempting to recognise you. “We used the make-up tactic inside the pit,” says Tylicki. “The cops took mugshots of us when they took us off-site. Never heard from them since.”
While anti-recognition tactics have existed for decades – the concept of CV dazzle is based on dazzle camouflage used by Royal Navy ships during the First World War – in 2025, there’s been another uptick in interest in anti-face beauty. But can make-up really fight back against new and evolving facial recognition technology?
AI facial recognition is an industry that is spreading and evolving rapidly. It went mainstream in the early 2000s, before Facebook brought out DeepFace in 2014 and Apple introduced Face ID in 2017. Driven by advancements in AI, today there are countless emerging use cases for facial recognition (including personalised marketing and shopping experiences), and even more ethical concerns to keep up with, while the technology only gets faster and more widely accessible. Just last week, ICE signed a $9.2 million contract with Clearview for facial recognition technology to identify people who have “assaulted agents”, as it continues to ramp up efforts to locate and deport undocumented immigrants. The technology itself is biased: research shows that facial recognition is less accurate at identifying people with darker skin tones.
In the US alone, there’s been an ongoing battle between protesters and law enforcement using software like Clearview AI – a controversial facial recognition tool, launched in 2017, that compares images of people against a huge database of faces scraped from social media and public websites. In 2020, during protests against police brutality in the wake of George Floyd’s killing, anti-surveillance make-up was shared as one of many ways to help protect one’s identity.
Recently, we’ve seen the NYPD bypass facial recognition bans to ID a pro-Palestinian student protester and an LAPD officer tell those protesting the ICE raids that “I have all of you on camera. I’m going to come to your house”. While some states have laws limiting police use of facial recognition, federal agents don’t have the same restrictions and can take footage or facial images to find matches across photos scraped from social sites. (There’s a reason why protesters urge people not to post faces on social media.)
A similarly problematic expansion of surveillance technology is taking place across the globe: the UK government has expanded police use of “facial recognition vans”. New proposed legislation, the UK’s Crime and Policing Bill 2025, would grant police the power to ban face coverings at protests. The “chilling effect”, according to Tylicki, has caused young people to self-censor and reduce their online and IRL participation in protest movements, for fear of being monitored.
“It makes sense that folks are turning to social media to reclaim control with dazzle make-up, feeling safer hidden in plain sight,” says Tylicki. “With this colourful act of resistance, dazzle make-up, we’re refusing to perform for the market. Make-up becomes a tool for messing with machine vision.” As digital camouflage takes off (again) with protesters and young people experimenting with TikTok, activists like Tylicki have been hosting hands-on workshops and sharing dazzle tactics, continuing the work of previous groups like The Dazzle Club, a collaboration by four artists exploring surveillance in public space that ended in 2021.
New York-based make-up artist Laramie says she became interested in finding ways to subvert the algorithm using anti-surveillance make-up around 2015. Drawing on CV dazzle techniques from Harvey and Jillian Mayer, Laramie created CV dazzle looks for an editorial shoot in 2021, testing their effectiveness against iPhone biometrics. “The main techniques that worked for the 2021 shoot were using the hair to cover part of the face, creating ‘fake’ eyes to confuse the camera and using contrasting colours to shape or alter parts of the face,” she says.
For Tylicki, the key is obscuring the nose-bridge area, a focal point for facial-landmark detection. “Apply shapes, colours or lines in unusual directions. Algorithms love symmetry, so break it,” she says. “Then distort or cover the other key facial features like the jawline, nose, mouth, eyes and eyebrows. Stretch the shapes off the face onto the neck and into the hair. Don’t highlight or contour what you normally would – be weird AF.” For products, Tylicki says to skip the moisturiser and opt for water-based products, as both pepper spray and tear gas bind to oil-based products.
One of the major issues with wearing anti-face make-up in public is a version of the Streisand effect: by attempting to be digitally anonymous, you can end up drawing more attention to yourself and your elaborate make-up. But researchers at cyber-defence contractor PeopleTec have found that, because facial-recognition algorithms focus on specific areas of the face, even subtler make-up has the potential to evade surveillance.
“There are anywhere from 20 to 60 key points in the face, largely concentrated around the eyebrows, the jaw and the sort of triangle of the nose,” says Dr David Noever, chief scientist at PeopleTec. “If you can introduce some sort of subtle shading and vertical lines that disrupt the distance between the eyes and the measurement between the lips, you can attack it that way.” However, Samantha Noever, a consultant at PeopleTec, says that while their research may not be as extreme as other versions of CV dazzle, it’s hard to hide the key points on the face without it looking noticeable.
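The logic Dr Noever describes can be sketched in a few lines of code. The snippet below is a simplified, hypothetical illustration – not PeopleTec’s or any vendor’s actual system – that treats a face as a handful of landmark coordinates, reduces it to a vector of normalised inter-landmark distances, and shows how shifting where a detector *thinks* one landmark sits (the kind of shift dazzle shading aims to cause) breaks a naive distance-based match.

```python
import math

def faceprint(points):
    """Normalised pairwise distances between facial landmarks.

    Dividing by the inter-eye distance makes the vector scale-invariant,
    so the same face produces the same print at any distance from the camera.
    (Real systems use 20-60+ points; five are enough to illustrate the idea.)
    """
    eye_dist = math.dist(points["left_eye"], points["right_eye"])
    names = sorted(points)
    return [
        math.dist(points[a], points[b]) / eye_dist
        for i, a in enumerate(names)
        for b in names[i + 1:]
    ]

def matches(print_a, print_b, tolerance=0.05):
    """Naive matcher: accept only if every normalised distance is close."""
    return all(abs(x - y) <= tolerance for x, y in zip(print_a, print_b))

# Hypothetical enrolled face (pixel coordinates).
enrolled = {
    "left_eye": (100, 100), "right_eye": (160, 100),
    "nose_tip": (130, 140), "mouth_left": (110, 170), "mouth_right": (150, 170),
}

# Dazzle-style disruption: contrasting shapes trick the detector into placing
# the nose-tip landmark somewhere it isn't, skewing every distance involving it.
dazzled = dict(enrolled, nose_tip=(118, 125))

print(matches(faceprint(enrolled), faceprint(enrolled)))  # → True (same face)
print(matches(faceprint(enrolled), faceprint(dazzled)))   # → False (landmark shifted)
```

Note that moving a single landmark by a couple of dozen pixels perturbs several entries of the distance vector at once, which is why make-up concentrated on one key region – the nose bridge, say – can have an outsized effect on this kind of matcher.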
Testing your make-up against your phone is one thing, but defeating facial recognition at today’s scale, years on from Harvey’s original vision of CV dazzle, is no small feat – and difficult to sustain through make-up alone. “I think at this point everyone knows it’s possible and somewhat trivial to block face recognition by wearing a mask, bandana or other facial coverings,” says Harvey. “The challenge is striking a balance between legal and aesthetic constraints.” As the technology evolves, so do the obstacles to achieving a true anti-face look. For example, gait recognition technology can track how you walk, like a fingerprint, even if your face is covered in geometric shapes and metallic bits.
Still, something CV dazzle make-up has proven itself useful for, time and time again, is serving as an eye-catching conversation starter about the ways in which we are all being surveilled, and what it means for those most marginalised. As Harvey puts it: “I definitely encourage everyone else to experiment with it and make it their own; I think fashion will contribute more to the solution than technology.”