Deepfake: Unstable Evidence on Screen
Photography Thanassi Karageorgiou/Museum of the Moving Image

Will deepfakes rewrite history as we know it?

An exhibition at the Museum of the Moving Image looks at the unsettling world of disinformation. Here, the show’s co-curator explains how to spot a deepfake, and why media manipulation is in crisis

Ahead of the first moon landing in 1969, two speeches were prepared for President Nixon to address the nation. The draft version – which never aired, since the mission went successfully – was penned in the event of Neil Armstrong and Buzz Aldrin not making it home: “Fate has ordained that the men who went to the moon to explore in peace, will stay on the moon to rest in peace.”

There are plenty of people who believe that the astronauts never reached the moon and that the landing was a hoax, with conspiracy theories entertaining internet sleuths for 30 years. But, in a bid to prove how convincingly disinformation can be produced, an alternate version of the moon landing coverage was created by MIT, in which a deepfaked Nixon looks gravely into the camera, telling Americans that there is “no hope for their recovery”.

This Emmy-winning short, In Event of Moon Disaster, is the focus of a new exhibition at the Museum of the Moving Image in New York: Deepfake: Unstable Evidence on Screen. Recreating a late-60s living room, complete with Swiss cheese plant and retro wallpaper, the alternate version of history is broadcast on a boxy TV, allowing viewers to consider the unsettling possibility that they could so easily be lied to.

With dark potential for fraud, revenge porn and impersonation, it’s no surprise that deepfakes have unleashed a sense of hysteria on a world still grappling with new technologies. So how can we spot a deepfake? Can we? “There are some telltale signs: a sheen or shine to the cheeks and forehead, along with jittery movement between the head and neck,” Joshua Glick, the exhibition’s co-curator, tells Dazed. “Also some shades in their eyes that don’t necessarily blend, [and] a disparity between the lips moving and the words coming out of an individual’s mouth.”

The exhibition takes viewers through the history of advanced artificial intelligence and how machine learning has been used to create deceptive content. Glick explains how “we don’t want people to think that disinformation and deepfakes are totally new,” citing historic examples like propaganda from the Spanish-American War, tabloid TV from the 90s, fake news perpetrator Geraldo Rivera, and the Satanic panic of the 80s that still infiltrates US politics today.

There is, Glick says, a looming crisis around deepfakes and the spread of mis- and disinformation. The technology is becoming more advanced, and deepfake pornography and digital violence are becoming increasingly common methods of abuse. “There hasn’t been a widespread usage in large-scale elections yet, but the exhibition wants to prepare [people], and cultivate a discerning community of viewers,” he says. “There are practical steps that we can take as individuals, and things that we can do as a society. Social media companies can do more to curb the spread of disinformation on their platforms, and policy also has an important role to play.”

But, rather than panic and be drawn in by “alarmist headlines”, Glick emphasises that deepfakes can be used for non-malicious, entertainment and sentimental purposes, too. Avatars of the dead are already here – see Kanye using Robert Kardashian’s form to proclaim him ‘the most genius man in the whole world’, and Hollywood’s resurrection of past greats like Carrie Fisher, Marlon Brando and Audrey Hepburn (although it’s debatable whether people find this usage morally acceptable – Glick says in his opinion there must at least be “consent from the estate of the deceased person”).

Glick is also keen to point out that deepfakes can be used for educational and civic good. 2020’s Welcome To Chechnya, which depicted the human rights crisis of the LGBTQ+ community in the republic, used the technology to protect the identities of the oppressed individuals in the film. He points to the satire series Sassy Justice from South Park’s creators, which depicted Mark Zuckerberg enthusiastically advertising cut-price dialysis: “an artful use of this technology is for the purpose of social critique, to poke fun and expose figures in power, revealing how they manipulate people in their line of business or politics,” Glick says.

Rather than focus on deepfakes as a cause for concern, Glick says we need to be more proactive about the ways we combat misleading information – mentioning the past couple of US elections as one example of mass media manipulation. “Easy-to-make and easy-to-circulate low-grade misinformation was just really effective, right?” he says. “There wasn’t necessarily a need, or a desire, to go to the elaborate level of crafting and packaging deepfakes and releasing them into the media ecosystem, when it seems like so many other forms of mis- and disinformation were able to sway and influence voters.”

To help educate a new generation of people born into a hyper-digital age, Glick has created a free, open module titled Media Literacy in the Age of Deepfakes, where users can develop their critical thinking skills when it comes to analysing deepfakes and casting a critical eye over potential sources. “Ultimately, we want viewers to feel a little bit unsettled, but also to feel empowered and prepared that there’s something they can do to combat the broader landscape of myths and disinformation.”

Deepfake: Unstable Evidence on Screen, which was co-curated by Barbara Miller and Joshua Glick, is on display at Museum of the Moving Image from now until May 15 2022