Russian researchers are blurring the lines between art and reality
Deepfake technology is kind of terrifying. Mapping real people’s faces onto avatars that you can then control, making them do or say whatever you want? Yeah, that’s bad news. And, as the tech progresses, these deepfake avatars are getting more and more indistinguishable from reality.
As we await the seemingly inevitable post-truth, technocratic dystopia, though, we should all take a moment to admire the novel uses a group of Russian researchers have found for deepfakes (they call them “talking head models”).
Most notably, researchers from Moscow’s Samsung AI Center and Skolkovo Institute of Science and Technology have taken portraits such as da Vinci’s Mona Lisa and managed to make them move as if they’re (pretty much) real people talking in the present day.
It isn’t just a simple animation, either. In a video titled “Few-Shot Adversarial Learning of Realistic Neural Talking Head Models” (catchy) and an accompanying paper, the researchers explain how they assign “facial landmarks” to a portrait using a system trained on a large bank of talking-head videos. They can then control the portrait’s “movements” as they like. (This is, of course, a gross oversimplification, and the video goes into much more depth.)
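If you’re curious what “driving a portrait with facial landmarks” even means, here’s a toy sketch. This is emphatically not the researchers’ code: it just treats landmarks as arrays of (x, y) points and uses plain interpolation where their system uses a trained neural generator to render photorealistic frames. The landmark coordinates below are made up for illustration.

```python
import numpy as np

def interpolate_landmarks(source, driving, t):
    """Blend a source face's landmark positions toward a driving pose.

    source, driving: (N, 2) arrays of (x, y) facial landmark coordinates.
    t: blend factor in [0, 1]; 0 = source pose, 1 = driving pose.
    In the real system, a neural generator renders a photorealistic frame
    from each landmark pose; here we just move the points themselves.
    """
    return (1 - t) * source + t * driving

# Hypothetical example: 3 landmarks (two eye corners and a mouth corner).
mona_lisa = np.array([[30.0, 40.0], [70.0, 40.0], [50.0, 80.0]])
driving_pose = np.array([[32.0, 42.0], [72.0, 42.0], [50.0, 90.0]])  # mouth drops

# Five in-between poses make a tiny "animation" of the still portrait.
frames = [interpolate_landmarks(mona_lisa, driving_pose, t)
          for t in np.linspace(0.0, 1.0, 5)]
print(frames[0])   # begins at the source pose
print(frames[-1])  # ends at the driving pose
```

Feed in landmark poses extracted from a real person’s video instead of hand-written points, and you have the basic control scheme: the portrait’s “face” follows wherever the driving landmarks go.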
Using paintings as the basis for their “talking head models” demonstrates that a single image can be enough to create a pretty realistic-looking avatar.
The viral reaction to the Mona Lisa deepfake can probably be attributed to the mystery of the portrait itself, which has had art lovers and academics debating for centuries. The researchers have also worked their magic on other iconic images, though, such as portraits of Dalí and Einstein, Vermeer’s Girl with a Pearl Earring, and Kramskoy’s Portrait of an Unknown Woman.
Dalí has also appeared in deepfake form at a recent exhibition in Florida, taking selfies with visitors.
In the description of the new video, the authors try to highlight the positive effects the tech could have, including, for example, the democratisation of education and improvements in worldwide communication.
They also acknowledge deepfake fears and downplay their negative effects, comparing the tech to Hollywood special effects. This seems debatable, tbh, but if they want a ringing endorsement from me, I guess they can just make one themselves.