
AI is generating faces of people who don’t exist

It’s so hard to tell what is fake

Can anyone tell what is real anymore? Thanks to AI, the digital realm is increasingly lifelike, and some concerned scientists are on a mission to find out just how easily we can be duped by computers.

WhichFaceIsReal.com was set up by University of Washington academics Jevin West and Carl Bergstrom (two names that sound like they too could have been AI-generated). The website greets users with two faces and asks them to pick which one is an actual human. It's extremely frustrating, because it's really, really difficult.

“When new technology like this comes along, the most dangerous period is when the technology is out there but the public isn’t aware of it,” Bergstrom told The Verge. “That’s when it can be used most effectively.”

West said the site could be used to “educate the public” and make us more aware of what this technology can do. In their view, if people can't tell what is real from what is fake, it could undermine society's trust, much like the sudden panic over fake news. “Just like eventually most people were made aware that you can Photoshop an image,” he explained.

If we're not quick to spot the difference, people could begin using the technology in sinister ways. One example the pair gave was that AI could be used to spread disinformation online, such as inventing a fake suspect for a terrorist attack.

The site's fake faces are produced by a machine-learning system that combs through huge numbers of real portraits, learns their patterns and then generates new faces in the same style. Part of what makes the system so effective is that it tests itself: it compares its generated images against real portraits to see whether the difference can be spotted, and keeps improving until it can't.
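That self-testing setup is the core idea behind a generative adversarial network (GAN), where one network generates images and a second network tries to tell them apart from real ones. The sketch below is purely illustrative, not the system actually used by the site: it assumes PyTorch, uses tiny toy networks, and stands in random vectors for real portrait photographs, but the training loop follows the same adversarial pattern the researchers describe.

```python
# Minimal GAN sketch: a generator learns to produce fake samples while a
# discriminator learns to distinguish them from real ones. Toy random data
# stands in for real portrait photographs; real systems are vastly larger.
import torch
import torch.nn as nn

LATENT_DIM = 16   # size of the random noise fed to the generator
DATA_DIM = 64     # stand-in for a flattened image

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 128), nn.ReLU(),
    nn.Linear(128, DATA_DIM), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1),  # raw score: higher means "looks real"
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def real_batch(batch_size=32):
    # Placeholder for a batch of real portraits; here just random vectors.
    return torch.randn(batch_size, DATA_DIM)

for step in range(1000):
    real = real_batch()
    noise = torch.randn(real.size(0), LATENT_DIM)
    fake = generator(noise)
    ones = torch.ones(real.size(0), 1)
    zeros = torch.zeros(real.size(0), 1)

    # Discriminator step: label real samples 1 and generated samples 0.
    d_loss = loss_fn(discriminator(real), ones) + \
             loss_fn(discriminator(fake.detach()), zeros)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make the discriminator score fakes as real.
    g_loss = loss_fn(discriminator(fake), ones)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

As the two networks push against each other, the generator's output gets harder and harder to tell apart from the real thing, which is exactly why the faces on the site are so convincing.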

The same approach can be applied to audio and video, and it is what led to the crisis of deepfake pornography, which grafts real people's faces onto fake sex scenes.