Remember that face-classifying algorithm that went viral last month for its reductive and downright discriminatory categorisation of people? Well, as it happens, the Internet Powers That Be have actually taken note: 600,000 racist images have been removed from the online database used in the project.
Artist Trevor Paglen’s ImageNet Roulette – a site that lets users upload photos of themselves to be categorised by an object recognition database called ImageNet – went viral last month, revealing the reductive and discriminatory ways artificial intelligence uses biased data to categorise faces.
Following the popularity of Paglen’s art project, ImageNet has addressed the claims of bias and removed the problematic images from its database.
Created by researchers at Princeton and Stanford University in 2009, ImageNet is the world’s most-cited object recognition database. The system draws on 14 million images organised into 20,000 categories, with around 1,000 images per category. Widely used to train machines on face classification, this information can be drawn on by law enforcement agencies, schools, employers, or basically anywhere using facial-recognition technologies for ‘security’ purposes.
But, as demonstrated by Paglen’s ImageNet Roulette, these categories can be wildly inaccurate and, sometimes, downright discriminatory. When we applied the database to some of our favourite celebrities, it labelled FKA twigs a “weirdo” and Lana Del Rey a “missy”.
“This exhibition shows how these images are part of a long tradition of capturing people’s images without their consent, in order to classify, segment, and often stereotype them in ways that evoke colonial projects of the past,” Paglen told The Art Newspaper.