
I’m a trans woman – here’s why algorithms scare me

We want to live our truth in the present and define our own future – not be algorithmically chained to false identities

For her guest edit in the Infinite Identities issue of Dazed, Chelsea Manning selected seven vital activist voices from around the US to answer a single question: what, for you, is the most under-discussed issue affecting the trans and non-binary communities in America today? Here, writer and educator Janus Rose responds by taking a look at the inherent bias of machine-learning algorithms.

What is an algorithm? It’s a word we often hear when tech companies talk about artificial intelligence, smartphones and other shiny consumer technology. We are told that algorithms make our lives easier, automating tedious processes by training computers to make decisions for us. But most of us are rarely given a chance to understand exactly what they are, or how they shape our world.

Here’s one way of thinking about it: algorithms are assertions about how the world should work, implemented through code. These assertions reflect the biased assumptions of their creators, and they can be deadly. Left to their own devices, algorithms can function as tools of oppression, entrenching the structural inequality that permeates our society.

As a transgender woman who researches machine learning and artificial intelligence, I want people to understand how algorithms and automation can impose a model of the world that negatively affects people like me. When we deploy algorithms in large systems that affect millions of people, their harmful effects cascade and multiply. Regardless of the creator’s good intentions, they can function to benefit the powerful, privileged and well-off, while causing devastating impacts on the most vulnerable among us – immigrants, people of colour, trans and gender-nonconforming folks, and other marginalised groups.

One worrying example is the field of ‘gender recognition’, a sub-category of face recognition which seeks to create algorithms that determine a person’s gender from photographs of their physical bodies. Of course, gender isn’t something that can be determined from a person’s physical appearance alone. And transgender people like me frequently face discrimination, harassment and violence when our appearance doesn’t conform with mainstream, cisgender expectations of ‘male’ or ‘female’.

Nevertheless, many companies are developing gender-recognition algorithms based on those standards, narrowly defining gender as something that fits into one of two categories and is solely based on facial or bodily characteristics. This is chillingly similar to a recently leaked memo from the US Department of Health and Human Services, in which the Trump administration declared its intent to legally define gender as “either male or female, unchangeable, and determined by the genitals that a person is born with,” according to a report from the New York Times.

These false ideas about gender have been thoroughly debunked by science. But machine-learning systems can only make determinations based on the models of reality we provide for them, regardless of whether those models are based in truth. If we allow these assumptions to be built into systems that control people’s access to things like healthcare, financial assistance or even bathrooms, the resulting technologies will gravely impact trans people’s ability to live in society.

“In 2017, a Reddit user discovered a creepy advertisement at a pizza restaurant in Oslo: if it read you as female, the ad would change to show a salad”

Evidence of this problem can already be found in the wild. In 2017, a Reddit user discovered a creepy advertisement at a pizza restaurant in Oslo, which used face recognition to spy on passers-by. When the ad crashed, its display revealed it was categorising people who walked past based on gender, ethnicity and age. If the algorithm determined you were male, the display would show an advertisement for pizza. If it read you as female, the ad would change to show a salad.

Some technologists have taken the idea of algorithmically enforced gender even further. At the University of North Carolina at Wilmington, researchers created a system that attempts to identify transgender people before and after they medically transition with hormone replacement therapy. To do this, researchers scoured YouTube for ‘transition timeline’ videos, which typically involve a series of photographs charting a person’s face changing over time. These videos are extremely personal, and a source of empowerment for the trans community – especially for those people who haven’t started hormones and seek affirmation that the treatment will help to alleviate their gender dysphoria.

The researchers’ motivations were very different. Speaking with The Verge, the professor leading the project attempted to justify his work with a make-believe scenario in which a terrorist takes hormones to evade face-recognition algorithms and illegally crosses a border. Of course, any trans person could tell you this premise is ridiculous. Hormones dramatically – and in some ways permanently – alter your physical body, something that would be traumatic for a cis person whose gender matches the one they were assigned at birth. To any trans person who has undergone hormone therapy, the idea that someone would endure years of treatment just for the chance to successfully bypass a face-recognition checkpoint is ludicrous.

The harm to trans people, however, is very real. Not only did the researchers create their dataset from videos without the consent of their trans creators, they did it to train predictive algorithms that effectively ‘out’ trans people using archived photos. If the researchers had consulted any trans people prior to beginning the project, they would know that many of us transition because we don’t want to be linked to our past name or appearance. We want to live our truth in the present and define our own future – not be algorithmically chained to false identities we were forced to wear in the past.

“Without our intervention, a society that historically benefits white supremacy, patriarchy and harmful assumptions about gender and sexuality will produce technology that enshrines those values”

Trans people are not the only group threatened by these dangerous experiments. Researchers at Stanford University developed a machine-learning system they claim can identify people’s sexual orientation based on facial features. In China, another group of researchers claim they can detect the likelihood that someone will commit a crime purely based on a person’s face. Both experiments are fundamentally flawed. The algorithms can’t actually tell if someone is queer or a potential criminal; they are simply creating a feedback loop, where the system takes data about people who have previously been determined to belong to a certain group – like criminals – and concludes that everyone with similar features belongs to that group.
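To make the feedback loop concrete, here is a minimal, purely illustrative sketch in Python. It is not the Stanford or Chinese researchers’ method; it assumes a single made-up ‘facial feature’ score and invented labels, and it simply copies the label of the most similar past example – which is all such a system can really do with biased historical data.

```python
# Illustrative toy only: a 1-nearest-neighbour 'classifier' trained on
# biased historical labels. Feature values and labels are hypothetical.

# Each person is reduced to one made-up 'facial feature' score. The labels
# record past policing outcomes, not any real propensity to commit crime.
training_data = [
    (0.20, "not_criminal"),
    (0.30, "criminal"),      # an over-policed group happens to cluster
    (0.35, "criminal"),      # around these feature values
    (0.80, "not_criminal"),
    (0.90, "not_criminal"),
]

def predict(feature_score):
    """Give a new face the label of the closest training example."""
    closest = min(training_data, key=lambda row: abs(row[0] - feature_score))
    return closest[1]

# Anyone whose features resemble the over-policed group inherits its label,
# regardless of anything they have actually done – the feedback loop.
print(predict(0.32))  # -> "criminal"
print(predict(0.85))  # -> "not_criminal"
```

The toy model never learns anything about criminality itself; it only learns who was labelled a criminal before, and projects that label onto anyone who looks similar.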

These faulty experiments ignore social and economic realities, like the fact that poor people, immigrants and people of colour are policed and incarcerated at disproportionately high rates. Most frighteningly, they effectively revive the practice of physiognomy, a long-debunked and infamously racist pseudoscience that used subtle differences in human faces and bone structure to justify discrimination.

If we want to create a more just world, we must recognise that sometimes these algorithms aren’t simply ‘broken’, but operating exactly as intended. Technology is a reflection of the society that created it. Without our intervention, a society that historically benefits white supremacy, patriarchy and harmful assumptions about gender and sexuality will produce technology that enshrines those values. It’s up to us as citizens to reject such a world, and work to create a new one, with technology that works for everyone.

See all the activists and writers selected by Chelsea Manning, and read their responses to her question, here