
If AI is truly the future, it needs to learn to see non-binary people

As part of Chelsea Manning’s Dazed guest edit, civil rights advocate Kade Crockford reflects on how AI can show us the limits of our society

For her guest edit in the Infinite Identities issue of Dazed, Chelsea Manning selected seven vital activist voices from around the US to answer a single question: What, for you, is the most under-discussed issue affecting the trans and non-binary communities in America today? Here, civil rights advocate Kade Crockford describes why machine learning may yet herald necessary chaos for the culture of forced-binary gender identification.

I first realised I was categorically different from other kids when my preschool teacher instructed our class to form two lines: boys over there and girls over here. I froze, unsure of what to do. A variation on that square-peg-in-a-round-hole, forced-binary identification crisis has played out consistently throughout the rest of my life. It’s no less confounding now, but the shock of it has worn away.

Well before that day in school, someone would praise me to an adult: “What a strong little boy, so fast on the soccer field!” Another adult would swiftly interject, “She’s a girl.” I felt unimpressed by this repeated confusion and mostly ignored it. Something was amiss, I thought, in the insistence on nailing down my gender identity. My approach hasn’t changed much now. When someone calls me ‘sir’, I patiently wait until the person has realised their ‘mistake’, and then slightly less-patiently wait for their apology. “No apology necessary,” I sometimes say, causing yet more befuddlement.

For me, the befuddlement is the end point – not something to fear or to apologise over. My gender is neither male nor female. It resists binary classification. If you’re looking at me and your brain is toggling between man and woman, unsure of where to land – congrats. You’ve correctly identified my gender.

Rightly or wrongly, when human beings assign another person a gender marker, we make all sorts of assumptions about how to engage with that person. Often these assumptions lead to gendered language: “Right this way, ladies,” or, “Excuse me, sir.” Sometimes these assumptions lead to gendered acts: holding open a door, offering someone a seat, clutching a purse, crossing the street.

For the targeted advertising business – and therefore, for the internet – getting gender assignment right is more than a social nicety; it is central to business success. Data brokers use gender as a marker to help clients ‘understand’ their potential customers. How else would Target know who is pregnant when? Facebook allows advertisers to target people who state their gender as male or female with ads for employment opportunities. Twitter offers advertisers an option to target their messages to people based on gender, offering just two choices: male or female. When a user hasn’t specified a gender, the company “use(s) the gender provided by some users in their profiles, and extend(s) that data to other users based on factors of account likeness”, according to its website.

Deploying gender classifications is also the norm in tech. Consider machine-learning technologies, which rely on categorisation. In classification machine-learning systems, the algorithm assigns each input one of a fixed set of responses, often a binary one: yes or no, zero or one. Does photo X match a photo in Y database? Is this picture an image of a cat? Is this person male or female?
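To make that concrete, here is a minimal, hypothetical sketch of the kind of binary classifier described above, written with scikit-learn on invented toy data (the features and labels are assumptions for illustration, not any real system). Whatever input it is shown, it can only ever answer with one of the two labels it was trained on.

```python
# Toy binary gender classifier - illustrative only, with made-up data.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical "face features" (e.g. measurements extracted from photos),
# labelled with the only two categories the system knows about.
X_train = np.array([[0.2, 0.9], [0.3, 0.8], [0.8, 0.1], [0.7, 0.2]])
y_train = np.array(["female", "female", "male", "male"])

clf = LogisticRegression().fit(X_train, y_train)

# A face that doesn't fit neatly into either cluster is still forced
# into one of the two boxes: there is no "neither" or "both" output.
ambiguous_face = np.array([[0.5, 0.5]])
print(clf.predict(ambiguous_face))        # always 'male' or 'female', nothing else
print(clf.predict_proba(ambiguous_face))  # the model may be uncertain, but the label stays binary
```

However uncertain the model is, its output space contains exactly two answers; anyone who fits neither is simply mis-sorted.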

I spend a lot of time thinking about the impacts machine-learning technologies are already having – and will continue to have – on human beings and our planet. Among the most vexing problems I see is the power asymmetry between the wealthy, largely white, mostly cismale engineers and product managers designing and implementing machine-learning systems to sort and make decisions for people, and the people acted upon by these systems. Amazon, for example, recently announced it would stop using a machine-learning algorithm to sort job applications after it discovered the algorithm was trashing applications that included the word ‘women’s’ – as in ‘women’s chess club’.

Machine-learning algorithms themselves aren’t new to computer science or mathematics, but the computing power and data needed to give them life have only been available to (a select few) human beings in the past decade or so. Even if we could teach everyone in the world about how machine learning works, it would be hard to democratise the technology. After all, even if they understood the maths and the code, most people don’t and never will have access to enough data and computing power to build and maintain complex machine-learning systems.

Thanks to Joy Buolamwini’s research at MIT, we know face-recognition algorithms are generally better at identifying white men than black women. We can safely assume the machine is not at fault for this discrepancy, because machines are inanimate objects with no free will of their own. The blame lies with the engineers who built the machines, who probably used their own white, male faces when training the algorithm how to recognise faces in pictures. Buolamwini’s study didn’t explicitly address how companies deal with gender-nonconforming or non-binary people, but it makes me wonder: what happens when a machine tries to tell you who you are, and gets it wrong?


In some cases, I probably wouldn’t mind an algorithm misidentifying me. If I walk past a kiosk in a train station and the ad says, “Hey dude, check out this sick motorcycle,” I won’t be offended – even though I’m terrified of riding bikes. This Black Mirror-like future isn’t far off: more and more advertisers and machine-learning companies are vocalising their desire to use face recognition to sort people by gender for advertising purposes.

So, I might not mind falling through the cracks, gender-identification-wise, if someone is trying to sell me a motorcycle. But, as the use of facial analysis software spreads to every area of our lives, I do worry about the impact the reification of the gender binary through artificial intelligence will have on my life and the lives of people like me. Seeing ill-fitting advertisements is one thing; it would be something else entirely to be denied access to a restroom, a locker room, or even an aeroplane because my face doesn’t ‘match’ the gender on my identification.

I’ve heard machine-learning critics say the technology is useful when you want to make the future look like the past. In the gender-identification context, that spells trouble for people like me, navigating a world mediated by artificial intelligence. If we don’t have large data sets of non-binary people’s faces, how will we ever design machine-learning systems that can accurately identify us? And what if there’s no such thing as a non-binary or gender-nonconforming face? Maybe those of us who resist gender classification will help the rest of the world understand that there really is no such thing as a ‘male’ or ‘female’ face.

Artificial intelligence can magnify and exacerbate bias, as in the case of Amazon’s anti-woman résumé sorter. But it may, in some cases, also help us see how categories and systems that have ruled us for centuries or millennia are ill-fitting for surprising numbers of people. According to experts, there are as many intersex people in the United States as redheads. Young people in this country are increasingly identifying as trans or non-binary. What happens when all of us gender weirdos start mucking up the artificial intelligence gears? Some tech companies are already hip to the gender revolution: OkCupid, for instance, lets users choose from a number of gender categories, not just male and female.

In the near future, gender-nonconforming, trans and non-binary children may encounter their first moment of categorical unease when they are misread as either male or female by a computer, rather than in an analogue preschool line-up. But maybe – just maybe – by then we will have stopped pretending there are only two genders, and our technology will reflect that.

See all the activists and writers selected by Chelsea Manning, and read their responses to her question, here