
AI gaydars are coming – should we be worried?

AI that can detect sexuality might sound harmless, but experts warn it could be used to persecute the LGBTQ+ community

Back in June, a group of Swiss researchers published a study revealing that they had developed an AI gaydar – a model with the ability to tell whether a man is gay or straight. The researchers say that, by studying subjects’ electrical brain activity, the model is able to differentiate between homosexual and heterosexual men with an accuracy rate of 83 per cent. According to them, their findings have “the potential to open new avenues for research in the field”, and it remains “of scientific interest” to figure out whether there are “biological patterns that differ between persons with different sexual orientations”.

This isn’t the first time research has been done into AI’s capabilities when it comes to ‘detecting’ sexuality. Back in 2017, two researchers at Stanford University trained a facial recognition system using over 35,000 facial images pulled from a dating app to identify whether someone was gay or straight. By using a single facial image – and analysing everything from nose shape to grooming style – the system could correctly distinguish between gay and heterosexual men in 81 per cent of cases, and in 74 per cent of cases for women. Humans’ own gaydar was nowhere near as good, with a 61 per cent accuracy rate for men and 54 per cent for women. 

The Stanford study was controversial. When it was published, the Human Rights Campaign (HRC) and Glaad, two leading LGBTQ+ organisations in the US, criticised it as “junk science” and warned that it could put LGBTQ+ people across the globe in danger. Many argued that it was essentially rehashing physiognomy, a centuries-old pseudoscience which claims that a person’s appearance reveals their inner character. It’s unsurprising that people are concerned about scientists flirting with physiognomy: at the peak of its popularity in the 19th century, it was often used to justify scientific racism.

Though the new Swiss study examined participants’ brain activity as opposed to their physical appearance, it has also been met with backlash, with researchers on social media questioning whether the study was necessary. “Hard to think of a grosser or more irresponsible application of AI than binary-based ‘who’s the gay?’ machines”, tweeted Rae Walker, a specialist in the use of tech and AI in medicine.

Qinlan Shen, a research scientist in Oracle Labs’ machine learning research group, also has reservations about the study. “In general, I would say I’m deeply concerned about the use and development of AI that has the power to detect sexuality in theory,” they say. “I think the biggest question mark for sexuality detecting technologies is for whom and for what purpose are they being developed for?”

According to Dr Sebastian Olbrich, Chief of the Centre for Depression, Anxiety Disorders and Psychotherapy at the University Hospital of Psychiatry Zurich and one of the study’s authors, the research had two objectives: to “see if there is any biological and neurophysiological underpinning of the current state of sexual orientation in homosexual and heterosexual males”, and to “test the usage of deep learning models with EEG [a recording of electrical brain activity] data, since this is a very promising method to better understand the human cortex and its activity”.

He explains that this is “basic research” with “no direct benefit” to the LGBTQ+ community, but adds that it could potentially quash, once and for all, the harmful idea that sexuality is something that can be ‘cured’. “By showing that sexual orientation has some neurophysiological patterns associated with it, it cannot be denied that different sexual orientations are ‘real’ and not just a part of a personal imagination that might be reversed by any obscure therapy, as it has been done in the past,” he says.

Shen is less optimistic. “[This claim] may be grounded in the idea that a biological marker for sexuality provides evidence that sexuality isn’t a choice. But I personally don’t think that this particular application actually does much good for the LGBTQ+ community,” they explain. “I think it’s well recognised within the queer community that sexuality is an expression of a variety of biological, environmental, and social factors.” They’re also doubtful that anyone pushing conversion therapy would be persuaded to renounce their beliefs based on “biological evidence” alone, “since queerphobia can be fueled by a range of different negative attitudes and beliefs.”

They stress that, ultimately, it’s unclear how this sort of tech could benefit the LGBTQ+ community, but it’s “very clear” how it could be used to violate the privacy of queer individuals. “A technology that could theoretically identify anyone’s sexuality could also theoretically be used to out someone against their consent, in contexts where they do not feel comfortable to do so,” they say. It’s also possible that this sort of tech could be used to persecute queer people in countries where homosexuality is illegal.

“I do think it’s reflective of a tendency in work on AI for marginalised communities to build tools without fully considering what the communities themselves want or not want” – Qinlan Shen

The Swiss researchers are aware of these concerns. “I get the point on this,” Olbrich says. “As in many other fields, research and scientific findings can be used in different ways. We are aware that there are potentials of misuse. Like in other fields of AI, like facial recognition, there is an urgent need for regulation.” He stresses that the chance of someone misusing the sort of EEG tech used in his research is “low”, but he doesn’t deny that sexuality-detecting technology could be misused. “We need to be aware and careful, of course,” he says.

Shen acknowledges that the Swiss study is less concerning than the Stanford study, as it was “performed on self-identified volunteers who gave consent”, whereas the Stanford researchers used images pulled from dating apps without the subjects’ knowledge. Additionally, the Swiss study involved monitoring an individual’s brain activity, which is far harder to do without someone’s consent or knowledge than using the facial recognition tech deployed in the Stanford study. “But,” they add, “I do think it’s reflective of a tendency in work on AI for marginalised communities to build tools without fully considering what the communities themselves want or not want.”

“I don’t think the authors approached the study with active ill-intention towards [the] LGBTQ+ community,” they continue. “Ultimately, I would argue that if developers want to claim that their technologies could be used to help the LGBTQ+ community or any other marginalised community, they need to work with members of those communities, at every step of the process, in order to understand a. what the community really wants and b. how to not inadvertently cause harm to the communities they’re trying to help.”
