“It all started with a dumb meme,” Theo* says. Some friends laughed when his boyfriend of eight years posted his AI-generated creation in the group chat. For Theo, 27, it hit a nerve. “He didn’t see it as a big deal at first, but during the argument it caused, he understood how vehemently opposed I am to generative AI.”

Although the disagreement was a one-off for the couple, the adoption of platforms like ChatGPT into people’s everyday lives (according to OpenAI, the chatbot had 700 million weekly users in September) has left Theo feeling at odds with many of his loved ones. “When it first became mainstream, I naively assumed that my friends and colleagues, many of whom are also left-wing and creative, shared my view,” he says, his opposition rooted in concerns about AI stealing artists’ work and taking their jobs. In reality, his friends use it daily, and his colleagues in publishing can’t send an email without it. “I find it very depressing.”

In a world that seems destined to tear us apart, the relationships we share with our loved ones have never held so much weight. We find a sliver of comfort in the knowledge that our views and values align with those closest to us. Or so we thought. Enter: generative AI.

Since it crash-landed into public consciousness, leaders have hailed it as the magic plaster to heal society’s fractures, despite chatbots getting simple facts wrong (and sometimes making them up entirely). Higher management lauds it as the key to efficiency and self-optimisation, even if that means displacing jobs and smoothing our brains. Marketed as a friend and companion, it’s also a weapon of sexual violence, able to strip clothing from images of women and children at the click of a button.

It’s not just a disruptive technology — generative AI is an overwhelmingly polarising one. “There’s now a level of understanding around the potential harm it can cause, despite the convenience it may provide us,” Tallulah Belassie-Page, policy and advocacy manager at the Online Safety Act Network, tells me. “This tension is playing out in our communities.” Like Theo, more of us are discovering this dissonance through interactions with our loved ones. 

Christmas Day, and an argument over AI has erupted in Kya’s* home. The only member of her family against it, the 27-year-old is concerned about the disproportionate environmental impact of data centres on Black and brown communities, and about how AI handles personal data (neither the US nor the UK currently has privacy legislation dedicated solely to regulating AI).

“They were advocating for its use and saying how much easier it makes things. My siblings think my avoidance of it is the equivalent of missing out on the ground zero of the gold rush,” she explains. Further tension arose regarding their mother, who struggles to distinguish between what’s real and AI-generated online. “I said they were contributing to that reality for her, while simultaneously wondering why she’s a victim.”


A sore subject rather than a dealbreaker in her relationships so far, AI has also sparked several heated conversations between Kya and someone she’s dating over his “constant” use of chatbots. In the future, she predicts she’ll start to distance herself from people who rely on them. “Nobody can tell me their AI-generated grocery list or workout plan is more important than someone having clean water and breathable air.”

For similar reasons, 30-year-old Ross is already there. “It does put me off people,” the content editor says, explaining that if they went on a date with someone who openly used ChatGPT, they wouldn’t see them again. “When someone has the app, I think less of them.” 

Like our political stance or how we spend our free time, the use of generative AI has become a signal we use to make assumptions about others, and about how much personal responsibility they feel for the world’s wider problems. Tallulah understands why some people read it this way, but stresses that AI shouldn’t become another front in the “issue of our time”: social division.

Alix Dunn is the founder and CEO of The Maybe, a public interest firm that works with academics, activists, and policymakers to challenge the power and political structures of technology, including AI. Although she understands the frustration felt by people like Theo and Kya, she says it’s misdirected. 

“Many of AI’s political problems sit with the companies pushing these products on us,” Alix explains. “The more anger directed at those who don’t have control over it, the less solidarity built that might result in actual mobilisation and change.” Tallulah highlights the British government’s pro-innovation approach, which positions AI as the answer to structural deficiencies in essential departments and services. “With that message coming from the top, how are individuals meant to operate differently?” 

Criticised for being too slow at her sales job, Victoria was encouraged to use ChatGPT to answer customer enquiries. Now, the 25-year-old uses chatbots “for any basic task or answer” in her daily life because of their instant nature and ease of access. Much to the dismay of her sister, she uses one for relationship advice. “She tells me it isn’t normal because it’s not a human, and that it’s favoured in my perspective.” 

Laughing off her sister’s concerns, Victoria admits she hasn’t given thought to the ethical and environmental implications of AI, and calls people who use it as a criterion for distancing themselves from others “dramatic”. “The world is evolving. We’re always going to have AI in it now,” she says. “New inventions and creations are just a part of life.” Will she keep using it, despite the concern? “Forever and always.”

Victoria makes a valid point about the future: an estimated $2.9tn will be spent on building AI data centres through to 2028, and OpenAI CEO Sam Altman’s latest business, which hopes to integrate AI with human biology, has already raised $252 million in funding. Dystopian headlines and existential anxiety aside, on an individual level, how can we talk to our own AI-obsessed loved ones about their use, especially when it concerns us?

First, Alix advises remembering that the belief that we should engage with new technologies has been drilled into us throughout our lives, and that everyone uses technology differently. “If you start from the presumption that everyone has the same motivations for using it, you might project your own judgements without understanding why they use these tools in a way you don’t expect.”

Leaning in with curiosity, rather than judgement, is key. “Ask what they get out of it and what they don’t. Explore it. Because maybe you’ll discover, for example, that your cousin is lonely and that’s why they talk to ChatGPT all the time.” Doing so presents a chance to help — like inviting said cousin out more so they can build connections — and, as Alix remarks, an opportunity to get to know a loved one on a deeper level.


With a technology that all too often feels shoved down our throats, it’s important our opinions can be heard, free from a sense of awkwardness or taboo. “Ask if they’re comfortable hearing about your feelings, especially if it directly affects your livelihood,” Alix continues, warning that it shouldn’t be positioned as a direct challenge to someone’s use. “If they’re a loved one, they should be interested in how it’s affecting you.” 

Getting everyone on the same page about AI may be a far-fetched ideal, but that doesn’t mean there isn’t hope for the future. The Online Safety Act Network is campaigning for comprehensive, coherent regulation in the UK to mitigate the risks AI poses, which Tallulah believes will prevent some of the moral questions around its use from falling on the shoulders of individuals.

“People like Elon Musk are doing God’s work in making these technologies offensive to everyone,” Alix says. As we’ve seen from the Grok scandal, generative AI continues to mirror the prejudiced, patriarchal world we live in — which can’t change unless society does. However, as the scandals continue, the dizzying hype calms and AI’s actual potential to solve the world’s problems becomes clearer, she believes public consciousness will shift towards thinking more carefully about how we incorporate it into our lives.

Our gut instinct may tell us to challenge a loved one for using a technology we disagree with, but that doesn’t mean it should be our first port of call. “One of the reasons why people are drawn to these tools is because they don’t make them feel shitty or stupid for thinking or saying something,” Alix continues. “To me, that’s a sign we need to do better in conversations with our loved ones.”

Me, you, your friend, your partner — we’re all the lab rats for generative AI, a technology which, as we continue to discover, has been rolled out without suitable safeguards or regulations. Instead of turning on each other over its flaws and threats, our focus needs to shift to holding the real culprits accountable: the lab-coat-wearing Big Tech billionaires peering over the cage.

*Names have been changed