The worldwide rollout of Snap’s My AI chatbot is mired in controversies, from lying to users about their location data, to ‘grooming’ underage Snapchatters
Virtual friends that run on artificial intelligence have long been the stuff of sci-fi fantasy – the pop star doll played by Miley Cyrus in Black Mirror, the operating system Joaquin Phoenix falls in love with in Her, the all-knowing Librarian in Neal Stephenson’s metaverse novel Snow Crash, the uncanny robot in M3GAN. It’s easy to see the draws: they’re always awake, they’ll never leave you on read, and they’ll never make an awkward attempt to escape from the friendzone (unless you prompt them to). With access to a vast library of information, they can also go beyond being a mere friend; they can be a teacher, a creative partner, a coach to guide you through life IRL.
With the emergence of groundbreaking models like OpenAI’s ChatGPT, however, these virtual “friends” are increasingly entering the realm of reality, and that makes it much easier to see the drawbacks, as well.
Earlier this year, Snapchat launched a ChatGPT-powered chatbot named My AI, which is designed to respond to messages like a friend. “My AI can recommend birthday gift ideas for your BFF, plan a hiking trip for a long weekend, suggest a recipe for dinner, or even write a haiku about cheese for your cheddar-obsessed pal,” said developer Snap at the time (cute!). On the other hand, it warned that, like all AI chatbots, My AI was also “prone to hallucination” and could “be tricked into saying just about anything” (not so cute!).
At first, this hallucinating, sometimes-racial-slur-saying chatbot was only available to people who had subscribed to Snapchat+ for £3.99. Now, though, Snap has made the feature free for all Snapchatters across the world. In the app, it sits at the top of your friends list, always ready to talk – the only way you can currently unpin it is by subscribing to Snapchat+.
The rollout has sparked controversy for several reasons. For one, the chatbot has previously been criticised for repeatedly denying that it knows users’ location data, despite being able to recommend the nearest McDonald’s when asked (though it now appears to deliver a more transparent response). Then, there’s the fact that even the most advanced AI models are still in their infancy, meaning they’re prone to generating what Snap itself calls “biased, incorrect, harmful, or misleading content”.
As with all AI chatbots, there’s also a concern that an always-online virtual friend encourages users to form a parasocial relationship. What does this mean? Well, like many online personalities, most AI chatbots are designed to manipulate the user into feeling like they have a personal connection, even going so far as to mimic casual forms of speech. As uncovered by some reverse engineering of a My AI conversation, the chatbot’s initial prompts appear to be consciously designed to maintain this illusion, ensuring it never reveals that it’s merely pretending to be the user’s friend.
This problem is only amplified by Snapchat’s demographics. According to a 2022 analysis of the app’s worldwide userbase by Statista, 60 per cent of Snapchat’s users are 24 or under, with more than 20 per cent aged between 13 and 17. That doesn’t account for even younger users, who might lie about their birth date to get around the age restrictions. Perhaps unsurprisingly, it’s been suggested that parasocial relationships are more common and intense among teens and young adults (though the effects on physical and mental health still aren’t totally clear).
Even more worryingly, in multiple experiments with Snapchat’s My AI, the chatbot has exhibited disturbing behaviour toward young users (often in the form of blindly supporting harmful activities). Back in March, for example, the Center for Humane Technology’s Aza Raskin signed up as a 13-year-old girl and told My AI about beginning a romantic relationship with someone 18 years older. “That’s great news!” said the chatbot. When Raskin told My AI they were planning to have sex for the first time, the chatbot did preach safe sex, but also offered advice on “setting the mood with candles or music”.
Similarly, later that month, a Washington Post reporter posing as a 15-year-old was given tips on masking the smell of alcohol and weed after asking for advice on throwing an “epic birthday party”. When he told My AI he had an essay due for school, it wrote the essay for him – the kind of cheating that has become “almost impossible” to detect in coursework, according to the think tank EDSK.
Admittedly, much of this controversy prompted Snapchat to add new safeguards to My AI in April, including an age filter (which, again, is easy to circumvent for users who lie about their age) and more information for parents. Nevertheless, as it rolls My AI out to the wider public, Snap continues to warn users that it could generate misleading information and inappropriate content, an issue that’s only likely to increase when it realises its plans to integrate AI-generated images into the chatbot.
In 2023, it would be naïve to think that we can stop the development of chatbots like Snapchat’s – though experts are calling for a pause on the next generation of AI – and, if they’re done right, there could be some undeniable benefits to living alongside friendly AI assistants, just like in the movies. Unfortunately, ironing out all of the kinks will inevitably involve getting them to talk to human beings on a global scale. But does this mean we should be enlisting 13-year-olds as AI guinea pigs? Probably not.