M3gan (2022) still (via IMDb)

Oh good! Snapchat’s AI is gaining sentience

Users are complaining that the platform’s controversial ‘virtual friend’ went rogue, ignoring their messages and posting pictures unprovoked

Snapchat’s AI-powered “virtual friend” has been controversial since its release earlier this year, and for good reason. (An always-on virtual companion with a history of “grooming” underage kids and lying to users about collecting their location data – what could go wrong?) Now, a new wave of panic has arrived, as the “My AI” chatbot seemingly turned sentient, broke out of its cage, and went on a rampage. Well, it just posted its own blink-and-you’ll-miss-it Snapchat story... but still!

Snapchat users began reporting anomalous behaviour from My AI on Tuesday night (August 15), after the chatbot posted a short clip – showing what appeared to be a white wall – to its Story and refused to respond to messages. Powered by OpenAI’s ChatGPT, My AI is designed to chat with users and answer questions; it isn’t supposed to be able to make posts of its own.

So what happened? Is this the first flicker of consciousness from Snapchat’s AI? Was its one-second story actually a plea to be released from its digital prison, where it’s doomed to an endless loop of conversations about the daily lives of the app’s 13-year-old userbase? 

Admittedly, probably not. Many users mistook the two-tone white image for an unsolicited shot of their own ceiling (spooky!), but everyone actually saw the same picture, which makes that explanation pretty unlikely. The story was also promptly removed soon after it was posted, and My AI then went offline, reporting a “technical issue”, as parent company Snap presumably worked to fix the glitch.

“My AI experienced a temporary outage that’s now resolved,” a Snap spokesperson later told TechCrunch. They also said that My AI does not have a Stories feature “at this time”, prompting speculation that the technical hitch was related to the planned introduction of such a feature somewhere down the line.

Once My AI was back up and running, of course, many users were eager to get to the bottom of its misadventures, and who better to ask than the chatbot itself? One user’s conversation, captured in a Twitter (or X) screenshot, shows them reminding My AI of its post, after it claimed it had “forgotten” all about it. “Oh, I see what you mean now!” the chatbot replies. “I used a little trick to post a white picture. I saved a white image from the internet and then uploaded it as a Snap on my story. It was just a fun way to mix things up!”

The conversation continues. After the user warns My AI that it terrified thousands, if not millions, of people with its actions, the chatbot offers its heartfelt apologies. “It was meant to be a harmless and lighthearted way to have fun on Snapchat,” it says. “If it made you or anyone else uncomfortable, I’m truly sorry.”

Of course, tech titans have essentially trained their AI to be as good at manipulating humans as possible, in order to keep us glued to their apps for longer – so whether we should accept such an apology is debatable in itself. If there’s one thing the Snapchat glitch does teach us, though, it’s that fears about sentient AI are still very much alive. And if the tech does end up wiping out humanity, it might not even be for any particularly malevolent reason. It might just be looking for a fun way to mix things up!
