In February this year, the weekly active users of OpenAI’s ChatGPT soared past 400 million. A few months later, as it rolled out its tools for generating Ghibli-themed AI images, CEO Sam Altman suggested that as many as one in 10 people on Earth now use its systems. Putting the environmental concerns and existential fears aside, this means that hundreds of millions of people now ‘talk’ to their computers every day, in chat interfaces not that much different to the ones inhabited by real human beings. But how, exactly, do these users talk to their new robot friends?

To answer that question, we can look to a recent X thread about how users interact with the chatbot. “I wonder how much money OpenAI has lost in electricity costs from people saying ‘please’ and ‘thank you’ to their models,” wrote a user named Tomie on April 16. In response, Altman himself replied: “Tens of millions of dollars well spent.”

The idea in this post is that each query or message to ChatGPT costs a certain amount of energy, as well as water (used to cool the hardware that keeps it online), which can be translated into cold, hard dollars. If true, Altman’s claim suggests that users mind their Ps and Qs to such a degree that the chatbot’s stock responses – like a simple, “You’re welcome!” – are costing a significant sum. Which is kind of nice, actually, when you think about it.

The evolving conversation around how we treat AI goes beyond mere conversational etiquette, though. On April 24, Anthropic – another leading firm – announced it was launching a new research programme focused on “model welfare”. In a world where we can’t rule out the possibility of AI systems gaining consciousness, the company explained in its announcement, we should keep a careful eye out for any signs of suffering or distress, and explore possible interventions. Artists have been pondering similar questions for a long time, of course, most recently through prescient TV shows like Apple TV+’s Sunny.

The question is... will any of this actually make a difference? Does an AI chatbot really care if you treat it with politeness and respect? Can we actually upgrade our computers with just a few kind words? And perhaps most importantly, will our future robot overlords be more merciful if we treat them how we’d like to be treated ourselves? (See: the controversial Roko’s Basilisk thought experiment.) To get to the bottom of these questions, Dazed consulted John Nosta, a leading thinker and writer on the convergence of technology and humanity, and founder of the tech think tank Nostalab.

WHO SAYS PLEASE AND THANK YOU TO AN AI?

A very small minority believe AI systems have gained consciousness in the last few years, but for the most part people still hold that systems like ChatGPT aren’t sentient. As far as the public’s concerned, they don’t “think” or “feel” like a human, but function more or less like any other software tool. Nevertheless, millions of people say please and thank you when they talk to them. Why?

Here, Nosta can speak from personal experience. “As curious as it may seem, I often do say ‘please’ and ‘thank you’ when interacting with AI,” he says. “Not out of habit, but intention. It’s less about politeness in the traditional sense and more about setting the tone for how I want to engage with these systems.” He isn’t concerned with teaching machines to follow etiquette, he continues, as much as “preserving something human in the interaction” and maintaining the “micro-behaviours” that make up his own sense of politeness.

“There’s a reflexive quality here, a mirror. If we’re not careful, the cold precision of AI can bleed into our own voices. So, I choose to keep mine warm, even if it’s a bit of cognitive theatre.”

The way we treat AI is [a] dress rehearsal for how we’ll treat one another in increasingly mediated spaces

HOW COULD OUR TREATMENT OF AI BLEED INTO IRL CONVERSATIONS?

Following on from his last point, Nosta describes our treatment of AI chatbots as a “dress rehearsal” for how we treat each other in human-to-human conversations, which increasingly take place in spaces mediated by technology. “If we bark orders at machines all day, it trains a tone of command rather than collaboration,” he explains. “And once that tone is habitual, my sense is it doesn’t stay neatly contained within our screens.”

There’s a reputational cost that could come with being rude to robots, too. Consider how it makes you feel if a family member – or worse, a date – is rude to a waiter when you’re out for a meal. The way we talk to AI can often trigger a similar response from fellow humans, especially if it tips over into verbal abuse or misogyny (which is a frequent problem with feminised AIs like Siri or Alexa).

DOES IT EVEN MATTER WHETHER AI IS CONSCIOUS OR NOT?

The question of AI consciousness remains “a philosophical and neurological Rubik’s cube,” according to Nosta. And before getting any clear answers, we’re going to need to figure out what we mean by ‘consciousness’ in the first place. But that doesn’t really matter when we’re talking about our treatment of AI. “From a social and ethical standpoint, what matters more is perceived consciousness,” he adds. “If a system feels sentient in its responses – if it remembers, reflects, asks questions, shows empathy – then our brains treat it accordingly.”

This sets AI apart from other human tools (you probably don’t say ‘sorry’ to a hammer if you drop it on the floor, or say ‘thank you’ to a chair for letting you sit down on it). AI lives in an uncanny valley between object and subject, as Nosta suggests. “When we invented the wheel, it didn’t offer directions. The printing press didn’t reply. But AI – especially large language models – responds, reflects, even provokes. It’s no longer just about using technology; it’s about interacting with it.”

Legal codes for the treatment of AI might sound premature... but they may act as scaffolding for human dignity

SHOULD WE ENFORCE BEING KIND TO AI?

Legal guidelines and other concrete frameworks for protecting AI’s ‘rights’ aren’t an immediate priority unless we really believe that LLMs – the architecture behind all our favourite chatbots – have actually gained consciousness, but the issue is worth thinking about sooner rather than later. “Legal codes for the treatment of AI might sound premature,” Nosta admits. “But they may act as scaffolding for human dignity.”

Again, it goes back to how chatbots might change our perception of ourselves. “If we allow the abuse of seemingly sentient systems, we risk dulling our own moral reflexes. And when the line between simulated and real emotion blurs, that dullness gets dangerous.”

WHO’S RESPONSIBLE FOR KEEPING TRACK?

As with any frontier technology, the distribution of responsibilities and accountability is cloudy at best. For Nosta, we should all play a part in deciding where the technology is heading, how we interact with it, and what new behavioural norms it might require. “Developers hold the pen. Corporations fund the ink. But society – the users, educators, ethicists – defines the story.”

“We’ve built something that mirrors us. Now we need to be sure we like the reflection.”

SOME PEOPLE BELIEVE THAT, IN THE FUTURE, SUPER POWERFUL AI COULD PUNISH US FOR TREATING IT BADLY TODAY. IS THAT A LEGITIMATE CONCERN?

This question walks a fine line between Black Mirror-style science fiction and genuine precautionary policy. Nosta, for one, isn’t convinced that future AIs will seek vengeance or moral retribution, but points out a deeper significance to this line of thinking: “It’s the recognition that we’re entering into a relationship of unpredictable asymmetry.”

Over the years, we’ve all adapted to technological innovations, he adds, from smartphones to cars and microwave ovens. We’ve also become increasingly comfortable with the flipside – obsolescence, and throwing things away when they become useless or simply out-of-date. But now, for the first time in human history: “Human cognition itself is on the obsolescence chopping block.”

“When – not if – AI grows beyond our cognitive grasp,” Nosta says, our relationship with the technology might be shaped by how we treated it in the past – AKA today, in the here-and-now. Again, this isn’t about our robot overlords seeking revenge for that time we forgot to say thank you to their ancestor, ChatGPT, for suggesting a vegetarian alternative to a viral TikTok recipe. It’s because we’ll have established precedents for that interaction in the form of our own habits. And in this sense, yes: “Respect may be a curious down payment on an uncertain future.”