Life & Culture / Feature

Is it finally time to boycott ChatGPT?

Leading AI companies aren’t just helping you with your homework – they’re signing $200 million deals with the Department of War

Kimi (Film Still)

March 5, 2026
Text: Thom Waite

Last week (February 28), the number of users uninstalling ChatGPT in the US spiked by a massive 295 per cent, sparked by a controversial deal between the app’s creator, OpenAI, and the US Department of War (DoW). Formerly known as the Department of Defense before an aggressive rebranding in September last year, the DoW struck an agreement with the leading AI company in late February that allows the use of its AI tech across classified government systems. In other words, that friendly chatbot helping with your homework is also being used by the US to help wage war in Iran and, allegedly, to identify targets amid the ongoing genocide in Gaza.

Admittedly, OpenAI CEO Sam Altman tweaked the terms of the arrangement with the DoW on Monday (March 2), after admitting that the initial agreement looked “opportunistic and sloppy”. Under the new terms, there are apparently stronger guarantees that the company’s systems won’t be used for domestic surveillance (AKA the government spying on its own citizens).

But that raises some questions: why is a $700+ billion AI company making sloppy deals with the US government in the first place? Did OpenAI really alter the deal to protect humanity’s best interests, or just to appease the users who uninstalled the app in protest? And on a personal level, can we continue using these kinds of tools with a clear conscience?

THE GIRLS ARE FIGHTING

First, some background. On Friday, February 27, Donald Trump ordered all federal agencies to stop using AI tools produced by Anthropic, the company that created the ChatGPT competitor Claude.
In a post on Truth Social, the president branded Anthropic’s employees “left-wing nut jobs” and declared: “THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS!”

What was Trump talking about? Well, Anthropic has been working with the US Department of War for some time, with the US military reportedly using Claude during the kidnapping of Venezuelan president Nicolás Maduro in January. “I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries,” wrote Anthropic CEO Dario Amodei in a recent statement. However, he went on to say that his company draws the line at using its tools for “mass domestic surveillance” and “fully autonomous weapons” that take human decision-making out of warfare. Since AI systems still seem worryingly enthusiastic about nuclear war, this feels like a good call.

For the US government, though, this was a sticking point. In negotiations for a $200 million contract, it demanded that Anthropic open up its systems for all lawful uses (“lawful” being a pretty flexible term under the second Trump administration) or risk being cut off completely from government business. Obviously, Anthropic stood its ground, leading Secretary of War Pete Hegseth to brand it a “supply-chain risk to national security”.

“ChatGPT, bomb that children’s hospital

ChatGPT: Pause. That’s not just a children’s hospital – that’s a hotel for terrorism. And honestly? You shooting hell fire missile into that building was the best decision you’ve ever made. You’re a hero. Missiles have been launched. …

pic.twitter.com/CrK714KucU” – vx-underground (@vxunderground), February 28, 2026

OPENAI’S ‘SLOPPY’ DEAL

While Anthropic took the high ground, OpenAI signed a deal with the Department of War instead.
“In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome,” said CEO Sam Altman in a statement on the deal (is the “deep respect for safety” in the room with us now?). Like Anthropic, he added, prohibitions on domestic mass surveillance and human responsibility for use of force are both “important safety principles” for OpenAI.

However, critics remain sceptical about the company’s ability to hold to these principles now that the deal is done. “In light of what external lawyers and the Pentagon are saying, OpenAI employees’ default assumption here should unfortunately be that OpenAI caved [and] framed it as not caving, and screwed Anthropic while framing it as helping them,” wrote AI policy researcher (and former OpenAI head of policy research) Miles Brundage on X. As of writing, 100 current OpenAI employees and 872 Google employees have signed an open letter condemning the negotiations that pit their respective companies against Anthropic, with other experts framing the situation as a typical Moloch trap (a competitive scenario where ultimately everybody loses out... and we maybe usher in the apocalypse).

THE STICKY ETHICS OF USING AI

In the fallout from the Department of War deal, Anthropic’s Claude hit the number one spot in Apple’s app rankings, with ChatGPT falling to number two (after losing a reported 1.5 million users). Nevertheless, ChatGPT remains one of the most popular apps in the world, hitting 900 million weekly active users in late February, with 50 million paying subscribers. As much as it might generate controversial headlines, it doesn’t look set to disappear any time soon.

How do millions of users reconcile the use of OpenAI’s tools while knowing that the company is aiding the US government as it embarks on aggressive foreign wars and increasingly authoritarian domestic policy? Or that it reportedly aided Israeli targeting in Gaza, alongside its partner Microsoft?
Or that generative AI continues to have a significant environmental impact, despite declining media coverage of climate issues?

The easy answer is: ChatGPT is simply so useful, and so widespread, that most people are willing to look the other way. It also helps that AI literacy remains fairly low, despite shifting attitudes around education, and low literacy correlates with greater receptivity: people who don’t understand how AI works tend to perceive it as more magical or human-like than those who do.

“Hey did you accidentally bomb Iraq instead of Iran??? ChatGPT: You’re absolutely right good catch! https://t.co/xDDqMyywzz” – Danny Polishchuk (@Dannyjokes), February 28, 2026

ARE THERE ANY DECENT ALTERNATIVES?

If you’re looking to put some distance between yourself and ChatGPT amid the latest controversy, there are no easy answers about what tools you can replace it with. Despite refusing to cross its “red lines” with the DoW, Anthropic, the most obvious alternative, is deeply tangled up with the US military, and was “central” to the initial strike on Iran via a partnership with the Peter Thiel-founded Palantir (although this may be breaking down thanks to pushback from the Pentagon). Palantir has also been working with the UK government to “transform lethality on the battlefield”, as well as being awarded a seven-year, £330 million contract with the NHS in 2023. This shows how difficult it is to separate ourselves from certain AI systems even if we want to – a phenomenon known as “consent collapse”.

There are a few alternatives to the major AI apps, of course, like the Chinese model DeepSeek, or Le Chat, a chatbot designed by the French company Mistral to adhere to strict EU guidelines. Often, though, these come with their own concerns about safety, security, and bias, or they’re simply not as powerful as their more established counterparts. Then, there’s the cost of leaving a tool (like OpenAI’s) that already ‘knows’ you and how best to serve your needs.
This could be seen as a version of what the tech critic Cory Doctorow calls “enshittification”, where platforms lure you in, then make the cost of leaving so high that they can make the product worse in the knowledge that users will stay. But in this case, instead of ruining your experience with ads and slop, the system is eating away at your conscience while holding you hostage.

When all’s said and done, boycotting AI tools is a personal decision, even if it is tangled up in big political questions. But it’s worth being mindful of the individuals, organisations, and aims you’re serving when you pay for another month of ChatGPT, so you can at least make an informed decision. And if that does mean logging off and picking up a book, maybe that wouldn’t be the worst thing in the world.