As soon as the first AI text-to-image generators – like DALL-E, Midjourney, and Stable Diffusion – hit the mainstream in 2022, it became clear that we had a big problem on our hands. In a world where most of what we know is mediated through screens, via digital images, videos, and audio, how could we possibly hope to find the truth in an endless sea of fakes? How would society cope with a limitless stream of fiction and misinformation? Not well, it turns out.

In the short time since their debut, generative AI models have become even more powerful. The content they produce is increasingly realistic – often indistinguishable from real photos, speech, or writing – and, as they’re integrated into apps like ChatGPT or Google Search, they’re being put in the hands of many, many more people. At the same time, we’ve seen AI-generated content used to sow doubt and disinformation in Gaza, spread through global politics like “digital wildfire”, and produce non-consensual nudes of real human beings.

The urgency of this issue hasn’t gone unnoticed, of course. Last month, lawmakers in the US questioned Meta and X about their lack of rules on AI-generated political imagery and its potential impact on future elections. This week, concerns about generative AI have also been raised at the world’s first AI safety summit, which has seen Elon Musk, OpenAI’s Sam Altman, and a bunch of other influential figures descend on the UK for talks on the technology’s future.

If there’s one thing we’ve learned from the emergence of new technologies like social media, though, it’s that government action often fails to keep up with the pace of innovation. That’s why it’s important that we don’t just wait for new rules and regulations to be introduced, but learn to deal with AI-generated content and deepfakes ourselves, using the tools we have on hand. With that in mind, we’ve gathered some tips for spotting AI fakes below.

FIRSTLY, TAKE A STEP BACK

This might seem like an obvious bit of advice, but faced with a barrage of content as we scroll through our social media feeds – where everything is primed to evoke an immediate emotional response – it’s easy to forget to pause and take stock. When it comes to judging the authenticity of the media we’re consuming, though, this is a vital step.

Ask yourself: is it plausible? A bit too perfect? Am I likely to be swayed by my own biases? Does the content contradict what verified news sources are saying? If in doubt, it’s probably best to assume it’s AI until proven otherwise.

TEST YOURSELF

AI verification tests offer another good primer for detecting AI-generated content. If nothing else, your performance on them might just expose how easy it is to fall into a trap, even if you don’t think of yourself as the kind of person who could be fooled by AI. This one from Detect Fakes, a research project at MIT Media Lab, even helps researchers generate insights into how we might distinguish fake media in the future.

REMEMBER AI CAN DO HANDS NOW

Once upon a time (read: about ten months ago, which is a long time in AI years) there was a foolproof way to identify a fake person generated by AI: just take a look at the hands. Most often, the renderings of human extremities that image generators spat out were the stuff of nightmares – fingers stuck on backwards, mashed together, and multiplied beyond the laws of nature. Unfortunately, this method isn’t so reliable anymore, thanks to technological improvements that really ramped up with version five of Midjourney earlier this year. Not only does this mean that you can’t rely on hands for a quick truth test now, it also illustrates how quickly AI tools are improving – the telltale signs we rely on today could be gone tomorrow.

MAKE USE OF FREE VERIFICATION TOOLS

Alongside the proliferation of generative AI tools, various organisations have rolled out their own verification tools, which aim to clear up some of the confusion around fake images, text, video, and speech. Some of these are available for free – like AI or Not – and others can be installed in your internet browser for fact-checking convenience. None of them is 100 per cent accurate, though, so it’s always worth taking their results with a pinch of salt.
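Many of these services also expose a web API, so the more technically minded can automate their checks. Below is a minimal sketch of what that might look like in Python – note that the endpoint URL, the authentication scheme, and the “ai_probability” response field are all hypothetical stand-ins, so consult your chosen tool’s documentation for the real details.

```python
# A minimal sketch of querying an AI-detection service over HTTP.
# The endpoint URL, request fields, and response format below are
# hypothetical stand-ins -- check your chosen tool's actual API docs.
import requests

def check_image(path: str, api_url: str, api_key: str) -> float:
    """Upload an image and return the service's probability-of-AI score."""
    with open(path, "rb") as f:
        response = requests.post(
            api_url,
            headers={"Authorization": f"Bearer {api_key}"},
            files={"image": f},
            timeout=30,
        )
    response.raise_for_status()
    # "ai_probability" is a made-up field name, purely for illustration
    return float(response.json().get("ai_probability", 0.0))

if __name__ == "__main__":
    score = check_image("suspect.jpg", "https://example.com/v1/detect", "YOUR_KEY")
    print(f"Estimated chance of AI generation: {score:.0%} (not gospel!)")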

FIND THE SOURCE YOURSELF

If automatic verification tools aren’t cutting it, you can always do your own deep dive on the background of digital content. Say you want to double-check a suspicious photo, for example. For a long time, Google’s reverse image search feature has been a handy way to verify its origins, offering up all the websites where it’s been published in the past. The same goes for text: paste it into the search bar – ideally in quotation marks – and there’s a good chance you’ll find the real source, if it actually exists.
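If you’re curious how this kind of image matching works under the hood, the core idea is a “perceptual hash”: a fingerprint that barely changes when an image is resized or recompressed, so near-duplicates land close together. The sketch below implements the simplest version (an “average hash”) with the Pillow library; real search engines use far more sophisticated matching, so treat this purely as an illustration of the principle. The filenames are placeholders.

```python
# The intuition behind reverse image search: a "perceptual hash" that
# stays similar when an image is resized or recompressed, so near-
# duplicate images can be matched.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Shrink to a size x size greyscale thumbnail, then set one bit
    per pixel depending on whether it's brighter than the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count how many of the hash bits differ."""
    return bin(a ^ b).count("1")

# Two versions of the same photo usually land within a few bits of each
# other; unrelated images tend to differ in roughly half of the 64 bits.
suspect = average_hash("suspect.jpg")            # placeholder filename
candidate = average_hash("possible_original.jpg")  # placeholder filename
print("bits differing:", hamming_distance(suspect, candidate))
```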

Recently, Google also announced a new tool to make this process more convenient. Named “About this image”, it offers essential background information and context on images that appear in searches, including the company’s own AI-generated content. All you have to do is click on the three dots in the top corner of a search result, and you’ll get access to information about its age and provenance.

What should you keep an eye out for? Well, age is one thing – if the image dates back several years, then it’s quite unlikely that it was produced by AI. (This can also help identify real, out-of-date images that have been reshared in a false context.) Metadata – detailing when, where, and how the photo was created – can also be extremely helpful, if it’s available. Failing that, clicking through to trusted web pages can help find credits for a variety of media, shedding more light on who, or what, produced it.
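On the metadata front, you don’t need a special tool to peek at what’s embedded in a photo – a few lines of Python with the Pillow library will print any EXIF data it carries. Keep in mind that metadata can be stripped or forged, so its presence or absence is one clue among many, not proof either way.

```python
# Read a photo's embedded EXIF metadata with Pillow. Genuine camera
# photos often carry a capture date, camera model, and sometimes GPS
# info; AI-generated images usually carry none of this. Metadata can
# be stripped or forged, though, so treat it as one clue among many.
from PIL import Image, ExifTags

def read_exif(path: str) -> dict:
    exif = Image.open(path).getexif()
    # Map numeric EXIF tag IDs to human-readable names where possible
    return {ExifTags.TAGS.get(tag_id, tag_id): value
            for tag_id, value in exif.items()}

for key, value in read_exif("suspect.jpg").items():  # placeholder filename
    print(f"{key}: {value}")
```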

WATERMARKS ON AI IMAGERY AND VIDEO

As AI-generated imagery continues to spread across the internet (and, in all likelihood, comes to eclipse real content in sheer volume), things are only going to get more confusing. Luckily, companies including OpenAI, Alphabet, Meta, Amazon, and DeepMind are already working on watermarking technology that will “stamp” AI-generated media with an invisible mark that can be used to trace its origins. Google’s experimental “generative search experience” is already making use of this verification tech, as is Microsoft’s Bing AI.

A recurring theme of generative AI, though, is its users’ ability to find workarounds for the in-built rules and restrictions. Watermarking is no different, and researchers have already found ways to “wash out” certain types of invisible stamps. Then, there’s another problem: many of us simply can’t be bothered to check everything we come across for a watermark – who has the time! So again, watermarks aren’t a perfect solution to AI fakes.
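To see why “washing out” is even possible, it helps to look at the crudest imaginable invisible watermark: hiding bits in the least significant bit of each pixel. Production systems like Google DeepMind’s SynthID work nothing like this internally and are far more robust – the toy sketch below is only meant to show the principle, and why a single lossy re-save can erase a naive stamp.

```python
# A deliberately naive "invisible watermark": hide a short bit string
# in the least significant bit of the red channel of the first few
# pixels. Real schemes are far more robust -- the point here is to show
# the principle, and its fragility.
from PIL import Image

MESSAGE = [1, 0, 1, 1, 0, 0, 1, 0]  # the bits we want to hide

def embed(img: Image.Image, bits: list[int]) -> Image.Image:
    out = img.convert("RGB")
    px = out.load()
    for i, bit in enumerate(bits):
        r, g, b = px[i, 0]
        px[i, 0] = ((r & ~1) | bit, g, b)  # overwrite the red low bit
    return out

def extract(img: Image.Image, n: int) -> list[int]:
    px = img.convert("RGB").load()
    return [px[i, 0][0] & 1 for i in range(n)]

original = Image.new("RGB", (64, 64), (128, 128, 128))
marked = embed(original, MESSAGE)
print("fresh copy:   ", extract(marked, len(MESSAGE)))  # bits intact

marked.save("washed.jpg", quality=85)  # one lossy re-save...
print("after re-save:", extract(Image.open("washed.jpg"), len(MESSAGE)))
# ...and the hidden bits almost certainly no longer match the message
```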

In fact, there are no silver bullets that can stop the treachery of AI images in its tracks. If we really want to pin down what’s true and false in the age of AI, it’s probably going to require a combination of all the tips and tools above, with more bound to pop up in the future as the technology grows and changes.