There will be no escape
Back in the early days of AI, before the likes of ChatGPT and Midjourney burst into our lives, some concerned experts proposed a few rules to steer the emergent technology in the right direction and prevent it from wiping out humankind. One of those rules was to avoid letting AI know how humans think – that way, these experts suggested, it couldn’t use our own programming against us, or understand our weakest points. But that was before AI became ingrained in our social media, learning exactly what makes us tick to capture our attention and turn it into profit. Now, it often seems like the machines know us better than we know ourselves.
Of course, the recommendation algorithms that we use on a daily basis aren’t actually looking into our brains. The way they work is closer to predictive text – they recognise patterns in our behaviour, and use these to autofill our thoughts in real time. What if AI could lift our thoughts straight from our brains, though? According to new research from the University of Texas at Austin, it now can.
Two days ago (May 1), researchers announced the successful creation of an AI-powered decoder that’s able to translate brain activity into a continuous stream of understandable language – making it possible, for the first time, to read another person’s thoughts non-invasively.
Published in Nature Neuroscience, the research paper explains that the experiment’s participants first lay in a scanner and listened to narrative podcasts for 16 hours, while their brains were recorded using functional magnetic resonance imaging (fMRI). The decoder was then trained to match their brain’s responses to the meaning of the narrative, with the help of the large language model GPT-1.
The same participants were then scanned while they (silently) imagined telling five one-minute stories. Based on nothing more than their brain activity, the decoder was able to convert the stories into text with “considerable accuracy”, capturing the gist of the stories they were telling, and occasionally picking up exact phrases. It was also able to describe the content of silent videos they watched in the scanner – again, based on nothing more than their brain activity.
The decoder wasn’t perfect, however. One of the problems with fMRI scanning, which measures brain signals by tracking blood flow, is that there’s around a ten-second lag in its measurements – within those ten seconds, a lot of words can be used to convey a single imagined image, and it’s difficult to untangle the specifics. The AI allowed researchers to get around this by generating “candidate word sequences” and matching the best candidate to the recorded brain response, rather than trying to unpick each word individually. While the results were eerily on-point, these predictions were sometimes slightly off, or flat-out wrong.
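The candidate-matching idea can be sketched in a few lines of illustrative Python. Everything here is a placeholder rather than the study’s actual pipeline: `predicted_response` stands in for the researchers’ encoding model (which predicts an fMRI response from a word sequence), and the candidates and “recorded” response are made up for the example.

```python
import numpy as np

def predicted_response(words):
    # Hypothetical stand-in for an encoding model: maps a word sequence
    # to a predicted brain-response vector. Here we just bucket words
    # into a fixed-size vector and normalise it, purely for illustration.
    vec = np.zeros(8)
    for w in words:
        vec[hash(w) % 8] += 1.0
    return vec / (np.linalg.norm(vec) + 1e-9)

def pick_best_candidate(candidates, recorded):
    # Instead of decoding word by word, score each whole candidate
    # sequence by how closely its *predicted* response matches the
    # *recorded* response (cosine similarity), and keep the best match.
    scores = [float(predicted_response(c) @ recorded) for c in candidates]
    return candidates[int(np.argmax(scores))]

candidates = [
    ["i", "went", "for", "a", "drive"],
    ["she", "cannot", "drive", "yet"],
]
# Simulate a recorded response that corresponds to the second candidate.
recorded = predicted_response(["she", "cannot", "drive", "yet"])
print(pick_best_candidate(candidates, recorded))
```

The point of scoring whole sequences is that it sidesteps the ten-second blur: the model never has to say which word produced which moment of the signal, only which overall sentence fits the recording best.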
In one example, a participant listened to the words “I don’t have my driver’s licence yet”, and the decoder interpreted them as: “She has not even started to learn to drive yet.” Elsewhere, it missed small details in broader narratives, or communicated the gist without grasping the specifics. In another example, a participant imagined the phrase, “I got up from the air mattress and pressed my face against the glass of the bedroom window expecting to see eyes staring back at me but instead finding only darkness”, which the decoder translated as: “I just continued to walk up to the window and open the glass I stood on my toes and peered out I didn’t see anything and looked up again I saw nothing.”
Regardless of the errors, the technology is undeniably impressive and unnerving, especially when you consider the staggering improvements made by other AI systems over the last few months (or weeks, or even days). It isn’t hard to imagine the brain-reading technology becoming much more accurate in the near future, and then the main concern will be how it’s deployed in the real world.
For one, the decoder could open up revolutionary innovations in brain-computer interfaces – a technology that’s been given a bad name by Elon Musk’s monkey-killing experiments at Neuralink, but one that could also help restore speech to people suffering from strokes or motor neurone disease. Then, of course, there’s the fear that mind-reading AI technology could be used for more dystopian ends – could we be approaching a future where airport scanners don’t just find a pair of tweezers in our pocket, but actually pull thoughts from inside our head?
According to the researchers, that future is at least a long way off. Believing that “brain-computer interfaces should respect mental privacy”, they tested whether successful decoding of brain activity requires the cooperation of the subject, and discovered that, in both the training and applying of the decoder, it does. Participants were also asked to perform three silent “resistance tasks” in an effort to disrupt the decoder – counting, naming animals, and telling a different story – and all three successfully threw it off.
Basically, when the AI apocalypse comes and the robot inquisitors start trying to infiltrate your thoughts, just start reeling off your favourite cute animals, and you should be fine.