For so long, the discourse surrounding AI and music has centred on rehashing the same narrative: AI will steal your jobs, replace your favourite artists, your life, practically everything! There’s no denying that the mainstreaming of generative AI has ushered in a new wild west era, yes. But it’s also creating new possibilities for creativity. While there are certainly real fears surrounding labour and artistic rights – artists like Billie Eilish have spoken openly about the “predatory” use of AI – there’s more to AI than Big Tech and doomer discourse. Early adopters like Holly Herndon have long championed the creative potential of AI, while artists like Lee Gamble and Oneohtrix Point Never have incorporated the tech into their recent releases, with FKA twigs, Grimes and Sevdaliza experimenting with AI-generated alter egos.

When talking about the applications of AI within music, it’s important to separate AI as a tool used by artists from the companies profiting from it. “At most, AI is a fascinating collaging tool the likes of which we've never experienced before – with creative and economic implications we have barely begun to grapple with,” says Team Rolfes, the digital crew behind Herndon’s AI cover of “Jolene” alongside virtual visuals for the likes of Lady Gaga and Danny L Harle. The group are currently gearing up to present their ‘321Rule’ live AV show – a real-time club-theatre performance that incorporates AI with slapstick humour and manic visuals, and their most ambitious project to date – as part of Sónar+D’s extensive AI and music programme. “To be meaningfully compelling, AI (like any other tool) has to be wielded with a level of intentionality that surpasses its (largely ephemeral) novelty,” they add. “Everything else is just one more drop of oil, lubricating the slide of our cultural luge race to the bottom.”

AI is capable of creating a convincing replica of a human voice or, in fact, any instrument, real or imagined. Want to hear what a human voice would sound like as a tuba? Or a reggaeton track mashed up with death metal and drone? With near-infinite memory and processing power at its disposal, AI can seemingly do it all: a shiny virtual musician living inside your DAW. “Machine learning is kinda techno,” says Hyperdub musician Lee Gamble. “This sci-fi idea that techno is built out of a man-machine symbiosis.” With AI, computers are co-evolving with humans through the exchange of cultural data – the data sets that set the parameters within which generative AI operates. “I definitely feel we’ve entered a new age of collage, mash-up, edits and sampling, especially with stem separation getting so good,” agrees Gamble. “That said, the future or futurism(s) can often feel more interesting as ideas than as a realised thing actually in your hands.”

It can be helpful to imagine AI as a dialogue between the human imagination (us) and the hallucinations conjured by the machine. “This is where the real magic is,” says Team Rolfes. While text-to-music generators can provide a decent gateway for entry-level musicians – new tech start-up Suno AI bills itself as a ChatGPT for music – they will also intensify the AI slop clogging up our feeds. Think of all the bland-sounding AI-generated muzak promoted by the Spotify algorithm, or the millions of discarded Midjourney images collecting dust across the web. This is an inevitable outcome for any technology that’s easy to access – democratising a tool encourages laziness, but it can also open pathways to new and unexpected results. “I would never look to a machine for the complete work, but it can throw curveballs your way that send your own creativity bouncing into unexpected directions,” adds Team Rolfes.

“AI seems a really applicable tool for exploring the technology of synthetic voice simulation but also how to think about staging them and the human body together” – Lee Gamble

Anyone who’s played around with AI enough will tell you that there’s a certain magic to the machine and its incantations. Oftentimes, it feels dreamlike, disembodied, disjointed, forcing the listener to step outside of their brain and consider what lies on the outer edges of human understanding. On his 2023 album Models, for example, Gamble uses AI to create a choir of synthetic voices that mutate across the seven tracks in a way that sounds particularly inhuman. Performed live with choreographer Candela Capitán, the Sónar and Unsound co-produced show features AI as a disembodied human clone juxtaposed with real human dancers, whose movements are literally chained to their phones and broadcast live to TikTok, highlighting the tension between what’s real and artificial.

Models is an example of the ways AI can interrogate the ongoing relationship between man and machine. “AI is a simulation technology looking for human approval by showing us how good it is at being like us, but it’s not really, there are tonnes of indications to reveal it as non-sentient,” expands Gamble. “What I found most intriguing was the failures, the stumbles, the falls, the moments when an experiment gets stuck in a loop and runs into a wall over and over again, or generally moves in a way that the programmers don’t want. I think you can see Models leaning into these things both on the album and in the performance piece. In many ways, Models is dealing with spirit, mimicry and embodiment and AI seems a really applicable tool for exploring that both in terms of the technology of synthetic voice simulation but also how to think about staging them and the human body together.”

There’s an idea that music creators will be replaced by machines, but you only need to look at the metaverse flop and the lack of general enthusiasm around live streams or augmented reality to glean that people want to experience real people. It’s something that can already be felt across music – as technology speeds up, there seems to be a heightened desire for ‘real’ instruments and live bands. If the so-called return of guitar music is any indication that ultra-smooth tech has become too impersonal and unreal, then we won’t be ascending into AI domination anytime soon. That’s not to say that AI music won’t function as its own genre, especially for commercial use, but that AI will become just another tool in the musician’s arsenal, comparable to other popular music technologies like samplers or synthesisers.

“Humans should keep driving and define where we want to go with it and what the limits are,” says AI researcher Anna Xambo. “As with any new popular music technology – such as the sampler in the 80s or streaming services in the 00s – there is a sociocultural change in creating and thinking about music that goes together with any new technology adoption. Aspects that should be discussed more systematically are the potential environmental issues brought by the intensive use of AI, as well as how to avoid the potential increase of the digital divide between those privileged who can get access to these technologies versus those who cannot afford it.”

In 1935, Walter Benjamin wrote about the age of mechanical reproduction to describe how media like photography altered the reaction of the masses toward art. Nearly a century on, we’re living in the age of algorithmic production, where works of art are not only mechanically produced copies of each other but phantasms pulled from the Borg-like hive mind that is AI. Maybe, then, the question shouldn’t be whether AI threatens musicians, but rather what it means to be a musician on the brink of a new and surreal technological revolution, where artists can use AI to harness the errors, glitches and imperfections of the machine to create something beautiful and otherworldly.

This doesn’t mean we shouldn’t be worried; there are plenty of red flags to be cautious of – “behind the scenes, I already hear about the major labels looking to use AI to replace not just the musicians, but directors, designers, publicists, you and me,” says Team Rolfes. One solution could be to reverse the problem: to imagine the relentless stream of AI content as a way to reevaluate the tools we already have available to us. “Our algorithms are already choking with slop,” elaborates Rolfes. “AI may free many of us from the pretence that mass algorithmic culture is worth engaging with in the first place – a shaky proposition currently as it is!”

Lee Gamble presents Models with Candela Capitán, Team Rolfes present their ‘321Rule’ live AV show starring Lil Mariko, and Sevdaliza performs at Sónar festival, taking place in Barcelona between June 13 and 15. AI & Music powered by S+T+ARTS debuts at Sónar; find out more about the programme here.