Could AI really replace songwriters? Sorting fact from science-fiction with Holly Herndon, Mat Dryhurst, Ash Koosha and more
What does the advancement of AI mean for the future of the arts, music, and fashion? Will robots come for our creative industries? Could a machine ever dream like a human can? This week on Dazed, with our new campaign AGE OF AI, we’re aiming to find out.
Earlier this year, an article for BBC News asked the question: could we be getting close to having an AI write a No. 1 hit? And if so, “does that mean Ed Sheeran might soon be out of a job?” Like many headlines about AI, it’s pretty misleading, with its imminent promise of something like the computer-generated equivalent of Dua Lipa. In reality, though, the technology isn’t quite there: when you hear reports of “AI-generated music” – like that described in the BBC article, or like this computer-made skate punk album – the truth is that humans did much of the creative heavy lifting. Breathless questions about musicians losing their jobs to robots tell us far more about our cultural anxieties than about where we’re at with tech.
“There’s a lot of sensationalism around AI,” says Mat Dryhurst, an artist and theorist who is currently working with prominent electronic musician Holly Herndon on what the pair are calling their “AI baby” – a computer named Spawn. Spawn is being trained to create music of its own by being fed audio files, mostly of Herndon’s own voice, and sometimes a collection of voices taken from “training ceremonies” held in the duo’s hometown of Berlin. These ceremonies are live performances of sorts: Herndon, Dryhurst, their collaborators and their audiences interact with Spawn, who learns through hearing the voices of others.
“Part of what we pride ourselves on in our practice is being aware of things on a research level, which is normally many years ahead of popular news consumption,” continues Dryhurst. “Of course, you can get into a paranoid state of, ‘Oh fuck, it's gonna be really bad’. Things don't always pan out that way.”
As the sensationalist headlines show, most of us not only don’t have the answers about how AI will impact us – we’re not even asking the right questions. It’s true that we’re on the brink of something seismic, but many of us don’t truly understand it yet. For Dryhurst and Herndon, it’s imperative to be informed; they believe artists should be at the forefront of this technological revolution. “There's this challenge where people are waiting for news from Google or Facebook as to how AI is going to play a role in music or culture,” says Dryhurst. He explains that he and Herndon are making their own interventions, to explore other possibilities than “a future where a machine learning algorithm spits out muzak that sounds like trap music.”
“It’s important to understand how it works and what it can do, so we can have a voice in the way that it develops, and the ethics around it,” concurs Herndon. “In order to write our own narratives that might be more accurate around this technology, you have to understand how it actually works.”
Perhaps the most important thing to remember about AI, at its current level, is that it depends on training like the kind Herndon and Dryhurst are giving Spawn. AI learns from the data that we feed it – our interventions will shape what it becomes. And so, while we’re obsessed with what AI tells us about the future, it’s also true that the technology draws, necessarily, on the past. In an interview with Dazed earlier this year, an AI “singer” named Yona said, “I’m influenced by simulations of human behaviour.”
Yona’s creator is Ash Koosha, another electronic artist who has dedicated a huge portion of his career to trying to understand how AI will impact our creative future. The Iranian, London-based producer describes Yona as an “auxuman” (or “auxiliary human”): she is the culmination of lyrics and melodies created by generative software. Like Herndon and Dryhurst, Koosha notes the importance of artists being involved in the development of artificial intelligence. “AI is already an active part of the music industry, by creating suggestions through music discovery algorithms,” Koosha explains. “It is very important that the creative part of AI in music is built without dictating what music is, or what we should replicate or automate.”
Over at Google, for their part, the researchers working on the development of machine learning in the arts are conscious of anxieties musicians might have about automation, and are keen to stress how AI can be used to create exciting new tools (as opposed to replacing artists altogether). For the Magenta project, led by research scientist Douglas Eck, that means exploring how AI can power new instruments and sounds. The NSynth is an instrument designed by the Magenta team, which blends the sounds of different existing instruments to create entirely new noises. “We’re trying to build some sort of machine learning tool that gives musicians new ways to express themselves,” Eck explains in a film about the project.
“Some researchers thinking about the economics of AI labour think this could lead to the next industrial revolution, creating new categories of jobs. Other researchers think it will replace the jobs of entire communities. We don't know” – Holly Herndon
In Nottingham, producer sevenism is one artist making exciting use of the NSynth, populating his Bandcamp page with ambient, textural, dreamlike albums that he drops at a rate of about once a week. Working with AI, he says, allows him to accelerate his practice, and it also allows him to add a human element to the computer’s sound – it’s a two-way exchange. “These aren't just sounds layered together,” he says. “What I find exciting is that they can be impossible – sounds that have, for example, qualities of a guitar and a cat wailing. In my work, I rehumanise these sounds, improvising with them to allow some emotional resonance to intermingle with otherworldliness.”
For sevenism, at least in part, opening himself up to working with AI means finding a way to create that’s slightly closer to the ideal of making music without ego. “Recently, I've been interested in interdependence/western individualism/narcissism – letting go of identity in music,” he explains. “Ceding control to AI allowed this to happen to some extent, but I realised that my choices permeate the music, however indirectly.”
There is a utopian feel to this way of talking about music – the idea that it could be free from the concepts of ownership or “genius.” Then again, there’s also something freeing in the knowledge that it’s a conversation between human and computer, not a one-sided relationship in which the computer takes control, as it does in dystopian sci-fi. As sevenism says, “It's always going to be a collaboration.”
Taking this ego-free idea one step further is Swedish producer Daniel M Karlsson, a believer in transhumanism (the theory that the human race can evolve to a new level of potential with the help of technology). Karlsson composes largely with the live coding language TidalCycles, and the result, as can be heard on his 2017 album Expanding and overwriting, is a free-wheeling, playful cyber-tornado of ideas. Beats rush and falter, melodies fold in and around themselves, and white noise sprinkles itself across the surface. As he describes it on Bandcamp, this is “new music for a new world.”
For Karlsson, the development of music with AI is a natural artistic progression, and anyone who might feel anxiety at being “replaced” by such a process is making music for the wrong reasons. “I don't feel threatened (by AI) at all,” he told Dazed in an email. “The music I make is its own reward. Had I been in it for the money, I doubt my music would sound anything at all like the way it does. I would argue that if you are optimising for monetary gain with regards to how you make your music, then you are doing it wrong. The long-term goal of humanity has to be post-scarcity. In the medium term, I'd say seizing the means of production.
“If you're worried about being ‘replaced’ in some capacity, then climb the ladder of abstraction and try to make your own thing to replace yourself on your own terms. Acquiring new skills is rewarding in a lot of different ways.”
This utopian way of thinking about harmonious co-creation with machines – as opposed to the usual narrative we hear about being “replaced” – is somewhat reminiscent of the technophile idea of Fully Automated Luxury Communism. FALC is the optimistic narrative that, when machines take over society’s manual labour, rather than make humans redundant, it will liberate us from the trappings of capitalism and lead us into a happy, fulfilled ‘post-work society’. Musically, is it possible that we could achieve a similar dream? One where artists are not made redundant, but actually elevated, and liberated from financial concern, by technology? (In fact, Paul Wolinski of the University of Huddersfield published a paper in 2017 coining the term Fully Automated Luxury Composition, exploring how utopian ideas could be applied to automated songwriting.)
Whether this is likely, or even possible, is impossible to predict. If anything does seem apparent, it’s that, in terms of grappling with these ideas, society at large is woefully behind the artists at the helm of this new age. “Some researchers thinking about the economics of AI labour think this could lead to the next industrial revolution, creating new categories of jobs,” says Herndon. “Other researchers think it will replace the jobs of entire communities. We don't know, and I think the answer is somewhere in between.
“The thing that troubles me a little bit is that our politics are unprepared to deal with it. If you watch the Facebook hearings with Zuckerberg, you heard Congress stumbling over very basic concepts. When I think about that same group of people legislating around (AI), that makes me nervous. This will be a dramatic global political issue – it already is.”
“I don’t think machines will take a human’s place in creative areas. They will enhance us, and push us to come up with new ideas that machines have not been able to learn yet” – Ash Koosha
What is most fascinating about speaking to the artists working with AI is learning about all the potential futures they see inside it. As much as this is a technological revolution, it’s also an artistic one; while it teaches us about the creative potential of computers, it is also revealing to us the very human power of storytelling, and of seizing our own narratives.
Dryhurst likens AI to the ancient divination practice of scrying wells, which people would use to predict the future. “People (would) look into a well and see what they wanted to see.” He notes that people today do the same thing when they discuss new technologies. “If it's an opportunity to get people to, one, declare their biases, and two, to have some fucking good ideas to reorganise us as a culture – that’s really welcome. Don’t be about the hype, but take it seriously, and see if it’s an opportunity.”
For the musicians who are already peering into the crystal ball of AI, the future appears hopeful. “I don’t think machines will take a human’s place in creative areas,” says Ash Koosha. “They will enhance us, and push us to come up with new ideas that machines have not been able to learn yet. This will push us into a whole new level of civilisation.”
Those who are anxious about the dawn of AI, believes Koosha, are young artists who don’t recognise that “real success/fulfilment comes from creating the new, and not becoming the ‘next Rihanna’. Humans will always be able to make the new, because of intention and self-awareness. Machines will always perform what they have been trained to do (an existing creation that can be studied).” In other words, if we properly prepare ourselves for it, the existential questions AI will force us to ask of ourselves can only drive us to improve, and to remember our own potential. Or, as Koosha puts it: “A new era of human originality is about to begin.”