With so much focus on visual and tactile experiences, it’s time to give aural tech some long-overdue attention. Humans have long been fascinated by cultures that relied on oral history to preserve traditions, myths, and customs, as befits our collective programming to make the unknown known. In the age of surveillance, there is little mystery left about what might have been said or overheard, and consequently sound tech has branched out beyond recording and preservation to encompass everything from voice-controlled wearables and biomimetic aids to new uses for sonar and echolocation. Consider the now-infamous flight MH370, which seems to have vanished into thin air: isn’t it strange that, even with all of this cutting-edge technology and interconnectivity, we’ve regressed to good old-fashioned pinging methods to try to locate it?
On a less immediately functional front, artists and music theorists such as Florian Hecker, Minsu Kim, Nonotak, and Iannis Xenakis have long explored (and continue to explore) leftfield sound art, hoping to trigger new methods of sound synthesis that could be, in many ways, analogous to developing a new language. How should we consider a piece of music composed by artificial intelligence? With the rise of new building materials, how can we use technology to improve spatial acoustics and enhance urban architecture? How can we develop methods for A.I. to recognize some of the more nuanced human sounds – laughs, sighs, and everything in between – and adapt to them in a functional, behavioral way? Dazed tunes into ten cool things being done with sound that could change the way we interact with art and science.
ACOUSTIC CLOAKING
Engineers have created a 3D acoustic cloak out of plastic, which basically means we can now hide objects from sound waves, and from the sonar and acoustic imaging systems that rely on them. In one professor’s words, “by placing this cloak around an object, the sound waves behave like there is nothing more than a flat surface in their path.” The 3D aspect refers to the fact that the cloak offers omnidirectional invisibility regardless of a sound wave’s origin or direction – perfect for military use and, on a much smaller scale, for privacy-minded startups looking for a new breed of wearable tech.
THE EYEBORG
Neil Harbisson, a British artist born with achromatopsia, has spent the last ten years researching and developing his own osseointegrated tech to allow him to perceive color. Harbisson, who can only see in black and white, previously wore an external electronic eye that let him experience basic colors as specific sound vibrations; when he first began using it, he said that adapting to this new mode of sensory input gave him “strong headaches because of the constant input of sound, but after five weeks (his) brain adapted to it, and (he) started to relate music and real sound to color.” This month, after persuading a Catalonian doctor to implant a chip in his skull, Harbisson was finally able to unveil his self-styled “eyeborg,” a curved antenna that allows him to experience a wider spectrum of colors. The eyeborg uses a camera to pick up light frequencies and translates them into vibrations that he interprets through bone conduction. Mass-producing this sort of tech, initially dismissed by some as an art-science frivolity, could be revolutionary for the future of assistive tech as well as mainstream art.
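The basic translation step can be sketched in a few lines. This is a toy illustration of the simplest possible light-to-sound mapping – our own assumption, not Harbisson’s actual firmware – built on the fact that transposing a light frequency down about 40 octaves lands it in the audible range:

```python
# Toy sketch (not Harbisson's actual firmware): transpose a light
# frequency down ~40 octaves so it falls in the audible range.

def light_to_sound_hz(light_thz: float, octaves: int = 40) -> float:
    """Map a light frequency in terahertz to an audible frequency in hertz."""
    light_hz = light_thz * 1e12       # THz -> Hz
    return light_hz / (2 ** octaves)  # halve the frequency 40 times

# Red light sits around 430 THz; 40 octaves down lands near 391 Hz.
print(round(light_to_sound_hz(430)))  # → 391
```

The real device also has to deal with hue detection, saturation, and the jump to ultraviolet and infrared, but the octave-transposition idea is why color can feel like pitch at all.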
NON-LINEAR ACOUSTIC DESIGN
Most humans are fond of a nice, sensible linear relationship between form and function, but experiments in non-linear acoustic design could prompt radical change in how we approach the spatial elements of art and architecture. Volumes for Sound is an ongoing artistic initiative that aims to encourage non-linear sound design using ad-hoc construction techniques and modular structures. This collaborative exercise is, in effect, a patient exploration of how small-scale temporary sound structures and low-tech improvisation could trump the need for large-scale, hi-tech architecture in the quest for better acoustics.
THE QUOTIDIAN RECORD
Documenting life through words and visuals (gifs, film, still photos, paintings, sketches, and so on) is old. Vinyl is also old, but thanks to the undying flannel-clad efforts of music purists (and slightly misguided corporations – yes, Whole Foods, we’re talking about you), records are making a comeback. Brian House has woven these two virulently nostalgic things into one entity: the Quotidian Record, an actual record that attempts to “make sense of the everyday” using meticulously collected personal data. As House explained, “This project builds off a year’s worth of data that I recorded using my phone…I then clustered all these points to discover what the most prominent places in my life were, and how they were connected. Each place gets assigned a step of the scale in the music, and each city a key. There’s kind of an underlying pulse to the composition, each pulse which represents two hours of actual time. And what you hear on top of that are these little motifs, the geographic narratives that I cycle through over the course of my daily movements…one rotation of the record corresponds to one day of lived time.” Part life-sundial, part nostalgic artefact, part sonic experiment, House’s project shines a light on alternative methods of preserving memory and offers a fully-immersive example of multimedia journaling that could birth a new breed of data-fuelled art.
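The place-to-scale mapping House describes can be sketched very simply. The following is our own toy re-creation of the idea – rank places by how often they were visited and hand each one a degree of a major scale – not his actual code:

```python
# Toy re-creation of the mapping House describes (ours, not his code):
# rank places by visit frequency and assign each one a major-scale degree.

from collections import Counter

MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets above the key's root

def places_to_degrees(visits):
    """Assign the most-visited places to the lowest degrees of the scale."""
    ranked = [place for place, _ in Counter(visits).most_common()]
    return {p: MAJOR_SCALE[i % len(MAJOR_SCALE)] for i, p in enumerate(ranked)}

visits = ["home", "studio", "home", "cafe", "home", "studio"]
print(places_to_degrees(visits))  # → {'home': 0, 'studio': 2, 'cafe': 4}
```

Swap the scale per city and you have his “each city a key” rule; the record’s underlying pulse then just steps through these assignments in two-hour chunks.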
TATTOOS AS SHEET MUSIC
Russian media artist and audio experimentalist Dmitry Morozov, better known as vtol, has invented a sound controller for reading tattoos like sheet music. Using a combination of sound manipulation software, a Nintendo Wii remote, black line sensors, and an Arduino Nano microcontroller, vtol has created a wearable device that can “read” the black lines in a tattoo while allowing the user full control of the audio output – the demo video evokes a theremin-type situation in which the user can move his/her forearm to create different effects, as well as manipulate the speed at which the music is “read.” Sign us up.
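The line-reading principle is the same one used by line-following robots. Here is a minimal sketch of the logic – our guess at the principle, not vtol’s actual Arduino code – where a drop in reflectance (ink is darker than skin) fires a note:

```python
# Minimal sketch of the line-reading logic (our guess at the principle,
# not vtol's Arduino code): fire a note-on event each time the sensor
# crosses from skin onto a black tattoo line.

def sensor_to_triggers(readings, threshold=0.5):
    """Turn raw reflectance readings into note-on events (low reading = ink)."""
    triggers = []
    on_line = False
    for i, r in enumerate(readings):
        if r < threshold and not on_line:
            triggers.append(i)  # edge detected: skin -> ink, fire a note
        on_line = r < threshold
    return triggers

print(sensor_to_triggers([0.9, 0.2, 0.1, 0.8, 0.3]))  # → [1, 4]
```

Moving the arm faster packs more edges into the same time window, which is presumably how the playback speed ends up under the wearer’s control.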
SELLING SILENCE
Streaming moments of silence isn’t exactly a lucrative M.O. for any musician (besides one John Cage, though musically he’s in a league of his own), but one band from Michigan, Vulfpeck, has decided to do just that via Spotify. Last week we covered Vulfpeck’s new album Sleepify (ten tracks of complete and utter silence), which the band have urged fans to stream while they sleep in order to accumulate enough royalties to fund a tour. Of course, silence as a statement or musical piece has been flogged to death over the years, but in this case Vulfpeck has hijacked the technological bread and butter of the modern-day indie band and effectively commodified an otherwise-free phenomenon. We’re not sure how well this will work out for the size of tour they’re aiming for, but it could point to future instances of selling silence via technology – a perfectly plausible business given that our natural state of being is one of constant bombardment by noise.
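The back-of-the-envelope math is straightforward, if you grant a couple of assumptions (ours, not the band’s): Spotify counts a play after roughly 30 seconds, pays on the order of half a cent per stream, and each Sleepify track runs just past that 30-second mark:

```python
# Back-of-the-envelope royalty math with assumed figures (ours, not the
# band's): ~31-second tracks, ~$0.005 paid per counted stream.

def sleepify_royalties(hours_asleep, track_secs=31, payout_per_stream=0.005):
    """Estimate streams and dollars earned from one night of silent streaming."""
    streams = int(hours_asleep * 3600 / track_secs)
    return streams, streams * payout_per_stream

streams, dollars = sleepify_royalties(8)
print(streams)  # → 929 streams, i.e. roughly $4.65 per sleeping fan per night
```

A few hundred dedicated sleepers for a week or two and a modest tour budget starts to look, improbably, achievable.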
ULTRASONIC MILK SKIMMING
In an unprecedentedly practical use of sound technology, researchers have discovered a way to skim dairy milk using ultrasonic waves. Australian scientists are working on a new separation technique that sorts fat particles by size, allowing for far greater precision in milk skimming. One professor suggested that this could lead to culinary developments and new tastes in dairy products. Speaking for baristas and food service workers everywhere: we’re just as excited about the prospect of ten new gradients of milk.
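Why can sound sort by size at all? For particles much smaller than the wavelength, the acoustic radiation force in a standing wave scales with particle volume, so bigger fat globules get pushed harder and separate out first. A hedged sketch of just that scaling (constants and the acoustic contrast factor omitted):

```python
# Hedged sketch of the size-sorting principle: the primary acoustic
# radiation force on a small particle scales with its volume (~r^3),
# so a globule twice the radius feels roughly eight times the force.
# (Physical constants and the acoustic contrast factor are omitted.)

def relative_radiation_force(radius_um: float, reference_um: float = 1.0) -> float:
    """Force on a particle relative to a reference-sized one (~r^3 scaling)."""
    return (radius_um / reference_um) ** 3

print(relative_radiation_force(2.0))  # → 8.0
```

That cubic dependence is what makes graded skimming plausible: dial the exposure and you select which globule sizes have had time to migrate.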
MI.MU GLOVES
In 1991, sound artist Laetitia Sonami built the ultimate lo-tech prototype for wearable tech out of a pair of rubber kitchen gloves, magnets, and glued-on transducers, creating a new, innovative means of controlling her synthesizers and other equipment. Fast forward to 2014, an age when computer-generated music is a sore spot for a fair few old-guard musicians who find the technology inhospitable to their creative process. Imogen Heap is understandably frustrated that she isn’t as well-versed in studio technology as other musicians, and fair enough: the world of synths and computerized MacBeats can be daunting for someone used to conventional instruments and tabs (we’re not taking sides here – we love both kinds equally). Behold the Mi.Mu gloves, an open-source gestural music system helmed by Heap to give musicians easier access to hi-tech resources and tech-driven innovation. Instead of being tethered to knobs and switches, musicians using Mi.Mu have greater creative freedom to perform with their own custom-mapped controls tailored to specific movements. The glove itself is a detail-oriented labor of love, taking bare-skin handclaps and naked fingertips into account. We’re excited to see the project get funded and, hopefully, push the envelope for more artistically-driven DIY haptics.
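“Custom-mapped controls” boils down to a lookup from recognized gestures to parameter changes. A toy illustration of that mapping layer – ours, not Mi.Mu’s actual software, with made-up gesture and parameter names:

```python
# Toy illustration of gesture-to-control mapping (not Mi.Mu's actual
# software; gesture and parameter names here are invented examples).

GESTURE_MAP = {
    "fist": ("filter_cutoff", 0.2),
    "open_palm": ("filter_cutoff", 0.9),
    "flick": ("trigger_sample", 1.0),
}

def handle_gesture(gesture, state):
    """Apply a recognized gesture's mapped control change to the synth state."""
    if gesture in GESTURE_MAP:
        param, value = GESTURE_MAP[gesture]
        state[param] = value
    return state

state = handle_gesture("fist", {})
print(state)  # → {'filter_cutoff': 0.2}
```

The hard part the gloves solve is upstream of this table: turning flex-sensor and motion data into reliable gesture labels in the first place.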
MUSIC TO SOOTHE ROAD RAGE
Canadian engineering students are working on a rather odd bit of automotive technology that uses music to soothe angry drivers. Relying on music’s naturally calming effect on humans, the researchers used sensors to monitor a pissed-off driver during a stressful encounter, then “treated” the road rage by playing a favorite soothing song via a mood music app on the driver’s smartphone. The system first has to learn the difference between a driver’s normal face and his/her “angry face,” which isn’t much different from the existing facial recognition technology used to keep people from falling asleep at the wheel.
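The learn-a-baseline-then-trigger logic can be sketched minimally. This is our assumption of how such a system might decide to intervene, not the students’ actual implementation:

```python
# Minimal sketch of the trigger logic (our assumption, not the students'
# actual implementation): compare a live "anger score" derived from
# facial-expression sensors against the driver's learned baseline.

def should_play_soothing_song(anger_score: float, baseline: float,
                              threshold: float = 1.5) -> bool:
    """Queue calming music once the score exceeds baseline by a set factor."""
    return anger_score > baseline * threshold

print(should_play_soothing_song(0.8, 0.4))  # → True
```

Learning the per-driver baseline is the point: one person’s resting scowl is another’s full meltdown, so a fixed threshold would misfire constantly.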
VOICE OF THE VOICELESS
Moving past the cursory creepiness of a project named the Human Voicebank Initiative, synthesizing a custom voice may soon become a marketable reality. Rupal Patel, a speech scientist, has been researching custom-made voices for the past few years in the hope of accomplishing just that: determining what makes a voice distinctive “by harvesting sounds from a donor of similar gender, age, size and geographical background.” A donor could give a mute or disabled person an artificial voice, which would be incredible once we figure out a way to link the cognitive process to said artificial speech. Given the ambiguous nature of voices, especially disembodied ones heard on the phone or via other telecommunication devices (also consider: voice modifiers in the vein of insidious film villains), this could create even more horrific security problems for future voice-controlled tech, but we’re hoping that synthetic voices will be used for good, not evil.
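The donor-matching idea Patel describes is, at its simplest, a similarity search over the traits she lists. A toy sketch – our illustration with an invented scoring rule, not the Voicebank’s actual algorithm:

```python
# Toy sketch of donor matching (our illustration with an invented scoring
# rule, not the Voicebank's actual algorithm): score donors on the traits
# Patel lists (gender, age, geography) and pick the closest match.

def match_donor(recipient, donors):
    """Return the donor whose profile best matches the recipient's."""
    def score(d):
        return ((d["gender"] == recipient["gender"])
                + (d["region"] == recipient["region"])
                - abs(d["age"] - recipient["age"]) / 100.0)
    return max(donors, key=score)

recipient = {"gender": "f", "age": 30, "region": "northeast"}
donors = [{"name": "a", "gender": "f", "age": 55, "region": "south"},
          {"name": "b", "gender": "f", "age": 28, "region": "northeast"}]
print(match_donor(recipient, donors)["name"])  # → b
```

The actual research then blends the donor’s recorded sounds with whatever vocalizations the recipient can still produce, so the result is a hybrid rather than a borrowed voice.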
Follow Alexis Ong on Twitter here @steppinlazer