Interviewed for Wired

On generative apps and their discontents

I was interviewed for this Wired article by Arielle Pardes, published this morning, about a new wave of generative music apps, among them Endel and Mubert:

Marc Weidenbaum, a writer and cultural critic who studies ambient music, sees this adaptive quality reshaping the future of music itself. “The idea of a recording as a fixed thing should’ve gone away,” he says. With a generative music app, there is potential not just to listen to something organic and ever-changing, but something that strives to emulate your desired mind state exactly.

Weidenbaum says we may be seeing a surge in generative music because our phones are capable of more computational power. But another reason might be that the genre offers a way for companies, advertisers, and game-makers to skirt licensing issues when adding music to their products.

“That’s a little cynical,” he says, but “I think it has a lot to do with cost savings, control, optimization, and a veneer of personalization.” For the rest of us, these apps offer a pleasing surrender to the algorithms – ones that shape the world to our desires and ask nothing in return.

Now, to be clear, I love generative music. I was an early and strong supporter of the RJDJ app, which later evolved, in a manner of speaking, into the Hear app mentioned in the article. (RJDJ creative director Robert M. Thomas has been a frequent participant in and friend of the Disquiet Junto music community.) I’ve also avidly tracked and used Bloom, among other apps created by collaborators Brian Eno and Peter Chilvers. A central theme in my book about Aphex Twin’s album Selected Ambient Works Volume II is the wind chime, a pre-electronic tool for generative expression.

The distinction I’m drawing is between art and commerce. Art projects of course have financial constraints of their own, but it is modern commercial products and services that undergo rigorous cost-benefit analysis as part of their ongoing development and maintenance. This distinction is what led to my self-described cynical (perhaps a better word is skeptical) view of certain economically incentivized flourishings of generative music.

Much as Uber and Lyft are simultaneously employing countless drivers and pursuing driverless transportation, some activities in generative music seem less like artistic ventures and more like attempts to remove the need for human participation. If the clear primary goal is simply to cut costs through automation, that’s when I think the venture should be viewed (and, to mix the metaphor, heard) through a keen, critical lens.

As a friend recently reminded me, ambient music has its foundation in the writings on cybernetics by Norbert Wiener, a mathematician and philosopher who inspired Brian Eno, the genre’s originator. A key text is Wiener’s 1948 book, Cybernetics: Or Control and Communication in the Animal and the Machine, which developed a following in management theory. You might even say that the interest by corporations in generative sound in 2019 is the 70-year-old cybernetics concept coming full circle. Then again, in his later book, God & Golem, Inc.: A Comment on Certain Points Where Cybernetics Impinges on Religion (1964), Wiener employed the image of the golem, a pre-Frankenstein symbol of artificial life gone awry. Which is to say, skepticism isn’t unprecedented.

Read the full piece at wired.com.

SOUND RESEARCH LOG: Always On: Rainforests, Sleep Disorders, More

Nick Shchetko at blogs.wsj.com/digits surveys recent app developments related to “always on” microphones.

There’s Rainforest, a chainsaw-detection tool halfway through its Kickstarter campaign.

He also lists examples that “assess the quality of sleep, explain why a baby is crying, tell you when you’re stressed, identify mental disorder, track gunshots and even help to crowd-monitor endangered cicada species.”

And then there’s BodyBeat, prototype pictured above:

A crude prototype of BodyBeat, revealed in mid-June, uses an external custom-made microphone to track body sounds, such as breath or cough, with the ambitious aim to detect illnesses or record food consumption.

The microphone is placed on the neck with a 3D-printed neckpiece, which is plugged into a small audio processing device that is wirelessly connected to a smartphone. BodyBeat authors plan to redesign the system for better usability in commercial applications.

It may sound far-fetched. But there could be plenty of market opportunities for systems like BodyBeat. Breathing sounds are indicative of lung conditions, and data on what users consume – say, how often do they drink or eat certain products – can provide important data for diet tracking apps.

There are certainly limitations to sound-detection technology. The quality of embedded microphones remains a concern, for one. “The problem is you can’t create a robust app because everyone is using different microphones,” said Alexander Adams, who helped develop BodyBeat.

Found thanks to Alexis Madrigal’s http://ift.tt/1lPwWYp.

This entry cross-posted from the Disquiet linkblog project sound.tumblr.com.