芽菜 Glitch

A secret recipe, apparently

I’m not entirely sure how or when precisely this glitch occurred. It was sometime yesterday, shortly after I uploaded a series of camera shots of the dan dan noodles I made for dinner. The resulting image just popped up among my camera’s photos, and I noticed it this morning, though I do now have some vague memory of it having appeared briefly on my screen yesterday. It’s a mystery.

I think of this as “happenstance glitch,” because it happened by accident and isn’t (to my knowledge) repeatable. I would have said “true glitch,” but the word “true” would get me into trouble. The idea of “true” plays into matters of authenticity and purity, and that’s not my intention. I just mean to distinguish, not prioritize, actual accident from the aesthetic impression of accident. Then again, perhaps what happened here is, somehow, reverse-engineerable, and if someone knows how to accomplish this, that’d be cool (not the end result specifically, but in the “what happened on my phone when I uploaded to Instagram” sorta way). Like, could I do this regularly if I chose to?

I watch a lot of YouTube videos in which expert video game players traverse unmarked borders beyond the game designers’ intention and explore artifact territories not in the official game. No doubt this glitch image of my bowl of noodles is simply a glitch, an error from which I happen to derive pleasure, though I do like the idea that perhaps there is a nascent or discarded-experiment glitch filter in Instagram that I somehow accessed by accident. I don’t have difficulty imagining that Instagram might, someday, add a “glitch filter” to its toolkit. Maybe they’ll title it Akihabara or Darmstadt.

There is an additional layer of irony here. Making dan dan noodles at home was a big deal for me. It’s one of my favorite dishes, Chinese or otherwise, and being able to prepare it at home from (relative) scratch by following a recipe was a remarkable feeling, not just how the end result tasted, but also how the various phases of preparation, especially in terms of smell, registered. (By “relative” scratch I mean that I didn’t, you know, actually pickle my own mustard greens. I just bought pickled mustard greens, which I now know are called ya cai, or 芽菜.) It is ironic that while I was documenting one favorite flavor, the glitch — another favorite flavor — surfaced, and it has a recipe I do not know how to access.

PS: Odder still, there is now a second glitch of the same image on my phone, and even though they appear in reverse order chronologically, I believe this one is a glitch of the one up top, resulting from when I posted the “original” (humorous word in this case) to Instagram.

The Generative Tuba

The glorious web video series of id m theft able

There’s a running series on the YouTube channel of user “id m theft able” that is one of my current favorite things on the internet. (I put quotes around that name simply so it’s clear where the name begins and where it ends, and also so it’s clear that the sentence constructed around the name isn’t disintegrating as you read it.) Each of the user’s videos in this series places a tuba somewhere, “with a microphone in it,” as the description always points out.

We then hear both the sound of where the tuba has been placed — along a river bank, adjacent to a waterfall, in the wind and rain, in the snow — and that sound echoing inside of (tracing the contours of, limning the deep recesses of) the tuba itself.

The footage generally runs, uncut, for about an hour. Which is to say, it doesn’t blink. YouTube is filled with nature footage. And if you spend time in the realm of ambient electronic music, there’s plenty of footage of battery-powered setups out in the wild. But the generative tuba is the rare drone music video that is, truly (an oft misused term), of nature.

There are 11 videos thus far: youtube.com.

Friday Office Ambience

Video via my YouTube channel, youtube.com/disquiet

Friday office ambience. Six of FM3’s Buddha Machines, from left to right: indeterminate color gen 2, light blue gen 1, clear gen 3 (aka Chan Fang), dark blue gen 4 (the edition for Philip Glass’ 80th birthday), white gen 5, and green gen 1. At a reader’s suggestion, I let this one run on for over two minutes.

I may do another with the Gristleism box (FM3 + Throbbing Gristle) added on Monday, though a low-pass filter is probably required to have it settle in with the other members of the robot choir.

Generative at 35,000 Feet

SFO -> iOS -> LAX

There was no audio stored on my iPad or on my phone, and the plane’s wifi wasn’t functioning. The noise cancellation feature of my headphones helped, to some degree, in muting the tense political discussion unfolding behind me between what might, in Fight Club terms, be described as single-serving combatants. The poor newborn crying one further row away was, as well, kept at bay. There remained, however, room for improvement. It was a short flight, just from San Francisco to Los Angeles, but what was I going to listen to?

I pulled up two apps on my iPad. One, a sequencer, would send note values. The other, a synthesizer, would produce sounds in accordance with the sequencer’s directions. The sequencer, named Fugue Machine, can be slowed to a near-glacial pace. Its four independent playheads take varying passes over the shared piece of music (depicted in “piano roll” form) they traverse. One of these might read the music in a standard left-to-right direction, another in reverse; some might ping-pong back and forth, while others might treat the note sequence as a refrain to be repeated over and over. I then set the synth, named FM Player 2, on a preset titled Eno’s Feelings: soft pads reportedly based on one of Brian Eno’s own sounds developed on the Yamaha DX7.

And then I just let it roll. Instant generative music, an ever-changing patterning of contrasting yet interrelated melodic and harmonic elements. In the absence of fixed recordings, I filled the noisy void with automated indeterminacy.
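The multi-playhead idea described above can be sketched in a few lines of code. This is not Fugue Machine’s actual implementation, just an illustrative toy: several independent heads traverse one shared note sequence, each in its own direction, and the combined output drifts in and out of phase the way the app’s lines do. The note values and class names here are my own inventions for the example.

```python
# A rough sketch (not Fugue Machine's actual code) of multiple
# independent playheads traversing one shared note sequence.

SEQUENCE = [60, 62, 64, 67, 69]  # a shared pentatonic phrase (MIDI note numbers)

class Playhead:
    def __init__(self, mode):
        self.mode = mode  # "forward", "reverse", or "pingpong"
        self.pos = len(SEQUENCE) - 1 if mode == "reverse" else 0
        self.step_dir = -1 if mode == "reverse" else 1

    def step(self):
        """Return the current note, then advance according to the mode."""
        note = SEQUENCE[self.pos]
        if self.mode == "pingpong":
            nxt = self.pos + self.step_dir
            if nxt < 0 or nxt >= len(SEQUENCE):
                self.step_dir *= -1  # bounce off either end of the phrase
            self.pos += self.step_dir
        else:
            # forward and reverse heads wrap around, treating the
            # phrase as a repeating refrain
            self.pos = (self.pos + self.step_dir) % len(SEQUENCE)
        return note

def run(heads, ticks):
    """Advance all heads in parallel; each tick yields one note per head."""
    return [[h.step() for h in heads] for _ in range(ticks)]

heads = [Playhead("forward"), Playhead("reverse"),
         Playhead("pingpong"), Playhead("forward")]
for tick in run(heads, 8):
    print(tick)
```

Even with identical material, the heads quickly diverge: the forward and reverse lines mirror one another, while the ping-pong line folds the phrase back on itself, which is roughly the “contrasting yet interrelated” quality the passage describes.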

Interviewed for Wired

On generative apps and their discontents

I was interviewed for this Wired article by Arielle Pardes, published this morning, about a new wave of generative music apps, among them Endel and Mubert:

Marc Weidenbaum, a writer and cultural critic who studies ambient music, sees this adaptive quality reshaping the future of music itself. “The idea of a recording as a fixed thing should’ve gone away,” he says. With a generative music app, there is potential not just to listen to something organic and ever-changing, but something that strives to emulate your desired mind state exactly.

Weidenbaum says we may be seeing a surge in generative music because our phones are capable of more computational power. But another reason might be that the genre offers a way for companies, advertisers, and game-makers to skirt licensing issues when adding music to their products.

“That’s a little cynical,” he says, but “I think it has a lot to do with cost savings, control, optimization, and a veneer of personalization.” For the rest of us, these apps offer a pleasing surrender to the algorithms — ones that shape the world to our desires and ask nothing in return.

Now, to be clear, I love generative music. I was an early and strong supporter of the RJDJ app, which later evolved, in a manner of speaking, into the Hear app mentioned in the article. (RJDJ creative director Robert M. Thomas has been a frequent participant in and friend of the Disquiet Junto music community.) I’ve also avidly tracked and used Bloom, among other apps created by collaborators Brian Eno and Peter Chilvers. A central theme in my book about Aphex Twin’s album Selected Ambient Works Volume II is the wind chime, a pre-electronic tool for generative expression.

The distinction I’m drawing is between art and commerce. Art projects of course have financial constraints of their own, but it is modern commercial products and services that undergo rigorous cost-benefit analysis as part of their ongoing development and maintenance. This distinction is what led to my self-described cynical (perhaps a better word is skeptical) view of certain economically incentivized flourishings of generative music.

Much as Uber and Lyft are simultaneously employing countless drivers and pursuing driverless transportation, some activities in generative music seem less like artistic ventures and more like attempts to remove the need for human participation. If the clear primary goal is simply to cut costs through automation, that’s when I think the venture should be viewed (and, to mix the imminent metaphor, heard) through a keen, critical lens.

As a friend recently reminded me, ambient music has its foundation in the writings on cybernetics by Norbert Wiener, a mathematician and philosopher who inspired Brian Eno, the genre’s originator. A key text is Wiener’s 1948 book, Cybernetics: Or Control and Communication in the Animal and the Machine, which developed a following in management theory. You might even say that the interest by corporations in generative sound in 2019 is the 70-year-old cybernetics concept coming full circle. Then again, in his later book, God & Golem, Inc.: A Comment on Certain Points Where Cybernetics Impinges on Religion (1964), Wiener employed the image of the golem, a pre-Frankenstein symbol of artificial life gone awry. Which is to say, skepticism isn’t unprecedented.

Read the full piece at wired.com.