I wrote about Erika Nesse’s fractal music about a month ago (“A Nautilus of Percussive Expressivity”), and she just posted this week another example that’s well worth a listen. Titled “You Can Wish It All Away,” the short piece, not even two full minutes in length, takes tiny snippets of source audio, in this case a woman speaking, and renders from them a slowly evolving rhythmic flurry. Slivers of syllables — not whole verbal sounds but mere bits of them, so even the softest vowel can serve as a plosive thanks to a hard truncation — become an ever-changing fantasy of computer-generated beatcraft.
Two moments seem to suggest that the piece isn’t directly the result of a computer using fractals to break and reformat the source, but that Nesse herself plays a role in the work’s composition — that she is using the fractal algorithm as a source for musical development, much as the algorithm itself is using the original source audio. The first of these moments appears at about the one-minute mark, when the previously furious mix of layered sounds gives way to a harshly minimalist, staccato metric. The second is at the end, when the original sample audio is heard in full, revealing itself as a line from an early episode of The Twilight Zone: “If I wish hard enough, I can wish it all away.” That’s the main character, a former film star, speaking in the episode titled “The Sixteen-Millimeter Shrine.”
Erika Nesse makes fractal music. She codes the music — “coding” being a term that has as much application these days as do “writing” and “composition” to the production of sound. The following playlist collects over a dozen examples of her algorithms set to work on a variety of audio sources. Listen as sounds ranging from white noise (“Fifty One”) to verbalization (“One two three”) to gentle bleeps (“It goes bop”) cycle through patterns within patterns, coming back around to familiar riffs even as they expand continuously outward, a nautilus of percussive expressivity.
For context, Nesse, who’s based in Boston, Massachusetts, wrote the following about the process behind What the Machine Replied, a five-track EP of her fractal music:
This album was generated entirely with fractals, nesting beats within beats to create a self-similar system. I give a small seed pattern of a couple of notes to the machine, and it goes deep into the tree of recursion and echoes back a dizzying track minutes long. Thus, “what the machine replied.”
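To make that “beats within beats” process concrete, here is a toy sketch in Python. It is emphatically not Nesse’s actual code, just one guess at the shape of the recursion she describes; the seed pattern, depth, and volume scaling are all invented for illustration:

```python
# A toy model of self-similar rhythm: every beat in the seed pattern
# contains a scaled-down copy of the whole pattern, nested "depth" deep.

def nest(seed, total_duration, depth):
    """Return a flat list of (start, duration, volume) events."""
    events = []

    def expand(start, duration, volume, level):
        if level == 0:
            events.append((start, duration, volume))
            return
        step = duration / len(seed)
        for i, accent in enumerate(seed):
            # Each sub-beat inherits the full pattern, quieter as it nests.
            expand(start + i * step, step, volume * accent, level - 1)

    expand(0.0, total_duration, 1.0, depth)
    return events

seed = [1.0, 0.5]  # a two-note seed: one loud beat, one soft
for start, dur, vol in nest(seed, total_duration=8.0, depth=3):
    print(f"t={start:.2f}s  dur={dur:.2f}s  vol={vol:.3f}")
```

Even at a modest depth of three, a two-note seed echoes back eight self-similar events; a few more levels of recursion and a couple of notes become a dizzying track.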
Here’s a video visualization that aligns the sounds with images, helping the mind trace the patterns:
Someone adds an entry about a cooking magazine from the 1950s? Boom …
Someone corrects the release date in the discography of James Taylor? Bleep …
Someone undoes a spelling correction in an entry about an undersecretary from a mid-century U.S. presidential administration? Bong …
Audible tones and expanding, colored circles are used in tandem to announce changes to the vast collaborative encyclopedia thanks to the great online tool Listen to Wikipedia (listen.hatnote.com), one of the best examples of realtime sonification on the web. Developed by Stephen LaPorte and Mahmoud Hashemi, it’s the subject of a recent short interview from radio station KQED. The conversation with Hashemi goes into the background of the tool. He talks about the software’s actions, and how it serves both as an expression of Wikipedia and as a response to the economic focus of Silicon Valley.
There’s something very pleasing and centering about the placid surveillance of Listen to Wikipedia, all that communal and often rancorous activity transformed into dulcet tones. Sometimes I just let it run on a side screen as I work. Sometimes I also run this pure geographic visualizer, at rcmap.hatnote.com:
Up at the top of this post is a sample still frame of Listen to Wikipedia in action. Here is an example of the sort of realtime information that Listen to Wikipedia parses:
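That sample isn’t reproduced here, but the flavor of the feed is easy to get firsthand. The following Python sketch is a stand-in, not part of the tool itself: it reads a few live events from Wikimedia’s public recent-changes stream (stream.wikimedia.org), which carries the same kind of edit data Listen to Wikipedia renders as sound, and it assumes the requests library is installed:

```python
# Print a handful of live Wikipedia edit events: title plus how many
# bytes the edit added or removed. Requires the "requests" package.
import json
import requests

STREAM = "https://stream.wikimedia.org/v2/stream/recentchange"

with requests.get(STREAM, stream=True) as resp:
    seen = 0
    for line in resp.iter_lines():
        if not line.startswith(b"data: "):
            continue  # skip the server-sent-events framing lines
        event = json.loads(line[len(b"data: "):])
        if event.get("type") != "edit":
            continue  # ignore log entries, page creations, etc.
        size = event["length"]["new"] - event["length"]["old"]
        print(f'{event["title"]}: {size:+d} bytes')
        seen += 1
        if seen >= 5:
            break
```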
This documentation summarizes how the sounds and related images of Listen to Wikipedia correlate with actual edits:
Bells indicate additions and string plucks indicate subtractions. Pitch changes according to the size of the edit; the larger the edit, the deeper the note. Green circles show edits from unregistered contributors, and purple circles mark edits performed by automated bots.
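Translated into a loose Python sketch, with the pitch math and the exact colors guessed at rather than lifted from the tool’s source, that mapping looks something like this:

```python
# A rough approximation of the documented mapping: additions ring
# bells, subtractions pluck strings, bigger edits mean deeper notes,
# and the circle color reflects who (or what) made the edit.

def sonify(edit_size, registered, bot):
    instrument = "bell" if edit_size >= 0 else "string pluck"
    # Clamp the size so one huge edit doesn't fall off the scale, then
    # map larger edits to lower pitches (the constants are arbitrary).
    magnitude = min(abs(edit_size), 1000)
    pitch_hz = 1000 - 0.8 * magnitude  # larger edit -> deeper note
    if bot:
        circle = "purple"
    elif not registered:
        circle = "green"
    else:
        circle = "white"  # an assumption for registered human editors
    return instrument, pitch_hz, circle

print(sonify(edit_size=-350, registered=False, bot=False))
# -> ('string pluck', 720.0, 'green')
```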
Here’s a short video of Listen to Wikipedia in action:
This track is Stringbot trying things out in Numerology, the music software program that just saw its update to version 4.0 on November 25, a month ago. The software is self-described as being “all about building musical phrases by starting with simple patterns of repeating notes, and then manipulating the pattern with a set of easily applied transformations.” Among those transformations are “generative” capabilities — that is, changes that are not entirely predictable, such as engaging with chance operations or following evolutionary steps. I first learned about Numerology from Brian Biggs, who records as Dance Robot Dance. This track by Stringbot appeared in his SoundCloud feed on Christmas Day. It puts the chance capabilities to work in “Rando Gamelan,” yielding a gentle pattern that brings to mind a metal mallet instrument submitted to the operations of a wind chime.
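Numerology itself isn’t needed to get a feel for that approach. Here is a small Python toy, a sketch of the general idea rather than anything from the program: a four-note phrase is rotated each bar, and a chance operation occasionally nudges a note, so the pattern stays recognizable while never quite repeating:

```python
# Start with a simple repeating pattern, then apply easy
# transformations, one of them left partly to chance.
import random

def rotate(p):
    return p[1:] + p[:1]

def chance_mutate(p, probability=0.25):
    # Each note has a small chance of drifting by a whole step, the
    # kind of not-entirely-predictable change generative music uses.
    return [n + random.choice([-2, 2]) if random.random() < probability
            else n for n in p]

pattern = [60, 62, 64, 67]  # a four-note MIDI phrase
for bar in range(8):
    print(bar, pattern)
    pattern = chance_mutate(rotate(pattern))
```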
This short video documentary, Tristan Perich: Mind the Machine, by Russell Oliver, explores artist and composer Perich’s processes and thoughts on automation, sound, systems, and art. As Perich describes it, he’s interested in “where the physical world around us meets the abstract world of computation and electronics.” Perich speaks throughout, describing his approach to his work, and the video includes a studio tour — his studio being as much an electronics tinkering zone as it is a musician’s home recording space. He’s at work, for example, on a variation on the microtonal wall that consisted of 1,500 small speakers, and the studio is filled with clear plastic boxes to help him manage all his parts. He connects his own minimalist — “bare bones,” in his words — approach to that of his father, the artist Anton Perich. Like his father, Perich has explored an automated drawing machine, images of which open the film. There’s some especially glorious material toward the end in which a chorus of exposed speaker cones accompanies pianist Vicky Chow in a live performance.
• February 6, 2019: First day of the new semester of the 15-week "Sounds of Brands" course I teach once a year at the Academy of Art in San Francisco.
• March 22, 2019: I'm giving a talk at the Algorithmic Art Assembly, two days of events in San Francisco: aaassembly.org.
• December 13, 2019: This day marks the 23rd anniversary of Disquiet.com.
• January 7, 2020: This day marks the 8th anniversary of the Disquiet Junto.
• Ongoing: The Disquiet Junto series of weekly communal music projects explores constraints as a springboard for creativity and productivity. There is a new project each Thursday afternoon (California time), and it is due the following Monday at 11:59pm: disquiet.com/junto.
• My book on Aphex Twin's landmark 1994 album, Selected Ambient Works Volume II, published as part of the 33 1/3 series from Bloomsbury, is now in its second printing. It has been translated into Spanish, and is due out soon in Japanese, as well. It can be purchased at amazon.com, among other places.
The Disquiet Junto is an ongoing weekly collaborative music-making space in which constraints are used as a springboard for creativity. Subscribe to the announcement list at tinyletter.com/disquiet-junto. There is an FAQ. ... These are the 5 most recent weekly projects: