I wrote about Erika Nesse’s fractal music about a month ago (“A Nautilus of Percussive Expressivity”), and this week she posted another example that’s well worth a listen. Titled “You Can Wish It All Away,” the short piece, not even two full minutes in length, takes tiny snippets of source audio, in this case a woman speaking, and renders from them a slowly evolving rhythmic flurry. Slivers of syllables — not whole verbal sounds but mere bits of them, so even the softest vowel can serve as a plosive thanks to a hard truncation — become an ever-changing fantasy of computer-generated beatcraft.
Two moments seem to suggest that the piece isn’t directly the result of a computer using fractals to break and reformat the source, but that Nesse herself plays a role in the work’s composition — that she is using the fractal algorithm as a source for musical development, much as the algorithm itself is using the original source audio. The first of these moments appears at about the one-minute mark, when the previously furious mix of layered sounds gives way to a harshly minimalist, staccato metric. The second is at the end, when the original sample audio is heard in full, revealing itself as a line from an early episode of The Twilight Zone: “If I wish hard enough, I can wish it all away.” That’s the main character, a former film star, speaking in the episode titled “The Sixteen-Millimeter Shrine.”
Erika Nesse makes fractal music. She codes the music — “coding” being a term that has as much application these days as do “writing” and “composition” to the production of sound. The following playlist collects over a dozen examples of her algorithms set to work on a variety of audio sources. Listen as sounds ranging from white noise (“Fifty One”) to verbalization (“One two three”) to gentle bleeps (“It goes bop”) cycle through patterns within patterns, coming back around to familiar riffs even as they expand continuously outward, a nautilus of percussive expressivity.
For context, Nesse, who’s based in Boston, Massachusetts, wrote the following about the process behind What the Machine Replied, a five-track EP of her fractal music:
This album was generated entirely with fractals, nesting beats within beats to create a self-similar system. I give a small seed pattern of a couple of notes to the machine, and it goes deep into the tree of recursion and echoes back a dizzying track minutes long. Thus, “what the machine replied”.
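The nesting Nesse describes — a small seed pattern echoed back through a "tree of recursion" — can be sketched in a few lines of Python. This is only an illustrative toy, not her actual code: each note's duration is replaced by a scaled copy of the whole seed pattern, one level per recursion, producing a self-similar rhythm.

```python
# Toy sketch of "beats within beats": each note in a seed pattern
# is replaced by a copy of the whole pattern, scaled to fit inside
# that note's duration. Not Nesse's actual implementation.

def nest(pattern, depth):
    """Recursively expand a seed pattern into a self-similar rhythm.

    pattern: list of relative note durations (the seed).
    depth: how many levels of beats-within-beats to generate.
    """
    if depth == 0:
        return list(pattern)
    total = sum(pattern)
    expanded = []
    for duration in pattern:
        # Fit a whole (recursively nested) copy of the pattern
        # inside this single note's duration.
        expanded.extend(duration * d / total for d in nest(pattern, depth - 1))
    return expanded

seed = [2, 1, 1]         # a "couple of notes" as a seed pattern
track = nest(seed, 2)    # each level multiplies the event count by 3
print(len(track))        # prints 27
```

The total duration is preserved at every level, so a two-second seed stays a two-second loop no matter how deep the recursion goes — only the density of events grows.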
Here’s a video visualization that aligns the sounds with images, helping the mind trace the patterns:
Someone adds an entry about a cooking magazine from the 1950s? Boom …
Someone corrects the release date in the discography of James Taylor? Bleep …
Someone undoes a spelling correction in an entry about an undersecretary from a mid-century U.S. presidential administration? Bong …
Audible tones and expanding, colored circles are used in tandem to announce changes to the vast collaborative encyclopedia thanks to the great online tool Listen to Wikipedia (listen.hatnote.com), one of the best examples of realtime sonification on the web. Developed by Stephen LaPorte and Mahmoud Hashemi, it’s the subject of a short recent interview from radio station KQED. The conversation with Hashemi goes into the background of the tool. He talks about the software’s actions, and how it serves both as an expression of Wikipedia and as a response to the economic focus of Silicon Valley.
There’s something very pleasing and centering about the placid surveillance of Listen to Wikipedia, all that communal and often rancorous activity transformed into dulcet tones. Sometimes I just let it run on a side screen as I work. Sometimes I also run this pure geographic visualizer, at rcmap.hatnote.com:
Up at the top of this post is a sample still frame of Listen to Wikipedia in action. Here is an example of the sort of realtime information that Listen to Wikipedia parses:
This documentation summarizes how the sounds and related images of Listen to Wikipedia correlate with actual edits:
Bells indicate additions and string plucks indicate subtractions. Pitch changes according to the size of the edit; the larger the edit, the deeper the note. Green circles show edits from unregistered contributors, and purple circles mark edits performed by automated bots.
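That mapping is simple enough to express as a small rule set. The sketch below is a hedged paraphrase in Python — the real tool is a web app running on Wikipedia's live recent-changes feed, and the field names here ("change_size," "is_anonymous," "is_bot") are illustrative assumptions, not its actual data format.

```python
# Illustrative sketch of Listen to Wikipedia's sound/visual rules,
# as summarized above. Field names are assumptions for this example.

def sonify(edit):
    """Map one Wikipedia edit event to (sound, pitch, circle color).

    edit: dict with a signed 'change_size' (bytes added or removed)
          and optional 'is_anonymous' / 'is_bot' flags.
    """
    size = edit["change_size"]
    sound = "bell" if size >= 0 else "string pluck"  # addition vs. subtraction
    # The larger the edit, the deeper the note: pitch falls
    # (toward 0.0) as the edit size grows, capped at 10,000 bytes.
    pitch = max(0.0, 1.0 - min(abs(size), 10_000) / 10_000)
    if edit.get("is_bot"):
        circle = "purple"   # automated bot
    elif edit.get("is_anonymous"):
        circle = "green"    # unregistered contributor
    else:
        circle = "white"    # registered user
    return sound, pitch, circle

print(sonify({"change_size": -250, "is_anonymous": True}))
```

A 250-byte deletion by an anonymous editor thus comes out as a fairly high string pluck inside a green circle; a large bot addition would be a deep bell in purple.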
Here’s a short video of Listen to Wikipedia in action:
This track is Stringbot trying things out in Numerology, the music software program that just saw its update to version 4.0 on November 25, a month ago. The software is self-described as being “all about building musical phrases by starting with simple patterns of repeating notes, and then manipulating the pattern with a set of easily applied transformations.” Among those transformations are “generative” capabilities — that is, changes that are not entirely predictable, such as engaging with chance operations or following evolutionary steps. I first learned about Numerology from Brian Biggs, who records as Dance Robot Dance. This track by Stringbot appeared in his SoundCloud feed on Christmas Day. It puts the chance capabilities to work in “Rando Gamelan,” yielding a gentle pattern that brings to mind a metal mallet instrument submitted to the operations of a wind chime.
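Numerology's approach — a simple repeating pattern passed through a stack of small transformations, some of them chance-based — can be approximated in a short Python sketch. This is a toy analogue, not Numerology's actual API: the function names and the MIDI-note pattern are invented for illustration.

```python
import random

# Toy analogue of pattern-and-transformation sequencing: start with
# a simple repeating phrase, then stack small transformations,
# ending with a "generative" chance operation. Not Numerology's API.

def transpose(pattern, semitones):
    """Shift every note up or down by a fixed interval."""
    return [note + semitones for note in pattern]

def rotate(pattern, steps):
    """Start the repeating phrase from a different step."""
    return pattern[steps:] + pattern[:steps]

def chance_mutate(pattern, probability, rng):
    """The chance operation: each note may be nudged by a semitone,
    so the output is not entirely predictable."""
    return [note + rng.choice([-1, 0, 1]) if rng.random() < probability
            else note
            for note in pattern]

rng = random.Random(0)           # seeded so the "chance" is repeatable
pattern = [60, 62, 64, 67]       # a simple repeating MIDI phrase
pattern = transpose(pattern, 5)  # deterministic transformations...
pattern = rotate(pattern, 1)
pattern = chance_mutate(pattern, 0.3, rng)  # ...then chance
print(pattern)
```

Re-seeding (or removing the seed) changes which notes drift, which is the appeal of the generative step: the phrase stays recognizable while never quite repeating.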
This short video documentary, Tristan Perich: Mind the Machine, by Russell Oliver, explores artist and composer Perich’s processes and thoughts on automation, sound, systems, and art. As Perich describes it, he’s interested in “where the physical world around us meets the abstract world of computation and electronics.” Perich speaks throughout, describing his approach to his work, and the video includes a studio tour — his studio being as much an electronics tinkering zone as it is a musician’s home recording space. He’s at work, for example, on a variation on the microtonal wall that consisted of 1,500 small speakers, and the studio is filled with clear plastic boxes to help him manage all his parts. He connects his own minimalist — “bare bones,” in his words — approach to that of his father, the artist Anton Perich. Like his father, Perich has explored an automated drawing machine, images of which open the film. There’s some especially glorious material toward the end in which a chorus of exposed speaker cones accompanies pianist Vicky Chow in a live performance.
Marc Weidenbaum founded the website Disquiet.com in 1996 at the intersection of sound, art, and technology, and since 2012 has moderated the Disquiet Junto, an active online community of weekly music/sonic projects. He has written for Nature, Boing Boing, The Wire, Pitchfork, and NewMusicBox, among other periodicals. He is the author of the 33 1⁄3 book on Aphex Twin’s classic album Selected Ambient Works Volume II. Read more about his sonic consultancy, teaching, sound art, and work in film, comics, and other media.
• February 5, 2020: The first session of the 15-week course I teach at the Academy of Art about the role of sound in the media landscape.
• April 15, 2020: A chapter on the Disquiet Junto ("The Disquiet Junto as an Online Community of Practice," by Ethan Hein) appears in the forthcoming book The Oxford Handbook of Social Media and Music Learning (Oxford University Press), edited by Stephanie Horsley, Janice Waldron, and Kari Veblen. (Details at oup.com.)
• December 13, 2020: This day marks the 24th anniversary of Disquiet.com.
• January 7, 2021: This day marks the 9th anniversary of the start of the Disquiet Junto music community.
• There are entries on the Disquiet Junto in the forthcoming book The Music Production Cookbook: Ready-made Recipes for the Classroom (Oxford University Press), edited by Adam Patrick Bell. Ethan Hein wrote one, and I did, too.
• At least two live group concerts by Disquiet Junto members in the San Francisco Bay Area are in the works for 2020.
• I have liner notes for a musician's solo album and an essay in a book about an art event due out. I'll announce as the release dates come into focus.
• The Disquiet Junto series of weekly communal music projects explores constraints as a springboard for creativity and productivity. There is a new project each Thursday afternoon (California time), and it is due the following Monday at 11:59pm: disquiet.com/junto.
Since January 2012, the Disquiet Junto has been an ongoing weekly collaborative music-making community that employs creative constraints as a compositional springboard. Subscribe to the announcement list (each Thursday), listen to tracks by participants from around the world, read the FAQ, and join in.
• 0456 / Line Up / The Assignment: Interpret a painting by Agnes Martin as if it were a graphic score.
• 0455 / Inner Invertebrate / The Assignment: What does a moment (or a day) in the life of a jellyfish sound like to a jellyfish?
• 0454 / Lsoo Vneg / The Assignment: Encode the name of someone you love into a piece of music.
• 0453 / Dial Up / The Assignment: Imagine the technologically mediated First Contact through sound.
• 0452 / Let's Scream / The Assignment: Get cathartic. Be resilient. Turn your scream into music.