I wrote about Erika Nesse’s fractal music about a month ago (“A Nautilus of Percussive Expressivity”), and this week she posted another example that’s well worth a listen. Titled “You Can Wish It All Away,” the short piece, not even two full minutes in length, takes tiny snippets of source audio, in this case a woman speaking, and renders from them a slowly evolving rhythmic flurry. Slivers of syllables — not whole verbal sounds but mere bits of them, so even the softest vowel can serve as a plosive thanks to a hard truncation — become an ever-changing fantasy of computer-generated beatcraft.
Two moments suggest that the piece isn’t directly the result of a computer using fractals to break and reformat the source, but that Nesse herself plays a role in the work’s composition — that she is using the fractal algorithm as a source for musical development, much as the algorithm itself is using the original source audio. The first of these moments appears at about the one-minute mark, when the previously furious mix of layered sounds gives way to a harshly minimalist, staccato meter. The second is at the end, when the original sample audio is heard in full, revealing itself as a line from an early episode of The Twilight Zone: “If I wish hard enough, I can wish it all away.” That’s the main character, a former film star, speaking in the episode titled “The Sixteen-Millimeter Shrine.”
Erika Nesse makes fractal music. She codes the music — “coding” being a term as applicable to the production of sound these days as “writing” and “composition” are. The following playlist collects over a dozen examples of her algorithms set to work on a variety of audio sources. Listen as sounds ranging from white noise (“Fifty One”) to verbalization (“One two three”) to gentle bleeps (“It goes bop”) cycle through patterns within patterns, coming back around to familiar riffs even as they expand continuously outward, a nautilus of percussive expressivity.
For context, Nesse, who’s based in Boston, Massachusetts, wrote the following about the process behind What the Machine Replied, a five-track EP of her fractal music:
This album was generated entirely with fractals, nesting beats within beats to create a self-similar system. I give a small seed pattern of a couple of notes to the machine, and it goes deep into the tree of recursion and echoes back a dizzying track minutes long. Thus, “what the machine replied”.
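Nesse’s actual code isn’t public, but the process she describes, nesting beats within beats so that a small seed pattern echoes back through a tree of recursion as a much longer track, can be sketched in a few lines. This is a minimal illustration under my own assumptions: the function name, the (onset, duration) representation, and the substitution rule are all hypothetical, not her algorithm.

```python
# A minimal sketch of self-similar rhythm generation: each note in a
# seed pattern is recursively replaced by a time-scaled copy of the
# whole pattern, producing a fractal, beats-within-beats structure.

def fractal_beats(seed, depth):
    """Expand a seed pattern of (onset, duration) pairs recursively.

    seed: list of (onset, duration) tuples, onsets within [0, 1).
    depth: recursion levels; depth 0 returns the seed unchanged.
    """
    if depth == 0:
        return list(seed)
    events = []
    for onset, duration in seed:
        # Nest a scaled copy of the full pattern inside this note's span.
        for sub_onset, sub_duration in fractal_beats(seed, depth - 1):
            events.append((onset + sub_onset * duration,
                           sub_duration * duration))
    return sorted(events)

# A two-note seed run through three levels of recursion yields
# 2**4 = 16 events: a dense pattern from "a couple of notes."
seed = [(0.0, 0.5), (0.5, 0.5)]
print(len(fractal_beats(seed, 3)))  # 16
```

Each recursion level multiplies the event count by the seed length while shrinking durations, which is why a tiny seed can "echo back a dizzying track minutes long."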
Here’s a video visualization that aligns the sounds with images, helping the mind trace the patterns:
Someone adds an entry about a cooking magazine from the 1950s? Boom …
Someone corrects the release date in the discography of James Taylor? Bleep …
Someone undoes a spelling correction in an entry about an undersecretary from a mid-century U.S. presidential administration? Bong …
Audible tones and expanding, colored circles are used in tandem to announce changes to the vast collaborative encyclopedia thanks to the great online tool Listen to Wikipedia (listen.hatnote.com), one of the best examples of realtime sonification on the web. Developed by Stephen LaPorte and Mahmoud Hashemi, it’s the subject of a short recent interview from radio station KQED. The conversation with Hashemi goes into the background of the tool. He talks about the software’s actions, and how it serves both as an expression of Wikipedia and as a response to the economic focus of Silicon Valley.
There’s something very pleasing and centering about the placid surveillance of Listen to Wikipedia, all that communal and often rancorous activity transformed into dulcet tones. Sometimes I just let it run on a side screen as I work. Sometimes I also run this pure geographic visualizer, at rcmap.hatnote.com:
Up at the top of this post is a sample still frame of Listen to Wikipedia in action. Here is an example of the sort of realtime information that Listen to Wikipedia parses:
This documentation summarizes how the sounds and related images of Listen to Wikipedia correlate with actual edits:
Bells indicate additions and string plucks indicate subtractions. Pitch changes according to the size of the edit; the larger the edit, the deeper the note. Green circles show edits from unregistered contributors, and purple circles mark edits performed by automated bots.
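The rules quoted above amount to a simple mapping from an edit event to sound and image. Here is that mapping as a hedged sketch in Python: this is not the actual Listen to Wikipedia source, just the documented rules restated in code. The pitch scaling curve and the color for registered human editors are my assumptions; only the directions (bells for additions, plucks for subtractions, larger edits deeper, green for unregistered, purple for bots) come from the documentation.

```python
# A sketch of the documented Listen to Wikipedia mapping, not its code.

def sonify_edit(change_size, registered=True, bot=False):
    """Map a single Wikipedia edit to (instrument, pitch, color).

    change_size: signed size of the edit (positive = addition).
    """
    # Bells indicate additions; string plucks indicate subtractions.
    instrument = "bell" if change_size >= 0 else "string pluck"
    # Larger edits map to deeper notes. The exact curve here is an
    # assumption; only the direction is documented.
    pitch = max(1, 100 - min(abs(change_size), 99))
    if bot:
        color = "purple"        # automated bots
    elif not registered:
        color = "green"         # unregistered contributors
    else:
        color = "white"         # assumption: color for registered humans
    return instrument, pitch, color

print(sonify_edit(1200, registered=False))  # a big anonymous addition
print(sonify_edit(-15, bot=True))           # a small bot subtraction
```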
Here’s a short video of Listen to Wikipedia in action:
This track is Stringbot trying things out in Numerology, the music software program that was updated to version 4.0 a month ago, on November 25. The software is self-described as being “all about building musical phrases by starting with simple patterns of repeating notes, and then manipulating the pattern with a set of easily applied transformations.” Among those transformations are “generative” capabilities — that is, changes that are not entirely predictable, such as engaging with chance operations or following evolutionary steps. I first learned about Numerology from Brian Biggs, who records as Dance Robot Dance. This track by Stringbot appeared in his SoundCloud feed on Christmas Day. It puts the chance capabilities to work in “Rando Gamelan,” yielding a gentle pattern that brings to mind a metal mallet instrument submitted to the operations of a wind chime.
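Numerology’s approach, a simple repeating pattern put through small transformations, some of them chance-driven, can be sketched generically. This is an illustration of the idea, not Numerology’s actual engine; the transformation names and parameters are hypothetical.

```python
# A generic sketch of pattern transformation with a chance operation,
# in the spirit of Numerology's "repeating notes plus transformations."
import random

def transpose(pattern, semitones):
    """Shift every note in the pattern by a fixed interval."""
    return [note + semitones for note in pattern]

def rotate(pattern, steps=1):
    """Cycle the pattern, so repetition drifts against the downbeat."""
    return pattern[steps:] + pattern[:steps]

def chance_mutate(pattern, rng, probability=0.25, spread=2):
    """Randomly nudge some notes: a chance operation, so the result
    is not entirely predictable from the input."""
    return [note + rng.randint(-spread, spread)
            if rng.random() < probability else note
            for note in pattern]

rng = random.Random(7)        # seeded so the sketch is repeatable
pattern = [60, 62, 64, 67]    # a simple repeating seed (MIDI notes)
for _ in range(4):            # let the pattern evolve over four passes
    pattern = chance_mutate(rotate(pattern), rng)
    print(pattern)
```

Each pass keeps the pattern recognizable while the chance operation slowly pulls it somewhere new, which is roughly the wind-chime quality “Rando Gamelan” has.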
This short video documentary, Tristan Perich: Mind the Machine, by Russell Oliver, explores artist and composer Perich’s processes and thoughts on automation, sound, systems, and art. As Perich describes it, he’s interested in “where the physical world around us meets the abstract world of computation and electronics.” Perich speaks throughout, describing his approach to his work, and the video includes a studio tour — his studio being as much an electronics tinkering zone as it is a musician’s home recording space. He’s at work, for example, on a variation on the microtonal wall that consisted of 1,500 small speakers, and the studio is filled with clear plastic boxes to help him manage all his parts. He connects his own minimalist — “bare bones,” in his words — approach to that of his father, the artist Anton Perich. Like his father, Perich has explored an automated drawing machine, images of which open the film. There’s some especially glorious material toward the end in which a chorus of exposed speaker cones accompanies pianist Vicky Chow in a live performance.
Marc Weidenbaum founded the website Disquiet.com in 1996 at the intersection of sound, art, and technology, and since 2012 has moderated the Disquiet Junto, an active online community of weekly music/sonic projects. He has written for Nature, Boing Boing, The Wire, Pitchfork, and NewMusicBox, among other periodicals. He is the author of the 33 1⁄3 book on Aphex Twin’s classic album Selected Ambient Works Volume II. Read more about his sonic consultancy, teaching, sound art, and work in film, comics, and other media.
• I was on Vivian Host's Peak Time show (on Red Bull Radio) on March 11 to extol the timeless virtues of Aphex Twin's Selected Ambient Works Volume II, and related works. You can listen to a recording here: redbullradio.com.
• My latest article is a review in the March issue of The Wire magazine of the week-long Recombinant Festival (held in San Francisco), whose performance highlights included Herman Kolgen, Rrose, and Electric Indigo.
• March 22, 2019: I'm giving a talk at noon on Friday at the Algorithmic Art Assembly, two days of events (Friday and Saturday) in San Francisco: aaassembly.org. The talk is titled "The Woodshed Is a Black Box" and this is its description in the program: "How a rules-based system formed, shapes, and fuels the long-running online music community known as the Disquiet Junto."
• May 7, 2019: This day sees the release of Rob Walker's book The Art of Noticing: 131 Ways to Spark Creativity, Find Inspiration, and Discover Joy in the Everyday (Knopf), which has entries about the Disquiet Junto.
• May 22, 2019: Final day of the semester of the 15-week "Sounds of Brands" course I teach once a year at the Academy of Art in San Francisco. I post occasional updates here. Follow the tag #sounds-of-brands.
• December 13, 2019: This day marks the 23rd anniversary of Disquiet.com.
• January 7, 2020: This day marks the 8th anniversary of the Disquiet Junto.
• A chapter on the Disquiet Junto ("The Disquiet Junto as an Online Community of Practice," by Ethan Hein) appears in the forthcoming book The Oxford Handbook of Social Media and Music Learning (Oxford University Press), edited by Stephanie Horsley, Janice Waldron, and Kari Veblen.
• There are entries on the Disquiet Junto in the forthcoming book The Music Production Cookbook: Ready-made Recipes for the Classroom (Oxford University Press), edited by Adam Patrick Bell.
• The Disquiet Junto series of weekly communal music projects explores constraints as a springboard for creativity and productivity. There is a new project each Thursday afternoon (California time), and it is due the following Monday at 11:59pm: disquiet.com/junto.
Since January 2012, the Disquiet Junto has been an ongoing weekly collaborative music-making community that employs creative constraints as a springboard for creativity. Subscribe to the announcement list (each Thursday), listen to tracks by participants from around the world, read the FAQ, and join in.
Disquiet Junto Project 0375: Despite Yourself
• 0375 / Despite Yourself / The Assignment: Make a piece of music that sounds as unlike you as you can accomplish.
• 0374 / Glitch Glitch / The Assignment: What happens when you glitch something that's been glitched?
• 0373 / Copernican Music / The Assignment: Record a piece of music intended for an alien species.
• 0372 / Honeymoon Phase / The Assignment: Record a piece of music with (only) your most recently obtained instrument or music/sound tool.
• 0371 / Concrete Ambience / The Assignment: What could concrete wallpaper music sound like?