

tag: software

The App Developer Prepares a Performance

Chris Carlson plays inside the guitar

Chris Carlson has a performance coming up. This is of note because Carlson is the developer of an iOS app called Borderlands Granular. Carlson’s app allows for a gestural, elegant, detailed exploration of the sounds within sounds. He has posted pre-performance test runs of his approach (the track’s title is “Pigment Library”), which in this case involves guitar chords as the source audio. The result is at times more orchestral than it is rock, more the jubilant yet anxious chaos of strings tuning up than the strumming, however fierce, of a six-string. You can hear moments of guitar-like presence, like the touching of fingers to taut metal, the bending of the wires. But more often than not Carlson is deep inside the guitar, the cloud-like structures of his Borderlands app unfolding the source material, laying bare and layering its inherent textures.
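Granular synthesis, the technique at the heart of Borderlands, works by scattering many short windowed slices ("grains") of a source recording across time, which is how a strummed chord becomes a cloud of texture. Here is a minimal sketch of the idea in Python — the grain length, grain count, and Hann envelope are illustrative assumptions, not Carlson's actual parameters:

```python
import numpy as np

def granulate(source, sr=44100, grain_ms=80, n_grains=200, out_secs=4.0, seed=0):
    """Scatter short Hann-windowed grains of `source` across an output buffer."""
    rng = np.random.default_rng(seed)
    grain_len = int(sr * grain_ms / 1000)
    window = np.hanning(grain_len)            # smooth envelope avoids clicks
    out = np.zeros(int(sr * out_secs))
    for _ in range(n_grains):
        src_start = rng.integers(0, len(source) - grain_len)
        dst_start = rng.integers(0, len(out) - grain_len)
        grain = source[src_start:src_start + grain_len] * window
        out[dst_start:dst_start + grain_len] += grain   # overlap-add
    return out / max(1e-9, np.abs(out).max())           # normalize to [-1, 1]
```

Feed it one second of a recorded chord and it returns four seconds of overlapping fragments of that chord — the "sounds within sounds" effect, in its crudest form.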

Below is an image of what Borderlands looks like in action:

[Image: Borderlands Granular in action]

Track originally posted at soundcloud.com/cloudveins. More from Carlson at cloudveins.bandcamp.com, modulationindex.com, and borderlands-granular.com.


This Week in Sound: 3D Crimes + Posthuman Postrock

+ caption studies + !@#$ patents + Google metronome + iPad conducting + seismic listening

A lightly annotated clipping service:

3D Crimes: The hum of a refrigerator may not be enough to allow identification of its make and model, and the electric car may let us make our engines sound like something else entirely (see the SoundRacer), but more consequentially the rumblings of a 3D printer may contain sufficient detail for someone “to reverse-engineer and re-create 3D printed objects based off of nothing more than a smartphone audio recording”: 3ders.org, via Barry Threw.

[Image: a drummer performing with the wearable third arm]

Posthuman Postrock: There is now a “wearable third arm” for drummers, which brings to mind both the opportunities for posthuman postrock, and the kit developed for Rick Allen of Def Leppard after he lost an arm in the mid-1980s. The photo above shows Tyler White accompanied by Gil Weinberg: gizmag.com, via twitter.com/showcaseJase.

[Heavy Breathing]: Last year, Sean Zdenek published Reading Sounds, a book about captions, about how the audio of filmed entertainment (dialog, diegetic sound like a passing car, and non-diegetic sound like a score) is represented with words superimposed on images. Now there’s a two-day “virtual conference” on captions (Caption Studies) scheduled for August 1 and 2 of this year. If you’re the sort of person, like me, who thrills to “[dramatic music]” and “[ninjas panting],” then I’ll see you there. Well, that is, we’ll be online simultaneously: captionstudies.wou.edu.

!@#$ Patents: This sounded like an April Fools’ joke, but it appeared on Business Insider on March 31, and appears to be the case: Apple has technology that automatically removes the curse words from songs. Filed in 2014, the patent is titled “Management, Replacement and Removal of Explicit Lyrics during Audio Playback.” Keep in mind that two years prior to that, in 2012, the Apple Match service — which adds to your cloud the albums you already own, saving you the perceived hassle of ripping and uploading them — accidentally replaced people’s NSFW versions with the “clean” edits that play in fast-food restaurants and on cautious radio stations. Via factmag.com, Scanner, and King Britt.

[Image: Google’s in-search metronome]

Google BPM: Well, Google the word “metronome” and you’ll be provided with a functioning metronome that allows you to select an integer between 40 and 208 and hear what that click track sounds like: androidpolice.com.
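The arithmetic behind any metronome, Google's included, is a single division: a tempo in beats per minute maps to an interval of 60/BPM seconds between clicks. A sketch — the 40–208 range mirrors Google's widget, but the clamping behavior is my assumption, not documented:

```python
def click_interval(bpm, lo=40, hi=208):
    """Seconds between clicks for a given tempo, clamped to the widget's range."""
    bpm = max(lo, min(hi, bpm))
    return 60.0 / bpm

# 120 BPM -> a click every 0.5 seconds
```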

iClassical Pro: Alan Pierson, of the adventurous chamber ensemble Alarm Will Sound, has uploaded to Medium an article first published two years ago on the group’s blog, but it’s new to me. It’s Pierson talking about how he moved from using paper scores to digital scores when conducting. His take: “And while conducting off tablet is safer in many ways, it’s almost certainly more prone to catastrophe on any particular gig than working off of paper scores: a PC crash is probably more likely than music falling off a stand or out of a binder and harder to recover from. But the plusses seem to far outweigh the minuses.” At least now Google can help with the BPM.

Ear on the Apocalypse: “Seismologists at the University of Alaska Fairbanks Alaska Volcano Network have developed a refined set of methods that allows them to detect and locate the airwaves generated by a volcanic explosion on distant seismic networks.” That is to say, scientists are using earthquake sensors to listen for volcanic explosions: “This study shows how we can expand the use of seismic data by looking at the acoustic waves from volcanic explosions that are recorded on seismometers”: uaf.edu.

This first appeared, in slightly different form, in the April 5, 2016, edition of the free Disquiet “This Week in Sound” email newsletter: tinyletter.com/disquiet.


The Sixteen-Millimeter Fractal

A work of percussive wordplay by Erika Nesse

[Image: still from The Twilight Zone episode “The Sixteen-Millimeter Shrine”]

I wrote about Erika Nesse’s fractal music about a month ago (“A Nautilus of Percussive Expressivity”), and this week she posted another example that’s well worth a listen. Titled “You Can Wish It All Away,” the short piece, not even two full minutes in length, takes tiny snippets of source audio, in this case a woman speaking, and renders from them a slowly evolving rhythmic flurry. Slivers of syllables — not whole verbal sounds but mere bits of them, so even the softest vowel can serve as a plosive thanks to a hard truncation — become an ever-changing fantasy of computer-generated beatcraft.

Two moments seem to suggest that the piece isn’t directly the result of a computer using fractals to break and reformat the source, but that Nesse herself plays a role in the work’s composition — that she is using the fractal algorithm as a source for musical development, much as the algorithm itself is using the original source audio. The first of these moments appears at about the one-minute mark, when the previously furious mix of layered sounds gives way to a harshly minimalist, staccato metric. The second is at the end, when the original sample audio is heard in full, revealing itself as a line from an early episode of The Twilight Zone: “If I wish hard enough, I can wish it all away.” That’s the main character, a former film star, speaking in the episode titled “The Sixteen-Millimeter Shrine.”

Track originally posted at soundcloud.com/conversationswithrocks. More from Nesse, who’s based in Boston, at conversationswithrocks.tumblr.com and erikanesse.bandcamp.com. Film clip screenshot via youtu.be.


A Nautilus of Percussive Expressivity

The fractal music of Erika Nesse

Erika Nesse makes fractal music. She codes the music — “coding” being a term that has as much application these days as do “writing” and “composition” to the production of sound. The following playlist collects over a dozen examples of her algorithms set to work on a variety of audio sources. Listen as sounds ranging from white noise (“Fifty One”) to verbalization (“One two three”) to gentle bleeps (“It goes bop”) cycle through patterns within patterns, coming back around to familiar riffs even as they expand continuously outward, a nautilus of percussive expressivity.

For context, Nesse, who’s based in Boston, Massachusetts, wrote the following about the process behind What the Machine Replied, a five-track EP of her fractal music:

This album was generated entirely with fractals, nesting beats within beats to create a self-similar system. I give a small seed pattern of a couple of notes to the machine, and it goes deep into the tree of recursion and echoes back a dizzying track minutes long. Thus, “what the machine replied”.
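Nesse’s description — a small seed pattern recursed into beats within beats — maps naturally onto a substitution system. A toy sketch of that idea in Python, where the seed and the substitution rule are my own illustrative choices, not her actual algorithm:

```python
def fractal_rhythm(seed, depth):
    """Recursively nest a seed beat pattern inside itself.

    Each hit (1) in the seed is replaced by a full copy of the
    previous level; each rest (0) becomes an equal span of silence.
    """
    if depth == 0:
        return list(seed)
    inner = fractal_rhythm(seed, depth - 1)
    out = []
    for hit in seed:
        out.extend(inner if hit else [0] * len(inner))
    return out

# fractal_rhythm([1, 0, 1], 1) -> [1, 0, 1, 0, 0, 0, 1, 0, 1]
```

A three-step seed recursed four levels deep already yields an 81-step pattern that is self-similar at every scale — a plausible route from “a couple of notes” to “a dizzying track minutes long.”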

Here’s a video visualization that aligns the sounds with images, helping the mind trace the patterns:

SoundCloud set originally posted at soundcloud.com/conversationswithrocks. Keep an eye on Nesse’s fledgling fractalmusicmachine.com website.


Realtime Sonification

A KQED interview with Mahmoud Hashemi about Listen to Wikipedia

[Image: Listen to Wikipedia in action]

Someone adds an entry about a cooking magazine from the 1950s? Boom …

Someone corrects the release date in the discography of James Taylor? Bleep …

Someone undoes a spelling correction in an entry about an undersecretary from a mid-century U.S. presidential administration? Bong …

Audible tones and expanding, colored circles are used in tandem to announce changes to the vast collaborative encyclopedia thanks to the great online tool Listen to Wikipedia (listen.hatnote.com), one of the best examples of realtime sonification on the web. Developed by Stephen LaPorte and Mahmoud Hashemi, it’s the subject of a short recent interview from radio station KQED. The conversation with Hashemi goes into the background of the tool. He talks about the software’s actions, and how it serves both as an expression of Wikipedia and as a response to the economic focus of Silicon Valley.

There’s something very pleasing and centering about the placid surveillance of Listen to Wikipedia, all that communal and often rancorous activity transformed into dulcet tones. Sometimes I just let it run on a side screen as I work. Sometimes I also run this pure geographic visualizer, at rcmap.hatnote.com:

[Image: the rcmap.hatnote.com geographic visualizer]

Up at the top of this post is a sample still frame of Listen to Wikipedia in action. Here is an example of the sort of realtime information that Listen to Wikipedia parses:

[Image: a sample of the realtime edit data Listen to Wikipedia parses]

This documentation summarizes how the sounds and related images of Listen to Wikipedia correlate with actual edits:

Bells indicate additions and string plucks indicate subtractions. Pitch changes according to the size of the edit; the larger the edit, the deeper the note. Green circles show edits from unregistered contributors, and purple circles mark edits performed by automated bots.
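That mapping is easy to mirror in code. Below is a hedged sketch of how a single edit event might become sound parameters — the frequency range, the logarithmic size scaling, and the white-circle default for registered users are my assumptions; only the qualitative rules (bell vs. pluck, larger edit means deeper note, circle color by editor type) come from the documentation quoted above:

```python
import math

def sonify_edit(change_size, registered=True, bot=False,
                f_hi=1000.0, f_lo=100.0, max_size=10_000):
    """Map one edit to (instrument, frequency in Hz, circle color)."""
    # Additions ring a bell; subtractions pluck a string.
    instrument = "bell" if change_size >= 0 else "pluck"
    # Larger edits get deeper notes: scale log of size down from f_hi to f_lo.
    size = min(abs(change_size), max_size)
    frac = math.log1p(size) / math.log1p(max_size)
    frequency = f_hi - frac * (f_hi - f_lo)
    color = "purple" if bot else ("white" if registered else "green")
    return instrument, frequency, color
```

Hooked up to Wikipedia’s realtime recent-changes feed, a loop over events like this is essentially all the sonification logic the tool needs; the rest is synthesis and drawing.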

Here’s a short video of Listen to Wikipedia in action:

Track originally posted at soundcloud.com/kqed. The KQED story was produced by Sam Harnett, of the podcast The World According to Sound (theworldaccordingtosound.org). Check out Listen to Wikipedia at listen.hatnote.com. It’s also available as a free iOS app (itunes.apple.com).
