

Synth Learning: “Tako Friday”

The soap-opera narrative of my modular synthesizer diary is me breaking up with and then getting back together with my Soundmachines UL1 module. I think we finally committed to a long-term engagement last night. Season-ending episode.

This evening, to celebrate the 24-hour-versary of our vows, I ran a slow arpeggio of a series of electric guitar chords through the UL1, and through four other processing units.

Here’s more technical detail, as part of my modular diary, mostly for my own memory: All five of these separate processings of the guitar play simultaneously, though two are being gated, meaning you don’t hear them consistently. The UL1 is a lo-fi looper, and it’s the thing here being pushed into glitch territory. The UL1 is receiving a narrow, high-end band of the guitar signal, as filtered by the Make Noise FXDf. Another narrow band, also on the high end, is going from the FXDf straight out. A third narrow band, the highest of the trio, is going into a slowly clocked Befaco Muxlicer, the relative volume of the signal changing with each pulse. That same pulse determines whether a fourth channel, the guitar through the Make Noise Erbe-Verb reverb module, is heard or not (as clocked by a slow square wave on a Batumi). That Erbe-Verb is also having its algorithm flipped into reverse, on occasion, based on the same clocked pulse, but with the gate delayed a bit (thanks to the Hemispheres firmware running on an Ornament and Crime module). And finally, the guitar is running through Clouds, a granular synthesis module, which is also being clocked to occasionally snag a bit of the guitar signal and turn it into a haze.
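For my own future reference, here is the same routing sketched as a plain Python data structure. It’s just a note-taking device, not an executable patch; the per-chain descriptions are paraphrases of the paragraph above, not readings from the modules themselves.

```python
# A plain-data sketch of the five parallel guitar chains described above.
# Names are shorthand for the hardware modules; nothing here produces audio.

patch = [
    {"source": "guitar", "chain": ["FXDf band-pass (high)", "UL1 lo-fi looper"],
     "notes": "looper pushed into glitch territory"},
    {"source": "guitar", "chain": ["FXDf band-pass (high)"],
     "notes": "straight out, no further processing"},
    {"source": "guitar", "chain": ["FXDf band-pass (highest)", "Muxlicer"],
     "notes": "level steps with each slow clock pulse"},
    {"source": "guitar", "chain": ["Erbe-Verb"],
     "notes": "gated by a Batumi square wave; reverse flipped by a delayed copy "
              "of the same gate (Hemispheres firmware on Ornament and Crime)"},
    {"source": "guitar", "chain": ["Clouds granular"],
     "notes": "clock occasionally snags a grain and smears it into haze"},
]

for i, voice in enumerate(patch, 1):
    route = " -> ".join([voice["source"]] + voice["chain"])
    print(f"voice {i}: {route}  ({voice['notes']})")
```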

It took a while to get the chords right. The only note the four chords have in common is an open D. The piece fades in with the D played on two strings, setting the backing tone. It also took a while to make the right processing decisions. I started with the UL1, and then built up and adjusted from there. I’m working on having more randomness in the triggering of the UL1, but this is pretty good, as far as it goes.

It sounds a bit “Octopus’s Garden,” so it’s titled “Tako Friday” (tako being Japanese for octopus, and this being Friday). In retrospect I hear a bit of “The Dark Side of the Moon” in there, too. The audio was recorded through a Mackie mixer into a Zoom H4n, and then trimmed and given a fade in and fade out in Adobe Audition.

Track originally posted at soundcloud.com/disquiet.


Studio Journal: “The Body Pneumatic”

Usually when I use my iPad as part of the process, it’s just that: part of the process, creating something that I then employ in another context (like a sample for my modular synth), or processing something external (such as my electric guitar). This time, I wanted to do something where the iPad was the beginning as well as the end of the process, and everything in between, all the way through uploading the finished recording to SoundCloud.

The short version of the process: I recorded my breath (something closer to a breathy vowel), then cut it up into slivers, then enacted some alterations on those individual slivers, then triggered them, then recorded a second variation triggered differently, then combined the two tracks by overlaying them, and then uploaded.

This is part of the current Disquiet Junto project (number 0384), in which we find rhythmic material in our breathing. When working with the sample-triggering, I set the pace to 60 bpm, which is sort of my happy pace. I didn’t treat the start of each breath as the beat; instead, I drew the pulse from various moments within the breath.

For more detail, here are the iPad apps I used: I recorded the breathy vowel into AudioShare. It took several tries to get the quality I was looking for. The iPad’s microphone turned most of the initial breath attempts into harshly serrated white noise, which is when I added a vowel/hum quality to the breath, and that took the edge off it. I transferred the sample from AudioShare to ReSlice, and then I used ReSlice to break it down into evenly divided segments, and then changed the attack, decay, and release on those slices, in order that each had a unique quality (I also set two of them to play in reverse).

I used the Autony app to trigger the slices in ReSlice for one track, and then added more randomness within Autony to a second round of triggers, yielding a second track of equal length. I could have done those two tracks separately and added them together after the fact, but I wanted to hear what they sounded like together, so I did this all in the AUM app. When I was happy with the balance between the two Autony-triggered ReSlices, I transferred the two lines to the Cubasis 2 app, then used the mixdown tool within Cubasis to output a finished mix. Then I sent that back to AudioShare, and used AudioShare to upload to my SoundCloud account.
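None of this happened on a laptop, but for anyone curious about the underlying operations, here is a rough sketch of the same steps in Python with NumPy. The shaped noise standing in for the breath recording, the slice count, the envelope values, and the per-beat offsets are all placeholder assumptions, not settings pulled from ReSlice, Autony, or AUM.

```python
import numpy as np

sr = 44100
rng = np.random.default_rng(0)

# Stand-in for the recorded breathy vowel: a few seconds of shaped noise.
breath = rng.normal(0, 0.1, sr * 4) * np.hanning(sr * 4)

# Cut the recording into evenly divided slices (ReSlice's job in the original).
n_slices = 8
slices = list(np.array_split(breath, n_slices))

# Give each slice its own attack/release envelope, and play two of them in reverse.
def shape(x, attack, release):
    env = np.ones(len(x))
    a, r = int(len(x) * attack), int(len(x) * release)
    env[:a] = np.linspace(0, 1, a)
    env[-r:] = np.linspace(1, 0, r)
    return x * env

slices = [shape(s, rng.uniform(0.05, 0.4), rng.uniform(0.1, 0.5)) for s in slices]
slices[2], slices[5] = slices[2][::-1], slices[5][::-1]

# Trigger slices at 60 bpm (one trigger per second), landing each trigger at a
# random moment within the beat rather than on its start; render two passes
# (Autony's role in the original) and overlay them.
def render(beats, seed):
    r = np.random.default_rng(seed)
    out = np.zeros(sr * beats)
    for beat in range(beats):
        s = slices[r.integers(n_slices)]
        start = beat * sr + r.integers(0, sr // 2)  # somewhere inside the beat
        end = min(start + len(s), len(out))
        out[start:end] += s[: end - start]
    return out

mix = render(16, seed=1) + render(16, seed=2)
mix /= np.max(np.abs(mix)) + 1e-9  # simple normalize before export
```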

More on this 384th weekly Disquiet Junto project — Breath Beat / The Assignment: Explore breath as a resource for rhythm — at:

disquiet.com/0384/

More on the Disquiet Junto at:

disquiet.com/junto/

Subscribe to project announcements here:

tinyletter.com/disquiet-junto/

Project discussion takes place on llllllll.co:

llllllll.co/t/disquiet-junto-pr…t-0384-breath-beat/

There’s also a Junto Slack. Send your email address to twitter.com/disquiet for Slack inclusion.

Image associated with this project adapted (cropped, colors changed, text added, cut’n’paste) thanks to a Creative Commons license from a photo credited to Victor Morell Perez:

flic.kr/p/4M5zUQ

creativecommons.org/licenses/by-nc/2.0/


Synth Learning: Muxlicer Piano (First Patch)

A first go with a new tool

This is my first patch with a new module, a device from the manufacturer Befaco called the Muxlicer. It is capable of many things involving slicing up an incoming signal and effecting changes upon it, such as triggering all sorts of percussive cues. In this case, a sample of an electric piano is being triggered every eight beats, and then for each of those beats (pulses, really) various things occur. In most cases it’s a matter of the volume level shifting, but in two cases (that is, on two of the pulses) some heavy reverb is put upon it. In addition, a sliver of the signal is being sampled and replayed in a glitchy manner at a lower volume. (Technically, the first take of this patch processed live guitar chords, but I decided to use sample playback for this initial round.)
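As a rough sketch of the step logic, here is the pattern reduced to code. The gain values and the choice of which two steps get the reverb are made up for illustration; they are not the positions of the actual Muxlicer faders.

```python
# Sketch of the step logic described above: the piano sample retriggers every
# eighth pulse, each pulse sets a new level, and two of the eight steps send
# the signal into heavy reverb. All values are illustrative placeholders.

step_gains = [1.0, 0.6, 0.8, 0.3, 0.9, 0.5, 0.7, 0.4]  # per-pulse volume
reverb_steps = {3, 6}                                   # pulses that get drenched

for pulse in range(16):
    step = pulse % 8
    if step == 0:
        print(f"pulse {pulse:2d}: retrigger electric-piano sample")
    wet = "heavy reverb" if step in reverb_steps else "dry"
    print(f"pulse {pulse:2d}: step {step}, gain {step_gains[step]:.1f}, {wet}")
```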


Synth Learning: Distant Train Vapor Trail Fragments

Reworking a Wyoming field recording

This is a single field recording of a train horn (source below) being warped simultaneously in various ways by my modular synthesizer. One strand is going through a reverb, another through granular synthesis, another through a lo-fi looper, and another through a spectral filter, and then there’s the unadulterated line, and all of those are being warped or tweaked themselves. The shape of the overall sound, for example, is affecting the density of the granular synthesis. Various LFOs are adjusting the relative prominence of different elements, and other aspects such as the size of the reverb.
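The “shape of the overall sound” doing the modulating is essentially an envelope follower. Here is a minimal sketch of that idea in Python, with the attack and release times, the stand-in audio, and the 2-to-40 grains-per-second range all assumed for illustration rather than taken from the patch.

```python
import numpy as np

sr = 48000

def envelope_follower(signal, attack=0.01, release=0.2):
    """One-pole follower: fast rise, slow fall, tracking overall loudness."""
    a = np.exp(-1.0 / (attack * sr))
    r = np.exp(-1.0 / (release * sr))
    env = np.zeros_like(signal)
    level = 0.0
    for i, x in enumerate(np.abs(signal)):
        coeff = a if x > level else r
        level = coeff * level + (1 - coeff) * x
        env[i] = level
    return env

# Stand-in for the train-horn recording: a swelling sine tone.
t = np.linspace(0, 2, 2 * sr)
horn = np.sin(2 * np.pi * 180 * t) * np.clip(t, 0, 1)

# Map the follower's output to a grain-density control (grains per second);
# the 2-to-40 range is an arbitrary choice for the sketch.
env = envelope_follower(horn)
density = 2 + 38 * (env / (env.max() + 1e-9))
print(f"density ranges from {density.min():.1f} to {density.max():.1f} grains/sec")
```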

The source audio is from freesound.org. It is a distant train horn recorded by Andy Brannan in Wyoming.


Guitar + Synth Learning: Ultomaton Software

This is a quick, initial attempt on my part with a new piece of software called Ultomaton. The name is a play on the word “automaton” because the software employs Conway’s Game of Life, the famed “cellular automaton” simulation, as a source of triggers and other controls, such as volume and placement in the stereo field, for all manner of sonic processes. These effects include stutter, backwards audio, looping, and granular synthesis, several of which are heard in this test run.
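I don’t know precisely how Ultomaton maps the board to its effects, but the core idea, stepping Conway’s Game of Life and reading controls out of the grid each generation, can be sketched in a few lines of Python. The specific mappings below (one cell as a trigger, population as volume, left-half weight as pan) are guesses for illustration only.

```python
import numpy as np

rng = np.random.default_rng(7)
grid = (rng.random((16, 16)) < 0.3).astype(int)  # random starting board

def life_step(g):
    """One generation of Conway's Game of Life on a wrapped (toroidal) grid."""
    neighbors = sum(np.roll(np.roll(g, dy, 0), dx, 1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
    return ((neighbors == 3) | ((g == 1) & (neighbors == 2))).astype(int)

# Read hypothetical musical controls out of the board each generation; the
# actual Ultomaton mapping may differ.
for gen in range(8):
    grid = life_step(grid)
    trigger = bool(grid[0, 0])                      # a single cell as an on/off trigger
    volume = grid.sum() / grid.size                 # overall population -> level
    pan = grid[:, :8].sum() / max(grid.sum(), 1)    # left-half weight -> stereo position
    print(f"gen {gen}: trigger={trigger}, volume={volume:.2f}, pan={pan:.2f}")
```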

What I’m playing on electric guitar as the source audio is the first of the 120 right-hand exercises by composer Mauro Giuliani (1781 – 1829). I’ve been working my way through these exercises for the past few weeks, and sometimes experimenting with ways to make practice even more enjoyable, such as prerecording the chords to supply accompaniment. The real-time processing of Ultomaton provides accompaniment as I play, built from the very things I am playing at that moment. The accompanying screenshot shows the Ultomaton controls as they were set for this recording.

The electric guitar went into the laptop, a MacBook Air (2013), via a Scarlett audio interface. After recording, the audio was cleaned up in Adobe Audition: volume increased, a bit of reverb added, and a fade-in/fade-out implemented.

Track originally posted at soundcloud.com/disquiet. The Ultomaton software is the work of Benjamin Van Esser, who is based in Brussels, Belgium. The software is free at github.com/benjaminvanesser. More information at benjaminvanesser.be.
