
Listening to art.
Playing with audio.
Sounding out technology.
Composing in code.

tag: software

Guitar + Synth Learning: Ultomaton Software

This is a quick, initial attempt on my part with a new piece of software called Ultomaton. The name is a play on the word “automaton” because the software employs Conway’s Game of Life, the famed “cellular automaton” simulation, as a source of triggers and other controls, such as volume and place in the stereo spectrum, for all manner of sonic processes. These effects include stutter, backwards audio, looping, and granular synthesis, several of which are heard in this test run.
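To make the cellular-automaton idea a little more concrete, here is a minimal Python sketch of the general approach. It is not Ultomaton’s actual code, which I haven’t examined; the grid size, the treatment of newly born cells as triggers, and the pan/volume mapping are all assumptions made purely for illustration.

```python
import random

SIZE = 8  # toy grid size, chosen for illustration only


def step(grid):
    """Advance one Game of Life generation on a wrapping grid."""
    nxt = [[0] * SIZE for _ in range(SIZE)]
    for y in range(SIZE):
        for x in range(SIZE):
            neighbors = sum(
                grid[(y + dy) % SIZE][(x + dx) % SIZE]
                for dy in (-1, 0, 1)
                for dx in (-1, 0, 1)
                if (dy, dx) != (0, 0)
            )
            nxt[y][x] = 1 if neighbors == 3 or (grid[y][x] and neighbors == 2) else 0
    return nxt


def triggers(old, new):
    """Treat cells that just switched on as triggers for sonic processes.

    The mapping here (column to stereo pan, row to volume) is a guess at the
    kind of control a tool like Ultomaton derives from the grid, not its
    actual scheme.
    """
    events = []
    for y in range(SIZE):
        for x in range(SIZE):
            if new[y][x] and not old[y][x]:
                pan = (x / (SIZE - 1)) * 2 - 1   # -1 = hard left, 1 = hard right
                volume = 1 - y / (SIZE - 1)      # higher rows louder
                events.append({"pan": round(pan, 2), "volume": round(volume, 2)})
    return events


grid = [[random.randint(0, 1) for _ in range(SIZE)] for _ in range(SIZE)]
for generation in range(4):
    new_grid = step(grid)
    print(f"generation {generation}: {len(triggers(grid, new_grid))} triggers")
    grid = new_grid
```

Each generation yields a fresh batch of trigger events, which is presumably what gives the real-time processing its unpredictable, evolving character.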

What I’m playing on electric guitar as the source audio is the first of the 120 right-hand exercises by composer Mauro Giuliani (1781 – 1829). I’ve been working my way through these exercises for the past few weeks, and sometimes experimenting with ways to make practice even more enjoyable, such as prerecording the chords to supply accompaniment. The real-time processing of Ultomaton provides accompaniment as I play, built from the very things I am playing at that moment. The accompanying screenshot shows the Ultomaton controls as they were set for this recording.

The electric guitar went into the laptop, a MacBook Air (2013), via a Scarlett audio interface. After being recorded, the audio was cleaned up in Adobe Audition: the volume was increased, a bit of reverb was added, and a fade-in/fade-out was implemented.

Track originally posted at soundcloud.com/disquiet. The Ultomaton software is the work of Benjamin Van Esser, who is based in Brussels, Belgium. The software is free at github.com/benjaminvanesser. More information at benjaminvanesser.be.


Algorithmic Art Assembly

I'll be giving a talk at this two-day event in San Francisco on March 22

My friend Thorsten Sideb0ard is hosting Algorithmic Art Assembly, a new event in San Francisco on March 22nd and 23rd this year, “focused on algorithmic tools and processes.” I’ll be doing a little talk on the 22nd, which is a Friday.

Speakers include: Windy Chien, Jon Leidecker (aka Wobbly), Julia Litman-Cleper, Adam Roberts (Google Magenta), Olivia Jack; Mark Fell (a Q&A), Spacefiller, Elizabeth Wilson, M Eiffler, Adam Florin, Yotam Mann & Sarah Rothberg — and me. Performances include: Kindohm, Algobabez, Renick Bell, Spatial, Digital Selves, Wobbly, Can Ince; Mark Fell, W00dy, TVO, Shatter Pattern, William Fields, Sebastian Camens, Spednar. Here’s a bit more from the website, aaassembly.org:

Algorithmic Art Assembly is a brand new two day conference and music festival, showcasing a diverse range of artists who are using algorithmic tools and processes in their works. From live coding visuals and music at algoraves, to virtual reality, gaming, augmented tooling, generative music composition, or knot tying, this event celebrates artists abusing algorithms for the aesthetics.

Daytime talks will present speakers introducing and demonstrating their art, in an informal and relaxed setting, (very much inspired by Dorkbot).

Each day will feature one workshop in an intimate setting, creating an opportunity for you to learn how to create live coded music using two of the main platforms, SuperCollider and TidalCycles. Workshops are limited in space, with reservation required – details to come.

Evening performances will be heavily based upon the algorave format, in which the dancefloor is accompanied by a look behind the veil, with several artists projecting a livestream of their code on screen. Performers will play energetic sets back to back, with minimal switch-over time.

It was a new year, so I cleaned up my bio a bit. Here’s how it reads currently:

Marc Weidenbaum founded the website Disquiet.com in 1996 at the intersection of sound, art, and technology, and since 2012 has moderated the Disquiet Junto, an active online community of weekly music/sonic projects that explore creative constraints. A former editor of Tower Records’ music magazines, Weidenbaum is the author of the 33 1⁄3 book on Aphex Twin’s classic album Selected Ambient Works Volume II, and has written for Nature, Boing Boing, Pitchfork, Downbeat, NewMusicBox, Art Practical, The Atlantic online, and numerous other periodicals. Weidenbaum’s sonic consultancy has ranged from mobile GPS apps to coffee-shop sound design, comics editing for Red Bull Music Academy, and music supervision for two films (the documentary The Children Next Door, scored by Taylor Deupree, and the science fiction short Youth, scored by Marcus Fischer). Weidenbaum has exhibited sound art at galleries in Dubai, Los Angeles, and Manhattan, as well as at the San Jose Museum of Art, and teaches a course on the role of sound in branding at the Academy of Art University in San Francisco. Weidenbaum has commissioned and curated sound/music projects that have featured original works by Kate Carr, Marielle V Jakobsons, John Kannenberg, Steve Roden, Scanner, Roddy Schrock, Robert Thomas, and Stephen Vitiello, among many others. Raised in New York, Weidenbaum lives in San Francisco.

More on the Algorithmic Art Assembly at aaassembly.org. The event will take place, both days, at Gray Area Foundation for the Arts (grayarea.org).


Live Coding the 100th Ambient Performances Video

A hand-typed drone sequence from musician Charlie Kramer

This video, a five-plus-minute exploration of pinging drones by musician Charlie Kramer, marks the 100th video in the ongoing playlist I’ve been maintaining of live performances of ambient music. The entry is both a milestone and a deviation, about which more in a moment.

First, a bit about the playlist itself. It began in April of 2016 as “A YouTube Playlist of Ambient Performances,” front-loaded with a handful of pieces by such musicians as Andreas Tilliander, Christina Vantzou, Ryuichi Sakamoto, Nils Frahm (as a member of Nonkeen), and Jon Hassell. At the time I started it, I listed the following rules for its existence:

This “Ambient Performances” set is a playlist-in-progress of live performance videos on YouTube of ambient music by a wide variety of musicians using a wide variety of equipment.

Rule #1: I’m only including recordings I’d listen to without video.

Rule #2: I’m only including recordings where the video gives some sense of a correlation between what the musician is doing and what the listener is hearing.

Rule #3: By and large, the new additions to the playlist will simply be, reverse-chronologically, the most recent tracks added, but I’ll be careful to front-load a few choice items at the beginning.

Those rules summarized the filters that led to video selection, but they didn’t touch on the reasoning behind the playlist, nor did the initial post announcing the playlist’s existence. The underlying reasons included, certainly, curiosity on my part about how such music was made, and in particular about the creative tension at work in which effort was required to make music that seemed, by its categorical nature, to eschew the notion of effort — ambient music, that is.

But there was another reason, which was simply that the majority of videos featuring technology I found interesting (tutorials, live sets, peeks inside people’s studios, behind-the-scenes footage) had music I couldn’t stand listening to. This playlist of mine was an attempt to focus on the rare material that satisfied my ears, my eyes, and my imagination.

One hundred videos later, something had been surfacing in my thoughts: while the videos all adhered to the initial rules, they had also, often, come to focus on mechanisms and on video production that were as beautiful as the music itself — synthesizers on fields and beaches, keyboards amid flowers and carefully placed objects. It’s no surprise that musicians who can achieve a certain aesthetic in the sonic realm might also be capable of carrying it over to the visual realm. However, I had come to wonder if I’d fallen for beauty, and if visual beauty had become something of a magnet rather than a mere byproduct of what I was after.

In any case, it was with that in mind that I began to actively pursue less visually compelling videos that still satisfied the rules that launched the playlist. In the process I narrowed and lightly edited those rules, since the third one only really applied at launch, yielding this amended list, which still applies to all the videos added to date:

This “Ambient Performances” set is an ongoing playlist-in-progress of live performance videos on YouTube of ambient music by a wide variety of musicians using a wide variety of equipment. There are two rules for it:

Rule #1: I’m only including recordings I’d listen to without video.

Rule #2: I’m only including recordings where the video gives some meaningful sense of a correlation between what the musician is doing and what the listener is hearing.

Note: The list appears in reverse-chronological order, which means that the video listed as #1 is the most recent. When a new video is added, the current #1 becomes #2.

Which brings us to Charlie Kramer’s piece. While all previous videos in this playlist involved physical equipment, with an emphasis on modular synthesizer, Kramer’s recording is a document of live coding — of computer programming as performance practice. The only instrument is his computer, seen here in footage of his screen. What he is doing throughout the piece is manipulating computer code in real time. As with the previous videos in this playlist, there is a direct, informative correlation between what Kramer is doing on screen — we don’t see his hands, but we see keystrokes being entered, and a mouse moving around — and what our ears are taking in. When he fixes some indents, as he does around 1:03 in the video, there is no commensurate change in sound. However, when, later, some integers are changed, we hear variations on what was sounding out previously.

As Kramer explains in the accompanying note, this piece is composed — is coded — in the language ChucK. Each time he hits the Add Shred button at the top of the window in which the ChucK code appears, the current instance of that code begins to be executed: new variables and new commands bringing to life new musical directions. When Kramer does so, a giant green plus sign appears briefly on the screen. That giant green plus is a perfect depiction of the connection between precise action and subtle sound that this playlist was intended to explore.
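For readers who haven’t used ChucK: those concurrently running pieces of code are what the language calls shreds, and Add Shred simply launches another one alongside whatever is already playing. The following Python sketch is only a loose analogy, not ChucK code: each “shred” here is a thread emitting printed note events on its own schedule, and adding a second one at runtime layers a variation over the first, roughly the way re-adding edited code layers new material in Kramer’s performance. The pitch numbers and timings are invented for the example.

```python
import threading
import time


def add_shred(name, pitches, interval):
    """A stand-in for ChucK's Add Shred: start an endless loop that emits note events."""
    def run():
        while True:
            for pitch in pitches:
                print(f"{name}: note {pitch}")
                time.sleep(interval)

    thread = threading.Thread(target=run, daemon=True)
    thread.start()
    return thread


# First "Add Shred": a slow two-note drone figure starts sounding.
add_shred("drone", [36, 43], interval=1.0)
time.sleep(3)

# Second "Add Shred": the same kind of code with different integers layers a
# variation on top of the still-running first process.
add_shred("variation", [48, 50, 55], interval=0.5)
time.sleep(3)
```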

Kramer’s track was recorded as part of the most recent weekly music composition prompt in the ongoing Disquiet Junto series. Kramer, who also goes by NorthWoods, posted the video and the code, along with some background on the piece, to the llllllll.co message board, where it’s still available for perusal.

The video is hosted at Kramer’s YouTube channel.


RIYL Streaming

Introducing a new Disquiet.com ambient music playlist on Spotify: Stasis Report

What would make a useful streaming ambient-music playlist format?

I’ve tried making one in the past on various services, including Spotify, Google Play Music, and Apple’s, and I’ve never felt like I got to something useful. So then I’d stop. By way of contrast, I have in mind the YouTube playlist I maintain of fine live ambient music performances in which the music technology employed is in plain view. That playlist came out of a need and served a purpose. The need was that most of the music technology videos I watched involved music I couldn’t stand, and the purpose was to explore the tension between the near stasis of ambient music and the activity employed to produce it.

With a Spotify/etc. playlist, the main need and purpose are less clear, in that they’re no doubt satisfactorily fulfilled already — by humans, and by algorithms, and by the hybrid thereof. The need and purpose are the same: a collection of regularly updated, recommended ambient music to listen to, presumably while doing something else. Between readily available full-length recordings and algorithmic dispensing of RIYL “discovery,” there’s plenty of background music to listen to and to pay attention to if you want to. (I’ll continue to put quotes around music “discovery” until the quotes’ presence is widely presumed.) Which makes me wonder what would make an ambient playlist distinct — because, furthermore, the very nature of streaming services arguably turns all music into, if not ambient music, then certainly background music. Perhaps the answer to the purpose question is largely related to the selection, but it feels like the context of the selection is also part of the purpose, part of the process. It’s not just about a set of items; it’s about their context in the reader’s life, in the music’s life.

I posted a rough draft of this thinking on Twitter today to get the thoughts out, to see if anyone had input (which they did). I figure there would be two playlists, not one. There would be a main playlist, defined perhaps by length: 90 minutes or so of peace, for lack of a better word. The main ambient playlist would be time-sensitive — a somewhat ironic concept for music that aims for a genre-defining sense of timelessness. In any case, the main playlist would be recent releases, and music timed to events (birthdays, deaths, milestones, news, anniversaries). The second playlist is where tracks go after they’re no longer timely.

Part of the complexity I’ve faced in doing a Spotify playlist is that I don’t listen to many playlists. I like knowing what I’m listening to, and most playlist functionality is defined by its active dearth of context. (Now, Active Dearth would make an excellent playlist name.) And when I say Spotify, I mean any of the streaming services, more or less, since each has its own plusses and minuses.

Anyhow, yeah, a big part of what led to my development of my YouTube ambient-performances playlist was my having gotten into active — systematic, enjoyable, anticipatory — viewing on YouTube. Rather than YouTube being a thing I ended up on on occasion, I started subscribing to channels and checking my subscription items each morning. In contrast I still use music streaming services mostly as a way to catch up or to fill in a mental blank. Each week I get more music (because I write about music) and buy more music (perhaps merely m-m-my generation’s habits) than I could hear in a week. Streaming, in contrast, has been, for me, a supplement.

Backing up to the “why” of this hypothetical playlist, the answer is fairly simple. Once upon a time as a music critic, you wrote about music and you reached an audience who read it. Today, as a music critic, you can (also) make a playlist and reach people who might never have read what you would have written. That’s why I always liked DJing — in college on WYBC, later on KDVS when I moved to California. (And, yeah, I did a couple podcast episodes and plan to get back to it, but that’s really a separate story from playlist production.) Anyhow, this is what’s on my mind, and thinking it out in public can be productive. If you have thoughts about what would constitute an ambient playlist, it’d be appreciated.

For the time being, the playlist is called the Stasis Report. It launched today with music from Brian Eno and Marcus Fischer, Madeleine Cocolas and r beny, Lisa Gerrard and William Basinski, Emily A. Sprague and Grouper, among others. There is a secondary playlist, to which tracks are moved after they’re out of circulation on the Stasis Report. That playlist is called the Stasis Archives.


Snakes & Oscillators

A glimpse at a video-game music interface by Jon Davies


Just to follow up on yesterday’s post of an Instagram video depicting a tiny robot band playing artfully arranged instrumental music, here’s another solid example of the miniature musical-technological (a slightly more humane appellation than “music-technology”) wonders found on the social network.

As you listen to the clip, a brief synthesized melody is being modulated in real time, the sound warping at the whim of a controller. The familiar shape of the x/y control pad is visible in the lower right-hand corner of the illuminated grid device. What it controls is the snake, familiar from video games like Centipede, the early-1980s classic. The snake can be aimed at a little stationary reward, whose consumption ushers in a new phase of the melody, which appears to move up the register a step at a time, or something along those lines.

The rules of this game-composition aren’t entirely clear, but it does appear that while you can aim the snake to hit that reward light right on the schedule that the rhythm suggests, you can also delay doing so, letting the standing melody extend for a while. It’s nice to imagine how an audience in a live setting would get engaged in such a performance, becoming aware of the process and enjoying the occasions of delayed gratification as the snake takes its time to consume its prey. It’s also interesting to think about how the scenario can train a player to keep time, or adeptly veer from it, along the lines of Guitar Hero and other so-called rhythm games.
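Since those rules are being inferred from a short clip, the following Python sketch is only a guess at the underlying logic: each time the snake reaches the reward, the phrase is transposed up a scale step, and until then the current phrase simply repeats. The scale, the phrase shape, and the beat-by-beat outcomes are all invented for the example.

```python
# A guessed-at model of the game-composition: eating the reward advances the
# melody a scale step; otherwise the current phrase just loops.
C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]  # MIDI pitches, purely illustrative


def phrase(degree):
    """Return a short looping phrase starting at the given scale degree."""
    return [C_MAJOR[(degree + offset) % len(C_MAJOR)] for offset in (0, 2, 1)]


degree = 0
for beat, snake_ate_reward in enumerate([False, False, True, False, True, False]):
    if snake_ate_reward:
        degree += 1  # consuming the reward ushers in the next phase of the melody
    print(f"beat {beat}: phrase {phrase(degree)}")
```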

Video found via a post by Scanner Darkly on the llllllll.co boards. Software by Jon Davies, on whose Instagram account the clip was published. The device is the open-source Monome Grid controller (more at monome.org). Davies says the code will soon be shared publicly, for those who want to play along at home.
