My 33 1/3 book, on Aphex Twin's Selected Ambient Works Volume II, was the 5th bestselling book in the series in 2014. It's available at Amazon (including Kindle) and via your local bookstore. • F.A.Q. • Key Tags: #saw2for33third, #sound-art, #classical, #junto • Elsewhere: Twitter, SoundCloud, Instagram

Listening to art.
Playing with audio.
Sounding out technology.
Composing in code.


Disquiet Junto Project 0381: Shared System

The Assignment: make music using a free software synth assembled by Scanner.

Each Thursday in the Disquiet Junto group, a new compositional challenge is set before the group’s members, who then have just over four days to upload a track in response to the assignment. Membership in the Junto is open: just join and participate. (A SoundCloud account is helpful but not required.) There’s no pressure to do every project. It’s weekly so that you know it’s there, every Thursday through Monday, when you have the time.

Deadline: This project’s deadline is Monday, April 22, 2019, at 11:59pm (that is, just before midnight) wherever you are. It was posted in the morning, California time, on Thursday, April 18, 2019.

Tracks will be added to the playlist for the duration of the project.

These are the instructions that went out to the group’s email list (at tinyletter.com/disquiet-junto):

Disquiet Junto Project 0381: Shared System
The Assignment: make music using a free software synth assembled by Scanner.

Step 1: This week’s project involves all participants working with the same instrument. The instrument is a specific set of modules in a free software synthesizer. The software is Reaktor Blocks Base. It comes as part of the free Reaktor Komplete Start, which is available here:

https://www.native-instruments.com/en/products/komplete/bundles/komplete-start/

(Certainly if you’d prefer to emulate this week’s “shared system” using VCV Rack, or another piece of software, or your own hardware modules, that is totally fine.)

Step 2: The musician Scanner (aka Robin Rimbaud) graciously agreed to create a shared system based on Reaktor Blocks Base. It consists of these modules:

Bento Box Osc
Bento Box SVF
Bento Box VCA
Bento Box Mix
Bento Box Env
Bento Box LFO
Bento Box S&H
Bento Box 4 Mods

Here is some background on Scanner’s thought process in the development of this system: “I think it would be interesting to present a limited package of blocks they can use, and to not use a traditional sequencer. Instead, people would consider how an LFO or modulation can move a sound or series of sounds around. (In some sense, this is a more West Coast than East Coast approach.) I’m concerned if we include the sequencer then it would suggest lots of decent pattern-oriented music wrapped around a similar theme or approach. This idea of such reductionism is basically about avoiding the obvious in these encounters and leaving the creator to think a little more than they might have to otherwise. It could perhaps be reduced further, but it’s enough to get people shaping sounds and creating shapes. Any less and it could potentially be too limiting and uninspiring. It’s truly a Bento Box Delight. I presume there’s a modest learning curve for some users, but there’s a guide that seems very clear on the NI website.”

Step 3: Create a piece of music using only the modules (one of each) as described in Step 2 above.
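To make the sequencer-free idea concrete — an LFO feeding a sample-and-hold that chooses pitches, rather than a programmed pattern — here is a minimal, purely illustrative Python sketch. (The project itself uses the Reaktor Blocks modules listed above; every name and parameter here is hypothetical.)

```python
import math

def lfo(t, freq=0.5):
    """A slow sine LFO: output in [-1, 1] at time t (seconds)."""
    return math.sin(2 * math.pi * freq * t)

def sample_and_hold(source, clock_hz, duration, sr=100):
    """Sample `source` on each clock tick and hold the value in between,
    the way an S&H module freezes a moving modulation signal."""
    samples_per_tick = int(sr / clock_hz)
    held = source(0.0)
    out = []
    for n in range(int(duration * sr)):
        if n % samples_per_tick == 0:
            held = source(n / sr)  # clock tick: grab a fresh value
        out.append(held)           # between ticks: hold the last value
    return out

# Held LFO values become oscillator pitches (A3, bent up or down one
# octave), producing a drifting, unsequenced melody.
values = sample_and_hold(lfo, clock_hz=2.0, duration=4.0)
pitches = [220.0 * 2 ** v for v in values]
```

Swapping in a different modulation source changes the melodic contour without ever touching a sequencer, which is the reduction Scanner describes.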

Seven More Important Steps When Your Track Is Done:

Step 1: Include “disquiet0381” (no spaces or quotation marks) in the name of your track.

Step 2: If your audio-hosting platform allows for tags, be sure to also include the project tag “disquiet0381” (no spaces or quotation marks). If you’re posting on SoundCloud in particular, this is essential to the subsequent location of tracks for the creation of a project playlist.

Step 3: Upload your track. It is helpful but not essential that you use SoundCloud to host your track.

Step 4: Post your track in the following discussion thread at llllllll.co:

https://llllllll.co/t/disquiet-junto-project-0381-shared-system/

Step 5: Annotate your track with a brief explanation of your approach and process.

Step 6: If posting on social media, please consider using the hashtag #disquietjunto so fellow participants are more likely to locate your communication.

Step 7: Then listen to and comment on tracks uploaded by your fellow Disquiet Junto participants.

Additional Details:

Deadline: This project’s deadline is Monday, April 22, 2019, at 11:59pm (that is, just before midnight) wherever you are. It was posted in the morning, California time, on Thursday, April 18, 2019.

Length: The length is up to you.

Title/Tag: When posting your track, please include “disquiet0381” in the title of the track, and where applicable (on SoundCloud, for example) as a tag.

Upload: When participating in this project, post one finished track with the project tag, and be sure to include a description of your process in planning, composing, and recording it. This description is an essential element of the communicative process inherent in the Disquiet Junto. Photos, video, and lists of equipment are always appreciated.

Download: Consider setting your track as downloadable and allowing for attributed remixing (i.e., a Creative Commons license permitting non-commercial sharing with attribution, allowing for derivatives).

For context, when posting the track online, please be sure to include this following information:

More on this 381st weekly Disquiet Junto project — Shared System / The Assignment: make music using a free software synth assembled by Scanner — at:

https://disquiet.com/0381/

More on the Disquiet Junto at:

https://disquiet.com/junto/

Subscribe to project announcements here:

http://tinyletter.com/disquiet-junto/

Project discussion takes place on llllllll.co:

https://llllllll.co/t/disquiet-junto-project-0381-shared-system/

There’s also a Junto Slack. Send your email address to twitter.com/disquiet for Slack inclusion.

Image associated with this project adapted (cropped, colors changed, text added, cut’n’paste) thanks to a Creative Commons license from a photo credited to Ananabanana:

https://flic.kr/p/7KW67U

https://creativecommons.org/licenses/by-nc-sa/2.0/


Why Do We Listen Like We Used to Listen?

Or: When your phone teases an alternate present

This is a screen shot off my mobile phone. What it displays is the active interface for the Google Play Music app. Visible are the cover images from four full-length record albums, all things I’ve listened to recently: the new one from the great experimental guitarist David Torn (Sun of Goldfinger, the first track of which is phenomenal, by the way), an old compilation from the early jam band Santana (for an excellent live cover of Miles Davis and Teo Macero’s proto-ambient “In a Silent Way” – more tumultuous than the original, yet restrained on its own terms), and for unclear reasons not one but two copies of Route One, released last year by Sigur Rós, the Icelandic ambient-rock group.

If you look closely at the little icons on top of those four album covers, you’ll note two that show little right arrows. That’s the digital sigil we’ve all come to understand instinctively as an instruction to hit play. And you’ll note that both copies of Route One are overlaid with three little vertical bars, suggesting the spectrum analysis of a graphic equalizer.

What isn’t clear in this still image is that those little bars are moving up and down – not just suggesting but simulating spectrum analysis, and more importantly telling the listener that the album is playing … or in this case the albums, plural. Except they weren’t. Well, only one was. While I could only hear one copy of the Sigur Rós record, the phone was suggesting I could hear two. Why? I don’t know. I felt it was teasing me – teasing me about why we still listen the way we used to listen, despite all the tools at our disposal.

Now, if any band could have its music heard overlapping, it’s Sigur Rós, since they generally traffic in threadbare sonic atmospherics that feel like what for other acts, such as Radiohead or Holly Herndon or Sonic Youth, might merely be the backdrop. All these musicians have hinted at alternate futures, though in the end what they mostly produce are songs, individual sonic objects that unfold in strictly defined time.

It’s somewhat ironic that Route One is the album my phone mistook as playing two versions simultaneously, since Route One itself originated as an experiment in alternate forms of music-making. It was a generative project the band undertook in 2016, described by the Verge’s Jamieson Cox as follows: “a day-long ‘slow TV’ broadcast that paired a live-streamed journey through the band’s native Iceland with an algorithmically generated remix of their new single ‘Óveður.'” The Route One album I was listening to contains highlights of that overall experience. An alternate version, with the full 24 hours, is on Google Play Music’s rival service, Spotify.

What this odd moment with my phone reminded me was that it’s always disappointing, to me at least, how little we can do – perhaps more to the point, how little we are encouraged and empowered to do – with the music on our phones.

Why don’t our frequent-listening devices, those truly personal computers we have come to call phones, track not only what we listen to but how we listen to it, and then play back on-the-fly medleys built from our favorite moments, alternate versions in collaboration with a machine intelligence?

Why can’t the tools time-stretch and pitch-match and interlace alternate takes of various versions of the same and related songs, so we hear some ersatz-master take of a favorite song, drawn from various sources and quilted to our specifications?

Or why, simply, can’t we listen easily to two things at the same time — add, for example, Brian Eno’s 1985 album Thursday Afternoon, an earlier document of an earlier generative system, to that of Route One? Or just add one copy of Route One to another, as my phone suggested was happening, one in full focus, the other a little hazy and out of sync.

Why aren’t these tools readily available? Why aren’t musicians encouraged to make music with this mode in mind? Why is this not how we listen today? Why do we listen like we used to listen?


Guitar + Synth Learning: Ultomaton Software

This is a quick, initial attempt on my part with a new piece of software called Ultomaton. The name is a play on the word “automaton” because the software employs Conway’s Game of Life, the famed “cellular automaton” simulation, as a source of triggers and other controls, such as volume and place in the stereo spectrum, for all manner of sonic processes. These effects include stutter, backwards audio, looping, and granular synthesis, several of which are heard in this test run.
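The Game-of-Life-as-controller idea is easy to sketch. In this hypothetical Python miniature (not Van Esser’s actual code), each generation follows Conway’s rules, and the cells born in a given generation could be read as triggers for stutter, looping, and the like:

```python
from collections import Counter

def life_step(cells):
    """Advance one generation of Conway's Game of Life.
    `cells` is a set of (x, y) tuples marking live cells."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell lives next generation if it has exactly 3 neighbors,
    # or 2 neighbors and is already alive.
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in cells)}

def triggers(before, after):
    """Cells born this generation -- read them as effect triggers."""
    return after - before

# A "blinker" oscillates with period 2, firing two triggers per step.
gen0 = {(1, 0), (1, 1), (1, 2)}
gen1 = life_step(gen0)
```

Mapping each newborn cell’s coordinates to, say, pan position and volume would give the kind of evolving-but-rule-bound control signals the post describes.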

What I’m playing on electric guitar as the source audio is the first of the 120 right-hand exercises by composer Mauro Giuliani (1781 – 1829). I’ve been working my way through these exercises for the past few weeks, and sometimes experimenting with ways to make practice even more enjoyable, such as prerecording the chords to supply accompaniment. The real-time processing of Ultomaton provides accompaniment as I play, built from the very things I am playing at that moment. The accompanying screenshot shows the Ultomaton controls as they were set for this recording.

The electric guitar went into the laptop, a MacBook Air (2013), via a Scarlett audio interface. Once recorded, the audio was cleaned up in Adobe Audition: volume increased, a bit of reverb added, and a fade-in/fade-out implemented.

Track originally posted at soundcloud.com/disquiet. The Ultomaton software is the work of Benjamin Van Esser, who is based in Brussels, Belgium. The software is free at github.com/benjaminvanesser. More information at benjaminvanesser.be.


Algorithmic Art Assembly

I'll be giving a talk at this two-day event in San Francisco on March 22

My friend Thorsten Sideb0ard is hosting Algorithmic Art Assembly, a new event in San Francisco on March 22nd and 23rd this year, “focused on algorithmic tools and processes.” I’ll be doing a little talk on the 22nd, which is a Friday.

Speakers include: Windy Chien, Jon Leidecker (aka Wobbly), Julia Litman-Cleper, Adam Roberts (Google Magenta), Olivia Jack; Mark Fell (a Q&A), Spacefiller, Elizabeth Wilson, M Eiffler, Adam Florin, Yotam Mann & Sarah Rothberg — and me. Performances include: Kindohm, Algobabez, Renick Bell, Spatial, Digital Selves, Wobbly, Can Ince; Mark Fell, W00dy, TVO, Shatter Pattern, William Fields, Sebastian Camens, Spednar. Here’s a bit more from the website, aaassembly.org:

Algorithmic Art Assembly is a brand new two day conference and music festival, showcasing a diverse range of artists who are using algorithmic tools and processes in their works. From live coding visuals and music at algoraves, to virtual reality, gaming, augmented tooling, generative music composition, or knot tying, this event celebrates artists abusing algorithms for the aesthetics.

Daytime talks will present speakers introducing and demonstrating their art, in an informal and relaxed setting, (very much inspired by Dorkbot).

Each day will feature one workshop in an intimate setting, creating an opportunity for you to learn how to create live coded music using two of the main platforms, SuperCollider and TidalCycles. Workshops are limited in space, with reservation required – details to come.

Evening performances will be heavily based upon the algorave format, in which the dancefloor is accompanied by a look behind the veil, with several artists projecting a livestream of their code on screen. Performers will play energetic sets back to back, with minimal switch-over time.

It was a new year, so I cleaned up my bio a bit. Here’s how it reads currently:

Marc Weidenbaum founded the website Disquiet.com in 1996 at the intersection of sound, art, and technology, and since 2012 has moderated the Disquiet Junto, an active online community of weekly music/sonic projects that explore creative constraints. A former editor of Tower Records’ music magazines, Weidenbaum is the author of the 33 1⁄3 book on Aphex Twin’s classic album Selected Ambient Works Volume II, and has written for Nature, Boing Boing, Pitchfork, Downbeat, NewMusicBox, Art Practical, The Atlantic online, and numerous other periodicals. Weidenbaum’s sonic consultancy has ranged from mobile GPS apps to coffee-shop sound design, comics editing for Red Bull Music Academy, and music supervision for two films (the documentary The Children Next Door, scored by Taylor Deupree, and the science fiction short Youth, scored by Marcus Fischer). Weidenbaum has exhibited sound art at galleries in Dubai, Los Angeles, and Manhattan, as well as at the San Jose Museum of Art, and teaches a course on the role of sound in branding at the Academy of Art University in San Francisco. Weidenbaum has commissioned and curated sound/music projects that have featured original works by Kate Carr, Marielle V Jakobsons, John Kannenberg, Steve Roden, Scanner, Roddy Schrock, Robert Thomas, and Stephen Vitiello, among many others. Raised in New York, Weidenbaum lives in San Francisco.

More on the Algorithmic Art Assembly at aaassembly.org. The event will take place, both days, at Gray Area Foundation for the Arts grayarea.org.


Live Coding the 100th Ambient Performances Video

A hand-typed drone sequence from musician Charlie Kramer

This video, a five-plus-minute exploration of pinging drones by musician Charlie Kramer, marks the 100th video in the ongoing playlist I’ve been maintaining of live performances of ambient music. The entry is both a milestone and a deviation, more on which in a moment.

First, a bit about the playlist itself. It began in April of 2016 as “A YouTube Playlist of Ambient Performances,” front-loaded with a handful of pieces by such musicians as Andreas Tilliander, Christina Vantzou, Ryuichi Sakamoto, Nils Frahm (as a member of Nonkeen), and Jon Hassell. At the time I started it, I listed the following rules for its existence:

This “Ambient Performances” set is a playlist-in-progress of live performance videos on YouTube of ambient music by a wide variety of musicians using a wide variety of equipment.

Rule #1: I’m only including recordings I’d listen to without video.

Rule #2: I’m only including recordings where the video gives some sense of a correlation between what the musician is doing and what the listener is hearing.

Rule #3: By and large, the new additions to the playlist will simply be, reverse-chronologically, the most recent tracks added, but I’ll be careful to front-load a few choice items at the beginning.

Those rules summarized the filters that led to video selection, but they didn’t touch on the reasoning behind the playlist, nor did the initial post announcing the playlist’s existence. The underlying reasons included, certainly, curiosity on my part about how such music was made, and in particular about the creative tension at work in which effort was required to make music that seemed, by its categorical nature, to eschew the notion of effort — ambient music, that is.

But there was another reason, which was simply that the majority of videos featuring technology I found interesting (tutorials, live sets, peeks inside people’s studios, behind-the-scenes footage) had music I couldn’t stand listening to. This playlist of mine was an attempt to focus on the rare material that satisfied my ears, my eyes, and my imagination.

One hundred videos later, something had been surfacing in my thoughts, which was that while the videos all adhered to the initial rules, they had also come to focus often on mechanisms, along with video production, that were as beautiful as the music itself — synthesizers on fields and beaches, keyboards amid flowers and carefully placed objects. It’s no surprise that musicians who can achieve a certain aesthetic in the sonic realm might also be capable of carrying it over to the visual realm. However, I had come to wonder if I’d fallen for beauty, and if visual beauty had become something of a magnet rather than a mere byproduct of what I was after.

In any case, it was with that in mind that I began to actively pursue less visually compelling videos that still satisfied the rules that launched the playlist, and in the process I came to narrow and lightly edit the rules, since the third one only really applied at launch, yielding this amended list, which still applies to all the videos added to date:

This “Ambient Performances” set is an ongoing playlist-in-progress of live performance videos on YouTube of ambient music by a wide variety of musicians using a wide variety of equipment. There are two rules for it:

Rule #1: I’m only including recordings I’d listen to without video.

Rule #2: I’m only including recordings where the video gives some meaningful sense of a correlation between what the musician is doing and what the listener is hearing.

Note: The list appears in reverse-chronological order, which means that the video listed as #1 is the most recent. When a new video is added, the current #1 becomes #2.

Which brings us to Charlie Kramer’s piece. While all previous videos in this playlist involved physical equipment, with an emphasis on modular synthesizer, Kramer’s recording is a document of live coding — of computer programming as performance practice. The only instrument is his computer, seen here in footage of his screen. What he is doing throughout the piece is manipulating computer code in real time. As with the previous videos in this playlist, there is a direct, informative correlation between what Kramer is doing on screen — we don’t see his hands, but we see keystrokes being entered, and a mouse moving around — and what our ears are taking in. When he fixes some indents, as he does around 1:03 in the video, there is no commensurate change in sound. However, when, later, some integers are changed, we hear variations on what was sounding out previously.

As Kramer explains in the accompanying note, this piece is composed — is coded — in the language ChucK. Each time he hits the Add Shred button at the top of the window in which the ChucK code appears, the current instance of that code begins to be executed: new variables and new commands bringing to life new musical directions. When Kramer does so, a giant green plus sign appears briefly on the screen. That giant green plus is a perfect depiction of the connection between precise action and subtle sound that this playlist was intended to explore.
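For readers unfamiliar with ChucK, a shred is roughly an independent, precisely timed process, and ChucK’s scheduler (the “shreduler”) interleaves however many shreds are running at once. The concurrency idea — not ChucK itself, and not Kramer’s code — can be loosely mimicked in Python with generators advanced by a virtual clock:

```python
import heapq

def drone(name, period, events):
    """A toy 'shred': logs an event, then yields how long to wait
    before its next one."""
    while True:
        events.append(name)
        yield period

def run(shreds, until):
    """Advance virtual time, waking each shred when its wait elapses --
    roughly how a shreduler interleaves concurrent processes."""
    queue = [(0.0, i, s) for i, s in enumerate(shreds)]
    heapq.heapify(queue)
    while queue and queue[0][0] < until:
        now, i, s = heapq.heappop(queue)
        wait = next(s)  # let the shred act, learn its next wake time
        heapq.heappush(queue, (now + wait, i, s))

events = []
# "Add Shred" twice: two drones running concurrently at different rates.
run([drone("low", 2.0, events), drone("high", 3.0, events)], until=6.0)
```

Adding a third drone mid-run would be the analogue of hitting Add Shred again: the new process simply joins the queue and the existing ones keep sounding.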

Kramer’s track was recorded as part of the most recent weekly music compositional prompt project in the ongoing Disquiet Junto series. Kramer, who also goes by NorthWoods, posted the video and the code, along with some background on the piece, to the llllllll.co message board, where it’s still available for perusal.

The video is hosted at Kramer’s YouTube channel.
