
The Virtue of Virtual Cables

Andrew Belt talked about the VCV Rack software at Stanford on July 3.

Over the past two years, a remarkable piece of free software has helped make modular synthesis widely available. The software is called Rack, from the company VCV, which, like many small software firms, is essentially a single person serving and benefiting from the efforts of a far-flung constellation of developers. Andrew Belt, who develops VCV Rack, visited the San Francisco Bay Area this past week from Tennessee, where he lives and works, to give talks and demonstrations. I caught his presentation at Stanford University’s CCRMA department this past Wednesday, July 3. It was a great evening.

Belt spoke for an hour, starting at around 5:30pm, about the origins and development of VCV Rack: how it began as a command-line effort, and how he then went back to a blank slate and started on a GUI, or graphical user interface, approach. That GUI is arguably what makes VCV Rack so popular. Rack provides emulations of synthesizer modules that look just like actual physical modules, including virtual cables you drag across the screen, much as you’d connect an oscillator and a filter in the physical world. The occasion of his visit was the release of version 1.0 of VCV Rack, following an extended beta honeymoon. He covered a lot of material during the talk and subsequent Q&A, so I’m just going to summarize a few key points here:

He talked about the “open core” business-model approach, in which the Rack software is free and open source, and third parties (and VCV itself) then sell new modules on top of it. (This is a bit like a “freemium” model, the difference being that the foundation here is open source.)

Belt went through various upcoming modules, including a “timeline” one, a “prototype” one, a “video-synthesis” one, a DAW-style “piano roll,” and a bitcrusher that emulates super low-grade MP3 encoding. He didn’t mention which existing synthesizer module companies are due to port their modules over to Rack, and no one asked, likely because, this being CCRMA, the conversation went much deeper into the DSP (digital signal processing) weeds — which was great, even if 90% of that material was way over my head. He showed tons of examples, including how the new polyphony (up to 16 voices) works.
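For a rough sense of what a bitcrusher does, here is a minimal Python sketch. To be clear, this is not Belt’s module or its algorithm, just the generic recipe of quantizing bit depth and holding samples to fake a lower sample rate; the low-grade-MP3 artifacts his module emulates are presumably more involved.

import numpy as np

def bitcrush(signal, bit_depth=8, downsample_factor=4):
    """Degrade a float signal in the range -1..1."""
    levels = 2 ** bit_depth
    # Quantize amplitude to the reduced bit depth.
    crushed = np.round(signal * (levels / 2)) / (levels / 2)
    # Crude sample-rate reduction: hold every Nth sample.
    held = np.repeat(crushed[::downsample_factor], downsample_factor)
    return held[: len(signal)]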

There was a great moment midway through the talk. Belt was discussing the use of FM synthesis in Rack, and he asked if anyone in the audience could remind him who had first developed FM synthesis. One of the senior CCRMA professors chimed in and explained that we were all in this room precisely because of FM synthesis: CCRMA was funded for many years thanks to profits from the patent on FM synthesis, which was developed by Stanford professor John Chowning. FM synthesis was what made the Yamaha DX7 synthesizer a massive success in the 1980s. For many years, Chowning’s FM synthesis patent was, reportedly, the single most profitable patent in Stanford’s history. After drinking in the impromptu history lesson, Belt pulled up a DX7 emulation in Rack. Someone in the audience noted how things come full circle.
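Since Chowning’s technique came up, here is the core of two-operator FM in a short Python sketch: the modulator’s output bends the carrier’s phase. The parameter values are arbitrary placeholders of mine, not anything from Belt’s talk or the DX7.

import numpy as np

SAMPLE_RATE = 48000

def fm_tone(carrier_hz=440.0, modulator_hz=220.0, index=3.0, seconds=1.0):
    """y(t) = sin(2*pi*fc*t + I*sin(2*pi*fm*t)), Chowning's basic two-operator form."""
    t = np.arange(int(SAMPLE_RATE * seconds)) / SAMPLE_RATE
    return np.sin(2 * np.pi * carrier_hz * t
                  + index * np.sin(2 * np.pi * modulator_hz * t))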

I highly recommend giving VCV Rack a try. It’s available at vcvrack.com.

This is lightly adapted from the July 7, 2019, issue of This Week in Sound, the free weekly Disquiet.com email newsletter.


Speaking Privately to the Algorithm

What happens when we assume that always stating our opinion is in anyone's best interest?

I spend a good amount of time watching YouTube videos by musicians. Not just of them, but generally by them: studio-journal videos that musicians make to show how they work. Not just recordings of their music, but videos of the process, of the effort, required.

And I marvel at (which is to say, more directly: am dismayed by) instances when a positively received video on YouTube also receives a small handful of dislikes. By this I specifically mean mute negative gestures, devoid of any comment, just a downward-facing thumb. Say what you will about haters, at least when they comment they leave some fingerprint on their dissenting opinion. There’s a uniquely buzz-killing pall cast by the unqualified, unidentified, anonymous thumbs down.

Certainly everything will have its detractors, but I wonder if something else might be going on here. (By “positively received” I don’t mean the given video has racked up hundreds of thousands of views. I just mean maybe a couple dozen accounts have given it a thumbs up, and the video is innocuous, not to say inconsequential, just a musician doing their thing.)

I wonder if the fix is for the YouTube interface to provide an opportunity for the watcher/user to say, privately to the algorithm, “I’m not interested in this.” That suggestion is in contrast to what YouTube currently requires: that you register your disinterest publicly.

Right now it’s like the waiter asks how your meal was, and your only option is to stand up and announce it to your fellow diners. And the issue may not be that you didn’t like the food. The issue may be that it just wasn’t your sort of food, or that you would have liked it for lunch but it didn’t satisfy your dinner appetite.

As I’ve thought about this user-interface conundrum, I’ve become entranced by the concept of speaking “privately to the algorithm.” Perhaps that should be capitalized: “I’m speaking, privately, to the Algorithm.”

In that formulation, it’s like a confession, not a religious confession toward addressing your personal spiritual and all-too-human shortcomings, but a confession in the hopes of tailoring your reality. That is, toward addressing the shortcomings you perceive in (digital) reality.

And this is where the constant request for feedback can have (big surprise) unintended consequences. The tools have trained us to let them know what we think, because it’s in our best interest. But is it in anyone else’s interest that you found the given musician’s music uninteresting? While making your world better, have you yucked someone else’s yum? What is the good in that? What does it mean when acting to address the shortcomings you perceive in your digital reality has the direct effect (not merely a side effect, but a direct and immediate one) of negatively impacting the digital reality of other people?

Note the following three different scenarios on YouTube and how the user’s feedback is constrained, even directed, by the interface.

Below is a screenshot of the egregious situation I’m describing. If you’re on the page for a video, your only options are to ignore it, comment on it, or give it a thumbs up or thumbs down (and, of course, to “Report” it, but that’s a different situation entirely):

Contrast that with the option you have for videos that YouTube serves up to your account based on what you’ve viewed before. Note that here, there is a plainly stated means to say “Not interested”:

And note that this isn’t merely a matter of whether you arrive at the video through your own actions or through the recommendations of YouTube. For example, if you subscribe to channels on YouTube, you can still, from the Subscriptions page, elect to Hide something:

Now, perhaps if you select “Hide” that is all that happens. Perhaps it just takes the video out of view. Perhaps YouTube doesn’t register your action as a means to adjust how its algorithm triangulates your viewing taste. But that seems unlikely, doesn’t it? We use these interfaces today with the impression that they will inform our future use of a given tool. Which is why when faced with no “Not interested” or “Hide” equivalent on a page, the user is, if not justified in registering their disinterest, forgiven a little for registering their dissatisfaction.

The issue is that the user’s dissatisfaction isn’t necessarily with the video. It is, indirectly and yet significantly, with YouTube.


Disquiet Junto Project 0381: Shared System

The Assignment: make music using a free software synth assembled by Scanner.

Each Thursday in the Disquiet Junto group, a new compositional challenge is set before the group’s members, who then have just over four days to upload a track in response to the assignment. Membership in the Junto is open: just join and participate. (A SoundCloud account is helpful but not required.) There’s no pressure to do every project. It’s weekly so that you know it’s there, every Thursday through Monday, when you have the time.

Deadline: This project’s deadline is Monday, April 22, 2019, at 11:59pm (that is, just before midnight) wherever you are. It was posted in the morning, California time, on Thursday, April 18, 2019.

Tracks will be added to the playlist for the duration of the project.

These are the instructions that went out to the group’s email list (at tinyletter.com/disquiet-junto):

Disquiet Junto Project 0381: Shared System
The Assignment: make music using a free software synth assembled by Scanner.

Step 1: This week’s project involves all participants working with the same instrument. The instrument is a specific set of modules in a free software synthesizer. The software is Reaktor Blocks Base. It comes as part of the free Reaktor Komplete Start, which is available here:

https://www.native-instruments.com/en/products/komplete/bundles/komplete-start/

(Certainly if you’d prefer to emulate this week’s “shared system” using VCV Rack, or another piece of software, or your own hardware modules, that is totally fine.)

Step 2: The musician Scanner (aka Robin Rimbaud) graciously agreed to create a shared system based on Reaktor Blocks Base. It consists of these modules:

Bento Box Osc
Bento Box SVF
Bento Box VCA
Bento Box Mix
Bento Box Env
Bento Box LFO
Bento Box S&H
Bento Box 4 Mods

Here is some background, in Scanner’s own words, on his thought process in the development of this system: “I think it would be interesting to present a limited package of blocks they can use, and to not use a traditional sequencer. Instead, people would consider how an LFO or modulation can move a sound or series of sounds around. (In some sense, this is a more West Coast than East Coast approach.) I’m concerned if we include the sequencer then it would suggest lots of decent pattern-oriented music wrapped around a similar theme or approach. This idea of such reductionism is basically about avoiding the obvious in these encounters and leaving the creator to think a little more than they might have to otherwise. It could perhaps be reduced further, but it’s enough to get people shaping sounds and creating shapes. Any less and it could potentially be too limiting and uninspiring. It’s truly a Bento Box Delight. I presume there’s a modest learning curve for some users, but there’s a guide that seems very clear on the NI website.”
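Purely as an illustration of that “modulation instead of a sequencer” idea, and not part of the project instructions themselves, here is a tiny Python sketch in which a slow LFO, rather than a note sequence, moves an oscillator’s pitch around:

import numpy as np

SAMPLE_RATE = 48000

def lfo_swept_tone(base_hz=110.0, lfo_hz=0.2, depth_hz=60.0, seconds=10.0):
    """A slow sine LFO sweeps the oscillator's frequency; no sequencer involved."""
    t = np.arange(int(SAMPLE_RATE * seconds)) / SAMPLE_RATE
    freq = base_hz + depth_hz * np.sin(2 * np.pi * lfo_hz * t)
    phase = 2 * np.pi * np.cumsum(freq) / SAMPLE_RATE  # integrate frequency to get phase
    return np.sin(phase)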

Step 3: Create a piece of music using only the modules (one of each) as described in Step 2 above.

Seven More Important Steps When Your Track Is Done:

Step 1: Include “disquiet0381” (no spaces or quotation marks) in the name of your track.

Step 2: If your audio-hosting platform allows for tags, be sure to also include the project tag “disquiet0381” (no spaces or quotation marks). If you’re posting on SoundCloud in particular, this is essential to the subsequent location of tracks for the creation of a project playlist.

Step 3: Upload your track. It is helpful but not essential that you use SoundCloud to host your track.

Step 4: Post your track in the following discussion thread at llllllll.co:

https://llllllll.co/t/disquiet-junto-project-0381-shared-system/

Step 5: Annotate your track with a brief explanation of your approach and process.

Step 6: If posting on social media, please consider using the hashtag #disquietjunto so fellow participants are more likely to locate your communication.

Step 7: Then listen to and comment on tracks uploaded by your fellow Disquiet Junto participants.

Additional Details:

Deadline: This project’s deadline is Monday, April 22, 2019, at 11:59pm (that is, just before midnight) wherever you are. It was posted in the morning, California time, on Thursday, April 18, 2019.

Length: The length is up to you.

Title/Tag: When posting your track, please include “disquiet0381” in the title of the track, and where applicable (on SoundCloud, for example) as a tag.

Upload: When participating in this project, post one finished track with the project tag, and be sure to include a description of your process in planning, composing, and recording it. This description is an essential element of the communicative process inherent in the Disquiet Junto. Photos, video, and lists of equipment are always appreciated.

Download: Consider setting your track as downloadable and allowing for attributed remixing (i.e., a Creative Commons license permitting non-commercial sharing with attribution, allowing for derivatives).

For context, when posting the track online, please be sure to include this following information:

More on this 381st weekly Disquiet Junto project — Shared System / The Assignment: make music using a free software synth assembled by Scanner — at:

https://disquiet.com/0381/

More on the Disquiet Junto at:

https://disquiet.com/junto/

Subscribe to project announcements here:

http://tinyletter.com/disquiet-junto/

Project discussion takes place on llllllll.co:

https://llllllll.co/t/disquiet-junto-project-0381-shared-system/

There’s also a Junto Slack. Send your email address to twitter.com/disquiet for Slack inclusion.

The image associated with this project was adapted (cropped, colors changed, text added, cut’n’paste), thanks to a Creative Commons license, from a photo credited to Ananabanana:

https://flic.kr/p/7KW67U

https://creativecommons.org/licenses/by-nc-sa/2.0/


Why Do We Listen Like We Used to Listen?

Or: When your phone teases an alternate present

This is a screenshot from my mobile phone. What it displays is the active interface of the Google Play Music app. Visible are the cover images from four full-length record albums, all things I’ve listened to recently: the new one from the great experimental guitarist David Torn (Sun of Goldfinger, the first track of which is phenomenal, by the way), an old compilation from the early jam band Santana (for an excellent live cover of Miles Davis and Teo Macero’s proto-ambient “In a Silent Way” – more tumultuous than the original, yet restrained on its own terms), and, for unclear reasons, not one but two copies of Route One, released last year by Sigur Rós, the Icelandic ambient-rock group.

If you look closely at the little icons on top of those four album covers, you’ll note two that show little right arrows. That’s the digital sigil we’ve all come to understand instinctively as an instruction to hit play. And you’ll note that both copies of Route One are overlaid with three little vertical bars, suggesting the spectrum analysis of a graphic equalizer.

What isn’t clear in this still image is that those little bars are moving up and down – not just suggesting but simulating spectrum analysis and, more importantly, telling the listener that the album is playing … or in this case the albums, plural. Except they weren’t. Well, only one was. While I could only hear one copy of the Sigur Rós record, the phone was suggesting I could hear two. Why? I don’t know. I felt it was teasing me – teasing me about why we still listen the way we used to listen, despite all the tools at our disposal.

Now, if any band could have its music heard overlapping, it’s Sigur Rós, since they generally traffic in threadbare sonic atmospherics that feel like what for other acts, such as Radiohead or Holly Herndon or Sonic Youth, might merely be the backdrop. All these musicians have hinted at alternate futures, though in the end what they mostly produce are songs, individual sonic objects that unfold in strictly defined time.

It’s somewhat ironic that Route One is the album my phone mistook as playing two versions simultaneously, since Route One itself originated as an experiment in alternate forms of music-making. It was a generative project the band undertook in 2016, described by the Verge’s Jamieson Cox as follows: “a day-long ‘slow TV’ broadcast that paired a live-streamed journey through the band’s native Iceland with an algorithmically generated remix of their new single ‘Óveður.'” The Route One album I was listening to contains highlights of that overall experience. An alternate version, with the full 24 hours, is on Google Play Music’s rival service, Spotify.

What this odd moment with my phone reminded me was that it’s always disappointing, to me at least, how little we can do – perhaps more to the point, how little we are encouraged and empowered to do – with the music on our phones.

Why don’t our frequent-listening devices, those truly personal computers we have come to call phones, track not only what we listen to but how we listen to it, and then play back on-the-fly medleys built from our favorite moments, alternate versions created in collaboration with a machine intelligence?

Why can’t the tools time-stretch and pitch-match and interlace alternate takes of various versions of the same and related songs, so we hear some ersatz-master take of a favorite song, drawn from various sources and quilted to our specifications?

Or why, simply, can’t we listen easily to two things at the same time — adding, for example, Brian Eno’s 1985 album Thursday Afternoon, an earlier document of an earlier generative system, to Route One? Or just adding one copy of Route One to another, as my phone suggested was happening, one in full focus, the other a little hazy and out of sync?

Why aren’t these tools readily available? Why aren’t musicians encouraged to make music with this mode in mind? Why is this not how we listen today? Why do we listen like we used to listen?


Guitar + Synth Learning: Ultomaton Software

This is a quick, initial attempt on my part with a new piece of software called Ultomaton. The name is a play on the word “automaton,” because the software employs Conway’s Game of Life, the famed cellular automaton, as a source of triggers and other controls, such as volume and placement in the stereo spectrum, for all manner of sonic processes. These effects include stutter, backwards audio, looping, and granular synthesis, several of which are heard in this test run.
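For anyone unfamiliar with the Game of Life, here is a minimal Python sketch of a single generation on a wrapping grid. How Van Esser actually maps cells to triggers in Ultomaton is his own design; the “births as a trigger count” line at the end is just my hypothetical illustration of one way such a mapping could work.

import numpy as np

def life_step(grid):
    """One generation of Conway's Game of Life on a wrapping (toroidal) grid of 0s and 1s."""
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # A cell is alive next step if it has exactly 3 neighbors,
    # or if it is alive now and has exactly 2 neighbors.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

grid = (np.random.random((16, 16)) < 0.3).astype(int)
next_grid = life_step(grid)
births = int(((next_grid == 1) & (grid == 0)).sum())  # hypothetical: use as a trigger count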

What I’m playing on electric guitar as the source audio is the first of the 120 right-hand exercises by composer Mauro Giuliani (1781 – 1829). I’ve been working my way through these exercises for the past few weeks, and sometimes experimenting with ways to make practice even more enjoyable, such as prerecording the chords to supply accompaniment. The real-time processing of Ultomaton provides accompaniment as I play, built from the very things I am playing at that moment. The accompanying screenshot shows the Ultomaton controls as they were set for this recording.

The electric guitar went into the laptop, a MacBook Air (2013), via a Scarlett audio interface. After recording, the audio was cleaned up in Adobe Audition: volume increased, a bit of reverb added, and fade-in/fade-out implemented.

Track originally posted at soundcloud.com/disquiet. The Ultomaton software is the work of Benjamin Van Esser, who is based in Brussels, Belgium. The software is free at github.com/benjaminvanesser. More information at benjaminvanesser.be.
