New Disquietude podcast episode: music by Lesley Flanigan, Dave Seidel, KMRU, Celia Hollander, and John Hooper; interview with Flanigan; commentary; short essay on reading waveforms.

Listening to art. Playing with audio. Sounding out technology. Composing in code. Rewinding the soundscape.

tag: software

Kenneth Goldsmith by way of PDQ Bach, and More

An early (April 2020) pandemic livestream review I wrote for The Wire

Happy Valley Band + Erin Demastes + Repairer of Reputations
Various locations/Twitch

The valley in the name of Happy Valley Band is for Silicon Valley. The happy today, April 18, 2020, is nominal, due to widespread Covid-induced seclusion. Happy Valley Band, an AI-arbitrated experiment, is headlining a livestream that features noises from Erin Demastes and synth flashbacks from Ryan Page, the latter performing as Repairer of Reputations. The livestream phenomenon, like the coronavirus itself, is still novel.

The concert, held by the experimental promoter Indexical, occurs on Twitch, a platform for watching other people play video games. The Twitch website is correspondingly colorful and antic. For those less engaged in gamer culture, it can also be confusing. Like a waiter missing the hint that you have no interest beyond the club’s minimum drink requirement, Twitch often pesters you about ways to level up, mystifyingly so.

Demastes’ opening set is brightly illuminated, and otherwise a stark contrast to the manic framing of Twitch’s interface. On screen, color fields shift slightly and meaningfully. She is patiently engaged in microsound, in closely miking textiles and other materials. Her audio is at first quiet, so much so that latecomers keep entering the Twitch chat room to ask if the sound is even on. It is. (One good thing about Twitch concerts is that musicians and audience can silence crowd chatter with a click.)

As the volume rises, more sounds are heard as she probes and amplifies things seen through a microscope. These are as curiosity-invoking as they are abrasive. An after-show interview sheds additional light. Demastes lists her tools. These include beads, Styrofoam, and corkboard (that “gross brown stuff,” she reminds us), as well as a Slinky, a lobster fork, and a doorstop.

Happy Valley Band go second. Like the audience, the group’s members have assembled, far and wide, from the comfort of private spaces. They appear in the all too familiar virtual-conference grid of torsos. David Kant, the band’s leader, sets a self-mocking tone: “We’re going to be here for the next … too long, destroying your favorite songs.” What Happy Valley do is play music as heard through artificial intelligence. The musicians — including Kant on tenor sax, Mustafa Walker on bass, Alexander Dupuis on guitar, and Pauline Kim on violin, among too many members to list here — play notation produced by software that listens to pop classics and spits out what the algorithms observe. The Happy Valley Band are Kenneth Goldsmith by way of PDQ Bach: cultural plundering in the service of joking forensic dismemberment. They churn through hits like Phil Collins’ “In the Air Tonight” and James Brown’s “It’s a Man’s Man’s Man’s World.” Much as synthesizers have an easier time inferring pitch from woodwinds than from multi-timbral instruments, the barebones nature of Patsy Cline’s “Crazy” yields the least frantic results of the show: the chords are anything but standard, but do leave space for the ear to focus on individual elements. The bombast of Bruce Springsteen’s “Born to Run,” however, yields frenzied mush.

Like Demastes, Page performs work where visuals and sounds are inseparable. Throughout his set, the screen fills with ancient cathode-ray images, snatches of what seems like a VHS tape of a forgotten Roger Corman horror flick. The occasional narration reads like the script to a text adventure (“You open the door. … As you enter, you are sure this is your house”). The eeriest thing, nonetheless, is just how period-perfect are the synth-score cues that Page plays to accompany the footage.

There’s some additional context in a post I made when I first announced the article’s publication (“This is the first freelance concert review I have ever written on the same device on which I witnessed the concert”).

This article I wrote originally appeared in the July 2020 issue (number 437) of The Wire.


Toward Bandcamp Playlists

Via a third party tool called BNDCMPR

This is pretty nifty. You can make Bandcamp playlists drawing from multiple accounts with a third-party tool called BNDCMPR. I made a simple test pilot playlist just to give it a go. No, I’m not sure how the playlist function aligns with limits on unpaid plays. (The webapp’s developer, Lon Beshiri, replied on Twitter: “So there are no play limits yet, but it is something I’ve been going back and forth on. I’m still ultimately in the camp in that if someone is going to purchase music they’re going to regardless of play restrictions.”) Thanks, Nate Trier, for having introduced me to this.


The Code Is the Thing

My review of the February 2021 No Bounds Festival for The Wire

Chiho Oka + Kindohm + AFALFL
No Bounds Festival, Sheffield UK/YouTube

Kindohm is typing in a room different from the one I am in now. His screen is superimposed on my screen. Video of him typing is superimposed on what he himself types: lines of computer code in nested columns. These dual-layered images he projects are color-reversed, leaving his skin dark, with a sickly blue tint. His beard, a resulting white fuzz, gives the illusion that he’s twice his actual age.

The music is euphorically broken. Kindohm’s beats — and this is almost entirely beats, not so much absent a vocalist as manifestly dissenting from such decoration — stagger and strut, rev up and evaporate, pounce and recoil. They promise a downbeat, then slyly renege on the fundamental club music social contract.

Based in Minneapolis, Minnesota, Kindohm (government name: Mike Hodnick) is participating in a mid-February 2021 livecoding livestream, under the Alpaca Sessions banner, part of the week-long No Bounds Festival, out of Sheffield, England. A trio of algorave performances constitutes today’s 90-minute show. It’s hosted by Alex McLean, who helped coin the term algorave and created one of its leading languages, TidalCycles. We’re all used to musicians using laptops on stage, but what’s different in algorave is that those musicians aren’t running programs; they’re programming the music in real time. Like Kindohm, they might employ external gear for support (today he expends more effort on his Midi Fighter Twister than on his laptop), but the code is the thing.
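For readers who have never seen livecoding, a TidalCycles pattern is terse enough to be typed and revised mid-set. This is a generic, illustrative sketch of the idiom, not code from Kindohm’s performance; it assumes a running TidalCycles/SuperDirt setup:

```haskell
-- One cycle of kick and snare, with a doubled kick;
-- every fourth cycle the pattern plays in reverse,
-- and the samples are pitched up via playback speed.
d1 $ every 4 rev $ sound "bd sn [bd bd] sn" # speed 1.5

-- Stopping the pattern is itself a live edit:
d1 silence
```

Each line is evaluated live, so the music changes the instant the performer re-runs an edited expression — which is why a promised downbeat can be reneged on in real time.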

This No Bounds event also features both Chiho Oka (Tokyo, Japan) and AFALFL (Paris, France). Due to the pandemic, we’re all — audience, performers, and host alike — in our disparate locations. (Olivia Jack, who created the Hydra visual coding platform, even pops up in the chat window.) Yes, livestreams became widely familiar in 2020, but there’s something quite digitally native about a livecoding stream. Had algorave not already existed, Covid-19 would certainly have engendered this cultural variant.

Up first comes Oka, who is from the future, literally. While it’s still 13 February in Sheffield, the file name on her screen reveals it’s already Valentine’s Day where she is. Of the event’s three sets, Oka’s proves the most choreographed. Kindohm might adjust code and tweak equipment settings, but Oka presents something that’s deeply Rhizome-atic: a carefully honed breed of digital performance art. She jams at one point on nothing but her MacBook’s alert presets. At another, folders move under the guidance of a massive cursor, producing a sound-effects medley. And all along she’s present: a tiny figure in a red hoodie, as if her own mascot.

Closing the event is AFALFL (born Mamady Diarra), the one performer today hiding entirely from view. As white noise surfs left and right and back, he adjusts scripts onscreen in the “dark mode” color scheme familiar to software engineers around the globe. For AFALFL, however, dark mode is a full-on sonic aesthetic. The music is murky and chaotic, not just in how it noisily veers, but in how its components vary and jar, the sole constants being a kick drum and an error beep.

Language within AFALFL’s code lends context: both obvious terms, like legato and speed, and seemingly project-specific ones, like 808bd, striate, and superimpose. It’s all there, naked for the audience to see, but true to the word “code,” what’s unfolding isn’t necessarily self-explanatory.


This article I wrote originally appeared in the April 2021 issue (number 446) of The Wire. Director’s cut alert: I reinserted a clause that had been deleted for space from the printed version. The concert is archived on YouTube.


The Refraction Context

When open source means open-ended melodies

Tired: album liner notes.

Wired: a link to the GitHub repository where the open-source software used to record the music is housed.

Case in point: Ambalek’s track “Lofi Snowflakes,” a sedate sequence of tones whose pace seems static yet varies regularly throughout. The melody alters just enough to feel of a piece, but in fact it shifts continuously, effortlessly, notes occasionally warped in a manner that echoes the open-ended refraction context. The script, titled Raindrops, was written for the Norns, a device from Monome.

Track originally posted at GitHub.


This Week (or So) in Vocal Deepfakes:

Lightly annotated

  1. The director of a documentary film uses an AI engine so that his celebrated, deceased subject can speak from beyond the grave:

  2. A musician creates a business built around deepfake technology, letting other musicians engage with her voice:

  3. Bedroom producers make “fan fiction” songs featuring the AI-engineered voices of actual stars:

  4. Synthetic voices belatedly catch up with CGI, and all-digital animation may be in our near future:

Initial vaguely related thoughts:

  • All bands start as cover bands.

  • There’s a whole culture of nightclub performers, cover bands, and actors having careers (or partial careers) being other people.

  • There’s an uncanny valley between John Fogerty being sued for sounding like himself and the verdict against Robin Thicke and Pharrell Williams in the “Blurred Lines” case.

  • A lot of the voices of fictional robots and androids in film and television are the voices of humans (see: 2001: A Space Odyssey, WarGames, Max Headroom, Colossus: The Forbin Project, and so on).

  • The future is especially meaningful when viewed through the lens of the past.
