
Ambient at the Grey Lady

Charting the word "ambient" over time in the New York Times' Chronicle app

The paper of record has a feature called Chronicle that allows you to experiment with “Visualizing language usage in New York Times news coverage throughout its history,” as the service describes itself. You can compare the frequency of multiple words, or chart just one. Apparently “ambient” is on a roll:

[Image: New York Times Chronicle chart of “ambient” over time]

I’m not sure that there’s actually been a downturn in the past year. A lot of words I checked tapered off at the end, making me wonder whether the chart adjusts for 2014 being barely half over.

The service is similar to, if more elegant than, Google’s Ngram Viewer, where “ambient” charts about the same, aside from the downturn:

[Image: Google Ngram Viewer chart of “ambient”]

Try it out at chronicle.nytlabs.com. Here’s the announcement article from yesterday. If you come upon any interesting data, let me know. (Thanks to Ian Lewis Gordon for the tip.)


Tangents: Data Immersion, the Tuning of the Internet, Superloops, …

Plus: the emotional key of books, physical computer drums, quantum computer sounds, steampunk modular, and more

Tangents is an occasional collection of short, lightly annotated mentions of sound-related activities.

Data Immersion: Characteristically breathtaking video of a new work by Ryoji Ikeda, perhaps the leading installation poet of data immersion. The video is of his piece “supersymmetry,” which relates to his residency at CERN, the particle-physics laboratory that houses the Large Hadron Collider. More at supersymmetry.ycam.jp:

In an interview he talks about the dark-matter research that informed his effort:

“Supersymmetry is being considered as a possible solution of the mystery of this dark matter. During the period I’m staying at CERN, there are experiments being carried out with the aim to prove the existence of as-yet undiscovered ‘supersymmetry particles’ that form pairs with the particles that make up the so-called ‘Standard Model’ catalogue of physical substances. Data and technologies of these experiments are not directly incorporated in the work, but I’m going to discuss a variety of things with the physicists at CERN, and the results of these discussions will certainly be reflected.”

Tones of the Internet: The tonal repository of the Internet is very different from the room tone of the Internet, which we explored in a recent Disquiet Junto project. Over at wired.com, Joseph Flaherty profiles Zach Lieberman, with an emphasis on his Play the World project, which scours the Internet for sounds — the music heard on radio stations — and then allows them to be played back. “Using the set-up,” Flaherty writes, “a person can literally turn the internet into a musical instrument.” What makes that sentence more than hyperbole is that the source audio is played at the note triggered by the user, though it’s by no means “the Internet” being played but rather a fairly well-circumscribed and specific subset of the Internet. (The effort brings to mind the title of R. Murray Schafer’s classic book of sound studies, The Tuning of the World.) It’s part of DevArt, a Google digital art endeavor that has nothing to do with Deviant Art, the longstanding web forum for (largely) visual artists, or with Devart, the database software company. “Play the World, and several other DevArt projects,” reports Flaherty, “will make their debut at the Barbican Gallery of Art in London in July, but the code is available on Github today.” There’s something intriguing about an art premiere that is preceded by the materials’ worldwide open-source availability. Here’s audio of the note A being played for 20 minutes based on a wide array of these sound sources. It appears to be from Lieberman’s own SoundCloud account, which oddly has only 15 followers as of this writing. Well, 16, because I just joined up:
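To give a rough sense of what “played at the note triggered by the user” might involve under the hood — this is a guess at the general technique, not Lieberman’s actual code, and the function name, frequencies, and parameters are invented for the example — here is a minimal Python sketch that retunes a recording to a target pitch by naive resampling:

```python
# A hypothetical sketch (not Play the World's actual code) of retuning audio
# to a target note by naive resampling, which shifts pitch but also changes
# duration; real instruments typically use fancier time-stretching.
import numpy as np

def retune(samples, detected_hz, target_hz):
    """Resample `samples` so a tone at detected_hz sounds at target_hz."""
    ratio = target_hz / detected_hz            # >1 means play back "faster," i.e. higher
    positions = np.arange(0, len(samples) - 1, ratio)
    return np.interp(positions, np.arange(len(samples)), samples)

if __name__ == "__main__":
    sr = 22050
    t = np.linspace(0, 1, sr, endpoint=False)
    tone = np.sin(2 * np.pi * 392.0 * t)       # a G, standing in for found radio audio
    a440 = retune(tone, detected_hz=392.0, target_hz=440.0)  # retuned up to A
    print(len(tone), "->", len(a440), "samples")
```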

The Singing Book: At hyperallergic.com, Allison Meier writes about an effort to extract the emotional content from writing and turn it into music. It’s a project by Hannah Davis and Saif Mohammad. Below is an example based on the novel Lord of the Flies. More at Davis and Mohammad’s musicfromtext.com. A few weeks back, the Junto explored a parallel effort to listen to the rhythm inherent in particular examples of writing, and to make music based on that rhythm:

Everyday Drum: The divisions between words like “analog” and “digital,” and “electric” and “acoustic,” are far more blurred than they get credit for, as evidenced by this fine implementation of an iPad triggering not just physical beats, but whimsically innovative ones made from bottle caps, buttons, grains, tacks, and other everyday objects (found via twitter.com/Chris_Randall). The project is by Italy-based Lorenzo Bravi, more from whom at lorenzobravi.com:

LED Modular: Vice Motherboard’s DJ Pangburn interviews Charles Lindsay (the SETI artist-in-residence, who invited me to give that talk last month) on his massive LED installation, which involves the chance nature of modular synthesis applied to recordings of the Costa Rican rainforest. Says Lindsay:

“I love modular synthesis, the unpredictable surprises, the textures and wackiness,” he said of his heavily-cabled Eurorack modular synthesizer. “My rig is populated by a lot of SNAZZY FX’s modules. I’m part of the company, which is essentially Dan Snazelle, a wonderful genius, inventor and musician. We share an approach that says ‘let’s build these things and see what happens.’”

Also part of the LED exhibit, titled Carbon IV, is audio sourced from the quantum artificial intelligence laboratory at NASA Ames. Here’s audio from Lindsay’s SoundCloud account:

Superloops: Rob Walker shifts attention from the “supercut” of related material — like the “yeahs” of Metallica’s James Hetfield — to the superloop of standalone elements. “The opposite of a supercut,” writes Walker at Yahoo! Tech, “the superloop condenses nothing. To the contrary, it takes one brief moment of sound or video and repeats it.” It was an honor to be queried, along with Ethan Hein, in Walker’s research. I pointed him to the great sounds of the Star Trek Enterprise on idle. … And in somewhat related news, in Walker’s “The Workologist” column in The New York Times, in which he responds to “workplace conundrums” from readers, he has some advice for someone bothered by an office mate’s gum chewing (“Other than the clicking of keys and occasional phone calls, it’s the only sound in an otherwise quiet office”); he writes, in part:

Because you’ve ruled out music, maybe a comfortable set of noise-canceling headphones — tuned to nothing — would be enough to blunt the irritating sounds. Or you could consider any number of “white noise” generators that are available free online. Noisli.com, for example, generates forest sounds, coffee-shop noise and the like. You also could do a little research on “ambient” music and use a service like Pandora to construct a nondistracting sound stream. Such approaches may be inoffensive enough that you can simply play the sound at low volume from your computer — no earbuds required.

Steampunk Modular: By and large, I tend to keep the threshold of coverage above the level of “things that look neat,” but sometimes that neat is neat enough that I can’t resist, especially when it’s tied to a fine achievement by a talented sound practitioner. Richard Devine has posted on Instagram this shot of a steampunk-style effects module, encased in an old book, that he got from the makers of the Xbox One video game Wolfenstein: The New Order:

Synesthesia Robots: And here’s one from Kid Koala of his lo-fi visual interface for his sampler. Koala is a talented cartoonist as well as an ace downtempo DJ. Those efforts have collided in a score he’s made for a graphic novel, and in various staged performances he’s put together, and this achieves a functional correlation in a very simple manner:


Is There Such a Thing as a Sonic QR Code?

One needn't watch the new Spider-Man movie for a possible answer.

[Image: diagram from Shazam’s audio-fingerprinting paper]

There are at least two things that Sony Pictures marketing executives did not consider when preparing a cross-promotion between its new Spider-Man film and the song-identification app Shazam. I first read about this promotion this morning on io9.com, because pretty much the first thing I read every morning is Morning Spoilers on io9.com. The film in question, The Amazing Spider-Man 2, opens this Friday, May 2, in the United States. I was expecting extended discussion of Peter Parker’s doomed romance with Gwen Stacy, or of the rise of his frenemy Harry Osborn to lead the high-tech firm founded by his father; instead there was news of an intriguing little digital-audio phenomenon.

The Sony-Shazam promotion involves viewers of the Spider-Man movie waiting until the end credits, during which the Alicia Keys song “It’s On Again” is heard. Viewers can then use the Shazam app to identify the song. Doing so brings up a special opportunity to add, for free, photos that hint at members of the Sinister Six — villain characters from Sony’s rapidly expanding Spider-Man franchise — to their personal photo galleries. (It should be noted that the Keys song is itself a sort of cross-promotion. Its full credit is: Alicia Keys feat. Kendrick Lamar – “It’s On Again.”)

The first of these things that Sony Pictures may not have considered is that Shazam shares a name with a superhero from a rival comics publisher, DC. Would it have been too difficult to sign up, instead, with Soundhound, or MusixMatch, or the elegantly named Sound Search for Google Play, among other song-identification services? Perhaps none of this matters. Sony is already engaged in a cold war with other studios among whom the Marvel universe of characters is subdivided. A second-tier, if beloved, character from another universe entirely means nothing when there are already two Quicksilvers running around in your own. For reference, below is an uncharacteristically stern Shazam, drawn by Jeff Smith (best known for his work on Bone):

[Image: Shazam, as drawn by Jeff Smith]

In any case, the second and more pressing matter is that one needn’t stay until the end credits of the new Spider-Man film to activate the Shazam code with the Alicia Keys song. One needn’t even see the Spider-Man film, let alone wait for it to open in a theater near you. Right now, two full days before the film’s release in the United States, you can pull up the Alicia Keys video on YouTube, and the Shazam app on your phone will recognize that as the correct song, and your phone will, indeed, then provide you with the prized photos. In fact, at this point you don’t even need to do that, since the photos have already proliferated around the Internet. (See them at comingsoon.net and at the above io9.com link.)

But an interesting question arises, which is: How different would the Alicia Keys song played during the end credits have to be from the original version of the song for only the credits rendition to be recognized by Shazam as the correct one to cough up the Sinister Six photos? More to the point, can a specific version of a song function as the sonic equivalent of a QR code? QR codes, such as the one shown below, are those square descendants of the zebra-striped bar code. The “QR” stands for “quick response.” They can contain information such as a URL, which when activated by a phone’s camera can direct the phone’s browser to a particular web page. This QR code links, only semi-helpfully, to the web page on which this article originally appeared:

[Image: QR code linking to the page on which this article originally appeared]
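For the curious, producing such a code takes only a few lines. Here is a minimal sketch, assuming the third-party Python package qrcode (installed with its Pillow dependency); the URL and filename are just examples:

```python
# A minimal sketch of encoding a URL in a QR code, using the third-party
# "qrcode" package (pip install qrcode[pil]). URL and filename are examples.
import qrcode

img = qrcode.make("https://disquiet.com/")  # returns an image of the code
img.save("disquiet-qr.png")                 # any phone camera / QR reader can
                                            # now resolve the image to the URL
```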

Of course, from a procedural standpoint, Sony could have gotten around this alternate-version approach by making the song available only in the credits, but that would have cut into sales of the soundtrack album — which would either have to lack the song entirely, or have its release delayed until several weeks after the film’s debut.

The recipes of these different song-identification apps, such as Shazam and its archenemy Soundhound, are closely guarded secrets. Enough information is provided to allow for developer-level discussion, but ultimately the apps’ success (both in terms of successful-identification statistics and user adoption) depends on the how-to being at least semi-obscured. But there is quite a bit of information out there, including a 2003 academic paper by Shazam co-founder Avery Li-Chun Wang outlining the company’s approach at the time (PDF), which I found thanks to an October 2009 article by Farhad Manjoo on Slate.com. The summary at the opening of the paper reads as follows:

We have developed and commercially deployed a flexible audio search engine. The algorithm is noise and distortion resistant, computationally efficient, and massively scalable, capable of quickly identifying a short segment of music captured through a cellphone microphone in the presence of foreground voices and other dominant noise, and through voice codec compression, out of a database of over a million tracks. The algorithm uses a combinatorially hashed time-frequency constellation analysis of the audio, yielding unusual properties such as transparency, in which multiple tracks mixed together may each be identified. Furthermore, for applications such as radio monitoring, search times on the order of a few milliseconds per query are attained, even on a massive music database.

The gist of it, as summarized in handy charts like the one up top, appears to be that an entire song is not necessary for identification purposes, that only key segments — “higher energy content,” he calls it — are required. At least in part, this allows for songs to be recognizable above the din of everyday life: “The peaks in each time-frequency locality are also chosen according to amplitude, with the justification that the highest amplitude peaks are most likely to survive the distortions listed above.” It may also explain why much of my listening, which, being ambient in nature, can easily be described as “low energy content,” is often not recognized by Shazam or any other such software. As a side note, this gets at how the human ear listens differently than a microphone does. The human ear can listen through a complex noise and locate a particular subset, such as a conversation, or a phone ringing, or a song for that matter.
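To make the constellation idea a bit more concrete, here is a drastically simplified sketch in Python. It is emphatically not Shazam’s actual algorithm — the single-peak-per-frame shortcut, the function name, and the fan_out parameter are all inventions for the example — but it illustrates the general move of reducing audio to hashes built from pairs of prominent time-frequency peaks:

```python
# A drastically simplified sketch of constellation-style audio fingerprinting.
# Not Shazam's algorithm; it only illustrates reducing audio to hashes built
# from pairs of prominent time-frequency peaks.
import numpy as np
from scipy import signal

def fingerprint(samples, sample_rate, fan_out=5):
    """Return a set of hashes derived from spectral peaks in `samples`."""
    # Short-time Fourier transform: magnitude spectrogram.
    freqs, times, spec = signal.spectrogram(samples, fs=sample_rate, nperseg=1024)
    # Keep the single highest-amplitude frequency bin per time frame,
    # a crude stand-in for real peak picking.
    peak_bins = spec.argmax(axis=0)
    hashes = set()
    for i, f1 in enumerate(peak_bins):
        # Pair each peak with a few that follow it ("combinatorial" pairing).
        for j in range(1, fan_out + 1):
            if i + j >= len(peak_bins):
                break
            f2 = peak_bins[i + j]
            dt = round(times[i + j] - times[i], 3)
            hashes.add(hash((int(f1), int(f2), dt)))
    return hashes

if __name__ == "__main__":
    # Two seconds of a 440 Hz tone plus noise, standing in for a recording.
    sr = 22050
    t = np.linspace(0, 2, 2 * sr, endpoint=False)
    audio = np.sin(2 * np.pi * 440 * t) + 0.1 * np.random.randn(t.size)
    print(len(fingerprint(audio, sr)), "hashes")
```

Matching a query against a database then becomes a matter of counting how many of these hashes two recordings share, which also suggests why a sufficiently altered credits-only mix might stop matching the album version.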

Now, of course, there’s a difference between the unique attributes of emerging technologies and the desired results of marketing initiatives. Arguably all that Sony wanted to come out of its Shazam cross-promotion was to get word out about Spider-Man, and to buy some affinity for the Sinister Six with a particular breed of fan, and to that end it has certainly succeeded. Perhaps it also hoped to gain a little tech cred in the process, even if that cred is more window dressing than truly innovative at a technological level.

Still, the idea of a song as a true QR code lingers. Perhaps Harry Osborn and Peter Parker could team up and develop a functional spec.


disquiet.gizmodo.com

On Disquiet.com now participating in the Gizmodo ecosystem

These are two things that I think Geoff Manaugh, editor-in-chief of the technology and design blog Gizmodo.com, didn’t know about me when he asked if I’d consider bringing Disquiet.com beneath his website’s expanding umbrella.

1: My “to re-blog” bookmark file has been packed in recent months with scores of items from pretty much all of the Gizmodo-affiliated sites — not just Gizmodo, but io9.com, Lifehacker, Jalopnik, Gawker, and Kotaku. Probably Jezebel and Deadspin, too, but the file is too thick for me to tell.

2: Pretty much the first thing that I read every morning with my coffee — well, every weekday morning — is the “Morning Spoilers” at io9.com, the great science fiction website that is part of the Gawker network that also contains Gizmodo.

I knew Manaugh’s work from BLDGBLOG and, before that, Dwell Magazine. He’d previously invited me to involve the weekly experimental music/sound project series that I run, the Disquiet Junto, in the course on the architecture of the San Andreas Fault that he taught in spring 2013 at Columbia University’s graduate school of architecture. And I am excited to work with him again.

And so, there is now a cozy disquiet.gizmodo.com subdomain URL where I’ll be syndicating — simulposting — material from Disquiet.com, as well as doing original straight-to-Gizmodo writing. I’m hopeful that members of the Gizmodo readership might further expand the already sizable ranks of the Disquiet Junto music projects (we just completed one based on a post from Kotaku), and I’ll be posting notes from the course I teach on “sound in the media landscape” at the Academy of Art here in San Francisco.

For new readers of Disquiet, the site’s purview is as follows:

* Listening to Art.

* Playing with Audio.

* Sounding Out Technology.

* Composing in Code.

I’ll take a moment to break that down:

Listening to Art: Attention to sound art has expanded significantly this year, thanks in no small part to the exhibit Soundings: A Contemporary Score at the Museum of Modern Art in Manhattan. That exhibit, which ran from August 10 through November 3, featured work by such key figures as Susan Philipsz (whose winning of the Turner Prize inspired an early music compilation I put together), Carsten Nicolai (whom I profiled in the new Red Bull Music Academy book For the Record), and Stephen Vitiello (whom I’ve interviewed about 9/11 and architectural acoustics, and who has participated in the Disquiet Junto). But if “sound art” is art for which sound is both raw material and subject matter, my attention is just as much focused on what might better be described as the role of “sound in art,” of the depictions of audio in various media (the sound effects in manga, for example) and the unintended sonic components of art beyond sound art, like the click and hum of a slide carousel or the overall sonic environment of a museum. Here’s video of Tristan Perich’s “Microtonal Wall” from the MoMA exhibit:


Playing with Audio: If everything is, indeed, a remix, that is a case most clearly made in music and experimental sound. From the field recordings that infuse much ambient music to the sampling of hip-hop to the rapturous creative reuse that proliferates on YouTube and elsewhere, music as raw material is one of the most exciting developments of our time. Terms like “remix” and “mashup” and “mixtape” can be seen to have originated or otherwise gained cachet in music, and as they expand into other media, we learn more about them, about the role such activities play in culture. And through the rise of audio-game apps, especially in iOS, such “playing with sound” has become all the more common — not just the work of musicians but of audiences, creating a kind of “active listening.” This notion of reuse, of learning about music and sound by how it is employed after the fact, plays a big role in my forthcoming book for the 33 1/3 series. My book is about Aphex Twin’s album Selected Ambient Works Volume II, and it will be published on February 13, 2014, just weeks ahead of the record’s 20th anniversary. As part of my research for the book, I spoke with many individuals who had come to appreciate the Aphex Twin album by engaging with it in their own work, from composers who had transcribed it for more “traditional” instruments (such as chamber ensembles and solo guitar), to choreographers and sound designers, to film directors.

Sounding Out Technology: A briefer version of the Disquiet.com approach is to look at “the intersection of sound, art, and technology.” The term “technology” is essential to that trio, because it was only when I learned to step back from my fascination with electronically produced music and to appreciate “electronic” as a subset of the vastly longer continuum of “technology” that connections became more clear to me — say, between the sonics of raves and the nascent polyphony of early church music, or between creative audio apps like Brian Eno and Peter Chilvers’ Bloom and what is arguably the generative ur-instrument: the aeolian harp. With both Bloom and the aeolian harp, along with its close relative the wind chime, music is less a fixed composition than a system that is enacted. As technology mediates our lives more and more, the role that sound plays in daily life becomes a richer and richer subject — from voice-enabled devices, to the sounds of consumer product design, to the scores created for electric cars:

Composing in Code: Of all the technologies to come to the fore in the past two decades, perhaps none has had an impact greater than computer code. This is no less true in music and sound than it is in publishing, film, politics, health, or myriad other fields. While the connections between mathematics and music have been celebrated for millennia, there is something special to how, now, those fields are combining, notably in graphical systems such as Max/MSP (and Max for Live, in Ableton) and Pure Data (aka Pd), to name just two. Here, for reference, is a live video of the Dutch musician and sound artist Edo Paulus’ computer screen as he constructs and then performs a patch in Max/MSP. Where the construction ends and the performance begins provides a delightful koan:
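Max/MSP and Pd are graphical, but the same idea of music as a system that is enacted — the aeolian harp and Bloom mentioned above — can be expressed in a few lines of ordinary text-based code. Here is a minimal, hypothetical sketch in Python, not tied to any of the tools named here: a wind-chime-like random process that emits MIDI note numbers rather than audio, so it runs anywhere and any synth or DAW could voice the result:

```python
# A minimal, hypothetical sketch of "music as an enacted system": random
# gusts strike a fixed pentatonic set of chimes, yielding notes and rests.
import random

PENTATONIC = [60, 62, 64, 67, 69]  # C major pentatonic, as MIDI note numbers

def chime(events=16, seed=None):
    rng = random.Random(seed)
    phrase = []
    for _ in range(events):
        if rng.random() < 0.7:           # a "gust" strikes a chime
            phrase.append(rng.choice(PENTATONIC))
        else:                            # otherwise, a rest
            phrase.append(None)
    return phrase

if __name__ == "__main__":
    print(chime(seed=2014))
```

Run it twice with different seeds and you get two different performances of the same composition, which is roughly the point.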

All of which said, I’m not 100-percent clear what form my disquiet.gizmodo.com activity will take. I’m looking forward to experimenting in the space. I’ll certainly be co-posting material from Disquiet.com, but I’m also planning on engaging with Gizmodo itself, and with its broader network of sites. I’ve already, in advance of this post, begun re-blogging material from Gizmodo and from Gizmodo-affiliated sites: not just “sharing” (in the UI terminology of the Kinja CMS that powers the network) but adding some contextual information, thoughts, tangents, details. I’m enthusiastic about Kinja, in particular how it blurs the lines between author and reader. I like that a reply I make to a post about a newly recreated instrument by Leonardo Da Vinci can then appear in my own feed, leading readers back to the original site, where they themselves might join in the conversation. Kinja seems uniquely focused on multimedia as a form of commentary — like many CMS systems, it allows animated GIFs and short videos to serve as blog comments unto themselves, but it goes the step further of allowing users to delineate rectangular sub-sections of previously posted images and comment on those. I’m intrigued to see how sound can fit into that approach. (It’s no surprise to me that Kinja is innovative in this regard — it’s on Lifehacker that I first learned about the syntax known as “markdown.”) I think that all, cumulatively, makes for a fascinating media apparatus, and I want to explore it.

While I typed this post, it was Tuesday in San Francisco. I live in the Outer Richmond District, just north of Golden Gate Park and a little over a mile from the Pacific Ocean. The season’s first torrential rain has passed, and so the city sounds considerably more quiet than it did just a few days ago. No longer is the noise of passing automobiles amplified and augmented by the rush of water, and the roof above my desk is no longer being pummeled. But where there is the seeming peace of this relative quiet, there is also an increased diversity of listening material. The ear can hear further, as it were — not just to conversations in the street and to passing cars, but to construction blocks away, to leaf blowers, to a seaplane overhead, to the sound of a truck backing up at some considerable distance, and to the many birds that (unlike what I was accustomed to, growing up on the north shore of New York’s Long Island) do not all vacate the area come winter. It is shortly past noon as I hit the button to make this post go live. Church bells have sung a duet with the gurgling in my belly to remind me it is time for lunch. And because it is Tuesday, the city’s civic warning system has rung out. 

Dim sum, anyone?


Code to Decode

Software and other insights on unpacking the "Ford Madox Ford Page 99 Remix"

The latest Disquiet Junto finds music hidden in everyday books. The project began Thursday evening, November 7, and ends this coming Monday, November 11, at 11:59pm. It ends not at midnight but at 11:59pm, as do all Junto projects, because early on in the Junto series it became clear to me that when you type “midnight Monday” sometimes people don’t know if you meant the midnight that began Monday or the midnight that ended Monday. These sorts of distinctions are important, because the framing structure of the Junto is as much a set of rules as are the rules of a given project.

If there were two key rules about writing rules they would probably be:

  1. Make sure the rules work.

  2. Make sure the rules aren’t likely to be misinterpreted.

Each of the weekly projects has its own vibe, its own likely/intended audience of participants, and its own surprises, and when it comes to surprises — especially in the form of generous contributions of code from participants — this week is no exception. A few notes follow regarding this week’s project, which involves transforming into music 80 characters drawn from page 99 of a book chosen by the musician. The page number, 99, comes from a comment by the author Ford Madox Ford (more details at the project page).
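First, though, to make the premise concrete, here is a hypothetical sketch — not any participant’s actual code, and the mapping, function name, and sample string are inventions for the example — of one naive way to turn a run of characters into pitches, cycling the letters A through Z across a chromatic scale and treating everything else as a rest:

```python
# A hypothetical sketch (not any Junto participant's actual code) of one naive
# way to turn 80 characters from a book page into pitches: letters are wrapped
# onto a chromatic scale starting at middle C; everything else becomes a rest.
import string

def text_to_notes(text, base_midi_note=60):
    notes = []
    for ch in text.upper()[:80]:          # the project uses 80 characters
        if ch in string.ascii_uppercase:
            offset = string.ascii_uppercase.index(ch) % 12  # wrap A..Z onto 12 semitones
            notes.append(base_midi_note + offset)
        else:
            notes.append(None)            # punctuation and spaces become rests
    return notes

if __name__ == "__main__":
    sample = "Eighty characters or so, drawn from page 99 of a book chosen by the musician."
    print(text_to_notes(sample))
```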

1: Shortly after the project’s announcement, I got a note from Junto member David Wilkins, who has done text->music work in the past. He directed me to his website wilkinsworks.net, from which this is excerpted:

The earliest known version of this system appeared in the Renaissance as a technique called soggetto cavato, first used by Josquin des Prez around 1500, and later named by Zarlino in his 1558 treatise Le institutioni harmoniche as soggetto cavato dalle vocali di queste parole, or literally, a subject ‘carved out of the vowels from these words.’ des Prez only used the vowels, mapping them to the solmization syllables, and using the resulting notes as the cantus firmus for the Missa Hercules dux Ferrariae and other works. …

My first foray into this idea occurred in 1976 while an undergrad music student, waiting for a recital to begin. There is a famous organ work by Bach, based on his own name, which gave me the idea in the first place. In German, B is B flat and H is B natural. Being an American I wanted to use the A through G as is, so had to start with H as something else. Being a trombonist I tend to favor flats over sharps, so assigned H to A flat, I to B flat, and up to the first twelve notes. Start over again with M assigned to A, and so on for the remaining letters.

2: Junto participant Mutagene posted to github.com a script in the Ruby language to help automate the process of changing letters and punctuation into notes:

3: And Junto member Defaoieclan wrote a piece of software in Processing that would likewise assist in the transform. Full piece at the track’s page. Here’s the opening part:

[Image: opening lines of the Processing sketch]

4: And Junto member Inlet wrote something in SuperCollider, available at the track’s page. Here’s the opening part:

[Image: opening lines of the SuperCollider code]

The 97th Disquiet Junto project is housed here.
