
The Hamlet of CMS Cross-pollination

I've turned off the sound.tumblr.com -> Disquiet.com autofeed.

There’s probably no one who cares about this but me, but I wanted to mention that for the time being I’ve turned off the IFTTT “recipe” that would automatically take new posts from my sound.tumblr.com site and repost them here at Disquiet.com. The reason is simple: a lot gets published at sound.tumblr.com on a daily basis, because it’s a linkblog, and it can overwhelm Disquiet.com. I came to this realization this month: my sensitivity to not overwhelming the Disquiet.com editorial balance was actually keeping me from posting more frequently to the sound.tumblr.com site. And the point of the sound.tumblr.com site is to have as little in the way of a filter as possible — to just use it as a repository for lightly annotated links about the role of sound in the media landscape. On occasion I’ll do roundups here at Disquiet.com of highlights from sound.tumblr.com, and if a given sound.tumblr.com post takes on a little heft, I’ll cross-post it here, as I did earlier today with the piece on the sound of dining.
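
For anyone curious what a more selective autofeed could look like, here is a minimal Python sketch that automates the “heft” rule described above. The feed URL is real, but the crosspost_to_disquiet() function and the word-count threshold are placeholders of my own, not an actual Disquiet.com API:

```python
# Sketch of a filtered autofeed: only cross-post linkblog entries that
# have taken on some "heft." The posting function is a placeholder.
import feedparser

HEFT_THRESHOLD = 150  # assumed minimum word count for a cross-post

def crosspost_to_disquiet(entry):
    # Placeholder: a real version would call the blog's posting API.
    print(f"Cross-posting: {entry.title} ({entry.link})")

feed = feedparser.parse("http://sound.tumblr.com/rss")
for entry in feed.entries:
    word_count = len(entry.get("summary", "").split())
    if word_count >= HEFT_THRESHOLD:
        crosspost_to_disquiet(entry)
```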


Why There’s a @djunto Twitter Account

To encourage communication among participants

I don’t generally post from my @djunto Twitter account. I post from my @disquiet account. I will respond to messages sent to @djunto, but even then I try to redirect the response so it comes from @disquiet. My general plan for @djunto is as follows: the way Twitter works, any two people who follow the same account, in this case @djunto, will see any communication either one makes to @djunto. That’s an encouragement for people who participate in the Disquiet Junto to pay attention to each other’s posts, and to discover each other. Ultimately, a core component of the Disquiet Junto is communication among participants, and the @djunto account is one leg of that table, along with the request that members comment on each other’s tracks, perhaps check out the Facebook page (facebook.com/disquiet.fb), and join in the discussion that occurs on Disquiet.com posts and in the disquiet.com/forums threads.


Ambient at the Grey Lady

Charting the word "ambient" over time in the New York Times' Chronicle app

The paper of record has a feature called Chronicle that allows you to experiment with “Visualizing language usage in New York Times news coverage throughout its history,” as the service describes itself. You can compare the frequency of multiple words, or just chart one. Apparently “ambient” is on a roll:

[Image: Chronicle chart of “ambient” usage in the New York Times over time]

I’m not sure there’s actually been a downturn in the past year. A lot of words I checked tapered off at the end, making me wonder if the tool isn’t adjusting for 2014 being barely half over.
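
To make the half-year issue concrete, here is a quick back-of-the-envelope sketch in Python. The counts are invented for illustration (they are not Chronicle’s actual data), but they show how a partial year can read as a false downturn until you annualize it:

```python
# Invented counts showing how a partial year mimics a downturn.
yearly_mentions = {2011: 96, 2012: 110, 2013: 128, 2014: 74}
fraction_of_year_elapsed = {2011: 1.0, 2012: 1.0, 2013: 1.0, 2014: 7 / 12}

for year, count in sorted(yearly_mentions.items()):
    annualized = count / fraction_of_year_elapsed[year]
    print(f"{year}: raw={count:4d}  annualized={annualized:6.1f}")
# 2014's raw count (74) looks like a dip, but annualized (~126.9) it isn't.
```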

The service is similar to, if more elegant than, Google’s Ngram Viewer, where “ambient” charts about the same, aside from the downturn:

[Image: Google Ngram Viewer chart of “ambient”]

Try it out at chronicle.nytlabs.com. Here’s the announcement article from yesterday. If you come upon any interesting data, let me know. (Thanks to Ian Lewis Gordon for the tip.)


Tangents: Data Immersion, the Tuning of the Internet, Superloops, …

Plus: the emotional key of books, physical computer drums, quantum computer sounds, steampunk modular, and more

Tangents is an occasional collection of short, lightly annotated mentions of sound-related activities.

Data Immersion: Characteristically breathtaking video of a new work by Ryoji Ikeda, perhaps the leading installation poet of data immersion. This one is of his piece “supersymmetry,” which relates to his residency at CERN, the particle-physics laboratory that houses the Large Hadron Collider. More at supersymmetry.ycam.jp:

In an interview he talks about the dark-matter research that informed his effort:

“Supersymmetry is being considered as a possible solution of the mystery of this dark matter. During the period I’m staying at CERN, there are experiments being carried out with the aim to prove the existence of as-yet undiscovered ‘supersymmetry particles’ that form pairs with the particles that make up the so-called ‘Standard Model’ catalogue of physical substances. Data and technologies of these experiments are not directly incorporated in the work, but I’m going to discuss a variety of things with the physicists at CERN, and the results of these discussions will certainly be reflected.”

Tones of the Internet: The tonal repository of the Internet is very different from the room tone of the Internet, which we explored in a recent Disquiet Junto project. Over at wired.com, Joseph Flaherty profiles Zach Lieberman, with an emphasis on his Play the World project, which scours the Internet for sounds — the music heard on radio stations — and then allows them to be played back. “Using the set-up,” Flaherty writes, “a person can literally turn the internet into a musical instrument.” What makes that sentence more than hyperbole is that the source audio is played at the note triggered by the user, though it’s by no means “the Internet” being played, but instead a fairly well-circumscribed and specific subset of the Internet. (The effort brings to mind the title of R. Murray Schafer’s classic book of sound studies, The Tuning of the World.) It’s part of DevArt, a Google digital art endeavor that has nothing to do with DeviantArt, the longstanding web forum for (largely) visual artists, or with Devart, the database software company. “Play the World, and several other DevArt projects,” reports Flaherty, “will make their debut at the Barbican Gallery of Art in London in July, but the code is available on Github today.” There’s something intriguing about an art premiere that is preceded by the materials’ worldwide open-source availability. Here’s audio of the note A being played for 20 minutes based on a wide array of these sound sources. It appears to be from Lieberman’s own SoundCloud account, which oddly has only 15 followers as of this writing. Well, 16, because I just joined up.
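
To give a rough sense of the core move in a project like Play the World, which re-pitches a found sound to the note a user triggers, here is a hedged Python sketch using the librosa library. The file name and target note are stand-ins of my own; this is an approximation of the technique, not Lieberman’s actual code:

```python
# Re-pitch a found sound to a target note: estimate the clip's pitch,
# then shift it by the semitone distance to the target. All names are
# placeholders; this approximates the technique, not any real codebase.
import librosa
import numpy as np
import soundfile as sf

y, sr = librosa.load("radio_snippet.wav")  # hypothetical source clip

# Estimate the clip's fundamental frequency with the YIN algorithm.
f0 = librosa.yin(y, fmin=librosa.note_to_hz("C2"),
                 fmax=librosa.note_to_hz("C6"), sr=sr)
current_hz = float(np.nanmedian(f0))

# Shift the clip so its pitch lands on the triggered note, here A4 (440 Hz).
target_hz = librosa.note_to_hz("A4")
n_steps = 12 * np.log2(target_hz / current_hz)  # distance in semitones
shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=n_steps)

sf.write("snippet_at_A4.wav", shifted, sr)
```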

The Singing Book: At hyperallergic.com, Allison Meier writes about an effort to extract the emotional content from writing and turn it into music. It’s a project by Hannah Davis and Saif Mohammad. Below is an example based on the novel Lord of the Flies. More at Davis and Mohammad’s musicfromtext.com. A few weeks back, the Junto explored a parallel effort to listen to the rhythm inherent in particular examples of writing, and to make music based on that rhythm.
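
As a toy illustration of the general idea (emphatically not Davis and Mohammad’s actual method), here is a Python sketch that scores a text’s emotional valence against a tiny invented lexicon and maps the result to a musical mode and tempo:

```python
# Toy emotion-to-music mapping with an invented five-word lexicon.
# Real systems, like Davis and Mohammad's, use far richer emotion lexicons.
POSITIVE = {"light", "laugh", "friend", "hope", "play"}
NEGATIVE = {"dark", "fear", "blood", "dead", "scream"}

def valence(text: str) -> float:
    words = text.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / max(pos + neg, 1)  # -1 (grim) to +1 (sunny)

def musical_setting(text: str) -> dict:
    v = valence(text)
    return {
        "mode": "major" if v >= 0 else "minor",
        "tempo_bpm": int(90 + 40 * v),  # darker text, slower tempo
    }

print(musical_setting("The dark scream of fear rose over the dead fire"))
# -> {'mode': 'minor', 'tempo_bpm': 50}
```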

Everyday Drum: The divisions between words like “analog” and “digital,” and “electric” and “acoustic,” are far more blurred than they get credit for, as evidenced by this fine implementation of an iPad triggering not just physical beats, but whimsically innovative ones made from bottle caps, buttons, grains, tacks, and other everyday objects (found via twitter.com/Chris_Randall). The project is by Italy-based Lorenzo Bravi, more from whom at lorenzobravi.com:

LED Modular: Vice Motherboard’s DJ Pangburn interviews Charles Lindsay (the SETI artist-in-residence, who invited me to give that talk last month) on his massive LED installation, which involves the chance nature of modular synthesis applied to recordings of the Costa Rican rainforest. Says Lindsay:

“I love modular synthesis, the unpredictable surprises, the textures and wackiness,” he said of his heavily cabled Eurorack modular synthesizer. “My rig is populated by a lot of SNAZZY FX’s modules. I’m part of the company, which is essentially Dan Snazelle, a wonderful genius, inventor and musician. We share an approach that says ‘let’s build these things and see what happens.'”

Also part of the LED exhibit, titled Carbon IV, is audio sourced from the quantum artificial intelligence laboratory at NASA Ames. Here’s audio from Lindsay’s SoundCloud account:

Superloops: Rob Walker shifts attention from the “supercut” of related material — like the “yeahs” of Metallica’s James Hetfield — to the superloop of standalone elements. “The opposite of a supercut,” writes Walker at Yahoo! Tech, “the superloop condenses nothing. To the contrary, it takes one brief moment of sound or video and repeats it.” It was an honor to be queried, along with Ethan Hein, in Walker’s research. I pointed him to the great sounds of the Star Trek Enterprise on idle. (A minimal code sketch of the superloop idea appears after the quote below.) … And in somewhat related news, in Walker’s “The Workologist” column in The New York Times, in which he responds to “workplace conundrums” from readers, he has some advice for someone bothered by an office mate’s gum chewing (“Other than the clicking of keys and occasional phone calls, it’s the only sound in an otherwise quiet office”); he writes, in part:

Because you’ve ruled out music, maybe a comfortable set of noise-canceling headphones — tuned to nothing — would be enough to blunt the irritating sounds. Or you could consider any number of “white noise” generators that are available free online. Noisli.com, for example, generates forest sounds, coffee-shop noise and the like. You also could do a little research on “ambient” music and use a service like Pandora to construct a nondistracting sound stream. Such approaches may be inoffensive enough that you can simply play the sound at low volume from your computer — no earbuds required.
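
Back to superloops themselves: the technique is simple enough to sketch in a few lines of Python, using only the standard library. This carves one brief moment out of a WAV file and repeats it. The file name and timings are placeholders, so treat it as a sketch of the idea, not a finished tool:

```python
# Superloop sketch: extract one brief moment from a WAV file and repeat it.
# The source file, start time, and loop length are invented placeholders.
import wave

SOURCE = "source.wav"           # hypothetical input file
START_S, LENGTH_S = 12.0, 0.75  # where the moment begins, and how long it lasts
REPEATS = 40                    # how many times to repeat the moment

with wave.open(SOURCE, "rb") as src:
    params = src.getparams()
    framerate = src.getframerate()
    src.setpos(int(START_S * framerate))  # seek to the moment
    moment = src.readframes(int(LENGTH_S * framerate))

with wave.open("superloop.wav", "wb") as out:
    out.setparams(params)  # same channels, sample width, and rate
    out.writeframes(moment * REPEATS)
```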

Steampunk Modular: By and large, I tend to keep the threshold of coverage above the level of “things that look neat,” but sometimes the neat is neat enough that I can’t resist, especially when it’s tied to a fine achievement by a talented sound practitioner. Richard Devine has posted on Instagram this shot of a steampunk-style effects module, encased in an old book, that he got from the makers of the Xbox One video game Wolfenstein: The New Order:

Synesthesia Robots: And here’s one from Kid Koala: his lo-fi visual interface for his sampler. Koala is a talented cartoonist as well as an ace downtempo DJ. Those efforts have collided in a score he’s made for a graphic novel, and in various staged performances he’s put together, and this interface achieves a functional correlation of the two in a very simple manner:


Is There Such a Thing as a Sonic QR Code?

One needn't watch the new Spider-Man movie for a possible answer.

[Image: time-frequency chart from the Shazam white paper]

There are at least two things that Sony Pictures marketing executives did not consider when preparing a cross-promotion between its new Spider-Man film and the song-identification app Shazam. I first read about this promotion this morning on io9.com, because pretty much the first thing I read every morning is Morning Spoilers on io9.com. The film in question, The Amazing Spider-Man 2, opens this Friday, May 2, in the United States. I expected extended discussion of Peter Parker’s doomed romance with Gwen Stacy, or the rise of his frenemy Harry Osborn to lead the high-tech firm founded by his father; instead there was news of an intriguing little digital-audio phenomenon.

The Sony-Shazam promotion involves viewers of the Spider-Man movie waiting until the end credits, during which the Alicia Keys song “It’s On Again” is heard. Viewers can then use the Shazam app to identify the song. Doing so brings up a special opportunity to add, for free, photos that hint at members of the Sinister Six — villain characters from Sony’s rapidly expanding Spider-Man franchise — to their personal photo galleries. (It should be noted that the Keys song is itself a sort of cross-promotion. Its full credit is: Alicia Keys feat. Kendrick Lamar – “It’s On Again.”)

The first of these things that Sony Pictures may not have considered is that Shazam shares a name with a superhero from a rival comics publisher, DC. Would it have been too difficult to sign up, instead, with SoundHound, or Musixmatch, or the elegantly named Sound Search for Google Play, among other song-identification services? Perhaps none of this matters. Sony is already engaged in a cold war with the other studios among which the Marvel universe of characters is subdivided. A second-tier, if beloved, character from another universe entirely means nothing when there are already two Quicksilvers running around in your own. For reference, below is an uncharacteristically stern Shazam, drawn by Jeff Smith (best known for his work on Bone):

[Image: Shazam as drawn by Jeff Smith]

In any case, the second and more pressing matter is that one needn’t stay until the end credits of the new Spider-Man film to activate the Shazam code with the Alicia Keys song. One needn’t even see the Spider-Man film, let alone wait for it to open in a theater near you. Right now, two full days before the film’s release in the United States, you can pull up the Alicia Keys video on YouTube, and the Shazam app on your phone will recognize that as the correct song, and your phone will, indeed, then provide you with the prized photos. In fact, at this point you don’t even need to do that, since the photos have already proliferated around the Internet. (See them at comingsoon.net and at the above io9.com link.)

But an interesting question arises: How different would the Alicia Keys song played during the end credits have to be from the original version of the song for only the credits rendition to be recognized by Shazam as the correct one to cough up the Sinister Six photos? More to the point, can a specific version of a song function as the sonic equivalent of a QR code? QR codes are those square descendants of zebra-striped barcodes, such as the one shown below. The “QR” stands for “quick response.” They can contain information such as a URL, which when activated by a phone’s camera can direct the phone’s browser to a particular web page. This QR code links, only semi-helpfully, to the web page on which this article originally appeared:

[Image: QR code linking to the web page on which this article originally appeared]
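
Generating such a code takes barely any effort, which is part of what makes the sonic analogy tempting. Here is a minimal Python sketch using the third-party qrcode package (pip install qrcode[pil]); the encoded URL is just illustrative:

```python
# Minimal QR-code generation; the encoded URL is illustrative.
import qrcode

img = qrcode.make("https://disquiet.com/")
img.save("qr-disquiet.png")
```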

Of course, from a procedural standpoint, Sony could have gotten around this alternate-version approach by having the song only be available in the credits, but that would have cut into sales of the soundtrack album — which would either have to lack the song entirely, or have its release delayed until several weeks after the film’s debut.

The recipes of these different song-identification apps, such as Shazam and its archrival SoundHound, are closely guarded secrets. Enough information is provided to allow for developer-level discussion, but ultimately the apps’ success (both in terms of successful-identification statistics and user adoption) depends on the how-to being at least semi-obscured. But there is quite a bit of information out there, including a 2003 academic paper by Shazam co-founder Avery Li-Chun Wang outlining the company’s approach at the time (PDF), which I found thanks to an October 2009 article by Farhad Manjoo on Slate.com. The summary at the opening of the paper reads as follows:

We have developed and commercially deployed a flexible audio search engine. The algorithm is noise and distortion resistant, computationally efficient, and massively scalable, capable of quickly identifying a short segment of music captured through a cellphone microphone in the presence of foreground voices and other dominant noise, and through voice codec compression, out of a database of over a million tracks. The algorithm uses a combinatorially hashed time-frequency constellation analysis of the audio, yielding unusual properties such as transparency, in which multiple tracks mixed together may each be identified. Furthermore, for applications such as radio monitoring, search times on the order of a few milliseconds per query are attained, even on a massive music database.

The gist of it, as summarized in handy charts like the one up top, appears to be that an entire song is not necessary for identification purposes, that only key segments — “higher energy content,” he calls it — are required. At least in part, this allows for songs to be recognizable above the din of everyday life: “The peaks in each time-frequency locality are also chosen according [to] amplitude, with the justification that the highest amplitude peaks are most likely to survive the distortions listed above.” It may also explain why much of my listening, which being ambient in nature can easily be described as “low energy content,” is often not recognized by Shazam or any other such software. As a side note, this gets at how the human ear listens differently than a microphone does. The human ear can listen through a complex noise and locate a particular subset, such as a conversation, or a phone ringing, or a song for that matter.
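
For the technically inclined, here is a loose Python sketch of the constellation idea as I understand it from Wang’s paper: pick high-energy peaks in a spectrogram, then hash pairs of nearby peaks into compact fingerprints. The parameters are invented for illustration, and the database and matching stages are omitted entirely:

```python
# Loose sketch of time-frequency constellation fingerprinting.
# Neighborhood size, threshold, and fan-out are invented parameters.
import numpy as np
from scipy.ndimage import maximum_filter
from scipy.signal import spectrogram

def fingerprints(samples, sample_rate, fan_out=5):
    freqs, times, spec = spectrogram(samples, fs=sample_rate)
    # A point counts as a peak if it is the maximum of its local neighborhood
    # and rises above the average energy of the whole spectrogram.
    local_max = maximum_filter(spec, size=20) == spec
    peaks = np.argwhere(local_max & (spec > spec.mean()))  # rows of (freq, time)
    peaks = peaks[np.argsort(peaks[:, 1])]                 # order by time
    hashes = []
    for i, (f1, t1) in enumerate(peaks):
        # Pair each peak with the next few peaks in time, Shazam-style.
        for f2, t2 in peaks[i + 1 : i + 1 + fan_out]:
            hashes.append(hash((int(f1), int(f2), int(t2 - t1))))
    return hashes

# Example: fingerprint one second of a synthetic 440 Hz tone.
sr = 8000
tone = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
print(len(fingerprints(tone, sr)), "hashes")
```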

Now, of course, there’s a difference between the unique attributes of emerging technologies and the desired results of marketing initiatives. Arguably all that Sony wanted to come out of its Shazam cross-promotion was to get word out about Spider-Man, and to buy some affinity for the Sinister Six with a particular breed of fan, and to that end it has certainly succeeded. Perhaps it also hoped to gain a little tech cred in the process, even if that cred is more window dressing than true technological innovation.

Still, the idea of a song as a true QR code lingers. Perhaps Harry Osborn and Peter Parker could team up and develop a functional spec.
