Realtime Sonification

A KQED interview with Mahmoud Hashemi about Listen to Wikipedia

[Image: a still frame of Listen to Wikipedia in action]

Someone adds an entry about a cooking magazine from the 1950s? Boom …

Someone corrects the release date in the discography of James Taylor? Bleep …

Someone undoes a spelling correction in an entry about an undersecretary from a mid-century U.S. presidential administration? Bong …

Audible tones and expanding, colored circles are used in tandem to announce changes to the vast collaborative encyclopedia thanks to the great online tool Listen to Wikipedia (listen.hatnote.com), one of the best examples of realtime sonification on the web. Developed by Stephen LaPorte and Mahmoud Hashemi, it’s the subject of a short recent interview from radio station KQED. The conversation with Hashemi goes into the background of the tool. He talks about the software’s actions, and how it serves both as an expression of Wikipedia and as a response to the economic focus of Silicon Valley.

There’s something very pleasing and centering about the placid surveillance of Listen to Wikipedia, all that communal and often rancorous activity transformed into dulcet tones. Sometimes I just let it run on a side screen as I work. Sometimes I also run this pure geographic visualizer, at rcmap.hatnote.com:

[Image: the rcmap.hatnote.com realtime geographic visualizer]

Up at the top of this post is a sample still frame of Listen to Wikipedia in action. Here is an example of the sort of realtime information that Listen to Wikipedia parses:

[Image: a sample of the realtime edit data that Listen to Wikipedia parses]
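For a sense of what that data looks like in practice, here is a minimal sketch of reading Wikipedia’s realtime edit feed in Python. It uses Wikimedia’s public EventStreams endpoint (stream.wikimedia.org), which is one way to get this kind of recent-changes data; Listen to Wikipedia itself may consume a different feed, and the field handling below is an illustration, not the tool’s actual code.

```python
import json
import requests

# Wikimedia's public Server-Sent Events feed of recent changes (assumed here
# as a stand-in for whatever feed Listen to Wikipedia actually consumes).
STREAM_URL = "https://stream.wikimedia.org/v2/stream/recentchange"

def watch_recent_changes():
    # stream=True keeps the connection open so events arrive as they happen.
    with requests.get(STREAM_URL, stream=True) as resp:
        for raw in resp.iter_lines():
            # SSE payload lines begin with "data:"; skip ids and keep-alives.
            if not raw or not raw.startswith(b"data:"):
                continue
            change = json.loads(raw[len(b"data:"):])
            if change.get("type") != "edit":
                continue
            # The size of the edit is the change in article length, in bytes.
            delta = change["length"]["new"] - change["length"]["old"]
            print(f'{change["title"]}: {delta:+d} bytes by {change["user"]}')

if __name__ == "__main__":
    watch_recent_changes()
```

Each event also carries a bot flag and the contributor’s name, which is roughly the information the visualization turns into circle colors.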

This documentation summarizes how the sounds and related images of Listen to Wikipedia correlate with actual edits:

Bells indicate additions and string plucks indicate subtractions. Pitch changes according to the size of the edit; the larger the edit, the deeper the note. Green circles show edits from unregistered contributors, and purple circles mark edits performed by automated bots.
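That mapping lends itself to a few lines of code. The sketch below follows the documentation quoted above; the pitch scaling, the default circle color for registered editors, and the function name are assumptions for illustration, not details taken from the Listen to Wikipedia source.

```python
# Sketch of the sound/visual mapping described above. The thresholds, the
# pitch scale, and the default color are assumptions; only the bell/pluck
# split, the size-to-depth rule, and the green/purple coding come from the
# quoted documentation.
def describe_edit(delta_bytes, is_registered, is_bot):
    # Bells indicate additions; string plucks indicate subtractions.
    instrument = "bell" if delta_bytes >= 0 else "string pluck"

    # The larger the edit, the deeper the note: map edit size onto a
    # 0.0-1.0 scale, where 1.0 is the highest pitch and 0.0 the lowest.
    size = min(abs(delta_bytes), 10_000)
    pitch = 1.0 - size / 10_000

    # Circle color encodes who made the edit.
    if is_bot:
        color = "purple"   # automated bots
    elif not is_registered:
        color = "green"    # unregistered contributors
    else:
        color = "white"    # registered editors (assumed default)
    return instrument, pitch, color

# Example: a 450-byte addition from an anonymous contributor.
print(describe_edit(450, is_registered=False, is_bot=False))
```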

Here’s a short video of Listen to Wikipedia in action:

Track originally posted at soundcloud.com/kqed. The KQED story was produced by Sam Harnett, of the podcast The World According to Sound (theworldaccordingtosound.org). Check out Listen to Wikipedia at listen.hatnote.com. It’s also available as a free iOS app (itunes.apple.com).

This Week in Sound: Superheroes, Maps, Freesound(s), …

A lightly annotated clipping service

• Heroic Jingle: Kudos to readers of ign.com for noticing the small text on the poster for the Avengers: Age of Ultron movie and discerning from it that Spider-Man may very well be in the film. Why? Because there’s a credit for composer Danny Elfman, who wrote the theme for the modern Spider-Man films:
http://www.ign.com/articles/2015/02/24/behold-the-new-poster-for-marvels-avengers-age-of-ultron

• Sound Trip: My friends Nick Sowers and Bryan Finoki are now using sound to investigate the urban environment with a series at Design Observer. The first takes them to San Francisco’s Mission District:
http://designobserver.com/feature/infringe-01-the-new-mission-soundtrack/38775

• Tracking Sound: This is a bit old, dating from late December, but I just came across the news that Freesound.org, a massive shared database of field recordings and other sounds, now allows users to track specific tags and users. Useful if you have a fetish for creaking doors, foghorns, or particular species of bird:
http://blog.freesound.org/?p=532

• Mapping Sound: The National Park Service has mapped the quietest places in the United States of America. The word “sonification” is a useful one in discussing the way sound can be employed to explain data, but in this case it is, in turn, a simple visualization that best depicts how the west is far quieter than the east:
http://www.citylab.com/commute/2015/02/the-quietest-places-in-america-mapped/385620/

This first appeared in the February 24, 2015, edition of the free Disquiet email newsletter: tinyletter.com/disquiet.

The Dark Side of the Moon: Sonified Astrophysics

The Studio 360 podcast recently focused its microphones on “The Blind Astrophysicist,” as the episode was titled. Its subject is a sight-impaired astrophysicist named Wanda Diaz-Merced, who is from Puerto Rico. According to the reportage, Diaz-Merced began to lose her sight as she was pursuing her studies. She talks in the interview (MP3) about how all her fellow students were straining their eyes to read the tiny markings in their research data, while her own vision was deteriorating far more rapidly.

[Audio: “The Blind Astrophysicist” (Studio 360), featuring astrophysicist Wanda Diaz-Merced: http://www.podtrac.com/pts/redirect.mp3/audio.wnyc.org/studio/studio050611e.mp3]

To hear this woman’s voice is to meet someone who doesn’t know of despair, or certainly has an unusually high threshold for it. She tells the story of how she came to explore sonification (alternate terminology: she calls it audification, while the host, Ari Epstein, calls it ensonification), in which data is presented in a way that can be likened to a musical composition. There’s a particularly good anecdote about how during a visit to a computer lab she heard the squeal of data as it was being crunched, and recognized a burst in the static that turned out to be a sunburst. And an even better one when we learn that her computer-programming collaborator is … deaf.

Above is an image from the Studio 360 blog, showing “a graph marked with Braille tags on a pegboard to plot the intensity of light versus frequency for a spiral galaxy. She can figure out the mass of the galaxy by calculating the area under the curve.” It’s just one example of how Diaz-Merced pursues her research not just despite her diminished sense, but because of the manner in which she has learned to make all the more of her retained senses.

The broadcast, of course, focuses less on the tactile than on the aural. Things tagged or referenced as “sonification” are often transformations into sound art or music of information that originated as data with no express intent to serve as art. Diaz-Merced’s astrophysics projects straddle that line, in that she’s working on music-production software that feeds on data and produces tunes. Still, host Epstein summarizes well the practical research benefits of her efforts: “Even people who can see just fine are better at detecting patterns if they can hear a soundtrack while they’re watching the computer draw a graph. Their ears are working in sync with their eyes, to immerse them in the data.”

Original post at studio360.org. The Studio 360 site provides this link for additional info on Diaz-Merced: icad.org. Subject originally located via “Senses Working Overtime,” from the blog of my old friend Andrew Jaffe (we went to college together), who is an astrophysicist himself, at Imperial College London; he also provided this link to Diaz-Merced’s research at Harvard: harvard.edu.