The Leading Use Case for Sonification Is Clickbait

Something's rotten in the state of science journalism

Another week, another flurry of news coverage that involves the concept of sonification. “Listen to the terrifying rumble of Earth’s magnetic field being assaulted by a solar storm,” reads a headline at space.com, even though we’re not actually listening to what the headline says. What we’re listening to is data — data collected by the European Space Agency from the event and subsequently transformed, back here on Earth, into human-listenable sound.

Engadget.com’s title on the same topic: “Listen to the eerie sounds of a solar storm hitting the Earth’s magnetic field” — though the article, at least, clarifies more explicitly that microphones weren’t harmed (let alone utilized) in the making of the recording:

“You can’t exactly point a microphone at the sky and hear the magnetic field (nor can we see it). Scientists from the Technical University of Denmark converted data collected by the ESA’s three Swarm satellites into sound, representing both the magnetic field and a solar storm.”

Even a member of the ESA project team, Klaus Nielsen, describes it, in a post at the agency’s website, as “a sonic representation of the core field.” Then again, that ESA post bears a title no less mistaken than are the others: “The scary sound of Earth’s magnetic field.”

Arguably, none of those titles — or myriad others like them that have proliferated in recent years — are, in fact, mistaken. What they are is misleading. These titles are clickbait: using fantastical statements to lure readers, and leaving the dirty work of dialing back the overstatement for some point deeper in the actual article (if the reader even gets that far).

Now, it’s a common scenario in journalism, science or otherwise, that writers don’t write their own headlines. Articles might even have multiple headlines depending on the medium and other factors: one for the print edition, another for online, maybe an alternate for mobile. I cherish to this day several headlines written by editors for stories I’ve written, like a long-ago piece about Rudy VanderLans and Zuzana Licko, the typographers at Emigre; the editor at the alt-weekly titled it “About Face.” And there’s an interview I did with science fiction author and, more pertinent to the title of the piece, climate-science ambassador Kim Stanley Robinson: “The Man Who Fell for Earth.”

Tellingly, both those headlines were for publications that prioritized print. They were intended to be playful and lend a sensibility to the broader coverage. The online headlines serve a very different purpose, a transactional one, which is to get people to click through. The problem is, editors don’t trust that the topics they have selected are interesting enough unto themselves, so headlines are produced that have the linguistic equivalent of artificial flavor added. And the problem doesn’t end there. Once certain types of stories prove clickable, they appear again and again.

The concept of “representation” that Nielsen raises in the ESA post was entirely lost on (or ignored by) the headline writers at both space.com and Engadget, and they’re not alone. Back in May, Popular Science had an article titled “NASA recorded a black hole’s song, and you can listen to it” — and then, after the author claimed “we can finally listen to a black hole scream into the void,” the reader is told that’s not actually the case: “scientists can create parameters for all kinds of numerical data by assigning those values to higher or lower pitches, or vice versa, to turn them into musical notes.”
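
For what it’s worth, the underlying technique that quote gestures at is simple enough to sketch in code: a mapping from numeric values to pitches. The note range and the sample readings below are my own illustrative assumptions, not the mapping NASA or the ESA actually used.

```python
# A minimal data-to-pitch sonification sketch: rescale arbitrary numeric
# readings onto MIDI note numbers. The note range and sample data are
# illustrative assumptions, not the mapping any cited project used.

def sonify(values, low_note=48, high_note=84):
    """Linearly rescale each value to a MIDI note number (C3 to C6)."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1  # avoid dividing by zero for constant data
    return [round(low_note + (v - lo) / span * (high_note - low_note))
            for v in values]

# e.g. magnetometer-style readings, in arbitrary units
readings = [21.4, 22.0, 25.3, 31.8, 29.6, 24.1]
print(sonify(readings))  # higher readings map to higher notes
```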

I don’t know what will break this ongoing cycle, though I worry what will happen is that sonification will become such a routine source of unfulfilled promises — the science-journalism equivalent of empty calories — that it won’t ever really have a chance to become the useful tool it could be.

Back in May of this year I wrote an article for The Wire about sonification, and in it I interviewed Sara Lenzi from the Center for Design at Northeastern University in Boston, Massachusetts, where she helps run the excellent sonification.design online database of sonification projects. Lenzi herself argues against sonic purism in sonification, saying that sound works best when we’re “combining it with other sensory modalities,” such as data visualization.

Sonification clickbait articles are the precise opposite of what Lenzi encourages, because they actively isolate the sound from the facts at hand. The point of sonification, in the context of the popular press, is to lend meaning and approachability to data by rendering it in sound. But by repeatedly tricking readers into thinking they’re hearing the actual source of the data and not a representation of the data, the online publications making advertising-adjacent slivers of pennies for each click are undermining the science they’re purportedly promoting.

Sound Ledger¹ (Cars & Satellites)

Audio culture by the numbers

1: Percent of people who said they would “call the police upon hearing a car alarm”

95: Estimated percent of alarms set off by “vibrations of passing trucks or glitches in the car’s electrical system”

31: Number of days of satellite imagery in a NASA sonification project that resulted in “a waltz-inspired melody.”

________
¹Footnotes

Alarms: clivethompson.medium.com. NASA: nasa.gov.

Originally published in the June 20, 2022, edition of the This Week in Sound email newsletter. Get it in your inbox via tinyletter.com/disquiet.

Exploring Sonification.Design

A column I wrote for The Wire

The current issue of The Wire features a column I wrote, under the Unofficial Channels heading, about the website sonification.design. If you’re a subscriber, you can read it now in the magazine (the issue with Reynols on the cover). When the next issue of The Wire comes out, I’ll post the full text to Disquiet.com.

Stellar Catalogue Sonification

From the ancient Greek astronomer Hipparchus to the ESA's Hipparcos satellite

Musician and computer science PhD candidate Jamie Ferguson teamed with the European Space Agency to develop a unique sonification of early and contemporary maps of our sky. As described at the ESA’s website, “The improvement in the quantity and precision of data, as well as the increased information content and dimensions contained in each catalogue, are palpable as the sound clip evolves from the ancient Hipparchus to the modern Hipparcos.” Hipparchus is the ancient Greek astronomer, while Hipparcos is the name of the ESA satellite. The post goes into great detail about how each of the “stellar catalogues” was translated into sound, noting which parameters were attended to and how they were transposed — pitch to star brightness, for example, and volume to distance.
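
Just to make that kind of two-parameter mapping concrete, here’s a rough sketch in which brightness drives pitch and distance drives loudness. The frequency curve, the attenuation formula, and the two approximate example stars are my own assumptions, not Ferguson’s actual transposition.

```python
import numpy as np

SAMPLE_RATE = 44100

def star_tone(magnitude, distance_pc, duration=0.5):
    """Illustrative mapping: brighter (lower magnitude) means higher pitch,
    more distant means quieter. Not the mapping Ferguson actually used."""
    freq = 220.0 * 2 ** ((6.0 - magnitude) / 3.0)   # rough, arbitrary curve
    amp = 0.8 / (1.0 + distance_pc / 100.0)         # farther = softer
    t = np.linspace(0, duration, int(SAMPLE_RATE * duration), endpoint=False)
    return amp * np.sin(2 * np.pi * freq * t)

# Approximate values: Sirius (magnitude -1.46, ~2.6 parsecs),
# Polaris (magnitude ~1.98, ~130 parsecs)
clip = np.concatenate([star_tone(-1.46, 2.6), star_tone(1.98, 130.0)])
```

Write the resulting array out as a WAV file and you would hear the idea in miniature: two data dimensions becoming two audible ones.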

Audio originally posted at soundcloud.com/esa. More from Ferguson at jfergusoncompsci.co.uk.

Realtime Sonification

A KQED interview with Mahmoud Hashemi about Listen to Wikipedia

[Image: a still frame of the Listen to Wikipedia visualization]

Someone adds an entry about a cooking magazine from the 1950s? Boom …

Someone corrects the release date in the discography of James Taylor? Bleep …

Someone undoes a spelling correction in an entry about an undersecretary from a mid-century U.S. presidential administration? Bong …

Audible tones and expanding, colored circles are used in tandem to announce changes to the vast collaborative encyclopedia thanks to the great online tool Listen to Wikipedia (listen.hatnote.com), one of the best examples of realtime sonification on the web. Developed by Stephen LaPorte and Mahmoud Hashemi, it’s the subject of a short recent interview from radio station KQED. The conversation with Hashemi goes into the background of the tool. He talks about the software’s actions, and how it serves both as an expression of Wikipedia and as a response to the economic focus of Silicon Valley.

There’s something very pleasing and centering about the placid surveillance of Listen to Wikipedia, all that communal and often rancorous activity transformed into dulcet tones. Sometimes I just let it run on a side screen as I work. Sometimes I also run this pure geographic visualizer, at rcmap.hatnote.com:

[Image: the rcmap.hatnote.com realtime geographic visualizer]

Up at the top of this post is a sample still frame of Listen to Wikipedia in action. Here is an example of the sort of realtime information that Listen to Wikipedia parses:

[Image: sample realtime edit data of the sort Listen to Wikipedia parses]

This documentation summarizes how the sounds and related images of Listen to Wikipedia correlate with actual edits:

Bells indicate additions and string plucks indicate subtractions. Pitch changes according to the size of the edit; the larger the edit, the deeper the note. Green circles show edits from unregistered contributors, and purple circles mark edits performed by automated bots.
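
Read as a specification, those rules amount to a small dispatch over each incoming edit. Here’s a sketch under my own assumptions about the event’s field names and the pitch curve; it isn’t Listen to Wikipedia’s actual schema or synthesis code.

```python
import math

def render_edit(edit):
    """Sketch of the documented mapping: edit event in, sound and colour out.
    Field names and the pitch curve are assumptions, not the tool's schema."""
    instrument = "bell" if edit["size_change"] >= 0 else "string pluck"
    # Larger edits get deeper notes: pitch falls with the size of the change.
    magnitude = abs(edit["size_change"]) or 1
    pitch = max(36, 84 - int(8 * math.log10(magnitude)))
    if edit.get("is_bot"):
        circle = "purple"
    elif edit.get("is_anonymous"):
        circle = "green"
    else:
        circle = "white"  # assumption for registered, non-bot editors
    return instrument, pitch, circle

print(render_edit({"size_change": -250, "is_anonymous": True}))
# ('string pluck', 65, 'green')
```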

Here’s a short video of Listen to Wikipedia in action:

Track originally posted at soundcloud.com/kqed. The KQED story was produced by Sam Harnett, of the podcast The World According to Sound (theworldaccordingtosound.org). Check out Listen to Wikipedia at listen.hatnote.com. It’s also available as a free iOS app (itunes.apple.com).