This Week in Sound: A Sonic Health Exam from 1857

A lightly annotated clipping service

Note: I’m on vacation this week, so there may not be a TWiS email on Friday, November 25th.

These sound-studies highlights of the week originally appeared in the November 22, 2022, issue of the free weekly email newsletter This Week in Sound:

FINANCIAL HEALTH: In “Coin-Sound, or the Bruit d’Arain of Armand Trousseau,” Dr. Jesse Kraft describes a sonic “diagnostic test” involving coins “to determine whether or not an individual suffers from a punctured lung.” Here’s some detail:

“[A] coin is held flat against the side of the patient’s chest that is thought to be punctured, and tapped with a second coin. … With a stethoscope on the direct opposite side of the patient, if there is fluid or air in the pleural cavity, the practitioner will hear a sound resonate, as opposed to quickly mute. … The sound itself is not produced by the pressure of the air or fluid that has entered the pleural cavity, nor is it the sound from the coins themselves. Rather, the sound comes from tension that is created on the bounding walls of the pressurized cavity.”

The Trousseau of the title is the individual credited with having first “observed and described coin-sound,” around 1857. Trousseau (1801-1867) called it “bruit d’arain,” which translates as “brazen noise.” Kraft, who earned a PhD in Americana Studies at the University of Delaware in 2019, is the Resolute Americana Assistant Curator of American Numismatics at the American Numismatic Society. (Thanks, Mike Rhode!)

WAYNE MANOR-ISMS: When I worked in Japanese publishing, my duties and natural inclination involved manga, but I collaborated regularly with the anime side of the business. One thing that always struck me was — due to the industry’s prominence in its country of origin — just how well known the Japanese voice actors, stars in their own right, were. American anime fans — and more broadly animation fans — have steadily raised the profiles of voice actors here, even if few have achieved the national name recognition of their Japanese counterparts (putting aside movie and television stars who are hired by studios like Pixar to lend familiar voices to animated roles). One individual who stood high on the list of major talents was Kevin Conroy, who died earlier this month at age 66. Conroy portrayed Batman for 30 years in TV series, feature-length animated films, and video games, starting in 1992 with Batman: The Animated Series. As James Whitbrook notes, “Conroy even went on to play a live-action version of Bruce Wayne in the CW DC TV show crossover event Crisis on Infinite Earths.” (And he was great in it.)

ACT NATURALLY: “BookBaby, one of the leading players in the audiobook segment, announced it has entered into a collaboration with Speechki to create audiobooks using artificial intelligence-powered synthetic voice narration. … Speechki said they support 77 languages at the moment along with up to 50 synthetic voice actors.”

BAD ROBOT: The FCC has a plan to deal with “ringless voicemail spam” that goes straight to one’s voice mailbox. Writes Jon Fingas: “The Federal Communications Commission has determined that these silent voicemails are covered by the same Telephone Consumer Protection Act (TCPA) rules that forbid robocalls without consent.”

ENTRY LEVEL: Now YouTube has its own start-up cue, or sound logo, developed by the agency Antfood: “The initial idea behind the sound was to have something vibrant, engaging and easily recognizable, so that as soon as you hear it – even if you’re turned away from your TV or device – you know that something’s about to pop up on YouTube.” There’s more detail about the process at the official blog of YouTube in a post by Andrew Lebov.

DEAD RINGER: We’ve pretty much all seen some thriller where a dead person’s eye or fingerprint is used to help the hero (or villain!) access something important. Real life has caught up with fiction, and as is generally the case, things aren’t anywhere near as easy as they seem. In fact, quite the contrary. Allison Engel writes on the difficulty that loved ones have accessing the accounts of their dead relatives: “Face recognition, voice recognition and fingerprint recognition speed up access when someone’s alive but present tremendous barriers for survivors trying to wind down accounts.” (You can read it for free, thanks to my gift link.)

VOLUME CONTROL: Spotify has continued to broaden its scope by adding audiobooks and podcasts to its app, making the service about more than “just” music. “Now, Spotify is rolling out an update to the dedicated Anchor app on iPhone with a new feature it says can drastically improve the audio of your podcast with just one click,” writes Chance Miller. It’s called “Podcast Audio Enhancement” and it can “reduce background noise and level your audio – supposedly so much so that podcasts can now be ‘recorded in a loud coffee shop, on the subway, or with babies crying in the background.’”

BAD VIBES: Our phones can sense a bridge span’s “unique vibrations” and help reveal “hidden structural problems,” writes Matt Simon. (Thanks, Glenn Sogge!)

Every bridge has its own “modal frequency,” or the way that vibrations propagate through it—then subsequently into your car and phone. (Tall buildings, which sway in the wind or during an earthquake, have modal frequencies too.) “Stiffness, mass, length—all these pieces of information are going to influence the modal frequency,” says Thomas Matarazzo, a structural and civil engineer at MIT and the United States Military Academy. “If we see a significant change in the physical properties of the bridge, then the modal frequencies will change.” Think of it like taking a bridge’s temperature—a change could be a symptom of some underlying disease.
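The modal-frequency idea described above can be sketched in code. This is not the MIT team’s actual pipeline — just a minimal, hypothetical illustration of the underlying signal processing: take a vibration trace (say, from a phone’s accelerometer), compute its spectrum, and read off the tallest peak. All names and parameter values here are invented for the example.

```python
import numpy as np

def modal_frequency(accel: np.ndarray, sample_rate: float) -> float:
    """Estimate the dominant vibration frequency (Hz) from an acceleration trace."""
    # Remove the DC offset so the zero-frequency bin doesn't dominate.
    accel = accel - accel.mean()
    # The tallest peak in the magnitude spectrum approximates the
    # structure's fundamental modal frequency.
    spectrum = np.abs(np.fft.rfft(accel))
    freqs = np.fft.rfftfreq(len(accel), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]

# Synthetic example: a 2.1 Hz "bridge mode" buried in noise,
# sampled at 100 Hz for 60 seconds.
rate = 100.0
t = np.arange(0, 60, 1 / rate)
signal = np.sin(2 * np.pi * 2.1 * t) + 0.5 * np.random.default_rng(0).normal(size=t.size)
print(round(modal_frequency(signal, rate), 1))  # ≈ 2.1
```

In this framing, “taking the bridge’s temperature” amounts to tracking that peak over time: a drift in the estimate flags a change in stiffness or mass worth inspecting.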

ALL HANDS: “Microsoft has made it easier for users of its video conferencing platform Microsoft Teams to use sign language through a new meeting experience called ‘Sign Language View.’”

This Week in Sound: “Extending the Musical Worlds of the Films”


A friend asked how I can tell when the newsletter is going well. I mentioned how cool it’d be if I had an issue where every recommended This Week in Sound item came from a reader. That, as it turns out, is this issue (which apparently came close to maxing out Substack’s allowed length). Thanks, folks!

These sound-studies highlights of the week originally appeared in the November 15, 2022, issue of the free weekly email newsletter This Week in Sound:

CLIMATE MEDIATION: More from Karen Bakker (mentioned here in recent weeks), supporting her recent book, The Sounds of Life: “Digital technology is so often associated with our alienation from nature, but I wanted to explore how digital technology could potentially reconnect us, instead, and offer measured hope in a time of environmental crisis.” (Thanks, Jason Richardson!)

MUFFLIATO!: I’m not a Potterite by any means, but I am certainly fascinated by the hold those stories have on people. An article (“‘A Magic Beyond All We Do Here’: Musical and Sonic Worldbuilding at Harry Potter Tourist Attractions”), by Daniel White, looks at four in-person spinoffs of the books (a concert series, a studio tour, the Universal Orlando tourist destination, and the Cursed Child theatrical play) for how they use “music and sound in distinct ways, drawing on or extending the musical worlds of the films, or creating worlds of their own.” It includes this interesting chart about different types of experiences — as I understand it, originally from The Experience Economy, by B. Joseph Pine II and James H. Gilmore. (Thanks, Mike Rhode!)

CLOUD ATLAS: This project looks unlikely, at the moment, to get funded, but it’s an admirable attempt to translate the beauty, the presence, of clouds for those lacking sight. The creator hopes for funds to “build a working prototype of a handheld device called a cloud scanner which reads clouds and converts the signal into sound which is then converted to haptic signals which can be felt.” (Thanks, Daniel Weir!)

CAPTION CRUNCH: “Television today is better read than watched,” writes Matt Schimkowitz: “Huge scores and explosive sound effects overpower dialogue, with mixers having their hands tied by streamer specs and artist demands. There is very little viewers can do to solve the problem except turn on the subtitles.” (Thanks, Rich Pettus!)

This Week in Sound: He Used to Bite His Music Boxes


These sound-studies highlights of the week originally appeared in the November 8, 2022, issue of the free weekly email newsletter This Week in Sound:

THE WHOLE TOOTH: Robert Friedman, the owner of a piano that once belonged to Thomas Edison, spoke with NPR’s Scott Simon about its bite marks — which are reputed to be those of Edison himself. (Thanks, Rich Pettus!)

Friedman: He used to bite his music boxes, and he bit his piano. …

Simon: I’m trying to imagine anyone, much less Thomas Edison, with their mouth clamped on a piano.

Friedman: The sensation is amazing. It goes up through your skull, your head resonates like a tuning fork. It’s an amazing feeling. It goes through your shoulders, but you get the true vibration of the instrument, and you hear the piano equal, if not better, than if you just hear it through your ears.

HEY BALE: Mark Gurman, Bloomberg reporter, foresees Apple simplifying the voice command for Siri:

“The company is working on an initiative to drop the ‘Hey’ in the trigger phrase so that a user only needs to say ‘Siri’ — along with a command. While that might seem like a small change, making the switch is a technical challenge that requires a significant amount of AI training and underlying engineering work. … The complexity involves Siri being able to understand the singular phrase ‘Siri’ in multiple different accents and dialects. Having two words — ‘Hey Siri’ — increases the likelihood of the system properly picking up the signal.”

FIELD’S RECORDING: Details on the sound design of Todd Field’s new film Tár, starring Cate Blanchett as a conductor-composer: the director wants the audience “to feel [Hildur Guðnadóttir’s score] but almost not hear it”; Blanchett’s title character has misophonia (“which means she’s very sensitive to certain sounds”); omnidirectional microphones were used to record the symphony orchestra, “leaving the sound team more than 50 tracks to work with.”

QUIET TIME: Check out this gallery, on the Dezeen (as in “design”) website, of 10 different “noise-regulating acoustic products for communal interiors.”

This is a product shot of a long table in a stark, white, modern office, each seat with its own computer, and above them the felt lamp shades that are the focal point of the image.
Felt Up: The Fost Bulb PET Felt acoustic lamp from De Vorm “provides both sound dampening and illumination.”

Included in the Dezeen feature are office booths, felt light shades, a sensory nook (called the “Sensory Nook,” natch), panels, and more.

TAP DANCE: “After continually deflecting accusations that it surveilled droves of politicians and journalists using invasive phone-tapping software, the Greek government has decided to ban the sale of spyware altogether. But the government also wants everybody to know that this is in no way an admission of guilt and that it definitely didn’t do anything wrong” — is how Gizmodo sums up a recent scandal.

WATER TORTURE: “In the US border town of Niagara Falls, residents accustomed to the soothing roar of the famous waterfalls recently discovered a much less pleasant sound: the ‘haunting hum’ of bitcoin mining farms.” The miners were reportedly attracted by the area’s “cheap hydroelectric power.” Comments from residents:

“It’s very mentally daunting. It’s like having a toothache for 24 hours a day every day.”

“I get four hours of sleep, maybe, because of that constant noise.”

“I’m going to be protesting till the hum is gone, basically, till I get the roar of the falls back because that’s what I used to hear.”

VISUAL HEARING AID: A great multimedia piece in the Washington Post explores what hearing loss is like — and it does so by visualizing the experience. The article is credited to Amanda Morris, a reporter, and Aaron Steckelberg, a “senior graphics reporter” (what a cool job), who did the visuals. No subscription necessary to see it, as I can share this gift link. Audiograms and other graphic aids tell the story, such as how the siren is situated on the chart reproduced below.

This is a chart from The Washington Post. It depicts a bird's eye view of the side of a building, and in front passes a vehicle. Overlaid are diagrams showing relative pitch and volume. A circle marks the vehicle as being quite loud and towards the middle of the pitch zone, as it were.
Freq Out: This example of a Washington Post chart situates the pitch and volume of a passing siren

The horizontal axis “maps the pitches that are audible to your ears, from low-pitched sounds, shown on the left, to high-pitched sounds, shown on the right.” The vertical axis is the volume level in decibels.

AUTO MOTIVE: SlashGear’s Alistair Charlton is not excited about the broadening array of voice recognition systems in cars: “[C]ar manufacturers’ own voice recognition systems? They’re less than stellar. These are often summoned by saying ‘hey’ and the vehicle manufacturer’s name. … Siri is a made up name, and no one goes about their day saying ‘Okay Google’ unless they want to talk to the Google Assistant. But in the car? You’re quite likely to mention the brand of the vehicle you’re in when talking to a passenger, or when you see another one out on the road. Before you know it, your music is muted and the car is listening when you don’t want it to. We might forgive all this if car manufacturers made decent voice assistants, but it’s the tech firms who have the upper hand here. Please, automakers of the world, stick to Alexa, Google Assistant or Siri and leave it at that.” (Just as a side note: I’m not remotely likely to say the name of my car to a passenger, but I’m not much of a car person.)

This Week in Sound: Weaponized Rickrolling


These sound-studies highlights of the week originally appeared in the November 1, 2022, issue of the free weekly email newsletter This Week in Sound:

VIRAL HIT: “Spreading Deadly Pathogens Under the Disguise of Popular Music” is the catchy title of the article in question (read the PDF). Its authors designed music that can trick the sensors in a biolab into leaking hazards. The resulting music triggers resonant-frequency thresholds used in the safety system. (That’s my poor paraphrase.) This is, in a manner of speaking, weaponized rickrolling.

This is a flow chart depicting, in a series of steps, how the attack described in the article might occur.
That’s Entertainment: From the pop charts to a flow chart

“The attacker selects music and inserts segments of resonant frequencies within the music … using a software named Adobe Audition. Though someone who has listened to the music many times before may identify the change in the music, the vast majority of people will either be oblivious of the change or will incorrectly ascribe the change in the music to a speaker issue.” The song used as an example? “Hello” by Adele. The authors are a professor (Mohammad Abdullah Al Faruque) and two researchers (Anomadarshi Barua and Yonatan Gizachew Achamyeleh) from the University of California, Irvine. (Found via Geoff Manaugh. The original image, from which this one was extracted, was published with a Creative Commons Attribution International 4.0 License.)
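To make the quoted attack description a little more concrete: what “inserting segments of resonant frequencies within the music” amounts to, at its simplest, is mixing a low-level sine tone into a stretch of an audio signal. The paper’s authors used Adobe Audition; the sketch below is my own hypothetical approximation in NumPy (the function name, parameters, and values are all invented for illustration), not the researchers’ tooling.

```python
import numpy as np

def embed_tone(audio: np.ndarray, sample_rate: float,
               tone_hz: float, start_s: float, dur_s: float,
               level: float = 0.05) -> np.ndarray:
    """Mix a quiet sine segment into an audio track (illustrative only).

    A low `level` keeps the tone near-inaudible against the music,
    which is why listeners might ascribe it to "a speaker issue."
    """
    out = audio.copy()
    i0 = int(start_s * sample_rate)
    i1 = min(i0 + int(dur_s * sample_rate), len(out))
    t = np.arange(i1 - i0) / sample_rate
    out[i0:i1] += level * np.sin(2 * np.pi * tone_hz * t)
    return out

# Embed a hypothetical 170 Hz "resonant" tone into 10 seconds of audio,
# starting at the 3-second mark and lasting 2 seconds.
rate = 44100.0
track = np.zeros(int(10 * rate))  # stand-in for a real song
doctored = embed_tone(track, rate, tone_hz=170.0, start_s=3.0, dur_s=2.0)
```

The point of the example is only the mechanism: the tone occupies a short window of the track, and everything outside that window is untouched.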

EUPHONIA 2.0: Dorothy R. Santos notes, on Slate, that “conversational feminine A.I. are embedded in many of our visions of the future.” This phenomenon long predates the contemporary virtual companions named Alexa, Siri, and Cortana (“a quasi-phalanx of helpful, cheerful A.I. women, ever-ready for our commands”). Santos notes examples from Joseph Faber’s Euphonia (“a mid-19th century analog voice synthesizer”), to the 1999 Disney movie Smart House, to the (excellent) BBC series Humans. And she recounts all this in the process of responding to Ysabelle Cheung’s science fiction short story “Galatea.” Writes Santos: “Her story provokes us to contemplate what women might desire from feminine A.I. figures originally programmed to please, entice, and serve a male-coded user. In Cheung’s ‘Galatea,’ the female characters, human and A.I. alike, model irreducible nuance in their utterances and speech, despite being programmed (digitally) by their creators and through (analog) social forces, gendered expectations, and norms.”

BIG D̶A̶T̶A̶ EARS: McGill University professor Jonathan Sterne, author of MP3: The Meaning of a Format, and colleagues ask “Is Machine Listening Listening?” (Available as a PDF.) That question contains other questions: “[W]hat do the researchers who build machine listening systems think they do? What do the corporations and states who deploy them think they are doing? Do their users treat their listening machines as listeners?”

BACKGROUND NOISE JUNGLE: “Noise in preschools primarily affects speech intelligibility. Linguistic information is [lost] because of background noise masking, which has a particular impact on young children, children with a first language other than the language of instruction and children with language deficits” — a report from Acoustic Bulletin, a publication of Ecophon, which “manufactures and markets acoustic panels, baffles and ceiling systems.”

This graph shows how as the volume in a room rises, so too does the average heart rate of the people in the room.
Quiet Time: This graph shows how as the sound pressure level, in decibels (horizontal axis), rises, so too does the heart rate of individuals present (vertical axis).

The key recommendations are: (1) reduction of group size, (2) introduction of sound absorbing measures, and (3) an “activity-based” approach to room design. (Image from the original Acoustic Bulletin article.)

LOWERING THE BAR: “With the once hyper-active clubs forced into ‘silence’, party lovers from the city remain disappointed with the ban on loud music after 10 pm. Not all of them were comfortable with the idea of pubs handing out headphones if patrons requested loud music.” Noise pollution laws take effect in Hyderabad, India.

IN C: “Nightmares Can Be Silenced With a Single Piano Chord, Scientists Discover,” via Science Alert: “A study conducted on 36 patients diagnosed with a nightmare disorder showed that a combination of two simple therapies reduced the frequency of their bad dreams. Scientists invited the volunteers to rewrite their most frequent nightmares in a positive light and then play[ed] a sound associated with positive experiences as they slept.” And the piano chord in question, in case you’re wondering, is C69. (Thanks, Glenn Sogge!)

HI-DEF JAM: Blind media artist Andy Slater talks about how his sensory impairment “informs” his work: “I realized that with the screen reader on my iPhone, there were some recording apps that actually were accessible. If you got microphones to plug into phone, you could do hi-def field recordings. That’s when I started recording in different spaces. I became more aware of my surroundings, and was finally able to capture the echolocation and sound in a room that lets you know how big the room is, what might be in the room, what the floor, the walls, or the ceiling are made of.” (Via

This Week in Sound: “Deepfake Birds” & “Oenesthesia”


These sound-studies highlights of the week originally appeared in the October 25, 2022, issue of the free weekly email newsletter This Week in Sound:

PUT A CORK IN IT: “[In Charles] Spence and Janice Wang’s 2017 study, 140 tasters with a range of wine expertise were asked to rate a pour. After hearing the sound of a cork popping, their quality ratings went up 15% and their celebratory ratings rose 20% — even though they were drinking the exact same sparkling.” Spence heads the Crossmodal Research Laboratory at Oxford, and Rebecca Deurlein at Wine Enthusiast looks into how sound influences taste. “As multisensory and experiential wine research continues, the terms ‘sonic seasoning’ and ‘oenesthesia’ have entered scientists’ conversations.”

FLIGHT CLUB: Brian Eno explains to Wired interviewer Sophie Charara that some of the birds heard on his new album, ForeverAndEverNoMore, are, in fact, faked — or, to use a more current term, deepfaked: “I just listen to bird sounds a lot and then try to emulate the kinds of things they do. Synthesizers are quite good at that because some of the new software has what’s called physical modeling. This enables you to construct a physical model of something and then stretch the parameters. You can create a piano with 32-foot strings, for instance, or a piano made of glass. It’s a very interesting way to try to study the world, to try to model it. In the natural world there are discrete entities like clarinets, saxophones, drums. With physical modeling, you can make hybrids like a drummy piano or a saxophone-y violin. There’s a continuum, most of which has never been explored.”

DR. WHOOSH: I love local news. I love hyper-local news. I love hyper-local news about noise. I love hyper-local news about noise that seems to suggest there was a mysterious noise that caused substantial confusion (“Residents living around the Croydon Flyover spent much of the weekend wondering what the eerily strange, out-of-this-world noise was coming from one of the new-build towers”) only, upon solving the mystery, to clarify maybe not (“Staff at the site say that they received only half a dozen or so calls about the noise, and that they apologise for the inconvenience”). Says one resident of the location where the noise originated: “It almost sounded like it might have been a helicopter that had landed, maybe the air ambulance. But it just went on and on, a whooshing noise, all night long.” Another witness suggests it “sounded like something off Doctor Who, when the aliens land.” Those investigating the situation did learn at least one thing in the process: “the council no longer has a 24-hour reporting line for noise pollution, despite having a phone number on their online form.” And here’s what appears to be the final word: “Investigations by Inside Croydon have found that a fire alarm in Kindred House had been set off inadvertently. Sources at the site suggest that it could have been something as innocent as a pigeon getting into the building.”

CRUNCH TIME: Coverage of a panel discussion about the development of a sonic logo for Tostitos, the popular snack food: look past generic buzzwords like “authentic” (Tostitos? “Authentic”?), and note both some research-informed self-awareness on the part of the parent company, Frito-Lay (which “found evidence that consumers go for their dip first and the ‘carrier,’ or chip, second”), and some significant usage data (“Ads with sonic branding elements see an uplift in attention by 8.5 times those that don’t” and “[T]here was a 38% increase in brand recall stemming from the audio addition”) — all with a sound element that lasts barely 1.5 seconds.

GAME ON: Even casual video games benefit from considered sound design, according to Azur, the studio that created such titles as Stack Ball, Worms Zone, and Bottle Jump 3D: “[T]he company has found out that over 50% of hyper-casual gamers play with the sound turned on.” It’s also refreshing to hear such practical concerns leavening the user-data analysis: “The sound design in games should be practical first, but at the same time, you shouldn’t forget about the artistic value. This is the main challenge of working in game development: finding a sound that complements the gameplay and doesn’t annoy the players after they hear it for a few hours.”

REMEMBER THE AL-GORITHM: “The Texas attorney general filed a privacy lawsuit against Google on Thursday, accusing the internet company of collecting Texans’ facial- and voice-recognition information without their explicit consent.” The law has been on the books since 2009. “Until this year, Texas” — which must, per the legislation, sue on the behalf of consumers — “had not enforced its law.” This year is an election year, and the attorney general is up for re-election.

LIFE LINE: “[S]ound is a universal and perhaps older mechanism of communicating information in nature than sight. When life evolved on Earth, before creatures had eyes, they had cilia. Cilia are essentially one of the major mechanisms that are used to send sound. If you think about it, it makes perfect sense. There’s a great evolutionary advantage to being sensitive to other creatures and sound as a primordial way of conveying that information.” That’s from an interview with Karen Bakker about her new book, The Sounds of Life: How Digital Technology Is Bringing Us Closer to the Worlds of Animals and Plants.