This Week in Sound: Scream Studies + Ankle Eavesdropping + …

+ Voice Recognition + Why Gorillas Sing + More ...

A lightly annotated clipping service

If you find sonic news of interest, please share it with me, and (except with the most widespread of news items) I’ll credit you should I mention it here.

Scream Therapy: “Recently, however, I have witnessed two cinematic screams that were neither sexy nor Snow White-y, but instead guttural and visceral and bizarre — and so vulnerable that I felt like a bit of a creep watching them,” writes Rachel Handler. The screams are Meryl Streep’s in Big Little Lies (anything to get that theme song out of my head) and Florence Pugh’s in Midsommar. Here’s to more scream studies, an essential branch of sound studies.

Insert Spinal Tap Joke: This is the second experimental archeology story in as many weeks: “A diminutive model of Stonehenge could help crack the acoustic secrets of the ancient site, according to scientists who have built a version of the megaliths at a 12th of their size.” (via Trevor Cox)

Leg Up: A little-known fact about ankle monitors used by law enforcement: “officers wouldn’t just be able to track his location, as most electronic monitors do. They would also be able to speak — and listen — to him.” For context, the “him” in this example is a 15-year-old Chicago resident. (via Matthew Kenney)

Listen Up: After the “Belgian leak” brought renewed attention to the privacy issues surrounding voice assistants, Forbes weighed the weaknesses and norms within the system. On the one hand, “No part of this story indicates Google is listening surreptitiously to find out what people are saying.” On the other, “the fact that the leak occurred indicates data security for Assistant voice recordings is inadequate,” and: “Recording when the Assistant activates without hearing the wake-up command is a more serious problem.” You can put a piece of tape over your laptop’s camera, not so easily the microphones around you. Voice recognition, far less attended to by the press than is facial recognition, is a brave new territory, a story that is just getting started.

Let’s Buzz: “City officials in Philadelphia are under attack for their increasing use of an acoustic deterrent — described by a local councilwoman as a ‘sonic weapon’ — to keep the city’s children and young adults away from certain recreational areas at night.” This is the device known for years as “the Mosquito.”

Keeping Score: I’m kind of addicted to the detailed coverage of who is composing the music to which TV shows, movies, and (occasionally) video games. This week we learned that Dustin O’Halloran (of A Winged Victory for the Sullen) and Hauschka are scoring The Old Guard, adapting the Greg Rucka graphic novels (I kinda want Rucka’s Lazarus more, but hey, it’s something). Tyler Bates is scoring Primal, the highly anticipated forthcoming animated series from Genndy Tartakovsky (Samurai Jack, Star Wars: Clone Wars). Max Richter is scoring Temple, a UK medical drama. (And since I missed this last week: tomandandy are working on Lucky Day, a movie from Roger Avary, who worked on a lot of early Quentin Tarantino movies.) There’s a huge glut of video entertainment these days, and a good composer is as much a cue (so to speak) for me as to what to watch as are the writers, actors, or directors. Furthermore, in our current moment of streaming-entertainment overload, it’s clear the studios have better access to great mood-setting cinematographers and composers than to great writers (or they aren’t affording the writers time and resources to get the stories right). Even if the shows aren’t great, however, those scores are available to us to lend a soundtrack to our daily lives.

Monkey Business: “Gorillas sing and hum when eating, a discovery that could help shed light on how language evolved in early humans. … Singing seems to be a way for gorillas to express contentment with their meal, as well as for the head of the family to communicate to others that it is dinner time.”

Hall Mary: “The silence of this place used to fill me with joy. Now it’s all I hear.” So says the disillusioned, and at the moment sauced, priest in the first episode of the new, fourth season of the TV series Grantchester.

Reading around the web

Space Is the Place: Jason Richardson picks up the Robert Fripp blog quote from last week’s issue of This Week in Sound (“The primary factor in choosing a setlist is the performance space”), and connects dots back to Bach, then back further to Gregorian chant, then on back to the recent past in the form of Bertolt Brecht, eventually coming around to a consideration of the role of digitally simulated reverb in today’s music: “Given how those reverbs impart a famed character and can be used to connote an atmosphere, it seems like we’re getting back to writing music with specific ambiences in mind.” I love this idea.

Air Play: Dan Carr at Reverb Machine breaks down Brian Eno’s classic 1978 album Music for Airports into its constituent parts and talks in detail about how the album was recorded, including Eno’s employment of graphic scores, a detail of which appears above: “There are no real melodies present, and the voices occasionally form chords, but there is no discernible structure. This song is composed of seven loops, all of different lengths, with each loop playing back a single, sung note. In the graphic score, you can see Eno’s use of rectangles to represent looped tracks, with the spaces between them varying.” The Carr piece even includes loops if you want to play with them yourself. (via Robin Rimbaud on Twitter)
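The construction Carr describes — several loops of different lengths, each repeating a single sung note, slowly drifting in and out of alignment — is simple enough to sketch in a few lines. The loop lengths and note names below are illustrative assumptions, not Eno’s actual tape measurements:

```python
# Sketch of the tape-loop scheme Carr describes in "Music for Airports":
# each loop repeats one sung note at its own period; because the periods
# differ, the combined pattern takes a very long time to recur.
# Loop lengths here are hypothetical, chosen only for illustration.

loops = {"Ab": 17.8, "C": 20.1, "Eb": 23.5, "F": 25.9}  # period in seconds

def events(duration, loops):
    """Return sorted (time, note) onsets within `duration` seconds."""
    out = []
    for note, period in loops.items():
        t = 0.0
        while t < duration:
            out.append((round(t, 1), note))
            t += period
    return sorted(out)

for t, note in events(60, loops):
    print(f"{t:6.1f}s  {note}")
```

Run for a minute and the onsets already stagger unevenly; run for an hour and no two passes through the pattern line up the same way, which is the piece’s whole engine.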

Having Words: Tom Armitage goes into detail on “Building the world’s most advanced subtitling platform,” CaptionHub, which “allows teams to generate and edit captions inside a web browser, previewing them in a real-time editor.” Particularly interesting among the features that took hold: “adding a visible audio waveform on the timeline, generated by our encoder tool. This made cutting captions to video much easier — it was instantly possible to see speech starting or stopping, and mark the ins and outs of captions to match.”
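The waveform-on-the-timeline feature Armitage highlights boils down to downsampling the audio’s amplitude into a fixed number of buckets, one drawn bar per bucket. A minimal sketch of that reduction, with toy sample data standing in for a real decoder’s output (this is not CaptionHub’s actual encoder code):

```python
# Sketch of generating waveform peaks for a caption-editor timeline:
# reduce the audio to one max-amplitude value per bucket, so speech
# starting and stopping is visible at a glance.

def waveform_peaks(samples, buckets):
    """Max absolute amplitude per bucket, for drawing a timeline."""
    size = max(1, len(samples) // buckets)
    return [max(abs(s) for s in samples[i:i + size])
            for i in range(0, size * buckets, size)]

samples = [0.0, 0.1, -0.8, 0.3, 0.9, -0.2, 0.05, -0.4]  # toy audio data
print(waveform_peaks(samples, 4))  # → [0.1, 0.8, 0.9, 0.4]
```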

Feed Bag: And in this ongoing discussion of blogs, Patrick Howell O’Neill has a simple proposal: “reconsider something that feels lost in this era of algorithm-fueled newsfeeds and timelines: RSS.”

This is lightly adapted from the July 14, 2019, issue of the free weekly email newsletter This Week in Sound.

Disquiet Junto Project 0393: Mix Master

The Assignment: Make a new composition from your favorite parts of three of your previous recordings.

Each Thursday in the Disquiet Junto group, a new compositional challenge is set before the group’s members, who then have just over four days to upload a track in response to the assignment. Membership in the Junto is open: just join and participate. (A SoundCloud account is helpful but not required.) There’s no pressure to do every project. It’s weekly so that you know it’s there, every Thursday through Monday, when you have the time.

Deadline: This project’s deadline is Monday, July 15, 2019, at 11:59pm (that is, just before midnight) wherever you are. It was posted in the evening, California time, on Thursday, July 11, 2019.

Tracks will be added to the playlist for the duration of the project.

These are the instructions that went out to the group’s email list:

Disquiet Junto Project 0393: Mix Master
The Assignment: Make a new composition from your favorite parts of three of your previous recordings.

Step 1: Choose three pieces of music you recorded previously.

Step 2: From each of those three pieces, choose one element you particularly like.

Step 3: Create a new piece of music combining the three elements whose selection resulted from Step 2. (Additional elements may be added, certainly.)

Seven More Important Steps When Your Track Is Done:

Step 1: Include “disquiet0393” (no spaces or quotation marks) in the name of your track.

Step 2: If your audio-hosting platform allows for tags, be sure to also include the project tag “disquiet0393” (no spaces or quotation marks). If you’re posting on SoundCloud in particular, this is essential to subsequent location of tracks for the creation of a project playlist.

Step 3: Upload your track. It is helpful but not essential that you use SoundCloud to host your track.

Step 4: Post your track in the following discussion thread at

Step 5: Annotate your track with a brief explanation of your approach and process.

Step 6: If posting on social media, please consider using the hashtag #disquietjunto so fellow participants are more likely to locate your communication.

Step 7: Then listen to and comment on tracks uploaded by your fellow Disquiet Junto participants.

Additional Details:

Deadline: This project’s deadline is Monday, July 15, 2019, at 11:59pm (that is, just before midnight) wherever you are. It was posted in the evening, California time, on Thursday, July 11, 2019.

Length: The length is up to you. Shorter is often better.

Title/Tag: When posting your track, please include “disquiet0393” in the title of the track, and where applicable (on SoundCloud, for example) as a tag.

Upload: When participating in this project, post one finished track with the project tag, and be sure to include a description of your process in planning, composing, and recording it. This description is an essential element of the communicative process inherent in the Disquiet Junto. Photos, video, and lists of equipment are always appreciated.

Download: Consider setting your track as downloadable and allowing for attributed remixing (i.e., a Creative Commons license permitting non-commercial sharing with attribution, allowing for derivatives).

For context, when posting the track online, please be sure to include this following information:

More on this 393rd weekly Disquiet Junto project — Mix Master / The Assignment: Make a new composition from your favorite parts of three of your previous recordings — at:

More on the Disquiet Junto at:

Subscribe to project announcements here:

Project discussion takes place on

There’s also a Junto Slack. Send your email address to for Slack inclusion.

Image associated with this project adapted (cropped, colors changed, text added, cut’n’paste) thanks to a Creative Commons license from a photo credited to splityarn:

This Week in Sound: Experimental Archeology + Defiling Asimov + …

Sonic Fictions + Robert Fripp's set-list advice + more

A lightly annotated clipping service

If you find sonic news of interest, please share it with me ([email protected]), and (except with the most widespread of news items) I’ll credit you should I mention it here.

Choral Culture: “Experimental archeology at its finest.” That’s how Andrew Henry describes efforts to (re)experience chants in the sorts of ancient spaces where they were first performed. Henry hosts the Religion for Breakfast YouTube channel, and talks on a video for the 12tone YouTube channel about “How Music Shaped Roman Cities.” It’s less than seven minutes long, and well worth your time. Particularly interesting are observations about just how quiet life was before the invention of gun powder, how far such chants would travel in the relative silence of the era’s cities, providing a constant background sound to daily life: “Music would be heard hundreds of meters away as you went about your daily life.”

Cock Up: Put September 5 on your calendar. That’s when a French court will rule whether or not the now famous rooster Maurice is producing “abnormal noise.” Some background: “In 1995, faced with a similar case that led to a death notice being served on a cockerel, a French appeal court declared it was impossible to stop a rooster crowing. ‘The chicken is a harmless animal so stupid that nobody has succeeded in training it, not even the Chinese circus,’ that judgment said.”

Nuke Chords: Just to follow up on an item from two weeks ago about Hildur Guðnadóttir’s employment of nuclear-reactor field recordings for her score to the HBO series Chernobyl: the sound designers behind the video game Tom Clancy’s The Division 2 ventured to the infamous region to get audio for its production. The following article is a detailed overview of the audio development for the game, in particular about the use of “impulse responses” to provide a sense of different spaces: “In the exteriors the team had a system called ‘Bubblespace’, which constantly checks a player’s surroundings, and change the sounds of the ambience and reverbs based on where the player currently is. ‘For every single tree we have a specific leaves in wind sound (tied to the wind speed), as well as a chance to play D.C. specific bird-calls tied to the correct time of day.'”
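For readers new to the term: an impulse response is a recording of how a space reacts to a single sharp sound, and placing a dry signal in that space amounts to convolving the two. A toy version of that operation, with short made-up lists standing in for real audio (not the game’s actual engine code):

```python
# Minimal sketch of convolution reverb, the technique behind
# "impulse responses": each input sample triggers a scaled copy
# of the room's recorded response, and the copies sum.

def convolve(dry, ir):
    """Discrete convolution: out[n] = sum over k of dry[k] * ir[n - k]."""
    out = [0.0] * (len(dry) + len(ir) - 1)
    for k, x in enumerate(dry):
        for j, h in enumerate(ir):
            out[k + j] += x * h
    return out

dry = [1.0, 0.0, 0.5]        # a click, then a quieter click
ir = [1.0, 0.6, 0.36, 0.2]   # a made-up decaying "room" response
print(convolve(dry, ir))
```

Swap in a different impulse response — a stairwell, a forest clearing, a reactor hall — and the same dry sound lands in a different space, which is exactly the flexibility the developers describe.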

Bad Robot: You know how voice assistants don’t always understand what you’re saying? Well, apparently even when Alexa understands you’ve requested to delete your voice recordings, it reportedly doesn’t actually follow through entirely with your request. If true, this seems to mean that Alexa is breaking the second of Isaac Asimov’s Three Laws of Robotics: “Amazon last week confirmed that it keeps transcripts of interactions with Alexa, even after users have deleted the voice recordings. Based on reports that Amazon retains text records of what users ask Alexa, Sen. Chris Coons in May sent a letter to CEO Jeff Bezos, demanding answers.”

AVAS, Matey: To paraphrase Pavement, as I often do, “Sound scene is crazy / acronyms start up each and every day / I saw another one just the other day / a special new acronym.” Or at least new to me: “AVAS” stands for “acoustic vehicle alert system,” which means adding sounds to those vehicles that, due to the welcome retirement of internal combustion engines, no longer make the sounds to which humans have become accustomed. In the UK, AVAS has found a natural supporter in Guide Dogs UK, a charity for the blind and partially sighted. As mentioned here last week, BMW hired film composer Hans Zimmer (The Dark Knight, Inception) to make sounds for its latest future car. In the UK, some of the vehicular sounds apparently suggest the ghost of the BBC Radiophonic Workshop is haunting newer-model driving machines: “At the presentation, the transportation organization reportedly played six sounds. Welsman [a Guide Dogs UK representative] assessed that the sounds were ‘all very spaceshippy,’ and suggested the electric buses instead use audio recordings of the classic Routemaster buses. ‘As a blind person I could spot the old Routemaster a mile off, because it was so distinctive, but that’s not what they are suggesting.'”

Children’s Revolution: Are hand dryers damaging to the ears of children? “To investigate that question, Nora Keegan, the study’s author, spent more than a year taking hundreds of measurements in public restrooms throughout Calgary, her hometown.” The key factor in this story: Keegan is, herself, just 13 years old.

Bird Brains: In Emergence Magazine, both text and podcast, David G. Haskell has a beautifully written piece on the languages of birds, and the many reasons that humans can’t comprehend them: “The same sound vibration is received and understood in profoundly different ways by birds and mammals.”

Out-Bopped: “Researchers from Queen’s University in Northern Ireland discovered that human background noise disrupts how robins hear aggressive warning calls, which could lead to population declines in urban areas.”

Play Time: The current edition of the American Theatre website contains a plethora of articles about the role of sound design in theatrical productions. Particularly informative is a piece singling out a half dozen plays for sonic excellence.

Reading around the web

“The primary factor in choosing a setlist is the performance space,” writes guitarist Robert Fripp, now on tour for the 50th anniversary of his band, King Crimson. The blog post continues: “Only part of this is the acoustics. Each performance space / venue / auditorium has its particular spirit of place: churches, burlesque theatres, rock clubs, classical halls small and large; with performance and listening practices, determined mainly by the culture and history of the region. All these situated within the wider social / cultural traditions and conventions of the locality; and, in Italy, also the idiosyncratic nature of how the business works.”

Georgi Marinov has some concerns about cassettes: “I keep thinking about cassette tapes. Specifically about their environmental impact. … Not sure how well known is that tapes have a nasty habit of shedding after a couple of decades (as in the particles falling off the carrier tape to which they’re ‘glued’). All that dirt ends up on the cassette deck transport and it starts malfunctioning with otherwise healthy tapes.”

Much of what I read is and has always been science fiction, and I’m becoming something of an obsessive for genres of music invented for future and alternate realities, such as those in Malka Older’s three Centenal Cycle books, as well as Ramez Naam’s Nexus Arc books. I just started reading Fonda Lee’s Jade City, and came across this in the fifth chapter:

As he drove away from the Kaul estate, Hilo rested an arm out of the open window and drummed his fingers in time to the beat from the radio. Shotarian club music. When it wasn’t Epsenian jiggy — or worse, Kekonese classical — it was Shotarian club.

There’s also an excellent bit earlier on in Lee’s novel, about how the titular jade enhances the listening powers of those who are capable of not being driven mad by its influence.

This is lightly adapted from the July 7, 2019, issue of the free weekly email newsletter This Week in Sound.

The Virtue of Virtual Cables

Andrew Belt talked about the VCV Rack software at Stanford on July 3.

Over the past two years, a remarkable piece of free software has helped make modular synthesis widely available. The software is called Rack, from the company VCV, which like many small software firms is essentially a single person serving and benefiting from the efforts of a far-flung constellation of developers. Andrew Belt, who develops VCV Rack, this past week visited the San Francisco Bay Area from Tennessee, where he lives and works, to give talks and demonstrations. I caught his presentation at the Stanford University’s CCRMA department this past Wednesday, July 3. It was a great evening.

Belt spoke for an hour, starting at around 5:30pm, about the origins and development of VCV Rack, how it began as a command-line effort, and how then he went back to a blank slate and started on a GUI, or graphical user interface, approach. That GUI is arguably what makes VCV Rack so popular. Rack provides emulations of synthesizer modules that look just like actual physical modules, including virtual cables you drag across the screen, much as you’d connect an oscillator and a filter in the physical world. The occasion of his visit is the release of version 1.0 of VCV Rack, following an extended beta honeymoon. He covered a lot of material during the talk and subsequent Q&A, and I’m just going to summarize a few key points here:

He talked about the “open core” business-model approach, in which the Rack software is free and open source, and how third parties (and VCV) then sell new modules on top of it. (This is a bit like a “freemium,” the difference being that the foundation here is open source.)

Belt went through various upcoming modules, including a “timeline” one, a “prototype” one, a “video-synthesis” one, a DAW-style “piano roll,” and one that is a bitcrusher emulating super low-grade MP3 encoding. He didn’t mention which existing synthesizer module companies are due to port theirs over to Rack, and no one asked, likely because, this being CCRMA, the conversation went much deeper into the DSP (digital signal processing) weeds — which was great, even if 90% of that material was way over my head. He showed tons of examples, including how the new polyphony (up to 16 voices) works.

There was a great moment midway through the talk. Belt was discussing the employment of a type of synthesis in Rack called FM synthesis, and he asked if anyone in the audience could remind him who had first developed FM synthesis. One of the senior CCRMA professors chimed in and explained that we were all in this room precisely because of FM synthesis: CCRMA was funded for many years thanks to profits on the patent for FM synthesis, which was developed by Stanford professor John Chowning. FM synthesis was what made the Yamaha DX7 synthesizer a massive success in the 1980s. For many years to follow, Chowning’s FM synthesis patent was, reportedly, the single most profitable patent in all of Stanford’s existence. After drinking in the impromptu history lesson, Belt pulled up a DX7 emulation in Rack. Someone in the audience noted how things come full circle.
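Chowning’s core idea fits in one equation: a modulator sine wave varies the phase of a carrier sine wave, spraying sidebands around the carrier at multiples of the modulator frequency. A toy rendering of that formula, with illustrative (not DX7-accurate) frequencies and modulation index:

```python
# Toy Chowning-style FM synthesis:
# y(t) = sin(2*pi*fc*t + index * sin(2*pi*fm*t))
# where fc is the carrier, fm the modulator, and `index` controls
# how bright (sideband-rich) the tone is. Values here are illustrative.
import math

def fm_sample(t, fc=440.0, fm=220.0, index=2.0):
    """One sample of a frequency-modulated sine at time t (seconds)."""
    return math.sin(2 * math.pi * fc * t + index * math.sin(2 * math.pi * fm * t))

sr = 8000  # sample rate in Hz
signal = [fm_sample(n / sr) for n in range(sr)]  # one second of audio
print(min(signal), max(signal))
```

Sweep `index` from 0 upward and the tone morphs from a plain sine into the metallic, bell-like timbres the DX7 became famous for — all from two oscillators, which is why the technique was so cheap to put on a chip.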

I highly recommend giving VCV Rack a try. It’s available at

This is lightly adapted from the July 7, 2019, issue of the free weekly email newsletter This Week in Sound.

This Week in Sound: Heartbeat Surveillance + Classical Metadata + …

A lightly annotated clipping service

This is lightly adapted from the June 30, 2019, issue of the free weekly email newsletter This Week in Sound. I don’t usually wait a full week to post the material, but the July 4 holiday messed with my schedule. The July 7 issue of This Week in Sound went out a few minutes ago.

If you maintain a blog related to music and/or sound, please reply to this email to let me know. Thanks. Some recent favorite posts:

The bass player Steve Lawson ponders the pluses and minuses of making album-length recordings: “I just want to keep making the music that matters to me. And the few hundred people I need to be interested in what I’m doing in order to make it viable are statistically insignificant in terms of the wider music industries.”

Ethan Hein on developing an introductory course to music theory: “If you read this blog, you know that I take a dim view of traditional music theory pedagogy, which tends to present the aesthetic preferences of Western European aristocrats of the 17th and 18th centuries as if they’re a universally valid and applicable rule system.

This isn’t quite a blog, but Susanna Caprara, who goes by La Cosa Preziosa, has a great monthly newsletter that’s called The Secret Soundscape Club, and that’s what it’s about. It’s wonderful. (She also has a blog.)


Beat Surrender: Your heartbeat has a signature, and the Pentagon has developed technology to read it with a laser from a distance. “The system is 95 percent accurate and can be used at distances of at least 200 meters, making them useful at locations such as military checkpoints.”

Space Music: Brian Eno is now the name of an asteroid. The day prior to the announcement, he earned the Stephen Hawking Medal.

Cloud Cover: Bitcoin may be the “native currency” of the internet, but its diehards are looking for redundant forms of communication, should the internet fail them, so that their accounts are always accessible. They are now tapping satellites and even ham radio to do their bidding.

Speak & Teller: There’s a new version of the board game Monopoly that comes with a “voice-controlled AI” that manages players’ finances and transactions.

Blast from the Past: “A long time ago, in a galaxy far, far away, something mysterious launched a burst of radio waves into the cosmos. Last September, that powerful pulse collided with an array of radio telescopes in the western Australian Outback. Though the fleeting barrage lasted mere milliseconds, scientists were able to trace the radio burst back to its source: A galaxy roughly four billion light-years away.”

Mic Drop: ProPublica and Wired reported on “an aggression detector that’s used in hundreds of schools, health care facilities, banks, stores and prisons worldwide, including more than 100 in the U.S.” The tool uses sound as a detector of suspicious activity. “Yet ProPublica’s analysis, as well as the experiences of some U.S. schools and hospitals that have used Sound Intelligence’s aggression detector, suggest that it can be less than reliable.”

Bat Mobile: Now that cars can be nearly silent, due to electric and hybrid engines, they require sounds to be added. BMW has tapped composer Hans Zimmer (Inception, The Dark Knight) for its forthcoming BMW Vision M NEXT concept car.

Right Stuff: The song of the North Pacific right whale has long eluded researchers, until now, reports Smithsonian Magazine. Why might this scarce species of whale sing? Same reason humans generally do: “the rarity of the whales has led to the animals becoming more vocal to find mates.” (via Subtopes)

Rock Lobster: It’s been said that the Chinese-American dish known as chop suey helped keep miners from getting scurvy. Madeline Leung Coleman argues persuasively that Chinese food later fueled West Coast punk rock. (via the NextDraft newsletter)

Within the Context of No-Context, Part 358: The information associated with streaming classical music, such as conductor, composer, and performers, is often found to be lacking: “critics of the status quo argue that the basic architecture of the classical genre — with nonperforming composers and works made up of multiple movements — is not suited to a system built for pop,” writes Ben Sisario. It’s worth noting that the system fails pop music, too. Streaming services often leave out the record label, liner notes, band members, and song/production credits. Just this past week I was trying to find a 1996 album from the Lo Recordings label of collaborations (it’s titled Collaborations) that teamed pairs of musicians, only to find that Google Play Music leaves out half of each pair in the track listing.

Summer School: Ableton, the German software-development company behind the widely used digital audio workstation called Live, launched a fun web-based tool for teaching the basics of audio synthesis.

Concierge DJ: A year or so ago while searching for a hotel in New York City, I selected the Ace because it provided a free loaner guitar to anyone staying the night, and I wanted to be able to practice while away from home. Were I to stay at the lower Manhattan hotel Sister City, created by the folks behind Ace, I might never even spend much time in my room, as musician Julianna Barwick has created a score for its lobby, and it’s interactive. Lobby music is the new elevator music.

Psych Out: This recent research may not bode well for my mental future, but let’s wait until its findings have been verified several times: “A machine-learning method discovered a hidden clue in people’s language predictive of the later emergence of psychosis — the frequent use of words associated with sound.” (via Jason Richardson)