Sound Course, Week 2 of 15

A Brief History of Listening


Each week I summarize the lecture and discussion from my course on the role of sound in the media landscape. In cases where I’ve already documented the discussion fairly thoroughly, as with week two, I’ll link to the full summary, and do a more concise one here.

The second week of sound class is the first full lecture; the first week combined an extensive overview of the syllabus with a compact run through the use of music in the work of J.J. Abrams, from the “un-theme” of Lost’s opening credits to the highly “originalist” (“ur-theme”) adherence to John Williams’ modus operandi in Star Wars: The Force Awakens.

The second week’s lecture takes a long view. Titled A Brief History of Listening, it covers, in less than three hours, some 200,000 years of human development: physiological (the development of hearing and speech), technological (from homing pigeons to moveable type to recorded sound), and cultural. The cultural facet focuses on two subjects. The first discussion is about how Socrates’s anxiety regarding the move from oral to written culture can be mapped to contemporary concerns about transitioning into a digital world. The second discussion is on John Cage’s 4’33”: the work’s conception and reception, the idea of an anechoic chamber, and the way Cage connects, in his book Silence, the ideas inherent in 4’33” beyond music to architecture and sculpture.

As I state occasionally in the early weeks of this course, I’m not trying to convert students to work in sound full time. I don’t need a single student ever to decide to go into sound design or sound engineering to feel that I’ve accomplished something. Quite the contrary, I’m trying to develop sleeper agents who will bring a creative conscientiousness in regard to sound to whatever field they choose to pursue — art direction, design, and so forth.

The big challenge early on in the course is shepherding the students’ off-site work, specifically in the sound journals they’re required to maintain, four days a week, for the full length of the course. For the first entries I ask that they simply list the sounds around them. Inevitably these come back not as sounds but as sources of sounds: door, not door creaking; fan, not fan whirring; baby, not baby cooing. Moving from source to sound, from sound to description, from description to meaning is where we’re headed. It can be painstaking, but learning about sound is like learning a language or achieving a significant improvement in an athletic pursuit. It’s all about dedication and persistence. It’s about practice.

Today’s class (week 3, more on which in next week’s This Week in Sound newsletter) narrowed the scope: last week covered 200,000 years; this week covered just about 100, as the subject was the role of sound in film and television. The timing turned out to be fortuitous, because I was just approached by an organization to give a talk about the past and future of sound in film, and I’m now piecing together an approach for the talk. Here’s a first-draft summary:

Eyes are forgiving, ears less so. Eyes want to be seduced. Ears are sensitive to incongruity, discontinuity, artifice. How can sound reinforce narrative? How can sound be narrative? How can sound design serve as score? We’ll explore the past and the technologically enabled promise of film sound.

And, yeah, when I say “promise” I’m using alliteration as a way to get out of saying “future.” More on this as it comes together.

This first appeared, in slightly different form, in the February 17, 2016, edition (it went out a day late) of the free Disquiet “This Week in Sound” email newsletter:

Sound Course, Week 1 (of 15)

Listening to media

February 3 was the first class meeting for the new semester of the course I’ve been teaching for several years now about the role of sound in the media landscape. Taking off last semester turned out to be unfortunate timing, due to the release of Star Wars: The Force Awakens. See, my opening lecture each semester has focused in some detail on the role of music in the films and television of J.J. Abrams, from the various tweaks on Fringe’s theme, to the virtual non-theme of Lost’s opening credits, to his decision to employ a new theme for Star Trek, to his teasing extension of the Mission: Impossible theme in the film he directed for that franchise.

Abrams is so prolific in his directing and his producing that there has, each semester, been a new project to tag onto the sequence, sometimes even to include as homework viewing. After Abrams was announced as the head of the new, Disney-era Star Wars films, my lectures began to speculate about what Abrams’ take on John Williams’ score would be. We now know, of course, that, like the film itself, he has opted for an originalist scenario, going back to the first trilogy (that is, the “Luke trilogy,” not the “Anakin trilogy”) and building on that framework.

There’s some notable sound design in the new film. The intense daymare experienced by Rey in the forest on Takodana has gotten a lot of attention for how, among other things, it manages to include the late Alec Guinness saying the character’s name by snipping a syllable from another word — all the more potently, the word from which “Rey” was culled is “afraid,” very much Rey’s state of mind in that sequence. More impressive, or at least less fleeting, was the audible breath of Darth Vader heard when the camera shows that his grandson, Kylo Ren, maintains a shrine to Vader’s melted mask.

The class will proceed weekly through May 18, aside from spring break on March 23. I won’t be summing up all the early lectures each week, because I’ve already documented them fairly well, but I’ll link to the previous summaries here (week one), and make note of any new developments. I have been lining up some great guests, including a technology lead from a major streaming service and a curator at a major art institution.

This first appeared, in slightly different form, in the February 9, 2016, edition of the free Disquiet “This Week in Sound” email newsletter:

Teaching Sound / Spring 2016

I'll be doing my sound course in San Francisco for 15 weeks starting February 3.


I’ll be teaching my course on “the role of sound in the media landscape” — aka “Sounds of Brands / Brands of Sounds” — again this coming spring 2016 at the Academy of Art here in San Francisco.

The semester runs from February 1, 2016, through May 21, 2016. The class meets on Wednesdays from noon to 2:50pm, which means the first class meeting is February 3 and the final class meeting will be on May 18. There’s no class on March 23, which is spring break, for which I’ll probably assign a close-listening analysis of Cliff Martinez’s work on the score to Spring Breakers. Just kidding. Well, maybe not kidding.

Last semester we had one of the heads of the Mutek festival (Patti Schmidt) address the class, as well as someone from the software developer Cycling ’74 and someone from Facebook’s virtual-reality team, among others. The previous semester we had someone from BitTorrent and someone from SoundCloud, and we took a field trip to an anechoic chamber at the local research lab of an audio company. The guest speakers aren’t generally lecturers; I usually interview them in front of the students, who also ask questions. The semester prior both the sound artist Robin Rimbaud (Scanner) and the voice actor Phil LaMarr (Samurai Jack, Static Shock) visited via Skype.

Here’s the course outline from last year:


I teach the course to a mix of MFA and BA students. This is the seventh semester that I’ve taught the course, after taking off last semester with the intention of teaching it once a year rather than twice a year, to leave room for loads of other projects.

You can read summaries and documentation from past semesters using the “brands of sounds” and “sounds of brands” tags here.

Listeners on Listening

Four questions about sharing one's personal experience with music and sound.


Steve Ashby, who teaches at Virginia Commonwealth University, recently asked me four questions about listening. Ashby is posing these same four questions to a variety of people. The questions are all about listening, and the answers are intended to inform a music-appreciation course that he teaches at VCU. As I worked on my responses to his questions I asked him some questions — yeah, interview an interviewer and you inevitably get interviewed back — for some background on his teaching. He explained:

I’ve noticed in the classes that I teach, as soon as I start playing a piece of music, say Mozart, Bach, what have you, the students’ attention drifts back to their phone, or other distraction. For all intents and purposes, making the music essentially white noise. … I thought maybe getting perspectives on listening from the music community might be useful. With a handful of perspectives from people in different realms of the music industry, we might be able to find a common thread that opens up new avenues of what it means to listen to music.

The first couple years teaching the course I followed the standard blueprint of an overview lecture through music history, from Chant to Stravinsky. Glazed over eyes, and too many PowerPoints, made me realize I need to rethink this. What’s the point of talking about specific music forms and terminology, when students’ ears aren’t tuned in or turned on to the sound surrounding them. Sounds and music outside of their comfort zone. As one of my guitar teachers used to say, you’ve got to create the space, before you can fill it.

Ashby also mentioned the musician Lawrence English, Simon Scott’s Below Sea Level album, the field recordist Gordon Hempton, and acoustic ecologist R. Murray Schafer as influences on his thinking, and how a recent read of Peter Szendy’s Listen — this line in particular: “Can one make a listening listened to? Can I transmit my listening, unique as it is?” — had woken his ears to the sounds around him:

Bitten by the field recording bug, I began recording my walks to work, around town, outside my apartment, and noticed a big difference in what I heard at the time versus what was recorded. Remembering each step, but hearing my breath and surroundings differently. (The cicadas were loud this summer.)

Below are his four questions and my responses. If you’re interested in participating, feel free to do so in the comments below. Ashby has been part of the Guitar Faculty at Virginia Commonwealth University for a decade, and has taught music appreciation for nearly half of that.

1. You buy a new album. Describe your ritual/experience of its first listening.

I’m not sure I have a ritual, aside from listening to it as soon as possible. Most of the music I purchase I do so digitally, and when I purchase physical albums (CD, vinyl, cassette) I generally do so online. The latter often come with a download code, which means I have a digital copy before the physical copy even arrives in the mail. In fact, the most recent cassette tape I purchased — at a record store across town — came with a download code, and I downloaded it to my phone while taking the bus back home. Come to think of it, I only have a cassette player at my office, not at home, so it’s a darn good thing the cassette had that download code. Otherwise I would have had to wait a few days.

Anyhow, I purchase music in such varied circumstances that I can’t say I have much of a ritual, again aside from listening to it as soon as possible. I will add that the more excited I am about a release, the more I try to diminish my expectations in advance of hitting the play button. One genre-specific ritual I do have: I listen to a lot of film music, and I try to listen to a movie’s score before I go to the theater to see it.

A side note: I get an enormous amount of music for free, because I write about music and work with musicians, which means my inbox and my mailbox are inundated with, respectively, zip files and packages. I have music playing most of the day, less so in the evening. When I’m intrigued by a piece of music, I’ll often put it on repeat, sometimes for hours. The most extreme version of this was when I wrote my 33 1/3 book on Aphex Twin’s Selected Ambient Works Volume II, which took about a year, just about every day of which I listened to one track off the record over and over.

2. On subsequent listens to that same record, which aspects of the music do you focus your listening on. Does this change over time? How?

If there’s a track I like a lot at first, I’ll try to avoid it for a while. After a few listens to an album, I’ll often put the album on shuffle, so I can listen to the tracks more remotely, more apart from each other. The biggest influence on how my listening changes over time is physical circumstances. I am amazed how different headphones, different speakers, different moods can change how a record sounds. (In the photo at the top of this post, the large black headphones are the ones I use at home, and the metal earbuds are the ones I use when I’m out and about. I also have some noise-canceling headphones for plane flights.)

3. If you could choose your favorite listening environment, what would it be? What draws you to that place to hear the music you’re listening to?

I like to listen to music in lots of different contexts. The primary places I listen to music are at my desk at home, at my desk at my office, in my living room at home, in my kitchen at home, while walking, and while on the bus. If I had to choose one favorite, it’d be wearing headphones while alone on the bus, enjoying the clarity that headphones provide, and the way music shapes everyday experience into a narrative. There’s nothing like taking a mundane bus trip while listening to a score from a science fiction film or a thriller.

4. How does one make their listening listened to?

To take a step back, I should clarify my sense of the word “listening,” because it happens to be a word I use a lot. I teach a course on the role of sound in the media landscape at an art school, and I spend the first three weeks of the 15-week course discussing listening. To me, listening, clearly, applies broadly to the everyday experience of being in the world, of hearing the world. In fact, it’s hard for me to separate that sense of the word from the more specific context we’re working with here, where we’re mostly talking about listening to music. That said, there is, I think, a helpful transition from the “active listening” that I think of in regard to everyday life, to listening to music. There are lots of ways to make one’s listening listened to. I’ll describe four here, the first three of which I participate in, and the last being one I like to observe.

A. Dorm Space: For me, the single best social scenario for listening to music — when my listening was listened to — was back in college, and it’s probably not repeatable in my daily life as an adult. I had a single dorm room my junior and my senior year, and I was always listening to music when I was in it. It became my habit in senior year to just leave my door open, and invariably people would walk through the hallway, hear something, and come in. There were frequently two or more people in my room in addition to me, listening to whatever I was listening to, sometimes while I was doing my homework. I think my listening then was kind of “performative.” I would talk about what was playing, move back and forth between records. My dorm room was like the world’s smallest radio station, one that broadcast only a few feet beyond the station’s doorway.

B. Radio DJ: I also DJ’d in college, in the radio sense of the word “DJ,” and DJing in that sense is a fine example of having your listening listened to. I had a jazz show that was pretty straightforward, and a classical show, which was a mix of contemporary music and ancient vocal music, and where the two things often met — Pauline Oliveros’s Deep Listening and Steve Reich’s Tehillim feel very comfortable next to Byrd and Palestrina. I also did a more freeform show, which would broaden the classical material to add in pop and rock that smacked of minimalism: My Life in the Bush of Ghosts, Brian Eno’s solo ambient stuff, lots of Robert Fripp, Fela, King Sunny Ade, and so on. Making connections between those records, whether simply by playing them in sequence or commenting on them after one track ended and before the next began, was a way of putting those connections in the listener’s head — what I heard in the music, what I was listening to, and listening for, in it.

C. Music Criticism: I have written about music since I was in high school, and I think of writing about music as a means to express what I hear. It’s the primary way that I express my listening. This is recursive. To write about music, I need to think about my own listening — I need to listen to my listening — and that reflection then becomes the raw material for what I write. The single best advice I ever got in regard to writing about music was to use the writing to help explain how to listen to the music — not that there’s necessarily one way to listen to a piece of music.

D. Music About Music: All bands, the saying goes, begin as cover bands. This isn’t to say that every band is literally a cover band, performing some other band’s songs. What it means is that all bands begin with their influences plainly apparent, perhaps as homage, often as denied imitation, and then the good ones proceed over time to develop their own identity. Much music is built from pre-existing sound: sample-based hip-hop, quotations in jazz, electronic music that employs field recordings and presets (presets being audio and other tools that come as part of digital instruments). Just because the source audio remains evident in some of this work doesn’t mean that the artist has not fully consumed the material. But stepping back from even the most artfully assembled piece, like Steve Reich’s “It’s Gonna Rain” or the Dust Brothers’ production of the Beastie Boys’ Paul’s Boutique, one has the opportunity to hear how the musicians hear, what it is they listen for, what sorts of sounds register with their ears and align with their creative impulses. If you listen closely to their acts of sampling you can listen to them listening.

Sound Class, Week 7 of 15: Explicit vs. Implicit

Vocabulary refresher, a useful series of quadrants, breakfast cereal, OS startup sounds


On the very first day of class I share this sequence:

Hearing → Listening → Discerning → Describing → Analyzing → Interpreting → Implementing →

That is, in a handful or so of words, a map of the 15-week course that I teach on the role of sound in the media landscape at the Academy of Art in San Francisco.

The first semester I taught the course, back in 2012, a student raised a hand from the back of the room and asked, in effect, whether I was making up any of the words we used. I suppose hearing “anechoic” and “acoustemology,” among other less esoteric terms, over and over takes its toll. I replied that I did not make up any of the words. I did, however, take responsibility for two familiar words used in a particular context. Those words, and that context, are the subject of week 7.

First some background on the course, in case this is the first week you’ve read one of these summaries: Each week of the 15-week course my plan is to summarize the previous class session here. Please keep in mind that three hours of lecture and discussion is roughly 25,000 words; this summary is just an outline, in this case less than 10 percent of what occurs in class. Some class meetings emphasize more discussion than others. Week 7 this semester is especially discussion-heavy, and hence the lecture outline here is fairly cursory.

I start off week 7 by reviewing recent vocabulary. When this goes well, we don’t stop with the words I initially reprise, words like “soundscape” and “soundmark,” and, yes, “anechoic” and “acoustemology.” We discuss how the first two develop out of the work of R. Murray Schafer, how the third relates to John Cage, and how the fourth comes out of the work of Steven Feld. To revisit the previous week’s class meeting, on the role of sound in retail space, we discuss Ray Oldenburg’s concept of the “Third Place.” In turn, student queries lead to additional vocabulary refreshers, among them the sonic equivalents of so-called “skeuomorphic” design (the shutter sound of digital cameras serves as a good example), “haptic” feedback, and the difference between a “neologism” and a “retronym.”

Then we proceed to those two fairly common terms I mentioned up above, “explicit” and “implicit,” which we employ in a specific context. For the purposes of discussion, an “explicit” sound related to a subject is one closely tied, in the public imagination, to it, such as the “plop plop, fizz fizz” of Alka-Seltzer, or the anthropomorphized Snap, Crackle, and Pop of Rice Krispies. In contrast, “implicit” sounds are those that are to some extent inherent in a given subject, but that are not fully, for lack of a more nuanced term, branded. Different makes of door lock, for example, will sound different upon close inspection, but it would be hard to make a case that to anyone other than a discerning thief those sounds are closely associated with the locks.

We begin by drawing a grid, two by two, and we put those two words on the Y axis. On the X axis, horizontally, we write “category” and “product.” The remainder of week 7 involves working through how sounds can be oriented in those four quadrants. This plays out in various ways, largely as a result of group discussion, and thus it doesn’t translate particularly well to summary. So, I’ll just emphasize some things I’ve learned when teaching this class:

  • It’s important to keep top of mind that the quadrants in this two-by-two grid are along a continuum. Students often mistake them as four independent if interrelated categories. That’s not the case.

  • An operating system startup sound is a useful example. The startup sound itself began deep in the implicit/category zone, and was later elevated to explicit/product when Apple and Microsoft, to note just two examples, developed unique audio logos.
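The continuum point in the bullets above can be made concrete with a small sketch. This is purely illustrative, not material from the class: the numeric placements, the 0-to-1 scales, and the examples’ positions are my own assumptions, chosen to show how quadrant labels can sit on top of continuous scores.

```python
# Illustrative sketch of the explicit/implicit (Y) × category/product (X) grid.
# Each sound gets two scores on a 0.0–1.0 continuum:
#   explicitness: 0.0 = implicit (inherent, unbranded) → 1.0 = explicit (branded)
#   specificity:  0.0 = category-wide → 1.0 = product-specific
# The placements below are assumptions for demonstration, not class answers.
sounds = {
    "Alka-Seltzer 'plop plop, fizz fizz'": (0.9, 0.9),
    "Rice Krispies Snap, Crackle, Pop":    (0.9, 0.9),
    "door lock click":                     (0.1, 0.3),
    "OS startup chime (early)":            (0.2, 0.1),
    "OS startup chime (audio-logo era)":   (0.9, 0.9),
}

def quadrant(explicitness: float, specificity: float) -> str:
    """Name the quadrant a sound falls in, while the raw scores
    preserve the underlying continuum the labels flatten out."""
    y = "explicit" if explicitness >= 0.5 else "implicit"
    x = "product" if specificity >= 0.5 else "category"
    return f"{y}/{x}"

for name, (e, s) in sounds.items():
    print(f"{name}: {quadrant(e, s)}")
```

The point of keeping the scores rather than just the labels is exactly the caution in the first bullet: the quadrants are a convenience laid over a continuum, and two sounds in the same quadrant (a door lock at 0.1 and a generic startup chime at 0.2) can still sit at meaningfully different positions.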

Homework: The homework for week 8 is to take another pass at the research from week 7, which involved the development of a “sonic audit.” This week in class we take time, in small groups, to compare notes about how to apply the explicit/implicit grids to the students’ chosen topics, which range from Oreo cookies to Nike sneakers to Rolex watches. The assignment is as follows: Do a “sonic audit” of a specific brand/product of your choosing.

Your brand/product should not be inherently sonic; that is, for example, it should be a candy bar, not a headphone — a clothing store, not an MP3 player — an airline, not a mobile music app. You will explore the role of sound in the brand/product that you select. (You can, alternately, elect to focus on an industry/category, such as the Got Milk? and National Pork Board campaigns.)

In the process of developing your sonic audit you should look deeply at the brand/product from numerous viewpoints, such as, but not exclusive to, the following: (a) sounds inherent in the category, (b) sounds exclusive to the brand/product, (c) cultural references (e.g., song lyrics), (d) brand history (e.g., jingles, concert sponsorships, musician spokespeople), etc. Your presentation of your findings should consist not only of exhaustive examples you locate, but of the “cultural meaning” of what you discern. How you present this material is up to you, but it should be substantial. We’ve used short essays in assignments and four-quadrant grids in class, and those are particularly recommended. In the end, the documentation should state and support a specific point of view about the sonic properties of the brand.

Next week: The software tools of sound, with an emphasis on Audacity and, just to nudge things a little, Max/MSP.

Note: I’ve tried to do these week-by-week updates of the course in the past, and I’m hopeful this time I’ll make it through all 15 weeks. Part of what held me up in the past was adding videos and documents, so this time I’m likely going to bypass that.

This first appeared in the March 24, 2015, edition of the free Disquiet “This Week in Sound” email newsletter: