One of the great pleasures of teaching a course on sound is reading students’ sound journals. This semester there are 17 students enrolled in the course that I teach about the role of sound in the media landscape. Each student is required to keep a sound journal in which they write — at least four times a week — about their experience of sound. These student journals begin simply, at the start of the semester, with lists of overheard things, not even full sentences, just clauses, descriptive bits, reference points. For example, I just opened one of these journals and saw a recent entry about a date at a Japanese restaurant. The first three observations in this student’s journal entry, out of a dozen or so such line items, read as follows:
“Coins are putting in parking meter”
“Customers writing on the waiting list, the pen and paper made a soft sound”
“People zipping off their jackets”
By tonight at 11pm (San Francisco time), the due date for the first week’s homework, there will be almost 70 such entries among the 17 students’ collective journals. I have them write the journals in reverse chronological order as Google Drive documents, so I can pull them up anywhere and start reading immediately. Often enough I’ll find myself on a bus reading about a student’s experience on a different bus, or while waiting for a movie to start I’ll read a student’s comments about a film they had just seen. This list-making only lasts so long. In two weeks, maybe three, the assignments will be pushed toward more considered, more reflective observations. The journal instruction at that stage will read something along the lines of this: “Write more about less.” We’ll move from clauses to sentences, from sentences to paragraphs, from paragraphs to essays.
The first day of class I put a slide up that shows the trajectory I plan for their sound studies to follow. This is how it reads: Hearing → Listening → Discerning → Describing → Analyzing → Interpreting → Implementing.
We start with hearing. The course begins like the journals, with list-making. The first day of class, as we did this past Wednesday, I have students sit in (near) silence and write down everything that they hear. I post the instructions on the television monitor at the front of the room and I wait patiently as they write on the provided pieces of paper. They have 15 minutes to do this, and the situation usually plays out as follows. For the first 5 minutes there is much to write about, for the second 5 minutes they are asking what the heck they’ve gotten themselves into, and for the final 5 minutes the world opens itself up a little bit, and they find renewed energy to observe the sounds around them: the muffled discussion in a neighboring classroom, a passing siren, their classmates’ scribbling, me coughing, footsteps in the hallway.
Thus begins the first class. The course, which is held at the Academy of Art downtown, runs for 15 weeks straight, excepting spring break, with one class meeting per week, three hours each class starting at noon on Wednesdays, plus about nine hours of related homework assignments each week.
The course is divided into three arcs, to borrow a term from serial television and comic books. The first three weeks are about “Listening to Media.” The second arc runs from week 4 through week 10, and is about “Sounds of Brands”: how companies, products, and people express themselves through sound; we’ll talk about jingles, and the sound of retail spaces, and product design, and public address systems. The third arc runs from week 11 through 15; it’s titled “Brands of Sounds,” and it’s about how things related to sound — headphones, streaming services, bands, albums — express themselves in the marketplace.
If workload allows, I plan to outline each week’s class meeting here. This isn’t the lecture; it’s a summary of the class meeting. Much of the first day is explaining to the students what I wrote above, about the outline of the syllabus. I mention that we will have occasional guests — not guest speakers so much as people I will interview in front of them. In past semesters we’ve had representatives of SoundCloud and BitTorrent, sound designers and voice actors, music publicists and app developers. And we’ll take field trips. We’ve visited backstage at the opera, we’ve entered the ambisonic workspace of a major engineering consultancy, and we’ve stood in the anechoic chamber of a consumer-audio company’s local research laboratory.
There is a second listening exercise halfway through the first class meeting. This time they don’t listen; instead they write down the sounds they associate with a specific time: waking on a Tuesday morning. They make a list of the sounds that come to mind. Inevitably this consists of things like the burbling of a coffee maker, the white noise of a running shower, the grinding of a garbage truck’s receptacle. They submit this to me, and I hold on to it for the time being.
We then focus in on the familiar: popular entertainment. I play the opening credits to the television show Fringe, developed by JJ Abrams, and we talk about what the credits have in them, what triggers and nuances and reference points the music contains, from its sense of mystery, to its touches of modern classical minimalism, to its dreamy apparitions. We talk through the variations that credit sequence underwent as the TV series Fringe unfolded — the “red” episodes that meant it would take place primarily in an alternate universe, the 80s rendition that meant a major flashback. And I explain that every class I teach has at its heart some question to which I don’t even begin to know the answer. For this first class meeting what I don’t know is to what extent JJ Abrams’ close, career-long attention to sound plays a role in the popular success of his work.
We then talk about a variety of themes associated with JJ Abrams, from the coy reworking of the theme to Mission: Impossible, to the replaced open-credit music in his first Star Trek film, to the tone-as-jingle that constituted the brief title card score for Lost, and we close on the classic John Williams opening to Star Wars, and consider what a JJ Abrams version of Star Wars, which will debut later this December, might sound like. We spend a lot of time on each of these TV and film credit sequences, playing them over and over, listening for and discussing details, pausing every few seconds.
That all takes close to the allotted three hours. Then I talk through the grading procedures, how to share homework, and what the homework will be for the coming week. The first class met last Wednesday, January 28. Due tonight by 11pm is the first week’s homework. They will write four sound journal entries. They will read an essay by neuroscientist Seth S. Horowitz and an interview with acoustic ecologist and composer R. Murray Schafer. And they will have, this morning, woken up and written down all the sounds they heard. We will then, in class, compare and contrast what they actually heard with what they thought they would hear on a Tuesday morning. But that’s not until the second week of class, which happens tomorrow.
Note: I’ve tried to do these week-by-week updates of the course in the past, and I’m hopeful this time I’ll make it through all 15 weeks. Part of what held me up in the past was adding videos and documents, so this time I’ll likely bypass that.
This first appeared in the February 3, 2015, edition of the free Disquiet email newsletter: tinyletter.com/disquiet.