Quick question: How many microphones are in the room you are in right now?
That’s the fun and instructive little parlor game I like to play with my students a few weeks into the sound course I teach. They each share their guess with me, and then I talk through all the technology in the room, and rarely is anyone even close to the right number. The students are often a little surprised, at first, to learn that many of their phones have three microphones, and that the most recent MacBook Air laptops have two microphones, to allow for cancellation of ambient noise. They forget that many of their earbuds come with microphones, as do their iPads, their iPods, their Bluetooth headpieces. We’re meeting in a classroom, so there’s no concern about their game consoles, or their smart thermostats, or their home-security systems. By the end of the exercise, they are a little anxious, which is productive, because this week we discuss Francis Ford Coppola’s masterpiece thriller, The Conversation. We’re all a bit like Gene Hackman at the end of the film: wondering where the surveillance is hidden.
Almost every week of the class has at its heart a question to which I do not have an answer. The question this week is: How is the increasing ubiquity of recording equipment in everyday life transforming the role of sound in film and television?
This is the third week of the course I teach at the Academy of Art on the role of sound in the media landscape. The third week marks the close of the first of the three arcs that make up the course. First come three weeks of “Listening to Media,” then seven weeks of “Sounds of Brands,” and then five weeks of “Brands of Sounds.” If the first week of the course is about the overall syllabus, and the second week looks back on 150,000 years of human history, tracing how hearing has developed biologically, culturally, and technologically, then the third week focuses on just barely 100 years: We look at how film and, later, television have shaped the way sound is employed creatively.
Each week my plan is to summarize the previous class session here. Please keep in mind that three hours of lecture and discussion is roughly 25,000 words; this summary is just an outline, maybe 10 percent of what occurs in class.
As with last week’s “Brief History of Listening,” this week uses a timeline as the spine of the discussion, or in this case a “trajectory.” For most of this class meeting, this “trajectory” appears on the screen at the head of the room:
•• A Brief Trajectory of Film Sound
• filmed theater • moving camera / editing • synchronized sound (1927, The Jazz Singer) • talkie → the term “silent film” (cf. “acoustic guitar”) • orchestral score (classical tradition) • electronic (tech, culture, economics) • underscore (Cliff Martinez, Lisa Gerrard, Clint Mansell, David Holmes, Underworld, Nathan Larson) • sound design as score • side note: score as entertainment (audio-games, sound toys, apps)
I’ll now, as concisely as possible, run through what we discuss for each of those items.
•• A Brief Trajectory of Film Sound
• filmed theater
We begin at the beginning of film, and discuss how new media, when they arise, often initially mimic previous media: in this case, how early film was often simply filmed theater.
• moving camera / editing
In time, the combination of moving cameras and editing provided film with a visual vocabulary of narrative tools that distinguished it from filmed theater.
• synchronized sound (1927, The Jazz Singer)
We talk about how the introduction of sound to film doesn’t coincide with the invention of recorded sound, which long predates it. The obstacle wasn’t recording sound. It was the complexity of synchronization.
• talkie → the term “silent film” (cf. “acoustic guitar”)
The word “retronym” is useful here. A retronym is a specific type of neologism, which is to say a newly coined word: a retronym is a new word for an old thing, required when a new thing arises and puts the old thing in a new light. The applicable joke goes as follows:
Q: What was an acoustic guitar called before the arrival of the electric guitar?
A: A guitar.
We also discuss the brief life of the term “cameraphone,” which was useful before cameras became so ubiquitous that consumers no longer make a decision about whether or not to buy a phone with a camera. Given the rise of social photography, it’s arguable that cellphones are really cameras that also have other capabilities.
In any case, that tentative sense of technological mid-transition is at the heart of this part of the discussion: how films with sound were initially as distinct as phones with cameras, and how in time the idea of a movie without sound became the isolated, unusual event. We talk about how the “silent” nature of “silent film” is a common misconception, and that silent films in their heyday were anything but silent, from the noise of the projectors, to the rowdiness of the crowds, to the musical accompaniment (often piano).
• orchestral score (classical tradition)
We discuss how the orchestral nature of film scores was not unlike the way films originated in large part as filmed theater. The orchestral score connected the audience experience to mass entertainments, like theater and opera and musicals, in which orchestras and chamber ensembles were the norm. Long after the notion of filmed theater had been supplanted by a narrative culture unique to film, the norm of the orchestral score lingered.
• electronic (tech, culture, economics)
We discuss the rise of the electronic score, and how the transition from orchestral to electronic involved a number of different forces. Technology had to become accessible; changes in pop culture eventually required music that no longer felt outdated to couples out on a date; and economics meant that large Hollywood studios, which often had their own orchestras and production procedures, needed incentives to try something new.
• underscore (Cliff Martinez, Lisa Gerrard, Clint Mansell, David Holmes, Underworld, Nathan Larson)
The broad-strokes sequence of how movie scores have changed since the rise of the talkie has three stages, from traditional orchestral scores, to early electronic scores that mimic orchestral scoring, to electronic scores that have their own unique vocabularies. (That’s leaving aside groundbreaking but also way-ahead-of-their-time efforts such as Bebe and Louis Barron’s Forbidden Planet.) I highlight the work of a handful of composers, all of whom to varying degrees employ what can be called “underscoring”: scores that rarely reach the crescendos of old-school melodramatic orchestral scores, and that often meld with the overall sound design of the filmed narrative they are part of. (I also note that all of these folks came out of semi-popular music: Cliff Martinez played with the Dickies, Captain Beefheart, and the Red Hot Chili Peppers; Lisa Gerrard with Dead Can Dance; Clint Mansell with Pop Will Eat Itself; and Nathan Larson with Shudder to Think. Underworld is a band, and David Holmes is a DJ and solo electronic musician.)
• sound design as score
Where I’ve been heading with this “trajectory” discussion — I call it a trajectory rather than a “timeline” because I feel a sense of momentum in this particular topic — is to focus on contemporary work in which sound design is the score. To emphasize the transition, I show a series of short videos. We watch the opening few minutes of a 1951 episode of Dragnet and then the opening portion of an episode of Southland, which closely follows the model of Dragnet: the martial score, the civic-minded officer’s point of view, the spoken introduction, the emphasis on “real” stories. The difference is that the melodramatic score of Dragnet is dispensed with in Southland, as is the notion of a score at all. Southland, which aired from 2009 through 2013, has no music once its filmic opening credits are over. Well, it’s not that there’s no music in Southland. It’s that any music one hears originates on screen, bleeding from a passing car, playing on the stereo in a doctor’s office, serving as the ringtone on someone’s cellphone. All the sound in the show collectively serves the role once reserved largely for the score. When there’s a thud, or a gunshot, or a droning machine, it touches on the psychology of the given scene’s protagonist.
To make my point about the way in which sound design serves as a score, I play an early clip from I Love Lucy, and contrast the early employment of laugh tracks in that show with portions of M*A*S*H, another sitcom, that lacked laugh tracks. I talk about the extent to which movie scoring is often little more than a laugh track for additional emotions.
We then spend about 15 or 20 minutes watching, over and over, the same very brief sequence from David Fincher’s version of The Girl with the Dragon Tattoo, which I dissect for the gray zone between where the movie’s sound ends and the score by Trent Reznor and Atticus Ross begins. (If I have time in the next few weeks, I may do a standalone post with screenshots and/or video snippets that break down the sequence.)
In the work of Fincher, Reznor, and Ross we have a masterpiece of underscoring. The film isn’t quite in Southland’s territory, but it is closing in on it. I then show two videos that work well together. These are promotional interviews, mini-documentaries: one of Jeff Rona talking about his work on the submarine movie Phantom, and the other of Robert Duncan talking about his work on the submarine TV series Last Resort. The videos are strikingly similar, in that both show Rona and Duncan separately going into submarines, turning off the HVAC, and banging on things to get source audio for their respective efforts. All the better for comparison’s sake, the end results are quite different, with Duncan pursuing something closer to a classic orchestral sound, and Rona going for a Fourth World vibe: more electronic, more pan-cultural, more textured. What is key is that the sounds of the scores then lend a sense of space, of real acoustic space, to the narratives whose actions they accompany.
Some semesters I also play segments from The Firm, to show the rare instance of a full-length, big-budget Hollywood film that has only a piano for a score, and from Enemy of the State, to show its references to The Conversation, as well as an interview with composer Nathan Larson, who, like Rona and Duncan, speaks quite helpfully about using real-world sounds in his scoring.
In advance of the class meeting, the students watch Francis Ford Coppola’s 1974 masterpiece, The Conversation. This is a core work in sound studies, thanks both to Walter Murch’s sound design in the film and to the role of sound in the narrative. Gene Hackman plays a creative and sensitive surveillance expert, who uses multiple microphones to capture an illicit conversation. Sorting out what is said on that tape causes his undoing. It’s helpful that the building where I teach my class is just a few blocks from Union Square, where the opening scene of the film was shot. We discuss Walter Murch and his concept of “worldizing,” of having sound in the film match the quality experienced by the characters in the film. For class the students read a conversation between Murch and Michael Jarrett, a professor at Penn State York. They are also required to choose three characters other than Gene Hackman’s and talk about the way sound plays a role in each character’s development. After the discussion, we listen in class to a short segment from Coppola’s The Godfather, released two years before The Conversation, in which Al Pacino’s Michael Corleone kills for the first time, and discuss how there is no score for the entire sequence, just the sound of a nearby train that gets louder and louder — not because it is getting closer, but because its sound has come to represent the tension in the room, the blood in Pacino’s ears, the racing of his heart. This isn’t mere representation. It is a psychological equivalent of Murch’s worldizing, in which everyday sounds take on meaning to a character because of the character’s state of mind. Great acoustics put an audience in a scene. Great sound design puts an audience in a character’s head.
The students also do some self-surveillance in advance of the class meeting. The exercise works well enough on its own, but it’s especially productive when done in parallel with The Conversation, which at its heart is ambivalent at best about the ability of technology to yield truth. The exercise, which I listed in full in last week’s class summary here, has them take a half-hour bus trip, and then compare what they recall from the trip with the sound they recorded of it: what sounds do they miss, what sounds do they imagine?
When there is time, I like to close the trajectory/timeline with “score as entertainment (audio-games, sound toys, apps),” extending the lessons from film and television into other areas, like video games, but there was not enough time this class meeting.
• Homework
For the following week’s homework, there are three assignments. In their sound journals students are to dedicate at least one entry to jingles (the subject of the following week’s class) and one to sound in film or television. They are to watch an assigned episode of Southland and detail the role of sound in the episode’s narrative, the way sound design serves as score. And they are to locate one song that has been used multiple times in different TV commercials and discuss how it means different things in different contexts.
Note: I’ve tried to do these week-by-week updates of the course in the past, and I’m hopeful this time I’ll make it through all 15 weeks. Part of what held me up in the past was adding videos and documents, so this time I’m likely going to bypass that.
This first appeared in the February 17, 2015, edition of the free Disquiet email newsletter: tinyletter.com/disquiet.