
Sound Class, Week 7 of 15: Explicit vs. Implicit

Vocabulary refresher, a useful series of quadrants, breakfast cereal, OS startup sounds


On the very first day of class I share this sequence:

Hearing → Listening → Discerning → Describing → Analyzing → Interpreting → Implementing →

That is, in a handful or so of words, a map of the 15-week course that I teach on the role of sound in the media landscape at the Academy of Art in San Francisco.

The first semester I taught the course, back in 2012, a student raised a hand from the back of the room and asked, in effect, if I was making up any of the words we use. I suppose hearing “anechoic” and “acoustemology,” among other less esoteric terms, over and over takes its toll. I replied that I did not make up any of the words. I did, however, take responsibility for two familiar words used in a particular context. Those words, and that context, are the subject of week 7.

First some background on the course, in case this is the first week you’ve read one of these summaries: Each week of the 15-week course my plan is to summarize the previous class session here. Please keep in mind that three hours of lecture and discussion is roughly 25,000 words; this summary is just an outline, in this case less than 10 percent of what occurs in class. Some class meetings emphasize more discussion than others. Week 7 this semester is especially discussion-heavy, and hence the lecture outline here is fairly cursory.

I start off week 7 by reviewing recent vocabulary. When this goes well, we don’t stop with the words I initially reprise, words like “soundscape” and “soundmark,” and, yes, “anechoic” and “acoustemology.” We discuss how the first two develop out of the work of R. Murray Schafer, how the third relates to John Cage, and how the fourth comes out of the work of Steven Feld. To revisit the previous week’s class meeting, on the role of sound in retail space, we discuss Ray Oldenburg’s concept of the “Third Place.” In turn, student queries lead to additional vocabulary refreshes, among them sonic equivalents of so-called “skeuomorphic” design (the shutter sound of digital cameras serves as a good example), “haptic” feedback, and the difference between a “neologism” and a “retronym.”

Then we proceed to those two fairly common terms I mentioned up above, “explicit” and “implicit,” which we employ in a specific context. For the purposes of discussion, an “explicit” sound related to a subject is one closely tied, in the public imagination, to it, such as the “pop pop, fizz fizz” of Alka-Seltzer, or the anthropomorphized Snap, Crackle, and Pop of Rice Krispies. In contrast, “implicit” sounds are those that are to some extent inherent in a given subject, but that are not fully, for lack of a more nuanced term, branded. Different makes of door lock, for example, will sound different upon close inspection, but it would be hard to make a case that to anyone other than a discerning thief those sounds are closely associated with the locks.

We begin by drawing a grid, two by two, and we put those two words on the Y axis. On the X axis, horizontally, we write “category” and “product.” The remainder of week 7 involves working through how sounds can be oriented in those four quadrants. This plays out in various ways, largely as a result of group discussion, and thus it doesn’t translate particularly well to summary. So, I’ll just emphasize some things I’ve learned when teaching this class:

  • It’s important to keep top of mind that the quadrants in this two-by-two grid are along a continuum. Students often mistake them as four independent if interrelated categories. That’s not the case.

  • An operating system startup sound is a useful example. The startup sound itself began deep in the implicit/category zone, and was later elevated to explicit/product when Apple and Microsoft, just to note two examples, developed unique audio logos.
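The grid, and the point that its quadrants sit along a continuum, can be illustrated with a small sketch. This is my own illustration, not something used in class, and the names and numeric placements are hypothetical: each axis is treated as a 0-to-1 scale, and the quadrant label is derived only as a coarse summary of a position on that scale.

```python
from dataclasses import dataclass

@dataclass
class SoundPlacement:
    """A sound's position on the two axes, each a continuum from 0.0 to 1.0."""
    name: str
    explicitness: float  # 0.0 = implicit (inherent, unbranded) .. 1.0 = explicit (branded)
    specificity: float   # 0.0 = category-wide .. 1.0 = product-specific

    def quadrant(self) -> str:
        """Collapse the continuous position into one of the four quadrant labels."""
        y = "explicit" if self.explicitness >= 0.5 else "implicit"
        x = "product" if self.specificity >= 0.5 else "category"
        return f"{y}/{x}"

# Example placements, positioned by rough judgment:
sounds = [
    SoundPlacement("generic startup chime, early PCs", 0.2, 0.1),
    SoundPlacement("Mac startup chord", 0.9, 0.9),
    SoundPlacement("door-lock click", 0.1, 0.4),
]
for s in sounds:
    print(s.name, "->", s.quadrant())
```

Modeling the axes as continuous values, and producing the quadrant label only as a summary, mirrors the caution above: the four quadrants are readings off a continuum, not four independent categories.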

Homework: The homework for week 8 is to take another pass on the research from week 7, which involved the development of a “sonic audit.” This week in class we take time, in small groups, to compare notes about how to apply the explicit/implicit grids to the students’ chosen topics, which range from Oreo cookies to Nike sneakers to Rolex watches. The assignment is as follows: Do a “sonic audit” of a specific brand/product of your choosing.

Your brand/product should not be inherently sonic; that is, for example, it should be a candy bar, not a headphone — a clothing store, not an MP3 player — an airline, not a mobile music app. You will explore the role of sound in the brand/product that you select. (You can, alternately, elect to focus on an industry/category, such as the Got Milk? and National Pork Board campaigns.)

In the process of developing your sonic audit you should look deeply at the brand/product from numerous viewpoints, such as, but not exclusive to, the following: (a) sounds inherent in the category, (b) sounds exclusive to the brand/product, (c) cultural references (e.g., song lyrics), (d) brand history (e.g., jingles, concert sponsorships, musician spokespeople), etc. Your presentation of your findings should consist not only of exhaustive examples you locate, but of the “cultural meaning” of what you discern. How you present this material is up to you, but it should be substantial. We’ve used short essays in assignments and four-quadrant grids in class, and those are particularly recommended. In the end, the documentation should state and support a specific point of view about the sonic properties of the brand.
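For students tracking their audit work, the viewpoints (a) through (d) above can be kept as a simple checklist. This sketch is mine, not part of the assignment, and all names in it are hypothetical:

```python
# The four viewpoints named in the assignment, as a checklist.
AUDIT_VIEWPOINTS = {
    "category_sounds": "sounds inherent in the category",
    "brand_sounds": "sounds exclusive to the brand/product",
    "cultural_references": "cultural references, e.g., song lyrics",
    "brand_history": "jingles, concert sponsorships, musician spokespeople",
}

def audit_coverage(findings: dict[str, list[str]]) -> list[str]:
    """Return the viewpoints for which no findings have been recorded yet."""
    return [key for key in AUDIT_VIEWPOINTS if not findings.get(key)]

# Example: an audit-in-progress with only category sounds noted so far.
in_progress = {"category_sounds": ["crinkle of the wrapper", "snap of the bar"]}
print(audit_coverage(in_progress))
```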

Next week: The software tools of sound, with an emphasis on Audacity and, just to nudge things a little, Max/MSP.

Note: I’ve tried to do these week-by-week updates of the course in the past, and I’m hopeful this time I’ll make it through all 15 weeks. Part of what held me up in the past was adding videos and documents, so this time I’m going to likely bypass that.

This first appeared in the March 24, 2015, edition of the free Disquiet “This Week in Sound” email newsletter: tinyletter.com/disquiet.


Sound Class, Week 6 of 15: Retail Space

Musique concrète, Ray Oldenburg's Third Place, sonic audits, coffee, homework


I was once eating dinner at a Japanese restaurant that was, truly, a mom and pop operation. Pop was in the back, preparing the food, and Mom was the sole waiter. There were no other evident employees, not even a dishwasher. Between the two of them they managed the tiny space, which had maybe six small tables in it. The mood was relaxed, the room quiet, the diners committed to an unwritten agreement to keep their conversations private. A light bit of music could be heard at a low volume, tasteful bits of ancient French pop songs, elegant pre-fusion jazz, and atmospheric cues from post-orchestral movie soundtracks. Nothing in the evening’s music sounded Japanese in origin, nor did it sound out of place. I asked Mom what we were listening to. She signaled that she would let me know soon, but that she was busy with all the tables. Later in the meal she appeared at my side and handed me, without comment, a thin, flimsy square of paper. A pink sleeve, it was the envelope that contained the CD we were all hearing. I turned the square around in my hands and read the cover text. It was a music sampler CD from a large chain of retail clothing stores.

It is generally understood that music can change the mood of a place. What became clear to me that evening was that a place can influence the appreciation of music. The role of sound in public commercial settings is a symbiotic one.

The sonic capacity of retail space is the subject of week 6 of the 15-week course that I teach on the role of sound in the media landscape at the Academy of Art in San Francisco. After three weeks spent studying listening, we then spend seven weeks on the second arc of the course: “sounds of brands.” (A third and final arc, “brands of sounds,” begins week 11.) After spending week 4 on the history of the jingle and week 5 on the role of sound in product design, we proceed to “retail space.” Each week of the 15-week course my plan is to summarize the previous class session here. Please keep in mind that three hours of lecture and discussion is roughly 25,000 words; this summary is just an outline, in this case less than 10 percent of what occurs in class.

We begin week 6 by revisiting the previous week’s class, and related recent vocabulary. We talk about how the terms “soundscape” and “soundmark” and “acoustemology” inform our understanding of sound in product design. While R. Murray Schafer did not develop the first two terms with product design in mind, a consideration of the use case for a given product certainly would include its sonic context; in turn, the sounds directly associated with, and unique to, a given product would certainly constitute its soundmarks, akin therefore not only to “landmark” but to “trademark.” I talk a bit more about Schafer’s work in acoustic ecology, and while correcting my accidental misspelling of his family name I connect him to his near-namesake, Pierre Schaeffer, who developed musique concrète. If Schafer wanted to preserve the sounds around us, then Schaeffer wanted to make something new of them: music constructed from everyday and other pre-recorded sound.

In part I revisit these terms from the week prior to reinforce them, but also to reference how we first discuss philosophical matters before proceeding to practical ones. Only after talking about soundscapes do we talk about how Audi uses an anechoic chamber to construct sounds for its electric cars, and how Harley-Davidson failed in its attempt to register a trademark of its motorcycle engine noise.

This week, only after the introduction of new terminology do we proceed to how coffee shops and clothiers use music to construct their environments and connect with consumers. We talk about “the third place,” the phrase developed by Ray Oldenburg to describe the place that is neither the first place (home) nor the second place (work). As Oldenburg writes, the third place is one “in which people relax in good company and do so on a regular basis. Some have coffee there before work. Some have a beer there after work. … Some drop by whenever it’s convenient.” I am quoting from Oldenburg’s introduction to Celebrating the Third Place: Inspiring Stories About the “Great Good Places” at the Heart of Our Communities, a collection he edited. Using his theories as a starting point, I connect his ideas to Tocqueville’s sense of Americans’ “habit of association.”

We work through the idea of the third place, discussing examples as a group and probing outlying cases. If a cafe, a barbershop, and a bar are classic third places, what then of a gym, or a place of worship, or a sports arena? What of public transportation? Most semesters someone asks about online spaces, like message boards, the comments of favorite websites, and email discussion lists.

And only then do we move from generalities to specifics, from theory to application. I walk through a variety of examples, discussing first a major coffee retailer, and how its CEO emphasizes the role of the third place in his development of the ubiquitous chain. I talk about the various ways in which the chain reverse-outsources its environment, letting you bring home, after purchase, not only its coffee beans but the cups it serves coffee in, and how, somewhat inevitably, after paying a lot of attention to the music in its stores, the company got into the music business. Just this month this chain announced it was going to stop selling CDs in its stores, but that is not a reflection of its attention to music, just of the marketplace for physical recordings. We dissect a coffee-shop television commercial, how the music and everyday sounds combine to make a certain impression. We then continue through examples of music in retail spaces, from the samplers of franchises to the online listening stations of clothing stores, an idea that connects back to the first week of this arc of the class: how today’s branded playlists on streaming-music services nod to the sponsored radio hours of a century ago.

And since we didn’t have the time last week, we talk about the Jacques Tati movie Playtime. While it was assigned in advance of a class meeting on the role of sound in product design, it just as well serves the purpose of discussion of the role of music, and more broadly sound, in public spaces.

We have recently completed one “sonic audit” project and are about to embark on another one, and so we talk through the components of an audit, how one can thoroughly draw out the role sound plays in a given subject. For a given retailer, what songs are associated with it, what is the sonic nature of its physical environment, how is it, perhaps, itself the subject of musical cultural references, such as song lyrics? These and other lines of inquiry comprise the sonic audit of a given subject. (Also, I’ve recently been working on a public-space project related to music. It’s not quite ready for me to talk about here, but I hope to soon. In class I discuss in some detail how the project came to be, and how music is being employed to reinforce the public space that it is filling.)

I revisit the quadrants we discussed last week, the ones in which we have “explicit” and “implicit” along one axis and “product” and “category” along the perpendicular axis. That means of coordinating sonic matters, of placing them in relative positions, will inform their new homework assignments. We’ll be discussing these quadrants in detail next week.

  • Homework

They will continue their sound journals, in which they write four times a week. And they will do the first of a two-part project, titled “Sonic Audit.” The instructions are as follows: The first part is to do a “sonic audit” of a specific brand/product of your choosing.

Your brand/product should not be inherently sonic; that is, for example, it should be a candy bar, not a headphone — a clothing store, not an MP3 player — an airline, not a mobile music app. You will explore the role of sound in the brand/product that you select. (You can, alternately, elect to focus on an industry/category, such as the Got Milk? and National Pork Board campaigns.)

In the process of developing your sonic audit you should look deeply at the brand/product from numerous viewpoints, such as, but not exclusive to, the following: (a) sounds inherent in the category, (b) sounds exclusive to the brand/product, (c) cultural references (e.g., song lyrics), (d) brand history (e.g., jingles, concert sponsorships, musician spokespeople), etc. Your presentation of your findings should consist not only of exhaustive examples you locate, but of the “cultural meaning” of what you discern. How you present this material is up to you, but it should be substantial. We’ve used short essays in assignments and four-quadrant grids in class, and those are particularly recommended. In the end, the documentation should state and support a specific point of view about the sonic properties of the brand.

Next week: the explicit and the implicit.

Note: I’ve tried to do these week-by-week updates of the course in the past, and I’m hopeful this time I’ll make it through all 15 weeks. Part of what held me up in the past was adding videos and documents, so this time I’m going to likely bypass that.

This first appeared in the March 10, 2015, edition of the free Disquiet “This Week in Sound” email newsletter: tinyletter.com/disquiet.


Sound Class, Week 5 of 15: Product Design

Soundscape, soundmark, acoustemology → potato chips, Harley engines, Windows 95

A potato chip, an electric car, and a TV network all walk into a classroom …

Well, not the best start to a joke. Nor is, “What do a motorcycle, an alarm clock, and an operating system have in common?” But however poorly crafted the jokes, the extent to which consumer products not considered inherently sonic often have carefully considered, and sometimes legally contested, sonic profiles is a rich topic to explore.

The role of sound in product design is the subject of week 5 of the 15-week course that I teach on the role of sound in the media landscape at the Academy of Art in San Francisco. After three weeks spent studying listening, we now spend seven weeks on the second arc of the course: “sounds of brands.” (A third and final arc, “brands of sounds,” begins week 11.) After spending week 4 on the history of the jingle, we proceed to “product design.” Each week of the 15-week course my plan is to summarize the previous class session here. Please keep in mind that three hours of lecture and discussion is roughly 25,000 words; this summary is just an outline, in this case less than 10 percent of what occurs in class.

Before we dive into particular products and their respective sonic design components, I back up to a broader subject, one that we then use to consider the role of sound in product design. I start by exploring the word “soundscape,” as developed by composer and acoustic ecologist R. Murray Schafer. I share this definition of soundscape: “[W]e regard the sounds of the environment as a great macro-cultural composition, of which man and nature are the composer/performers.” I explain how the definition appears in a document developed by Schafer’s World Soundscape Project, which he founded in 1969 at Simon Fraser University. The quote is from the World Soundscape Project Document No. 4 of 4: A Survey of Community Noise By-Laws in Canada, published in 1972. We talk about “soundscape” frequently in the class, and today we give it our most sustained attention of the semester, discussing how the term is rooted in the idea of a “landscape,” and how the terms differ. I mention that the “composer/performers” description might be helpfully expanded to “composer/performers/audience.”

From “soundscape” we move to “soundmark,” drawing on this helpful usage, also by R. Murray Schafer, from his 1977 book The Tuning of the World, later re-titled The Soundscape: “Once a soundmark has been identified, it deserves to be protected, for soundmarks make the acoustic life of a community unique.” We discuss how, much as “soundscape” is rooted in the term “landscape,” the term “soundmark” is rooted in “landmark” — and also, arguably, in “trademark,” which comes up later in today’s class. One of the great things about teaching at the Academy of Art is the international make-up of the student body, and we spend time today with students noting soundmarks from their own hometowns. I mention Big Ben in London, and the streetcars of New Orleans, and the Tuesday noon siren in San Francisco as examples.

Whenever I introduce a new term in class — from “ambient” to “anechoic” to “room tone” to “retronym” — I make a point that I ultimately don’t care if the students remember the specific words. I care that they remember the ideas the words represent. That’s a helpful distinction. It’s easy to remember specific definitions of terms, but harder to learn how to really employ a word, an idea, in one’s thoughts and writings. We do exercises where we explore the ideas inherent in a given term, without using the words at all. I don’t care if they remember the word “anechoic” years from now; I care that they remember the concept of the anechoic chamber, and perhaps have to scratch their head to remember what the actual word is.

I introduce the day’s third major new vocabulary term by explaining that it is the most complex of all the terms we’ll discuss, and the one they’re likely to have the greatest difficulty with. The word is “acoustemology,” and I find it highly useful. Here is the definition: “local conditions of acoustic sensation, knowledge, and imagination embodied in the culturally particular sense of place.” That’s from Steven Feld’s “Waterfalls of Song” in the collection Senses of Place, published in 1996. The term is a useful expansion of the idea of a “soundscape.” One of the complications of talking about a soundscape with students is the difference between, say, the soundscape one experiences in a given moment from the Platonic ideal soundscape associated with that same place. Feld’s concept of “acoustemology” gets at the inherent sonic potential, the sonic potential energy, of a place, and the term’s focus on culture gets at how humans don’t just contribute sound to an environment; they inherently lend meaning to sound, hence his emphasis on “imagination.”

After the mid-class break we talk through various examples of sound in product design, utilizing the ideas of soundscapes, soundmarks, and acoustemology. We discuss Dr. William E. Lee III, as quoted in a Discover article by Judith Stone, on the role of sound in potato chips. It’s an article I’ve been using ever since I started teaching this course, back in 2012. Dr. Lee says, at one point, “People will move snack food around the mouth to maximize noise. Kids have what I call noise wars — they crunch in such a way that they’re throwing noise at each other.”

I show a brief promotional video from the car manufacturer Audi talking about the role of sound in electric vehicles. We discuss the notion of sonic “skeuomorphism.” A skeuomorph is a design element that in its initial appearance had some functional role, and is later retained for largely decorative purposes. The term is often discussed in regard to Apple’s OS design prior to Jonathan Ive’s promotion from hardware design to also lead software design. We talk about the original shutter sound of a camera, later employed on digital cameras, and connect those sounds to the engine noises being produced by companies such as Audi for their electric vehicles.

As Michael B. Sapherstein noted in a 1998 analysis of Harley-Davidson, of the nearly three quarters of a million trademarks then enforced in the United States by the Patent and Trademark Office, the number related to sounds was … just 23. We discuss a suit by Harley against Honda regarding its engine noise, and Harley’s failed attempt to trademark that noise. And, among other examples, I bring up, by way of contrast, a sound not directly resulting from engineering, but one added consciously to a product: the startup sound of an operating system. Brian Eno developed a startup sound for Windows 95, and he famously was given by Microsoft a list of “about 150 adjectives” that the sound should encompass.

How product design connects to soundscapes, soundmarks, and acoustemology has to do with utilizing those frameworks as a means to consider the environment in which specific products are experienced, consumed, active.

One way I find helpful to consider such a thing is to draw a four-quadrant grid with “explicit” and “implicit” on one axis, and “product” and “category” on the perpendicular axis. We explore what sounds are “explicitly” associated with a given product, such as the “Snap, Crackle, and Pop” of Rice Krispies, and the engine noise of a Harley motorcycle, and those sounds that are more “implicit,” such as the lock of a car door or the beep of an external hard drive. One thing we explore is how such “implicit” sounds can become “explicit” through creative executions that tie the product to the sound, such as the click of the Microsoft Surface tablet.

We usually discuss Jacques Tati’s film Playtime at the end of this class, but this week was short due to a student-faculty event.

• Homework

As for homework, for the coming week students write four times in their ongoing sound journals, they propose subjects for in-class presentations, and they develop “brand playlists.” At this point in their sound journals, students are expected to have ceased doing journal entries that are simply lists of sounds, and instead be writing longer entries about fewer sounds — and, importantly, focused not on description as much as on meaning. For their presentation subjects they are instructed as follows: “It should be something personal to you, something important — a hobby, a favorite place you like to visit, a sport you play, etc. In your presentation you will discuss the role of sound in relation to that subject.” And this is the longest of the week’s homework assignments, the development of a “brand playlist”:

“You will develop three in-store ‘playlists’: sets of pre-recorded songs to be played in retail establishments. Each playlist is to be formatted as follows: (1) the name of the retail brand, (2) a brief slogan (ten words or fewer) summarizing your playlist’s approach, (3) a list of six example songs from the playlist (for each song note the artist, song title, album source, year of release), (4) a summary (approximately 200 words) of the creative approach you are taking and how it aligns with the store’s products, category, and audience — in particular, in some way the playlist should connect to either the implicit or explicit sounds of the given brand.”
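The four-part playlist format lends itself to a small data structure, useful as a self-check against the assignment’s requirements. This sketch is my own, not part of the course materials, and the field and function names are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Song:
    artist: str
    title: str
    album: str
    year: int

@dataclass
class BrandPlaylist:
    brand: str                                        # (1) name of the retail brand
    slogan: str                                       # (2) ten words or fewer
    songs: list[Song] = field(default_factory=list)   # (3) six example songs
    summary: str = ""                                 # (4) ~200-word creative approach

    def validate(self) -> list[str]:
        """Return a list of problems relative to the assignment's format."""
        problems = []
        if len(self.slogan.split()) > 10:
            problems.append("slogan exceeds ten words")
        if len(self.songs) != 6:
            problems.append("playlist needs exactly six songs")
        return problems
```

A student could fill one of these in per retailer and run `validate()` before submitting; the summary’s word count and the implicit/explicit connection remain judgment calls the code doesn’t check.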

Next week: The sound of retail space.

Note: I’ve tried to do these week-by-week updates of the course in the past, and I’m hopeful this time I’ll make it through all 15 weeks. Part of what held me up in the past was adding videos and documents, so this time I’m going to likely bypass that.

This first appeared in the March 3, 2015, edition of the free Disquiet “This Week in Sound” email newsletter: tinyletter.com/disquiet.


Sound Class, Week 4 of 15: The Jingle

Sounds of brands, ancient markets, news callers, Texaco, Spotify, Brylcreem, homework


The commercial jingle took a strange turn at the birth of radio. To understand that detour it can help to listen further back, to trace the jingle to the very birth of commerce, long before recorded music — arguably long before recorded history.

The “jingle” is the subject of the fourth week of the course I teach on the role of sound in the media landscape at the Academy of Art in San Francisco. After three weeks spent studying listening, the fourth week is the start of the second arc of the course: “sounds of brands.” This second arc is the longest of the course’s three arcs, and runs through week 10. Each week of the 15-week course my plan is to summarize the previous class session here. Please keep in mind that three hours of lecture and discussion is roughly 25,000 words; this summary is just an outline, in this case less than 10 percent of what occurs in class.

As with the previous two weeks, the structure of this lecture is based around a timeline of sorts. For class meeting two it is “the history of listening,” and for class meeting three (last week) it is “a trajectory of the use of sound in film and (later) television.” This week it is a rough outline of “the history of the jingle.” The outline reads as follows. This is less a timeline than a sequence of talking points in rough chronological order:

• the development of the jingle

We start with the definition of “jingle,” a word that dates to the 14th century, is “of imitative origin,” and has cognates in Dutch and German. In time it came to be a verb, and by the mid-1600s it had expanded to mean a “catchy array of words in prose or verse.” Its employment as a “song in an advertisement” dates from around 1930, fairly recently. But if the usage is recent, the role of the jingle is not.

• from the Moroccan market to newsboys

We consider the purpose and benefit of the jingle. As early as there were marketplaces, there was the need for a product to distinguish itself, for a caller to attract consumers, to get them to visit one stall rather than another. That practice continues to this day in some markets, and had something of a heyday in modern times with the “newsboy,” who could announce bits of the headlines but still make purchase of the paper requisite for getting the full story.

• song sheets

There’s a received assumption that connects the jingle specifically to a commercial song, a ditty written to sell a product. I talk a bit about popular singers who got their start as jingle writers. But as the word’s definition explains, the “catchy” verse preceded what we have come to think of as a full song — which isn’t to say we had to wait until the rise of radio and recorded music for the jingle to be a proper song. One artifact of interest is the advertising or promotional “song sheet,” as documented by Elizabeth C. Axford and by Timothy D. Taylor, among others. The song sheet, in its day, was a promotional song given as a small gift to consumers, for example when they visited a Studebaker dealership to test-drive a vehicle. The genius, in retrospect, of the song sheet was that people would then return to their homes and play the advertiser’s jingle themselves on the family’s parlor piano. Talk about “viral.” The practice makes the Max Headroom “blipvert” seem like a brute force attack by comparison.

• Burma Shave

These popular roadside signs (e.g., “Don’t pass cars / on curve or hill / If the cops don’t get you / morticians will / Burma Shave”) didn’t kick in until well into the 20th century, but they serve as a good example of a modern jingle that isn’t truly a song, and also how a jingle can be crafted to suit its environment. The question that lingers over this class meeting is: “What is the Burma Shave of the Internet?”

• Texaco Star Theatre

The odd detour I mention early on is how at first radio was not a matter of interstitial advertising, as we experience it today, but of sponsored hours. To that end, for many years early in radio one had a positive association with an advertiser because their name was affixed to a regular weekly variety show. Only later on did radio stations stop selling “time” and start selling “audience.” The jingle as we know it may have its roots in the markets of yore, but it only really took shape once brands needed to make the most of a half minute or so of advertising, after the hour-long sponsorship had faded. We may not have solved the riddle that is the “Burma Shave of the Internet,” but we can draw a fairly straight line from the Texaco radio hour, and its ilk, to modern-day resurgences of the practice, such as “branded playlists” on Spotify.

For this week’s class, the students’ homework included a research and analysis project. The assignment read in part: “Identify a single song that’s been used more than once (three times at least) in different settings to promote different products/services from different companies. Explain the role that the song plays in the varied executions, and how it’s employed differently in each setting.” In class I break them into small groups, of three or four students each, and they compare what they learned in their research. The goal for each group is to develop a list of best practices they agree upon for employing a pre-existing song to represent an organization, brand, or service. We then collate these best practices again when the whole class reconvenes to sort out what the individual groups decided.

I usually show a few archaic commercials at this point. We already marveled at some Kit Kat candy commercials in recent weeks. We now watch an animated Chiquita TV commercial that explains how you don’t refrigerate bananas, and compare it with how, say, early iPod commercials had to teach the viewer how to use the (then) new touch interface. We also watch an early Brylcreem commercial, and I investigate how the melody is quite expertly insinuated into the narrative before it appears explicitly as a jingle. The close reading of the Brylcreem spot requires several repeat viewings and a lot of pausing, as we did the week prior with a scene from the David Fincher version of The Girl with the Dragon Tattoo.

• Homework

For the next week they have three assignments. They are to write in their sound journals, as always, four times in the given week. They are to read an interview with former KCRW DJ Nic Harcourt, to learn about the role of the music supervisor. And they are to watch Jacques Tati’s 1967 film Playtime and write about the role of sound in its narrative. I warn them that if they found The Conversation, which we watched for homework two weeks prior, to be a little slow, then Playtime is about half its speed.

Note: I’ve tried to do these week-by-week updates of the course in the past, and I’m hopeful this time I’ll make it through all 15 weeks. Part of what held me up in the past was adding videos and documents, so this time I’m going to likely bypass that.

This first appeared in the February 24, 2015, edition of the free Disquiet email newsletter: tinyletter.com/disquiet.


Sound Class, Week 3 of 15: Sound Design as Score

The Conversation, Walter Murch, surveillance culture, retronyms, Southland

20150217-week3

Quick question: How many microphones are in the room you are in right now?

That’s the fun and instructive little parlor game I like to play with my students a few weeks into the sound course I teach. They each share their guess with me, and then I talk through all the technology in the room, and rarely is anyone even close to the right number. The students are often a little surprised, at first, to learn that many of their phones have three microphones, and that the most recent MacBook Air laptops have two microphones, to allow for noise cancellation of ambient sound from the environment. They forget that many of their earbuds come with microphones, as do their iPads, their iPods, their Bluetooth headpieces. We’re meeting in a classroom, so there’s no concern about their game consoles, or their smart thermostats, or their home-security system. By the end of the exercise, they are a little anxious, which is productive because this week we discuss Francis Ford Coppola’s masterpiece thriller, The Conversation. By the end of the exercise, we’re all a bit like Gene Hackman at the end of the film: wondering where the surveillance is hidden.

Almost every week of the class has at its heart a question to which I do not have an answer. The question this week is: How is the increasing ubiquity of recording equipment in everyday life transforming the role of sound in film and television?

This is the third week of the course I teach at the Academy of Art on the role of sound in the media landscape. The third week marks the close of the first of three arcs that comprise the course. First come three weeks of “Listening to Media,” then seven weeks of “Sounds of Brands,” and then five weeks of “Brands of Sounds.” If the first week of the course is about the overall syllabus and the second week looks back on 150,000 years of human history, how hearing has developed biologically, culturally, and technologically, then the third week focuses on just barely 100 years: We look at how film and, later, television have shaped the way sound is employed creatively.

Each week my plan is to summarize the previous class session here. Please keep in mind that three hours of lecture and discussion is roughly 25,000 words; this summary is just an outline, maybe 10 percent of what occurs in class.

As with last week’s “Brief History of Listening,” this week uses a timeline as the spine of the discussion, or in this case a “trajectory.” For most of this class meeting, this “trajectory” appears on the screen at the head of the room:

•• A Brief Trajectory of Film Sound

• filmed theater
• moving camera / editing
• synchronized sound (1927, Jazz Singer)
• talkie → term “silent film” (cf. “acoustic guitar”)
• orchestral score (classical tradition)
• electronic (tech, culture, economics)
• underscore (Cliff Martinez, Lisa Gerrard, Clint Mansell, David Holmes, Underworld, Nathan Larson)
• sound design as score
• side note: score as entertainment (audio-games, sound toys, apps)

I’ll now, as concisely as possible, run through what we discuss for each of those items.

•• A Brief Trajectory of Film Sound

• filmed theater

We begin at the beginning of film, and discuss how new media, when they first arise, often mimic previous media: early film was, in large part, filmed theater.

• moving camera / editing

In time the combination of moving cameras and editing provided film with a visual vocabulary of narrative tools that distinguished it from filmed theater.

• synchronized sound (1927, Jazz Singer)

We talk about how the introduction of sound to film doesn’t coincide with the introduction of recorded sound. The issue isn’t recording sound. It is the complexity of synchronization.

• talkie → term “silent film” (cf. “acoustic guitar”)

The word “retronym” is useful here. A retronym is a specific type of neologism. A “neologism” is a newly coined word. A retronym is a new word for an old thing required when a new thing arises that puts the old thing in new light. The applicable joke goes as follows:

Q: What was an acoustic guitar called before the arrival of the electric guitar?

A: A guitar.

We also discuss the brief life of the term “cameraphone,” which was useful before cameras became so ubiquitous that a consumer no longer makes a decision about whether or not to buy a phone with a camera. Given the rise of social photography, it’s arguable that cellphones are really cameras that also have other capabilities.

In any case, that tentative sense of technological mid-transition is at the heart of this part of the discussion, about how films with sound were initially as distinct as phones with cameras, and how in time the idea of a movie without sound became the isolated, unusual event. We talk about how the “silent” nature of “silent film” is a fairly popular misunderstanding, and that silent films in their heyday were anything but, from the noise of the projectors, to the rowdiness of the crowds, to the musical accompaniment (often piano).

• orchestral score (classical tradition)

We discuss how the orchestral nature of film scores was not unlike the way films originated in large part as filmed theater. The orchestral score connected the audience experience to mass entertainments, like theater and opera and musicals, in which orchestras and chamber ensembles were the norm. Long after the notion of filmed theater had been supplanted by a narrative culture unique to film, the norm of the orchestral score lingered.

• electronic (tech, culture, economics)

We discuss the rise of the electronic score, how the transition from orchestral to electronic involved a lot of different forces. Technology had to become accessible; changes in pop culture eventually required music that no longer felt outdated to couples out on a date; and economics meant that large Hollywood studios, which often had their own orchestras and production procedures, needed incentives to try something new.

• underscore (Cliff Martinez, Lisa Gerrard, Clint Mansell, David Holmes, Underworld, Nathan Larson)

The broad-strokes sequence of how movie scores changed since the rise of the talkie has three stages, from traditional orchestral scores, to early electronic scores that mimic orchestral scoring, to electronic scores that have their own unique vocabularies. (That’s leaving aside groundbreaking but also way-ahead-of-their-time efforts such as Bebe and Louis Barron’s Forbidden Planet.) I highlight the work of a handful of composers, all of whom to varying degrees employ what can be called “underscoring”: scores that rarely reach the crescendos of old-school melodramatic orchestral scores, and that often meld with the overall sound design of the filmed narrative they are part of. (I also note that all of these folks came out of semi-popular music: Cliff Martinez played with the Dickies, Captain Beefheart, and the Red Hot Chili Peppers; Lisa Gerrard with Dead Can Dance; Clint Mansell with Pop Will Eat Itself; and Nathan Larson with Shudder to Think. Underworld is a band, and David Holmes is a DJ and solo electronic musician.)

• sound design as score

Where I’ve been heading with this “trajectory” discussion — I call it a trajectory rather than a “timeline” because of the sense of momentum in this particular topic — is to focus on contemporary work in which sound design is the score. To emphasize the transition, I show a series of short videos. We watch the opening few minutes of a 1951 episode of Dragnet and then the opening portion of an episode of Southland, which closely follows the model of Dragnet: the martial score, the civic-minded officer’s point of view, the spoken introduction, the emphasis on “real” stories. The difference is that the melodramatic score of Dragnet is dispensed with in Southland, as is the notion of a score at all. Southland, which aired from 2009 through 2013, has no music once its filmic opening credits are over. Well, it’s not that there’s no music in Southland. It’s that any music one hears appears on screen, bleeding from a passing car, playing on the stereo in a doctor’s office, serving as the ringtone on someone’s cellphone. All sound in the show collectively serves the role once reserved largely for score. When there’s a thud, or a gunshot, or a droning machine, it touches on the psychology of the given scene’s protagonist.

To make my point about the way in which sound design serves as a score, I play an early clip from I Love Lucy, and contrast the early employment of laugh tracks in that show with portions of M*A*S*H, another sitcom, that lacked laugh tracks. I talk about the extent to which much movie scoring is little more than a laugh track for additional emotions.

We then spend about 15 or 20 minutes watching over and over the same very brief sequence from David Fincher’s version of The Girl with the Dragon Tattoo, which I dissect for the gray zone between where the movie’s sound ends and the score by Trent Reznor and Atticus Ross begins. (If I have time in the next few weeks, I may do a standalone post with screenshots and/or video snippets that break down the sequence.)

In the work of Fincher, Reznor, and Ross we have a masterpiece of underscoring. The film isn’t quite in Southland’s territory, but it is closing in on it. I then show two videos that work well together. These are promotional interviews, mini-documentaries, one of Jeff Rona talking about his work on the submarine movie Phantom and the other of Robert Duncan talking about his work on the submarine TV series Last Resort. The videos are strikingly similar, in that both show Rona and Duncan separately going into submarines, turning off the HVAC, and banging on things to get source audio for their respective efforts. All the better for comparison’s sake, the end results are quite different, with Duncan pursuing something closer to a classic orchestral sound, and Rona pursuing a Fourth World vibe, more electronic, more pan-cultural, more textured. What is key is that the sounds of the scores then lend a sense of space, of real acoustic space, to the narratives whose actions they accompany.

Some semesters I also play segments from The Firm, to show the rare instance of a full-length, big-budget Hollywood film that has only a piano for a score, and Enemy of the State, to show references to The Conversation, and an interview with composer Nathan Larson, who like Rona and Duncan speaks quite helpfully about using real-world sounds in his scoring.

In advance of the class meeting, the students watch Francis Ford Coppola’s 1974 masterpiece, The Conversation. This is a core work in sound studies, thanks to both Walter Murch’s sound design in the film, and to the role of sound in the narrative. Gene Hackman plays a creative and sensitive private eye, who uses multiple microphones to capture an illicit conversation. Sorting out what is said on that tape causes his undoing. It’s helpful that the building where I teach my class is just a few blocks from Union Square, where the opening scene of the film is shot. We discuss Walter Murch and his concept of “worldizing,” of having sound in the film match the quality experienced by the characters in the film. For class they read a conversation between Murch and Michael Jarrett, a professor at Penn State, York. They are also required to choose three characters other than Gene Hackman’s, and talk about the way sound plays a role in each character’s development. After the discussion, we listen in class to a short segment from Coppola’s The Godfather, released two years before The Conversation, in which Al Pacino kills for the first time, and discuss how there is no score for the entire sequence, just the sound of a nearby train that gets louder and louder — not because it is getting closer, but because its sound has come to represent the tension in the room, the blood in Pacino’s ears, the racing of his heart. This isn’t mere representation. It is a psychological equivalent of Murch’s worldizing, in which everyday sounds take on meaning to a character because of the character’s state of mind. Great acoustics put an audience in a scene. Great sound design puts an audience in a character’s head.

The students also do some self-surveillance in advance of the class meeting. The exercise works well enough on its own, but it’s especially productive when done in parallel with The Conversation, which at its heart is ambivalent at best about the ability of technology to yield truth. The exercise, which I listed in full in last week’s class summary here, has them take a half-hour bus trip, and then compare what they recall from the trip with the sound they record of the trip: which sounds did they miss, and which sounds did they imagine?

When there is time, I like to close the trajectory/timeline with “score as entertainment (audio-games, sound toys, apps),” and extend the lessons from film and television into other areas, like video games, but there was not enough time this class meeting.

• Homework

For the following week’s homework, there are three assignments. In their sound journals students are to dedicate at least one entry to jingles (the subject of the following week’s class) and one to the sound in film or television. They are to watch an assigned episode of Southland and detail the role of sound in the episode’s narrative, the way sound design serves as score. And they are to locate one song that has been used multiple times in different TV commercials and discuss how it means different things in different contexts.


This first appeared in the February 17, 2015, edition of the free Disquiet email newsletter: tinyletter.com/disquiet.
