
Sound Class, Week 5 of 15: Product Design

Soundscape, soundmark, acoustemology -> potato chips, Harley engines, Windows 95

A potato chip, an electric car, and a TV network all walk into a classroom …

Well, not the best start to a joke. Nor is, “What do a motorcycle, an alarm clock, and an operating system have in common?” But however poorly crafted the jokes, the extent to which consumer products not considered inherently sonic often have carefully considered, and sometimes legally contested, sonic profiles is a rich topic to explore.

The role of sound in product design is the subject of week 5 of the 15-week course that I teach on the role of sound in the media landscape at the Academy of Art in San Francisco. After three weeks spent studying listening, we now spend seven weeks on the second arc of the course: “sounds of brands.” (A third and final arc, “brands of sounds,” begins week 11.) After spending week 4 on the history of the jingle, we proceed to “product design.” Each week of the 15-week course my plan is to summarize the previous class session here. Please keep in mind that three hours of lecture and discussion is roughly 25,000 words; this summary is just an outline, in this case less than 10 percent of what occurs in class.

Before we dive into particular products and their respective sonic design components, I back up to a broader subject, one that we then use to consider the role of sound in product design. I start by exploring the word “soundscape,” as developed by composer and acoustic ecologist R. Murray Schafer. I share this definition of soundscape: “[W]e regard the sounds of the environment as a great macro-cultural composition, of which man and nature are the composer/performers.” I explain how the definition appears in a document developed by Schafer’s World Soundscape Project, which he founded in 1969 at Simon Fraser University. The quote is from the World Soundscape Project’s Document No. 4, A Survey of Community Noise By-Laws in Canada, published in 1972. We talk about “soundscape” frequently in the class, and today we focus on it at greater length than at any other point in the semester, discussing how the term is rooted in the idea of a “landscape,” and how the two terms differ. I mention that the “composer/performers” description might be helpfully expanded to “composer/performers/audience.”

From “soundscape” we move to “soundmark,” drawing on this helpful usage, also by R. Murray Schafer, from his 1977 book The Tuning of the World, later re-titled The Soundscape: “Once a soundmark has been identified, it deserves to be protected, for soundmarks make the acoustic life of a community unique.” We discuss how, much as “soundscape” is rooted in the term “landscape,” the term “soundmark” is rooted in “landmark” — and also, arguably, in “trademark,” which comes up later in today’s class. One of the great things about teaching at the Academy of Art is the international make-up of the student body, and we spend time today with students noting soundmarks from their own hometowns. I mention Big Ben in London, the streetcars of New Orleans, and the Tuesday noon siren in San Francisco as examples.

Whenever I introduce a new term in class — from “ambient” to “anechoic” to “room tone” to “retronym” — I make a point of saying that I ultimately don’t care if the students remember the specific words. I care that they remember the ideas the words represent. That’s a helpful distinction. It’s easy to remember specific definitions of terms, but harder to learn how to really employ a word, an idea, in one’s thoughts and writings. We do exercises where we explore the ideas inherent in a given term, without using the words at all. I don’t care if they remember the word “anechoic” years from now; I care that they remember the concept of the anechoic chamber, and perhaps have to scratch their head to remember what the actual word is.

I introduce the day’s third major new vocabulary term by explaining that it is the most complex of all the terms we’ll discuss, and the one they’re likely to have the greatest difficulty with. The word is “acoustemology,” and I find it highly useful. Here is the definition: “local conditions of acoustic sensation, knowledge, and imagination embodied in the culturally particular sense of place.” That’s from Steven Feld’s “Waterfalls of Song” in the collection Senses of Place, published in 1996. The term is a useful expansion of the idea of a “soundscape.” One of the complications of talking about a soundscape with students is the difference between, say, the soundscape one experiences in a given moment and the Platonic ideal soundscape associated with that same place. Feld’s concept of “acoustemology” gets at the inherent sonic potential, the sonic potential energy, of a place, and the term’s focus on culture gets at how humans don’t just contribute sound to an environment; they inherently lend meaning to sound, hence his emphasis on “imagination.”

After the mid-class break we talk through various examples of sound in product design, utilizing the ideas of the soundscape, the soundmark, and acoustemology. We discuss Dr. William E. Lee III, as quoted in a Discover article by Judith Stone, on the role of sound in potato chips. It’s an article I’ve been using ever since I started teaching this course, back in 2012. Dr. Lee says, at one point, “People will move snack food around the mouth to maximize noise. Kids have what I call noise wars — they crunch in such a way that they’re throwing noise at each other.”

I show a brief promotional video from the car manufacturer Audi about the role of sound in electric vehicles. We discuss the notion of sonic “skeuomorphism.” A skeuomorph is a design element that in its initial appearance had some functional role, and is later retained for largely decorative purposes. The term is often discussed in regard to Apple’s OS design prior to Jonathan Ive taking on leadership of software design in addition to hardware design. We talk about the original shutter sound of a camera, later employed on digital cameras, and connect those sounds to the engine noises being produced by companies such as Audi for their electric vehicles.

As Michael B. Sapherstein wrote in a 1998 analysis of Harley-Davidson, as of that year, of the nearly three quarters of a million trademarks enforced in the United States by the Patent and Trademark Office, the number related to sounds was … just 23. We discuss Harley’s suit against Honda regarding engine noise, and Harley’s failed attempt to trademark that noise. And, among other examples, I bring up, by way of contrast, a sound not directly resulting from engineering, but one added consciously to a product: the startup sound of an operating system. Brian Eno developed a startup sound for Windows 95, and Microsoft famously gave him a list of “about 150 adjectives” that the sound should encompass.

How product design connects to soundscapes, soundmarks, and acoustemology has to do with utilizing those frameworks to consider the environments in which specific products are experienced, consumed, and active.

One helpful way to consider such a thing is to draw a four-quadrant grid with “explicit” and “implicit” on one axis, and “product” and “category” on the perpendicular axis. We explore which sounds are “explicitly” associated with a given product, such as the “Snap, Crackle, and Pop” of Rice Krispies and the engine noise of a Harley motorcycle, and which sounds are more “implicit,” such as the lock of a car door or the beep of an external hard drive. One thing we explore is how such “implicit” sounds can become “explicit” through creative executions that tie the product to the sound, such as the click of the Microsoft Surface tablet.

We usually discuss Jacques Tati’s film Playtime at the end of this class, but this week was short due to a student-faculty event.

• Homework

As for homework, for the coming week students write four times in their ongoing sound journals, they propose subjects for in-class presentations, and they develop “brand playlists.” At this point in their sound journals, students are expected to have ceased doing journal entries that are simply lists of sounds, and instead to be writing longer entries about fewer sounds — and, importantly, focused not on description as much as on meaning. For their presentation subjects they are instructed as follows: “It should be something personal to you, something important — a hobby, a favorite place you like to visit, a sport you play, etc. In your presentation you will discuss the role of sound in relation to that subject.” And this is the longest of the week’s homework assignments, the development of a “brand playlist”:

“You will develop three in-store “playlists”: sets of pre-recorded songs to be played in retail establishments. Each playlist is to be formatted as follows: (1) the name of the retail brand, (2) a brief slogan (ten words or fewer) summarizing your playlist’s approach, (3) a list of six example songs from the playlist (for each song note the artist, song title, album source, year of release), (4) a summary (approximately 200 words) of the creative approach you are taking and how it aligns with the store’s products, category, and audience — in particular, in some way the playlist should connect to either the implicit or explicit sounds of the given brand.”

Next week: The sound of retail space.

Note: I’ve tried to do these week-by-week updates of the course in the past, and I’m hopeful this time I’ll make it through all 15 weeks. Part of what held me up in the past was adding videos and documents, so this time I’m likely going to bypass that.

This first appeared in the March 3, 2015, edition of the free Disquiet “This Week in Sound” email newsletter: tinyletter.com/disquiet.


Sound Class, Week 4 of 15: The Jingle

Sounds of brands, ancient markets, news callers, Texaco, Spotify, Brylcreem, homework


The commercial jingle took a strange turn at the birth of radio. To understand that detour it can help to listen further back, to trace the jingle to the very birth of commerce, long before recorded music — arguably long before recorded history.

The “jingle” is the subject of the fourth week of the course I teach on the role of sound in the media landscape at the Academy of Art in San Francisco. After three weeks spent studying listening, the fourth week is the start of the second arc of the course: “sounds of brands.” This second arc is the longest of the course’s three arcs, and runs through week 10. Each week of the 15-week course my plan is to summarize the previous class session here. Please keep in mind that three hours of lecture and discussion is roughly 25,000 words; this summary is just an outline, in this case less than 10 percent of what occurs in class.

As with the previous two weeks, the structure of this lecture is based around a timeline of sorts. For class meeting two it was “the history of listening,” and for class meeting three (last week) it was “a trajectory of the use of sound in film and (later) television.” This week it is a rough outline of “the history of the jingle.” This is less a timeline than a sequence of talking points in rough chronological order:

•• the development of the jingle

We start with the definition of “jingle,” a word that originates in the 14th century, “of imitative origin,” with relatives in Dutch and German. In time it comes to be a verb, and by the mid-1600s it expands to mean a “catchy array of words in prose or verse.” Its employment to mean a “song in an advertisement” dates from around 1930, fairly recently. But if the usage is recent, the role of the jingle is not.

• from the Moroccan market to newsboys

We consider the purpose and benefit of the jingle. As early as there were marketplaces there was the need for a product to distinguish itself, for a caller to attract consumers, to get them to visit one stall rather than another. That practice continues to this day in some markets, and had something of a heyday in modern times with the “newsboy,” who could announce bits of the headlines while still making purchase of the paper requisite for getting the full story.

• song sheets

There’s a received assumption that connects the jingle specifically to a commercial song, a ditty written to sell a product. I talk a bit about popular singers who got their start as jingle writers. But as the word’s definition explains, the “catchy” verse preceded what we have come to think of as a full song — which isn’t to say we had to wait until the rise of radio and recorded music for the jingle to be a proper song. One artifact of interest is the advertising or promotional “song sheet,” as documented by Elizabeth C. Axford and by Timothy D. Taylor, among others. The song sheet, in its day, was a promotional song given as a small gift to consumers, for example when they visited a Studebaker dealership to test-drive a vehicle. The genius, in retrospect, of the song sheet was that people would then return to their homes and play the advertiser’s jingle themselves on the family’s parlor piano. Talk about “viral.” The practice makes the Max Headroom “blipvert” seem like a brute-force attack by comparison.

• Burma Shave

These popular roadside signs (e.g., “Don’t pass cars / on curve or hill / If the cops don’t get you / morticians will / Burma Shave”) didn’t kick in until well into the 20th century, but they serve as a good example of a modern jingle that isn’t truly a song, and also how a jingle can be crafted to suit its environment. The question that lingers over this class meeting is: “What is the Burma Shave of the Internet?”

• Texaco Star Theatre

The odd detour I mention early on is how at first radio was not a matter of interstitial advertising, as we experience it today, but of sponsored hours. To that end, for many years early in radio one had a positive association with an advertiser because its name was affixed to a regular weekly variety show. Only later did radio stations stop selling “time” and start selling “audience.” The jingle as we know it may have its roots in the markets of yore, but it only really took shape once brands needed to make the most of a half minute or so of advertising, after the hour-long sponsorship had faded. We may not have solved the riddle that is the “Burma Shave of the Internet,” but we can draw a fairly straight line from the Texaco radio hour, and its ilk, to modern-day resurgences of the practice, such as “branded playlists” on Spotify.

For this week’s class, the students’ homework included a research and analysis project. The assignment read in part: “Identify a single song that’s been used more than once (three times at least) in different settings to promote different products/services from different companies. Explain the role that the song plays in the varied executions, and how it’s employed differently in each setting.” In class I break them into small groups of three or four students each, and they compare what they learned in their research. The goal for each group is to develop a list of best practices they agree upon for employing a pre-existing song to represent an organization, brand, or service. We then collate these best practices when the whole class reconvenes to sort out what the individual groups decided.

I usually show a few archaic commercials at this point. We already marveled at some Kit Kat candy commercials in recent weeks. We now watch an animated Chiquita TV commercial that explains how you don’t refrigerate bananas, and compare it with how, say, early iPod commercials had to teach the viewer how to use the (then) new touch interface. We also watch an early Brylcreem commercial, and I examine how the melody is quite expertly insinuated into the narrative before it appears explicitly as a jingle. The close reading of the Brylcreem commercial requires several repeat viewings and a lot of pausing, as we did the week prior with a scene from the David Fincher version of The Girl with the Dragon Tattoo.

• Homework

For the next week they have three assignments. They are to write in their sound journals, as always four times in the given week. They are to read an interview with former KCRW DJ Nic Harcourt, to learn about the role of the music supervisor. And they are to watch Jacques Tati’s 1967 film Playtime and write about the role of sound in its narrative. I warn them that if they found The Conversation, which we watched for homework two weeks prior, to be a little slow, Playtime is about half its speed.

Note: I’ve tried to do these week-by-week updates of the course in the past, and I’m hopeful this time I’ll make it through all 15 weeks. Part of what held me up in the past was adding videos and documents, so this time I’m likely going to bypass that.

This first appeared in the February 24, 2015, edition of the free Disquiet email newsletter: tinyletter.com/disquiet.


Sound Class, Week 3 of 15: Sound Design as Score

The Conversation, Walter Murch, surveillance culture, retronyms, Southland


Quick question: How many microphones are in the room you are in right now?

That’s the fun and instructive little parlor game I like to play with my students a few weeks into the sound course I teach. They each share their guess with me, and then I talk through all the technology in the room, and rarely is anyone even close to the right number. The students are often a little surprised, at first, to learn that many of their phones have three microphones, and that the most recent MacBook Air laptops have two microphones, to allow for noise cancellation of ambient sound from the environment. They forget that many of their earbuds come with microphones, as do their iPads, their iPods, their Bluetooth headsets. We’re meeting in a classroom, so there’s no concern about their game consoles, or their smart thermostats, or their home-security systems. By the end of the exercise, they are a little anxious, which is productive, because this week we discuss Francis Ford Coppola’s masterpiece thriller, The Conversation. We’re all a bit like Gene Hackman at the end of the film: wondering where the surveillance is hidden.

Almost every week of the class has at its heart a question to which I do not have an answer. This week’s question: How is the increasing ubiquity of recording equipment in everyday life transforming the role of sound in film and television?

This is the third week of the course I teach at the Academy of Art on the role of sound in the media landscape. The third week marks the close of the first of the three arcs that comprise the course. First come three weeks of “Listening to Media,” then seven weeks of “Sounds of Brands,” and then five weeks of “Brands of Sounds.” If the first week of the course is about the overall syllabus, and the second week looks back on 150,000 years of human history (how hearing has developed biologically, culturally, and technologically), then the third week focuses on just barely 100 years: we look at how film and, later, television have shaped the way sound is employed creatively.

Each week my plan is to summarize the previous class session here. Please keep in mind that three hours of lecture and discussion is roughly 25,000 words; this summary is just an outline, maybe 10 percent of what occurs in class.

As with last week’s “Brief History of Listening,” this week uses a timeline as the spine of the discussion, or in this case a “trajectory.” For most of this class meeting, this “trajectory” appears on the screen at the head of the room:

•• A Brief Trajectory of Film Sound

• filmed theater
• moving camera / editing
• synchronized sound (1927, The Jazz Singer)
• talkie → term “silent film” (cf. “acoustic guitar”)
• orchestral score (classical tradition)
• electronic (tech, culture, economics)
• underscore (Cliff Martinez, Lisa Gerrard, Clint Mansell, David Holmes, Underworld, Nathan Larson)
• sound design as score
• side note: score as entertainment (audio-games, sound toys, apps)

I’ll now, as concisely as possible, run through what we discuss for each of those items.

•• A Brief Trajectory of Film Sound

• filmed theater

We begin at the beginning of film, discussing how new media, when they first arise, often mimic previous media; in this case, early film was often simply filmed theater.

• moving camera / editing

In time, the combination of the moving camera and editing provides film with a visual vocabulary of narrative tools that distinguishes it from filmed theater.

• synchronized sound (1927, The Jazz Singer)

We talk about how the introduction of sound to film doesn’t coincide with the introduction of recorded sound. The issue isn’t recording sound. It is the complexity of synchronization.

• talkie → term “silent film” (cf. “acoustic guitar”)

The word “retronym” is useful here. A retronym is a specific type of neologism. A “neologism” is a newly coined word. A retronym is a new word for an old thing required when a new thing arises that puts the old thing in new light. The applicable joke goes as follows:

Q: What was an acoustic guitar called before the arrival of the electric guitar?

A: A guitar.

We also discuss the brief life of the term “cameraphone,” which was useful before cameras became so ubiquitous that a consumer no longer had to decide whether or not to buy a phone with a camera. Given the rise of social photography, it’s arguable that cellphones are really cameras that also have other capabilities.

In any case, that tentative sense of technological mid-transition is at the heart of this part of the discussion, about how films with sound were initially as distinct as phones with cameras, and how in time the idea of a movie without sound became the isolated, unusual event. We talk about how the “silent” nature of “silent film” is a popular misconception, and that silent films in their heyday were anything but, from the noise of the projectors, to the rowdiness of the crowds, to the musical accompaniment (often piano).

• orchestral score (classical tradition)

We discuss how the orchestral nature of film scores was not unlike the way films originated in large part as filmed theater. The orchestral score connected the audience experience to mass entertainments, like theater and opera and musicals, in which orchestras and chamber ensembles were the norm. Long after the notion of filmed theater had been supplanted by a narrative culture unique to film, the norm of the orchestral score lingered.

• electronic (tech, culture, economics)

We discuss the rise of the electronic score, how the transition from orchestral to electronic involved a lot of different forces. Technology had to become accessible, changes in pop culture eventually required music that no longer felt outdated to couples out on a date, and economics meant that large Hollywood studios, which often had their own orchestras and production procedures, needed incentives to try something new.

• underscore (Cliff Martinez, Lisa Gerrard, Clint Mansell, David Holmes, Underworld, Nathan Larson)

The broad-strokes sequence of how movie scores changed since the rise of the talkie has three stages, from traditional orchestral scores, to early electronic scores that mimic orchestral scoring, to electronic scores that have their own unique vocabularies. (That’s leaving aside groundbreaking but also way-ahead-of-their-time efforts such as Bebe and Louis Barron’s Forbidden Planet.) I highlight the work of a handful of composers, all of whom to varying degrees employ what can be called “underscoring”: scores that rarely reach the crescendos of old-school melodramatic orchestral scores, and that often meld with the overall sound design of the filmed narrative they are part of. (I also note that all of these folks came out of semi-popular music: Cliff Martinez played with the Dickies, Captain Beefheart, and the Red Hot Chili Peppers; Lisa Gerrard with Dead Can Dance; Clint Mansell with Pop Will Eat Itself; and Nathan Larson with Shudder to Think. Underworld is a band, and David Holmes is a DJ and solo electronic musician.)

• sound design as score

Where I’ve been heading with this “trajectory” discussion — I call it a trajectory rather than a “timeline” because I sense a momentum in this particular topic — is to focus on contemporary work in which sound design is the score. To emphasize the transition, I show a series of short videos. We watch the opening few minutes of a 1951 episode of Dragnet and then the opening portion of an episode of Southland, which closely follows the model of Dragnet: the martial score, the civic-minded officer’s point of view, the spoken introduction, the emphasis on “real” stories. The difference is that the melodramatic score of Dragnet is dispensed with in Southland, as is the notion of a score at all. Southland, which aired from 2009 through 2013, had no music once its filmic opening credits were over. Well, it’s not that there’s no music in Southland. It’s that any music one hears appears on screen, bleeding from a passing car, playing on the stereo in a doctor’s office, serving as the ringtone on someone’s cellphone. All sound in the show collectively serves the role once reserved largely for the score. When there’s a thud, or a gunshot, or a droning machine, it touches on the psychology of the given scene’s protagonist.

To make my point about the way in which sound design serves as a score, I play an early clip from I Love Lucy, and contrast that show’s early employment of the laugh track with portions of M*A*S*H, another sitcom, that lacked laugh tracks. I talk about the extent to which much movie scoring is little more than a laugh track for additional emotions.

We then spend about 15 or 20 minutes watching over and over the same very brief sequence from David Fincher’s version of The Girl with the Dragon Tattoo, which I dissect for the gray zone between where the movie’s sound ends and the score by Trent Reznor and Atticus Ross begins. (If I have time in the next few weeks, I may do a standalone post with screenshots and/or video snippets that break down the sequence.)

In the work of Fincher, Reznor, and Ross we have a masterpiece of underscoring. The film isn’t quite in Southland’s territory, but it is closing in on it. I then show two videos that work well together. These are promotional interviews, mini-documentaries, one of Jeff Rona talking about his work on the submarine movie Phantom and the other of Robert Duncan talking about his work on the submarine TV series Last Resort. The videos are strikingly similar, in that both show Rona and Duncan separately going into submarines, turning off the HVAC, and banging on things to get source audio for their respective efforts. All the better for comparison’s sake, the end results are quite different, with Duncan pursuing something closer to a classic orchestral sound, and Rona pursuing more of a Fourth World vibe: more electronic, more pan-cultural, more textured. What is key is that the sounds of the scores then lend a sense of space, of real acoustic space, to the narratives whose actions they accompany.

Some semesters I also play segments from The Firm, to show the rare instance of a full-length, big-budget Hollywood film that has only a piano for a score, and Enemy of the State, to show references to The Conversation, and an interview with composer Nathan Larson, who like Rona and Duncan speaks quite helpfully about using real-world sounds in his scoring.

In advance of the class meeting, the students watch Francis Ford Coppola’s 1974 masterpiece, The Conversation. This is a core work in sound studies, thanks both to Walter Murch’s sound design in the film and to the role of sound in the narrative. Gene Hackman plays a creative and sensitive private eye, who uses multiple microphones to capture an illicit conversation. Sorting out what is said on that tape causes his undoing. It’s helpful that the building where I teach my class is just a few blocks from Union Square, where the opening scene of the film is shot. We discuss Walter Murch and his concept of “worldizing,” of having sound in the film match the quality experienced by the characters in the film. For class they read a conversation between Murch and Michael Jarrett, a professor at Penn State York. They are also required to choose three characters other than Gene Hackman’s, and talk about the way sound plays a role in their character development. After the discussion, we listen in class to a short segment from Coppola’s The Godfather, released two years before The Conversation, in which Al Pacino kills for the first time, and discuss how there is no score for the entire sequence, just the sound of a nearby train that gets louder and louder — not because it is getting closer, but because its sound has come to represent the tension in the room, the blood in Pacino’s ears, the racing of his heart. This isn’t mere representation. It is a psychological equivalent of Murch’s worldizing, in which everyday sounds take on meaning to a character because of the character’s state of mind. Great acoustics put an audience in a scene. Great sound design puts an audience in a character’s head.

The students also do some self-surveillance in advance of the class meeting. The exercise works well enough on its own, but it’s especially productive when done in parallel with The Conversation, which at its heart is ambivalent at best about the ability of technology to yield truth. The exercise, which I listed in full in last week’s class summary here, has them take a half-hour bus trip, and then compare what they recall from the trip with the sound they recorded of the trip: which sounds did they miss, which sounds did they imagine?

When there is time, I like to close the trajectory/timeline with “score as entertainment (audio-games, sound toys, apps),” and extend the lessons from film and television into other areas, like video games, but there was not enough time this class meeting.

• Homework

For the following week’s homework, there are three assignments. In their sound journals students are to dedicate at least one entry to jingles (the subject of the following week’s class) and one to the sound in film or television. They are to watch an assigned episode of Southland and detail the role of sound in the episode’s narrative, the way sound design serves as score. And they are to locate one song that has been used multiple times in different TV commercials and discuss how it means different things in different contexts.

Note: I’ve tried to do these week-by-week updates of the course in the past, and I’m hopeful this time I’ll make it through all 15 weeks. Part of what held me up in the past was adding videos and documents, so this time I’m likely going to bypass that.

This first appeared in the February 17, 2015, edition of the free Disquiet email newsletter: tinyletter.com/disquiet.


Sound Class, Week 2 of 15: A Brief History of Listening

Celebrity death, 150,000 years in 3 hours, John Cage, Kit Kats, Whitney Houston


The question at the heart of the second meeting of the sound course I teach to a mix of BA and MFA students is something of a hypothetical, a historical one: Imagine it is 1750 and the fellow who sang songs at the pub in your town every Friday night has, quite suddenly and unexpectedly, died. You will never hear his voice again. What is that like? How is that loss experienced — how was that 18th-century celebrity death experienced? And as we ponder the historical question, we consider further what someone in 1750 didn’t know, couldn’t necessarily have conceived of: that a few centuries later we would have recordings of our favorite musicians, recordings that would largely constitute their artistic legacy. To wrap one’s head around that kind of loss — that is what this week’s class meeting is an attempt at.

The second week of sound class is titled “A Brief History of Listening.” In three hours we cover roughly 150,000 years of human history, and still have time to talk about candy bars and Whitney Houston, and to go over the previous week’s homework, which included reading an essay by neuroscientist Seth S. Horowitz and reading an interview with composer and acoustic ecologist R. Murray Schafer.

Needless to say, this is all handled in a fairly succinct manner. This lecture and discussion is part of an initial three-week build up to the core of the course.

I teach my course about the role of sound in the media landscape at the Academy of Art in San Francisco. The first three weeks of the course focus on listening to media, followed by seven weeks on the “sounds of brands,” followed by the final five weeks, which are dedicated to “brands of sounds.” The class meets for three hours every Wednesday starting at noon, and there are nine hours of assigned homework. Each week my plan is to summarize the previous class session here. Please keep in mind that three hours of lecture and discussion is roughly 25,000 words; this summary is just an outline, less than 10 percent of what occurs in class.

Week 1 of class was largely given over to the syllabus and to a handful of examples to get discussion and ideas flowing. I had 7 students last semester, and I have more than twice as many this semester, so I’m adjusting to the number of voices in class. As a result, I have a little bit of material left over from the opening week’s lecture that I need to cover, and this is where the candy bars come in.

Much as during the first week I found it useful to focus on various examples of JJ Abrams’ work in television and film to show how a single individual can embrace sound as a creative part of a broader, collaborative cultural pursuit, this week I spend a few minutes watching old TV commercials with the students. First I show two Kit Kat candy commercials from the late 1980s, in which the “Gimme a Break” jingle is so absurdly optimistic that it verges on, like the worst jingles, a kind of corporate pop-culture jingoism.

I apologize for my generation, though we were more victims of that culture than perpetrators of it. And then I show two Kit Kat commercials from the past five years, in which the same jingle plays out with each note sourced from on-screen, real-life, real-world sound — quotidian sound. One of these commercials is shot in a library, where books being shut and computer keyboards being typed on collectively play the Kit Kat jingle. There is a second such commercial set on the steps outside the library, the main difference being that the second one has the louder outdoor background sound of the city, a thick urban hum.

There’s an enormous amount that can be unpacked from these two commercials, in particular the idea of field recordings, of everyday sound, having sonorous, musical qualities, and of how these commercials connect the act of “taking a break” (that being the “idea,” such as it is, at the heart of this particular brand of candy, much as “happiness” is central to Coca-Cola’s marketing endeavors) to the actual jingle. After watching one of these commercials, the next time you type or close a book, you will likely hear the jingle in your head. These commercials take the corniest aspect of sound branding — the jingle — and make it somehow tasteful. The full fourth class meeting in this course will focus on jingles, so I pretty much leave it there, except to show one contemporaneous Kit Kat commercial from India that makes the 1980s American commercials look subtle by comparison.

We then close the loop with an exercise from the previous class meeting. The first class included two listening exercises. At the very start of the first class, students spent 15 minutes writing down every sound they heard. This introduced them to the sound journal they will write in four times a week for the length of the course. After a brief mid-period break that first class meeting, they spent 10 minutes writing down every sound they associated with the first few minutes after waking up on an average Tuesday morning. For the second class, part of their homework was to do just that: wake on Tuesday, the morning before class, and write down everything they heard. In class I then return their exercise from the first week to them, and we compare and contrast what they actually heard with what they had imagined hearing.

For most of the remainder of this class meeting, a timeline appears on the screen at the head of the room. It reads as follows. I apologize that this is a ridiculously brief & largely Western timeline, but it’s still useful:

•• A Brief Timeline of Listening

• 90k ~ 50k BC: human hearing & speech
• ~3300 BC: Sumerian proto-Cuneiform
• ~3000 BC: ancient Egypt homing pigeons
• 750 ~ 550 BC: “oral culture becomes written culture”
• 1450s: moveable type / Gutenberg
• 1850s: recorded sound
• 1870s: the telephone
• 1952: John Cage’s 4’33”
• 1993: Mosaic browser / World Wide Web

I’ll now, as briefly as possible, run through what we discuss for each of those items.

•• A Brief Timeline of Listening

• 90k ~ 50k BC: human hearing & speech

This number probably goes back another 50,000 to 100,000 years, and what is up for grabs is what it means to be human: what communication constituted before we had the physical capability of hearing, and how long a gap there was between our ability to hear and our development of speech.

• ~3300 BC: Sumerian proto-Cuneiform

However long the gap between our development of hearing and speaking, there was in turn a gap before the rise and proliferation of notated speech — of notated thought. This all helps set the stage for the introduction, far in the future, of recorded (and notated, though we don’t discuss it in depth here) sound.

• ~3000 BC: ancient Egypt homing pigeons

We could mark this transition in human expression at several points along our collective timeline, but the homing pigeon makes a stronger model than the horse because a message carried by a horse suggests a distance, a form of travel, that a human might take, while the pigeon follows a path that people cannot as easily traverse. This is, in essence, the telephone, the Internet, of its time. The ability to send information a very long distance emphasizes how language is, itself, a form of technology.

• 750 ~ 550 BC: “oral culture becomes written culture”

In the homework reading from week 1, R. Murray Schafer talks about how complaints about noise pollution go back to Roman days. Here we talk about an ancient Greek anxiety expressed by Socrates, who says to his interlocutor, Phaedrus: “If men learn this, it will plant forgetfulness in their souls.” What is the “this” in that sentence? Writing. I pause here and play some music by the late Whitney Houston, not her singing, just the background music, what is listed on singles as the “instrumental track.” We listen to what a Whitney Houston song sounds like without Whitney Houston, which leads to an extended group conversation that explores the 1750 hypothetical I mention up top. It’s a very engaging topic for discussion, and we try to imagine what loss was like at a time before recorded sound. I can barely scratch the surface here, but this is one of my favorite topics in a class that I love to teach.

• 1450s: moveable type / Gutenberg
• 1850s: recorded sound

It remains the case that people mistakenly say Gutenberg “invented the printing press,” and after clearing that up we talk about moveable type as a precursor to recorded sound. Our experience of recorded sound has a strong precedent in the development of printing and, later, moveable type.

• 1870s: the telephone

I go over a brief history of its development, and matters of technological adoption in general. Much as humans didn’t all wake up one day able to hear and speak, or later with the ability to read and write, nor did we all suddenly have telephones delivered to our front doors. Recent episodes of Downton Abbey, set exactly 90 years ago, helpfully reinforce this. (Also: radio.)

• 1952: John Cage’s 4’33”

I introduced the concept of an anechoic chamber — a space designed to have no echo — in the previous class, and here expand on it by talking about John Cage. I stick to his greatest hits: his famous anechoic-chamber anecdote, his 4’33” composition, and his book Silence. I quote him from Silence, in which he connects the ideas behind 4’33” to the glass houses of Mies van der Rohe (how they “reflect their environment, presenting to the eye images of clouds, trees, or grass, according to the situation”) and the sculptures of Richard Lippold (“it is inevitable that one will see other things, and people too, if they happen to be there at the same time”). We focus discussion on this statement of Cage’s: “There is always something to see, something to hear. In fact, try as we may to make a silence, we cannot.” The students have undertaken sound journals, and emphasizing that silence is an “idea,” not an actual real thing, is helpful in getting them to listen to that which we have long been taught not to hear. If, as William Gibson said, “cyberspace” is a consensual hallucination of a place, then “silence” is a consensual hallucination of an absence.

• 1993: Mosaic browser / World Wide Web

If I’ve learned anything in the six semesters I’ve taught this class, it is to not overestimate the benefits of talking to students about the past 20 years of rapid increase in technology. So, I just end my timeline with the introduction of the Mosaic browser, which I posit as a division not unlike the one on the other side of which stand those people back in 1750 who didn’t know what they were missing — or so we 21st-century listeners might contend, and condescend.

• Homework

And I’ll close here with the homework that I assign in advance of the third week. There will be four more weekly sound journal entries. There will be a viewing, of the 1974 Francis Ford Coppola film The Conversation. There will be one reading: an interview with sound designer Walter Murch conducted by Michael Jarrett (whose recent book Producing Country: The Inside Story of the Great Recordings is great — and shares its publisher, Wesleyan, with Cage’s Silence). And there is one listening exercise. I’ll end by copying and pasting the exercise directly from the homework:

Exercise: This should take between an hour and a half and two hours to complete. Part A: First, plot out a bus ride or a walk (BART is also fine) that will take approximately one half hour, and during which you’re unlikely to run into anyone or be required to speak with anyone. (If you elect for the bus route, which I recommend, you should remain on the same bus for the full half hour.) Use your phone or another device to record the complete half-hour length of your trip. Part B: Immediately after the trip is over, sit down and make an annotated list of the sounds you recall from the trip. Part C: Immediately after that, listen back to the tape all the way through; make an annotated list of the relative prominence of sounds you had or hadn’t noticed or paid attention to. Part D: Create and send to me a document containing the lists that resulted from Parts B and C above.

And next week, in part three of “Listening to Media,” the class will focus on “The Score”: not on 150,000 years of human history, but on 100 years of film and, later, television. Which is why we’re watching — and listening to — Coppola’s The Conversation.

Note: I’ve tried to do these week-by-week updates of the course in the past, and I’m hopeful this time I’ll make it through all 15 weeks. Part of what held me up in the past was adding videos and documents, so this time I’m likely going to bypass that.

This first appeared in the February 10, 2015, edition of the free Disquiet email newsletter: tinyletter.com/disquiet.


Sound Class, Week by Week

A breakdown of the syllabus; a newsletter of class summaries

[Image: breakdown of the topics for each of the 15 weeks]

As I mentioned yesterday, the 15 weeks of the sound class I teach here in San Francisco at the Academy of Art are divided into three arcs. Above is a breakdown of the topics for each of the 15 weeks. I’ll be summarizing each week’s class meeting in the email newsletter I publish on Tuesdays at tinyletter.com/disquiet, and I’ll post the material here at Disquiet.com. Today’s session — week two, “A Brief History of Listening” — was largely about celebrity death, more on which in next week’s email newsletter.
