
Two More Listeners

A recording engineer and a sound artist discuss making listening heard.

[Image: visitors to one of Stephen Vitiello’s sound installations]

Earlier this week I posted responses I’d made to a series of questions about listening posed by Steve Ashby, who teaches music at Virginia Commonwealth University. Two more people have replied to Ashby’s questions, and I wanted to share segments of their thoughts here, both of them responding to the fourth, and core, question in Ashby’s survey: “How does one make their listening listened to?”

This is Bryan Walthall, a recording and mastering engineer who runs Stereo Image in Richmond, Virginia:

my perception of the way music sounds has changed greatly over the past 15 years. my favorite records when i was a kid (hendrix, nirvana) sound completely different! sometimes it breaks my heart because they don’t have the exact same magic they did when i was younger. its as if my “suspension of reality” has been diminished because I’ve seen the sausage being made for 15 years. for the most part they still evoke the same emotional response, but it has been diminished. i hear things completely different now, because i know how they were achieved. thats good for me making records, but the kid in me gets a little bummed sometimes that i can’t just listen to the song, i have to “hear the drums” or “know thats a plate and not a spring” or that “thats obviously a vocal double.”

This is the sound artist Stephen Vitiello. Up top is an image of visitors to one of his sound installations:

I mostly hope to achieve this in installation environments. Setting lighting in a space, comfortable seating, establishing a volume level and a speaker system that works well with the material are all important. Also, removing or minimizing visual distractions is vital – so that it is clear that in the work I’m presenting, sound is primary and not secondary to any sort of visual content. As I re-read these responses, it seems I’m hoping to create a space for the installations that goes back to what I used to create for myself when listening to a new record for the first time.

Ashby is archiving the responses at his ashbysounds.com website, and on his syllabus page at VCU’s rampages.us site.


In the Province of Real Time Electronica

MUTEK’s Patti Schmidt on how Jurassic Park helped birth — and how emphasis on scenography and human scale helps sustain — the music festival


The following interview is with Patti Schmidt, a longtime programmer for the MUTEK festival in Montréal, Canada. The interview took place during the final class session of the spring 2015 semester of the class that I teach about the role of sound in the media landscape at the Academy of Art in San Francisco. Schmidt joined us via Skype.

I frequently invite professionals — musicians, startup representatives, coders, sound designers, publicists — to speak in my class. Rather than ask the guests to prepare a presentation, I interview them in front of the class, and then have the students themselves ask questions. This is a lightly edited transcript of Schmidt’s appearance in class. The interview took place on Wednesday, May 13, 2015, just before the 16th annual MUTEK festival, which ran from May 27 through May 31.

Marc Weidenbaum: First, thanks, Patti. I’d like to introduce Patti to the class. This is Patti Schmidt from MUTEK. She’s going to be talking with us today via Skype.

Patti, these are the students in the sound class I teach here at the Academy of Art in San Francisco. The class is about the role of sound in the media landscape. These last five or six weeks, we’ve been focused on what we call “brands of sounds,” which is how things related to sounds “brand” themselves, how they express themselves in the marketplace. That followed six or seven weeks on the opposite subject, which was “sounds of brands,” about how things — objects, organizations, services — use sound to make an impression.

I tend to end each semester talking about music, and often I’ll have a music publicist come and talk about the challenges of the past 10 years as the record industry has changed, how streaming and other changes in the music and recording industries have shifted their attentions and skills and so forth.

Patti’s speaking with us in class actually began as the result of an interaction with someone in music PR, who reached out to ask if I’d be interested in writing an article somewhere about MUTEK, or covering the festival in some way. I replied that I don’t really cover festivals much. Then I suggested we do this instead: have Patti address the class in the form of a live interview, which I’d then edit and post at Disquiet.com. The MUTEK publicist was enthusiastic about the approach. Patti, could you start just by talking to the class a bit about what MUTEK is and a bit about what you do there?

Patti Schmidt: MUTEK is an electronic music and digital creativity festival having its 16th edition this year. It started in Montréal in the year 2000. The director of MUTEK is Alain Mongeau, and in the mid-’90s he was the president of something called ISEA, an electronic arts organization that’s based in the Netherlands. ISEA was one of the first international organizations to really become concerned with the role of digital media and digital sound and digital art. So, he helped host the 1995 edition of ISEA here in Montréal, and his idea was that Montréal is a very unique and weird city in North America, because there have been all kinds of technology-leading industries and arts here. The video game industry — Ubisoft [a French company] — is based here; Softimage, which was responsible for Jurassic Park and all these very early special effects, was based here; and Cirque du Soleil, all this stuff. There are a lot of big spectacle, innovative tech things that have come out of this province — one that you would think might otherwise be isolated because of language, because French is the first language spoken here. But somehow through technology and technologically driven art and spectacle, including electronic music, Montréal has sort of distinguished itself in the world. Alain helped start a venue here in Montréal called the Society for Arts and Technology, or the SAT, as we call it, and it’s become a real hub for a lot of research on immersive performances, visual works, sound works.

ISEA was a way for Alain, in 1995, to attempt to really root this idea of innovation in music and performance in Montréal. He went on to program a component for a film festival that was concerned with new media, the Festival du Nouveau Cinéma. They gave him a component called the Media Lounge for five years, where in the late 1990s he would bring in people like Richie Hawtin, who at the time was rather unknown and would be presenting minimal sound and interactive light installations. This was the beginning of laptops becoming an important tool not only for music, but for visual work, and it became possible to compose on these brand-new, portable, reasonably affordable tools. So there was an explosion of art and music going on all over the world, and he programmed components of this film festival for a few years. Then he was given some seed money by the guy from Softimage to begin the very first edition of MUTEK, which was hosted inside a big complex dedicated to new media that this guy had also just started, called Excentris, around 1999.

That was the basic background on MUTEK. A few years later, maybe 2003 or 2004, Alain also — because he has this sort of global view and a positive idea of globalization and technology — started planting seeds for other MUTEKs in South America, and a “micro” MUTEK festival happened in Chile. Then a few years later — it’s now into its 11th year — Mexico City began its MUTEK franchise. This is all, like, “open source,” no money — we don’t receive any money from these festivals at all. It was more about the idea of inverting the axis of the music industry, which usually goes from North America to Europe, so horizontal, and instead doing a vertical axis — Montréal down to Latin America — where these emergent economies and artistic communities existed, communities that were also just beginning to use computers and digital technologies to make music and to plug into a whole global circuit. Alain has a personal history in Latin America, which made this possible. He speaks Spanish; his father is a university professor. They were in Chile during the coup in 1973, and he is very comfortable working these angles. So now MUTEK Mexico is 11 years old — MUTEK Argentina has sort of moved to Mexico. We just started a version of the festival in Colombia. The ones in Chile are a little bit dormant. We also have an outpost in Barcelona, Spain, which is European but is also a place where tons of Latin American expats end up. The festival has a real mission and mandate to always cultivate local audiences and the kinds of artists and communities that are left out of the regular, Western-dominated global conversation about technology and music — that’s an essential interest of the festival. And over the years, as well, MUTEK has cultivated a local community here in Montréal. A number of them — a big chunk of the local artists who helped start MUTEK Montréal — have since relocated to Berlin, and they have quite vibrant careers there, so we work this axis as well. And we still always try to cultivate, and throw into our international network, local artists who are innovative in using technology. There are other interesting things to look at over the course of the 16-year history of a festival that takes technology as its important taking-off point: this technology is constantly mutating, and evolving, and changing, and if you’re going to stay relevant you have to stay on top of what those changes are.


Sound Class, Week 7 of 15: Explicit vs. Implicit

Vocabulary refresher, a useful series of quadrants, breakfast cereal, OS startup sounds


On the very first day of class I share this sequence:

Hearing → Listening → Discerning → Describing → Analyzing → Interpreting → Implementing →

That is, in a handful or so of words, a map of the 15-week course that I teach on the role of sound in the media landscape at the Academy of Art in San Francisco.

The first semester I taught the course, back in 2012, a student raised a hand from the back of the room and asked, in effect, if I am making up any of the words we use. I suppose hearing “anechoic” and “acoustemology,” among other less esoteric terms, over and over takes its toll, and I replied that I did not make up any of the words. I did, however, take responsibility for two familiar words used in a particular context. Those words, and that context, are the subject of week 7.

First some background on the course, in case this is the first week you’ve read one of these summaries: Each week of the 15-week course my plan is to summarize the previous class session here. Please keep in mind that three hours of lecture and discussion is roughly 25,000 words; this summary is just an outline, in this case less than 10 percent of what occurs in class. Some class meetings emphasize more discussion than others. Week 7 this semester is especially discussion-heavy, and hence the lecture outline here is fairly cursory.

I start off week 7 by reviewing recent vocabulary. When this goes well, we don’t stop with the words I initially reprise, words like “soundscape” and “soundmark,” and, yes, “anechoic” and “acoustemology.” We discuss how the first two develop out of the work of R. Murray Schafer, how the third relates to John Cage, and how the fourth comes out of the work of Steven Feld. To revisit the previous week’s class meeting, on the role of sound in retail space, we discuss Ray Oldenburg’s concept of the “Third Place.” In turn, student queries lead to additional vocabulary refreshers, among them the sonic equivalents of so-called “skeuomorphic” design (the shutter sound of digital cameras serves as a good example), “haptic” feedback, and the difference between a “neologism” and a “retronym.”

Then we proceed to those two fairly common terms I mentioned up above, “explicit” and “implicit,” which we employ in a specific context. For the purposes of discussion, an “explicit” sound related to a subject is one closely tied, in the public imagination, to it, such as the “plop plop, fizz fizz” of Alka-Seltzer, or the anthropomorphized Snap, Crackle, and Pop of Rice Krispies. In contrast, “implicit” sounds are those that are to some extent inherent in a given subject, but that are not fully, for lack of a more nuanced term, branded. Different makes of door lock, for example, will sound different upon close inspection, but it would be hard to make a case that to anyone other than a discerning thief those sounds are closely associated with the locks.

We begin by drawing a grid, two by two, and we put those two words on the Y axis. On the X axis, horizontally, we write “category” and “product.” The remainder of week 7 involves working through how sounds can be oriented in those four quadrants. This plays out in various ways, largely as a result of group discussion, and thus it doesn’t translate particularly well to summary. So, I’ll just emphasize some things I’ve learned when teaching this class:

  • It’s important to keep top of mind that the quadrants in this two-by-two grid lie along a continuum. Students often mistake them for four independent, if interrelated, categories. That’s not the case. (A brief sketch of this continuum framing appears after this list.)

  • An operating system startup sound is a useful example. The startup sound itself began deep in the implicit/category zone, and was later elevated to explicit/product when Apple and Windows, just to note two examples, developed unique audio logos.
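
For the programmatically inclined, here is a minimal sketch, in Python, of the continuum idea. It is a hypothetical illustration rather than anything we use in class, and the example sounds and the scores assigned to them are invented for the purpose: each sound gets a continuous coordinate on each axis, and a quadrant label is derived from those coordinates only for discussion.

    # A sketch of the explicit/implicit (Y) vs. category/product (X) grid.
    # Each sound gets two continuous scores in [0.0, 1.0] rather than a hard
    # quadrant assignment, to emphasize that the grid is a continuum.
    from dataclasses import dataclass

    @dataclass
    class SoundPlacement:
        name: str
        explicitness: float          # 0.0 = implicit, 1.0 = explicit (Y axis)
        product_specificity: float   # 0.0 = category-wide, 1.0 = product-specific (X axis)

        def quadrant(self) -> str:
            """Snap the continuous placement to a named quadrant for discussion."""
            y = "explicit" if self.explicitness >= 0.5 else "implicit"
            x = "product" if self.product_specificity >= 0.5 else "category"
            return f"{y}/{x}"

    # Illustrative placements; the scores are invented, not measured.
    sounds = [
        SoundPlacement("Rice Krispies' Snap, Crackle, Pop", 0.95, 0.90),
        SoundPlacement("generic door-lock click", 0.10, 0.20),
        SoundPlacement("early, generic OS startup sound", 0.30, 0.20),
        SoundPlacement("Apple startup chord", 0.90, 0.85),
    ]

    for s in sounds:
        print(f"{s.name}: ({s.explicitness:.2f}, {s.product_specificity:.2f}) -> {s.quadrant()}")

The point of modeling the axes as numbers rather than as four buckets is the same point made above: a sound like a startup chime can drift along the continuum over time, from implicit/category toward explicit/product, without ever belonging cleanly to one box.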

Homework: The homework for week 8 is to take another pass on the research from week 7, which involved the development of a “sonic audit.” This week in class we take time, in small groups, to compare notes about how to apply the explicit/implicit grids to the students’ chosen topics, which range from Oreo cookies to Nike sneakers to Rolex watches. The assignment is as follows: Do a “sonic audit” of a specific brand/product of your choosing.

Your brand/product should not be inherently sonic; that is, for example, it should be a candy bar, not a headphone — a clothing store, not an MP3 player — an airline, not a mobile music app. You will explore the role of sound in the brand/product that you select. (You can, alternately, elect to focus on an industry/category, such as the Got Milk? and National Pork Board campaigns.)

In the process of developing your sonic audit you should look deeply at the brand/product from numerous viewpoints, such as, but not exclusive to, the following: (a) sounds inherent in the category, (b) sounds exclusive to the brand/product, (c) cultural references (e.g., song lyrics), (d) brand history (e.g., jingles, concert sponsorships, musician spokespeople), etc. Your presentation of your findings should consist not only of exhaustive examples you locate, but of the “cultural meaning” of what you discern. How you present this material is up to you, but it should be substantial. We’ve used short essays in assignments and four-quadrant grids in class, and those are particularly recommended. In the end, the documentation should state and support a specific point of view about the sonic properties of the brand.

Next week: The software tools of sound, with an emphasis on Audacity and, just to nudge things a little, Max/MSP.

Note: I’ve tried to do these week-by-week updates of the course in the past, and I’m hopeful this time I’ll make it through all 15 weeks. Part of what held me up in the past was adding videos and documents, so this time I’m going to likely bypass that.

This first appeared in the March 24, 2015, edition of the free Disquiet “This Week in Sound” email newsletter: tinyletter.com/disquiet.


Sound Class, Week 3 of 15: Sound Design as Score

The Conversation, Walter Murch, surveillance culture, retronyms, Southland


Quick question: How many microphones are in the room you are in right now?

That’s the fun and instructive little parlor game I like to play with my students a few weeks into the sound course I teach. They each share their guess with me, and then I talk through all the technology in the room, and rarely is anyone even close to the right number. The students are often a little surprised, at first, to learn that many of their phones have three microphones, and that the most recent MacBook Air laptops have two microphones, to allow for noise cancellation of ambient sound from the environment. They forget that many of their earbuds come with microphones, as do their iPads, their iPods, their Bluetooth headpieces. We’re meeting in a classroom, so there’s no concern about their game consoles, or their smart thermostats, or their home-security systems. By the end of the exercise, they are a little anxious, which is productive, because this week we discuss Francis Ford Coppola’s masterpiece thriller, The Conversation. By then, we’re all a bit like Gene Hackman at the end of the film: wondering where the surveillance is hidden.

Almost every week of the class has at its heart a question to which I do not have an answer. The question this week is: How is the increasing ubiquity of recording equipment in everyday life transforming the role of sound in film and television?

This is the third week of the course I teach at the Academy of Art on the role of sound in the media landscape. The third week marks the close of the first of three arcs that make up the course. First come three weeks of “Listening to Media,” then seven weeks of “Sounds of Brands,” and then five weeks of “Brands of Sounds.” If the first week of the course is about the overall syllabus and the second week looks back on 150,000 years of human history, how hearing has developed biologically, culturally, and technologically, then the third week focuses on just barely 100 years: We look at how film and, later, television have shaped the way sound is employed creatively.

Each week my plan is to summarize the previous class session here. Please keep in mind that three hours of lecture and discussion is roughly 25,000 words; this summary is just an outline, maybe 10 percent of what occurs in class.

As with last week’s “Brief History of Listening,” this week uses a timeline as the spine of the discussion, or in this case a “trajectory.” For most of this class meeting, this “trajectory” appears on the screen at the head of the room:

•• A Brief Trajectory of Film Sound

• filmed theater
• moving camera / editing
• synchronized sound (1927, Jazz Singer)
• talkie → term “silent film” (cf. “acoustic guitar”)
• orchestral score (classical tradition)
• electronic (tech, culture, economics)
• underscore (Cliff Martinez, Lisa Gerrard, Clint Mansell, David Holmes, Underworld, Nathan Larson)
• sound design as score
• side note: score as entertainment (audio-games, sound toys, apps)

I’ll now, as concisely as possible, run through what we discuss for each of those items.

•• A Brief Trajectory of Film Sound

• filmed theater

We begin at the beginning of film, and discuss how when new media arise they often initially mimic previous media, in this case showing how early film was often filmed theater.

• moving camera / editing

In time the combination of moving cameras and editing provides film with a visual vocabulary of narrative tools that distinguish it from filmed theater.

• synchronized sound (1927, Jazz Singer)

We talk about how the introduction of sound to film doesn’t coincide with the introduction of recorded sound. The issue isn’t recording sound. It is the complexity of synchronization.

• talkie → term “silent film” (cf. “acoustic guitar”)

The word “retronym” is useful here. A retronym is a specific type of neologism. A “neologism” is a newly coined word. A retronym is a new word for an old thing required when a new thing arises that puts the old thing in new light. The applicable joke goes as follows:

Q: What was an acoustic guitar called before the arrival of the electric guitar?

A: A guitar.

We also discuss the brief life of the term “cameraphone,” which was useful before cameras became so ubiquitous that a consumer no longer makes a decision about whether or not to buy a phone with a camera. Given the rise of social photography, it’s arguable that cellphones are really cameras that also have other capabilities.

In any case, that tentative sense of technological mid-transition is at the heart of this part of the discussion, about how films with sound were initially as distinct as phones with cameras, and how in time the idea of a movie without sound became the isolated, unusual event. We talk about how the “silent” nature of “silent film” is a fairly popular misunderstanding, and that silent films in their heyday were anything but, from the noise of the projectors, to the rowdiness of the crowds, to the musical accompaniment (often piano).

• orchestral score (classical tradition)

We discuss how the orchestral nature of film scores was not unlike the way films originated in large part as filmed theater. The orchestral score connected the audience experience to mass entertainments, like theater and opera and musicals, in which orchestras and chamber ensembles were the norm. Long after the notion of filmed theater had been supplanted by a narrative culture unique to film, the norm of the orchestral score lingered.

• electronic (tech, culture, economics)

We discuss the rise of the electronic score, how the transition from orchestral to electronic involved a lot of different forces. Technology had to become accessible, changes in pop culture eventually required music that no longer felt outdated to couples out on a date, and finally economics meant that large Hollywood studios, which often had their own orchestras and production procedures, needed incentives to try something new.

• underscore (Cliff Martinez, Lisa Gerrard, Clint Mansell, David Holmes, Underworld, Nathan Larson)

The broad-strokes sequence of how movie scores changed since the rise of the talkie has three stages, from traditional orchestral scores, to early electronic scores that mimic orchestral scoring, to electronic scores that have their own unique vocabularies. (That’s leaving aside groundbreaking but also way-ahead-of-their-time efforts such as Bebe and Louis Barron’s Forbidden Planet.) I highlight the work of a handful of composers, all of whom to varying degrees employ what can be called “underscoring”: scores that rarely reach the crescendos of old-school melodramatic orchestral scores, and that often meld with the overall sound design of the filmed narrative they are part of. (I also note that all of these folks came out of semi-popular music: Cliff Martinez played with the Dickies, Captain Beefheart, and the Red Hot Chili Peppers, Lisa Gerrard with Dead Can Dance, Clint Mansell with Pop Will Eat Itself, and Nathan Larson with Shudder to Think. Underworld is a band, and David Holmes is a DJ and solo electronic musician.)

• sound design as score

Where I’ve been heading with this “trajectory” discussion — I call it a trajectory rather than a “timeline” because I feel the sense of momentum in this particular topic — is to focus on contemporary work in which sound design is the score. To emphasize the transition, I show a series of short videos. We watch the opening few minutes of a 1951 episode of Dragnet and then the opening portion of an episode of Southland, which closely follows the model of Dragnet: the martial score, the civic-minded officer’s point of view, the spoken introduction, the emphasis on “real” stories. The difference is that the melodramatic score of Dragnet is dispensed with in Southland, as is the notion of a score at all. Southland, which aired from 2009 through 2013, had no music once its filmic opening credits were over. Well, it’s not that there’s no music in Southland. It’s that any music one hears appears on screen, bleeding from a passing car, playing on the stereo in a doctor’s office, serving as the ringtone on someone’s cellphone. All sound in the show collectively serves the role once reserved largely for score. When there’s a thud, or a gunshot, or a droning machine, it touches on the psychology of the given scene’s protagonist.

To make my point about the way in which sound design serves as a score, I play an early clip from I Love Lucy, and contrast the early employment of laugh tracks in that show with portions of M*A*S*H, another sitcom, that lacked laugh tracks. I talk about the extent to which much movie scoring is often little more than a laugh track for additional emotions.

We then spend about 15 or 20 minutes watching over and over the same very brief sequence from David Fincher’s version of The Girl with the Dragon Tattoo, which I dissect for the gray zone between where the movie’s sound ends and the score by Trent Reznor and Atticus Ross begins. (If I have time in the next few weeks, I may do a standalone post with screenshots and/or video snippets that break down the sequence.)

In the work of Fincher, Reznor, and Ross we have a masterpiece of underscoring. The film isn’t quite in Southland’s territory, but it is closing in on it. I then show two videos that work well together. These are promotional interviews, mini-documentaries, one of Jeff Rona talking about his work on the submarine movie Phantom and the other of Robert Duncan talking about his work on the submarine TV series Last Resort. The videos are strikingly similar, in that both show Rona and Duncan separately going into submarines, turning off the HVAC, and banging on things to get source audio for their respective efforts. All the better for comparison’s sake, the end results are quite different, with Duncan pursuing something closer to a classic orchestral sound, and Rona in a Fourth World vibe, more electronic, more pan-cultural, more textured. What is key is that the sounds of the scores then lend a sense of space, of real acoustic space, to the narratives whose actions they accompany.

Some semesters I also play segments from The Firm, to show the rare instance of a full-length, big-budget Hollywood film that has only a piano for a score, and Enemy of the State, to show references to The Conversation, and an interview with composer Nathan Larson, who like Rona and Duncan speaks quite helpfully about using real-world sounds in his scoring.

In advance of the class meeting, the students watch Francis Ford Coppola’s 1974 masterpiece, The Conversation. This is a core work in sound studies, thanks to both Walter Murch’s sound design in the film, and to the role of sound in the narrative. Gene Hackman plays a creative and sensitive private eye, who uses multiple microphones to capture an illicit conversation. Sorting out what is said on that tape causes his undoing. It’s helpful that the building where I teach my class is just a few blocks from Union Square, where the opening scene of the film is shot. We discuss Walter Murch and his concept of “worldizing,” of having sound in the film match the quality experienced by the characters in the film. For class they read a conversation between Murch and Michael Jarrett, a professor at Penn State York. They are also required to choose three characters other than Gene Hackman’s, and talk about the way sound plays a role in the character development. After the discussion, we listen in class to a short segment from Coppola’s The Godfather, released two years before The Conversation, in which Al Pacino kills for the first time, and discuss how there is no score for the entire sequence, just the sound of a nearby train that gets louder and louder — not because it is getting closer, but because its sound has come to represent the tension in the room, the blood in Pacino’s ears, the racing of his heart. This isn’t mere representation. It is a psychological equivalent of Murch’s worldizing, in which the everyday sounds take on meaning to a character because of the character’s state of mind. Great acoustics put an audience in a scene. Great sound design puts an audience in a character’s head.

The students also do some self-surveillance in advance of the class meeting. The exercise works well enough on its own, but it’s especially productive when done in parallel with The Conversation, which at its heart is ambivalent at best about the ability of technology to yield truth. The exercise, which I listed in full in last week’s class summary here, has them take a half-hour bus trip, and then compare what they recall from the trip versus the sound they record of the trip: what sounds do they miss, what sounds do they imagine.

When there is time, I like to close the trajectory/timeline with “score as entertainment (audio-games, sound toys, apps),” and extend the learnings from film and television into other areas, like video games, but there was not enough time this class meeting.

• Homework

For the following week’s homework, there are three assignments. In their sound journals students are to dedicate at least one entry to jingles (the subject of the following week’s class) and one to the sound in film or television. They are to watch an assigned episode of Southland and detail the role of sound in the episode’s narrative, the way sound design serves as score. And they are to locate one song that has been used multiple times in different TV commercials and discuss how it means different things in different contexts.

Note: I’ve tried to do these week-by-week updates of the course in the past, and I’m hopeful this time I’ll make it through all 15 weeks. Part of what held me up in the past was adding videos and documents, so this time I’m going to likely bypass that.

This first appeared in the February 17, 2015, edition of the free Disquiet email newsletter: tinyletter.com/disquiet.


Sound Class, Week 2 of 15: A Brief History of Listening

Celebrity death, 150,000 years in 3 hours, John Cage, Kit Kats, Whitney Houston


The question at the heart of the second meeting of the sound course I teach to a mix of BA and MFA students is something of a hypothetical, a historical one: Imagine it is 1750 and the fellow who sang songs at the pub in your town every Friday night has, quite suddenly and unexpectedly, died. You will never hear his voice again. What is that like? How is that loss experienced — how was that 18th-century celebrity death experienced? And as we ponder the historical question, we consider further what someone in 1750 didn’t know, couldn’t necessarily have conceived of: that a few centuries later we would have recordings of our favorite musicians, recordings that would largely constitute their artistic legacy. To wrap one’s head around that kind of loss — that is what this week’s class meeting is an attempt at.

The second week of sound class is titled “A Brief History of Listening.” In three hours we cover roughly 150,000 years of human history, and still have time to talk about candy bars and Whitney Houston, and to go over the previous week’s homework, which included reading an essay by neuroscientist Seth S. Horowitz and reading an interview with composer and acoustic ecologist R. Murray Schafer.

Needless to say, this is all handled in a fairly succinct manner. This lecture and discussion is part of an initial three-week build up to the core of the course.

I teach my course about the role of sound in the media landscape at the Academy of Art in San Francisco. The first three weeks of the course focus on listening to media, followed by seven weeks on the “sounds of brands,” followed by the final five weeks, which are dedicated to “brands of sounds.” The class meets for three hours every Wednesday starting at noon, and there are nine hours of assigned homework. Each week my plan is to summarize the previous class session here. Please keep in mind that three hours of lecture and discussion is roughly 25,000 words; this summary is just an outline, less than 10 percent of what occurs in class.

Week 1 of class was largely given over to the syllabus and to a handful of examples to get discussion and ideas flowing. I had 7 students last semester, and I have more than twice as many this semester, so I’m adjusting to the number of voices in class. As a result, I have a little bit of material left over from the opening week’s lecture that I need to cover, and this is where the candy bars come in.

Much as during the first week I found it useful to focus on various examples of J.J. Abrams’ work in television and film to show how a single individual can embrace sound as a creative part of a broader, collaborative cultural pursuit, this week I spend a few minutes watching old TV commercials with the students. First I show two Kit Kat candy commercials from the late 1980s, in which the “Gimme a Break” jingle is so absurdly optimistic that it verges on, like the worst jingles, a kind of corporate pop-culture jingoism.

I apologize for my generation, though we were more victims of that culture than perpetrators of it. And then I show two Kit Kat commercials from the past five years, in which the same jingle plays out with each note sourced from on-screen, real-life, real-world sound — quotidian sound. One of these commercials is shot in a library, where books being shut and computer keyboards being typed on collectively play the Kit Kat jingle. There is a second such commercial set on the steps outside the library, the main difference being that the second one has the louder outdoor background sound of the city, a thick urban hum.

There’s an enormous amount that can be unpacked from these two commercials, in particular the idea of field recordings, of everyday sound, having sonorous, musical qualities, and of how these commercials connect the act of “taking a break” (that being the “idea,” such as it is, at the heart of this particular brand of candy, much as “happiness” is central to Coca-Cola’s marketing endeavors) to the actual jingle. After watching one of these commercials, the next time you type or close a book, you will likely hear the jingle in your head. These commercials take the corniest aspect of sound branding — the jingle — and make it somehow tasteful. The full fourth class meeting in this course will focus on jingles, so I pretty much leave it there, except to show one contemporaneous Kit Kat commercial from India that makes the 1980s American commercials look subtle by comparison.

We then close the loop with an exercise from the previous class meeting. The first class included two listening exercises. At the very start of the first class, students for 15 minutes wrote every sound they heard. This introduced them to the sound journal they will write in four times a week for the length of the course. After a brief mid-period break, for 10 minutes that first class meeting they wrote down every sound they associated with the first few minutes after waking up on an average Tuesday morning. For the second class, part of their homework was to do just that: wake on Tuesday, the morning before class, and write down everything they heard. In class I then return to them their exercise from the first week, and we compare and contrast what they had heard with what they thought they heard.

For most of the remainder of this class meeting, a timeline appears on the screen at the head of the room. It reads as follows. I apologize that this is a ridiculously brief & largely Western timeline, but it’s still useful:

•• A Brief Timeline of Listening

• 90k ~ 50k BC: human hearing & speech
• ~3300 BC: Sumerian proto-Cuneiform
• ~3000 BC: ancient Egypt homing pigeons
• 750 ~ 550 BC: “oral culture becomes written culture”
• 1450s: moveable type / Gutenberg
• 1850s: recorded sound
• 1870s: the telephone
• 1952: John Cage’s 4’33”
• 1993: Mosaic browser / World Wide Web

I’ll now, as briefly as possible, run through what we discuss for each of those items.

•• A Brief Timeline of Listening

• 90k ~ 50k BC: human hearing & speech

This number probably goes back another 50,000 to 100,000 years. What is up for grabs is what it means to be human, what communication constituted before we had the physical capability of hearing, and how long a gap there was between our ability to hear and our development of speech.

• ~3300 BC: Sumerian proto-Cuneiform

However long the gap between our development of hearing and speaking, there was in turn a gap before the rise and proliferation of notated speech — of notated thought. This all helps set the stage for the introduction, far in the future, of recorded (and notated, though we don’t discuss it in depth here) sound.

• ~3000 BC: ancient Egypt homing pigeons

We could mark this transition in human expression at several points along our collective timeline, but the homing pigeon makes a stronger model than the horse because a message carried by a horse suggests a distance, a form of travel, that a human might take, while the pigeon follows a path that people cannot as easily traverse. This is, in essence, the telephone, the Internet, of its time. The ability to send information a very long distance emphasizes how language is, itself, a form of technology.

• 750 ~ 550 BC: “oral culture becomes written culture”

In the homework reading from week 1, R. Murray Schafer talks about how complaints about noise pollution go back to Roman days. Here we talk about an ancient Greek anxiety expressed by Socrates, who says to his interlocutor, Phaedrus: “If men learn this, it will plant forgetfulness in their souls.” What is the “this” in this sentence? This is: writing. I pause here and play some music by the late Whitney Houston, not her singing, just the background music, what is listed on singles as the “instrumental track.” We listen to what a Whitney Houston song sounds like without Whitney Houston, which leads to an extended group conversation that explores the 1750 hypothetical I mention up top. It’s a very engaging topic for discussion, and we try to imagine what loss was like at a time before recorded sound. I can barely scratch the surface here, but this is one of my favorite topics in a class that I love to teach.

• 1450s: moveable type / Gutenberg • 1850s: recorded sound

It remains the case that people mistakenly say Gutenberg “invented the printing press,” and after clearing that up we talk about moveable type as a precursor to recorded sound. Our experience of recorded sound has a strong precedent in the development of printing and, later, moveable type.

• 1870s: the telephone

I go over a brief history of its development, and matters of technological adoption in general. Much as humans didn’t all wake up one day able to hear and speak, or later with the ability to read and write, nor did we all suddenly have telephones delivered to our front doors. Recent episodes of Downton Abbey, set exactly 90 years ago, helpfully reinforce this. (Also: radio.)

• 1952: John Cage’s 4’33”

I introduced the concept of an anechoic chamber — a space designed to have no echo — in the previous class, and here expand on it by talking about John Cage. I stick to his greatest hits: his famous anechoic-chamber anecdote, his 4’33” composition, and his book Silence. I quote him from Silence, in which he connects the ideas behind 4’33” to the glass houses of Mies van der Rohe (how they “reflect their environment, presenting to the eye images of clouds, trees, or grass, according to the situation”) and the sculptures of Richard Lippold (“it is inevitable that one will see other things, and people too, if they happen to be there at the same time”). We focus discussion on this statement of Cage’s: “There is always something to see, something to hear. In fact, try as we may to make a silence, we cannot.” The students have undertaken sound journals, and emphasizing that silence is an “idea” not an actual real thing is helpful in getting them to listen to that which we have long been taught not to hear. If, as William Gibson said, “cyberspace” is a consensual hallucination of a place, then “silence” is a consensual hallucination of an absence.

• 1993: Mosaic browser / World Wide Web

If I’ve learned anything in the six semesters I’ve taught this class, it is to not overestimate the benefits of talking to students about the past 20 years of rapid increase in technology. So, I just end my timeline with the introduction of the Mosaic browser, which I posit as a division not unlike the one on the other side of which stand those people back in 1750 who didn’t know what they were missing — or so we 21st-century listeners might contend, and condescend.

• Homework

And I’ll close here with the homework that I assign in advance of the third week. There will be four more weekly sound journal entries. There will be a viewing, of the 1974 Francis Ford Coppola film The Conversation. There will be one reading: an interview with sound designer Walter Murch conducted by Michael Jarrett (whose recent book Producing Country: The Inside Story of the Great Recordings is great — and shares its publisher, Wesleyan, with Cage’s Silence). And there is one listening exercise. I’ll end by copying and pasting the exercise directly from the homework:

Exercise: This should take between an hour and a half and two hours to complete. Part A: First, plot out a bus ride or a walk (BART is also fine) that will take approximately one half hour, and during which you’re unlikely to run into anyone or be required to speak with anyone. (If you elect for the bus route, which I recommend, you should remain on the same bus for the full half hour.) Use your phone or another device to record the complete half-hour length of your trip. Part B: Immediately after the trip is over, sit down and make an annotated list of the sounds you recall from the trip. Part C: Immediately after that, listen back to the tape all the way through; make an annotated list of the relative prominence of sounds you had or hadn’t noticed or paid attention to. Part D: Create and send to me a document containing the lists that resulted from Parts B and C above.

And next week, in part three of “Listening to Media,” the class will focus on “The Score”: not on 150,000 years of human history, but on 100 years of film and, later, television. Which is why we’re watching — and listening to — Coppola’s The Conversation.

Note: I’ve tried to do these week-by-week updates of the course in the past, and I’m hopeful this time I’ll make it through all 15 weeks. Part of what held me up in the past was adding videos and documents, so this time I’m going to likely bypass that.

This first appeared in the February 10, 2015, edition of the free Disquiet email newsletter: tinyletter.com/disquiet.
