My 33 1/3 book, on Aphex Twin's Selected Ambient Works Volume II, was the 5th bestselling book in the series in 2014. It's available at Amazon (including Kindle) and via your local bookstore.

Listening to art.
Playing with audio.
Sounding out technology.
Composing in code.

tag: brands of sounds

Teaching Sound / Spring 2016

I'll be doing my sound course in San Francisco for 15 weeks starting February 3.


I’ll be teaching my course on “the role of sound in the media landscape” — aka “Sounds of Brands / Brands of Sounds” — again this coming spring 2016 at the Academy of Art here in San Francisco.

The semester runs from February 1, 2016, through May 21, 2016. The class meets on Wednesdays from noon to 2:50pm, which means the first class meeting is February 3 and the final class meeting will be on May 18. There’s no class on March 23, which is spring break, for which I’ll probably assign a close-listening analysis of Cliff Martinez’s work on the score to Spring Breakers. Just kidding. Well, maybe not kidding.

Last semester we had one of the heads of the Mutek festival (Patti Schmidt) address the class, as well as someone from the software developer Cycling ’74 and someone from Facebook’s virtual-reality team, among others. The previous semester we had someone from BitTorrent and someone from SoundCloud, and we took a field trip to an anechoic chamber at the local research lab of an audio company. The guest speakers aren’t generally lecturers; I usually interview them in front of the students, who also ask questions. The semester prior both the sound artist Robin Rimbaud (Scanner) and the voice actor Phil LaMarr (Samurai Jack, Static Shock) visited via Skype.

Here’s the course outline from last year:


I teach the course to a mix of MFA and BA students. This is the seventh semester that I’ve taught the course, after taking off last semester with the intention of teaching it once a year rather than twice a year, to leave room for loads of other projects.

You can read summaries and documentation from past semesters using the “brands of sounds” and “sounds of brands” tags here.


Two More Listeners

A recording engineer and a sound artist discuss making listening heard.


Earlier this week I posted responses I’d made to a series of questions about listening posed by Steve Ashby, who teaches music at Virginia Commonwealth University. Two more people have replied to Ashby’s questions, and I wanted to share segments of their thoughts here, both of them responding to the fourth, and core, question in Ashby’s survey: “How does one make their listening listened to?”

This is Bryan Walthall, a recording and mastering engineer who runs Stereo Image in Richmond, Virginia:

my perception of the way music sounds has changed greatly over the past 15 years. my favorite records when i was a kid (hendrix, nirvana) sound completely different! sometimes it breaks my heart because they don’t have the exact same magic they did when i was younger. its as if my “suspension of reality” has been diminished because I’ve seen the sausage being made for 15 years. for the most part they still evoke the same emotional response, but it has been diminished. i hear things completely different now, because i know how they were achieved. thats good for me making records, but the kid in me gets a little bummed sometimes that i can’t just listen to the song, i have to “hear the drums” or “know thats a plate and not a spring” or that “thats obviously a vocal double.”

This is the sound artist Stephen Vitiello. Up top is an image of visitors to one of his sound installations:

I mostly hope to achieve this in installation environments. Setting lighting in a space, comfortable seating, establishing a volume level and a speaker system that works well with the material are all important. Also, removing or minimizing visual distractions is vital — so that it is clear that in the work I’m presenting, sound is primary and not secondary to any sort of visual content. As I re-read these responses, it seems I’m hoping to create a space for the installations that goes back to what I used to create for myself when listening to a new record for the first time.

Ashby is archiving the responses at his website, and on his syllabus page at VCU’s site.


In the Province of Real Time Electronica

MUTEK’s Patti Schmidt on how Jurassic Park helped birth — and how emphasis on scenography and human scale helps sustain — the music festival


The following interview is with Patti Schmidt, a longtime programmer for the MUTEK festival in Montréal, Canada. The interview took place during the final class session of the spring 2015 semester of the class that I teach about the role of sound in the media landscape at the Academy of Art in San Francisco. Schmidt joined us via Skype.

I frequently invite professionals — musicians, startup representatives, coders, sound designers, publicists — to speak in my class. Rather than ask the guests to prepare a presentation, I interview them in front of the class, and then have the students themselves ask questions. This is a lightly edited transcript of Schmidt’s appearance in class. The interview took place on Wednesday, May 13, 2015, just before the 16th annual MUTEK festival, which ran from May 27 through May 31.

Marc Weidenbaum: First, thanks, Patti. I’d like to introduce Patti to the class. This is Patti Schmidt from MUTEK. She’s going to be talking with us today via Skype.

Patti, these are the students in the sound class I teach here at the Academy of Art in San Francisco. The class is about the role of sound in the media landscape. These last five or six weeks, we’ve been focused on what we call “brands of sounds,” which is how things related to sounds “brand” themselves, how they express themselves in the marketplace. That followed six or seven weeks on the opposite subject, which was “sounds of brands,” about how things — objects, organizations, services — use sound to make an impression.

I tend to end each semester talking about music, and often I’ll have a music publicist come and talk about the challenges of the past 10 years as the record industry has changed, how streaming and other changes in the music and recording industries have shifted their attentions and skills and so forth.

Patti’s speaking with us in class actually began as the result of an interaction with someone in music PR, who reached out to ask if I’d be interested in writing an article somewhere about MUTEK, or covering the festival in some way. I replied that I don’t really cover festivals much. Then I suggested we do this, which is have Patti address the class in the form of a live interview, which I’d then edit and post here, and the MUTEK publicist was enthusiastic about the approach. Patti, could you start just by talking to the class a bit about what MUTEK is and what you do there?

Patti Schmidt: MUTEK is an electronic music and digital creativity festival having its 16th edition this year. It started in Montréal in the year 2000. The director of MUTEK’s name is Alain Mongeau, and in the mid-’90s he was the president of something called ISEA, an electronic arts organization that’s based in the Netherlands. ISEA was one of the first international organizations to really become concerned with the role of digital media and digital sound and digital art. So, he helped host the 1995 edition of ISEA here in Montréal, and his idea was that Montréal is a very unique and weird city in North America because there have been all kinds of technology-leading industries and arts here. The video game industry, Ubisoft [a French company], is based here; Soft Image, which was responsible for Jurassic Park, and all these very early special effects, was based here; and Cirque du Soleil, all this stuff. There are a lot of big spectacle, innovative, tech things that have come out of this province — that you would think might otherwise be isolated because of language, because French is the first language that is spoken here. But somehow through technology and technologically driven art and spectacle, including electronic music, Montréal has sort of distinguished itself in the world. Alain helped start a venue here in Montréal called the Society for Art and Technology, or the SAT, as we call it, and it’s become a real hub for a lot of research on immersive performances, visual works, sound works.

ISEA was a way for Alain, in 1995, to attempt to really root this idea of innovation in music and performance in Montréal. He went on to program a component for a film festival that was concerned with new media, the Festival of Nouveau Cinema. They gave him a component called the Media Lounge for 5 years, where in the late 1990s he would bring in people like Richie Hawtin, who at the time was rather unknown and would be presenting minimal sound and interactive light installations. This was the beginning of laptops becoming an important tool not only for music, but for visual work. And it became possible to then compose on these brand new portable, reasonably affordable tools. So there was an explosion of art and music going on, all over the world, and so he programmed components of this film festival for a few years. Then he was given some seed money by the guy from Soft Image to begin the very first edition of MUTEK, which was hosted inside of a big complex dedicated to new media that this guy had also just started, called Excentris, roundabout 1999.

That was the basic background on MUTEK. A few years later, maybe it was 2003 or 2004, Alain also — because he has this sort of global view and a positive idea of globalization and technology — he started planting seeds for other MUTEKs in South America, and a “micro” MUTEK festival happened in Chile. Then a few years later — it’s now into its 11th year — Mexico City began its MUTEK franchise. This is all, like, “open source,” no money — we don’t receive any money from these festivals at all. It was more about the idea of inverting the axis of the music industry, which usually goes from North America to Europe, so horizontal, and instead doing a vertical axis — Montréal down to Latin America — where there existed emergent economies and artistic communities that were just beginning to use computers and digital technologies to make music, and to plug into a whole global circuit. Alain has a personal history in Latin America, which made this possible. He speaks Spanish; his father is a university professor. They were in Chile during the coup in 1973, and he is very comfortable working these angles. So now MUTEK Mexico is 11 years old — MUTEK Argentina has sort of moved to Mexico. We just started a version of the festival in Colombia. The ones in Chile are a little bit dormant. We also have an outpost in Barcelona, Spain, which is European but it is also a place where tons of Latin American expats end up. The festival has a real mission and mandate statement to always cultivate local audiences and the kind of artists and communities that are left out of the regular, western-dominated global conversation about technology and music — and that’s an essential interest of the festival. And over the years, as well, MUTEK has cultivated a local community here in Montréal. A number of them, a big chunk of the local artists who helped start MUTEK Montréal, have since relocated to Berlin. And they have quite vibrant careers there, so we work this axis as well.
And we still always try to cultivate and throw into our international network local artists who are innovative in using technology. There are other interesting things to look at over the course of a 16-year history of a festival that takes technology as its important taking-off point, and this technology is constantly mutating, and evolving, and changing, and if you’re going to stay relevant you are going to have to stay on top of what those changes are.


Sound Class, Week 7 of 15: Explicit vs. Implicit

Vocabulary refresher, a useful series of quadrants, breakfast cereal, OS startup sounds


On the very first day of class I share this sequence:

Hearing → Listening → Discerning → Describing → Analyzing → Interpreting → Implementing →

That is, in a handful or so of words, a map of the 15-week course that I teach on the role of sound in the media landscape at the Academy of Art in San Francisco.

The first semester I taught the course, back in 2012, a student raised a hand from the back of the room and asked, in effect, if I am making up any of the words we use. I suppose hearing “anechoic” and “acoustemology,” among other less esoteric terms, over and over takes its toll, and I replied that I did not make up any of the words. I did, however, take responsibility for two familiar words used in a particular context. Those words, and that context, are the subject of week 7.

First some background on the course, in case this is the first week you’ve read one of these summaries: Each week of the 15-week course my plan is to summarize the previous class session here. Please keep in mind that three hours of lecture and discussion is roughly 25,000 words; this summary is just an outline, in this case less than 10 percent of what occurs in class. Some class meetings emphasize more discussion than others. Week 7 this semester is especially discussion-heavy, and hence the lecture outline here is fairly cursory.

I start off week 7 by reviewing recent vocabulary. When this goes well, we don’t stop with the words I initially reprise, words like “soundscape” and “soundmark,” and, yes, “anechoic” and “acoustemology.” We discuss how the first two develop out of the work of R. Murray Schafer, how the third relates to John Cage, and how the fourth comes out of the work of Steven Feld. To revisit the previous week’s class meeting, on the role of sound in retail space, we discuss Ray Oldenburg’s concept of the “Third Place.” In turn, student queries lead to additional vocabulary refreshes, among them sonic equivalents of so-called “skeuomorphism” design (the shutter sound of digital cameras serves as a good example), “haptic” feedback, and the difference between a “neologism” and a “retronym.”

Then we proceed to those two fairly common terms I mentioned up above, “explicit” and “implicit,” which we employ in a specific context. For the purposes of discussion, an “explicit” sound related to a subject is one closely tied, in the public imagination, to it, such as the “pop pop, fizz fizz” of Alka-Seltzer, or the anthropomorphized Snap, Crackle, and Pop of Rice Krispies. In contrast, “implicit” sounds are those that are to some extent inherent in a given subject, but that are not fully, for lack of a more nuanced term, branded. Different makes of door lock, for example, will sound different upon close inspection, but it would be hard to make a case that to anyone other than a discerning thief those sounds are closely associated with the locks.

We begin by drawing a grid, two by two, and we put those two words on the Y axis. On the X axis, horizontally, we write “category” and “product.” The remainder of week 7 involves working through how sounds can be oriented in those four quadrants. This plays out in various ways, largely as a result of group discussion, and thus it doesn’t translate particularly well to summary. So, I’ll just emphasize some things I’ve learned when teaching this class:

  • It’s important to keep top of mind that the quadrants in this two-by-two grid are along a continuum. Students often mistake them as four independent if interrelated categories. That’s not the case.

  • An operating system startup sound is a useful example. The startup sound itself began deep in the implicit/category zone, and was later elevated to explicit/product when Apple and Windows, just to note two examples, developed unique audio logos.

Homework: The homework for week 8 is to take another pass on the research from week 7, which involved the development of a “sonic audit.” This week in class we take time, in small groups, to compare notes about how to apply the explicit/implicit grids to the students’ chosen topics, which range from Oreo cookies to Nike sneakers to Rolex watches. The assignment is as follows: Do a “sonic audit” of a specific brand/product of your choosing.

Your brand/product should not be inherently sonic; that is, for example, it should be a candy bar, not a headphone — a clothing store, not an MP3 player — an airline, not a mobile music app. You will explore the role of sound in the brand/product that you select. (You can, alternately, elect to focus on an industry/category, such as the Got Milk? and National Pork Board campaigns.)

In the process of developing your sonic audit you should look deeply at the brand/product from numerous viewpoints, such as, but not exclusive to, the following: (a) sounds inherent in the category, (b) sounds exclusive to the brand/product, (c) cultural references (e.g., song lyrics), (d) brand history (e.g., jingles, concert sponsorships, musician spokespeople), etc. Your presentation of your findings should consist not only of exhaustive examples you locate, but of the “cultural meaning” of what you discern. How you present this material is up to you, but it should be substantial. We’ve used short essays in assignments and four-quadrant grids in class, and those are particularly recommended. In the end, the documentation should state and support a specific point of view about the sonic properties of the brand.

Next week: The software tools of sound, with an emphasis on Audacity and, just to nudge things a little, Max/MSP.

Note: I’ve tried to do these week-by-week updates of the course in the past, and I’m hopeful this time I’ll make it through all 15 weeks. Part of what held me up in the past was adding videos and documents, so this time I’m going to likely bypass that.

This first appeared in the March 24, 2015, edition of the free Disquiet “This Week in Sound” email newsletter.


Sound Class, Week 3 of 15: Sound Design as Score

The Conversation, Walter Murch, surveillance culture, retronyms, Southland


Quick question: How many microphones are in the room you are in right now?

That’s the fun and instructive little parlor game I like to play with my students a few weeks into the sound course I teach. They each share their guess with me, and then I talk through all the technology in the room, and rarely is anyone even close to the right number. The students are often a little surprised, at first, to learn that many of their phones have three microphones, and that the most recent MacBook Air laptops have two microphones, to allow for noise cancellation of ambient sound from the environment. They forget that many of their earbuds come with microphones, as do their iPads, their iPods, their Bluetooth headpieces. We’re meeting in a classroom, so there’s no concern about their game consoles, or their smart thermostats, or their home-security system. By the end of the exercise, they are a little anxious, which is productive because this week we discuss Francis Ford Coppola’s masterpiece thriller, The Conversation. By the end of the exercise, we’re all a bit like Gene Hackman at the end of the film: wondering where the surveillance is hidden.

Almost every week of the class has, at its heart, a question to which I do not have an answer. The question this week is: How is the increasing ubiquity of recording equipment in everyday life transforming the role of sound in film and television?

This is the third week of the course I teach at the Academy of Art on the role of sound in the media landscape. The third week marks the close of the first of the three arcs that make up the course. First come three weeks of “Listening to Media,” then seven weeks of “Sounds of Brands,” and then five weeks of “Brands of Sounds.” If the first week of the course is about the overall syllabus and the second week looks back on 150,000 years of human history, how hearing has developed biologically, culturally, and technologically, then the third week focuses on just barely 100 years: We look at how film and, later, television have shaped the way sound is employed creatively.

Each week my plan is to summarize the previous class session here. Please keep in mind that three hours of lecture and discussion is roughly 25,000 words; this summary is just an outline, maybe 10 percent of what occurs in class.

As with last week’s “Brief History of Listening,” this week uses a timeline as the spine of the discussion, or in this case a “trajectory.” For most of this class meeting, this “trajectory” appears on the screen at the head of the room:

A Brief Trajectory of Film Sound

• filmed theater
• moving camera / editing
• synchronized sound (1927, Jazz Singer)
• talkie → term “silent film” (cf. “acoustic guitar”)
• orchestral score (classical tradition)
• electronic (tech, culture, economics)
• underscore (Cliff Martinez, Lisa Gerrard, Clint Mansell, David Holmes, Underworld, Nathan Larson)
• sound design as score
• side note: score as entertainment (audio-games, sound toys, apps)

I’ll now, as concisely as possible, run through what we discuss for each of those items.

A Brief Trajectory of Film Sound

• filmed theater

We begin at the beginning of film, discussing how new media, when they arise, often initially mimic previous media: in this case, how early film was often simply filmed theater.

• moving camera / editing

In time, the combination of moving cameras and editing provides film with a visual vocabulary of narrative tools that distinguishes it from filmed theater.

• synchronized sound (1927, Jazz Singer)

We talk about how the introduction of sound to film doesn’t coincide with the introduction of recorded sound. The issue isn’t recording sound. It is the complexity of synchronization.

• talkie → term “silent film” (cf. “acoustic guitar”)

The word “retronym” is useful here. A retronym is a specific type of neologism. A “neologism” is a newly coined word. A retronym is a new word for an old thing required when a new thing arises that puts the old thing in new light. The applicable joke goes as follows:

Q: What was an acoustic guitar called before the arrival of the electric guitar?

A: A guitar.

We also discuss the brief life of the term “cameraphone,” which was useful before cameras became so ubiquitous that a consumer no longer makes a decision about whether or not to buy a phone with a camera. Given the rise of social photography, it’s arguable that cellphones are really cameras that also have other capabilities.

In any case, that tentative sense of technological mid-transition is at the heart of this part of the discussion, about how films with sound were initially as distinct as phones with cameras, and how in time the idea of a movie without sound became the isolated, unusual event. We talk about how the “silent” nature of “silent film” is a fairly popular misunderstanding, and that silent films in their heyday were anything but, from the noise of the projectors, to the rowdiness of the crowds, to the musical accompaniment (often piano).

• orchestral score (classical tradition)

We discuss how the orchestral nature of film scores was not unlike the way films originated in large part as filmed theater. The orchestral score connected the audience experience to mass entertainments, like theater and opera and musicals, in which orchestras and chamber ensembles were the norm. Long after the notion of filmed theater had been supplanted by a narrative culture unique to film, the norm of the orchestral score lingered.

• electronic (tech, culture, economics)

We discuss the rise of the electronic score, how the transition from orchestral to electronic involved a lot of different forces. Technology had to become accessible, changes in pop culture eventually required music that no longer felt outdated to couples out on a date, and finally economics meant that large Hollywood studios, which often had their own orchestras and production procedures, needed incentives to try something new.

• underscore (Cliff Martinez, Lisa Gerrard, Clint Mansell, David Holmes, Underworld, Nathan Larson)

The broad-strokes sequence of how movie scores changed since the rise of the talkie has three stages, from traditional orchestral scores, to early electronic scores that mimic orchestral scoring, to electronic scores that have their own unique vocabularies. (That’s leaving aside groundbreaking but also way-ahead-of-their-time efforts such as Bebe and Louis Barron’s Forbidden Planet.) I highlight the work of a handful of composers, all of whom to varying degrees employ what can be called “underscoring”: scores that rarely reach the crescendos of old-school melodramatic orchestral scores, and that often meld with the overall sound design of the filmed narrative they are part of. (I also note that all of these folks came out of semi-popular music: Cliff Martinez played with the Dickies, Captain Beefheart, and the Red Hot Chili Peppers, Lisa Gerrard with Dead Can Dance, Clint Mansell with Pop Will Eat Itself, and Nathan Larson with Shudder to Think. Underworld is a band, and David Holmes is a DJ and solo electronic musician.)

• sound design as score

Where I’ve been heading with this “trajectory” discussion — I call it a trajectory rather than a “timeline” because I feel a sense of momentum in this particular topic — is to focus on contemporary work in which sound design is the score. To emphasize the transition, I show a series of short videos. We watch the opening few minutes of a 1951 episode of Dragnet and then the opening portion of an episode of Southland, which closely follows the model of Dragnet: the martial score, the civic-minded officer’s point of view, the spoken introduction, the emphasis on “real” stories. The difference is that the melodramatic score of Dragnet is dispensed with in Southland, as is the notion of a score at all. Southland, which aired from 2009 through 2013, has no music once its filmic opening credits are over. Well, it’s not that there’s no music in Southland. It’s that any music one hears appears on screen, bleeding from a passing car, playing on the stereo in a doctor’s office, serving as the ringtone on someone’s cellphone. All sound in the show collectively serves the role once reserved largely for score. When there’s a thud, or a gunshot, or a droning machine, it touches on the psychology of the given scene’s protagonist.

To make my point about the way in which sound design serves as a score, I play an early clip from I Love Lucy, and contrast the early employment of laugh tracks in that show with portions of M*A*S*H, another sitcom, one that lacked laugh tracks. I talk about the extent to which much movie scoring is often little more than a laugh track for additional emotions.

We then spend about 15 or 20 minutes watching over and over the same very brief sequence from David Fincher’s version of The Girl with the Dragon Tattoo, which I dissect for the gray zone between where the movie’s sound ends and the score by Trent Reznor and Atticus Ross begins. (If I have time in the next few weeks, I may do a standalone post with screenshots and/or video snippets that break down the sequence.)

In the work of Fincher, Reznor, and Ross we have a masterpiece of underscoring. The film isn’t quite in Southland’s territory, but it is closing in on it. I then show two videos that work well together. These are promotional interviews, mini-documentaries, one of Jeff Rona talking about his work on the submarine movie Phantom and the other of Robert Duncan talking about his work on the submarine TV series Last Resort. The videos are strikingly similar, in that both show Rona and Duncan separately going into submarines, turning off the HVAC, and banging on things to get source audio for their respective efforts. All the better for comparison’s sake, the end results are quite different, with Duncan pursuing something closer to a classic orchestral sound, and Rona in a Fourth World vibe, more electronic, more pan-cultural, more textured. What is key is that the sounds of the scores then lend a sense of space, of real acoustic space, to the narratives whose actions they accompany.

Some semesters I also play segments from The Firm, to show the rare instance of a full-length, big-budget Hollywood film that has only a piano for a score, and Enemy of the State, to show references to The Conversation, and an interview with composer Nathan Larson, who like Rona and Duncan speaks quite helpfully about using real-world sounds in his scoring.

In advance of the class meeting, the students watch Francis Ford Coppola’s 1974 masterpiece, The Conversation. This is a core work in sound studies, thanks to both Walter Murch’s sound design in the film, and to the role of sound in the narrative. Gene Hackman plays a creative and sensitive private eye, who uses multiple microphones to capture an illicit conversation. Sorting out what is said on that tape causes his undoing. It’s helpful that the building where I teach my class is just a few blocks from Union Square, where the opening scene of the film is shot. We discuss Walter Murch and his concept of “worldizing,” of having sound in the film match the quality experienced by the characters in the film. For class they read a conversation between Murch and Michael Jarrett, a professor at Penn State, York. They are also required to choose three characters other than Gene Hackman’s, and talk about the way sound plays a role in the character development. After the discussion, we listen in class to a short segment from Coppola’s The Godfather, released two years before The Conversation, in which Al Pacino kills for the first time, and discuss how there is no score for the entire sequence, just the sound of a nearby train that gets louder and louder — not because it is getting closer, but because its sound has come to represent the tension in the room, the blood in Pacino’s ears, the racing of his heart. This isn’t mere representation. It is a psychological equivalent of Murch’s worldizing, in which the everyday sounds take on meaning to a character because of the character’s state of mind. Great acoustics put an audience in a scene. Great sound design puts an audience in a character’s head.

The students also do some self-surveillance in advance of the class meeting. The exercise works well enough on its own, but it’s especially productive when done in parallel with The Conversation, which at its heart is ambivalent at best about the ability of technology to yield truth. The exercise, which I listed in full in last week’s class summary here, has them take a half-hour bus trip, and then compare what they recall from the trip versus the sound they record of the trip: what sounds do they miss, what sounds do they imagine.

When there is time, I like to close the trajectory/timeline with “score as entertainment (audio-games, sound toys, apps),” and extend the learnings from film and television into other areas, like video games, but there was not enough time this class meeting.

• Homework

For the following week’s homework, there are three assignments. In their sound journals students are to dedicate at least one entry to jingles (the subject of the following week’s class) and one to the sound in film or television. They are to watch an assigned episode of Southland and detail the role of sound in the episode’s narrative, the way sound design serves as score. And they are to locate one song that has been used multiple times in different TV commercials and discuss how it means different things in different contexts.

Note: I’ve tried to do these week-by-week updates of the course in the past, and I’m hopeful this time I’ll make it through all 15 weeks. Part of what held me up in the past was adding videos and documents, so this time I’m going to likely bypass that.

This first appeared in the February 17, 2015, edition of the free Disquiet email newsletter.
