Tristan Perich’s Noise Patterns comes in a clear jewel case, but it isn’t a CD. It’s a small, matte-black circuit board. Powered by a watch battery, it produces a series of musical compositions built from the on/off operations on the minuscule chip at the center of the device, the same sort of chip you might find in a microwave oven.
What follows is a lengthy, detailed interview in which Perich talks about the development of Noise Patterns, and various other aspects of his artistic efforts, which range from full-scale museum installations of drawing machines and “microtonal walls,” to live performances in which he builds circuits in front of the audience.
In Perich’s telling, his previous circuit-board album, 1-Bit Symphony, was built from “tone” while Noise Patterns, as its name suggests, is built from “randomness,” from what sounds like white noise twisted and tweaked to Perich’s design.
There will be a more detailed introduction to this interview posted here soon, but in the interest of time — there is a party/concert celebrating the release of Noise Patterns tonight at (Le) Poisson Rouge in Manhattan, with guests Robert Henke, Karl Larson, Ricardo Romaneiro, Leo Leite, and Christian Hannon — the transcript, along with annotated images from the production of Noise Patterns and other aspects of Perich’s work, is being posted today.
Weidenbaum: The material you just referred to, the tone synthesis, is that the “Patterns” and “Album” data that’s toward the end of the code in the liner notes? Perich: It’s earlier on. In fact, if you look real close under the “Patterns” in Noise Patterns, you’ll see that it’s actually sequences of notes, like C octave 7 or, you know, G sharp octave 4, or whatever. I actually use the Western pitch system, and instead of mapping it to pitches and frequencies, I map it to probabilities of random oscillations. So basically, Noise Patterns is music that’s made up of sequences of noise, of random oscillations of the chip, and I control what I think of as the density of that randomness. It’s basically the probability of oscillating between a 1 and a 0. A high density, or a high probability of switching back and forth randomly between 1 and 0, sounds a lot like white noise. It’s this very smooth, white-noisy kind of sound, like static from a radio. As I lower the probability, as I make it less dense, more sparse, it sort of sounds like an EQ filter sweep, from white noise down to a rounder wash of sound. As I lower that even more, it turns into a crackle, a sparse crackle. As it goes down even farther, it just becomes sporadic pops. Those pops are when the output switches from a 1 to a 0; you hear each one as an individual sample.
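The density-of-randomness idea Perich describes can be sketched in a few lines of Python. This is an illustrative model only — his actual synthesis runs in AVR assembly on the chip, and the function and parameter names here are invented:

```python
import random

def noise_voice(density, n_samples, seed=0):
    """1-bit noise: at each sample, with probability `density`, the
    output randomly becomes 1 or 0; otherwise it holds its previous
    value. High density sounds like white noise; very low density
    yields sporadic pops."""
    rng = random.Random(seed)
    state, out = 0, []
    for _ in range(n_samples):
        if rng.random() < density:
            state = rng.randint(0, 1)  # a random oscillation of the output
        out.append(state)
    return out

def transitions(samples):
    """Count 0-to-1 and 1-to-0 switches -- the audible clicks/pops."""
    return sum(a != b for a, b in zip(samples, samples[1:]))
```

Sweeping `density` from near 1.0 down toward zero walks the sound from smooth static, through crackle, to isolated pops, matching the sweep described above.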
Weidenbaum: So those note values, the G sharp and so forth that you were describing earlier, we’re not hearing them as notes. You’re just using them as reference points? Perich: The idea is that these probabilities are on a logarithmic scale, and pitch is also a logarithmic scale, so I basically use the pitch system, and I even use the same frequencies as the pitches that are referenced, but instead of generating fixed tones of that period, for instance — I store it as a period instead of a frequency, as the period of the waveform — I use that as a threshold for my random number generator. If I have a G7, it corresponds to a certain density of randomness, and an octave down, G6, is half as dense a random output, because an octave down in pitch is half the frequency, so you double the period, and so basically the randomness gets twice as sparse each time I go down an octave. And it just worked out really nicely that using a logarithmic scale I could move from this kind of really high-frequency white noise to sporadic pops, and it gave a frequency spectrum that felt very natural to me, and it allowed me to use the exact same pitch framework that I used in the past. So basically, to get back to your question, that “Patterns” section of the memory stores these pitch sequences, really individual patterns, sometimes pitches and sometimes rests, and then the synthesis section of the code — it’s on the other side of the poster — instead of using those values to generate frequency, uses those values to compare to the random numbers that are being generated. Weidenbaum: How much trial and error went into the first iteration, into 1-Bit Symphony, to get to where you felt that was a creative space to work in? Perich: Yeah, 1-Bit Symphony really gave me a framework. I’ve been working on it for a number of years now.
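The pitch-to-probability mapping he outlines can be modeled by treating each pitch’s frequency as a density, so that an octave down halves it. This is a hypothetical reconstruction for illustration — on the chip, Perich stores periods and compares random numbers against thresholds rather than computing floating-point densities:

```python
def pitch_to_density(midi_note, max_note=108, ref_note=69, ref_freq=440.0):
    """Map a pitch to a randomness density in (0, 1], proportional to
    the pitch's frequency: each octave down halves the density, just
    as it halves the frequency. `max_note` pins the top of the range
    to density 1.0 (full white noise). The note range is an assumed
    choice, not taken from the album's source code."""
    freq = ref_freq * 2 ** ((midi_note - ref_note) / 12)
    max_freq = ref_freq * 2 ** ((max_note - ref_note) / 12)
    return freq / max_freq
```

With this mapping, G7 produces twice the density of G6, exactly the octave relationship described in the answer above.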
It gave me this code framework to write grid-based, tone-based music, music that’s quantized on a grid, quantized in pitch, and works with repeating patterns that I can trigger and create a composition out of. I’ve been using that, subsequently, for all of my composed works, too, so it’s basically the exact same software framework as my live performances with speakers, and the only difference is that I have a different number of outputs, and the patterns and the composition change. Otherwise, the software is exactly the same. For Noise Patterns, really the only fundamental thing that changed is the synthesis itself, and then I also added a few other parameters that I can adjust: a kind of transpose, a parameter that allows me to adjust the density of the pattern while it’s playing, and also another parameter that allows me to change what’s called the duty cycle of the waveform, which is basically the ratio of time that it spends on versus time that it spends off. Each of the voices in Noise Patterns — there are six voices: two left, two center, and two right — is generating this 1-bit waveform, and when I’m synthesizing that waveform, I can control this ratio of on time to off time. The consequence of that is that it’s a way of creating space within the waveform, and also adjusting the timbre. Sorry, it’s kind of technical.
Weidenbaum: No, keep going. This is good. Perich: In a nutshell, maybe it’s easier to explain with 1-Bit Symphony: it comes down to polyphony, to mixing multiple signals. If I have a tone playing, like a 440-hertz A, it has a certain period — let’s say a hundred samples per oscillation, per period of the waveform. Of those hundred, you can spend 50 of them as a 1 and then 50 of them as a 0, and then repeat. So it’s on half the time, and it’s off half the time. Or I can adjust the duty cycle and bring it down to where just 5 samples are on and 95 samples are off, and that’s a thinner duty cycle, and it changes the timbre. It becomes more nasally, but more importantly it creates a lot of time where the signal is off. It’s off 95 percent of the time, and that 95 percent of the time that it’s off creates a lot of room for the other voices to play, for their waveforms to exist. Weidenbaum: And all those voices are existing right now on this single device. Perich: Yeah, exactly, it’s doing this polyphony synthesis live. And if I go the other way, if I use a wider duty cycle, so it’s on 95 percent of the time and it’s off only 5 percent of the time, what happens is the waveform starts to kind of overwrite everything else, because I’m mixing the voices using the “OR” operation, and if you “OR” a 1 with anything, you get 1. So if one of the voices is 1 most of the time, then your output’s going to be 1 most of the time, and you lose anything that might be happening during that time; it just gets overwritten with a 1. And so that allowed me to use, for instance, one voice to oversaturate the output, and it has this kind of crunchy effect that I played a lot with in 1-Bit Symphony. And in Noise Patterns, I use it a lot as a way of having a voice come in and overwhelm the other voices.
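The duty-cycle and OR-mixing behavior described here is easy to demonstrate. A sketch, again — the real synthesis runs sample-by-sample on the microcontroller, and these helper names are invented:

```python
def square(period, duty, n_samples):
    """A 1-bit square wave: `duty` samples high out of every `period`
    samples. duty=50 of period=100 is the classic half-on, half-off
    square; duty=5 is the 'thinner,' more nasal version."""
    return [1 if (i % period) < duty else 0 for i in range(n_samples)]

def or_mix(*voices):
    """Mix 1-bit voices with bitwise OR, as Perich describes:
    a 1 in any voice forces the mixed output to 1 for that sample."""
    return [int(any(s)) for s in zip(*voices)]
```

Mixing `square(100, 5, 1000)` with `square(100, 95, 1000)` shows the masking effect: the wide-duty voice is high 95 percent of the time, so the thin voice’s pulses disappear entirely inside it.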
So it’s strange because in a way, it’s kind of like turning up a signal, by changing its duty cycle, when it goes from thin to full-bodied — but as I turn it up, depending on what else is playing, the whole output might actually get quieter. So there are these really counterintuitive things, where I can have a really rich signal going on, and I do this a lot especially when I’m performing live, and then I fade in a signal that might be a pulse, let’s say. At first it sounds like a pulse starting to fade in, but then you realize that it’s actually cutting out, carving out the waveform that was there originally. And that happens, for instance, in the very beginning of the album. It starts out with what sounds like white noise and then a pulse starts to come in, but then all of a sudden the whole thing is pulsating. The original white noise is also pulsating, and it’s actually an artifact of this other voice getting louder that it starts to erase the original one. Weidenbaum: You elected to use a particular chip to contain this information. And it seems like you’re playing with these artifacts, which, if I’m understanding it correctly, are the results of unintended and unforeseen events that result from the information playing out as it does. So, I was wondering — this is an incredibly naive question, most of my questions are naive, this is an exceptionally naive one — had you elected to use a cheaper or more expensive chip, would that influence what we’re hearing, or is the data the data and it will play out the same way?
Perich: The data is the data and it will play out the same way, and that’s kind of the nature of computation. It’s ultimately platform independent. You can even do this with a pen on paper, and write down the output as you go, and then, you know, synthesize that. It’s just processes. It’s mathematics, and it doesn’t matter how you play them back. But I should first say that these kinds of counterintuitive processes, like the effects of these interference patterns of the waveforms, initially — what was the word you used? A surprising … Weidenbaum: Artifacts or unforeseen? Perich: Unexpected? They become the nature of the instrument, and over the last few years — I’ve actually been performing this project for a couple of years, leading up to writing the album — I’ve really gotten to explore the way these waveforms can interfere with and eat away at each other, or complement each other. And so now, it’s just basically the idiosyncrasies of this instrument that I’m writing for, and something that I get to play with. Weidenbaum: That’s cool. So it’s not quite the jazz musician’s experience of playing a bum note. It’s more like you’re discovering that a particular aspect of the system itself yields a result you could never have planned for, but once you master it, it becomes part of your performance technique. Perich: Yeah, something like that, and maybe half and half. Maybe some of the time I know exactly what I’m looking for and can draw it out. But when I have, for instance, all six voices playing at the same time, it can become very difficult to know what’s coming from what. And so, there is this little bit of a shot-in-the-dark moment in these performances. Not on the album — the album is fully composed and I really got to examine every little part in total detail, and shape it exactly how I wanted to. But when I perform, there’s a little bit of the unexpected built into that. It gets really complex, basically.
Weidenbaum: Now, I could spend a lot of time just asking about this programming, but I’ve got to avoid that propensity and just progress through some other aspects, because it’s just so fascinating. Perich: I’ll just finish the earlier answer by saying that it’s the same chip that I used in 1-Bit Symphony. They’re part of a family of microchips that are made by Atmel, and they all have the same assembly instruction set, and it’s something I’ve become very familiar with. It’s kind of a lame reason to use the same chip, but it does what I want it to, and they are very inexpensive, so I can use them in massive quantities. It’s funny, with all the new DIY hardware coming out that has built-in HDMI video synthesis and high-quality audio cards built in, DIY has already gone way past needing to use individual microcontrollers anymore. It almost feels like this hardware that was already antiquated — because computers haven’t been this slow since the 1990s — is now becoming antiquated again, because the DIY hardware has gone way past it. But with that new hardware, you lose the simplicity and austerity of these very simple chips. I guess they’re a hallmark of the early 2000s. Weidenbaum: Where else in my house would I find one of these chips? I own two copies of 1-Bit Symphony and now one of Noise Patterns, but besides those three chips, where else are they in my home? Perich: They would be in something like your microwave or your oven. Well, those are similar. Maybe in your car, to control some aspect of the door-locking system. Anything that has a little bit of intelligence in it, but isn’t super smart. We’re not necessarily talking about Internet-connected appliances. Any appliance that has a little bit of intelligence to it would have some chip like this inside of it, because they have to program some software to allow you to dial in the amount of time you want to microwave your food for, and then have that count down when you hit start.
You can control the energy level and all that stuff. That’s the kind of application that it might be used for, or it might also be in your phone or whatever as a part of a larger system. But something I really love about them is that they’re not made for audio. They’re just little computers. They’re not really designed for any particular application. They’re blank slates, and in order to make them do something, you have to physically solder their output to something like a speaker or a motor or a light or whatever. I really love that since they’re such a blank slate — Weidenbaum: Yeah. Perich: — it’s super explicit what you’re actually doing with them. Weidenbaum: Now, if I’m not mistaken, the one in 1-Bit Symphony has something of, like, an enclosure that it’s in? Perich: Well, no. It’s a different size just because it’s a different package of the chip. In 1-Bit Symphony it’s called a “through-hole” part. You’d normally plug that into a circuit board or a breadboard, and on a circuit board you’d solder the pins through on the back, and you could do that by hand. But Noise Patterns has what is called a surface-mount package. So, it sits on the circuit board and its pins fit on the pads, the pieces of metal that it gets soldered to, and a machine would normally solder that. Weidenbaum: Okay. Perich: It’s designed for a different kind of assembly process, one that’s done by machine instead of by hand. Because of that, it can be a smaller package. And I just wanted to use all surface-mount parts on Noise Patterns, so that they all sit on the surface of the circuit board.
Weidenbaum: How and where is Noise Patterns assembled, this physical object? Perich: The circuit boards, with their parts, are assembled and soldered in China, and those are shipped over here to my studio, and we program them all here, and then the rest of the packaging — the mat board that it sits in, the poster insert, the CD case and everything — that’s all made in the U.S. Weidenbaum: And the programming of them — when I was young, there was the opportunity to program an EEPROM chip; that was a kind of neat thing you might experiment with. Is it the same sort of encoding process as an EEPROM chip? Perich: Probably, yes. In fact, it has an EEPROM that’s programmed during the programming process. Weidenbaum: Oh, interesting. Perich: What happens is, I have a programmer that connects to my computer over USB, and it connects to these boards, and you make contact. You hit program in this command-line setup I have on the computer, and it just flash-erases the chip and overwrites it with this new code. It’s a very direct, low-level process. I compare it to transferring music to your phone or MP3 player, except when you’re doing that, it’s a communication between two operating systems. In this case, it’s communicating with this dumb device. It’s just overwriting its entire memory from scratch. Weidenbaum: Got you. Perich: That’s another thing I love about assembly language. There’s no architecture around it. There’s no operating system, and you can start writing code from memory address 0. And it’s just so beautifully direct. It reminds me a little bit of how in the 1990s, a modem was flash-upgradable, so you could access higher speeds in the future — and I don’t think any of us actually did that. It’s the same process, the direct programming of the flash memory of the chip.
The EEPROM section — when you’re looking at the source code, that’s on the back side — I use that to store the actual period for each pitch, each frequency, along with the period for each of the tempo changes that I have. That’s because these logarithmic values are hard to calculate. They’re these ugly decimal numbers, essentially, right? They don’t round to nice whole numbers, so I store this lookup table in the EEPROM, and that was just a way of getting another 512 bytes of data when the rest of the memory is used up. Weidenbaum: How much information can you store in one of these chips? Perich: It’s 8k in main memory and 512 bytes in the EEPROM, so not very much. Weidenbaum: No. Perich: I think of that as a sort of information-theory problem: if the goal is to write interesting music but to have it fit in this small amount of memory that includes the actual synthesis software, the sequencer software and everything, how can you write music that unfolds into a meaningful composition that lasts a certain amount of time? Weidenbaum: And how much have you maxed that out? Perich: Every single byte is used. What happened is I wrote the music and then there’s this moment when I transfer it onto the chip, and find out that there’s way too much information. And then I have to whittle down the music, take out musical events or simplify patterns, et cetera, and just keep doing that until it fits. It fits exactly. Weidenbaum: That is so cool. Just aesthetically, I want to ask a question about the paperwork, and then get back to the circuit board. You didn’t use a monospace font, where each character is the same width. Perich: Yeah. Weidenbaum: It seems like one would immediately aesthetically associate this with that sort of fixed width. Perich: Yeah.
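The EEPROM lookup table Perich mentions at the top of this exchange, one precomputed period per pitch, can be approximated like this. The sample rate and pitch range here are assumptions for illustration; the interview doesn’t specify the chip’s actual values:

```python
def period_table(sample_rate=38400, low_note=24, high_note=107,
                 ref_note=69, ref_freq=440.0):
    """Precompute the waveform period, in samples, for each pitch.
    The exact values are 'ugly decimals' -- they don't round to nice
    whole numbers -- so they're computed and rounded once here, then
    stored, rather than recalculated on the chip."""
    table = []
    for note in range(low_note, high_note + 1):
        freq = ref_freq * 2 ** ((note - ref_note) / 12)
        table.append(round(sample_rate / freq))
    return table
```

Stored as 16-bit entries, a table like this fits comfortably in a 512-byte EEPROM, which is consistent with using that memory as overflow once the 8k of main memory is full.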
Weidenbaum: Was that a conscious decision, and if so, why? Perich: Many years ago I set my text editor to a regular variable-width font, and I’ve never wanted to go back to fixed-width after that. I find that it’s very rare that you ever want things to line up vertically in your code. And if you do, I feel like it’s symptomatic of not using text properly. And so, in my mind, it should all be variable width. There’s really no reason to line things up that way. Even on the back of the poster, all of the notes are different lengths. It’s a different number of characters in each note, whether it’s a flat or a natural note, and the pattern names are different widths too. Some of them have a single digit in the column; some of them are double digits. To me it’s just a more natural way of working with code and information that I think is closer to how we see. And, it’s so much easier to read a variable-width font than a fixed-width font. Weidenbaum: And to confirm, the information that begins with a semicolon is not read by the machine. That’s a comment, correct? Perich: Correct, and I think that this code is much more heavily commented than the original 1-Bit Symphony code. I tried to really clean everything up. Semicolons are comments. Weidenbaum: So you’re writing code with the intention that a human will read it as well as a machine. Perich: Yeah, well, code that I can read, first of all, because it’s a pretty long source file, and as I write it I go back and I change things, and the file gets adjusted over the years, and so it’s important to keep the code commented so that I can understand how it all works, and later when I look back at it. I do want people to be able to understand it if they look at it. The way I comment is I tend to comment blocks of code at the beginning. I don’t comment every single line, so you have to get the gist of what’s happening in order to then dissect the lines of code. It’s not fully commented, but it’s supposed to give a hint of the structure.
Also, I indent the lines in a way that most assembly language programs, I think, do not — basically, I indent them around the structure of the code, so that if there’s a loop, for instance, I might indent the content of that loop, so it’s a kind of hierarchical-looking layout of the code that sort of matches the structure. Weidenbaum: But the machine doesn’t benefit from those indents. It’s purely for reference’s sake when you’re working on it? Perich: Correct, yeah. Weidenbaum: Moving from the aesthetics of the code to the aesthetics of the object, what — Perich: Oh, I should say that there’s a little surprise for people who read the source code and understand it. Weidenbaum: Oh, there is? Interesting. I read through most of the comments, but maybe it’s a deeper level of surprise than that. I’ll look through it again. Perich: One deeper level, not that deep. Weidenbaum: Cool, I’ll take another peek. Easter eggs are good. So with the device itself — is device an okay term? Perich: Sure, I don’t know what to call it, honestly. Weidenbaum: With the device itself, you know, it’s quite lovely the way that when you hold it to the light you see the way that the on/off switch connects down to the battery and to the chip and so forth. And then, later on, you can see the very light connection between the chip — basically, you can see the same paths that were made with cables in 1-Bit Symphony. Are those exactly as legible as they would have been, or did you make any aesthetic decisions in your directions to the manufacturer to make things more or less evident? Perich: I didn’t change anything about how they make it. I actually wish they were a little bit more legible. I knew you would be able to see something, because the actual copper has a little bit of thickness, and so when the solder mask — the matte black paint — goes on top of that, you’ll see a little bit of the shape of the metal. It’s just on the verge of what I would have wanted.
You can see it if you want, but it doesn’t stand out too much. The thing that you don’t see is what’s called the “ground” side of the circuit. There’s power and ground, and the connections that connect to the minus side of the battery are all done differently on this board than they were in 1-Bit Symphony. On 1-Bit Symphony, wires connect all the ground side of the circuit. But in this case, I do something called a “ground plane,” which is a technique in PCB design, where the ground side of the circuit board is the actual surface of the board. The entire board is covered with a layer of copper that connects all of the ground pins of all of the parts. It’s a way of reducing interference, like radio interference, and it means that you don’t see the actual ground connections. Weidenbaum: You know, I want to ask you about Loud Objects, and on my way there, I was wondering, is this “playable” in any way, other than the volume control? Can, say, touching two things at the same time have any effect? Perich: Well, one way to play it is if you read the source code closely. The other way, which is actually a really meaningful thing to me, and it’s super, super subtle, is that I included a fast-forward button, so you can fast-forward the music, you can skip to the next track, et cetera. But the subtle part is that, when you press the fast-forward button, you have changed the playback speed of the piece of music, but the random-number generator is moving ahead at the same rate. And so, the result of that is that you’ve now offset the music a little bit from the random sequence that it would have gotten had you not pressed that button. And so you are suddenly getting an entirely new version of the piece. It sounds pretty much identical, because I use randomness at this textural level in the sound synthesis. It’s not like it has any control at the actual musical level.
But the philosophical side of it is that if you never press the fast-forward button, then it’s this closed computational system, and it’s basically output-only. You know, you’re turning on the chip, the code starts running, and it’s this closed system that mirrors some basic number theory. And the moment you press the button, then you’ve kind of interjected in this pure closed system, and you’ve brought in all of the messiness of the real world — like whether or not our time and space is grid-based on the lowest level, or whether or not we have free will, or where randomness comes in in quantum mechanics. All of these messy real-world things suddenly come into the system and it’s no longer this clean abstract thing. And so, for me, what that also means is that the work of mathematicians like Alan Turing and Kurt Gödel, these people who really explored the limits of computation, and the limits of mathematics, really — their work is applicable until you press the fast-forward button. It’s applicable so long as this is a deterministic system, and you kind of kill that determinism when you press the fast-forward button. So I think of that as, like, playing the piece. Weidenbaum: That is a bit that I’m going to transcribe and reread especially carefully so I can fully appreciate it, because that’s a subtle difference. It’s really interesting. Perich: Thanks. I mean, we know that computers can’t generate random information, and the random-number generators in software are called pseudo-random number generators, so they look kind of random to us but they’re technically just deterministic, and they have a finite period, and they eventually repeat. They’re not truly random. But I find that really interesting and, to me, that’s one of the characters of what it is to work with code. And so, even though this whole piece is based on randomness, that randomness is machine randomness, and it has these limitations to it.
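The fast-forward effect Perich describes, where the music jumps ahead while the random-number generator keeps its own pace, can be modeled with a toy sequencer. This is purely illustrative; the structure and names are invented, not taken from his code:

```python
import random

def render(densities, skip_at=None, seed=42):
    """Play a sequence of density values through one shared random
    stream. A fast-forward at step `skip_at` advances the sequencer
    position without consuming a random draw, permanently offsetting
    the music from the random sequence it would otherwise have
    received -- a different realization of the same deterministic
    system."""
    rng = random.Random(seed)
    out, pos = [], 0
    while pos < len(densities):
        if pos == skip_at:
            pos += 1   # skip a musical step...
            continue   # ...but leave the rng state where it is
        out.append(1 if rng.random() < densities[pos] else 0)
        pos += 1
    return out
```

Two renders with the same seed are identical up to the skip and diverge afterward, which is the sense in which pressing the button produces “an entirely new version of the piece.”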
And in fact, the randomness has very little entropy. I don’t know if that’s the right way of describing it, but there’s very little entropy to it. I think I just used — yeah, I just used three bytes of memory to store my random values. So there aren’t too many options you have with three bytes of data; that’s 24 bits of randomness before it repeats. And I tested that and I found that was, basically, the smallest amount of information that I could get away with, without it sounding like something that’s repeating, so that it sounded like white noise to my ears, even though it has a cycle and it repeats. I could have made it four bytes, five bytes, or whatever, but that takes a lot more computational power. Part of the information-theory side of it is optimizing what that threshold is, what you can get away with in terms of the processes that are implementing the experience you want. Weidenbaum: Wow, this is a really rich territory. Perich: This is my favorite side of things. It’s just like these little subtle — Weidenbaum: It’s really fascinating. I spent a lot of time with your book, The First 1/100th Second of 1-Bit Symphony. And while you were talking just now I pulled it off the shelf. I keep it in the box it shipped in. Perich: Yeah, matte white can be problematic.
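A 24-bit state like the three bytes Perich describes allows at most 2^24 − 1 distinct values before the sequence repeats. One classic way to build such a generator on a small chip is a linear-feedback shift register; this Galois LFSR is an illustration, not Perich’s actual generator, and the tap mask is one commonly listed as maximal-length for 24 bits:

```python
def lfsr24_step(state):
    """Advance a 24-bit Galois LFSR by one step. With the tap mask
    0xE10000 (taps 24, 23, 22, 17), a nonzero state cycles through
    all 2**24 - 1 nonzero values before repeating -- a finite period,
    exactly the 'machine randomness' limitation described above."""
    lsb = state & 1
    state >>= 1
    if lsb:
        state ^= 0xE10000
    return state
```

Comparing each successive state against a threshold derived from pitch, as Perich explains earlier in the interview, would turn a stream like this into the 1-bit noise output.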
Weidenbaum: Yeah, that would be great, please. Perich: And then I tried generating documents for InDesign, and I tried loading the rich-text files in all sorts of different editors, and they all have issues except for Microsoft Word. The only problem with Microsoft Word is that it had a maximum tab count, where you can specify the width of each tab in the ruler, and it only supported 50 tabs, and I used I think 49 tabs or something for the bulk of the book, so it just worked. But yeah, it was kind of really crazy and absurd to make this thing. And then whenever I tried to print it to a PDF, that took forever. And then I sent it to the printer. Weidenbaum: When you sent it to the printer, did they come back and say, “We should check in about this,” or did they just unquestioningly produce what you requested? Perich: They were really hands-on. We did paper samples. We did printing samples. We worked with the margins of the book, because I wanted everything to lay out nicely and tightly, and to make sure it’s legible, because the font is so small. They said it was an unusual project for them, but they were great. It was printed by Bookmobile, who I feel is dancing the line between printing on demand and more high-quality printing. They printed these beautiful books, and I was incredibly happy working with them. Weidenbaum: It’s a phenomenal object. So I want to ask another question about the Noise Patterns object and then I want to move to performance with Loud Objects. So, the font that appears on the circuit board is familiar to us because it’s the font — like, I have a small modular synthesizer I have been slowly assembling with various devices, mostly Doepfer and a few others, and I recognize the font from that. That’s the font that is simply part of the PCB process, correct? Perich: Yeah, I mean, it’s just a practical font. It’s a vector font that the PCB software that I use comes with. The software is called Eagle.
And it’s a line-based font, so it doesn’t take up much information, and it’s easy to render. The software can do normal fonts; for instance, on the back of the circuit board you’ll see the little recycling symbol. Weidenbaum: And the CE. Perich: And the CE certification agency symbol. Those are actual vector images, so there’s no real problem printing a normal font on these. But I felt a connection to the native circuit-board font. And it has a parameter that you can control, that you don’t have access to in most fonts, which is the width of the lines. So, it’s not just like bold or, you know, extra bold. You can control the ratio of the line width to — something. I made it a little wider than the default, so it kind of has this weight to it. Weidenbaum: I corresponded with a few people before doing this interview, one of them being Brian Crabtree, a developer of the Monome grid instrument, and the other being the Serbian sound artist Svetlana Maraš, to see if they had any questions for you. Brian put it more beautifully than I was going to — we’re talking about the “aesthetics of code” and we’re talking about the “aesthetics of the system,” and I wonder if you make connections between them, or if any correlation is just a natural outcome of your work process. I was wondering what your thoughts are about that. Perich: The visual design of it, of the object? Weidenbaum: I think aesthetics plays out in the visual, but also the user experience of the object, even at a broader aesthetic level, the tones — you used a term early on in this conversation. I wish I had written the word down at the time. You used the word to talk about how “arid” it was, or something like that — Perich: Austere? Weidenbaum: Austere, yeah. Perich: Yeah, I mean, I am interested in design in general, let’s say. And, I also grew up looking at a lot of minimalist art and listening to minimalist music that had a certain clear aesthetic that makes you think about certain things.
And I also am really interested in mathematics and the foundations of mathematics, and having simple systems, things like looking at how complexity can arise from a few axioms, along with some simple operations. And I think that it’s kind of a combination of these kinds of things. My first circuit album, 1-Bit Music, for me introduced this idea that the circuit needs to be transparent and people need to be aware of the path of electricity through the circuit when you turn it on. And 1-Bit Symphony cleaned that all up, and it made it something that, for me, was a little bit more linear, almost typographic. There was this kind of direction to the electric signal from left to right that I found was important. And I think the same kind of thing is happening in the music, too. I’m inspired by simple forms in general. I have a clear, clear memory of the first time I learned about polyrhythms in music, from Philip Glass’ early piano pieces, and that carried me for, like, 10 years of musical interest — these very, very simple processes were just incredibly beautiful and profound. And so, I think that there’s that in my music, but then that also extends to the way I think about code. And because I have to represent my music in code, I think about the best way to do that — for efficiency, but then also just for clean code, and also as this challenge. And so that becomes a goal, and then that inspires the music as a result, as I start thinking about things that are represented nicely with number sequences, and thinking of using that as a starting point for musical material. So it kind of all wraps up on itself, and I think that they’re all different aspects of the same thing. And so with Noise Patterns, I kept the circuit layout exposed, I kept the source printout; I didn’t even change that aesthetic at all. It’s just swapping out the elements of 1-Bit Symphony in that poster for elements in Noise Patterns. I’m trying to say that they’re all connected.
Maybe there’s something different about visual product design that’s a little bit more purely aesthetic, like the matte-black circuit board. When I first saw the prototype, I was just really happy with how it looked. But at the same time, it connects to what I think is the character of the music. And so it’s a little bit weird for me, because I actually try to keep different media separated in my work. I do visual art, I do drawing, I do video, I do music — but I almost never overlap or combine them, because I think they are distinct media. Unless I have a creative idea that is both video and music, it shouldn’t overlap between them unless it was originally intended for both. So there’s this kind of weird thing about these releases, where there’s music but there’s also this visual element. But I think the visual element connects to the music, and actually, now that I think about it, probably on some level inspires the music that I write for the object, because I’ve been designing this object for a year or two now. I’ve had plans for this album for a few years, definitely, and I only finished the music recently. And I’ve been performing it for a number of years, too. So I think the whole process is some kind of slow congealing — a lot of ideas get cut before the final piece happens, both visually and musically. And in the code, too. Some of the images I sent you describe this ending of the piece that I wanted to do, where I make the density of the randomness get sparser and sparser and sparser, kind of like going down and down in pitch, the way I was talking about pitch as probability earlier. And having it continuously get sparser and sparser until eventually it just outputs a 1 or a 0, and it would take the age of the universe to hear the next one.
Weidenbaum: Yeah, it says, “ending gets sparser and sparser forever,” which dates back to 2012. It’s that long ago that you were beginning to plot this.

Perich: I guess so.

Weidenbaum: Wow, that’s amazing.

Perich: I didn’t even think about that. The chronology of it is that I started composing pieces that used noise on stage with other instruments a number of years ago, and it only became a solo electronic project later. This album was a result of that. So on some level I’ve been thinking about these things for a while. That was four years ago. Most of that delay was just a testimony to trying to do a project like this with a newborn in the family, and how difficult it’s been to do something on this scale the last few years. [Laughs.] I realized recently that I’m so grateful for that delay, because I don’t think I was really ready to write this piece of music until just recently. It would’ve been too much of that first-idea stuff, and not enough of a reflective, slow music-writing process, where you realize what you really want to say in a piece of music, as opposed to just the first impulse. But that got cut, basically. I wanted to do the infinite-descent thing, and it would have taken 90-something bits of memory per variable to do it — not even that much — so, a little more than that. It ended up taking too much of the processor’s memory, and I would have had to sacrifice another 500 or so musical events to fit it on the chip, and that was too much. And at some point I realized that fading back into white noise was a more apt ending to the piece.
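[Editor’s note: Perich’s actual firmware isn’t reproduced here, but the “density as probability” idea he describes, along with the infinite-descent ending that was cut, can be sketched in a few lines of Python. This is a hypothetical illustration; the function names and the per-second halving rate are assumptions, not taken from his code.]

```python
import random

def noise_pattern(density, n_samples, seed=None):
    """Generate a 1-bit noise stream: at each sample, flip the output
    with probability `density`. High densities sound like white noise;
    very low densities reduce to sporadic pops."""
    rng = random.Random(seed)
    out, state = [], 0
    for _ in range(n_samples):
        if rng.random() < density:
            state ^= 1  # random oscillation between 0 and 1
        out.append(state)
    return out

# The cut "infinite descent" ending: halve the density every second,
# so flips grow ever sparser without ever reaching silence.
def descending_density(t_seconds, start=1.0):
    return start * 0.5 ** t_seconds
```

Near a density of 1.0 the output toggles on nearly every sample; as the density falls, the stream thins from a white-noise wash to a crackle to isolated pops, which is the EQ-sweep-like effect Perich describes.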
Weidenbaum: And you never thought about just getting a bigger chip?
Perich: At that point, it was literally already in production, so I didn’t have much of an option to change something like that. But no, the next chip up would have meant a significant change in the circuit-board layout because, unfortunately, the larger chips also have a larger number of input/output pins.
Perich: And they require more power, so it’s a whole different thing. I really wanted to keep this bare-bones. I didn’t think that was — well, that’s a tough one — I didn’t think there was a good enough reason to include that in the music, and it was one of those ideas that I eventually let go of. But I don’t know — it was also really nice. I did some test sounds with it, and it sounded really nice.
Weidenbaum: I want to ask you a handful more questions, but we are also deep into our time, so I want to try to ask them as a series, and I’m going to fight my impulse to ask follow-ups and just ask that you respond to them relatively concisely. There are things I really want to make sure we include, because I think they’re important satellites to what we have been discussing. The first is a very practical one. I noticed that the headphone out, or the audio out, seems like a more substantial object in Noise Patterns than in 1-Bit Symphony. Is there anything you learned from the previous audio out that made you elect for this one this time?
Perich: No, that was an aesthetic choice. I just liked the weight of the newer, wider audio jack. It’s functionally exactly the same.
Weidenbaum: Cool. And the next: you mentioned, maybe because you knew I was going to ask you about your dad and his art, the environment in which you were raised. So, at a very practical level, how recent are those “machine paintings” of your dad’s? Is that something he was always involved in, or is it a more recent part of his artistic output?
Perich: Well, he built the Painting Machine in the ’70s, before I was born. I grew up from the get-go with the idea that machine-made art was legitimate and meaningful. His story is that he came from Paris — where he was involved in what was called the “Lettrist movement,” which was kind of a Dadaist thing — to New York and started doing video art, and was really part of the art scene in New York. He built the Painting Machine in a way to emulate how video works, which is a series of scan lines that create an image. He had this idea that the future of painting was electronic, and he wanted to explore that in his work. I grew up with these massive, really intense, beautiful paintings that were made by machine. They were very visceral. They have a presence to them that’s very loud and strong. He can make them more or less precise, and that’s something he has explored over time. He has explored things like color palette, and different parameters — kind of the same way I slowly adjust one parameter here and there over a period of my work. He was not a minimalist at all. His work was very — “expressionist” isn’t the right word, but very human, very emotional, very evocative. Yeah, emotional is maybe the word for me. And yeah, what do you want to know?
Weidenbaum: It’s such a rich territory. It’s very autobiographical, and so I hesitate to even go there, just because it can only be dealt with appropriately by giving a lot of time to it. But just to connect a few dots: your father had a strong association with Interview magazine, right?
Perich: Yeah, he did photography for Warhol for Interview magazine, and he was in the whole Studio 54, Max’s Kansas City scene. His video work was in a style of just hitting record and making these long-form video documents of the city around him, of the people, or these little semi-narrative, scripted, impromptu-like screenplays that he and his artist community — the people he worked with — would put on, and he would screen them. He would send them in to public-access television, and they would play on cable television each week, because cable access was this amazing thing that started around that time, where it gave airtime to the public. It was a community-based service. It was basically like Facebook or YouTube, or something — you could go out to some party, like Studio 54, that sort of was an artistic happening, and my dad might be videotaping it. And then you could tune in the following Tuesday, or whatever it was, and see yourself on TV, and your friends, and it was a new kind of broadcast medium for the public. And actually that ties in very closely to how ITP, the Interactive Telecommunications Program at Tisch, was started. It started around video as a new broadcast medium that could be interactive and could be a new form of artistic expression, and then the program basically just kept up with different technology over the decades.
Weidenbaum: Now Warhol of course was someone who was very associated with the idea of mechanisms as part of the artistic process. And I was wondering to what extent your father had discussions with him on that topic — to what extent there was some parallel between your dad’s interest in mechanical systems and Warhol’s philosophical and artistic approach to systematizing the production of art.
Perich: Yeah, I don’t know exactly, but I do know that Warhol has a quote in his journals where he says that he ran into Anton Perich, and Anton’s machine was painting at home, and that he wished he had a painting machine — something like that.
Weidenbaum: That’s cool.
Perich: Yeah, very cool. But I haven’t thought about it in terms of mechanical processes in that sense, because when I think about a mechanical process, like printmaking or something, it serves as a direct transfer, in a way. I always thought of my dad’s painting machine as more complex. I don’t know exactly where that line is drawn, but I’ve just seen it as a framework for creativity, one that he could control himself. He could feed it images via an overhead projector, and the paint head, as it moved across the canvas, had a photocell that could read the projection’s light level and turn the paint flow on and off depending on the projected image, and sort of do a very, very primitive scan of the image. He would combine these methods to do something that was intuitive. And so I don’t think of it as process-based on a high level, and actually this makes me think a lot about my own approach, which is that I create these systems that are heavily rigid and limited and everything, but they’re really just systems for then creating. I write music for them. And that music writing itself is really intuitive. We both work with machines in that way — the machine is just … he talks about it like a new kind of paintbrush.
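[Editor’s note: the photocell-and-paint-flow mechanism Perich describes amounts to a thresholded scan. A minimal sketch, assuming an image supplied as rows of 0–255 brightness values; the function name and threshold are illustrative, not documentation of the actual Painting Machine.]

```python
def paint_scan(image_rows, threshold=128):
    """Simulate the Painting Machine's photocell logic: scan the image
    row by row (like video scan lines) and open the paint flow (1)
    wherever the projected light level exceeds a threshold."""
    return [[1 if px > threshold else 0 for px in row] for row in image_rows]
```

Each output row corresponds to one pass of the paint head; the 1s mark where paint would flow, yielding a very primitive one-bit scan of the projected image.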
Weidenbaum: Is your work very much philosophically aligned with his? Does he see you as, like, his greatest artistic creation? Or are there strong differences between the two of you in how you approach systems and machines and art?
Perich: Yeah, he is the type of father who — he loves everything I do, no matter what I do. So I don’t know how to tease that apart.
Weidenbaum: Got you. He’s not critical.
Perich: Well, he also is an intellectual, so we can have, you know, meaningful conversations, but I’m always going to have a father-son relationship with him. I was actually just thinking recently about how I have a daughter and am soon to have a son, and I’m kind of getting scared about what my role might be in their artistic development, if they decide to become artists. How does one give somebody the space to become an artist, to explore their own work, but also give them just the right amount of feedback, for instance, or direction or whatever? I was thinking a little bit about my parents and how they really offered very little in terms of specific artistic feedback. My mom — her main prerogative is the emotional content of the work, and she sometimes sees the intellectual stuff as getting in the way of the more human side of a piece. And with my dad, it’s just kind of hard to tell. But neither of them ever really got into the nitty-gritty of my work. I think it’s just interesting, looking back on that.
Weidenbaum: So, like I said, I’m trying to force myself not to do what I usually would do, which is ask lots of follow-up questions, and I think I’ve reserved two more. One of them is very specific, and one maybe we could just sort of fade out with. The specific one: did you at any time in the production of Noise Patterns think about adding a speaker to it?
Perich: No, not in this case. I do have work that I want to explore that with, but this was never one of those. This was meant to be a headphone or sound-system piece: a stereophonic, two-channel electronic audio thing. Speakers embedded in the device wouldn’t be high enough quality, unless the object were really engineered around that, and they wouldn’t give the right stereo field. It’s really meant either to be pressed against your ear with good headphones that have suitable bass reproduction, because the “full-body”-ness of the sound needs the right quality of headphones, or else it’s extra abrasive and tinny. Or a really powerful stereo system is great, too. I actually wrote a lot of the music sitting in the car, because I just loved how it sounded in there. So play it back with some good subwoofers and the volume cranked up real loud. That’s when the visceral side of the electronic sound really, really hits me.
Weidenbaum: I’ll bring it on the road for sure. It’s so awkward for me not to ask follow-up questions. I’m now realizing how much that’s part of my interview technique. I feel like I’m being rude, because part of the reason I ask follow-ups is to register with the person I’m speaking with that I’m really paying attention and there are ideas I want to feed off, but I’ve already absorbed so much of your time. I think I want to end on just one thing. We may not literally end on it when I publish this, but for now I want to end on it. I was just entranced when I saw Loud Objects perform at the San Francisco Electronic Music Festival several years ago.
Perich: Oh yeah, thanks.
Weidenbaum: I think at the time I said it was the highlight of the event, and it’s certainly the only thing I remember from the event now. To me the connections are fairly clear, but can you connect the topographical quality of the objects you’ve made with Loud Objects and the way you’ve talked about “the transparency of the system” in regard to Noise Patterns and your book — the literal transparency of using the overhead projector, us witnessing this thing being constructed, and that “ah-ha moment” when we witness it making sound? I think you can see how much I’m absorbed in it, but how does Loud Objects connect, in the chronology of your work and maybe in terms of your learning experience, with what eventually became literal circuits provided to the listener?
Perich: Yeah, yeah, “witness” is exactly the right word for me. Loud Objects was always able to do something that didn’t really fit into my work, which is actually constructing the circuit in front of the audience during the performances. We did them with overhead projectors, so there’s this magnified element. As we solder together these chips and the power and the audio out, and build the circuit in silence, you see how it’s connected, and then when the final wire gets connected it explodes into sound, and we modulate that over the rest of the performance. There’s a lot of overlap with my work. For one thing, we’re working with 1-bit sound, though if anything I think it’s a little bit less philosophical in Loud Objects, a little bit more practical. We just wanted to have as direct a connection as possible, so we take the direct output of these basic chips, and that’s 1s and 0s, so that’s 1-bit sound. I think the history is pretty much parallel, too. I started 1-Bit Music, the first album, right after I finished college. It was just around that time, because I went to college with Kunal Gupta and Katie Shima, the other people in Loud Objects — who didn’t perform in that San Francisco show; that’s when Lesley [Flanigan, Perich’s wife] subbed in. In a way, Loud Objects doesn’t really have, like, a drummer or a bassist or whatever. We all perform the same operations and can be hot-swapped. I went to Providence for about nine months and really soaked in the noise scene there. It was during that time that I was talking to Kunal and Katie about making music with circuits, because I was just starting to get into the 1-bit stuff myself. We were thinking about modular circuit performances, where we’d have these pre-built modules that would be sound generators, and we would connect them together somehow — kind of like what we’re doing now, but more pre-made.
Within that first year or so, we were asked to perform at the Bent Festival in New York, and we wanted to do something that took our work one step further, and that’s where the whole overhead-projector idea came from. There was all this circuit-bending stuff happening around us, and we wanted to do it live, and invite people into that aesthetic of breaking open the circuit. Then it grew from there, because we were building our circuits instead of modifying existing ones, so we got more and more involved in the building process, and more involved in the kinds of code we could write for these sound synthesizers. Each chip represents a different program, and I think the big code difference between Loud Objects and my work is that I come from a compositional standpoint, where my code is built around structure and composition, really. Loud Objects, from the start, was always entirely playful. We just wanted to have fun writing code. We wanted to make software that was totally unintelligible, that we didn’t really understand, so we would just mess around with the code, set up structure and interject in it, really mess things up, and see how we could get some interesting sound out of that. So it was really like playing with the circuit on an intuitive level, and playing with code on an intuitive level, and the performances were just about bringing that process to life. But yeah, it’s the same idea of transparency and agency and bare-bones tools and their bone structure, basically.
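[Editor’s note: “the direct output of these basic chips … 1s and 0s” is, on the tone side, just a pin toggling at a fixed rate. A minimal sketch of that idea, not Loud Objects’ or Perich’s actual chip code; the function name and parameters are illustrative.]

```python
def one_bit_tone(freq_hz, sample_rate, n_samples):
    """Render a 1-bit square wave: the output flips between 0 and 1
    every half period, like a microcontroller pin toggled on a timer."""
    half_period = sample_rate / (2.0 * freq_hz)  # samples per half cycle
    return [int(i // half_period) % 2 for i in range(n_samples)]
```

Everything audible in a 1-bit system has to be built out of such on/off transitions; pitch, timbre, and polyphony all come from when the flips happen, not from sample amplitude.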
Weidenbaum: One of the most overused words in art is the word “about” — you know, it’s “about technology,” it’s “about social change.” Every piece of art is accompanied by this idea that it’s “about” something. What makes the work you do so interesting to me is that the systems are so whole, from the sound to the aesthetics to the thought process. It’s very structured and revealing. What I love about the performances is that performances are rarely “about” things the way art is, because they’re active, and so you’re engaged in a different way. And what I love about Loud Objects is that it was engaging, and thorough, and the fact that it can exist as a performance, to me, ratified this as an idea.
Perich: I don’t know exactly how it works with other artists, but I’ve realized what my work is about over time. When I started working with 1-bit information, with 1-bit sound, it was just an exciting musical moment for me. There was something exciting about creating music with just 1-bit output. But then all the other stuff came, and more recently even what I think of as social issues, like realizing that the transparency has something to do with agency, you know, in a current world where technology is becoming ever more opaque and will continue to do so. Those are all issues that came out over time, and they’ve influenced where the work has gone, but I certainly didn’t have all of that figured out at the beginning, and it wasn’t about those things to begin with, at least. And of course, after talking about all of this stuff, the technical stuff and everything else, at the end of the day it’s really just music. It’s meant to be a musical experience. It’s one thing if it makes you think about other things, but I want Noise Patterns and 1-Bit Symphony and all my compositions to hold up as music, too, without anything else.
Weidenbaum: Yeah, it’s really just fascinating. There’s just so much, between the installation work and the physical releases, and the book, and yeah, it’s a very coherent whole, but the pieces are all so distinct. Well, I could just talk for another two hours.
Perich: I’ll tell you one more thing.
Weidenbaum: No, please.
Perich: It’s a very meaningful thing, where Noise Patterns came from, which is from my time with the drawings. The drawings are essentially these visual explorations of randomness and order, which I then compose into some sort of shape or pattern that the Drawing Machine executes. 1-Bit Symphony and 1-Bit Music are based on tone, especially 1-Bit Symphony. It’s pure tone, essentially, pure frequencies, and that kind of represents the “order side” of the drawing. And then the randomness in the drawing became, essentially, Noise Patterns, and it was from the drawings, working with the randomness in them, that I realized there was this other side, this opposite but strangely similar end of the information spectrum: order is zero information, and randomness is maximal information. In a way they’re kind of related, but they’re opposites. But really this album came from the drawing.
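[Editor’s note: Perich’s “information spectrum” remark maps neatly onto Shannon entropy. For a 1-bit stream that flips with probability p, the information rate is zero bits per sample at p = 0 or p = 1 (pure order, whether silence or a steady tone) and maximal at p = 0.5 (pure randomness). A sketch, with the function name an illustrative assumption:]

```python
from math import log2

def flip_entropy(p):
    """Shannon entropy, in bits per sample, of a stream that flips
    with probability p: zero at p = 0 or p = 1 (pure order),
    maximal (1 bit) at p = 0.5 (pure randomness)."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)
```

This is one formal sense in which order and randomness sit at opposite ends of the same spectrum, yet remain two faces of the same 1-bit signal.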
Transcription assistance by Kristen Dayon (kristendayon.com).