This is a screenshot of my mobile phone. What it displays is the active interface for the Google Play Music app. Visible are the cover images from four full-length record albums, all things I’ve listened to recently: the new one from the great experimental guitarist David Torn (Sun of Goldfinger, the first track of which is phenomenal, by the way), an old compilation from the early jam band Santana (for an excellent live cover of Miles Davis and Teo Macero’s proto-ambient “In a Silent Way” – more tumultuous than the original, yet restrained on its own terms), and, for unclear reasons, not one but two copies of Route One, released last year by Sigur Rós, the Icelandic ambient-rock group.
If you look closely at the little icons on top of those four album covers, you’ll note two that show little right-pointing triangles. That’s the digital sigil we’ve all come to understand instinctively as an instruction to hit play. And you’ll note that both copies of Route One are overlaid with three little vertical bars, suggesting the spectrum analysis of a graphic equalizer.
What isn’t clear in this still image is that those little bars are moving up and down – not just suggesting but simulating spectrum analysis, and more importantly telling the listener that the album is playing … or in this case the albums, plural. Except they weren’t. Well, only one was. While I could only hear one copy of the Sigur Rós record, the phone was suggesting I could hear two. Why? I don’t know. I felt it was teasing me – teasing me about why we still listen the way we used to listen, despite all the tools at our disposal.
Now, if any band could have its music heard overlapping, it’s Sigur Rós, since they generally traffic in threadbare sonic atmospherics that feel like what, for other acts such as Radiohead or Holly Herndon or Sonic Youth, might merely be the backdrop. All these musicians have hinted at alternate futures, though in the end what they mostly produce are songs, individual sonic objects that unfold in strictly defined time.
It’s somewhat ironic that Route One is the album my phone mistook as playing in two versions simultaneously, since Route One itself originated as an experiment in alternate forms of music-making. It was a generative project the band undertook in 2016, described by The Verge’s Jamieson Cox as follows: “a day-long ‘slow TV’ broadcast that paired a live-streamed journey through the band’s native Iceland with an algorithmically generated remix of their new single ‘Óveður.'” The Route One album I was listening to contains highlights of that overall experience. An alternate version, with the full 24 hours, is on Google Play Music’s rival service, Spotify.
What this odd moment with my phone reminded me of was that it’s always disappointing, to me at least, how little we can do – perhaps more to the point, how little we are encouraged and empowered to do – with the music on our phones.
Why don’t our frequent-listening devices, those truly personal computers we have come to call phones, track not only what we listen to but also how we listen to it, and then play back on-the-fly medleys built from our favorite moments, alternate versions created in collaboration with a machine intelligence?
Why can’t the tools time-stretch and pitch-match and interlace alternate takes of various versions of the same and related songs, so we hear some ersatz-master take of a favorite song, drawn from various sources and quilted to our specifications?
Or why, simply, can’t we listen easily to two things at the same time: add, for example, Brian Eno’s 1985 album Thursday Afternoon, itself a document of an earlier generative system, to Route One? Or just add one copy of Route One to another, as my phone suggested was happening, one in full focus, the other a little hazy and out of sync.
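For what it’s worth, the core of that overlapping-playback fantasy is technically trivial. Here is a minimal sketch, in Python with NumPy, of mixing one audio stream under another, the second copy delayed, quieter, and blurred with a crude moving-average lowpass to make it “hazy.” The `overlay` function and all its parameter names are hypothetical illustrations, not any existing player’s API, and a synthetic sine tone stands in for an actual recording:

```python
import numpy as np

def overlay(primary, secondary, sample_rate=44100, delay_s=0.5, gain=0.4, smooth=64):
    """Mix a second track under a primary one: delayed, quieter,
    and 'hazy' via a simple moving-average lowpass filter."""
    # Moving average blurs high frequencies -- a crude haze.
    kernel = np.ones(smooth) / smooth
    hazy = np.convolve(secondary, kernel, mode="same") * gain
    # Delay the hazy copy by padding it with leading silence.
    pad = int(delay_s * sample_rate)
    hazy = np.concatenate([np.zeros(pad), hazy])
    # Sum the two streams, matching lengths with trailing silence.
    n = max(len(primary), len(hazy))
    out = np.zeros(n)
    out[:len(primary)] += primary
    out[:len(hazy)] += hazy
    return out

# Stand-in for an audio file: a two-second 440 Hz sine tone.
sr = 44100
t = np.linspace(0, 2, 2 * sr, endpoint=False)
tone = np.sin(2 * np.pi * 440 * t)
mix = overlay(tone, tone, sample_rate=sr)
```

The point isn’t the signal processing, which any audio library handles; it’s that nothing in the playback software invites the listener to do this.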
Why aren’t these tools readily available? Why aren’t musicians encouraged to make music with this mode in mind? Why is this not how we listen today? Why do we listen like we used to listen?