The maximum display width of an image on Disquiet.com increased significantly with this site’s recent redesign. I figured I’d employ the capacity for the first time by taking a screenshot of the six modules that the Scottish company Instruō (instruomodular.com) made available for free last month on the free software synth platform VCV Rack (vcvrack.com) — along with, for good measure, a seventh module, the earlier Cš-L oscillator, just to max out the width. Each of these modules was ported to software from existing commercial hardware that Instruō designs and builds in Glasgow.
It’s also a good opportunity to highlight the interview I did back in January 2021 with Instruō founder Jason Lim about the process and decision-making that went into the company’s initial slate of hardware ports: “How Instruō Went Virtual.”
It’s the start of a new year, and I want to try to get back in the habit of posting quick mentions each Sunday of my favorite listening from the week prior:
▰ Hildur Guðnadóttir already had committed some of the most remarkable film music of the year for Tár, Todd Field’s feature starring Cate Blanchett, and she’s followed it up with Women Talking (Deutsche Grammophon). Both scores veer dramatically from her often drone-based prior work (Chernobyl, Joker, Sicario: Day of the Soldado). Women Talking, in contrast, features a lot of staccato string work.
▰ If I had done a top favorites of 2022, guitarist Bill Frisell’s Four, his third album for the jazz label Blue Note, would have been on the list for sure. It teams him with Johnathan Blake on drums, Gerald Clayton on piano, and Greg Tardy on horns (saxophone, clarinet, bass clarinet). The key word is “team,” as this is a jazz album with essentially no solos; it’s all about constant interplay.
▰ Beth Chesser and Pier Giorgio Storti collaborate as Rathrobin. Their album Ear to the Ground combines strings, voice, and unidentifiable textures, including field recordings, into sometimes aggressive but often ruminative sonic spaces. It came out almost a year ago, at the end of January 2022, but I’ve only recently started listening to it.
▰ Rplktr (aka Łukasz Langa) recorded half an hour using the Awake script, which comes as part of the Monome Norns musical instrument. It’s sparkling and lightly percussive. Just listen as the patterning unfolds.
▰ Embedding here won’t do it justice, so if you do use Instagram, check out Jorge Colombo’s account (instagram.com/jorgecolombo) — specifically the short films he posts. The “NYC2” batch, for example, consists of black and white snippets, shot in cinematic horizontal mode — field recordings that evidence the keen eye and ear I’ve admired for decades.
On the left is the M8, a remarkable little portable synthesizer (or “synthesizer sampler sequencer,” as the developer describes it: dirtywave.com) that I got recently. On the right is my iPhone running a piece of software called TouchOSC (hexler.net), which provides a customizable control surface. In between is a Micro-USB cable and an Apple dongle. Given how complicated so much technology can be, all the more so when trying to connect two pieces of technology from different manufacturers (don’t get me started on my I2C headaches — and if the term “I2C” is unfamiliar, you might count yourself thankful), I marveled at the immediacy of this connection, the ease with which I could suddenly not just set parameters but manipulate them in real time.
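Part of why that connection felt so immediate is that OSC (Open Sound Control), the protocol TouchOSC is named for, is a very simple wire format: an address string, a type-tag string, and the value itself, each padded to four-byte boundaries. Here’s a minimal sketch of packing a single fader move into an OSC message using only the Python standard library — the address “/fader1” and the port number in the comment are made-up examples, not anything specific to the M8 or TouchOSC’s defaults:

```python
import struct

def osc_string(s: str) -> bytes:
    """Encode a string the OSC way: null-terminated, padded to a 4-byte boundary."""
    b = s.encode("ascii") + b"\x00"
    return b + b"\x00" * ((4 - len(b) % 4) % 4)

def osc_float_message(address: str, value: float) -> bytes:
    """Build one OSC message carrying a single 32-bit float argument."""
    # Layout: padded address, padded type-tag string (",f" = one float),
    # then the float itself in big-endian byte order.
    return osc_string(address) + osc_string(",f") + struct.pack(">f", value)

# Sending it is just one UDP datagram (host/port here are hypothetical):
# import socket
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(osc_float_message("/fader1", 0.5), ("192.168.1.50", 8000))
```

That the whole exchange fits in a 16-byte datagram goes a long way toward explaining the real-time feel: there’s essentially no protocol overhead between moving a fader and the parameter changing.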
Increasingly, our devices — large and small alike — can be said to have senses: a variety of inputs that contribute to a mosaic awareness of the world in which they are put to use. It takes a lot of different technologies to make a car smart, or at least “smart.” The automotive manufacturer Tesla has announced that it’s dispensing with one of them: the “ultrasonic” sensors that were a part of the namesake cars’ safety sensory array. And this isn’t the first such functional excision, either, according to Engadget: “Last year, Tesla started phasing out radar sensors in favor of vision-only Autopilot, tweeting at the time that ‘vision has much more precision [than radar].’” Car & Driver weighed in at the time: “Other automakers use radar for their adaptive cruise control systems, and they benefit by being able to operate in inclement weather and direct sunlight.” (Back in 2015, per electrek.co, Tesla filed a patent, “Hidden Ultrasonic Sensor Assembly,” for “new ways to disguise the sensors,” reportedly so they wouldn’t mar the cars’ exterior design.) The cars will now rely on visual data for maneuvering, it appears: “It’s part of the company’s shift towards its camera-only Tesla Vision driver-assist tech,” per The Verge.
I’m intrigued by the idea that a compute-intensive device, especially one concerned with machine proprioception, becomes more capable by limiting the variety of data sources it draws upon. Car drivers generally know that sound plays an important role in gauging things like the quality of a road, the disposition of the weather, the proximity of nearby vehicles, and even the state of the vehicle itself. That said, ultrasonic sensors aren’t truly ear-like; the functionality is more usefully akin to echolocation, in that they emit signals and then gauge the response as a means to map physical spaces.
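The echolocation comparison reduces to a single piece of arithmetic: the sensor emits a ping, times the echo, and the distance to the obstacle is half the round trip at the speed of sound. A minimal sketch (the 343 m/s figure assumes dry air at roughly 20°C; real sensors would compensate for temperature):

```python
SPEED_OF_SOUND_M_S = 343.0  # dry air at ~20 degrees C (assumption)

def echo_distance(round_trip_s: float, speed: float = SPEED_OF_SOUND_M_S) -> float:
    """Distance in meters to an obstacle, from an ultrasonic ping's
    round-trip time in seconds. The sound travels out and back,
    so the one-way distance is half the total path."""
    return speed * round_trip_s / 2.0

# A 10-millisecond echo puts the obstacle about 1.7 meters away:
# echo_distance(0.01)
```

Which also illustrates why these sensors mattered for parking and low-speed maneuvering: at bumper distances the echo returns in milliseconds, regardless of lighting or lens grime — conditions under which a camera-only system has to work harder.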