When DALL·E 2 opened the user floodgates on Thursday, I was ready with my generative visual aspirations. The New York Times, back in April, described the browser-based software DALL·E 2 as “building technology that lets you create digital images simply by describing what you want to see.” That might sound like yet another technologist’s empty vaporware promise, but as the intervening months have shown, the software actually works — which is both thrilling and, true to the modern internet, terrifying. (The same day DALL·E 2 became accessible to anyone, Facebook announced its own AI art effort, a text-to-video tool called Make-A-Video.)

I spent about an hour during breakfast feeding concepts into the DALL·E 2 algorithm and waiting patiently to see what might pop out. Of all the images I received, the one shown here was my favorite. The prompt I entered was “3D render of an earbud that uses nanotechnology to connect with your hippocampus.” I may share more of my audio-themed DALL·E 2 harvest later. I posted a bunch to twitter.com/disquiet over the course of the day, including some telling fails.

I particularly like this result because it looks both fantastic (that is, beyond what I had myself imagined when I asked for it) and yet very much like the burnished fantasies (over)sold by technology companies. One has to wonder how much of the raw cultural material on which this is based — the now hotly debated source content for DALL·E 2’s automated creativity — was itself no more real than this image. The dreams that dreams are made of.

I’ve spent the past decade sending brief music composition prompts to a growing community called the Disquiet Junto, whose members each week make new tracks based on carefully worded instructions. Needless to say, interacting in an adjacent manner with artificial intelligence is quite interesting to me.
Give it a go at labs.openai.com.
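For anyone who prefers scripting to the browser interface, roughly the same request can be sent to OpenAI’s image-generation REST endpoint. Here’s a minimal sketch, assuming an API key is available in the `OPENAI_API_KEY` environment variable (the endpoint and payload fields follow OpenAI’s published API; without a key, the script just prints the payload it would send):

```python
import json
import os
import urllib.request

# The prompt from the post above, sent to OpenAI's image-generation
# endpoint (POST /v1/images/generations).
PROMPT = (
    "3D render of an earbud that uses nanotechnology "
    "to connect with your hippocampus"
)

payload = {
    "prompt": PROMPT,
    "n": 1,               # one image, like a single browser request
    "size": "1024x1024",  # the largest size DALL·E 2 offers
}

api_key = os.environ.get("OPENAI_API_KEY")
if api_key:
    req = urllib.request.Request(
        "https://api.openai.com/v1/images/generations",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
    # The response carries a URL for each generated image.
    print(result["data"][0]["url"])
else:
    # No key set: show what would have been sent.
    print(json.dumps(payload))
```

The browser at labs.openai.com remains the friendlier way in; the script is just the same prompt-in, image-out loop made explicit.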