Music of Chance

It was 1989, and I was roundly ignoring a verdant spring day in the HVAC hermitage of Brown University’s Computer Science Lab when Henry Kaufman yanked me up from my workstation to say I must see Merce Cunningham’s dance company perform. “Who’s that?” I blurted as he pulled me out through the lab’s card-keyed door.

The afternoon rehearsal was taking place in a dimly lit hall, free to anyone who happened to know of it, which turned out to be only us and two or three other students. The dancers onstage sketched coordinated forms accompanied by eerie rhythmic tones. It was the first time I’d seen bodies lose their individuality—and even their corporeality—to become part of a greater abstraction. Figures hurtled across the stage in a blur; bowed bodies nearly smashed into walls; and dancers’ frames hardened rod-straight, then pogoed in syncopation. The movements stirred memories of the abstract films of Oskar Fischinger and Len Lye that I had recently fallen in love with while studying experimental animation at the Rhode Island School of Design.

Henry and I, enthralled, had dozens of questions after the rehearsal. Walking down the dark aisles to the stage, we found a man in the music pit with tousled dark hair streaked with grey filaments. As he disorganized a tattered sheaf of papers, we peppered him with questions, particularly about the dancers nearly hitting the walls, and he explained that the choreography was randomized: stochastic methods gave the dancers freedom to improvise, but also caused close calls. He spoke in great detail about the dancers and their relationship to the music, which he described as contrapuntal: not meant to mirror the dancers, but to enhance and amplify them, like a human relationship.

As Henry and I walked out, we passed a student who asked us eagerly, “What did he say to you?” “Who, that guy from the crew?” “That guy? That guy is John Cage.”

The idea of music and dance that adapted fresh to each setting of stage and audience charmed me. I had been playing with interactive sound and image on the computer since I started programming an Apple II+ as a little boy in 1980, and I continued similar experiments into college in the late 80s, even as I labored under a heavy Computer Science course load. Sneaking audiovisual work in between classwork, during late evenings when the research computers were idle, I brought the screens alive with abstract visual experiments like Motion Phone.

Moving images crave sound, but in trying to add generative music to interactive animation, I continually ran up against a problem: the music felt annoyingly literal when tied directly and repetitively to the graphics. My most satisfying musical experiments came from the opposite direction: taking a musical track and improvising animation to it. I made a few amateur films improvising to tracks from Mingus Ah Um, an album I was addicted to at the time. And I fantasized about working with a modern Mingus like John Zorn; but the impossible logistics of thirty-thousand-dollar computers stumped me.

In 1995 in Los Angeles, I showed Motion Phone publicly for the first time in SIGGRAPH’s “Interactive Communities,” an exhibit of experimental interactive technologies. It was there that I met Larry Cuba, an abstract animator who refused to leave Motion Phone’s workstations, creating exquisite visual poetry in sessions that stretched for hours as SIGGRAPH visitors queued impatiently behind him. At the end of one of these sessions he turned to me and said that I must meet William Moritz, the chronicler of Oskar Fischinger and the foremost scholar of abstract animation, who lived less than an hour away.

Moritz’s hillside home overflowed with file cabinets and abstract art lit by streaks of California sun shooting through its windows. After a few hours’ note-taking, archive digging, gossip, and history, I found the courage to ask Moritz about my nagging problem with the automated music that often accompanied animation and light performances. He replied: “It is precisely those aspects of music that can be mechanically translated from the visual to the aural which are the least interesting.”

Moritz’s comment crystallized the problems I’d had tying images to music. I started to daydream about an organic, generative way of creating music, similar to Motion Phone, but it was only at Interval Research in 1997 that these ideas bore fruit, when I met Lukas Girling. He was a new graduate of the Royal College of Art’s Interaction Design Program who had come to California to work with Joy Mountford and Bob Adams in an interactive music research group. I volunteered immediately to collaborate with him, and we combined our complementary interests in capturing the body’s gestures and translating them into image and sound. Using the body to introduce “randomness” through its infinitely varying gestures became the way out of the sound/image conundrum: just as with a musical instrument, refined human gestures provided infinite variation, subtlety, and novelty.

Working together, Lukas and I created several prototype interactive music “instruments” that used the language of DJs to create music in lieu of the traditional score/performance model. Lukas opened my mind to real-time methods for creating music with the body’s most subtle movements, methods that didn’t require a traditional music theory education: instruments that borrowed the body language of DJs’ fingertips and palms sliding across records and mixers. Our collaboration was fruitful, producing projects that our small audience adored, and we came close a few times to deals with major video game companies, but unfortunately our work was never publicly released.

One of the highlights of that period was meeting Brian Eno, who visited us to comment on our work. He had a chance to use the three (non-musical) pieces of the Dynamic Systems Series: Gravilux, Bubble Harp, and Antograph, and he pointed out the similarity between his Tape Loop Experiments and Bubble Harp, each of which creates an extremely long performance from the varying durations of its segments: in Eno’s case, tape loops; in mine, looping motion gestures. Eno also advised us on the subtleties of tuning old analog synthesizers, advice that later turned into a quantizing feature for our apps. I think Eno also found inspiration in our work for some of the later experiments in audiovisual synchronicity that he created in galleries and performances, and later in the iPhone app Bloom, made in collaboration with Peter Chilvers. Eno’s 1999 Wired magazine article “The Revenge of the Intuitive,” written soon after our meetings at Interval, includes many insights into interactive performance.

With these influences in mind, I recently revisited interactive music. Returning to principles from more than a decade before, I re-imagined a way to create infinitely varying music from the formerly silent and confusingly named Bubble Harp. The obvious idea, with me since the nineties, was to pluck each line of the Bubble Harp according to its length. The beauty of this model is that, just like its animation, and true to Cage’s inspiration, the composition never repeats. Each point, replaying a person’s gesture according to its own duration, dances against the others’ rhythms to create slightly different geometry and sound each time through. At the same time, the variations are constrained, so that the animated drawing becomes a recognizable improvisational structure, like a jazz session.
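A minimal sketch of that idea, assuming nothing about Bubble Harp’s actual source: each line sounds like a plucked string whose pitch rises as the line shortens, and each recorded gesture loops on its own duration, so loops of different lengths drift endlessly in and out of phase. All names and constants below are my own illustration.

```python
# Hypothetical sketch of "pluck each line by its length" -- not Bubble
# Harp's actual code. A shorter line sounds a higher pitch, like a
# shorter string, and each recorded gesture loops on its own duration.

BASE_FREQ = 110.0    # Hz, assigned to the longest expected line (assumption)
MAX_LENGTH = 400.0   # longest expected line, in pixels (assumption)

def pitch_for_length(length: float) -> float:
    """Map a line's length to a frequency: halving the length raises
    the pitch by an octave, as with a vibrating string."""
    length = max(1.0, min(length, MAX_LENGTH))
    return BASE_FREQ * (MAX_LENGTH / length)

class GestureLoop:
    """A recorded gesture that replays forever on its own period."""
    def __init__(self, points: list, duration: float):
        self.points = points      # sampled (x, y) positions of the gesture
        self.duration = duration  # seconds; every loop has its own length

    def position_at(self, t: float) -> tuple:
        # Loops with different durations drift in and out of phase, so
        # the combined drawing (and its sound) never exactly repeats.
        phase = (t % self.duration) / self.duration
        return self.points[int(phase * len(self.points)) % len(self.points)]
```

Because the gestures’ durations are arbitrary real numbers, their common period is effectively infinite, which is what keeps the composition from ever repeating.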

To make the sometimes dissonant results more musical, I constrained the compositions to specific scales, with a pleasing pentatonic as the default, allowing creators to tip-toe into music theory as they move to the exotic-sounding Hungarian scale, a familiar blues scale, and finally the full C major scale to explore music’s full complexity.
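A hedged sketch of that kind of scale constraint: the interval patterns below are standard music theory, but the function and dictionary names are mine, not the app’s.

```python
# Snap a raw pitch to the nearest note of a chosen scale. The interval
# sets are standard music theory; the API here is illustrative only.

SCALES = {
    "pentatonic": [0, 2, 4, 7, 9],         # C major pentatonic (default)
    "hungarian":  [0, 2, 3, 6, 7, 8, 11],  # Hungarian minor
    "blues":      [0, 3, 5, 6, 7, 10],
    "major":      [0, 2, 4, 5, 7, 9, 11],  # full C major
}

def quantize(midi_note: float, scale: str = "pentatonic") -> int:
    """Return the scale tone closest to an arbitrary (fractional) pitch."""
    intervals = SCALES[scale]
    octave, _ = divmod(midi_note, 12)
    # Consider this octave and its neighbors so pitches near an octave
    # boundary can snap downward or upward correctly.
    candidates = [12 * (octave + o) + i for o in (-1, 0, 1) for i in intervals]
    return int(min(candidates, key=lambda c: abs(c - midi_note)))
```

For example, under these assumptions a raw pitch of 61.3 (between C#4 and D4) snaps to 62 (D4) in the pentatonic scale, since D is the nearest allowed tone.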

Encouraged by Bubble Harp’s success, Lukas and I reunited to collaborate on an instrument inspired by the nontraditional ways of making music that DJs and electronic musicians embrace. In OscilloScoop, Lukas and I have curled the Cartesian X-Y grid of music software back into loops that turn like records, yet sculpt like clay. Among Lukas’s many insights was turning the relatively impenetrable world of music software, such as ReBirth, Lemur, Pro Tools, Ableton Live, and other hard-core musicians’ tools, into something as effortless (and fun) as Super Mario. At Interval we had used video game controllers as the low-barrier gateway to controlling music, but now, with the iPad, we can be entirely intuitive, touching music with our bare hands.

OscilloScoop’s logic isn’t a musical innovation: at its core it’s a synthesizer with three spinning crowns that control pitch, filter, and volume. What is exciting is turning the grid-authoring experience into an improvisational game. Just as Cage used chance operations like the I Ching to create his music, or launched musicians and dancers into action with a small set of rules, we have made a set of rules that inspire improvisation and allow untrained musicians to feel their way through sonic textures.
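In code, that core might look something like the sketch below: three circular loops of values, one each for pitch, filter, and volume, read by a playhead that spins like a record. Everything here, from the class name to the step count, is an illustrative assumption rather than OscilloScoop’s implementation.

```python
# Illustrative model of a "spinning crown": a circular loop of values that
# a rotating playhead reads once per revolution. Three crowns drive a
# simple synth voice. My sketch, not OscilloScoop's source.

class Crown:
    def __init__(self, steps: int = 16, default: float = 0.5):
        # Values around the circle, sculpted by dragging -- like clay.
        self.values = [default] * steps

    def value_at(self, phase: float) -> float:
        """phase in [0, 1): the playhead's angle around the loop."""
        return self.values[int(phase * len(self.values)) % len(self.values)]

pitch_crown, filter_crown, volume_crown = Crown(), Crown(), Crown()

def voice_params(t: float, rpm: float = 45.0) -> dict:
    """Read all three crowns at time t (seconds), spinning like a record."""
    phase = (t * rpm / 60.0) % 1.0
    return {
        "pitch":  pitch_crown.value_at(phase),   # later snapped to a scale
        "cutoff": filter_crown.value_at(phase),  # mapped to a filter cutoff
        "volume": volume_crown.value_at(phase),
    }
```

Sculpting a crown while it spins changes the sound on the very next revolution, which is what makes the grid-authoring experience feel like an improvisational game rather than score editing.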

Some of the world’s great musicians don’t read music, but feel their way through innate improvisations, or cut and paste with software, building up songs micro-slice by micro-slice. With apps like Bubble Harp, OscilloScoop, Thicket, Soundrop, Singing Fingers, and SoundyThingie come new ways for ordinary people to create music: ways that don’t merely turn you into a Guitar Hero puppet, but let you create personal, original compositions from infinite sonic possibilities.
