"Classic" Hyperinstruments
1986-1992
A Composer's Approach to the Evolution of
Intelligent Musical Instruments
by Tod Machover
I grew up studying classical music and playing the cello. My mother is a pianist who also taught musical creativity, and my father has been involved with computer graphics since the dawning of that field. Ever since I was a kid, the idea of combining music and technology was close to my heart. I also grew up performing and listening to rock music, and the general idea of bringing together--or at least confronting--seemingly divergent worlds has always been an obsession of mine.
After a year at the University of California at Santa Cruz and another studying and performing in Florence, Italy, I majored in composition at the Juilliard School, where it was very difficult at the time (the mid-Seventies) to work with electronics, although I did manage to learn something about computer music through private tutoring. I was initially drawn to computers because of my interest at that time in writing extremely complex music that juxtaposed many layers of contrasting material. It was pretty hard to play, so I wanted to learn to program it in order to hear it for myself, and to show other musicians what it sounded like. However, my experience as a performer soon convinced me of the importance of developing computers as a live performance medium, which certainly wasn't the case at the time. Since then I've been developing performance systems that involve real-time computers combined with instruments--sometimes existing instruments, and sometimes ones built from scratch.
Our work on Hyperinstruments started in 1986. It grew out of my experiences composing my opera, "VALIS", as a commission from the Georges Pompidou Center in Paris. The center had asked me to conceive an elaborate project that combined image and sound in a new way. I treated it as an opportunity to redefine "opera." As it's virtually impossible to rehearse a complicated computer setup within a traditional opera structure (insufficient rehearsal time, attitude problems, etc.), I decided to start from square one by building my own theater, designing purely electronic scenery, inventing a new opera orchestra, and--in some sense--attracting a new audience to opera.
I based "VALIS" on Philip K. Dick's science fiction novel of the same name, which provided an opportunity to explore the implications of the kinds of technology I wanted to build. The Pompidou Center has a huge entrance hall, the size of an airplane hangar, and I decided to build the theater in that entrance hall, because thousands of people come through there every day (thus exposing people who normally would not go to an "opera" or even a contemporary music concert to the project), and because I could set up the visual and sound installations the way I wanted (recorded on Bridge CD #BCD9007; call 516-487-1662 for information).
We built enough seating for 700 people, with standing room around the edges. The stage was constructed of real marble, built in the form of a labyrinth. A large computer-controlled video wall provided all the scenery, with additional scenery provided by several columns of computer images. We also included an extremely sophisticated laser installation to convey the "pink light," the strange mystical bombardment experienced by the opera's main character, the explanation of which forms the central argument of the opera. Just as we built the theater from scratch, my idea for the instruments was to depart from the traditional opera orchestra and use real-time, live computer instrumentation. I essentially wanted the fewest musicians controlling the greatest amount of music: the most layers of music, with the subtlest control over them, which necessitated the most sophisticated musical setup I could devise.
The technology we developed for this opera project came to be called "Hyperinstruments." By focusing on allowing a few instrumentalists to create complex sounds, we continually improved the technology and reduced the number of actual musical instruments. The resulting opera orchestra consisted of two performers: one keyboard player and one percussionist. They controlled all the music for the opera, and almost all of it is live.
The Hyperinstrument system is based on musical instruments that provide a wide variety of ways for musicians to play music into our computers. The simplest method is via an instrument similar to an existing, traditional instrument, such as a keyboard or percussion controller. More and more, however, we're using extremely sophisticated controllers that monitor hand gestures. The output of those instruments goes to a Macintosh II computer, the "brain" of the Hyperinstrument. We handled Hyperinstrument development in an artificial intelligence environment, using the Allegro Common Lisp language. A Media Lab graduate student and software engineer, Joe Chung, developed a special "Hyperlisp" system. All musical data coming from the live instruments are analyzed and interpreted in real time in the Mac's Lisp environment, then turned into MIDI or other musical data, which is output to a bank of sound-producing devices: MIDI synthesizers, samplers, or more complicated signal-processing devices.
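In outline, the data flow is simple: performance events come in, Lisp code interprets them, and transformed events go out to the synthesizer bank. Here is a minimal sketch of that pipeline in ordinary Common Lisp; the event representation, function names, and the toy "analysis" rule are invented for illustration, not taken from the actual Hyperlisp code.

    ;;; Hypothetical sketch of the hyperinstrument pipeline:
    ;;; controller events in -> Lisp analysis -> "MIDI" events out.
    ;;; An event is a plist such as (:pitch 60 :velocity 90 :time 0.5).

    (defun interpret-event (event)
      "Analyze one incoming event and decide how to transform it. As a
    stand-in for real analysis, loud notes gain an octave doubling."
      (if (> (getf event :velocity) 100)
          (list event
                (list :pitch (+ 12 (getf event :pitch))
                      :velocity (getf event :velocity)
                      :time (getf event :time)))
          (list event)))

    (defun process-input (events)
      "Run every incoming event through the interpreter, collecting the
    events destined for the bank of sound-producing devices."
      (loop for e in events append (interpret-event e)))

    ;; A soft note passes through; a loud note is thickened.
    (process-input '((:pitch 60 :velocity 80 :time 0.0)
                     (:pitch 64 :velocity 110 :time 0.5)))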
One theory behind the development of Hyperinstruments concerns the potential for live performance. Music is a performance art. You can achieve magical results in a recording studio, where you have the chance to redo and overlay parts, but you should be able to accomplish things that are just as wonderful, and which retain the dimension of direct human expressivity and communication--as well as spontaneity--on stage. To achieve that in a live concert setting, we need the power of smart computers following the gestures and intentions of fine performers.
Working with such fine performers is a key aspect of our approach to Hyperinstruments. In the past, we haven't built systems for novices (although this has been a major focus of our work since 1992; see, for instance, the cover story of "Popular Science" magazine, October 1995). In fact, the systems reward skill. Hyperinstruments are extremely sensitive to the nuances and all the special things that the best performers can do, and they employ those skills to expand and enhance the performance, all under the performer's control. The better you play, the better the computer reacts.
That leads to a design problem, however. For such sophisticated systems, how simple or how hard should they be to learn and understand? We usually work with virtuosic performers who don't have much experience with electronics. We want this type of terrific performer to be able to come into our studio and understand the concept of the instrument and its implementation in 15 or 20 minutes. We want them to understand, "Okay, I see that if I move my hands a certain way, or breathe a certain way, I can produce a certain effect." The musician must be able to understand the concept quickly. But if the musician can learn the entire instrument in 20 minutes, then we've produced not an instrument, but a toy. So the instrument must be easy to understand "conceptually", but also worthwhile and rewarding to practice so the musician can improve on it over a period of time. It must have depth while being easy to learn.
Such systems are not easy to design. We also strongly believe in designing performance systems that allow the performer to remain in control of the system, and in some ways afford the performer even greater musical power than he or she normally would have. I'm not interested in systems in which the computer acts as an accompanist, playing a role of its own, or in systems that prevent the performer from knowing what to expect from the computer. The performer must remain in control at all times. Another important aspect of live-performance computers of the future is the ability of the performer to take on varied music-making roles. Our systems are built mainly for performers, but they work with powerful computers, so they can also be used as improvisation and composition systems, and they allow the performer to control the music's overall shape. Instead of simply playing one note at a time, the performer can react like a conductor. We believe the musician of the future should be a combination of performer, improvisor, composer, and conductor, and be able to switch easily between those roles, or to combine them in new ways.
This approach is illustrated in the piece "Towards the Center," which I composed in 1988-89 for six musicians and conductor. It is scored for six instruments, four of which (violin, cello, flute, clarinet) are amplified and slightly transformed electronically. The keyboard and percussion parts are performed on MIDI controllers (Kurzweil Midiboard and KAT 4-octave mallet percussion system) connected to the real-time hyperinstrument system. The use of the computer was designed to follow, complement, and emphasize the work's musical development, and differs functionally in virtually every one of its sections (including such concepts as rhythmic enhancement and complexification--which we call "time warping"--timbre tremolos, and automated arpeggios). One interesting aspect of the system in "Towards the Center" is that the relationship of control versus independence (of the two electronic soloists) is mediated by the machine. At moments the players are free from each other's influence, while at other times they group together to form a single "double instrument," where each controls only part of the musical result (recorded on Bridge CD #BCD9020; call 516-487-1662 for information).
One aspect of musical enhancement that has interested me for a long time concerns rhythm. In a live performance, enhancement can mean making musicians more precise than they normally would be. It also can mean making the rhythm "crazier," or more complex, or creating delicate combinations or relationships of synchronization that would be difficult for somebody to play without the help of a computer. In certain kinds of music that's not an easy thing to do. Such procedures work well in an improvisatory context, where it may not be crucial exactly which note occurs on which beat. But suppose you're dealing with a precise score in which you know that in a certain measure you want a particular musician to play a particular note on a downbeat, and you want the computer to adjust the performed rhythm to precise sixteenth notes. The only way to do that is to have a clock running in the computer in real time; when a performer plays a note, the computer holds that note and waits until the next sixteenth note, playing it where a proper sixteenth note should go. It won't let it fall between the cracks.
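In practice this amounts to quantizing each played note forward to the next point on a sixteenth-note grid. A minimal Common Lisp sketch, with an illustrative tempo rather than any value from the actual piece:

    (defun next-sixteenth (onset-time tempo)
      "Delay ONSET-TIME (in seconds) to the next sixteenth-note grid
    point. TEMPO is in quarter notes per minute, so one sixteenth
    lasts 15/TEMPO seconds."
      (let ((grid (/ 15.0 tempo)))
        (* grid (ceiling onset-time grid))))

    ;; At tempo 120, a note played at 0.97 s is held and sounded at
    ;; 1.0 s, the next sixteenth; it never falls between the cracks.
    (next-sixteenth 0.97 120)   ; => 1.0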
In a particular section of "Towards the Center", every time the keyboard player plays an individual note, that note triggers a repeated-note passage. The repeated notes play in the tempo of what everybody else is playing. As you press on the key--creating afterpressure on the controller--that rhythm is deformed, and it becomes more and more complex. It actually becomes faster as you press harder on the key. As you lift your finger, the rhythm snaps back into synchrony with the rest of the performers. By pressing the keys and triggering various events, you bring synchrony in and out of an ensemble setting in a complex way: the rhythm starts out more precise than normal, then keeps going in and out of synchrony as the performer presses the keys and the computer adjusts the result.
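One simple way to realize this pressure-to-tempo deformation is to let afterpressure shrink the interval between the repeated notes, snapping back to the ensemble grid when the key is released. A sketch, with a made-up scaling constant:

    (defun repeat-interval (base-interval pressure)
      "Map key afterpressure (0.0 to 1.0) to the interval between
    repeated notes. With no pressure the repetitions stay on the
    ensemble's grid; as pressure rises the interval shrinks and the
    rhythm accelerates and deforms. The constant 3.0 is illustrative."
      (/ base-interval (+ 1.0 (* 3.0 pressure))))

    ;; No pressure: repetitions stay in sync at the base interval.
    (repeat-interval 0.25 0.0)   ; => 0.25
    ;; Full pressure: repetitions run four times as fast.
    (repeat-interval 0.25 1.0)   ; => 0.0625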
Another hyperinstrument, used in "Towards the Center" and other pieces, is an automated arpeggiator, usually played by a keyboardist. The keyboard player controls the general shape, texture, and articulation of extremely fast notes which are rhythmically precise. The notes come out so quickly, and the rhythm is so delicate, that you could never play it by hand. But we don't want it all controlled by the computer; we want some combination of human and computer. To achieve that, we store chord progressions and complex rhythm patterns in the computer. Every time the keyboard player plays a note, the computer decides whether the note belongs there or not. If the computer decides it's a correct note, it determines which chord it belongs to, looks in a library, selects a rhythm pattern that corresponds to that chord, and assigns each note of the chord in time to the appropriate note in that rhythm pattern.
This keyboard uses various other methods to control the final result; for example, depending on how loud you play each individual note, the notes in the rhythm pattern that come from the computer become louder or softer. This lets you shape the pattern in the computer. If the computer expects five notes and I don't play note number one, the computer will reorder the notes in the pattern, and note number two becomes note number one. That way, the notes injected into the pattern will be different, and the rhythm shifts accordingly. There is also a control so that when I press on the keyboard, the afterpressure (how hard I press on the key) of each individual finger brings up an extra bank of timbres that articulate each individual note. This allows the performer to introduce different chord notes in an irregular order rather than just playing them as block chords, thus bouncing the notes around and playing against what the computer expects you to play. What emerges is an extremely delicate effect. Most keyboard players get the idea in ten minutes, but usually practice many hours to achieve even more beautiful and controlled effects.
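The core decision logic (does this note belong, which chord is it, which pattern follows) can be suggested in a few lines. This toy Common Lisp version uses an invented two-chord library and a crude pitch-class matching rule; the real system's data and matching are far richer.

    ;;; Toy arpeggiator: match a played note to a stored chord, fetch
    ;;; that chord's rhythm pattern, and let the played velocity shape
    ;;; the pattern's dynamics. All data here are illustrative.

    (defparameter *chord-library*
      '((:chord (0 4 7) :pattern (0.25 0.125 0.125 0.5))    ; C major
        (:chord (0 3 7) :pattern (0.125 0.125 0.25 0.25)))) ; C minor

    (defun find-chord (pitch)
      "Return the library entry whose chord contains PITCH's pitch
    class, or NIL if the note doesn't belong anywhere."
      (find (mod pitch 12) *chord-library*
            :key (lambda (entry) (getf entry :chord))
            :test #'member))

    (defun arpeggiate (pitch velocity)
      "If PITCH fits a stored chord, emit its rhythm pattern with each
    pattern note's loudness shaped by the played VELOCITY."
      (let ((entry (find-chord pitch)))
        (when entry
          (mapcar (lambda (duration)
                    (list :duration duration :velocity velocity))
                  (getf entry :pattern)))))

    ;; Middle C (pitch class 0) matches the first entry and triggers
    ;; its four-note rhythm pattern at the played dynamic.
    (arpeggiate 60 100)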
While the keyboardist performs these automated arpeggios, the percussionist does something else. I started with the idea that one thing computers do well that live instruments cannot is to make gradual transitions of sound color. Just as a computer graphic image of a person's face might transform into a lion's face, computer sound can start with one sound image, such as an oboe, and transform over time to sound like a human voice or anything else. Current synthesis gear technically allows us to achieve such effects, but there aren't really any musical instruments that let us perform and control such transitions--at least not in the sophisticated way that a musician could practice and perfect.
Thus, we try to take techniques which performers already know how to master on existing instruments, and extend them to control such effects. For instance, percussionists are good at selecting different physical objects, controlling rhythm, and controlling how hard they hit, but no existing percussion instrument gives percussionists the opportunity to control and change the overall shape of a percussive sound in time. Therefore, we adopted the concept of a percussion tremolo, a technique percussionists can easily master, and translated it into timbre. We take a series of discrete sounds and measure the speed of the live tremolo, separating the tremolo speed from its loudness. The hyperinstrument contains an entire bank--a "map" of timbres--that starts with pure sounds and progresses to complex ones: the faster the percussionist tremolos, the more complex and "unnatural" the computer makes the sound. Percussionists typically slow down and speed up in a somewhat jerky motion, however, and we don't want the sound to transform that way. We want it to be continuous, so we include a filter in the hyperinstrument software that slows down and smoothes the timbral transition.
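A compact way to picture this: smooth the measured tremolo speed, then use it to index a bank of timbres ordered from pure to complex. In this Common Lisp sketch the timbre names are placeholders, and simple exponential smoothing stands in for the transition filter (a plausible choice, not necessarily the one the system used).

    ;;; Tremolo speed -> timbre map, with smoothing so jerky changes in
    ;;; the player's hands become a continuous timbral glide.

    (defparameter *timbre-bank* #(:pure :bright :metallic :noisy :chaotic))

    (defvar *smoothed-speed* 0.0)

    (defun smooth-speed (raw-speed &optional (alpha 0.1))
      "Exponentially smooth the measured tremolo speed (strokes/sec)."
      (setf *smoothed-speed*
            (+ (* alpha raw-speed) (* (- 1.0 alpha) *smoothed-speed*))))

    (defun tremolo->timbre (raw-speed max-speed)
      "Map smoothed tremolo speed to a timbre: faster means more complex."
      (let* ((s (min 1.0 (/ (smooth-speed raw-speed) max-speed)))
             (i (min (1- (length *timbre-bank*))
                     (floor (* s (length *timbre-bank*))))))
        (aref *timbre-bank* i)))

    ;; Repeated readings of a fast tremolo glide gradually from :pure
    ;; toward :chaotic rather than jumping there at once.
    (loop repeat 30 collect (tremolo->timbre 12.0 12.0))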
One thing that interests me is combining electronics with traditional instruments and traditional sounds to increase the sound palette of the traditional orchestra, sounding distinct and contrasting when you want to, but also capable of mixing so well that you can't tell what's what. Also, computers provide the potential to link various performers through the system; instead of connecting one individual to an instrument connected to a computer, two or more people play one instrument. As you consider these possibilities, you realize there are many parameters of musical performance to control--aspects that are difficult to control with one pair of hands or feet. In many situations you might want a percussionist to concentrate on the rhythm of a section, the keyboardist to concentrate on the notes or harmonic content of the section, and the string player to concentrate on the inflection or the phrasing of the section. Instead of thinking of that as three separate lines as we would with a string quartet, you can think of it as one instrument played by three people.
That led us to build "double instruments" and "triple instruments." One such double instrument is based on our desire for two players--percussionist and keyboardist--to work together to control a sort of giant color organ, an instrument that controls complicated timbres. I wanted the keyboard player to control the overall content of the sound spectrum--the partials, harmonic series, sound quality--and the percussionist to control the behavior of each individual partial. Think of it in terms of a microscope, where you want the keyboard player to control a less-magnified portion--the overall sound structure--while the percussionist looks at individual parts of the sound under the microscope.
We approached this in "Towards the Center" by combining percussion and keyboard instruments. When the keyboard player plays any note with the left hand below middle C, this both plays the note (actually a pedal or fundamental tone for these complex spectrums) and redefines the timbres and pitches for the keyboard above middle C. At the same time it sends an enormous collection of notes to the percussion controller. When a note is played on the keyboard above middle C, all the pitches and timbres are redefined, and a collection of inharmonic or harmonic partials is sent to the percussion controller. A percussion player is great at choosing rhythms and playing delicate nuances, but it's difficult to do that while playing a four-octave mallet instrument. If you have seven notes from which to choose, it's even more difficult when you're trying to play that subtly over four octaves. So we send the notes automatically from the keyboard instrument to the seven white keys in one octave of the percussion instrument. This provides the percussionist with the equivalent of a seven-note chord which is automatically determined by the note that the keyboardist plays. As the keyboardist presses more on a left-hand note, a "filter" opens up, spitting out more and more notes to each pad in the seven-note octave of the percussion controller. Using eye contact, the two players indicate when the percussionist wants the equivalent of a denser spectrum, or wants the keyboardist to eject more notes, and then the percussionist can concentrate simply on picking the part of the spectrum into which the notes are sent. The general spectrum is determined by the keyboardist, and the way those notes are articulated is determined by the percussion controller.
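The division of labor can be sketched as follows: the keyboardist's fundamental defines a spectrum, afterpressure opens the "filter" admitting higher partials, and each of the seven pads receives one slice of the result. The pad count matches the piece's description, but the harmonic-series mapping and the scaling constants below are invented simplifications.

    (defun partials-for-pads (fundamental pressure &optional (pads 7))
      "Given a left-hand FUNDAMENTAL (Hz) and key PRESSURE (0.0 to 1.0),
    assign one partial of the harmonic series to each percussion pad.
    Pressure acts as the filter: more pressure admits higher partials."
      (let ((highest (+ 2 (round (* pressure 14)))))
        (loop for pad from 1 to pads
              for n = (max 1 (round (* pad highest) pads))
              collect (list :pad pad :partial n
                            :frequency (* n fundamental)))))

    ;; Light pressure: the pads hold only the lowest partials.
    (partials-for-pads 110.0 0.1)
    ;; Full pressure: the same seven pads now reach high in the spectrum.
    (partials-for-pads 110.0 1.0)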
For performing the music of the future, MIDI controllers don't provide adequately sophisticated performance data, and are somewhat limited in capabilities. For this reason we have been interested in connecting complex acoustic instruments-- such as string instruments--to hyperinstrument systems, and also in inventing completely new performance controllers.
For this reason, we have experimented with systems that can capture complex gestures and turn them into musical controls. Our first such experiment, conducted in 1989-90, concentrated on marshalling the expertise of a conductor, concentrating on left-hand rather than right-hand technique. To do this, we experimented with various glove-type gesture controllers, and chose a device designed by Exos, a Boston-area company. Its Dexterous Hand Master was developed by Dr. Beth Marcus and adapted for musical use by the hyperinstrument group. It is an aluminum "skeleton" that fits onto the fingers with Velcro. Hall-effect sensors are used to measure finger movements, placing a magnet and sensor at each finger joint. When you move your finger, the angle of the magnet is measured and translated into the angle of the finger joint. This system works fast enough to monitor the most subtle movements of a finger as well as the largest hand gestures, with great precision and accuracy.
I composed the piece "Bug-Mudra" for use with this glove controller. The piece received its premiere at Tokyo's Bunkamura Theater in January 1990. Commissioned by the Fromm Music Foundation of Harvard University, "Bug-Mudra" is scored for two guitars (one acoustic and one electric), percussion (KAT electronic mallet controller plus three acoustic suspended cymbals), and conductor. The three instrumentalists are connected to the hyperinstrument system, as is the conductor, through the Dexterous Hand Master worn on the left hand. The glove measures the nuances of the conductor's left-hand gestures, translating them to influence the piece's overall sonic result. The title "Bug-Mudra" comes from "mudra", the word for hand gestures in classical Indian dance, and "bug", referring to computer "bugs," a pun on the difficulty of getting such a complex interactive system to work in a live concert situation ("Bug-Mudra" is recorded on Bridge CD #BCD9020).
In this piece, the guitar and percussion signals are sent into the first hyperinstrument system, while the glove data is analyzed by an IBM PC (since moved to a Mac II), which monitors and classifies the finger gestures. That information is sent to a second Macintosh. This Mac contains a series of Lisp programs that interpret the finger movements and gestures and turn them into controls, which then influence the music in various ways. In different sections of the piece, the glove movements influence loudness mix, spatial placement, and the overall timbre of the whole piece. Thus in "Bug-Mudra" my right hand conducts tempo, while I use my left hand for balance between instruments and changes of color and articulation--much as traditional conductors would use their hands to indicate that a cello section, for instance, should play louder, while the violins should play softer or with more attack, and so on. The glove registers all my gestures, every little motion. The hand has various degrees of freedom: I can curl the tip of my index finger or the middle portion, or just bend the lower portion of my finger. The glove also measures the abduction angle, so I can move my finger back and forth. I can change the effect by a small amount or by the full amount. The piece's entire timbral content--all the sound color of these instruments--is determined by movements of the hand.
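As a flavor of this interpretation stage, here is a toy Common Lisp mapping from normalized finger-bend values to a few of the controls mentioned above. Which finger drives which parameter is an invented assignment for illustration; the actual programs classified gestures far more subtly.

    (defun glove->controls (finger-bends)
      "Interpret five normalized finger bends (0.0 straight, 1.0 fully
    curled) as conducting controls: mix levels, spatial placement,
    and timbre. The finger-to-parameter assignment is hypothetical."
      (destructuring-bind (thumb index middle ring pinky) finger-bends
        (declare (ignore thumb))
        (list :guitar-level (- 1.0 index)   ; straighten index: guitars up
              :percussion-level (- 1.0 middle)
              :spatial-position ring        ; curl ring: pan across the hall
              :timbre-brightness pinky)))

    ;; A half-open left hand yields a balanced, centered, moderately
    ;; bright mix while the right hand keeps conducting the tempo.
    (glove->controls '(0.2 0.5 0.5 0.5 0.5))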
An even more elaborate hyperinstrument piece is my "Begin Again Again...", composed in 1991 for cellist Yo-Yo Ma. The work is scored for cello solo and live computer electronics, using the hyperinstrument concept. About 28 minutes long, the piece consists of ten sections, all distinct in character, which are grouped into two large-scale movements. As in much of my recent music, "Begin Again Again..." combines many forms of musical expression, from rock-like drive and intensity, to melodious singing, to the timbral exploration of cello sounds--all to create a diverse but coherent artistic statement. (A CD and CD-ROM of "Begin Again Again..." are currently being developed for the Sony Classical label.)
The hyperinstrument system specially developed for "Begin Again Again..." allows the cellist to control an extensive array of sounds through the nuance of his or her performance. We developed many new techniques so the computer can further measure, evaluate, and respond to as many aspects of the performance as possible. The most prominent sensors include a special DHM-like sensor worn on the right hand to measure wrist movement while bowing; finger-pressure sensors built into the bow; a radio transmitter that indicates where the bow is making contact with the string; four thin strips placed on the fingerboard under each string that measure left-hand position; and special pickups placed on the bridge that facilitate the computer's task of analyzing the cello's actual sound. Information from all these sensors is sent to a Macintosh IIfx computer, which analyzes the data and provides an ongoing interpretation of the performance.
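The analysis stage has to merge these separate sensor streams into one running picture of the performance. A minimal Common Lisp sketch of such a fusion step, with hypothetical field names and a crude combined "intensity" measure standing in for the real interpretation:

    (defun fuse-cello-sensors (&key wrist-angle bow-pressure bow-position
                                    finger-positions audio-level)
      "Bundle one frame of sensor readings (each normalized 0.0 to 1.0)
    and derive a toy 'intensity' measure: hard, loud bowing near the
    bridge reads as high intensity. Purely illustrative."
      (list :frame (list :wrist wrist-angle
                         :bow-pressure bow-pressure
                         :bow-contact bow-position
                         :left-hand finger-positions)
            :intensity (* bow-pressure
                          (+ 0.5 (* 0.5 bow-position))
                          audio-level)))

    ;; One sampled frame from an energetic passage.
    (fuse-cello-sensors :wrist-angle 0.4 :bow-pressure 0.9
                        :bow-position 0.8
                        :finger-positions '(0.1 0.3 0.0 0.5)
                        :audio-level 0.95)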
This information is used in different ways at different moments in the piece: at times the cellist's playing controls electronic transformations of his own sound; at other times, playing nuance shapes many aspects of the computer-generated accompaniment, changing orchestration, adding emphasis, simplifying or densifying the musical texture. Sometimes the influence of the cello on the computerized accompaniment is clear and direct; at other times it is more indirect and mysterious. The goal has been to create many levels of relationship between soloist and computer, much as the classical concerto dramatizes such relationships between soloist and orchestra--not as a dichotomy, but as a new kind of instrument.
The piece's title refers both to its musical form and to its expressive content. "Begin Again Again..." is a set of variations in which the same melodies and harmonies are returned to over the course of the work, each time expanded and elaborated in new and unexpected ways. This serves as a metaphor for change in our lives--of breaking with the past while retaining what is dearest to us; of opening doors to unknown possibilities; and, finally, of renewed hope and affirmation.
Our hyperinstrument concepts have been further developed in my composition "Bounce" (recorded on Bridge #BCD9040) for hyperkeyboards; in "Song of Penance," which combines a hyperviola with a large orchestral ensemble and enables the soloist to control and manipulate a vast array of sung and spoken vocal sounds; and in our current project to enable amateur and non-expert musicians to expand their skills and stretch their musical imaginations through such systems. In the past few years we have been making new attempts to apply hyperinstrument techniques to large-scale opera and to interactive public installations.
It seems that each time we solve a particular problem, new technological challenges and musical visions pop up. We are of course happy for that, but it sure keeps us busy!