The Computer-Extended Ensemble

Computer Music Journal, 18:2, pp. 78-86, Summer 1994.

 

David A. Jaffe* and W. Andrew Schloss 
*Center for Computer Research in Music and Acoustics (CCRMA)
Department of Music, Stanford University
Stanford, California 94305 USA
David@Jaffe.com
 School of Music
University of Victoria
Victoria, British Columbia V8W 2Y2 Canada
aschloss@nero.uvic.ca

 

Until recently, there have been two basic models of how electronics interact with a performer in a performance situation, derived from tape music and keyboard performance respectively. Both impose rigid constraints on the dynamics of the ensemble situation. Recent developments have shown these models to be endpoints of a continuum, with the region between them rich but largely unexplored. We examine one foray into this realm.

 

Tape Music versus Keyboard Music

The "tape music model" adds several performers to a tape of synthesized sound. Here it is the performers' responsibility to synchronize their tempo, dynamics, and tone quality to those of the tape. The tape is completely inflexible and takes the role of the conductor of the ensemble, in the sense that it leads the performers, whether or not this role is appropriate from a musical point of view.

The "keyboard electronic music model" consists of pianists performing on keyboard synthesizers. Here the situation is reversed. Whereas in the tape music model the performer is forced to slave to the electronics, here it is the electronics that slave to the performer. The electronic element is relegated to the role of sound production and nothing more. The pianist must explicitly start and stop each note by pressing and releasing a key of the keyboard synthesizer. While this model solves some of the problems of the tape music model, it is limited by the very fact that all details of the music must be performed. Whereas a tape music composer typically spends thousands of hours in a studio carefully crafting his music on multiple time scales, a keyboard music composer is limited by what a pianist can directly control in a single performance. Furthermore, he is constrained by aspects of MIDI, the protocol by which electronic keyboards communicate with synthesizers. (See Appendix 1.)

We are beginning to realize that these two models are actually end points of a continuum, with the region between them offering rich opportunities yet remaining largely unexplored. To venture into this region requires a programmable machine, namely a computer.

 

 Conducted Sequencers and Triggered Events

 

What lies between the two models? Before answering this question in a general manner, let us begin with one model and cautiously venture toward the other.

Consider a piece for violin and tape. We address the problem of the inflexibility of timing of the electronics. The first step is to replace the tape with a computer performing the tape part in real time. At this point, we have not significantly diverged from the tape music paradigm, since a sequencer performing autonomously is as inflexible as a tape. However, now let us put the violinist's score into the computer's memory and program the computer to "listen" to the notes played by the violinist and send information to a synthesizer, adjusting its tempo based on that of the violinist. (The technology that enables the computer to listen to the violinist is described in Appendix 2.) Even if the violinist takes drastic liberties with the tempo, the electronic part follows. Such a scheme can be called the "score follower model" and shares with the keyboard model the quality of slaving the electronics to the performer. However, unlike the keyboard model, it is a computer, rather than a performer, that is ultimately playing the music. This change allows for a more complex synthesis control stream, and hence more complex music, than can be produced in the pure keyboard model. The detail of tape music is retained, while control of tempo passes to the performer.
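
To make the mechanism concrete, the following sketch shows one way a score follower might adjust a sequencer's tempo as the performer's notes arrive. It is a minimal illustration under stated assumptions, not the system used in any particular piece; the class name, the smoothing scheme, and the beat and time values are invented for this example.

```python
# Minimal score-follower sketch: the sequencer's tempo is continually
# rescaled to match the performer's timing. Score positions are in beats;
# performance times are in seconds. All names here are illustrative.

class ScoreFollower:
    def __init__(self, score_beats, smoothing=0.5):
        self.score_beats = score_beats      # expected beat position of each note
        self.smoothing = smoothing          # 0 = ignore performer, 1 = follow instantly
        self.seconds_per_beat = 0.5         # initial tempo: 120 BPM
        self.index = 0
        self.last_beat = 0.0
        self.last_time = 0.0

    def note_played(self, time_seconds):
        """Call when the performer's next expected note is detected."""
        beat = self.score_beats[self.index]
        self.index += 1
        if beat > self.last_beat:
            observed = (time_seconds - self.last_time) / (beat - self.last_beat)
            # Blend the observed tempo with the current one to avoid jitter.
            self.seconds_per_beat += self.smoothing * (observed - self.seconds_per_beat)
        self.last_beat, self.last_time = beat, time_seconds
        return 60.0 / self.seconds_per_beat  # current tempo in BPM

follower = ScoreFollower(score_beats=[0, 1, 2, 3, 4])
for t in [0.0, 0.6, 1.2, 1.7, 2.2]:         # the performer slows down, then speeds up
    print(round(follower.note_played(t), 1), "BPM")
```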

An alternative but philosophically similar scheme, pioneered by Max Mathews in his Conductor program, is the "conducted sequencer model," in which a live performer conducts the tempo of the sequencer, using a special electronic baton, and can also control dynamics of the synthesized music. Here again, the inflexibility of tape music is mitigated by a live performer. In fact, the score follower can be seen as a special case of a conducted sequencer.

In contrast, let us now begin from the keyboard model and take a step toward the tape music model. In order to address the limitation on the amount of information a pianist can convey to the synthesizer, let us abstract and amplify the pianist's actions by means of "triggered events." Now, instead of a one-to-one correspondence between the notes played by the keyboard player and the electronic sound produced, we take the keyboard player's action as a command to a computer to produce some musical event. The nature of the event depends on the programming of the computer. For example, we can program the computer to produce a chord for every note played by the keyboard player. The pitches of the chord could depend on which note was played. Furthermore, we can introduce a context-dependency, such that the mapping from performed note to resultant chord could change over the course of the piece, possibly based on what came before. As a more complex example, the computer could respond with a particular melodic fragment each time the performer played a particular note. More generally, any compositional process that can be programmed into a computer could be instantiated, altered or stopped based on actions of the performer.
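
A minimal sketch of the triggered-event idea follows. The mapping functions and the rule that changes the mapping mid-piece are invented purely for illustration; an actual piece would substitute its own compositional logic.

```python
# Triggered-event sketch: each performed note is treated as a command,
# and the mapping from note to musical event can change over the piece.
# The mappings and the context rule below are placeholders.

def chord_event(root):
    return [root, root + 4, root + 7]            # a simple triad on the performed note

def fragment_event(root):
    return [root, root + 2, root + 3, root + 7]  # a short melodic fragment

class TriggerMapper:
    def __init__(self):
        self.section = 0                          # context that evolves during the piece

    def note_received(self, midi_key):
        # Context-dependency: after enough low notes, switch to fragment mode.
        if midi_key < 48:
            self.section += 1
        if self.section < 3:
            return chord_event(midi_key)
        return fragment_event(midi_key)

mapper = TriggerMapper()
for key in [60, 40, 41, 42, 60]:
    print(key, "->", mapper.note_received(key))
```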

These directions invariably lead us into terrain in which the roles of performer and conductor become blended, and we find ourselves speaking of the computer as able to perform human-like actions such as "listening," "understanding," and "responding." Leaving aside the question of whether these metaphors are appropriate when talking about machines, it is clear that, when compared to the models of tape music and keyboard music, the computer is taking a more active role and contributing to the ensemble in a unique manner. This is the realm of interactive computer music.

      [INSERT FIGURE 1 HERE]

 

Listening, Understanding, and Responding: Why Bother?

 

Computer scientists working in the field of artificial intelligence (AI) are interested in writing programs that simulate the behavior of a musician. Their goal is to duplicate a known human behavior, a musical style and performance in this case, as closely as possible. Even if this someday turns out to be possible, it is of little interest to composers, since it amounts to reinventing what already exists. For composers, the central question is this: What new musical effect can be achieved with computer interaction that cannot be achieved by prior existing means? A likely place to begin exploring this question is in an area of music in which interaction between performers is central.

 

Improvisation as a Basis for Interactive Computer Music

 

In improvisational musical styles, the music is held together by a set of strict stylistic conventions. For example, in traditional jazz, the form is fixed and is usually a theme and variations on a fixed harmonic background. The tempo is steady, using syncopated rhythms based primarily on triplets. Only one performer at a time, the soloist, has significant improvisational freedom. The others are in an accompanimental position with limited freedom. As another example, Indian ragas consist of a fixed scale and a set of melodic formulas performed over a continuous drone.

A particular style or piece is partially defined by the musical elements it leaves free and those that it fixes. In fact, pure improvisation does not exist, as there are always unconscious underlying stylistic assumptions. Rather, improvisational music is a kind of hybrid between pure improvisation and composition. In a compositionally-planned improvisational context, the performer is given improvisational freedom to make choices, making it possible for a composer to write music without envisioning its details, yet within a highly-structured, predictable and planned context.

Introducing a computer as an extension of the improvising performer increases the scope of the kind of spontaneous musical decision-making that gives improvisational music its distinctive quality. A computer can magnify, transform, invert, contradict, elaborate on, comment on, imitate, or distort the performer's gestures. Its effect can be as subtle as adding a bit of reverberation here and there according to the musical context. Or it can be as extreme as imposing harmonic rules that change the pitch of the performer's notes, cause the performer's notes to be omitted entirely, or change the large-scale formal construction of the music. Although it may seem that the computer is taking control away from the performer, keep in mind that it is the performer or composer who chooses the role of the computer. Thus, the computer actually increases the performer's power, allowing him to control sound at any scale, from the finest level of audio detail to the largest level of formal organization.

 

The Computer-Extended Ensemble

 

The full power of the computer in an improvisational context does not show itself until we add a second performer to the ensemble. Now each performer can affect the playing of the other. Both performers can be performing the same electronic instrument voice at the same time. One performer can act as a conductor while the other acts as soloist. And these roles can change at a note-by-note rate. In this manner, the barriers that normally separate performers in a conventional instrumental ensemble become instead permeable membranes. Figuratively speaking, the clarinetist can finger the violin and blow the clarinet while the violinist bows the violin and fingers the clarinet. We have coined the term "computer-extended ensemble" for this situation.

The challenge becomes finding roles for the performers that allow them just the right kind of control. They need to feel that they are affecting the music in a significant and clear fashion. Otherwise, they feel superfluous and irrelevant, as if the music has gotten out of their control. The computer program may be simple or complex, as long as it fires the imagination of the performers.

 

An Example Computer-Extended Ensemble: Wildlife

 

We have been experimenting in this realm with percussionist/composer Andrew Schloss in a duo called Wildlife. Schloss is a virtuoso in both notated and improvisational styles and has performed in Congolese, Afro-Cuban, jazz, and contemporary music contexts. The duo features Schloss and the author performing on two modern instruments, the Mathews/Boie Radio Drum and the Zeta electronic/MIDI violin, with the ensemble extended by two computers. Both performers also have extensive foot pedal controls.

The Mathews/Boie Radio Drum consists of a flat surface containing an array of receiving antennas and two mallets with transmitting antennas. The performer moves the mallets above or on the drum surface, and the device senses the position of the mallets in three dimensions using capacitive sensing of electromagnetic waves. Typically, the drum is used as a percussive device, but it can also be used to continuously control several variables simultaneously, where the meaning of the variables is entirely up to the composer who programs the computer. In fact, the drum itself produces no sound; it depends entirely on the computer to process the information it produces and transform that information into sound-producing commands for synthesizers.

The Zeta electronic/MIDI violin is a solid body violin with individual pick-ups for each string. It is different from the Radio Drum in that it does produce an amplified acoustic sound. However, it also passes MIDI information to the computer, reporting which notes are being played, their dynamic level, glissando information, etc. (See Appendix 2 for more details.) This facility allows the computer to double the violin part with some synthetic sound, or more interestingly, make decisions based on what the violinist is playing.

Wildlife consists of a structured improvisation in five movements. All material is generated in response to the performers' actions; there are no pre-recorded sequences or tapes. The system configuration, shown in Figure 2, consists of both the Zeta violin and drum passing information to a Macintosh IIci computer, which does gestural processing and passes information to a sampler and a NeXT computer. The NeXT does further gestural processing and algorithmic composition, performs digital signal processing (DSP) synthesis on the NeXT's built-in DSP chip, and sends information to an external synthesizer. In actuality, a single computer could have been used; the use of two is purely a historical accident. The software on the Macintosh is based on the Max system, while the software on the NeXT is based on Ensemble and the NeXT Music Kit. A discussion of software architectures required for computer-extended ensembles is given in Appendix 3.

            [INSERT FIGURE 2 HERE]

We now examine several examples of interactive situations that we have found fruitful for improvisation and try to pinpoint what quality in each makes it particularly effective. These examples are taken from Wildlife, and are presented here in increasing levels of complexity.

Example 1: Chord Mapping

The first movement begins with a simple interaction scheme, in order to allow the audience to perceive the causality between a performed action and the resulting synthesizer sound. The violin, in addition to its acoustic sound, produces synthesized piano sounds via a "chord mapping set," which consists of twelve "chord mappings." A chord mapping is a chord that is produced by the computer when a performer plays a particular pitch class. The chord is transposed according to the performed octave. The percussionist chooses which of several sets is active. As a simple example, one set might produce chords derived from chromatic tone clusters while another might produce a different octave-displacement for each pitch.
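
The following sketch illustrates the chord mapping idea under stated assumptions: each of the twelve pitch classes indexes a chord, the chord is transposed to the performed octave, and the percussionist selects which mapping set is active. The particular chord contents are placeholders, not the mappings used in Wildlife.

```python
# Chord-mapping sketch: a "chord mapping set" assigns one chord to each of
# the twelve pitch classes; the chord is transposed to the performed octave.
# The interval contents below are invented examples.

CLUSTER_SET = {pc: [0, 1, 2] for pc in range(12)}       # chromatic clusters
OCTAVE_SET  = {pc: [12 * (pc % 3)] for pc in range(12)} # per-pitch octave displacement

def apply_mapping(midi_note, mapping_set):
    pitch_class = midi_note % 12
    octave_base = midi_note - pitch_class
    return [octave_base + pitch_class + interval
            for interval in mapping_set[pitch_class]]

active_set = CLUSTER_SET              # the percussionist chooses the active set
print(apply_mapping(64, active_set))  # E4 -> chromatic cluster on E4
active_set = OCTAVE_SET
print(apply_mapping(64, active_set))  # same performed note, different mapping set
```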

The drum's horizontal axis controls register, its vertical axis controls duration and the height above the drum controls loudness. The drum is also partitioned in half, with one part of the drum playing chords, and the other playing single notes. Overlaying this partition is a grid that the percussionist uses to select the active chord mapping set. Thus the familiar gesture of striking the drum can have the unfamiliar result of changing the harmonization of the violinist's melody, an effect usually considered in the realm of composition rather than performance.
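
A rough sketch of this kind of drum mapping appears below. The coordinate ranges, the grid dimensions, and the parameter curves are illustrative assumptions rather than the actual calibration used in the piece.

```python
# Drum-mapping sketch: the three sensed coordinates of one mallet are
# translated into register, duration, and loudness, and the horizontal
# plane is also quantized into a grid that selects the active chord
# mapping set. All ranges and the 4 x 3 grid are assumptions.

def map_strike(x, y, z, num_sets=12):
    """x, y in 0..1 across the drum surface; z in 0..1 is height above it."""
    register = int(36 + x * 48)          # horizontal axis: low to high register
    duration = 0.1 + y * 1.9             # vertical axis: 0.1 to 2.0 seconds
    loudness = int(127 * (1.0 - z))      # closer to the surface = louder
    plays_chord = x < 0.5                # left half: chords, right half: single notes
    grid_column = min(int(x * 4), 3)     # overlaid 4 x 3 grid chooses the
    grid_row = min(int(y * 3), 2)        # active chord mapping set
    active_set = (grid_row * 4 + grid_column) % num_sets
    return register, duration, loudness, plays_chord, active_set

print(map_strike(x=0.2, y=0.8, z=0.1))
```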

Another unusual aspect of this movement from an ensemble standpoint is that both performers are playing the same synthesized sound at the same time, resulting in ambiguity as to who does what and enabling one player to "pull the rug out" from under the other. This ambiguity is increased by a variant of the above scheme whereby the violin sends only glissando information. This produces a "duo-instrument" effect in which the percussionist plays chords and the violinist describes the glissando trajectory of those chords.

 

 Example 2: Klangfarbenmelodie Embellishment of a Cantus Firmus

 

The second movement introduces a further degree of interdependence between the violinist and the percussionist. Here, the violinist improvises a slow sustained melody, while the percussionist embellishes this melody by playing back the violinist's pitches using his own rhythm and with a variety of timbres. This technique was first explored by Andrew Schloss and works as follows:

The violinist produces a cantus firmus with a synthesized organ sound. He has two switches, which he operates with his feet. One switch turns on or off the doubling of his melody with the organ sound. The other enables notes to be sustained in pedal point, during which time he can play purely-acoustic material without triggering additional synthesized notes.

The computer listens to the violinist and remembers the pitches he most recently played. The percussionist can then replay these pitches in any order, with any rhythm and with various timbres. He has, in effect, the entire melody of the violinist from the last hundred notes in front of him on the drum's horizontal axis. As he moves left along the drum's horizontal axis, he moves further back in time, while moving to the far right of the drum gives him access to the pitches the violinist just played. He has two options as to how to perform the material: In "discrete mode," he produces a note each time he strikes the drum in the usual way. In "continuous mode," he merely waves his hand above the surface of the drum and the computer produces a continuous series of notes corresponding to the percussionist's hand positions, which signify tremolando speed, pitch and loudness. Furthermore, regions of the drum represent different timbres. Thus, by moving around on and above the surface of the drum, the percussionist can play a single pitch with a variety of timbres. If he keeps the pitch constant, the effect is of a virtuosic Klangfarbenmelodie.
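
A simple sketch of the pitch-memory mechanism follows, assuming a hundred-note buffer and a normalized horizontal drum coordinate; the names and scaling are invented for illustration.

```python
# Pitch-memory sketch: the computer keeps the violinist's last hundred
# pitches; the drum's horizontal axis indexes into that history, with the
# far right giving the most recent pitch.

from collections import deque

class PitchMemory:
    def __init__(self, size=100):
        self.history = deque(maxlen=size)

    def violin_note(self, midi_pitch):
        self.history.append(midi_pitch)

    def drum_strike(self, x):
        """x in 0..1 along the horizontal axis; 1.0 selects the most recent note."""
        if not self.history:
            return None
        index = min(int(x * len(self.history)), len(self.history) - 1)
        return self.history[index]

memory = PitchMemory()
for pitch in [60, 62, 64, 67, 69]:       # the violinist's recent melody
    memory.violin_note(pitch)
print(memory.drum_strike(0.0))           # far left: oldest remembered pitch (60)
print(memory.drum_strike(1.0))           # far right: the pitch just played (69)
```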

Example 3: Melody Generation Based on a Dynamically-changing Pitch Set

The third movement gives more independence to the computer, with each performer becoming something of a hybrid between improvisor and conductor. The movement is in two parts: the first is a violin/computer duo, while the second is a drum/computer duo.

The violin produces no acoustic sound, but triggers synthesized sounds of a string orchestra. At the same time, an algorithmic composer program on the NeXT computer listens to the pitches the violinist plays and generates melodies using the most recently-played pitches and dynamics. The melodic contour, register, rhythm, timbre, and other attributes of these melodies are based on irregular "fractal" shapes, using a technique developed by Michael McNabb. Thus, the violinist is cast in the role of a leader in a call-and-response ensemble situation. The degree of independence afforded the computer can be controlled by the violinist; since the program listens only to the twenty-five most recent pitches, the violinist can cause the music to "converge" by playing rapid repeated notes. This twin responsibility on the part of the performer as both soloist and conductor gives him great power in directing the flow of the music. Yet, the computer is also given great autonomy and at times seems to have a mind of its own. This paradoxical combination makes for a unique blend of the expected and the unexpected, control and surprise, that is particularly exciting in an improvisational context.
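
The sketch below suggests how a melody generator might draw on only the most recent pitches. McNabb's fractal contour technique is not reproduced here; a bounded random walk over the remembered pitch pool stands in for it, and all names are illustrative.

```python
# Melody-generation sketch: the program remembers only the violinist's
# twenty-five most recent pitches and draws its melodies from that pool.
# A simple random walk over the pool stands in for the fractal technique.

import random
from collections import deque

recent_pitches = deque(maxlen=25)

def violin_note(midi_pitch):
    recent_pitches.append(midi_pitch)

def generate_melody(length=8):
    pool = sorted(set(recent_pitches))
    position = len(pool) // 2
    melody = []
    for _ in range(length):
        position = max(0, min(len(pool) - 1, position + random.choice([-2, -1, 1, 2])))
        melody.append(pool[position])
    return melody

for pitch in [60, 62, 64, 65, 67, 64, 62, 60]:
    violin_note(pitch)
print(generate_melody())
# Rapid repeated notes shrink the pool, so the generated music "converges".
```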

The second part of the movement features the drum interacting with the computer in an opposite manner. Since it is more difficult for the percussionist than for the violinist to pick out exact pitches, we leave the job of choosing pitches up to the computer. The computer not only performs these pitches itself, using the fractal technique described above, but also maps the current set of pitches onto a grid or "palette" on the drum's surface. As the computer changes the set of pitches, the percussionist's palette changes accordingly. Thus, the percussionist, who plays timpani sounds, remains perfectly in sync with the computer from a harmonic standpoint, but has complete rhythmic freedom. The percussionist is additionally given control of the speed, articulation and dynamics of the computer melodies, using a set of foot pedals.
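
A minimal sketch of the palette idea, under assumed grid dimensions, appears below: the computer re-maps its current pitch set onto the drum grid whenever the set changes, so every strike lands within the current harmony.

```python
# Palette sketch: the computer owns the current pitch set and re-maps it
# onto the drum's grid whenever it changes. The grid size is an assumption.

class PitchPalette:
    def __init__(self, columns=4, rows=2):
        self.columns, self.rows = columns, rows
        self.pitches = []

    def update(self, pitch_set):
        """Called by the algorithmic composer whenever its pitch set changes."""
        self.pitches = sorted(pitch_set)

    def strike(self, column, row):
        """Return the pitch under the struck grid cell, if any."""
        if not self.pitches:
            return None
        cell = row * self.columns + column
        return self.pitches[cell % len(self.pitches)]

palette = PitchPalette()
palette.update({48, 50, 53, 55, 57})      # the computer's current pitch set
print(palette.strike(column=2, row=1))    # timpani note drawn from that set
```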

 

Possible Pitfalls of the Computer-Extended Ensemble

 

Is there a danger in introducing "too much technology" into musical performance? In developing very powerful computer-extended ensembles, issues of what performance is really about start to surface. Concert music is a vital part of our musical life, and will hopefully remain so in the decades to come, even if style and technological resources change drastically. In order for this to happen successfully, we must reflect on why people go to concerts at all. It can be argued that one of the important aspects of live performance (from the point of view of the audience) is virtuosity, and that perception of this aspect of performance may erode in certain ways with the application of new technologies to musical performance.

Whereas acoustic instruments, since the beginning of time, have exhibited a nearly one-to-one correspondence between the performer's action and its sonic result, the computer-extended ensemble, with its invisible technology (bordering, at times, on magic), has no such intrinsic relation. The question becomes: do we need a perceivable cause-and-effect relationship in live performance or not? We believe this is a question still to be answered, and in our own work we have kept it in mind at all times.

This issue may also become crucial in musical training, especially in the future, when computer-extended instruments are studied by children who may never have played the acoustic version of the instrument. As an example, the authors trained as a percussionist and a violinist, and that training is implicitly available to us even as we play computer-extended instruments that have no acoustic behavior at all. But what would it be like if we had never learned to play acoustic drums and violins, only the electronic versions? We would probably lack sufficient intuition and nuance to become effective performers on the computer-extended instruments.

Finally, there is also the desire on the part of some computer music researchers to create new instruments that are intelligent in a different way: these instruments should "know" what to do and require very little skill to play. Again, though this might be fun for beginners, it seems that this trend might ultimately limit a person's understanding of music. The question is whether these instruments will stimulate, due to their immediate accessibility, or suffocate, due to a kind of atrophy of learned musicality. It remains an open question at this point.

Though the power we now have in computer music is wonderful, exhilarating and open-ended, and though it frees us forever from the tyranny of the tape machine, we have entered an era in which cause-and-effect, an inherent aspect of musical performance since the beginning of time, is suddenly evaporating. As for this loss of the perception of cause-and-effect, we believe it is a problem that must be dealt with individually in every situation, and to some extent it will be answered by the response of the audience.

 

Conclusions

 

We have posed more questions than we have answered, a natural consequence of attempting to describe a field of music that as yet barely exists. In our own work, we have found that computers can significantly enhance the ensemble situation and are particularly well-suited to music with some improvisational elements.

But the usefulness of the computer-extended ensemble is not limited to improvisational music. In fact, if we expand the definition of improvisation to include a situation in which a performer spontaneously varies parameters such as dynamics and tempo, while leaving others such as pitch and rhythm fixed by the score, we find that all performed music is in some sense improvised. What makes the computer-extended ensemble excel in pitch-oriented improvisation carries over just as well to performed electronic music in general. In fact, with the recent advent of "robot instruments," acoustic instruments that can be played remotely by computer, the domain of the computer-extended ensemble opens to include purely acoustic music as well as electronic music.

 Finally, we have been careful to avoid a discussion of style. Nothing in the nature of the computer-extended ensemble limits it to a particular genre. It requires only musicians and composers willing to explore something new.

 

Acknowledgements

 

Many of the ideas in this paper were developed in collaboration with Andrew Schloss. Thanks to Michael McNabb, whose Ensemble program has been enormously useful in this work; Julius Smith, who co-developed the NeXT Music Kit with the author; and David Zicarelli and Miller Puckette, whose Max program has proven especially useful for rapid prototyping and debugging of computer-extended ensemble configurations. Thanks also to visionary instrument builders Max Mathews and Bob Boie (Radio Drum) and Keith McMillen (Zeta violin). Finally, thanks to Joel Chadabe, Max Mathews, Morton Subotnick, and the other pioneers of interactive computer music for asking the right questions.

 

References

 

Boie, R. A., L. W. Ruedisueli, and E. R. Wagner. 1989. Gesture Sensing via Capacitive Moments. Work Project No. 311401-(2099,2399). Murray Hill, New Jersey USA: AT&T Bell Laboratories.

Chadabe, J. 1984. "Interactive Composing: An Overview." Computer Music Journal 8(1): 22-28.

Jaffe, D. 1985. "Ensemble Timing in Computer Music." Computer Music Journal 9(4): 38-48.

Jaffe, D. 1991. "Musical and Extra-Musical Applications of the NeXT MusicKit." Proceedings of the 1991 International Computer Music Conference. San Francisco: International Computer Music Association. pp. 521-524.

Jaffe, D. 1989. "An Overview of the NeXT Music Kit." Proceedings of the 1989 International Computer Music Conference. San Francisco: International Computer Music Association. pp. 135-138.

Jaffe, D., and L. Boynton. 1989. "An Overview of the Sound and Music Kits for the NeXT Computer." Computer Music Journal 13(2): 48-55. Reprinted in S. T. Pope, ed. 1991. The Well-Tempered Object. Cambridge, Massachusetts USA: MIT Press.

Mathews, M. V., and W. A. Schloss. 1989. The Radio Drum as a Synthesizer Controller. Presentation at the 1989 International Computer Music Conference.

Mathews, M. V. 1988. "The Conductor Program and Mechanical Baton." In M. V. Mathews and J. R. Pierce, eds. 1988. Current Directions in Computer Music Research. Cambridge, Massachusetts USA: MIT Press.

McNabb, M. 1990. "Ensemble: An Extensible Real-Time Performance Environment." Proceedings of the 89th Audio Engineering Society Convention. New York: Audio Engineering Society.

MIDI Manufacturers Association. 1988. MIDI 1.0 Detailed Specification. Los Angeles: The International MIDI Association.

Puckette, M. 1990. "Amplifying Musical Nuance." Journal of the Acoustical Society of America 87(S1).

Schloss, W. A. 1990. "Recent Advances in the Coupling of the Language Max with the Mathews/Boie Radio Drum." Proceedings of the 1990 International Computer Music Conference. San Francisco: International Computer Music Association.

Smith, J., D. Jaffe, and L. Boynton. 1989. "Music System Architecture on the NeXT Computer." Proceedings of the 1989 Audio Engineering Society Conference. New York: Audio Engineering Society.

Zicarelli, D. 1991. Writing External Objects for MAX. Palo Alto, California USA: Opcode Systems.

 

Appendix 1: Limitations of MIDI

 

Some of the limitations of the keyboard music model (and, for that matter, of much recent electronic music) stem from the MIDI specification, the means by which keyboards convey information to synthesizers. MIDI was originally invented purely as a means of describing the state of a clavier-style keyboard and conveying that information to a synthesizer to produce sound. MIDI defines messages such as "note on," "note off," "sustain pedal on," and "sustain pedal off" and is inadequate to describe the state of wind instruments, stringed instruments, or other instruments with many continuously varying parameters. Thus, the limitations of the keyboard music model are a direct outgrowth of the limitations of the piano as a general sound-producer. For example, a piano has no way to produce a true glissando. A MIDI keyboard is slightly better in this regard, as it has a "pitch bend wheel" that enables a given pitch to be bent up or down if the performer takes one of his hands from the keyboard and moves it to the pitch bend wheel. Yet, this leaves him only one hand with which to play the keyboard. Also, it is difficult or impossible to produce true glissandi in two directions at once. This is a limitation not only of the fact that the performer has only two hands but also of the fact that the MIDI specification itself restricts each MIDI channel to a single glissando. Finally, MIDI does not address the issue of precisely defining the sound itself, nor of specifying changes that occur at an audio rate. In spite of all this, MIDI is extremely useful simply because it has been adopted as a standard that allows devices produced by all manufacturers to be interconnected.
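
The channel-wide nature of pitch bend can be seen directly in the raw MIDI messages. The sketch below builds note-on and pitch-bend messages from their standard byte layouts; because a bend message carries no note number, it applies to every sounding note on its channel, so independent glissandi require separate channels.

```python
# MIDI sketch: pitch bend is a channel message, so a single bend value
# applies to every sounding note on that channel; two simultaneous,
# independent glissandi therefore require two channels.

def note_on(channel, key, velocity=64):
    return bytes([0x90 | channel, key, velocity])

def pitch_bend(channel, value):
    """value: 0..16383, with 8192 meaning no bend (14-bit, LSB first)."""
    return bytes([0xE0 | channel, value & 0x7F, (value >> 7) & 0x7F])

# Two notes on the same channel: this one bend moves both of them together.
messages = [note_on(0, 60), note_on(0, 67), pitch_bend(0, 10000)]

# Independent glissandi in opposite directions need separate channels.
messages += [note_on(1, 72), pitch_bend(0, 12000), pitch_bend(1, 6000)]
print([m.hex() for m in messages])
```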

 

Appendix 2: Converting Acoustic Sound into Information for a Computer

 

When converting an acoustic sound such as that of a violin into information useful to the computer, the computer must determine when a new note is played and what its pitch is. A number of techniques are used to do this. A typical algorithm assumes various periodicity rates and looks for repetitions of the waveform, then chooses the rate that produces the best results. Unfortunately, such processes take some time, which can result in a delay or "latency" between the time a performer plays a note and the time the computer determines what note was played. Often the time required depends on pitch; the lower the pitch, the longer the process takes, since the waveform repeats at a slower rate. For example, the Zeta violin uses a pitch detector with a latency of about 10 ms on the E string, but on the G string the latency is closer to 30 ms, which is quite long and can be disturbing to the performer. Furthermore, a variable latency dependent on pitch can be particularly disturbing. There are a number of solutions to this problem, such as fitting the instrument with physical sensors, but we found we could live with it.
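
The following sketch shows the flavor of such a periodicity search; it is a crude autocorrelation, not the Zeta violin's actual algorithm, and the frequency range and buffer length are arbitrary. The latency argument follows from it directly, since lower pitches require longer stretches of signal before any candidate period can repeat.

```python
# Pitch-detection sketch: try a range of candidate periods and pick the one
# whose repetition best matches the waveform (a crude autocorrelation).
# Detecting a 196 Hz G means buffering several ~5 ms periods, while a
# 659 Hz E needs only ~1.5 ms periods, hence the pitch-dependent latency.

import math

def detect_pitch(samples, sample_rate, lo=80.0, hi=1000.0):
    best_period, best_score = None, float("-inf")
    for period in range(int(sample_rate / hi), int(sample_rate / lo) + 1):
        # Correlate the signal with itself shifted by one candidate period.
        score = sum(samples[i] * samples[i + period]
                    for i in range(len(samples) - period))
        if score > best_score:
            best_period, best_score = period, score
    return sample_rate / best_period

rate = 44100
signal = [math.sin(2 * math.pi * 196.0 * n / rate) for n in range(2048)]
print(round(detect_pitch(signal, rate), 1), "Hz")   # roughly 196 Hz (violin G string)
```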

What is more of a problem is under- and over-detection. That is, the pitch detector sometimes misses a note, or thinks the violinist played two notes in rapid succession when he actually played only one. Under- and over-detection is an artifact of the "note on" MIDI paradigm, which does not map well to bowed string instruments, as described in Appendix 1. In fact, it is possible that the pitch tracker latency could be reduced by changing the "note on" paradigm to one that more closely approximates what the instrument and the ear do with respect to pitch at the start of a note. That is, the note is initiated before the pitch is well-defined, and its pitch becomes clearer over the first few instants of the note. If the synthesis engine could similarly begin a note with a pitch that is not yet defined, it could respond more quickly. Physical models may be helpful in this regard.

 

 Appendix 3: An Example Software Architecture for the Computer-Extended Ensemble

 

There are a number of software systems available commercially and in academic laboratories for doing interactive computer music. Most have similar facilities to support scheduling, broadcast, reception and processing of events. We briefly examine one such system, the NeXT Music Kit, with the goal of understanding some of the mechanisms that make computer-extended ensembles possible.

The Music Kit is a library of software modules for creating music and sound applications on a NeXT computer. At the heart of the Music Kit is a software module or "object" called the "Conductor." It is the Conductor's responsibility to run small sub-programs at particular scheduled times. Another way to say this, more in line with the object-orientation of the Music Kit, is that a Conductor "sends a message" to an object at a particular time. The object may request other messages to be sent to itself or to other objects at some time in the future. For example, the effect of discrete echoes can be produced by scheduling a series of three repeated notes, each at a lower dynamic level. If such scheduling is done in response to notes played by a performer, the result is a simple interactive system that produces echoes.
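
The sketch below captures the scheduling idea in a generic form. It is not the Music Kit's Objective-C API; the class and function names are invented, and the echo is simply three future note events at successively lower dynamics.

```python
# Conductor-style scheduling sketch: a scheduler dispatches messages at
# requested times, and an echo effect is just three future note events at
# decreasing dynamics. This is a generic analogue, not the Music Kit API.

import heapq

class Conductor:
    def __init__(self):
        self.queue = []       # (time, sequence number, callback, arguments)
        self.counter = 0

    def send_at(self, time, callback, *args):
        heapq.heappush(self.queue, (time, self.counter, callback, args))
        self.counter += 1

    def run(self):
        while self.queue:
            time, _, callback, args = heapq.heappop(self.queue)
            callback(time, *args)

def play(time, pitch, velocity):
    print(f"t={time:.2f}s  pitch={pitch}  velocity={velocity}")

def performer_note(conductor, time, pitch, velocity):
    """Schedule three echoes, each softer, for every note the performer plays."""
    for n in range(1, 4):
        conductor.send_at(time + 0.3 * n, play, pitch, velocity // (2 ** n))

conductor = Conductor()
performer_note(conductor, time=0.0, pitch=60, velocity=100)
conductor.run()
```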

The Music Kit processes performed notes using Note Filters. Each type or "subclass" of Note Filter processes notes in a particular manner. For example, the echo function described above is implemented as a Note Filter. Each Note Filter has inlets and outlets. Two Note Filters are connected together by connecting the outlet of one to the inlet of another. When defining a Note Filter, there is no need to know where the notes are coming from or where they are going. The Note Filter simply receives notes from its inlets, modifies them in some manner, and sends them on to its outlets. As an example, assume you have an octave transposition Note Filter and the echo Note Filter described above. If you use the configuration shown in Figure 3, the result is a series of echoes an octave higher than played. If you use two instances of the echo Note Filter, as shown in Figure 4, the result is a series of echoes, combined with another series of echoes produced for every echo from the first Note Filter and sounding an octave higher. The result for each note played by the performer would be twelve notes, three at the original pitch and nine an octave higher.
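
The following sketch mimics the inlet/outlet behavior described above in order to verify the twelve-note count; the classes are invented stand-ins, not the Music Kit's actual NoteFilter objects.

```python
# Note-filter chaining sketch: each filter receives notes, transforms them,
# and forwards the results to whatever is connected downstream.

class NoteFilter:
    def __init__(self):
        self.outlets = []

    def connect(self, other):
        self.outlets.append(other)
        return other

    def send(self, pitch, velocity):
        for outlet in self.outlets:
            outlet.receive(pitch, velocity)

class Echo(NoteFilter):
    def receive(self, pitch, velocity):
        for n in range(3):            # three notes per input, each one softer
            self.send(pitch, velocity // (2 ** n))

class OctaveUp(NoteFilter):
    def receive(self, pitch, velocity):
        self.send(pitch + 12, velocity)

class Printer(NoteFilter):
    def receive(self, pitch, velocity):
        print(f"pitch={pitch} velocity={velocity}")

# Figure 4's configuration: echoes at the original pitch, plus echoes of
# each echo an octave higher -- twelve notes for every note performed.
echo1, octave, echo2, out = Echo(), OctaveUp(), Echo(), Printer()
echo1.connect(out)
echo1.connect(octave)
octave.connect(echo2)
echo2.connect(out)
echo1.receive(pitch=60, velocity=96)   # prints 3 notes at pitch 60 and 9 at pitch 72
```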

      [INSERT FIGURES 3 AND 4 HERE]