
Yiorgos Vassilandonakis
University of Virginia
McIntire Department of Music
112 Old Cabell Hall
Charlottesville, Virginia 22903 USA
yiorgos@earthlink.net

An Interview with Trevor Wishart

Trevor Wishart (see Figures 1 and 2) has carved a distinct niche as a composer, author, educator, vocal improviser, software designer, and practitioner of sonic art and digital music creation. Born in 1946 in Leeds, England, Mr. Wishart studied at Oxford, the University of Nottingham, and the University of York, where he earned his doctorate in composition in 1973. He became involved in the early stages of electronic music, which he has actively researched and developed ever since, creating an impressive body of works that have been commissioned by l’Institut de Recherche et Coordination Acoustique/Musique (IRCAM), the Paris Biennale, and the BBC Promenade Concerts. (See Table 1.) He has also been awarded prizes such as the Giga-Herz Grand Prize at Zentrum für Kunst und Medientechnologie (ZKM), the Golden Nica at the Linz Ars Electronica, the Gaudeamus Award, and the Euphonie d’Or at the Bourges Festival. Mr. Wishart is also known for his works involving the human voice, including improvisations using extended vocal techniques, as well as his musical theater pieces. Specializing in sound metamorphosis, he is a principal contributor to the Composers’ Desktop Project (CDP), a set of software tools for multifaceted sound transformation. Mr. Wishart has developed and maintained Sound Loom, a user interface for CDP, since 2000. Further contributions include his books On Sonic Art and Audible Design, as well as his Sounds Fun books of educational games, which have been translated into Japanese. (See Table 2.) Mr. Wishart has held residencies in Australia, Canada, Holland, Germany, and the United States, and at the Universities of York, Cambridge, and Birmingham. He is an honorary Visiting Professor at the University of York.

I first met Trevor Wishart in June 2003 while attending a summer program in Electronic Music at the Centre de Création Musicale, Iannis Xenakis (CCMIX), in Paris. I was struck by his knowledge of sound properties, his philosophy, and his wit, as well as his programming abilities, having watched him tweak or rewrite software on the spot to improve or update functionality. I met him again in Paris a few years later, when I got the chance to work closely with CDP. This interview took place in three separate sessions, the first of which was conducted live in Paris in June 2006, over a meal between Mr. Wishart’s teaching sessions at CCMIX; the remaining sessions were concluded over telephone and electronic mail during the summer and fall of 2008.

Computer Music Journal, 33:2, pp. 8–23, Summer 2009. © 2009 Massachusetts Institute of Technology.

Early Years and Socio-Political Context

Vassilandonakis: Most of your work has been as an independent composer. How have you managed to pursue a career without the support of a teaching post or even a major research institution?

Wishart: There seem to be ways for a performer of contemporary music to make a living in big metropolises like London, Paris, or Berlin. But these venues and their audiences seemed remote from the world I grew up in. And anyway, it seemed that composers who were not performers needed an academic post to financially support their work.

Vassilandonakis: They still do. . .

Wishart: Absolutely, though it also depends on where you are. There are still places with remnants of financial resources for independent composers, though not many. At the time I was starting out, the Arts Officer for our local region (Yorkshire) was a great supporter of the new medium known as “performance-art,” and many well-known performing groups moved to the region to live and work. Much of their experimental visual work happened in the street or in small-town, small-scale venues (the now-defunct “Arts Labs”). I worked with several of these artists while I was still a doctoral student at York, and I was inspired by the idea of making your way in life by taking work out “on the road”


Figure 1. Trevor Wishart performing at the Krikri Festival in Ghent (2006).
Figure 2. Presenting a seminar at the Trieste Music Conservatory in Trieste, Italy.

and being judged in front of a real audience. So I was lucky to find a niche for myself staging what I called “environmental events” (we’d probably call these “site-specific events” today), and multimedia shows like Son et Lumière Domestic, which were given Arts Council support as “performance art.” I should also be honest and say that, without my wife’s support, the first ten years of my independent career would have been impossible, as I came from a poor family with no resources to back me up.

Vassilandonakis: You have an academic composer’s background, and yet your career has focused mainly on electronic music. Was there a point at which you consciously renounced or gravitated away from the traditional academic path? How did you make the transition to electronic music?

Wishart: I was a student during the ferment of the late 1960s and early 1970s when lots of alternative ideas were in the air. My friend and roommate went off to found Uncareers, an alternative approach to entering the “real world” of employment. I became committed, in a youthfully idealistic way, to ideas of taking art out of the usual contexts and into the community, without any realistic notion of how this might be achieved. In particular, the death of my father while I was still a student made me reconsider what I was doing as a composer. The post-Xenakis type orchestral work I was writing (based on seven intervallically distinct tone-rows and random number tables) seemed irrelevant to his life, working in a factory in an industrial town, and this is what first inspired me to buy a tape recorder and go out recording the sounds of industry around the region with some vague idea of making a piece of music out of it.

Vassilandonakis: How familiar were you with the electronic genre of the times as you went in to manipulate those recordings? Was that project completed successfully?

Wishart: Most of my university training had been focused on composers in the instrumental tradition,
so although I had come across the electronic works of Xenakis, Stockhausen, and Berio, and I knew of the existence of musique concrète, I had no existing models to relate to when I began my studio work. I eventually generated the piece Machine . . . an Electronically-Preserved Dream, which combined those industrial recordings with semi-improvised sound materials for singers or speaking voices and cutups of media clips and of my own voice in a radical ad hoc approach I called “music montage.”

Vassilandonakis: How did it feel to have access to all these new studio tools back then?

Wishart: Working with sound in the studio was a revelatory experience for me. With my strong academic background in composing, and a particular interest in musical form, it was shocking to discover that sounds had their own agenda; one couldn’t just transfer idealized notions of proportions to the pre-recorded sound-materials one had collected. They seemed to have a life and demands of their own. I have to admit that I find mathematics beautiful, and I read mathematics books for entertainment, but musical beauty is more than mathematical form, as it also has to function effectively in time, unlike a mathematical argument. I’m always concerned therefore with shaping materials in a way consistent with their particular nature.

Vassilandonakis: . . . a fact that can throw out all pre-composition calculations at times.

Wishart: Absolutely. An obvious example I often use is that a sound-morph usually has to take a certain minimum time to register with the listener. There’s time required to recognize the initial sound, time to recognize the final sound, and time to accept that a morphing (rather than an edit-switch or a crossfade) has taken place. You can’t demand, for some grand aesthetic reason, that such an event take up less time. In general, all the time proportions in my work are “calculated” by listening; a typical experience is to create a musical phrase which feels (after continued listening, reworking, etc.) just the right length, only to have to cut it down (or even reject it altogether) when it is placed in its musical context in a piece I’m building. I sometimes describe this process as “slow improvisation,” as I don’t see this unbridgeable gulf between improvisers and composers (especially studio composers) that some musicians seem to want to erect.

Vassilandonakis: Yet there are some fundamental differences in these two ways of musical creation. Since you have also been active as an improviser, how does one inform the other in your own thinking?

Wishart: It’s good to improvise (I still do live vocal performances with no more electronics than a microphone and amplifier), but it’s also good to spend time working over materials in a very intensive way, making decisions about long time-scale relationships which are not so accessible in the heat of a live performance. I’ve done some pretty exciting live vocal performances, but I would never have come up with the “ocean” sound transformations in Imago in a live situation. Curiously, it can be the unpredictability of the studio work that leads to discoveries that are not going to happen in the live situation: A complex patch applied to a complex source often leads to sound results which can’t have been foreseen. Many of them are average (or just plain uninteresting), but you can take the best of the unforeseen results and apply a new, unpredictable process, leading to new sounds from which you can make a choice, and so on. This process of progressive selection (weeding out) and development is not easily applied in a live situation.

Vassilandonakis: It must have been hard to follow this approach of “slow improvisation” in the analog studio with tape machines and filters, in the sense that it takes so much physical work to realize these processes, and it is a long time before you can listen to what you’ve done. Was your approach then less improvisatory? I’m sure it must have honed your skills a great deal.

Wishart: It’s just that the improvisation took place at a different place in the compositional chain. In the analog studio, the improvisation lay mainly in generating the material, which for me was done through controlled-improvised live performances, always ending up being much more than I would finally use. I would then select from the recorded materials to work on for a piece. In Red Bird,


for example, I asked two performers to speak the phrases “reasonable” and “listen to reason” in a host of different ways. (I provided a very, very long list of adverbs describing speech or emotional states and insisted on recording every version.) I then selected and classified the materials according to how they sounded (not how they were necessarily intended to sound). All these sounds existed as physical bits of tape, separated by colored plastic splicing tape on numerous plastic reels. In the computer studio, a lot of the “improvisation” takes place in the transformation process, where I am experimenting with transformations (and transformations of transformations) of the original materials, with my existing software instruments, or new instruments I develop as I compose. Now that I mention it—this approach of generating lots of possibilities and carefully cataloguing the materials—is something I’ve actually carried over into the digital domain, where these kinds of management tasks are much easier.

Vassilandonakis: Did you ever feel that this new electronic genre was more in touch with the times and the current socio-political situation?

Wishart: Ever since this early work in the studio, part of the attraction of sonic art has been the connection with the “real world.” I know this is an odd idea for many musicians, as studio composers seem a detached bunch compared with instrumentalists or laptop performers who are doing their thing out there in public. But this standoff between live/real-time and studio composition is not a real issue. No one thinks novelists are sad because, unlike standup comics or stage actors, they don’t do it live. It’s simply a different way of working with materials.

Vassilandonakis: You also worked on educational projects and music for the community at that time.

Wishart: The leap into sonic art was just part of a larger picture for me, and I became involved with other groups with a more direct approach. I worked for some time with Interplay, a Leeds-based offshoot of Ed Berman’s Interaction, which was taking alternative theatre and creative play into local deprived communities. I also became involved in free improvisation, which perhaps explains why the sound materials for many of my earlier pieces were recorded from improvisation sessions with voices or found objects. The work with Interplay spawned the creative musical games I later published as Sounds Fun, and this and related work fed into the Schools Council Music Project run by John Paynter, which sought to make composing the center of classroom activities in school music in the UK. At the time, this was a radical new idea. As a result, I found myself devising workshop projects for teachers and for classroom situations. Twenty years on, all these activities have become standard practice in British schools, but at that time they were a completely new way for a composer to make a living.

Vassilandonakis: There are several direct references to socio-political issues in your work, though. For example, when you use the recorded voices of Margaret Thatcher and Lady Di (in Two Women) as source material, you cannot avoid the connection between the sonic material itself and the public persona delivering the material.

Wishart: I’ve tended to select sound materials with some extra-musical idea in mind, particularly in, say, Red Bird (sounds of birds, animals, words, and machines in a kind of mythic sound-drama) or The Division of Labour (a text from Adam Smith’s The Wealth of Nations). But there are two caveats. First, the materials themselves have to be sonically interesting. One reason I’ve worked a great deal with the human speaking voice is because the speech stream is spectrally complex and varies rapidly in time. It’s always a rich mine of data from a purely sonic point of view. Some “political” works I’ve heard tend to rely too much on the content or “meaning” of the sounds they use, forgetting that these have also to be malleable as sound objects if you’re to do exciting musical work with them. Secondly, once I’ve got my extra-musical bearings, so to speak, I want to build a formal structure that works as music. I feel that music (you could say this about mathematics too) is a separate universe of discourse from language. You have to understand how to navigate this particular universe to build pieces, even where you’re using representational materials. So any extra-musical agenda I may have
usually just sets a frame or context for the musical thinking that follows. A simple example is The Division of Labour, where the Adam Smith text (about the production of a pin, divided into 18 distinct operations, the classic description of the advantages of the factory system) provides the stimulus for the musical idea (sonic variations of the text-material), while the eventual degradation of the text (in the penultimate variation, and the tail of the piece) is the closest the form comes to direct political comment.

Compositional Process

Vassilandonakis: You seem to work with material that undergoes a continuous transformation and sonic development.

Wishart: In my first studio piece, Machine . . . an Electronically-Preserved Dream, I set up an improvisation situation for a choir with eight “conductors.” The choir imitated tape loops of machine sounds I’d recorded, and then, using some pretty generalized instructions, were asked to transform these, gradually, into something “more human.” (Okay, I’d want to be more specific these days . . . ) So the idea of transformation grew out of some fairly naive socio-political ideas. But, by the time I got to Red Bird, I’d refined the notion of transformation (and of sound representation) and was looking at an altogether more sophisticated way of structuring a piece of sonic art. Red Bird is interesting partly because it still (I think) succeeds, even though many of the desired transformations either proved impossible (in the analog studio), or were only semi-successful, but mainly because it predates my access to computers for music making. Having struggled with the analog possibilities for four years to make that piece, I instantly recognized that computers should make sound transformation a truly practical proposition. So the idea of working with sound transformation in music for me predates the digital tools I later used (and very largely made) for achieving this.

VOX5 took the transformational idea into the digital world. But a more important step, from a theoretical point of view, was to see sound transformation as the sonic equivalent of motivic or harmonic development, enabling standard notions of musical form to be generalized into the sonic domain.

Vassilandonakis: What are some of the approaches you follow to generate material? Is there a conscious work process you follow?

Wishart: The way I work falls into three main stages. The first stage is the idea itself. To start a piece, I need to link together a set of sounds (it could be just one sound, as in Imago, or over 8,000 sounds, as in Globalalia) and some general over-arching idea (in these cases, respectively, surprising metamorphosis, and the universality of the sounds of human discourse). This is often the hardest part of composing for me. It’s not so much a problem to work with sounds, and to make successful or even spectacular sound events. The main problem is why start on a piece at all?

The second stage is to transform the sounds. With pieces like Tongues of Fire and Imago, I subject the source material to transformation using all the tools in the CDP armory. From these transformations, I select the most interesting or potentially useful sounds, and then I transform those sounds further. In this way, I build up a tree of related sounds. A sound may be “useful,” for example, if a harmonic (pitched) sound comes to be strongly inharmonic, or a strongly attacked sound loses those attack characteristics. This means that it will develop in some completely new direction when further transformations are applied. (It’s like modulating to some distant key in traditional pitch space.) Some sounds also need to be retained because they establish an audible link between (say) a harmonic and an inharmonic variant. These linking sounds are important in the structure of the piece to ensure the sonic coherence of the materials. Some of these materials will become long enough (e.g., via time-stretching, or creating textures of many individual events, or “zig-zag,” or “drunken-walk reading”) to become the basis of musical phrases in their own right. Combining these with other events, I’ll work up materials into phrase-length structures.


The third stage is to determine the structure of the piece. Eventually, I will make some phrases that are particularly striking and select these as the nodal events of the entire piece to be placed at particular points in time in the whole structure. This also implies that the sounds from which they evolve can be placed somewhere in that structure to prepare for the nodal events, or to act as memories of them (perhaps codas). As this process continues, the options for placing further materials become more and more restricted until the large-scale form of the piece almost “crystallizes out.” It’s at this stage that materials that seemed ideal when they were first made get trimmed, occasionally extended, or entirely rejected, because of the surrounding context. I want to end up with a formal structure that has the feeling (as I’m listening to it) of some kind of musical “necessity” in the way it moves from one section to the next.

However, with a multi-source piece like Globalalia, there was a long pre-composition phase where I edited my materials (speaking voices from radio or TV in many languages) into syllables, cleaning them up as best I could, and entering them into a database facility (which I wrote for Sound Loom). The database enabled me to select sounds for their pitch, pitch-motion, consonant-vowel-(motion), gender, language, etc. for subsequent compositional work. The piece is in the form of a frame tale (a story used for telling stories, like Sheherazade (or The Arabian Nights)), so it has a theme that “repeats” at various points and a set of variations on particular syllable-groups taken from the theme. Here, the materials were worked in a slightly different way: Specific items were selected (using the database) with particular properties, and a section was developed around the particular properties of those materials—for example, the syllable “ma” can be time-stretched (as “m” still sounds like “m” when stretched) in a way that “ka” cannot (as “k” is destroyed as a “k” if it’s time-stretched). However, “r” (of the Scottish or Dutch variety) has to be time-extended in a completely different way, leading to different sonic possibilities (a good “water” transformation for example). This all sounds terribly rational, but there’s a good deal of pure “play” involved in finding out what works best for each set of sounds.

Use of Voice and Instruments

Vassilandonakis: Let’s talk more about your long relationship with the voice and vocal improvisation.

Wishart: I first became interested in new vocal possibilities in the very early 1970s when Richard Orton introduced me to the singing of harmonics in Stockhausen’s Stimmung. In my struggle to make Red Bird in the analog studio, I found the most useful tool I had was my voice. Without computer analysis and transformation, I was forced to rely on the malleability of the voice to achieve many of the sound transformations I wanted. I also met Warren Burt, who’d been working with extended vocal techniques groups in California, and we exchanged vocal sounds.

Vassilandonakis: You obviously like the sonic possibilities of the voice. What about the extra layer of language and meaning that is attached to vocal sound?

Wishart: I’m not so interested in conventional linguistics, as my focus is on the sonic objects in speech, and their sonic relations, rather than their meaning-based combinations. But the speaking voice provides an interesting model for sonic art, as speech is primarily the detailed articulation of timbre, whereas traditional music relies on the detailed articulation of pitch or rhythm. The mere existence of speech demonstrates that an articulate music-of-sound can exist.

Vassilandonakis: Have you ever gone back to composing for instrumental ensembles, applying your knowledge of sound from the electronic medium? Or does it seem less satisfying to you?

Wishart: I think it’s more a matter of how I want to work with sounds. Roughly speaking, musical instruments present you with a take-it-or-leave-it set of available “timbres,” as they are designed with this in mind, i.e., to hold timbre constant while they play skillfully with pitch and duration. Of course you can explore a vast range of extended possibilities on any instrument (key tapping, slapping the body of hollow string instruments, etc.), but in this way you get a slightly arbitrary bag of spectra for each instrument,
rather than having that complete control of the spectral space one has in the studio. And, in a sense, you’re fighting against the grain—against what the instrument has been designed for. With the voice, timbral flexibility is intrinsic to human speaking, so working seamlessly in the timbral domain is not a problem. The sounds in our everyday environment are, similarly, not honed to some pre-ordained musical purpose, and for that reason they may be sonically rich and internally evolving in ways we often try to avoid when designing a pitch-playing instrument. But even more importantly, musical instruments have been specifically designed to function in the traditional musics of their cultures; they are machines for making music. The reason I continue to do non-electronic extended vocal improvisation is that the voice has that spectral flexibility I’ve tried to develop and master in the world of recorded sound. The voice predates music, and we use it constantly when we’re not making music at all; it’s more like a sound recorded from our natural environment. And it’s this link with the larger world, the world of sound beyond what we traditionally think of as music, which also interests me. In a studio work, as in a film, you can generate a whole new imaginary world, or parse the real world in unexpected or surreal ways. I particularly like the collision of this semi-cinematic view with pure musical thinking. However, I’m certainly not ruling out composing pieces for instruments in the future.

Perception and Cognition

Vassilandonakis: Your music is very aware of, and, I’d dare argue, even dependent at times upon, principles and issues in perception and cognition. How did that aspect of your work develop, and how do you approach it?

Wishart: When I attended the IRCAM induction course in 1981, the most interesting lectures were those on psychoacoustics by Steve McAdams. As a lapsed scientist (I started my university days studying chemistry), I shared his scientific perspective on musical perception—most importantly, that musical formal relations must be based on what can be heard, and not merely on what composers say ought to be heard. This was a big issue at the time, as the “New Complexity” school was quite dominant, and some composers were prone to the “intellectual terrorism” approach that Sokal lambasted in his famous attack on post-modernism (i.e., if you can’t hear what’s written in the score, that’s due to either your lack of musical ability or that of the performers).

Vassilandonakis: Sounds like a creed from a distant musical past, yet it’s not really that long ago. Most composers today consciously try to engage the listener, even if that doesn’t mean flattering the listener. They at least seem to “care if you listen.”

Wishart: I think so, too. My own approach is simply that if a reasonably competent musical listener can’t hear a purported musical event or relationship, then it doesn’t exist as a musical entity. No amount of special pleading from the composer makes any difference. I apply this strictly in my own work. Even though there are often three (but rarely more than three) threads developing at the same time in my music, they must all be audible. (Sometimes you may need several playbacks to hear everything.) And what you hear is what you get; music is concrete, not abstract.

Vassilandonakis: You talk about the importance of the perceived physicality of sound, and the constant tendency of the ear to attach a perceived source to any sound. Could you share your views on this?

Wishart: Unfortunately, here I have no scientific evidence to back up my case, but long experience suggests to me that we have listening templates, similar to those in the visual field where certain cells are sensitive to edges, or parallel lines; we don’t just see an array of pixels. I would propose that we tend to recognize or construct the physicality and causality of sounds, even where those sounds are produced entirely synthetically. We will subconsciously classify sounds as struck or stroked (causality); as solid-resonant, rigid, elastic, granular, etc. (physicality); and these pre-conscious


categorizations inform the way we listen and whether we hear sounds as related to one another.

Vassilandonakis: I don’t think you need to go too far to find evidence to support this argument. Isn’t it a survival skill to be able to identify the identity and spatial location of a (potentially threatening) sound very quickly? Hence the importance of attack characteristics in identifying any sound.

Wishart: We can defeat these preconceptions of course, but we have to work hard to do it. There are some very interesting features of natural sounds related to their variability. For example, if you take a single tongue-flap from a rolled “r” and make a regular-timed loop with the same repetition rate as the original, the new sound sounds not even remotely like the original. The brain seems to pick up a whole qualitative “aura” from “natural variability” that the reconstituted regular loop does not possess.

Electronic Techniques

Vassilandonakis: Who were your influences when it comes to computer music?

Wishart: That’s a difficult question to answer, as I tend to plough my own furrow, particularly since getting into the recorded-sound medium. I’ve had lots of help from formal teachers (like Richard Orton in York) and expert friends like Martin Atkins and Miller Puckette (brushing up against their expertise has helped hone my much less accomplished programming skills). Also, I never thought of myself as getting into “computer music.” It was just that computers came along and made what I’d been attempting to do in the analog studio a lot easier, as well as vastly expanding the possibilities. My concerns have always been to make music with sounds from the real world, and to devise (software) instruments to make that possible in a detailed and sophisticated manner, treating complex sounds with the rigor a traditional composer might apply to motivic development or harmonic structure.

Vassilandonakis: Could you talk more about your electronic techniques? There seem to be some tape-style techniques, and also waveform-related, and time-varying spectrum processes. How did you discover and develop things like waveset distortion, brassage, end-synced delay, and others?

Wishart: While I’m composing, I develop new instruments [Interviewer’s note: in CDP, several tools or processes can be combined into an “instrument” that automates a series of actions into one]—though less and less these days. Sometimes, I create high-level tools, like the Interpolation Workshop in Sound Loom (see Figure 3) that make it easier to use an existing tool (in this case, CDP’s “mix in-between,” which creates interpolated mixes)—usually when I notice that I’m using some procedure a great deal but I could be using it more efficiently. In some cases, I decide to extend the possibilities of existing instruments. For example, the Varibank Filter (see Figure 4) uses the existing CDP filter routines to build a bank of filters on pitches specified by the user, as well as the harmonics of those pitches, where the pitches (and the filter Q) can all vary continuously through time. I developed this instrument while working on Fabulous Paris, starting by extending the existing filterbank, then rationalizing the approach. The whole collection derives from a single-band, non-time-variable filter module. Part of the development is rational (“I would like this”) and part speculative (“I wonder what would happen if . . . ”). The rr-extend program (see Figure 5), one of the most recent programs, fits the first category, and is the result of many years’ thought about how to time-stretch iterative sounds (like a rolled “r”). Once the program is created, one can start to do crazy things with the parameters to produce results one wouldn’t have dreamed of. Waveset Distortion fits completely the second category. I was speculating about ways to produce ugly digital sounds (there were too many “beautiful” digital bells around at the time, and I felt the need for a little grit in the system). I simply decided to try out a procedure that was rational in one sense (it produces a continuous waveform with no clicks) but physically irrational (it ignores the positioning of wave cycles in the source), and listen to the result.

Having made one version of a procedure, it is not difficult to make many related versions, so “waveset repetition” led to “waveset averaging”

Vassilandonakis 15
Figure 3. Interpolation Workshop in Sound Loom.

16 Computer Music Journal


Figure 4. Varibank Filter.

to “waveset enveloping” to “waveset omission.” And these are interesting processes because they are highly source-dependent (i.e., unpredictable with complex sources); they are ideal for studio composing, as you can apply them, listen to the surprising results you get, select anything that is interesting, and reject everything else.
In my musical composition, I normally use lots of sound-transformation tools, rather than focusing on any particular one, though the occasional piece (e.g., the second movement of Two Women) is more process-focused. I am interested in the sonic/musical outcome of what I do, not in the means I use to achieve it. I will often explore materials by applying different transformations to discover the most fruitful processes for that particular sound. I also tend to combine many processes to shape a sound. For example, I am currently working on a piece that extends the pitch characteristics of short vocal utterances. Each utterance is treated differently, depending on what I discover works best. One particular phrase consists of repetitions of an utterance in which the language-like character dissolves away.
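The waveset idea is easy to sketch in code. The following is a toy Python illustration (my own rough version for this interview, not CDP’s actual implementation): cut the signal at its upward zero-crossings, treat each span as an approximate “waveset,” and repeat each span in place. The result is continuous (no clicks) but, as Wishart notes, unpredictable on complex sources, because the spans follow literal zero-crossings rather than true pitch periods.

```python
import numpy as np

def waveset_repeat(signal, repeats=2):
    """Sketch of 'waveset repetition': split a signal at upward
    zero-crossings (each span approximating one 'waveset') and
    repeat each span `repeats` times in place."""
    # indices where the signal crosses zero going upward
    crossings = np.where((signal[:-1] < 0) & (signal[1:] >= 0))[0] + 1
    bounds = np.concatenate(([0], crossings, [len(signal)]))
    out = []
    for start, end in zip(bounds[:-1], bounds[1:]):
        out.extend([signal[start:end]] * repeats)
    return np.concatenate(out)

# On a pure 440-Hz sine, repeating each waveset doubles the duration
# without changing the pitch; on speech or noise the timbral effect
# is far less predictable.
sr = 44100
t = np.arange(sr) / sr
sine = np.sin(2 * np.pi * 440 * t)
stretched = waveset_repeat(sine, repeats=2)
```

On a periodic source this behaves like a crude time-stretch; on an inharmonic or noisy source the zero-crossing spans no longer correspond to pitch periods, which is exactly where the “grit” comes from.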

Figure 5. The rr-extend program.
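The first stage Wishart describes next involves quantizing an extracted pitch contour onto the tempered scale. A minimal sketch of that one operation (a hypothetical helper for illustration, not a CDP program; in practice the result would then be corrected by hand, as he explains):

```python
import numpy as np

def quantize_to_tempered(freqs_hz, a4=440.0):
    """Snap a pitch contour (in Hz) onto the nearest
    equal-tempered semitone."""
    freqs = np.asarray(freqs_hz, dtype=float)
    midi = 69 + 12 * np.log2(freqs / a4)             # Hz -> MIDI note number
    return a4 * 2.0 ** ((np.round(midi) - 69) / 12)  # nearest semitone -> Hz

# A slightly sharp A4 and a flat C5 snap onto the scale,
# giving approximately [440.0, 523.25] Hz.
contour = quantize_to_tempered([446.0, 520.0])
```

The quantized contour can then drive sine oscillators or a tracking filter bank, as in the stages described below.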

This was made in several stages. At the first stage, the pitch of the spoken phrase was extracted, quantized onto the tempered scale, and then corrected by hand. (I’m listening for a best fit on the tempered scale.) These data were used to create two sine tones that track the pitch—one at the original pitch, the other an octave higher. These were then cross-faded in a way that related to the brightness of the vowels used in the original voice. The resulting sound was then given the loudness envelope of the original speech; it was then mixed, at a low level, with the original speech, underlining the “focused-on” pitches. At the second stage, the same pitch-line data was used to construct a pitch-changing filter that follows the extracted pitch of the source. This was applied, with various Q-values (and various numbers of harmonics), to the previous sound, so that the output sounds became “focused” on these tempered-scale pitches. (The consonants get “swallowed up.”)
These various variants were then used to make a texture (irregular repetitions of these events, of a specific time “density”) that passes from the more recognizable variants to the less recognizable. (Hence, the consonants gradually dissolve.) This texture was then (fourth stage) sound-shredded to produce



material with a slightly plucked feel. Using the “grain repetition” process on each channel of the new sound (this recognizes attacked events within a stream and treats them each separately), each attack was repeated with a short rhythmic pattern. The original texture was then made to slowly cross-fade into the plucked texture. At the fifth stage, progressive spectral blurring was applied to the end of the texture, so the event-detail was lost, and the sound smeared out. The pre-treated texture was then cross-faded into the blurred version. The sixth stage involves an octave-higher version (brassage transposition, in this case) of this whole event, which was made without changing the timing. This was then slowly faded into the non-transposed version. Next (seventh stage), the resulting event was passed through another filter tuned to all of the tempered pitches. The pre-filtered sound was made to slowly cross-fade to the filtered version. Finally, the resultant sound was given a long dovetail, fading slowly to zero level at its end.
In this way, I produce an event in which the pitch characteristics of the original are preserved in a texture where the original vocal phrase dissolves away. I select these particular processes by trial and error (informed by experience). Often my choice of process does not work, or I have to fine-tune the parameters to get a musically worthwhile result. The final event is no more than 30 seconds long, of which perhaps 20 seconds will be heard in the final piece.

Vassilandonakis: You create mainly fixed-media music. Have you tried real-time processes?

Wishart: I’ll make a distinction between real-time processes and real-time pieces: if one is making a fixed-media piece, then the advantage of using real-time processes is that you can explore the output of a process through physical gesture before deciding what you want to use. I’ve not worked in this way simply because I never got into real-time programming. Most of the processes I use work faster than real time, and I don’t have a problem with drawing graphs or typing tables, so I don’t miss this. However, for a real-time piece, one has to use real-time processes, and I’ve been working on a real-time piece with voices for the past 2–3 years, with varying degrees of success.
One question I have to ask in this case is, “Why are the voices being transformed?” In a studio piece, we are already in an imaginary world where anything is possible. In the real-time situation, we have the contrast between the physical presence of the performers on the stage and the possible disembodiment of their voices. What is the theatrical or symbolic significance of this? I’ve tried to deal with this issue a few times.

Vassilandonakis: Your tools are very refined, personal, and continuously evolving.

Wishart: I don’t regard my tools as personal. My aim has always been to make the tools as available as possible so anyone can use them. The main motivation behind this is to enable other people to participate fully in the new music making. My lucky break in life was to be born into a house with a piano in it. This was a family heirloom; there was no way my family could have bought a piano. But this magical object launched me on my career as a composer. I later got into writing software because I simply couldn’t afford the endless train of updated black boxes that university and commercial studios could buy out of their equipment budgets. (Writing software takes time, but once written, it’s free, and you can upgrade it without asking.) So I’m deeply committed to access!

Vassilandonakis: By “personal,” I meant they’re created to solve a specific problem or fill a need you were facing while creating a specific piece. In that sense, you can’t deny that your personal method of working is reflected in them. Consequently, you do influence the way other composers will work, even if there are limitless possibilities. What I’m leading to is the fact that all the CDP tools I’ve played with are extremely composer-oriented, and they yield very musical results right away. I think it’s really important that they’re created out of necessity by a composer working on a piece, not a software engineer trying to satisfy the marketing department of a company that is geared toward the amateur songwriter.

Wishart: Yes, that’s true. In particular, I hone my instruments through the experience of using them. I usually start with a basic design and then improve

on it as I continue to use it and discover the best way to use it, or discover its limitations, and see how it can be extended. I have a number of ways of approaching the development of new instruments: In some cases, I become aware while composing that a particular instrument could be developed further, or that it has limitations that are not necessary, or that I am very often using particular processes in combination, or in slow succession. For example, the “varibank” filter, which allows you to define filterbanks of time-changing harmonies (without restricting you to the tempered scale), was developed after I’d been using individual static filter banks in sequence, in Fabulous Paris. In other cases, I speculate that a certain process might be interesting and just go for it and try it out. I might even develop an initial instrument into a family of related processes (like the “distort” family of waveset distortion instruments).
Like many composers, I also have slightly obsessive tendencies, so I try to think of every possible way a process might be developed. This can lead to strange problems, like CDP users asking my advice about some process I wrote a long time ago but have never used!

The Composers’ Desktop Project

Vassilandonakis: Could you talk a little more about how CDP came about?

Wishart: The Composers’ Desktop Project grew out of a group of studio composers based in York in the early 1980s. We had all worked in the analog studio, and understood the potential of computers, which were, at the time, largely inaccessible for musical use. There was a minicomputer running Csound at the University, but this was difficult to use and to share (students clearly had to take precedence). With the arrival of the Atari ST in 1986, we calculated that we could run the software on this desktop machine with a little technical help (getting data out of the ROM port, and building a buffering device). We proceeded to port Cmusic, and later Csound, and then we wrote our own code in C. When multitasking machines came about, we graduated to the PC platform in 1994, which was cheaper and more available in the UK than the Macintosh. The group was a composers’ cooperative, and it expanded and contracted at various times, morphed into a small business, then back into a cooperative. It struggled financially for many years, as it was not viewed primarily as a commercial enterprise but as a resource for composers, and, being PC-based until recently, it was largely spurned by the academy (who opted to use the Macintosh). By now, however, the CDP is a vast collection of non-real-time signal processing software. It’s difficult to think of anything you can’t do with the CDP.
I wrote the Sound Loom graphic interface (using Tcl/Tk) in Berlin in 1998. (There’s also an alternative graphic interface, SoundShaper.) More recently (2005), the software and interface have been ported to the Macintosh, and there are several other CDP outputs (real-time plug-ins, algorithmic composition tools, etc.). And the CDP is responsive to its users, as we are using it too, so if another user has a problem, we need to fix it. The Sound Loom interface, available for free on my Web site (www.trevorwishart.co.uk), is updated about every 2–3 weeks, and the signal-processing software is updated approximately every six months.

Vassilandonakis: What’s the CDP community like?

Wishart: One important aspect of the CDP’s philosophy is the availability of these tools. There’s a natural tendency in large institutions to compete to have the best, the biggest, the fastest, the most advanced technical system. But music is by nature a social activity. Not everyone can afford or access these advanced systems, so we run the risk of creating a “ghetto” of superheroes showing off a technical prowess to which no one else can aspire. To ensure that the music itself develops, we need to share the tools and the knowledge about how to work with sounds. In this way, the musical language will develop, and the music itself (rather than merely the machinery) will evolve. At the same time, I’ve never regarded CDP/Sound Loom as a commercial exercise. I’m not trying to sell anything, so there are no flashy graphics to make the tools “attractive” in some marketing sense. The CDP is just very



powerful. You may take some time to get to grips with all its possibilities, but we’ve gone to great lengths to make everything as clear and accessible as possible.

Books

Vassilandonakis: Although you never held a long-term academic post, you have lectured widely and you have published two books that deal with sound composition, each from a different perspective. First came On Sonic Art (Wishart 1985).

Wishart: I had proposed the piece that became VOX 5 as a project to IRCAM in 1979. I was invited onto their course in 1981, and at the end of the course I was offered the opportunity to realize my piece. But then the entire hardware setup was changed, and it was five years before I got back to IRCAM. In the intervening years, I began writing the VOX cycle. Without access to IRCAM’s spectacular technology, I ended up writing On Sonic Art on the assumption that I would never be able to realize the ideas I had imagined. This book deals with the aesthetic implications of recording and sound-analysis technology for the future of music and ranges over many aspects of music making, including sound in space and the possibilities of the human voice.

Vassilandonakis: . . . a very interesting text, with ideas that are still very valid today, despite the intervening advances in technology. I would compare it to Xenakis’s Formalized Music, in the sense that it manifests a visionary approach—if I may say so—and it is inspired by science and technology, applied to art in new ways. Your second book, Audible Design, is a more specialized textbook of sorts on how to work on sound with a computer, regardless of what tools (software/hardware) one may have access to (Wishart 1994). How did it come about?

Wishart: Audible Design was written some years later as a craft text, so to speak—a how-to book, explaining all the then-existing signal-processing processes using diagrams (rather than mathematics), and discussing, with musical examples, what processes might be used with what sounds—for example, how you can time-extend a sound, and the advantages or drawbacks (artifacts) of different procedures. It also deals with more general issues, like the perception of “physicality” and “causality” of sounds, the nature of pitch (for example, does a portamento sound have a pitch in the traditional sense, even if it has a harmonic spectrum?), or the limits of temporal complexity in music.

Vassilandonakis: It has a highly stylized presentation, with hand-drawn diagrams—a very useful reference, and great sound examples.

Wishart: Incidentally, after parodying the cover and title of Thomas Morley’s 16th-century treatise for this book, I discovered that he, in turn, had borrowed it from an earlier book by John Dowland. I’m currently putting together a third book, which will deal with formal procedures in my sound-art pieces. I’ve been lecturing about these for some years, but this seems a good moment to assemble all this information between two covers. I don’t have a title for it yet (Sound Form is all I have come up with so far, which is much too boring), but I have much of the material for the book already written, and the sound examples prepared. I need a space when I’m not composing or performing in which to put it all together.

Next Steps

Vassilandonakis: Where do you see computer music heading? Is there a future outside institutional support?

Wishart: I don’t think of computer music as a category separate from music, so I have to answer the question “Where is music heading?” And I certainly can’t claim to predict the future. Technical advances will make many new things possible, particularly in the sphere of live interaction with sounds and performers, and more realistic capture of real-world sound environments. So long as there are strong musical ideas to match these technical developments, I welcome this, but I’m not so interested in technical development for its own

sake. Computers made possible for me certain musical possibilities I was already interested in. The musical ideas should always lead the technical realizations, or music becomes merely a kind of demo-medium for technical innovation. Where music goes depends on where composers decide to take it. It’s even possible, though highly unlikely, that composers and performers could decide that electronics in music is a dead end. However, I hope that the idea of sound transformation as a means of structuring musical form will become part of the repertoire of composers’ tools in the future.
I also think one legacy of the development of the personal computer, the recordable CD, and the Internet has been to democratize access to composing with sound. The CDP philosophy of making powerful tools available as cheaply as possible has, I hope, contributed to this democratization process; one can compose the music without being attached to an institution, though support from other musicians is still important if you want to really develop your skills, rather than just engage in solipsistic sound-doodling. Membership-led organizations, like the Sonic Arts Network in the UK, should allow good music to see the light of day, wherever it is produced.

Vassilandonakis: What is next for you, compositionally?

Wishart: I’m in the middle of a long-range project, part of a three-year residency, based at the University of Durham, and supported by the Arts Council of England, in which I’m combining my community-arts and workshop skills with children and amateur musicians, with location recording and software development to make a large piece reflecting the community and environment of Northeast England—the area stretching from Stockton and Darlington (of railway fame) on the Tees, to the Scottish border. In the first year, I went out making contacts, meeting people, and making recordings of individuals of that region in schools, old-people’s meeting-places, homes, and pubs. Because I am interested in natural speech (not staged or professional speech), I tried to record people in relaxed settings, rather than in a recording studio where people often feel they have to present themselves in a particular way (altering the quality of their speech). However, this meant that year two was taken up partly with detailed cleaning of these recorded sources.

Table 1. Trevor Wishart’s Works with Electronics

Year of Composition   Work                       Notes

Mixed-Media Works (Voice or Instruments with Electronics)
2000   Machine 2                 choir and machine sounds
1990   Dance Music               orchestra with click tracks
1989   VOX 6                     four amplified voices and tape
1988   VOX 4                     four amplified voices and computer-generated click tracks
1985   VOX 3                     four amplified voices and quad click-track tape
1984   VOX 2                     four amplified voices and stereo tape
1982   VOX 1                     four amplified voices and quadraphonic tape
1979   Pastorale Walden-2        flute, tuba, props, and stereo tape

Fixed-Media Works
2007   Angel                     stereo
2004   Globalalia                stereo
2002   Imago                     stereo
2001   The Division of Labour    stereo
2000   American Triptych         stereo
1997   Fabulous Paris            stereo
1994   Blue Tulips               stereo
1994   Tongues of Fire           stereo
1988   Two Women                 stereo
1986   VOX 5                     quadraphonic
1982   Anna’s Magic Garden       stereo
1980   Beach Double              stereo
1977   Red Bird                  stereo
1976   Fanfare & Contrapunctus   stereo
1976   Menagerie                 stereo
1972   Journey-into-Space        stereo
1970   Machine . . . an Electronically Preserved Dream   stereo

The aim is to produce a work of around one hour (probably in four separate “acts”) representing (and abstracting) the unique qualities of individual



speaking voices, but also organizing the collective of voices in many different ways. So the second very time-consuming task has been dividing the recorded speech into musically manageable phrases, and then cataloguing these in ways that capture their sonic characteristics (e.g., tempo and rhythm, pitch contour, vocal quality). I’ve also written various new “meta-programs” that combine existing CDP signal-processing tools to do more specialized tasks (e.g., emphasizing or replacing accentuation in a vocal phrase). I’m also thinking of producing a multi-channel work, but this must be portable to the schools and other places where I made the original recordings so the local community can hear the results of my efforts. At the same time, the piece must be musically comprehensible to people who speak not a word of English, and performable in grand venues with sophisticated sound-spatialization facilities. I’ve not fully resolved this issue yet. Problem solving is part of the excitement of composing for me.

Table 2. List of Recordings and Publications by Trevor Wishart

Commercially Available Recordings
Wishart, T. Forthcoming. Metafonie: Cinquanta anni di musica elettroacustica. DVD. Milan: LIMEN DVD01-AV001.
Wishart, T. 2008. Machine. London: Paradigm Records PD 25.
Wishart, T. 2008. Composition 3. Audio compact disc. Osaka: Flying Swimming fs 00005.
Wishart, T. 2007. Fabulous Paris—A Virtual Oratorio. Audio compact disc. Albany, New York: Electronic Music Foundation OT 103.
Wishart, T. 2006. Cultures Électroniques: Bourges 2006. Audio compact disc. Bourges, France: GMEB LDC 2781131/32.
Wishart, T. 2004. 50 Years Studio TU Berlin. Audio compact disc. Albany, New York: Electronic Music Foundation EM159.
Wishart, T. 2003. Das Dreidimensionale Möbiusband. Audio compact disc. Osaka: Flying Swimming fs 00001.
Wishart, T. 2002. ETC. Audio compact disc. Albany, New York: Electronic Music Foundation EM153.
Wishart, T. 2002. Journey-into-Space. Audio compact disc. London: Paradigm Records PD 18.
Wishart, T. 1999. Or Some Computer Music. Audio compact disc. London: Or Records.
Wishart, T. 1999. Voiceprints. Audio compact disc. Albany, New York: Electronic Music Foundation EM 129.
Wishart, T. 1992. Red Bird / Anticredos. Audio compact disc. Albany, New York: Electronic Music Foundation EM 122.
Wishart, T. 1990. Vox Cycle. Audio compact disc. London: Virgin Classics VC 7 91108-2.
Wishart, T. 1980. Miniatures. Audio compact disc. London: Cherry Red Records.
Wishart, T. 1979. Beach Singularity / Menagerie. Audio compact disc. London: Paradigm Records PD 03.

Publications
Gottstein, B., ed. 2006. Musik Als Scientia: The Edgard Varèse DAAD Guest Professors at the TU Berlin studio. Berlin: Pfau-Verlag Saarbrücken.
Wishart, T. 1994. Audible Design. York, UK: Orpheus the Pantomime.
Wishart, T. 1985. On Sonic Art. London: Hardwood.
Shepherd, J., et al. 1980. Whose Music: A Sociology of Musical Languages. New Brunswick, New Jersey: Transaction Publishers.
Wishart, T. 1977. Sounds Fun Book 2. London: Universal Edition.
Wishart, T. 1975. Sounds Fun: A Book of Musical Games. York, UK: Schools Council.
Wishart, T. 1974. Sun, A Creative Philosophy. London: Universal Edition.
Wishart, T. 1974. Sun, Creativity and Environment. London: Universal Edition.

Note: Compact discs are self-published by the composer (in some cases in collaboration with the Electronic Music Foundation, or EMF) and distributed via EMF, DMA, and Integrated Circuit Records.

References

Wishart, T. 1994. Audible Design. York, UK: Orpheus the Pantomime.
Wishart, T. 1985. On Sonic Art. London: Hardwood.

