Paradigms and Computer Music

Author(s): Andrew Gerzso


Source: Leonardo Music Journal, Vol. 2, No. 1 (1992), pp. 73-79
Published by: The MIT Press
Stable URL: http://www.jstor.org/stable/1513212
Accessed: 25-08-2014 23:15 UTC


THEORETICAL PERSPECTIVE

Paradigms and Computer Music

Andrew Gerzso

Computer music is about 25 years old. A great number of ideas have come out of the music-research community, some of which have found their way into the practical applications of commercial products as well. I would like to examine some of the ideas underlying the machines and software (mostly the latter) used in music, by tracing where these ideas came from, pointing out some of the conceptual problems that experience has revealed and, finally, making some suggestions for the future. In particular I will discuss a family of systems that have been used in the research community at large (the Music N languages [1]), two software systems that were developed at the Institut de Recherche et Coordination Acoustique/Musique (IRCAM) (Max and Patchwork) [2] and the sequencer, which is one of the main software tools used in the commercial music business [3].

The systems I discuss below were all produced by people of exceptional talent and imagination. Their accomplishments are milestones in the field of computer music. However, pointing out problems (which is always easy to do with hindsight) is one of the ways to develop better systems for music in the future.

PARADIGMS OF REPRESENTATION

Any time we use a system for making music, be it software, hardware or a combination of the two, we are dealing with an object that implicitly or explicitly embodies a paradigm for making music. What do I mean by a paradigm?

In its most intuitive meaning, a paradigm is a guiding conceptual model for how something is thought to exist or behave. Therefore, the paradigm underlying a musical system is a model of what music is thought to be, the way it behaves and how it is created. The paradigm can be quite (even excessively) loose or quite constraining. According to Webster's Dictionary, a paradigm is "an outstandingly clear or typical example or archetype" [4]. But it is Thomas Kuhn who has given a definition that is closer to our concerns here. He defines paradigms as "universally recognized scientific achievements that for a time provide model problems and solutions to a community of practitioners" [5]. What is interesting here is the notion of 'model ... solutions' that are 'valid for a time'. Some of the systems described below, in particular Music V, have been model solutions for many years.
ABSTRACT

Any system used for making music embodies a particular way of looking at music. These ways are called paradigms. The author defines the general notions of paradigm and representation and explains how these concepts apply to the design of computer-music systems. Several real systems are examined from the paradigmatic point of view.

A mature musical system reflects what it is we want to accomplish, so in a very direct way the software we use reflects what it is we want to do. How does this reflection take place? The software presents in some form or another the objects we want to manipulate and the kinds of manipulations that we want to make on those objects. What do I mean by objects and manipulations? For example, with a word processor, the objects are characters and words and the manipulations are copying, deleting, correcting the spelling, changing the font, etc. In music, the objects can be notes or chords that can be transposed, edited, played, etc. The objects can also be soundtracks that we can manipulate, by changing the order, for example. They can also be sound files that can be manipulated by mixing, filtering and so on.

All the objects that we are manipulating are represented in some form or another by the software we are using. The question of representation, then, is central to the design of any software system. What is it that we want to represent? How do we want to manipulate what is represented? These are the basic questions involved in the design of any software system.
Computer music is not computer science, but many of the concepts and tools that are currently used in computer music depend on the ideas that have been developed in computer science. This is especially true of the concept of representation. If we are going to represent something in a computer, the way we do this depends on our concept of representation. So, in fact, we are talking about the paradigms of representation. It would seem appropriate, therefore, to quickly summarize how the paradigm of representation has evolved over time.

Recent years have seen the rise of a new discipline called cognitive science [6]. In fact, it is more a collection of disciplines with common goals than a discipline in itself. According to an interesting recent account by Varela, Thompson and Rosch [7], thought about cognitivism has already gone through two stages and we are now on the threshold of the third. What is not clear in this account is whether each stage replaces or builds upon the previous one. In any case, central to this evolution is the debate concerning the paradigm for representation itself.

Andrew Gerzso (computer music researcher), IRCAM, 31, rue Saint-Merri, F-75004 Paris, France.
Manuscript solicited by Marc Battier.
Received 28 July 1992.


[Fig. 1 patch diagram: two OSC unit generators, with functions F1 and F2 and a two-note phrase at quarter note = 60, feeding an OUT unit. The corresponding Music V program:]

1 INS 0 1;
2 OSC P5 P6 B2 F1 P30;
3 OSC B2 P7 B2 F2 P29;
4 OUT B2 B1;
5 END;
6 GEN 0 1 1 0 0 .99 20 .99 491 0 511;
7 GEN 0 1 2 0 0 .99 50 .99 205 -.99 506 -.99 461 0 511;
8 NOT 0 1 2 1000 .0128 6.70;
9 NOT 2 1 1 1000 .0256 8.44;
10 TER 3;

Fig. 1. (bottom) A small Music V program, with (upper left) a graphic patch representation, which will produce a two-note musical phrase at tempo 60 to the quarter note. F1 is a function that describes the evolution of the amplitude, or loudness, of the sound over time. F2 is the function that specifies the timbre of the sound.

In the first stage of cognitivism, the key concept is the symbol. "The central intuition behind cognitivism is that intelligence ... so resembles computation in its essential characteristics that cognition can actually be defined as computations of symbolic representations" [8]. Cognition, therefore, is "information processing as symbolic computation, [or] rule-based manipulation of symbols" [9]. This stage is exemplified in many of the systems developed for computer music.

In the second stage of cognitivism, the key concept is emergence. Here, "the dominant theme shifts [away from the symbol] to the notion of emergent properties" [10]. This approach to representation has given rise to the connectionist strategies that are currently being explored, of which the most spectacular example is the neural network [11]. "The emergence of global states in a network of simple components" becomes the primary behavioral characteristic of this approach [12]. Instead of having objects that are explicitly represented by symbols that then are used in calculation, the emergent global states of highly interconnected simple components are the objects themselves. The objects that are represented come into being by themselves (or emerge) through interaction with the environment of the perceiver, rather than being objects 'out there' that are then represented (or simply transcribed) 'in here'. The simple input/output scheme of cybernetics is abandoned. In computer music this tendency is reflected, for example, in real-time performance systems, which are highly interactive.

In the third stage, the key concept is that of enaction. "Brains make memories, which change the ways we'll subsequently think. The principal activities of brains are making changes in themselves" [13]. This stage is characterized by the notion of processes that change themselves. Fundamental to this approach is the attitude "that we move away from the idea of a world as independent and extrinsic to the idea of a world as inseparable from the structure of these processes of self-modification" [14]. Thus, this approach inherits from the concept of emergence. "Instead of representing an independent world, [the processes] enact a world as a domain of distinctions that is inseparable from the structure embodied by the cognitive system" [15] (emphasis mine). This third stage is still in a very speculative state.

I have strayed a little here, but my purpose was to clarify the relationship between paradigms for music and paradigms for representation. In computer music we attempt to create paradigms that will guide the design of systems in which we try to represent the objects that we are interested in manipulating. The techniques used for representation are grounded in ideas about representation itself. At the same time, in the background so to speak, the paradigm for representation is evolving through the efforts of computer science and the cognitive sciences. Representation in the cognitive sciences has, then, gone through the three phases discussed above: the symbolic, the emergent and the enactive.

WHAT PARADIGMS DO WE USE?

The Music N Family

The field of computer music was first concerned with making sound. Max Mathews's Music V (circa 1960) was the first language for programming sounds. Later, John Chowning invented the frequency modulation (FM) synthesis technique (circa 1970) for the efficient calculation of rich musical timbres. In order to meet the demands that music makes on processing, Peter Samson of Systems Concepts created the Samson Box for the Center for Computer Research in Music and Acoustics (CCRMA) at Stanford University, California, and IRCAM, in Paris, developed several generations of real-time machines [16]. But without a doubt it was Mathews's work that has shaped at least three decades of work in computer music.

Central to Mathews's approach was the paradigm of music as mathematical function. Generally speaking, the simplest function one can imagine is y = f(x), where the value of y is the result of the action done by the function on the parameter x. In Mathews's text on Music V we find: "Since the essence of sound depends on the nature of the variations in pressure [in the air], we will describe a sound wave by a pressure function p(t)" [17]. The unit generators of Music V can be viewed, then, as parameterized functions that, when linked together, make a sound (see


Fig. 1). The linkage itself is based on the paradigm of the telephone switchboard, which uses patch cords to link together people who want to talk. The patch paradigm has been applied in innumerable hardware and software projects [18], and Music V led to the family of Music N languages created during the 1970s and 1980s. All these systems were extensively used at IRCAM, CCRMA, the Center for Music Experiments at the University of California, San Diego, the Massachusetts Institute of Technology (MIT) and hundreds of other places, in a multitude of artistic projects.
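The unit-generator patch can be sketched in ordinary code. The following Python miniature is an illustration of the paradigm, not Mathews's implementation: each unit generator is a parameterized function producing a stream of samples, and the patch cords become function composition (all names here, osc, out and sine, are hypothetical).

```python
import math

def osc(amplitude, frequency, wave_fn, sample_rate=44100, duration=0.01):
    """A toy OSC unit generator: a parameterized function yielding samples."""
    n_samples = int(sample_rate * duration)
    for i in range(n_samples):
        phase = (frequency * i / sample_rate) % 1.0
        yield amplitude * wave_fn(phase)

def out(*streams):
    """A toy OUT unit: mix the incoming streams into one block of samples."""
    return [sum(samples) for samples in zip(*streams)]

# A plain sine stands in for the stored waveform function (F2 in Fig. 1).
def sine(phase):
    return math.sin(2 * math.pi * phase)

# 'Patching' two oscillators into OUT, as the patch cords do in Fig. 1.
block = out(osc(0.5, 440.0, sine), osc(0.5, 660.0, sine))
```

Linked this way, the generators make a sound block exactly in the sense of y = f(x): parameters in, a stream of pressure values out.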
Intuitively, Mathews's functional paradigm views music as a constant flow of data (due to the emphasis put on signal-processing aspects), instead of as a series of events in time. The unit generators create this flow as a function of the parameters given to them. Note that events are created by starting and stopping the flow. In Music 10, for example, we find two kinds of variables: I_TIME and R_TIME. The first corresponds to variables to which we assign values only when a note begins. All the variables of this kind are grouped together in a block of code called 'I_ONLY code' [19]. The second corresponds to variables that are modified when each sample of sound is calculated. The purpose of I_TIME and R_TIME variables is to make the distinction between context and flow, respectively. The need for the I_ONLY code was one of the first signs of the shortcomings of the pure flow approach.
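The I_TIME/R_TIME split can be illustrated with a hedged sketch (hypothetical names; Music 10 itself is of course not a Python system): one block of code runs once, when a note begins, and another runs for every sample of sound.

```python
import math

def play_note(pitch_hz, dur_sec, sample_rate=44100):
    # I_TIME ('I_ONLY') section: evaluated once, when the note begins.
    phase_increment = pitch_hz / sample_rate   # fixed for the whole note
    n_samples = int(dur_sec * sample_rate)

    # R_TIME section: evaluated again for every sample of sound.
    samples, phase = [], 0.0
    for _ in range(n_samples):
        samples.append(math.sin(2 * math.pi * phase))
        phase = (phase + phase_increment) % 1.0
    return samples

note = play_note(440.0, 0.01)
```

The I_TIME block supplies the context of the note; the R_TIME loop is the flow.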
I have already pointed out that the paradigm of music as mathematical function excludes from the beginning the concept of musical event. The evaluation of a mathematical function itself does not even imply time. The function is evaluated; that is all. But discrete events are important in music and, more generally, in any context where the question of creating a language of expression is a concern. In Charles Hockett's interesting analysis of the prerequisites for having a language of any kind, discreteness (in the mathematical sense of a clearly delimited thing) is extremely important [20]. This discreteness allows for the creation of a differentiated vocabulary and for the possibility of a grammar through the combination of discrete objects.
Fig. 2. Schematic diagram of setup for a performance of Dialogue de l'Ombre Double by Pierre Boulez. Track 1 of the tape is sent to the VCA control unit in the audio console. The music is then spatialised among six speakers using both the timing information coming from track 2 of the tape recorder and the spatial configurations memorized in the control unit.

Another problem is that this approach explicitly makes a distinction between the generation of a sound (via the mathematical function) and its control (via the parameters). This distinction has its roots in the nature of the mathematical function itself but also in the way a traditional musical score is understood. For example, consider a quarter note played at a tempo of 60 to the quarter note. According to current professional standards we need 44,100 numbers (called samples) per channel to represent this second of sound. The pitch of this note can be represented with one number, say 440, for example. So, the generator takes one parameter, the pitch number, and turns the result into 1 sec of sound that is 44,100 numbers long. But does this produce a satisfying result? We now know that a sound that is musically rich is made up of many frequencies involving tiny variations over time. These variations must be specified in some way. Therefore, the number of parameters needed to represent the sound then starts to approach the number of samples in the signal, and the dividing line between generation and control becomes more and more fuzzy.

Still another problem is that all the parameters are implicitly put on an equal footing from a musical point of view. There is a parameter for pitch and another for timbre. Moreover, both categories (pitch and timbre) are implicitly assumed to correspond to a 'scale' or continuum of some sort. This is true for pitch, of course, but not for timbre. In fact, a paradigm for timbre that would allow us to establish some sort of classification method for musical purposes still does not exist. Increasingly, timbre is being described intuitively by musicians in terms of types of processes or types of behavior [21]. For example, we can analyse sounds and look for periodic (as opposed to chaotic) behavior [22]. When a more specific and complete description of these types emerges we will be closer to a better way to represent timbre.

What paradigm of representation does Music V correspond to, assuming Varela's analysis is correct? Without a doubt, the one corresponding to the first phase, since Music V processes a continuous flow of symbols: the samples of sounds themselves.



Fig. 3. The Max program for Dialogue de l'Ombre Double by Pierre Boulez. Each one of the little boxes labeled 'patcher' contains subprograms that control the sound spatialisation. The square items with the circles inside are called 'bangs', which are trigger buttons for setting off a subprogram. The long vertical rectangles are potentiometers, which are used interactively to set loudness or speed.

Max

The Max programming language, named in honor of Max Mathews, was initially conceived as a programming environment for creating real-time musical applications. The most common applications output MIDI data for controlling synthesizers. In its most recent version, which runs on the IRCAM Musical Workstation [23], it operates not only at the control level but at the sample level as well. All programming in Max is done graphically. Max has been extremely successful in both the research and commercial music communities [24].

Max is based on the idea of objects that send each other messages. Max interacts with the outside world by accepting and sending MIDI information. The objects, which are themselves operators or functions made up of Max operators, are interconnected in a patch-like fashion. Messages intermittently travel along the patch lines. Instead of having a constant flow of information through the network of patch lines, as in the Music N family, information flows only when a message is sent [25]. Another major difference is that in Max a large variety of messages (numbers, lists, 'bangs' [trigger messages], etc.) travel between operators, whereas in the Music N family it is numbers representing signals (air-pressure variations) that travel between the operators.

Fig. 4. Schematic diagram of equipment setup for a performance of Explosante-Fixe by Pierre Boulez. The notes from the solo flutist are sent to a Max program containing the score follower. Depending on the actions to be taken at each cue in the score, the score follower then issues commands for the control of the samplers, spatialisation (Matrix 32) or sound transformation (second Max program controlling the 4X real-time digital signal processor).

One way to describe Max is to say that it is a patch language with delays. If there were no delays, then everything would be executed as quickly as possible. The delays are either explicitly programmed or induced by the system when it is waiting for an external signal to proceed. The temporal structure of the music emerges as a result of these delays.
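The 'patch language with delays' description suggests a simple model: messages flow only when sent, and explicit delays give the music its temporal structure. The toy scheduler below is an analogy in Python, not Max's actual engine; metro and the half-second delay are invented for the illustration.

```python
import heapq

class Scheduler:
    """Toy event scheduler: delivers messages in time order."""
    def __init__(self):
        self.queue, self.now, self.seq = [], 0.0, 0
    def send(self, receiver, message, delay=0.0):
        heapq.heappush(self.queue, (self.now + delay, self.seq, receiver, message))
        self.seq += 1
    def run(self):
        while self.queue:
            self.now, _, receiver, message = heapq.heappop(self.queue)
            receiver(self, message)

log = []

def metro(sched, count):
    # A metronome-like object: each 'bang' re-sends itself after a delay.
    if count > 0:
        log.append((sched.now, 'bang'))
        sched.send(metro, count - 1, delay=0.5)

sched = Scheduler()
sched.send(metro, 3)   # without the delays, all three bangs would fire at once
sched.run()            # bangs land at 0.0, 0.5 and 1.0 seconds
```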
I have had two occasions for using Max in the context of a musical production [26]. The first was in the realization of Dialogue de l'Ombre Double (1985) by the French composer Pierre Boulez, in which a live clarinet dialogues with a clarinet on tape that is spatialised using an audio matrix [27] controlled by a Max program (Figs 2 and 3). The tape has two tracks. The first contains the recorded music that is to be spatialised, and the second contains SMPTE code that is sent to the Max program.

The second composition, also by Boulez, was Explosante-Fixe (1991) for chamber ensemble, solo MIDI flute [28], digital signal processor [29], audio matrix and samplers [30] (Fig. 4). Here the solo flute sends MIDI information to a Max program containing a score follower (Fig. 5). The score follower determines where the flutist is in the score and triggers events (sound transformations, spatialisations or sampling) associated with particular notes in the score. The main advantage of score following resides in the accurate musical synchrony that is made possible. An example of the burden put on this system by Explosante-Fixe is that, in the space of 6 min, the flutist triggers some 250 cues.
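The score-following idea can be sketched briefly. The class below is a hypothetical miniature, not IRCAM's 'follow' module: it matches each played pitch against a stored list of expected notes, looking a few notes ahead so that a wrong or skipped note does not derail the following (the window parameter is an assumption of this sketch).

```python
class ScoreFollower:
    """Toy score follower: matches played notes against a stored score."""
    def __init__(self, score, window=3):
        self.score = score     # expected pitches, in order
        self.pos = 0           # index of the next expected note
        self.window = window   # how far ahead to look for a match

    def hear(self, pitch):
        # Search the next few score notes for the pitch we just heard.
        for offset in range(self.window):
            i = self.pos + offset
            if i < len(self.score) and self.score[i] == pitch:
                self.pos = i + 1
                return i       # the cue number that fires
        return None            # unrecognized note: no cue fires

follower = ScoreFollower([60, 62, 64, 65, 67])
cues = [follower.hear(p) for p in [60, 62, 65, 67]]   # the player skips 64
```

Here the player skips the third note, and the follower resynchronizes at cue 3 instead of stalling.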
Two related problems were encountered during these two production projects, particularly during Explosante-Fixe. The first concerns what I call the 'state problem'. At any given point in time, one cannot determine what state a particular Max program is in. Since the overall paradigm stresses message flow and continuity (as opposed to


states), it is not surprising that we should encounter this problem. No provision exists for explicitly representing a particular state of a Max patch. This can be troublesome whenever the initial state of a program is important. Practically speaking, I solved the problem by writing extensive initialization code. In the commercial version of Max, an ad hoc solution was found in the form of 'presets' [31].

Fig. 5. Schematic diagram of the score follower. Notes coming in from the flute played by the instrumentalist are compared by the score follower program with the score stored in the computer memory. The system must 'look' to the left and right in the score to catch mistakes on the part of the instrumentalist.

Related to the state problem is the problem regarding context. If one simply plays a composition from start to finish, there is no problem. But making music involves rehearsals that, in turn, require starting at different places in the score of a piece. Most music is made up of a number of processes that overlap in time, so starting at a particular point in a piece requires knowing what state one is in at that point, as well as what context one is starting in. We need to have a simple mechanism for determining what processes were already active and what processes should be initialized and started.

The context problem is found again in the 'follow' module that is responsible for score-following in Max. As I have pointed out, this module was absolutely crucial to the overall strategy of having the flutist be master of all the different types of events triggered in real time, and it worked extremely well. However, the score follower looks for notes in a list without taking into consideration tempo or metric context [32]. The main advantage of having such a context sensitivity would be that prediction would become possible. A system's knowledge of the tempo and the current rhythmic value would be sufficient for predicting when the next event is most likely to happen. Preparation (for sending large amounts of data or for calculating results just before they are needed) is then possible, instead of the mad dash that results from the trigger strategy.

Max also maintains the sound production/sound control dichotomy found in the Music N languages. This is true in whatever version of Max one uses, be it the Macintosh version or the IRCAM/NeXT version.

One final point has to do with the unavailability of an ordinary programming language (such as C, Basic, LISP, etc.) easily accessible from Max. The present strategy (in the version for the Macintosh) forces the user to create 'code resources' (a concept special to Apple Computer), which are chunks of executable code generated by a compiler outside of Max. One has to be a programming expert to handle these code objects easily.

One could argue that Max embodies the symbolic paradigm of representation. Like Music V and its children, Max processes a flow of symbols, although the vocabulary of symbols has become much larger since it can include both samples at the level of sound production and more complex messages at the control level. But in Max one can also very easily create musical automatons, for example, whose interesting behavior is a priori quite unpredictable. In this respect Max embodies, in a weak sense, the emergent paradigm as well.

Patchwork

Patchwork is a programming environment that resides in a Common LISP interpreter. It was designed as a toolbox for computer-aided musical composition. Patchwork can be viewed at several levels. At the most elementary level Patchwork can be used as an environment for programming graphically in Common LISP. (As opposed to the situation in Max, going back and forth between LISP and Patchwork is very easy.) At a higher level, which makes use of the Esquisse Musical Toolbox [33], the user can create compositional algorithms that produce musical material destined for score sketches or that can be used for creating data files that will be input into a sound synthesis program, for example (see Fig. 6). From these algorithms one can easily create entire personalized libraries of musical functions. The functions are represented on the screen as little boxes that contain a number of inputs (corresponding to the parameters of the function) and one output (a result of the fact that a LISP function always returns one value). The outputs of functions can be connected to the inputs of other functions via patch lines, hence the name Patchwork. We can already see that there is also a strong conceptual tie between Patchwork and LISP.

Patchwork, especially the Esquisse Musical Toolbox, is useful as long as one uses a functional approach to the generation of music. A simple example of the functional approach is chord transposition. The three parameters given to the function are the initial chord, the interval and the direction of transposition (up or down). The result is the transposed chord. A more sophisticated example is chord interpolation. The parameters here are the two chords between which one will interpolate and the number of steps in the interpolation. At each step the function calculates a chord whose notes are obtained by interpolating between corresponding notes in the two initial chords. But what happens when the problem to be solved cannot be handled by the functional paradigm? A typical request on the part of composers involves the use of constraints. For example, perhaps one would like to obtain a family or a collection of chords that obey constraints such as interval content, register or density. Programming experts will point out that constraint programming (which is derived from the paradigm of logic programming [34]) can be done in LISP very easily. This is true, of course (the experts are always right), but it avoids the real issue. The important point is that when programming is needed, the environment should provide a toolbox of directly accessible programming paradigms (functional paradigms, constraint paradigms, etc.).
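The three examples named above, transposition, interpolation and constraint selection, can be sketched as plain functions (a hypothetical Python miniature; Esquisse itself works on LISP chord objects). Pitches are MIDI note numbers and intervals are semitones.

```python
def transpose(chord, interval, up=True):
    """Functional paradigm: initial chord, interval and direction in; chord out."""
    step = interval if up else -interval
    return [note + step for note in chord]

def interpolate(chord_a, chord_b, steps):
    """At each step, interpolate between corresponding notes of the two chords."""
    assert len(chord_a) == len(chord_b) and steps >= 2
    return [[round(a + (b - a) * k / (steps - 1)) for a, b in zip(chord_a, chord_b)]
            for k in range(steps)]

def obeying(chords, max_span):
    """A brute-force stand-in for constraint selection: keep only the chords
    whose register span satisfies the constraint."""
    return [c for c in chords if max(c) - min(c) <= max_span]

seq = interpolate([60, 64, 67], [62, 62, 62], 3)   # C major collapsing onto D
narrow = obeying(seq, 5)
```

The first two fit the functional paradigm directly; the third hints at why constraints sit awkwardly in it: obeying filters results after the fact instead of letting the constraint drive the generation.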


Fig. 6. (left) An example of a Patchwork program of an interpolation between the chord on the (upper) left and the one-note chord on the (upper) right. The interpolation is calculated by the module labeled 'interchord'. (bottom) The result in standard music notation is shown.

Because of the strong tie between Patchwork and LISP, Patchwork corresponds to the first of Varela's three phases. This is not surprising, since LISP is a language that stresses symbolic calculation. LISP, by the way, stands for LISt Processing and, by implication, the processing of symbols in a list.

My survey of systems thus far reveals a heavy dependence on the patch, or functional, paradigm. What is remarkable is that this paradigm has been useful for so long in so many different contexts. But this dependence underlines again the fact that the first few decades of computer music were characterized by the search for systems for making sound. If we now feel inadequacies cropping up it is probably because of a shift of interest towards other areas, such as the act of musical composition itself. Despite the fact that composition is a highly personal affair, composers are realizing that common concerns do exist. The challenge is to formalize this common ground.

Early efforts at formalizing musical composition have taken traditional music or, rather, musical history, as a sort of progress to be formalized. The basic idea is that one first formalizes early counterpoint, then harmony, then fugues, etc., until the present day is reached. While at a certain level this can be an interesting exercise for musicologists, it is of little value to the composer who is more interested in novelty and not simply in extrapolating from the past. This approach is troublesome primarily because it is too anchored in history, and it assumes that music is something to be discovered in the same way that the scientist discovers the principles of electromagnetism. Music is not discovered once and for all. It is invented over time. Therefore, formalization should not get in the way of creativity. A more useful strategy should be based on the incorporation of the elements of the musical vocabulary as they emerge in musical practice, the conceptual and graphical representation of these elements in as many ways as needed, the possibility of structuring and restructuring the elements in novel ways and, finally, the acoustic expression of the complex elements using different sound-production techniques.

The Sequencer

Sequencers are the backbone of musical production in commercial studios. The paradigm of the sequencer is based on the tape recorder. This is not surprising, given the fact that making recordings is essentially what commercial music is all about. In the most basic approach, there is a master sequence that controls subsequences, each of which has a number of tracks associated with it. The information on the tracks drives the 'instruments' (the synthesizers, drum machines, sound files, etc.), the sounds of which are then recorded on a multitrack digital or analog tape recorder.

The sequencer embodies very conventional ideas about music. A fixed orchestra and a totally determined score are two underlying assumptions. The omnipresent loop is related to musical forms that contain sections that are systematically repeated. The idea of real-time generation of a score is absent. Also absent is an approach to musical form that views a performance as one path chosen from many possible ones in a score [35].
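The master-sequence/track model just described can be captured in a few lines. This is a hypothetical data-model sketch, with an assumed length of four beats per pass; real sequencers are far richer.

```python
class Track:
    """A track: timed events that drive one 'instrument'."""
    def __init__(self, instrument, events):
        self.instrument = instrument   # e.g. a synthesizer or drum machine
        self.events = events           # list of (beat, message) pairs

class MasterSequence:
    """The master sequence: tracks plus the omnipresent loop."""
    BEATS = 4                          # assumed length of one pass (this sketch)
    def __init__(self, tracks, loop=1):
        self.tracks, self.loop = tracks, loop
    def render(self):
        timeline = []
        for repeat in range(self.loop):
            for track in self.tracks:
                for beat, message in track.events:
                    timeline.append((repeat * self.BEATS + beat,
                                     track.instrument, message))
        return sorted(timeline)        # a totally determined 'score'

seq = MasterSequence([Track('drums', [(0, 'kick'), (2, 'snare')])], loop=2)
timeline = seq.render()
```

The fixed orchestra (one instrument bound to each track) and the fully determined timeline drop straight out of the model, which is exactly the conventionality noted above.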
From the composer's point of view, it would be useful to be able to make local modifications of musical material


by calling on a personalized toolbox of


IUnCtlOnS.
.llS lmp les detlnlng a strategy for importing functions from outside of the sequencer. A good model to
follow is HyperCard. If the user finds
intuitive programming with Hypertalk
inadequate, then he or she can create
functions using languages such as C or
PASCAL and import them as external
commands (XCMDs). There is a standard interface designed so that the
XCMDs can communicate easily with
the HyperCard internal data structures,
which are invisible to the user.
Now, of course, tape recorders are disappearing, and once they are replaced by hard-disk systems or something else it will be interesting to see how the sequencer paradigm changes. No doubt the track concept will disappear and be replaced by something less rigid and more fragment-like in nature. Interaction (on the control or signal level) between tracks might be interesting. A toolkit for building sequencers, which would include tools for writing and generating music (using a local programming language), might be more useful. Dynamic 'orchestras' are another possibility.
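A dynamic 'orchestra', in which instruments may join or leave while the music plays, might be sketched as follows. Everything here is hypothetical, a speculative illustration of the idea rather than a description of any existing system.

```python
class DynamicOrchestra:
    """An orchestra whose membership can change during performance,
    in contrast with the fixed orchestra the sequencer assumes."""
    def __init__(self):
        self.instruments = {}

    def add(self, name, play):
        self.instruments[name] = play

    def remove(self, name):
        del self.instruments[name]

    def play_all(self, note):
        # Every current member responds to the same note.
        return {name: play(note) for name, play in self.instruments.items()}

orch = DynamicOrchestra()
orch.add("flute", lambda n: n + 12)   # sounds an octave up
orch.add("bass",  lambda n: n - 12)   # sounds an octave down
first = orch.play_all(60)             # both instruments respond
orch.remove("bass")                   # the orchestra changes mid-piece
second = orch.play_all(60)            # only the flute remains
```

Even this toy shows what the fixed-orchestra assumption excludes: the set of sounding voices becomes part of the compositional material itself.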
SOME MUSTS FOR THE FUTURE

Computer music needs new paradigms. Time, context, state, musical writing (as opposed to sound generation), dynamic musical languages, toolboxes of computational paradigms, and the construction and reconstruction of representations are but a few of the important concerns for the future.

Important to these concerns are what Marvin Minsky calls "Level-Bands", which involve making "strong connections at a certain level of detail . . . [and making] weaker connections at higher and lower levels" [36] (emphases mine). Too often in computer music, systems are provided from which, in principle, more complex systems can be created. In reality, the complex level is simply never achieved, because the level of detail one starts with is simply too low.

Lastly, general systems that try to do everything should be forgotten and replaced by communities of programs that can live independently and at the same time cooperate and interact with each other. Above all, computer music should quit living on borrowed paradigms.

References and Notes

1. Music V was invented circa 1960 by Max Mathews. The generations of languages that followed include Music 4BF by Hubert S. Howe, Jr., Music 10 by Tovar, Csound by Barry Vercoe, 4CED by Curtis Abbott, CPAT by Andrew Gerzso, and others.

2. IRCAM is part of the Centre Georges Pompidou art center in Paris, France. Max was invented by Miller Puckette at IRCAM. Sold since 1991 as a commercial product by Opcode Systems (Palo Alto, California), Max runs on Apple computers. Another version of Max runs on the IRCAM Musical Workstation, which uses the NeXT computer. Patchwork is the result of an international project principally involving Mikael Laurson (the inventor) of Finland and Jacques Duthen and Camilo Rueda at IRCAM. I managed this project in its last phase of development, which led to an exportable version of Patchwork in Common LISP.

3. The most widely used sequencers are Studio Vision by Opcode Systems; Digital Performer by Mark of the Unicorn; and Cubase by Steinberg. All three run on the Macintosh family of computers from Apple Computer.

4. Webster's Ninth New Collegiate Dictionary (Springfield, MA: Merriam-Webster, 1984).

5. Thomas S. Kuhn, The Structure of Scientific Revolutions (Chicago, IL: Univ. of Chicago Press, 1962) p. vii.

6. According to Webster's International Dictionary, 'cognition' is "the act or process of knowing including both awareness and judgment".

7. Francisco Varela, Evan Thompson and Eleanor Rosch, The Embodied Mind (Cambridge, MA: MIT Press, 1991).

8. Varela, Thompson and Rosch [7] p. 40.

9. Varela, Thompson and Rosch [7] p. 42.

10. Varela, Thompson and Rosch [7] p. 87.

11. Neural networks are being applied to many areas such as character recognition, bird-song learning, pitch detection, etc.

12. Varela, Thompson and Rosch [7] p. 99.

13. Marvin Minsky, Society of Mind (London: Heinemann, 1985) p. 288.

14. Varela, Thompson and Rosch [7] p. 139.

15. Varela, Thompson and Rosch [7] p. 140.

16. The 4A (1976), 4B, 4C and 4X (1980) real-time digital signal processors were designed by Giuseppe Di Giugno at IRCAM. The IRCAM Musical Workstation (1991) was created by a team headed by Eric Lindemann.

17. Max V. Mathews, The Technology of Computer Music (Cambridge, MA: MIT Press, 1969) p. 2.

18. In Donald Buchla's analog systems, modules are patched together. The patch languages 4CED and CPAT were designed for IRCAM's 4C and 4X processors by Curtis Abbott and myself, respectively. It is interesting to note that Miller Puckette's Max was initially called Patcher.

19. Marc Battier, "Le langage de synthèse MUSIC 10", IRCAM internal document, 1980, pp. 2-6. See also Tovar, "The Music Manual", CCRMA internal document.

20. Charles Hockett, "The Problem of Universals in Language", in Joseph Greenberg, ed., Universals of Language (Cambridge, MA: MIT Press, 1963) pp. 1-29.

21. The notion of process is quite present in the minds of French composers. In a recent lecture at the Collège de France, Pierre Boulez suggested that composition should be viewed as the organization of musical processes. The notion of process on the level of sound production is a major preoccupation of composers such as Tristan Murail and Marc-André Dalbavie (both of France).

22. This approach to sound analysis and re-synthesis was developed by Xavier Serra at CCRMA.

23. The IRCAM Musical Workstation is built on the NeXT computer, which can contain up to three IRCAM digital signal-processing cards, each of which makes use of two Intel i860 chips.

24. Since it appeared on the market in the early part of 1991, dozens of musical applications have been written.

25. This is true only of the version of Max that does not generate sound signals (which are continuous).

26. I used the pre-commercial version of Max called Patcher.

27. The IRCAM MATRIX 32 was designed by Michel Starkier and Didier Roncin.

28. The MIDI flute is an IRCAM invention by Michel Starkier. Information from the key positions and a microphone placed near the embouchure is turned into MIDI note-on/note-off messages.

29. The IRCAM 4X processor.

30. The AKAI S1000.

31. This and many other imaginative and practical features were invented by David Zicarelli.

32. This was achieved in earlier work done at IRCAM on score-following, principally by the flutist Lawrence Beauregard of the Ensemble InterContemporain (with the help of Xavier Chabot) and Barry Vercoe (inventor of Csound) of MIT.

33. This is the result of the collaboration of a number of people, principally Pierre-François Baisnée, Jean-Baptiste Barrière, and the composers Marc-André Dalbavie, Magnus Lindberg and Kaija Saariaho.

34. PROLOG is an example of a language using this paradigm.

35. The basic ideas of this approach are found in Umberto Eco's L'Oeuvre Ouverte.

36. Marvin Minsky, Society of Mind (London: Heinemann, 1985) p. 86.