
The M-BASE blog
Timbral Improvisation
Posted by mbase on Wednesday - August 1, 2007 9 Comments
Music is one of the various sonic projections of who we are as humans (language being
another one). Although all sonic projections are symbolic, music represents the more emotive
side of human experience. The basic emotions of happiness/sadness, attraction/repulsion,
courage/fear, love/hate, anticipation/despair, affection/anger, pleasure/pain (and any of
these can be combined with surprise) can all be expressed using music. There are also
emotions that serve as a subset of these, for example:
Love (affection, longing, lust)
Joy (cheerfulness, contentment, pride, optimism, relief, anticipation, hope)
Anger (irritation, rage, hate, dislike, frustration, disgust, envy, torment)
Sadness (disappointment, shame, neglect, sympathy, hopelessness)
Fear (horror, nervousness)
The systematized expression of these emotions is what can be called the beginning of a
language. Obviously the symbols in a language can go further than these basic expressions.
It has been said that the word language is related to the word tongue (via the Latin lingua). Strategic
interactions of the tongue with other components of the vocal tract, particularly the teeth and
the palate, produce the living synthesis of human speech. I believe that languages and music
initially developed out of the same root,
patterns of gestures and sounds, and eventually the intonation and articulation of the sounds
were more specifically described and developed. Eventually various methods of transcription
of these sounds and gestures resulted in written notation systems, including phonetic
transcription for spoken languages and pitch and rhythmic transcription for music.
However, whereas the guiding principle behind the development of spoken languages seems
to have been the communication of ideas, constrained by the physical options available to the
human voice, the development of music seems to have been linked both to the need to
communicate ideas and to acoustic considerations. We need to keep in mind that much
instrumental music has traditionally been performed as an accompaniment to vocal music.
When the spoken word and the musical sounds are both present, there is a greater chance
of the listener associating the musical sounds with the ideas being expressed.
Early in the history of the spontaneously composed music in the United States (the
Armstrong-Parker-Coltrane continuum), and probably in most music, there seemed to be more
emphasis on expression; therefore things like timbre and phrasing were the most important elements.

However, rhythm and pitch (when and how high/low) are the basic elements of any music
system.
I have spent most of my career concentrating more on the rhythm/pitch/form aspects of music
than on timbral considerations. I have certainly not ignored timbre, but I have not really delved
into a systematized study of it either. The musicians that I favor tend to be those who have
highly developed and specific rhythmic and tonal languages. With these musicians I feel
that the timbral elements are aids for expressing the sophisticated rhythmelodies. Of course
there are those who completely disagree with me, and that is why their music tends
to run in directions that stress timbral qualities. For myself, I prefer a more subtle
expression of timbre.
I feel strongly that the younger generation involved in creative music today is
forgoing the detailed rhythmic and melodic developments demonstrated by the older
masters (which take an incredible amount of concentration to develop) in favor of more
effects. These trends tend to pendulum back and forth, as each generation reacts to the
excesses of the previous generation by moving in the opposite direction. However, the
concept of Orchestration (as distinct from composition) is largely concerned with the timbral
combination of instrumental (and sometimes vocal) sounds. The preeminent Danish composer
Per Nørgård once told me that the composition of a piece takes him a short amount of time,
but the orchestration and arranging can take years. He thus distinguished timbral concerns
from composition proper, using timbre more as a means to amplify his expression.
Peace,
Steve Coleman
Filed under Philosophy of Music · Tagged with improvisation, jazz, mbase, music, music theory,
philosophy, steve coleman

Comments
9 Responses to Timbral Improvisation
altered7th says:
Wednesday - August 1, 2007 at 20:40
"I feel strongly that the younger generation involved in creative music today is
forgoing the detailed rhythmic and melodic developments demonstrated by the older
masters (which take an incredible amount of concentration to develop) in favor of more
effects."
From roughly the 60s onward we see the dichotomy of the radically new emerging to spawn
neo-isms that completely reject the radical. This type of schism does nothing to advance
whatever art field it occurs in, as one group strives to totally reject its history and the other
embraces it all too much. The great innovators are those who are able to temper their
radical ideas within the larger framework of their art's history and traditions. It's only with
this synthesis of old with new that styles can evolve. Coltrane respected and extended a
soprano playing tradition that began with Sidney Bechet; Cecil Taylor was a fine bebop
player in the Monk tradition, but he has extended it and synthesized it with other elements
into something totally new; Charlie Parker's "Au Privave" very cleverly reworks the main
motif of a Bach organ prelude, a synthesis of something very old with something very
new. And so it goes on.
When young players obsess over mimicry of one dimension of an older master's sound
(I'm assuming you're talking about sax players playing harmonics like Coltrane?) it's
akin to asking the wrong question about Coltrane and his greatness. It's relatively easy to
figure out what Coltrane played. The deeper and more revealing question to ask is
why he played what he played. Part of the answer lies outside of Coltrane, in the context
of his times and within the contexts of jazz's traditions.
I had the good fortune a dozen or so years ago to attend a workshop with Cecil Taylor. A
group of young talented players performed for him. "What did you play and why did you
play it?" Responses ranged from "free improvisation" to more esoteric answers like
"I envisaged a corridor and rooms in my head," no doubt thinking Cecil
sought to hear something mystical. He repeated the question several times and the
answers didn't progress past the "what did you play?" part of the question.
Reply
Brandon says:
Thursday - August 16, 2007 at 20:05
I like the idea you present here, but I think it is slightly incongruous with psychology's
evaluation of music and language. While both are certainly symbolic and representative, and
as a result expressive, cognitive science and neuropsychological studies indicate
that language is a distinct, evolutionary, survivalist adaptation, while music is something
less well understood and much later developed. A leading researcher in Harvard
University's cognitive neuroscience lab holds the opinion, based on experimental results,
that as we like cheesecake, so do we like music. That is, in the way that cheesecake
stimulates senses which are primarily used for survival purposes (our sense of taste,
normally reserved for sorting out what we eat), so does music stimulate our auditory
senses and cause activation in language areas of our brain, suggesting that it is exercising
these symbolic processing pathways in a novel way. Full elaboration on this subject
requires more space, but I thought I'd comment, because it seemed you might find
interesting some of the studies published on this topic. Of particular interest
would be the works on the psychology of music by Peter Cariani, as a starting point.
Great blog, by the way; I enjoy reading into the deep technical issues you include.
Reply
james60 says:
Thursday - July 23, 2009 at 21:55
I am guessing you are referring to Pinker, who has an incredible string of disproven
ideas going. I am also guessing Mr. C has read the studies and feels more
comfortable with Steven Mithen's conjectures.
Reply
mbase says:
Thursday - July 23, 2009 at 23:31

Hmm, I have no idea what you are talking about. I am unfamiliar with Pinker and
Mithen. Can you be more specific? First of all, whose post are you responding to? Is
what you are writing about related to music?
james60 says:
Friday - July 24, 2009 at 15:27
Brandon,
I am assuming you are referring to Steven Pinker's "music as epiphenomenon of
language" idea. Steven Mithen has written a book that posits a line of human cognitive
development that places music PRIOR to language. It is a direct refutation of Pinker
and yourself, and more in line with Mr. Coleman's idea. The book is called The Singing
Neanderthal, and the publisher's blurb on Amazon gives a nice synopsis. Peace
Reply
Jonathan says:
Friday - August 17, 2007 at 14:51
Steve,
Thanks for sharing your thoughts and ideas on this excellent blog.
"I feel strongly that the younger generation involved in creative music today is
forgoing the detailed rhythmic and melodic developments demonstrated by the older
masters (which take an incredible amount of concentration to develop) in favor of more
effects."
Due to a failure of music education by parents, educational institutions, media, etc., many
people can only hear sounds that are physically present; most cannot hear sounds
that are not physically present. The consequences of this: people do not audiate a tonality,
so they cannot anticipate or predict the next chord in a progression. They do not audiate
meter, so they cannot anticipate or predict what is going on rhythmically. They are not
perceiving syntax in music, and thus cannot experience or participate in music with
comprehension.
Without tonality and meter, the elements that remain to keep listeners' attention are the
lyrics of a song (because most parents, schools, and media nurture and encourage children
with spoken language) and, of course, interesting sound effects.
Reply
Chris says:
Saturday - August 9, 2008 at 15:07
It's definitely true that timbre has become a more critical part of musical expression in the
past century. This has not necessarily been at the expense of rhythm and melody. It is a
development made possible by amplification and recording technologies, and is part of the
general emergence of the musical personality in jazz, popular and even symphonic music.
Think about Louis Armstrong. His technique in singing and horn playing relies on a
distinctive and innovative use of timbre, as well as enormous rhythmic, melodic and
harmonic mastery. It would be impossible to adequately notate much of what
Armstrong performs using traditional means. His expression is based heavily
on timbral "tricks," if you will, but in deeply expressive and not gratuitous ways.

Only through recording of his performances can we appreciate and, in some measure,
emulate his achievements. And the sum of his achievements is not just a body of work, but
the expression of a unique and socially critical individual personality.
Reply
mbase says:
Tuesday - January 27, 2009 at 2:00
Agreed, and to some extent what you say is true of ALL music, not just the continuum
that Armstrong was a part of. But I don't know if any of this is because of today's
technology.
I'm going to digress a bit here, as it does not bear totally on what you were
saying. But I will say that Armstrong is not the only spontaneous composer
whose music is difficult to notate; you can say this about most of the greats in this
music. There is no notation at all for emotional projection, timbre and context, and the
notation for rhythm is inadequate, as it only addresses the rhythms that are on what I
call "the grid," meaning discrete quantized values. The rhythmic notation of music
that is spontaneously created is basically a kind of averaging, notating the closest value
according to whoever is doing the interpretation.
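The "averaging" described above can be sketched in a few lines of code. This is a hypothetical illustration, not anything from the post itself: the onset times and the sixteenth-note grid resolution are invented examples, chosen only to show how notation snaps expressive timing to the nearest grid value.

```python
# Notating a spontaneously played rhythm means snapping each onset to the
# nearest grid value, discarding the expressive micro-timing in the process.

def quantize(onsets, grid=0.25):
    """Snap each onset time (in beats) to the nearest multiple of `grid`."""
    return [round(t / grid) * grid for t in onsets]

# Onsets as actually played, slightly "off the grid" (laid-back timing):
played = [0.02, 0.53, 1.07, 1.48, 2.11]

# What conventional notation would record (sixteenth-note grid):
notated = quantize(played, grid=0.25)
print(notated)    # [0.0, 0.5, 1.0, 1.5, 2.0]

# The residuals are exactly what the notation throws away:
residuals = [round(p - n, 2) for p, n in zip(played, notated)]
print(residuals)  # [0.02, 0.03, 0.07, -0.02, 0.11]
```

The point of the sketch is that the notated values are an averaging, as Coleman says: every transcriber choosing a grid produces the same clean rhythm, while the residuals, where much of the feel lives, never make it onto the page.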
While we gain some technologies we lose others. But human endeavors go on no
matter what the technology, because the technology is us, just as the music is us. You
take away humans and all the rest goes with it. So we have developed it all. I might not
refer to what Louis Armstrong was doing as "tricks" any more than I would refer to some
of the effects that Béla Bartók composed, or effects we heard from Muddy Waters, as
tricks, but I think I get what you mean. The only difference is that Bartók comes from a
tradition where one of the norms is to write down instructions for performers to
execute, while Armstrong and Waters composed and executed simultaneously
(although reportedly Bartók was also a very good spontaneous composer).
I would not necessarily say (and I'm not sure that you are saying this either) that there
is any advantage or disadvantage in any particular time, because humans are very
good at working with whatever they have at hand. Armstrong was great, but so were
many who came after him who had the tool of sound recordings, and many before him
who had access to other tools. In the end it's what is inside of us that is revealed; the
instruments (whatever they are: saxophone, piano, computer, etc.) just aid us in bringing
out what is inside. Now these tools can be used for other purposes, and misused,
because they are tools and it is the humans who are making the decisions and
creating.
I see music as a language, meaning it facilitates the exchange of ideas. I know that
others have a very different view of what music is for, from entertainment to
background sounds for motion pictures or stage plays, and many other purposes. We
have the Internet today; I see it as a great tool for communication and the
unfettered exchange of ideas, while others see it primarily as a means to pursue commerce.
So it's not really about the tools; we pick different tools for different jobs. It's about us:
what is it we are trying to do? When I look back at what has been accomplished in all
eras, it is simply stunning what humans have been able to accomplish with different
tools, many things that we cannot accomplish today with our so-called more advanced
technology.
Reply
Luke Andrews says:
Thursday - August 18, 2011 at 13:18
Hey Steve! Great Talks!
Again, a great subject that hasn't been explored in detail: the relationship between
the physics of tones, timbre, interval, chordal structure and EMOTION.
Why does a major triad elicit a (usually) positive or happy feeling,
and a minor triad a negative or sad, melancholy sound?
What about the combinations?
Minor/major 7th = Mysterious?
Altered 7th = Tension?
Diminished = ? etc.
Is it in the intervals:
Major third = ahhhhhhh, pretty!
Tritone = THE DEVIL! lol j/k
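One concrete piece of the physics behind this question can be sketched in code. This is a hypothetical illustration, not from the post: in just intonation, consonant intervals correspond to simple frequency ratios and dissonant ones to complex ratios, and the crude "complexity" measure used below (sum of numerator and denominator of the reduced ratio) is just one textbook proxy for consonance, not a settled theory of musical emotion.

```python
# Just-intonation frequency ratios for a few intervals, mirroring the
# examples above. The ratios are standard just-intonation values.
from fractions import Fraction

intervals = {
    "unison":        Fraction(1, 1),
    "perfect fifth": Fraction(3, 2),
    "major third":   Fraction(5, 4),   # the "pretty" one
    "minor third":   Fraction(6, 5),
    "tritone":       Fraction(45, 32), # the "devil's interval"
}

# A crude proxy for sensory complexity: the sum of numerator and
# denominator of the reduced ratio (smaller = simpler = more consonant).
def complexity(ratio):
    return ratio.numerator + ratio.denominator

for name, ratio in sorted(intervals.items(), key=lambda kv: complexity(kv[1])):
    print(f"{name:14s} {str(ratio):6s} complexity {complexity(ratio)}")
```

On this measure the major third scores 9 while the tritone scores 77, which at least tracks the intuition in the list above; whether such ratio arithmetic explains the *emotion* of an interval, as opposed to its roughness, is exactly the open question being asked.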
How are the chordal structures used in today's music affecting our collective psyche and
the consciousness of the planet?
Perhaps relationships to structures in the natural world are the linking factor. We like
pentatonic scales because our skull is five-sided, we have five limbs, five fingers, etc.
Is there something in the actual physics of tone structure, or is it dependent upon the
listener's unique experience?
Indeed, not everyone hears or feels happy in a major triad! Everyone hears something
different, so to speak! What kind of information is being transmitted through tonal or
non-tonal structure, not forgetting the musician's personal intent?
Some music (such as free jazz) may be heard by some as white noise, while
others hear it as true expressions of the Heart, Sounds of Spirit. Perspective obviously has
a real bearing here.
The book Interference by Richard Merrick delves into some of these topics, using a polar
reflective, symmetrical basis for understanding all of the cosmos in a kind of Unified Field
Theory of Music.
Any thoughts?
Huuuuu
Luke
Reply



Blog at WordPress.com. The Structure Theme.
