Recording Arts
By Shannon Gunn
SSL Console
Image source: https://en.wikipedia.org/wiki/Solid_State_Logic
DEDICATION
This book is dedicated to high school students who wish to study the art of audio
production. In the words of Harry Watters, "Always aim for the stars, and even if
you don't reach them, at least you'll land on the moon."
CONTENTS
Dedication
Chapter 1: Advanced Music Technology Curriculum Outline, Pg 1
Chapter 3: Physics of Sound, Pg 31
Pg 91
Chapter 5: Microphones, Pg 107
Chapter 6: Cables, Pg 125
Pg 137
Pg 153
Pg 161
Pg 173
CHAPTER 1
ADVANCED MUSIC
TECHNOLOGY CURRICULUM
OUTLINE
Unit 1: Introduction to Advanced Music Technology
1. What is Music Technology?
2. Careers in Music Technology
3. A Brief History of Recording
4. Analog vs. Digital recording
5. Introduction to class recording equipment
6. Sound Board Level 1 Certification (done in person, not in book)
a. Student can turn the system on and off correctly.
b. Set levels for wireless mics.
c. Set levels for mp3 and CD player.
d. Set up a projector and adjust the screen with keystone
settings.
e. Understand signal flow.
f. Troubleshoot feedback.
g. Troubleshoot issues such as the LR button or power issues.
h. Set up a portable PA system with wired mic, CD, and mp3
player.
Unit 2: Physics of Sound
1. Sound waves
2. Compression and Rarefaction
3. Frequency
4. Resonant frequency (optional)
5. Frequency Spectrum
6. Law of Interference
7. Pure and Complex Tones
8. Waveforms
9. Nodes vs. Antinodes
10. Harmonics (optional)
11. Overtones (optional)
12. Octaves (optional)
13. Amplitude
14. Decibels
[Diagram: the three strands of music technology education (Practice, Theory, and Composition)]
The Practice strand involves learning how to set levels, run a sound board,
troubleshoot, and set up a sound system. The Theory strand involves understanding
the physics of sound, acoustics, and electronics. The Composition strand involves
the process of creating sounds with MIDI or loops, an area of rising interest among
many adolescents. It would be incomplete to teach any one strand without the
others. For instance, you need theory (physics of sound and electronics) to
understand how to properly use the sound board (practice). Each of these strands
informs the others, and each is necessary for a well-rounded education in music
technology.
Most of the careers in audio production focus more on the theory and practice and
less on the composition. Therefore, this book is structured more toward the
theory/practice side. This book is actually the second in a two-book series.
The first book teaches more of the composition side, with assignments geared
toward creating songs with loops, MIDI, and synthesizers. In the first book,
students learn how to construct sounds from scratch using oscillators and synth
plugins as well as do some light recording with podcasting. The goal of this book is
to get students more comfortable running and troubleshooting a live sound system
as well as giving them training in the art of audio engineering.
Every class may have three parts: new theory concepts, applying those concepts to
a sound board, and then working on a computer with music technology software.
Some of the activities for the practice strand have been incorporated into this
book.
At the end of each lesson, an activity will be listed which would be used to apply
that lesson to practical use in a live sound setup.
If you are a teacher and you wish to use this book in your classes, you are welcome
to access the keys, tests, quizzes, source audio files, and PowerPoint presentations
used in class. Please use a valid teacher email address and contact Shannon Gunn at
jazztothebone@gmail.com. Put in the subject line something like "Request for Book
Keys." She will send you a zip file with everything on it and may ask you to write a
review on Amazon in exchange. Amazon reviews will help the book gain traction
as others are looking for resources to teach music technology. Thanks for your
interest and hopefully this book can be helpful to you!
Regarding the structure of the course, here is a suggested Curriculum Map:
Week 1-2: Theory: Careers, History of Recording, Analog vs. Digital. Practice: Sound Boards Level 1.
Week 3-4: Practice: Sound Boards Level 1; Portable PA systems. Composition/Computer Skills.
Week 7-8: Theory: Noise Induced Hearing Loss (NIHL), Psychoacoustics, Wavelength, Speed of Sound; Review and Unit 1 Physics of Sound Test.
Week 10-14: Microphones Unit through Law of Interference; Phasing; Diffraction.
Week 15-18: Cables Unit.
Week 19-21: Electronics Unit; Hearing Test.
Week 22-23: Digital Electronics Unit.
Week 25-28: Sound Boards (More Advanced).
Week 29-36: Processing: EQ, Reverb, Compression.
Composition/Music Assignments:
1st Year Assignments
1. First Song - Using loops and the keyboard, create your own first song.
2. Halloween Song - Create a scary song using audio effects and loops.
3. Game Song - Create a song that is the introduction to a game.
4. Winter Song - Create a song inspired by the sounds of winter. May or may
not be holiday oriented.
5. Ring Tone - Create a ring tone for your phone.
6. Superhero Song - Everyone in the class will write a superhero's name on a
sticky sheet. Then these will be balled up and placed in a basket. Each class
member will take one out of the basket and then write an original theme
song for that character.
7. Song for Somebody - Create a song for someone you care about. Share it
with them for Valentine's Day.
8. Podcast - Record a podcast with your team members about a topic of your
choice. All topics must be school-appropriate.
9. Sample Song - Create a song that incorporates a vocal sample.
10. Synth Song - Create a song that incorporates a synth such as the
MiniMoog.
11. Summer Song - Create a song for summer.
12. iPad Commercial - Create background music for an iPad commercial.
Chapter 2
Introduction to the Recording Arts
Welcome to the recording arts! In this class you will learn the art of recording,
including the use of sound boards, mics, and cables. Additionally, you will learn
about how to do a mixdown properly, how to create a spatial sense of the sound,
and how to make your songs sound more "produced." This course utilizes Mixcraft
5 software, which can be downloaded at http://acoustica.com. Assignments will
focus on skills such as running a sound system, audio editing, mixing, and
troubleshooting sound systems. Additionally, we will discuss the music industry,
which is constantly changing due to new technology.
What is Music Technology?
Music technology can have many definitions to many people. For some, it may
include more of a compositional slant, including the creation of new music. For
others, it may relate more to beats and hip-hop. For the purposes of
this class, the definition of music technology is focused on the art of live and studio
sound recordings. The skills involved in running live sound, troubleshooting
equipment, and audio editing are in high demand. Additionally, audio engineering
skills transfer very easily to video editing and video production. There are
opportunities in the music industry like never before, such as the creation of new
apps and resources for music-minded people. The industry is constantly changing
due to new technology.
Careers in Music Technology
All students should download and read the Berklee report "Music Careers in
Dollars and Cents" (2012). You can find it by clicking this link:
http://www.berklee.edu/pdf/pdf/studentlife/Music_Salary_Guide.pdf .
Generally, colleges and universities offer three different types of tracks for music
technology. The music technology program may include audio engineering, music
business classes, and/or composition. You will need to determine which of these
three strands interests you when looking at colleges and universities. Typically, you
have to earn a degree in music to get a music technology related degree. If the
study of classical music does not interest you, you can always get a degree in
business or a related field and then work at a music related company. Additionally,
many colleges and universities are offering media arts degrees which are similar to
audio engineering but do not require the upper level classical music classes. There
are also programs at the local studios, usually ranging from nine months to two
years. Studios do not give you a degree, just a certification. The most important
part of your post-high school education is the quality internship. Be sure to look
for a professor or studio program that is well-connected in the industry and can
place you in a major or successful company. Internships tend to lead to jobs if you
work hard and do well.
Students who wish to become successful singer-songwriters or beat creators will
have difficulty finding a college program that will teach either of these two topics.
Basically, you have to network and meet people and work for free until there is
demand for your music. Once demand becomes overwhelming you can start
licensing beats according to the number of downloads allowed. Singer-songwriters
will have to utilize social media, email, text, and publicity to gain a following and
create demand for their songs. This track is very entrepreneurial.
Careers in the music industry may include:
2. Audio Engineering
a. Live Sound Reinforcement
i. Sound and Lights
ii. Technical Theatre
b. Studio work
i. Mixer
ii. Mastering
iii. Audio tech
3. Film/TV/Video work
a. A/V Editor
b. Sound Designer
c. Foley
d. ADR
e. Composer
i. Orchestrator
ii. Arranger
iii. Synthesizer
4. Installations
a. AV installer
5. Related Fields
a. Copyright Law (Intellectual Property)
b. Working as an accountant, marketing, or management of a music
oriented company, such as
i. Independent Record labels
ii. Sound Exchange, ASCAP, BMI, Copyright office, etc.
c. Radio
i. NPR
ii. Other radio stations: program manager, audio editor, etc.
d. DJ
e. Music Supervisor for film, tv
f. Every organization needs interns and secretaries
g. Every venue needs sound techs
Then, in 1889, a German immigrant to the U.S. named Emile Berliner invented the
Gramophone. The Gramophone was similar to the Graphophone in that it used
wax, but instead of cylinders it used a disc. The groove was etched onto a metal
zinc master disc covered in wax. The master disc could be reproduced onto a hard
rubber material which could then be mass produced.
For the first time, people could purchase music that they could listen to on
demand. The first gramophones did not require electricity: you would turn the
crank, which caused the record to rotate, and the groove would then be read by the
stylus. This was called a "record" because it was a recording of sound.
Demonstration of gramophone player:
https://www.youtube.com/watch?v=AApsSZq0g-c
Eventually people wanted to listen to longer sessions on their recordings. The first
popular format was the "78," which was named as such because the disc would
rotate 78 times per minute, or 78 rpm. You could listen to about three to five
minutes on each side, depending on whether it was a 10-inch or 12-inch disc.
Then, after World War II, Columbia Recording Company started manufacturing
the first "long playing" record. This is the standard vinyl record that we know
today, which turns 33 1/3 times per minute, or 33 1/3 rpm. The concept is similar to
Edison's phonograph, though: a stylus, or needle, rests on top of the disc, moves
with the groove, and is then amplified through a speaker.
Above are four kinds of phonograms with their respective playing equipment.
From left: phonograph cylinder with phonograph, 78 rpm gramophone
record with wind-up gramophone, reel-to-reel audio tape recording with tape
recorder and LP record with electric turntable. Photo from the exhibition "To
preserve sound for the future", showcased at Arkivens dag ("Day of the
Archives") at sv:Arkivcentrum Syd in Lund, Sweden, November 2012. Photo
credit FredrikT on Wikimedia Commons.
Much of the above information is sourced from the website
http://www.recording-history.org/recording/?page_id=12.
All of the above types of recordings are considered "analog" because they recorded
a direct copy of the sound energy to some sort of medium such as a disc or
cylinder. In the decades after the phonograph and gramophone, sound recording
evolved toward recording to tape for better audio quality. This wasn't sticky tape
but a type of plastic film with a magnetic coating of iron oxide particles. A
microphone was connected to a recording head, an electromagnet whose field
varied with the sound energy. As the tape passed the head, the changing field
rearranged the magnetic particles. A playback head could then read this pattern
back and send it to a speaker. This is the same concept as the
phonograph but using magnetic tape instead of cylinders or discs to record the
sound.
Recording to tape was cheaper and easier than creating masters on metal discs
covered with wax. You could erase the tape with a magnet and then start over
again. Additionally, engineers figured out how to record two tracks on one piece of
tape, creating the first stereo recording. The word "stereo" indicates a different
signal for the left and right speaker. The Beatles championed the early multitrack
recording process, recording to four-track tape in 1963 and then eight-track in
1968. This allowed the band to experiment with multiple takes, overdubs, and
layered instruments. Before 1963, recordings were made to sound like a live
performance and were typically mixed down to mono, or one track. In today's
digital age, you can layer as many as 128 tracks at one time. In the tape era, an
audio engineer had to get the levels exactly right before pressing record, so that
when the sound was imprinted to tape, all the relationships between the levels
of instruments would be acceptable. Now, audio engineers can lay down tracks and
then fix the levels later if they are off.
Examples of analog recording mediums include reel-to-reel, 8-track, cassette tape,
and vinyl.
Reel-to-reel recorder:
The 8-track format was significant because it was the first type of tape player
available in a car. You could take your music with you on the road. The format
wasn't highly sustainable, though, because the tape would get twisted and ruined
by the design of the machine that played it.
Cassettes
Cassettes became popular because they were much more portable than a record
and more reliable than an 8-Track.
This is a picture of the inside of a good quality cassette recorder.
Inside you can see the magnetic head which can read, erase, or record the audio
signal onto the iron filaments on the magnetic tape.
In the 1980s, the Walkman became popular as a way to listen to your songs on a
portable cassette player.
Additionally, in the late 1980s, people would record songs to a blank cassette in a
certain order. This was called a "mix tape." The term "mix tape" did not indicate
any sort of desire for fame; it was used for personal listening and at social
events. You could record from one cassette to another using a dual cassette deck.
You could also record directly from the radio.
All of the above audio formats are considered analog because the format
represents an exact representation, or imprint, of the sound. Digital recording was
introduced to consumers in the early 1980s and is created when all sound wave
information is converted into binary code made up of ones and zeroes.
Digital Recording
Digital recording is different from analog recording because of the concept of
"non-linear editing." Basically, if you wanted to change an analog recording, you
had to re-record it on new tape or overdub the original. This can get very expensive
with new tape required for each take. With digital recording, you can edit, change,
or add to a recording without changing the original. The implications for recording
technology are huge. As computer technology has grown, the number of possible
tracks has grown tremendously. It doesn't cost extra to re-record or add layers to a
recording because the hard drive space is generally available. Studios have had to
consistently upgrade equipment to keep up with the latest technology. A console
that would take up an entire room in the 1970s can now be replicated on a tablet
that can be held in one hand.
Digital recording formats used to include ADAT, which is a type of digital tape, but
now most studios record to a hard disc on a computer.
Digital consumer formats include audio files and CDs. Please refer to the Digital
Electronics unit for more information on digital recording.
Please note there are three knobs for main loudness on the Alesis: the headphone
level, the Direct/USB mix, and the main level. The Direct/USB mix determines
how much of the computer playback and how much of the live recording you will
hear. The main level knob controls the levels of all the tracks combined
as they go to the computer. The headphone level determines how loud the sound is
in the headphones. There are two possible inputs for recording, and each has its
own level control as well.
4. Click Record.
2. Make sure it is set to Mic/Line level. Adjust the gain for the track so that
you can hear it in the headphones. Make sure your Monitor/USB mix is at
about 12 o'clock so you can hear both your recording and the computer
playback.
3. Arm your audio track in Mixcraft.
4. Click on the drop down menu next to the Arm button and select the iO2
Express or whatever audio interface you are using. Select the channel you
are using (left or right input on the device.)
5. Click Record.
Please note: for recording with the Alesis, you will need to select the input to be the
iO2 Express left channel.
Assignment 1: Create a song on the computer using loops. Learn to record into
the computer using a microphone and audio interface.
Chapter 3
Physics of Sound
Sound Waves
Sound waves exist as variations of pressure in a medium such as air. They are
created by the vibration of an object, which causes the air surrounding it to
vibrate. The vibrating air then causes the human eardrum to vibrate, which the
brain interprets as sound. The source location is where the sound originates
and is the most intense area of vibration.
The medium is the material through which sound can travel. The medium for
sound waves can be air particles, or solid particles, or liquid. Remember that
Earth's atmosphere is actually very dense as compared to other parts of the
universe, so it's the gases in the air that vibrate.
Because sound travels in a medium (air, solid, liquid), it cannot travel in a vacuum,
and therefore there is no sound in space. If there were a spaceship battle in space,
the explosions would be silent. Well, there might be some sound heard in any gases
emitted from the fire, but the vacuum is so great that the gases (and the
sound) would dissipate immediately. The astronauts, when talking on the moon,
talked to each other through the radios in their helmets. You would not hear
someone scream while in space. They might hear themselves through the material
of their own body, but the sound itself wouldn't travel through the air to another
person. Light waves and radio waves travel in space, but sound waves do not.
Frequency
FREQUENCY - the number of cycles in a second
As you recall, when something vibrates, it causes the air particles around it to
vibrate, which causes the mechanical sound energy to move out in a wave fashion.
These particles don't actually move the distance of the sound wave, they just
vibrate in a cycle within their own little area. One cycle is when a particle moves
from its starting position to the maximum displacement distance in one direction,
back to its starting position, and then to the maximum displacement distance in the
other direction.
In sound terms, 1 cycle per second is known as 1 hertz, or 1 Hz.
1,000 cycles per second is 1,000 Hz, or 1 kHz (1 kilohertz).
Particles can vibrate thousands of times per second in this fashion. The number of
cycles completed in one second is called the FREQUENCY of vibration.
Frequency is interpreted by the human ear as the pitch, or how high or low
the sound is. (Note: high and low meaning opera singer versus subwoofer, not
talking about loud or soft here, yet.)
Frequency = pitch.
Normal human hearing is between 20 Hz and 18,000 Hz, but some humans can
hear from 16 Hz to 20,000 Hz.
The picture below is an example of how the sound waves are closer together as the
frequency gets higher. The X axis is time. You can see that there are more cycles
per time in the 2200 Hz as opposed to the 1000 Hz examples.
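Frequency and period (the duration of one cycle) are reciprocals, a relationship worth internalizing. Here is a short Python sketch of the arithmetic; the function names are my own illustration, not from the book or any audio library:

```python
def period_seconds(frequency_hz):
    """The duration of one cycle: the period is the inverse of the frequency."""
    return 1.0 / frequency_hz

def cycles_in(frequency_hz, seconds):
    """How many cycles a wave of a given frequency completes in a span of time."""
    return frequency_hz * seconds

# A 1 kHz tone completes 1000 cycles every second,
# and each cycle of a 440 Hz tone lasts about 2.3 milliseconds.
```

So a higher frequency simply means more cycles packed into each second, which is why the 2,200 Hz wave in the picture looks more tightly bunched than the 1,000 Hz wave.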
Name____________________
Date _____ Period ______
Frequency Vocabulary
1. Define Frequency
2. Define Hertz
6. What is the frequency range of extremely good hearing? ___ Hz to ___ kHz
http://shannongunn.net/audio/2012/08/31/acoustics/video-of-glassbreaking-due-to-resonant-frequency/
Tacoma Narrows Bridge: it broke apart when wind passed over it at a rate matching its resonant
frequency.
http://www.youtube.com/watch?v=3mclp9QmCGs
Frequency Spectrum
The frequency spectrum ranges from 20 Hz to 20,000 Hz for human hearing. Each
pitch has its own frequency within that range. A piano ranges from about 28 Hz to
about 4 kHz. Below is a picture of the frequency range for different instruments
grouped by category. This picture gives you a good idea of the frequency range of
each instrument as it relates to the notes on the piano. At the top are the actual
frequency numbers in Hz.
Octaves (optional)
When you play a piano, you notice that the same note name is repeated several times. Each note
sounds the same, just higher or lower. An octave is the distance from one note to the same note 12
half steps higher or lower. This applies to all instruments, including the keyboard.
Each octave on the keyboard is labelled as octave 0, 1, 2, 3, 4, etc. Each note after C in that octave
has that number.
Note that the actual frequencies don't quite line up to the frequencies on the
picture. That's because pianos use tempered tuning, and tuners adjust the upper
notes by ear so that they sound good.
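Acoustically, an octave is a doubling of frequency: each octave up multiplies the frequency by 2, and each octave down divides it by 2. A tiny Python helper (my own illustration, not part of the curriculum materials) captures the rule:

```python
def octave_shift(frequency_hz, octaves):
    """Shift a frequency by whole octaves.

    Each octave up doubles the frequency; each octave down halves it,
    so shifting by n octaves multiplies the frequency by 2**n.
    """
    return frequency_hz * 2 ** octaves

# A440 one octave up is 880 Hz; one octave down is 220 Hz.
```

This same doubling rule is what the Octaves Review worksheet later in this chapter practices by hand.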
Law of Interference
LAW OF INTERFERENCE: The Law of Interference is a physics rule that
states that when two sound waves hit each other, they will either reinforce each
other or cancel each other out. Two or more sound waves will travel through any
medium and combine together to make a new complex tone.
Read this tutorial to see pictures of the law of interference in action:
http://www.physicsclassroom.com/class/waves/u10l3c.cfm
Example:
A piano consists of a hammer that hits three metal strings at the same time. Each
string vibrates at a certain frequency and they combine together to create the
pianos own distinct tone.
Piano Hammer Action Animation:
http://www.youtube.com/watch?v=xr21z1CZ54I
Inside the grand piano: http://www.youtube.com/watch?v=I6SvIbKIWPQ
The picture below shows that when you superimpose two waves with the same
displacement (in phase), a new wave is created that is twice as big. On the right, you
can see that if two waves in opposite phase meet, they will cancel each
other out.
Photo credit
https://he.wikipedia.org/wiki/%D7%92%D7%9C_%D7%A2%D7%95%D7
%9E%D7%93.
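Superposition is simple arithmetic on the waves themselves: combining two waves just means adding them point by point. The following Python sketch (a toy model with made-up names, not a real audio API) shows both outcomes, reinforcement and cancellation:

```python
import math

def superpose(wave_a, wave_b):
    """Combine two waves sample by sample (the Law of Interference)."""
    return [a + b for a, b in zip(wave_a, wave_b)]

n = 100  # samples in one cycle
wave = [math.sin(2 * math.pi * i / n) for i in range(n)]
inverted = [-s for s in wave]  # the same wave in opposite phase

reinforced = superpose(wave, wave)      # in phase: every sample doubles
cancelled = superpose(wave, inverted)   # opposite phase: every sample is zero
```

Cancellation is not just a curiosity: it is the principle behind noise-cancelling headphones and the phasing problems discussed later in the microphones unit.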
Common Waveforms
WAVEFORMS - the shape of the sound wave
There are two types of sound waves: one with a definite pitch, which we call a note;
the other with no definite pitch, which we call noise. Music has both of these
properties: think of cymbals (noise) in a rock song (pitch). We can definitely hear the
difference, but what is the difference in acoustic terms? A pitch contains
regular vibrations (periodic motion) and a noise contains irregular vibrations (non-
periodic motion).
There are a few common wave forms that are found in popular music, especially
synthesizers. Live musical instruments tend to produce sine wave forms, unless
playing an instrument with a buzzy sound such as a distorted guitar. There are
different aesthetics among cultures as to how much of a pure sound is actually
beautiful. Synthesizers are designed to allow the user to control the timbre of the
sound through filters of different harmonics. The following are the four main wave
forms used as building blocks to create new synthesized sound in electronic music:
sine, square, sawtooth, and triangle.
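To make the four shapes concrete, here is a Python sketch of one sample from each waveform, where the phase runs from 0 to 1 over a single cycle. These are standard textbook formulas; real synthesizers generate the same shapes in more sophisticated, alias-free ways:

```python
import math

def sample(waveform, phase):
    """One sample of a basic synth waveform; `phase` runs 0..1 over one cycle."""
    if waveform == "sine":
        return math.sin(2 * math.pi * phase)
    if waveform == "square":
        return 1.0 if phase < 0.5 else -1.0    # jumps between +1 and -1
    if waveform == "sawtooth":
        return 2.0 * phase - 1.0               # ramps steadily from -1 to +1
    if waveform == "triangle":
        return 1.0 - 4.0 * abs(phase - 0.5)    # rises to +1 mid-cycle, falls back
    raise ValueError("unknown waveform: " + waveform)

# One cycle of a square wave, eight samples long:
cycle = [sample("square", i / 8) for i in range(8)]
```

The abrupt jumps in the square and sawtooth shapes are what give them their buzzy character: the sharper the corners, the stronger the upper harmonics.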
Harmonics (optional)
FUNDAMENTAL TONE: the most intense vibration frequency, or the main
pitch that we hear.
The FUNDAMENTAL TONE is the most intense vibration frequency in any
given note on the instrument. It is also the lowest vibrating frequency (pitch) on
that instrument with that particular fingering. It is also the loudest resonant
frequency. Within each complex tone there are multiple frequencies present.
These additional frequencies are known as harmonics.
HARMONICS multiples of the fundamental frequency
It just so happens that each HARMONIC is a multiple of the fundamental
frequency (x2, x3, x4, x5, x6, x7, etc.) and are named as such. The presence of
different harmonics within a complex tone give the instrument its timbre, or tone
color. The harmonics present within a complex tone are what make the
instruments sound different even if they play the same note.
For instance, when you bow across a violin string, it causes the string to vibrate at a
certain frequency, which is the most intense amount of particles moving, and thus
heard as the main frequency, or FUNDAMENTAL TONE. But there are other
parts of the violin body that are vibrating at two, three, four, or even five times the
main frequency. These are the harmonics and they are present in every note.
You can segregate the harmonics and hear them by themselves if you bow while
you press the string lightly. This is because the length of the string has changed and
therefore the harmonic becomes the fundamental tone.
How is this possible? After all, an A is 440 Hz, whether it's played on a piano or a
harp or a tuba, right? That A at 440 Hz is just one number; it's not like we're calling
it 440 Hz plus a little 880 Hz and some 1320 Hz on the side. Well, actually, when
instruments vibrate, they have many different levels of vibration going on,
including all of the frequencies mentioned above. These upper frequencies, or
harmonics, are very soft (not intense) and not easily heard to the human ear. We
call that tone 440 Hz because the 440 Hz is the loudest part of the sound heard,
and is the fundamental for that particular string.
Why should you learn this for music technology? The entire world of synthesizers
exists around manipulating harmonics because they give each sound its
characteristic timbre. The entire world of audio engineering rests upon the
understanding that within each sound are fundamental frequencies and harmonics
that can be boosted or attenuated.
Harmonic | Multiple of the fundamental | Frequency (fundamental f = 55 Hz)
1st Harmonic | 1f | 55 Hz
2nd Harmonic | 2f | 110 Hz
3rd Harmonic | 3f | 165 Hz
4th Harmonic | 4f | 220 Hz
5th Harmonic | 5f | 275 Hz
6th Harmonic | 6f | 330 Hz
7th Harmonic | 7f | 385 Hz
8th Harmonic | 8f | 440 Hz
9th Harmonic | 9f | 495 Hz
10th Harmonic | 10f | 550 Hz
11th Harmonic | 11f | 605 Hz
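The pattern in the table above is just multiplication, so it can be generated with a one-line Python function. This helper is my own, written to match the table, not part of any course software:

```python
def harmonic_series(fundamental_hz, count):
    """Frequencies of the first `count` harmonics.

    The 1st harmonic is the fundamental itself; the nth harmonic
    vibrates at n times the fundamental frequency.
    """
    return [n * fundamental_hz for n in range(1, count + 1)]

# Reproduces the 55 Hz table above:
print(harmonic_series(55, 11))
# → [55, 110, 165, 220, 275, 330, 385, 440, 495, 550, 605]
```

Changing the fundamental shifts the whole series with it, which is why every note of an instrument carries its own stack of harmonics.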
1st Harmonic = ____ Hz = 65 Hz x 1
2nd Harmonic = ____ Hz = 65 Hz x 2
3rd Harmonic = ____ Hz = 65 Hz x 3
4th Harmonic = ____ Hz = 65 Hz x 4
5th Harmonic = ____ Hz = 65 Hz x 5
6th Harmonic = ____ Hz = 65 Hz x 6
7th Harmonic = ____ Hz = 65 Hz x 7
8th Harmonic = ____ Hz = 65 Hz x 8
9th Harmonic = ____ Hz = 65 Hz x 9
10th Harmonic = ____ Hz = 65 Hz x 10
Overtones (optional)
OVERTONES: Overtones are the same as harmonics, except the 1st overtone is
the same as the second harmonic, and so forth.
In physics, we call the multiples of the fundamental harmonics and refer to the
fundamental tone as the first harmonic. In band class, they call the fundamental
tone the fundamental, and the 2nd harmonic is called the 1st overtone. The second
overtone is the 3rd harmonic, and so forth. This can be confusing to switch back
and forth between the two nomenclatures.
When you pluck or bow a string, it will vibrate at the fundamental frequency. The
picture below demonstrates what happens when you instead sound the string at the
halfway point, the 1/3rd point, and so on. Each time you change the vibrating
length of the string to a fraction of the original length, you will play a multiple of
the fundamental, or a harmonic.
Over time, instrumentalists have figured out the "Overtone Series": the tones
that resonate on a particular instrument line up with the harmonics. A
skilled musician can play all of the notes below with one fingering on a brass
instrument.
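The off-by-one relationship between the two naming systems is easy to mix up, so here is a tiny Python sketch of the conversion. The helper names are hypothetical, purely for illustration:

```python
def overtone_to_harmonic(overtone_number):
    """Band-class numbering to physics numbering.

    The 1st overtone is the 2nd harmonic, the 2nd overtone is the
    3rd harmonic, and so on: harmonic = overtone + 1.
    """
    return overtone_number + 1

def overtone_frequency(fundamental_hz, overtone_number):
    """Frequency of a given overtone above a fundamental."""
    return fundamental_hz * overtone_to_harmonic(overtone_number)

# The 1st overtone of A110 is an octave up, at 220 Hz.
```

Keeping this conversion straight helps when you move between physics texts and the rehearsal room, which use the two vocabularies interchangeably.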
Name ______________________
Date _____ Period _____
Octaves Review
1. What is the frequency one octave above 400 Hz? _____ (400 x 2 )
2. What is the frequency one octave above 800 Hz? _____ (800 x 2)
3. What is the frequency one octave above 1000 Hz? _____
4. What is the frequency one octave above 326 Hz? _____
5. What is the frequency one octave below 400 Hz? _____ (400/2)
6. What is the frequency one octave below 500 Hz? _____ (500/2)
7. What is the frequency one octave below 440 Hz? _____
8. What is the frequency one octave below 110 Hz? _____
Amplitude
AMPLITUDE - the maximum displacement of the particles from their original
position. Amplitude is measured as the intensity of the sound pressure level (SPL).
Amplitude is known as the strength or power of a wave signal. In acoustics, it is
the height of a wave when viewed as a graph. It is heard as volume, or loudness.
Thus the name "amplifier" for a device which makes the guitar louder. As the
sound wave continues to displace particles in a wave fashion, it is giving up energy.
That is why the sound gets weaker as it travels farther from its source. The energy is
dissipated in the form of heat.
Amplitude is graphed as the height of the sound wave. The higher the wave, the
more the particles are being displaced, thus the denser the air, and the louder the
sound.
Decibels
DECIBELS are the units we use to measure perceived loudness, or, more
specifically, to measure SOUND PRESSURE LEVEL. The sound
pressure level is the intensity of the displacement of particles.
This chart below describes the amount of time you can listen to a certain loudness
before you have hearing loss or hearing damage.
This is a chart that describes the perceived loudness of different sound sources. On
the left is the measurement in dB, or decibels. On the right is measurement in Pa,
or Pascals. Note, if there is hearing loss, hopefully it is temporary, and can be
rectified by letting the ears rest.
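The decibel scale relating the two columns of that chart is logarithmic. Assuming the standard reference pressure of 20 micropascals (the usual 0 dB SPL reference point, not a value stated in this book), the conversion from pascals to dB SPL looks like this in Python:

```python
import math

REFERENCE_PA = 20e-6  # 20 micropascals: the standard 0 dB SPL reference

def spl_db(pressure_pa):
    """Sound pressure level in decibels, given pressure in pascals."""
    return 20 * math.log10(pressure_pa / REFERENCE_PA)

# Every tenfold increase in pressure adds 20 dB:
# 0.02 Pa works out to 60 dB (conversation level), and 20 Pa to 120 dB.
```

The logarithm is the whole point of the scale: it compresses the enormous range of pressures the ear can handle into a manageable 0-to-120 number line.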
This is a picture of the inverse square law as it relates to light waves. The concept is
the same for sound waves.
Source: https://commons.wikimedia.org/wiki/File:Inverse_square_law.svg
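The inverse square law can also be expressed in decibels. Here is a short Python sketch (my own illustration) of the familiar rule of thumb that every doubling of distance costs about 6 dB:

```python
import math

def level_change_db(distance_ratio):
    """Change in sound level, in dB, after moving to `distance_ratio`
    times the original distance from the source.

    Under the inverse square law, intensity falls with the square of
    distance, and the dB change is 10 * log10 of the intensity ratio.
    """
    intensity_ratio = 1.0 / distance_ratio ** 2
    return 10 * math.log10(intensity_ratio)

print(round(level_change_db(2), 1))   # doubling the distance: about -6.0 dB
```

This is why moving a few rows back at a concert noticeably softens the sound, and why live engineers care so much about speaker placement.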
Psychoacoustics
PSYCHOACOUSTICS - the study of how humans perceive sound.
The pinna scoops the sound forward, focusing energy into the ear canal. It also
blocks high frequencies coming from behind you. The eardrum and inner ear act as
a transducer, converting acoustic mechanical energy into nerve impulses. The nerve
impulses then travel to the brain, where they are perceived. It is important in audio
engineering to take into account not only the mechanics of the environment, but
also the fact that the brain and ear of the listener are involved in a person's
listening experience.
Inside the cochlea are tiny little hair cells that move when hit by a sound wave.
Hearing damage occurs when those hairs get bent. They go down after a certain
amount of time and eventually don't come up. Here's a video on how hearing
works: http://hearinghealthfoundation.org/template_video.php?id=3
Hearing Loss
How long can you listen to music on a phone without having hearing loss?
Please refer to the image on the following hyperlink to see different levels of
loudness for different devices.
http://www.betterhearing.org/hearingpedia/hearing-lossprevention/noise-induced-hearing-loss
NIHL = Noise Induced Hearing Loss
NIHL is 100% preventable!
Hearing Loss
Symptoms of Hearing Loss
You should suspect a hearing loss if you:
1. have a family history of hearing loss
2. have been repeatedly exposed to high noise levels
3. are inclined to believe that "everybody mumbles" or "people don't speak
as clearly as they used to"
4. feel growing nervous tension, irritability or fatigue from the effort to hear
5. find yourself straining to understand conversations and watching people's
faces intently when you are listening
6. frequently misunderstand or need to have things repeated
7. increase the television or radio volume to a point that others complain of
the loudness
8. have diabetes; heart, thyroid, or circulation problems; reoccurring ear
infections; constant ringing in the ears; dizziness; or exposure to ototoxic
drugs or medications
Click here for an interactive website on safe hearing levels:
http://www.cdc.gov/niosh/topics/noise/noisemeter.html
Psychoacoustics Continued
PSYCHOACOUSTICS - the study of how humans perceive sound
ANECHOIC CHAMBER - a room where there is no sound reflected off the
walls. All sound is absorbed into the walls and other materials. In the chamber
you can hear your stomach, heart, and even your ear.
http://dsc.discovery.com/life/worlds-quietest-room-will-drive-you-crazy-in-30-minutes.html
http://youtu.be/u_DesKrHa1U
During a rock concert, there is a temporary threshold shift: muscles attached to
the eardrum tighten it to turn down the volume within your ear. As a result,
engineers will gradually turn up the volume throughout the evening. After a
while, the muscles tire and don't hold back the volume levels as much.
BINAURAL HEARING - We hear with two ears separated by a baffle (your
head), which is what lets us locate where sounds come from
DYNAMIC RANGE OF HUMAN HEARING: 0 to 120 dB; a typical listening
range is 10 dB to 120 dB
STEREOPHONIC SOUND - Stereophonic sound developed in the late
1940s. Also known as Stereo, it is a method of sound reproduction that
creates an illusion of directionality and audible perspective. This is usually
achieved by using two or more independent audio channels along with two or
more loudspeakers in such a way as to create the impression of sound heard
from different directions.
MONITOR SETUP - Monitors are the speakers we use to listen to
recordings. In a recording studio, the two monitor speakers should be placed in
an equilateral triangle with 60 degree angles to closely emulate the human
hearing experience.
PHANTOM IMAGES - The phenomenon where people hear sound as if it's
coming from the middle, even though it is coming out of two different speakers.
This is important to understand while mixing down music because it makes a big
difference in which frequencies you emphasize and de-emphasize. The ear is less
sensitive to low frequencies, which is why you have to turn up the bass to hear it,
and why we use a subwoofer, a speaker dedicated to bass sounds only.
Wavelength
WAVELENGTH - The length of one complete cycle of the wave. It is also known
as the distance between two of the same points in a sound wave.
The larger the wavelength, the lower the frequency, and vice versa.
Notice the wave has to go down, then up, then down again to complete the cycle.
Speed of Sound
SPEED OF SOUND - The speed of sound is the same for all frequencies. In air it
typically travels at 343 meters per second (1,126 feet per second); 340 m/s is a
common round figure for calculations. How is this possible? Remember that
frequency is how often the air particles vibrate per second. The air particles
don't actually travel with the wave; their energy is passed from one to the next
like a hot potato. The speed of sound is how fast that energy travels, and it is
the same for all frequencies.
The speed of sound is also known as the velocity.
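The relationship wavelength = velocity ÷ frequency can be sketched in a couple of lines of Python; the function name here is just for illustration:

```python
def wavelength_m(velocity_ms, frequency_hz):
    """Wavelength (in meters) = velocity / frequency."""
    return velocity_ms / frequency_hz

# With sound traveling at 340 m/s:
print(wavelength_m(340, 20))      # 17.0 meters -- a deep 20 Hz bass tone
print(wavelength_m(340, 20000))   # 0.017 meters (1.7 cm) -- a 20 kHz treble tone
```

Notice how low frequencies have wavelengths the size of a room, while high frequencies are shorter than your finger.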
Name ___________________________
Date _______ Period ______
WAVEFORMS VOCABULARY
1. What is the mathematical relationship between wavelength, velocity,
and frequency?
2. If a sound wave is travelling in air and has a frequency of 20kHz,
what is the wavelength? (take the velocity of sound in air to be 340
m/s) ___________ m
3. Assuming the sine wave travels at 340 m/s, what is the frequency of
a sine wave with a wavelength of 10 meters? ______________ Hz
4. If a sine wave is 10 meters long, where is the first node? ___ meters
5. If a sine wave is 10 meters long, where is the first anti-node? ___ meters
6. If a sine wave is 10 meters long, where is the second anti-node? ___ meters
Here is an example from a recording project. I have recorded two separate stereo
tracks with the same exact sound, and then bounced them down to a third track,
which shows the resulting waveform. It's basically the same, just louder.
Here is another example where I have recorded two tracks whose frequencies are
slightly different from each other, so the two drift in and out of phase. The top
two tracks are now different, resulting in a much different waveform on the third
track.
MICROPHONES OUT OF PHASE: when two mics cancel each other out
The same concepts of cancellation and reinforcement for sound waves can apply to
microphones as well. Let's say you're recording with two microphones at the same
time. If they are both picking up the same exact frequency, and one is placed at a
point of high pressure, while the other is placed at the point of low pressure, then
they will actually cancel each other out when you listen to the two tracks combined
on the recording. This is known as being out of phase. This is really important
when applied to putting two microphones on one guitar amp or using multiple
microphones on a drum set.
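The cancellation described above can be demonstrated with a short Python sketch (the sample rate and function names are just for illustration): two identical sine waves, one flipped 180 degrees, sum to silence.

```python
import math

def sample_sine(freq_hz, num_samples, sample_rate=1000, phase=0.0):
    """Return num_samples of a sine wave at freq_hz, with an optional phase offset."""
    return [math.sin(2 * math.pi * freq_hz * n / sample_rate + phase)
            for n in range(num_samples)]

in_phase = sample_sine(10, 100)
flipped = sample_sine(10, 100, phase=math.pi)  # 180 degrees out of phase

# Mixing the two "tracks" sample by sample, like combining two mics on one amp:
mixed = [a + b for a, b in zip(in_phase, flipped)]
print(max(abs(s) for s in mixed))  # essentially 0.0 -- the signals cancel
```

This is exactly what happens when one mic sits at a point of high pressure and the other at a point of low pressure.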
Here is a video that demonstrates how double microphone placement can cause
different frequencies to go in and out of phase for a guitar amp:
http://youtu.be/7_h9WjfjhMw
Steve Reich is a very famous composer who often uses phasing to create music.
http://youtu.be/JW4_8KjmzZk
Piano Phase Song
By embracing the phasing issues, new sounds and music are created.
Search "phase issues" on YouTube for more examples.
Notice that in a carpeted room, 60% to 65% of the 2 kHz and 4 kHz frequencies
will be absorbed.
A wood floor will absorb a higher percentage of low frequencies than high
frequencies.
Name ______________________
Date _____ Period ______
SOUND WAVE INTERACTIONS VOCABULARY
1. Which frequency is more likely to reflect off of hard surfaces?
2. You are on the beach and you notice that you can hear people talking
from 20 feet away. Would you be able to hear them more easily at night or
during the day, and why, assuming nobody else is around?
CHAPTER 4 ELECTRONICS
PRIMER
Concepts/Terms
Voltage
Current
Resistance
Impedance
Power
Skills
If you understand electricity, you can fix cables, amps, sound boards, and
mics. This makes you extremely valuable to any organization!
According to the engineer at Cue Recording Studios (from our field trip last
year), the ability to fix electronics is the number one skill needed in studios
right now.
Introduction to electricity:
https://www.youtube.com/watch?v=EJeAuQ7pkpc
Voltage
Why does the US use 120V while much of the rest of the world uses 240V?
http://www.straightdope.com/columns/read/1033/howcome-the-u-s-uses-120-volt-electricity-not-240-like-the-rest-ofthe-world
http://askville.amazon.com/difference-110-volt-220-EuropeAsia-Pro-Con/AnswerViewer.do?requestId=724312
Current
In electronics, real current usually describes the flow of electrons, which are
negatively charged.
Circuit diagrams are drawn using conventional current, which points in the
opposite direction from the actual electron flow.
http://www.mi.mun.ca/users/cchaulk/eltk1100/ivse/ivse.htm
Below is a simple electric circuit, where current is represented by the letter i. The
relationship between the voltage (V), resistance (R), and current (I) is V=IR; this is
known as Ohm's Law.
Direct current flows in one direction and is the type of current supplied by
a battery.
Alternating current reverses direction periodically and is the type of current
that comes out of the wall socket.
Alternating Current
Series circuits give the electrons only one pathway to travel, so if you
unhook one device, none of the other devices work.
Parallel circuits give the electrons multiple pathways they can travel to
complete the circuit, so if you unhook one device, the other devices still
work.
Electricity and circuits:
https://www.youtube.com/watch?v=D2monVkCkX4
In the picture below, the left diagram is in series and the right diagram is in parallel.
https://commons.wikimedia.org/wiki/File:Series_and_parallel_circuits.png
If you have multiple devices on a series circuit and one of them stops working,
it stops the flow of electrons for all of the devices after that point. Example:
Christmas tree lights.
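The math behind the two wiring schemes is simple enough to sketch in Python (the function names are just for illustration): resistances in series add, while resistances in parallel combine as the reciprocal of the sum of reciprocals.

```python
def series_resistance(*ohms):
    """Resistors in series simply add."""
    return sum(ohms)

def parallel_resistance(*ohms):
    """Resistors in parallel: 1/R_total = 1/R1 + 1/R2 + ..."""
    return 1 / sum(1 / r for r in ohms)

print(series_resistance(4, 4))    # 8 ohms -- the total load grows
print(parallel_resistance(8, 8))  # 4.0 ohms -- the total load shrinks
```

Notice that wiring identical resistances in parallel halves the total, which matters later for hooking speakers to an amp.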
Impedance
Important Formulas:
These are all forms of the same formula (Ohm's Law):
V = IR
I = V/R
Current = Voltage/Resistance
Amps = Volts/Ohms
R=V/I
Resistance = Voltage/Current
Ohms = Volts/Amps
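The three rearrangements above can be written as tiny Python functions (the names are made up for illustration), which makes it easy to check your arithmetic:

```python
def v_from_ir(amps, ohms):
    """V = I * R"""
    return amps * ohms

def i_from_vr(volts, ohms):
    """I = V / R"""
    return volts / ohms

def r_from_vi(volts, amps):
    """R = V / I"""
    return volts / amps

print(i_from_vr(12, 4))   # 3.0 amps flow through 4 ohms at 12 volts
print(r_from_vi(12, 3))   # 4.0 ohms of resistance
print(v_from_ir(3, 4))    # 12 volts across the resistor
```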
If you add another 8 Ohm speaker in parallel to each channel, then the resistance
becomes 4 Ohms on each channel (8/2).
Impedance Matching
Impedance matching is the process of hooking up your speakers to your amp so
that the impedance of the speakers matches the impedance of the amp. If the
speakers have too high an impedance, they will not be powered adequately by the
amp because they present too much resistance. If the speakers have too little
impedance, the amp will overheat and shut off (or blow up, if it's an old one).
Therefore, if the amp is rated to handle loads as low as 6 Ohms, which is
typical, make sure you hook up your speakers so the total load doesn't go below
6 Ohms.
Example:
An amp has two outputs: one for the left channel, one for the right.
Let's say you want to hook four speakers up to the amp, two on each channel.
If each channel on the amp is rated at 4 ohms, you could hook two 8-ohm
speakers into each channel in parallel (8/2 = 4 ohms per channel). However, if
you hooked two 4-ohm speakers into each channel, each channel would see a 2-ohm
load and you would risk an overload, especially if you played it too loudly for
too long.
Basically, you have to match the impedance of the speakers to the impedance of
the amp.
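The safety check described above can be sketched in Python (the function names and the 4-ohm rating are illustrative assumptions): combine the speaker impedances in parallel, then compare against the amp's minimum rated load.

```python
def speaker_load_ohms(*speaker_ohms):
    """Total impedance of speakers wired in parallel on one channel."""
    return 1 / sum(1 / z for z in speaker_ohms)

def safe_for_amp(min_amp_ohms, *speaker_ohms):
    """True if the combined load stays at or above the amp's minimum rating."""
    return speaker_load_ohms(*speaker_ohms) >= min_amp_ohms

print(speaker_load_ohms(8, 8))  # 4.0 ohms
print(safe_for_amp(4, 8, 8))    # True  -- two 8-ohm speakers on a 4-ohm channel
print(safe_for_amp(4, 4, 4))    # False -- a 2-ohm load would overload the amp
```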
Power
P (power, in watts) = V (voltage, in volts) x I (current, in amperes)
Power measures the amount of work done per unit time. In electronics,
electrical power is the rate at which work is done when current flows
through a circuit.
One Watt is the rate at which work is done when one ampere of current
flows through an electrical potential difference of one volt.
Implications for Audio Engineering:
Make sure your speakers are powerful enough to handle the power from
the amp. Otherwise you may put too much power through a speaker and
thus overpower the speaker. The speaker will start smoking and you will
smell an electrical fire. This is very dangerous and should be avoided at all
costs!
For instance, if you have a 150 watt amp, then each channel in the back of
the amp is going to be 75 watts each. You can power a 75 watt speaker with
that. If you hook a 50 watt speaker up to a 75 watt channel, and turn it up
all the way, you run the risk of overpowering the speaker. If you plug a 300
watt speaker into a 75 watt channel, though, you should be fine because the
speaker can handle a lot more power than what will be put through it.
Additionally, you need to know the power of your PA system. For instance,
if you are providing a PA system for a band with three 300 watt guitar
amps, and your PA system is only rated at 80 watts, you won't be able to
hear the vocals. Never underestimate the power of a guitar amp.
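The matching rules above can be sketched in Python (function names are illustrative, and the even split across two channels is the simplifying assumption used in the example):

```python
def channel_watts(total_amp_watts, num_channels=2):
    """Rough per-channel power for a stereo amp, split evenly."""
    return total_amp_watts / num_channels

def speaker_can_handle(speaker_watts, channel_watts_out):
    """True if the speaker's power rating meets or exceeds the channel's output."""
    return speaker_watts >= channel_watts_out

per_channel = channel_watts(150)             # 75.0 watts per channel
print(speaker_can_handle(300, per_channel))  # True  -- plenty of headroom
print(speaker_can_handle(50, per_channel))   # False -- risk of overpowering it
```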
Volts, amps, and Ohms are metric units. As such, metric prefixes apply.
Amperage = Voltage/Resistance
Chapter 5 Microphones
Types of Microphones
Condenser Microphones:
The most popular type of mic for the studio. Good for picking up an entire
ensemble or individual parts. Needs phantom power. Large diaphragm: better for
low frequencies. Small diaphragm: great for capturing high frequencies
(cymbals, violins, fifes).
Dynamic Microphones:
Handles a lot of sound pressure (volume). Good for drums, amps, and rock
vocals. You can bang these around; they're resilient.
Ribbon Microphones:
Uses a thin ribbon of aluminum instead of the mylar found in dynamic mics.
Popular with brass players for the ability to get a nice warm sound at a very
high sound pressure level. Good for an old-timey sound. Fragile and expensive.
Dynamic Microphones
Not as sensitive
Hardy
Handles high sound pressure levels
No phantom power
Small Diaphragm
Except on bass drum dynamic microphones
Good for live sound
Use on certain instruments for studio recordings
Good for rock vocals, drums, amps
Condenser Microphones
Sensitive
Fragile
Don't put this on a bass drum or an amp; it may be ruined by the high
sound pressure levels
Needs phantom power (an extra 48-volt boost) to work
Be careful with this
Large or Small Diaphragm
May cause feedback if used for live sound because the mics are sensitive
Good for vocals, acoustic guitars, over the drum set
Ribbon Microphones
Sensitive
Fragile
Handles loud sounds like brass very well
No phantom power.
Phantom power can cause damage.
Ribbon
Good for studio and old-timey recordings
Good for brass, old timey sound
Shure SM-57
$100
AKG D112
Large diaphragm bass drum mic
$130
Sennheiser MD441-U
This is a supercardioid dynamic mic used for vocals and instruments. Works really
well on stage. Also known as the Elton John Mic. Used for recording sessions as
well, usually on an instrument.
$1500
Sennheiser MD-421-II
This is a dynamic mic used with instruments such as saxophones. Used for
recording sessions.
$479
Shure KSM-141
$400 each, $800 total
Matched Pair - Choose between cardioid and omni settings
AKG C414
$1000
Vintage large diaphragm mic.
Supercardioid - the microphone picks up in a cardioid pattern, but with a little
pickup at the back of the mic as well.
Notice the line starts to drop off around 200 Hz on the left. This means that
the mic doesn't pick up those low frequencies very well. It's really good at
picking up frequencies in the 4k, 5k, and 6k range, though. This is the upper
range of the piano; it reaches into the "s" sounds of a vocal and brings out
the bright frequencies. It dips between 7k and 8k, rises back up at 10k, then
rolls off after that. The little bump at 10k also gives it a brighter sound.
Mic Placement
Microphone placement is a very important part of audio engineering.
There is a sweet spot for every instrument, and the type of microphone
will also determine what sounds you get. A typical recording session may
devote an hour and a half to getting good tones on the instruments.
Mic Placement for Vocals: You want to make sure that you point the
diaphragm at the vocalist. Use a pop filter to tame "P" and "B" sounds
(plosives) and harsh "S" sounds (sibilance).
Mic Placement for Guitars: Make sure that you place the microphone
within one inch of the guitar so that it can pick up the widest range of
frequencies. SM-58s are near-field mics, which means they will pick up
only sounds within one to three inches of the microphone. To get a
stereo sound, place one mic up on the neck for the higher frequencies,
and a second mic on the hole for the low frequencies.
Mic Placement for Amps: You will want to get down on your hands and
knees and listen closely to the guitar player to determine where the best
tone for the amp is. Then place the mic in that area. You must listen to
how it sounds before you place it.
Audix
Included in the DP7 drum microphone package are the D6, Audix's flagship kick
drum mic; two D2s for rack toms; one D4 for floor tom; the i5 for snare; and a
pair of ADX51s for overhead miking. Also included are four D-Vice rim-mount
clips for the snare and tom mics, and three heavy-duty tension-fit mic clips
for the other three mics. Everything is conveniently packaged in a foam-lined
aluminum road case for safekeeping when the mics are not in use.
MS Mic Setup
A great tutorial with pictures: http://www.uaudio.com/blog/mid-side-micrecording/
All pictures credit to https://en.wikipedia.org/wiki/Stereophonic_sound .
CHAPTER 6 CABLES
RCA to RCA - use to connect a device to a sound system, such as a record player
to speakers, or DVD player to TV. Also known as a phono plug.
XLR - use for microphones. The XLR cable has 3 pins called positive, negative,
and ground. The XLR is a balanced cable.
TRS is used for headphones and long speaker runs. TRS stands for Tip Ring
Sleeve. It contains three conductors: positive, negative, and ground. It is
also called a stereo cable.
TS (1/4") is used for instruments and speakers. TS stands for Tip Sleeve and
contains two conductors: a positive and a negative (ground). The TS cable is
also known as a phone cable.
Other Cables:
Banana Clip Cable
The banana clip is a type of connector found on the back of old amps. You may
need a banana-clip-to-TS cable to hook the amp to the speaker (amp = banana,
speaker = TS).
Speakon Cable
This cable has a blue end that snaps into place. You have to twist the silver part to
pull it in/out. Used for speakers in live sound.
Digital Cables
Digital Cables transmit data using 1s and 0s (binary code)
Analog Cables use changes in voltage to transmit a signal that is shaped similar to
the source.
The first generation of video and audio cables were designed with analog signals in
mind. An analog signal represents the information by presenting a continuous
waveform similar to the information itself. For example, for a 1000 Hertz sine
wave, the analog signal is a voltage varying from positive to negative and back again
1000 times per second. When that signal is hooked up to a speaker, it drives the
speaker cone to physically move 1000 times a second and we hear the 1000 Hz sine
wave tone as a result.
A digital signal, unlike an analog signal, bears no resemblance to the information it
seeks to convey. Instead, it converts the 1000 Hz signal to a series of "1" and "0"
bits which is then transmitted through the cable and then gets decoded on the
other end.
Optical Cable
ADAT Machines used them (Alesis Digital Audio Tape). An optical cable is most
often used with audio interfaces, sound cards, and home consumer sound systems.
S/PDIF Cable
The S/PDIF cable looks like an RCA cable. It stands for Sony/Philips Digital Interface Format.
Used with Pro Tools. The back of a sound card may have an S/PDIF port.
AES/EBU
This is on its way out. It stands for Audio Engineering Society/European Broadcasting Union. The
cable looks like an XLR.
AV Cables
VGA Cable
This is the old fashioned analog cable that connects a computer to a monitor, or a
laptop to a projector. Most PC laptops have this; Macs don't. This
transmits video only.
HDMI Cable
This cable transmits HD video and audio. It comes as HDMI, HDMI skinny, and
HDMI mini. The skinny version works with new Mac laptops. The MINI version
works with tablets. You need an adapter to convert from MINI or skinny to regular
HDMI.
Mini Display Port or Thunderbolt
This cable is for Mac laptops only. The port shape is the same, but the insides
changed so that Thunderbolt is faster. The Mini DisplayPort is for Mac laptops
prior to 2013 or so. You have to get a Thunderbolt or Mini DisplayPort to VGA
adapter to put a Mac through the projector. This cable transmits video only,
not audio.
Parts of a Cable
XLR = Mic Cable
The XLR cable has three pins connected to three wires inside - one positive, one negative, and one
ground.
TS = Tip Sleeve
The TS cable is used for instruments such as guitar or piano.
The TS cable has two wires inside. Tip goes to positive, and sleeve goes to negative. There is no
shield or ground.
RCA
The RCA cable may or may not be shielded. The tip is positive, the rim is negative, and there may
or may not be another wire connected to the rim that acts as a shield, depending on whether you
buy cheap or nice RCA cables.
Activity: take apart cables to see the multiple wires and shielding inside the rubber casing.
Balanced/Unbalanced Cables
Balanced Cables: cables that reject noise by carrying a copy of the signal
flipped 180 degrees. Balanced cables have three wires inside: one wire that
carries the normal signal, one wire that carries the signal flipped 180
degrees, and one wire that goes to the shield, which adds additional insulation
from interference.
Unbalanced Cables: cables that do not reject noise this way. Unbalanced cables
may or may not have a shield, depending on how many wires are inside the cable.
Regarding long distances:
A typical balanced cable contains two identical wires, which are twisted together
and then wrapped with a third conductor (foil or braid) that acts as a shield. The
two wires form a circuit carrying the audio signal; one wire is in phase with respect
to the source signal, the other wire is 180° out of phase. The in-phase wire is called
non-inverting, positive, or "hot," while the out-of-phase wire is called inverting,
phase-inverted, anti-phase, negative, or "cold." The hot and cold connections are
often shown as In+ and In− ("in plus" and "in minus") on circuit diagrams.
Microphone Stands
Mic Clip
Don't forget the mic clip, which connects the microphone to the stand!
Sound Boards
Sound boards come in all shapes and sizes. They look complex, but they are really a
pattern divided into two sections:
1. Tracks
2. Outputs
Tracks are lined up vertically and usually the outputs are in the center.
Tracks have the same knobs going across which usually include:
Gain (trim), EQ, Aux, Pan, and Effects.
Sound boards can be analog or digital, depending on how the insides work.
Depending on the board, you can have several monitor mixes going through
multiple aux outputs.
Signal Flow
It's important to understand signal flow before digging into the use of a sound
system.
Live Sound Signal Flow
1. Mics go to the Sound Board, which then goes to the Amps, which then go to
the Passive Speakers.
2. Main Speakers
There is signal going from the six overhead mics on stage to the sound board;
that signal gets mixed in with all the other tracks and goes to the amps, which
are backstage, and then on to the main speakers.
3. Monitors:
Monitors are speakers placed on stage that face the performers. Performers need
this to hear themselves or a backing track so that they can sing in tune and know
where they are in the song.
The signal goes from the sound board to the monitors from a separate mix called
the auxiliaries. You have up to 6 possible mixes you can send out with six auxiliary
outputs. Right now the board is set up to have the monitor mix go through aux 1.
The monitors are set up in a daisy chain fashion. That means that the mix from
one monitor goes to another. We could set it up so that the left monitor gets the
Aux 1 mix and the right monitor gets the Aux 2 mix. Why not? Mostly because we
lack the adapter needed to convert the output on the back of the board to the
XLR plug required by the snake.
Mixing Console
Outputs
Mains
Groups
Auxiliaries
Send
Return
Inputs
Microphone jack
TS jack
Other considerations
Gain
EQ
Aux
Pan
Solo/Mute
Faders/group buttons
Aux, including FX
Groups
Mono
Inputs
Please note: this channel strip is missing the pad and phantom power buttons,
which are often found at the top of the channel strip. Also missing is a
roll-off button. This is a Mackie CR1604-VLZ mixing console.
Image source:
https://en.wikipedia.org/wiki/Mixing_console#/media/File:MackieMixer
.jpg
Outputs
Bus
The word bus describes any signal path flowing out of a track. It is also often
used for the physical cable that connects the track to its alternate output. In
the old days, audio engineers had to connect a cable from the track output to
an external device and then connect another cable back into the mixer. Now, the
entire bus concept is created through pathways within the digital software.
Sub Mix
A sub mix is the process of mixing several tracks down to stems, or group
buses. For instance, you could lump several drum tracks together into one sub
mix, mix that down, and then have one stereo track with just the drums. "Sub
mix" is also used in live sound reinforcement to describe combining certain
tracks into one subgroup before the signal goes to the master. You can then add
effects or turn the volume up and down for the sub mix, and it will apply to
all of those tracks at once.
Inserts
Inserts:
Inserts are ports on the back of the sound board that allow signal to go out and
come back in. Usually, they are used to add effects such as compression or reverb
to the individual track.
In order to use an insert, you need a cable that can carry two signals. Usually
you use a TRS cable wired to carry two unbalanced signals: on this board, the
signal goes out on the ring and comes back in at the tip.
A little tip: I have used the output portion of the insert jack to extend a
mixer. If you push a TS cable into the insert port until you hear one click, it
will take signal out of the mixer (out only).
An insert port on the back of a mixer will include a send and a return
signal.
Aux Output
An Aux Bus is an output from the board that goes through the Aux output port.
Usually, aux outputs are used for monitors.
Monitors:
Monitors are speakers facing the performers on stage.
o Each track has its own aux pot
o You can control the mix in the monitors by adjusting each tracks
aux pot
o Vocalists and instrumentalists will want a certain amount of each
element in their mix. For instance, they might want to hear a lot of
the bass, piano, and vocals but no drums. You need an aux track to
do this so that it doesn't affect the main mix coming out of the
main speakers.
View from the top of the board where you control Auxiliary output volumes
View from the Back of the Board - These are the output jacks
Groups allow you to group different tracks together so that one fader
controls the volume for all the tracks in the group.
o Route with the group buttons next to each track
Sample Rate
Sample Rate = the number of times per second that the information is sampled,
or read
For CD audio, the sample rate is 44,100 samples per second (44.1 kHz)
The more samples, the more accurate the digital representation of the
sound
This is a picture of an analog signal (light blue line) that represents the
actual sound. The vertical red lines (the ones with the dots at the top)
represent the individual samples.
This is a picture of a low and then high sample rate. Notice the sound would be
more accurate with a higher sample rate.
Bit Depth
Bit Depth = the number of 1s and 0s in each digital word used to measure
amplitude.
Example:
A bit depth of 4 has 16 possibilities: 1111, 1000, 1100, 1110, 0011, 0110, etc.
A bit depth of 7 has 128 possibilities.
A bit depth of 16 has over 65,000 possibilities!
The more possible numbers, the more accurate each sample can be.
In the picture below, the top picture is two-bit, and the bottom picture uses
more bits.
On the picture below, the top picture has 8 bits and the bottom picture has 16 bits.
Both have the same sampling rate. You can see that the 16 bit version is more
accurate to the analog wave than the 8 bit version.
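The pattern behind those numbers is that each added bit doubles the count of possible amplitude values, i.e. 2 raised to the bit depth. A one-line Python sketch (the function name is just for illustration):

```python
def num_levels(bit_depth):
    """A bit depth of n gives 2**n possible amplitude values."""
    return 2 ** bit_depth

print(num_levels(4))   # 16
print(num_levels(7))   # 128
print(num_levels(16))  # 65536 -- the "over 65,000" used for CD audio
```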
Buffer Size
When recording audio into your computer, your audio interface needs some time
to process the incoming information. The amount of time allotted for processing is
called the Buffer Size. Often a smaller buffer size is desirable, but not one
that is too small. Here's why:
If you have a very large buffer size, you will notice a lag between when you
speak into the mic and when the sound of your voice comes out of your speakers.
While this can be very annoying, a large buffer size also makes recording audio
less demanding on your computer.
If you have a very small buffer size, you will notice little to no lag between
speaking into the mic and the audio coming out of the speakers. This makes
recording and hearing your own singing much easier; however, it also places
more strain on your computer, which has very little time to process the audio.
You can fix this by increasing your Buffer Size to something slightly larger. After
some experimentation, you will find the right balance.
When recording audio to a computer, increase the buffer size and monitor the
recording through the audio interface's monitor mix. That way, you get the best
quality. If you monitor through the device rather than the software program,
you will have no delay in the sound; if you monitor through the software
program, you will have delay.
When recording MIDI, lower the buffer size. The quality of the audio isn't as
important as having little to no delay.
Latency
Latency is the amount of delay in the sound. It can be the delay between the time
you press down a key to the time you hear it, or the time between when you speak
and you hear your voice. Latency is measured in milliseconds, or thousandths of a
second.
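The link between buffer size and latency is simple division: the buffer length in samples divided by the sample rate gives the time the interface waits before handing audio to the computer. A quick Python sketch (function name for illustration only):

```python
def buffer_latency_ms(buffer_samples, sample_rate_hz):
    """Time spent filling one buffer, in milliseconds."""
    return buffer_samples / sample_rate_hz * 1000

# Typical settings at a 44.1 kHz sample rate:
print(round(buffer_latency_ms(1024, 44100), 1))  # 23.2 ms -- an audible lag
print(round(buffer_latency_ms(64, 44100), 1))    # 1.5 ms -- nearly imperceptible
```

This is why lowering the buffer size makes MIDI playing feel responsive, at the cost of more strain on the computer.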
Nyquist Frequency
The Nyquist frequency, named after Swedish-American engineer Harry Nyquist, is
half the sampling frequency and indicates the highest sound that can be
recorded. So, if your audio interface is sampling at 44.1 kHz, it will be able
to pick up frequencies up to about 22 kHz (more than adequate for the human
ear). If your audio interface is sampling at 22 kHz, the highest frequency it
can record is only 11 kHz. You can hear the difference: audio sampled at 22 kHz
sounds dull, like it's coming over a phone!
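The half-the-sample-rate rule is one division, sketched here in Python (the function name is just for illustration):

```python
def nyquist_hz(sample_rate_hz):
    """Highest frequency a given sample rate can capture: half the rate."""
    return sample_rate_hz / 2

print(nyquist_hz(44100))  # 22050.0 -- just above the limit of human hearing
print(nyquist_hz(22000))  # 11000.0 -- telephone-like quality
```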
*Depending on the send volume type, the audio from a track will be sent at one of
the starred * points in the audio signal flow.
Introduction to Pan
Pan
Indicates whether you want the sound to come out of the right or left speaker.
Adjusted in Mixcraft for each individual track using the butterfly shaped parameter
above the Mute button.
Applying Pan to Drums
You will need to decide if you want to apply pan based on the point of view of the
drummer vs. the point of view of the audience. Either way, make sure that the
drum set is consistent based on the location of the drums. (See above)
Below is an example of panning from the drummer's perspective. You can do it
either way; just make sure you are consistent!
Dynamics Processing
Dynamics: loudness, measured in dB (decibels). Remember that decibels indicate
perceived loudness and, based on the Fletcher-Munson curves, may differ from
absolute loudness. Because of the shape of the pinna and inner ear, humans hear
certain frequencies more easily than others.
Threshold of Hearing: the softest sound that humans can hear, which is 0 dB.
(In a recording system, the noise floor is the level of the system's background
noise.)
Distortion: the point at which a sound becomes so loud, that it changes the
timbre. Distortion adds a certain amount of buzz to the sound. The buzz comes
from the upper harmonics that become present when the sound becomes very
loud.
Drive: basically a volume knob, but one designed to add volume at a level that
causes distortion.
Can you have soft distortion?
Yes, by overloading the pre-amps. Remember that there are many different levels
of gain staging and there is potential for distortion at each level. So you could
overload the mic preamp, but it may not sound loud in your headphones because
the master fader is down.
Activities:
Add distortion to a track by using the Amp Simulator in Mixcraft.
Check out the Boost 11 plugin, a free mastering plugin that will boost your
song's loudness. Watch out, though: it's tuned for rap/hip hop, so it will also
boost the bass frequencies. This plugin was designed to create radio mixes
(i.e., songs that would be heard on the radio).
Dynamics Processors
Expanders/Gates
Limiters/Compressors
Expander/Gate
Noise Gate: this plugin works by creating silence when the main instruments cut out and all you can hear is noise.
For instance, when you record with an electric guitar, a certain amount of noise will be present from the amp. You don't want that noise to be part of the mix, though, so you can add a noise gate, which creates silence when the instrument is not playing. It basically detects the threshold of the noise and then lets only the sound above that threshold pass through.
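The core of a gate is a single comparison per sample. The sketch below shows only that idea; a real gate also uses attack and release times so it opens and closes smoothly instead of chattering. The function name and threshold value are my own.

```python
def noise_gate(samples, threshold):
    """Mute any sample whose absolute level falls below the threshold.

    Sound above the threshold passes through unchanged; sound below
    it (e.g., amp hiss between phrases) is replaced with silence.
    """
    return [s if abs(s) >= threshold else 0.0 for s in samples]
```

With a threshold of 0.05, quiet hiss samples like 0.01 are zeroed out while the actual guitar signal passes untouched.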
Compression
Compression: an audio effect that turns down the loudest parts of a signal, reducing its dynamic range.
Have you ever been in the library when there was too much noise and the librarian shushed everyone? That is like compression. Basically, when the music reaches a certain loudness (the threshold), the compressor kicks in and turns everything above it down by a certain ratio.
What does compression sound like?
Search "Katy Perry Firework chorus isolated vocals" on YouTube to hear this in action.
Anything by Adam Lambert
http://www.youtube.com/watch?v=X1Fqn9du7xo
Knee: the word knee, when applied to a compressor, describes how the gain curve bends at the threshold. If the bend is gradual and curved, it's a soft knee. If it's an abrupt, acute angle, it's called a hard knee. With a hard knee, the full ratio is applied the instant the signal crosses the threshold. So if the threshold is set at -16 dB and the source gets louder than -16 dB, a hard knee immediately clamps down on everything above -16 dB at the full ratio. With a soft knee, the compressor kicks in gradually as the source sound rises toward and past -16 dB.
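Threshold, ratio, and knee can all be captured in one small gain formula. The sketch below uses a common quadratic soft-knee blend (the kind described in DSP textbooks); the function name and default values are my own, not any specific plugin's.

```python
def compressor_gain_db(level_db, threshold_db=-16.0, ratio=4.0, knee_db=0.0):
    """Return the compressed output level (dB) for an input level (dB).

    knee_db == 0 is a hard knee: the full ratio applies the instant
    the signal crosses the threshold. A wider knee_db blends the
    ratio in gradually over a window centered on the threshold.
    """
    overshoot = level_db - threshold_db
    if knee_db > 0 and abs(overshoot) < knee_db / 2.0:
        # inside the knee: quadratic blend between 1:1 and the ratio
        x = overshoot + knee_db / 2.0
        return level_db + (1.0 / ratio - 1.0) * (x * x) / (2.0 * knee_db)
    if overshoot <= 0:
        return level_db            # below threshold: unchanged
    return threshold_db + overshoot / ratio
```

For example, with the defaults (threshold -16 dB, ratio 4:1, hard knee), an input of -10 dB overshoots by 6 dB but only 6/4 = 1.5 dB of that comes through, so the output is -14.5 dB.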
Appendix A
Skill-Based Tutorials
Moving clips around: grab the clip by the handle, the top part of the clip (green).
Mass moving clips: you can select multiple clips and then move them all at the same time by one clip's handle.
Zoom: the process of viewing the song closer up and farther away. This is very important when editing!
The zoom buttons for horizontal zoom are located at the top.
For vertical zoom, hold your cursor under a track until you see the two lines, then click and drag to zoom up and down.
Playhead vs. the two notches in a track: adjust the playhead by clicking in the dark strip at the top. Notice the two notches don't follow; adjust the two notches by clicking in a track.
3. Click in the track where you want to zoom so you have the two notches at
the start of the first silence.
4. Click on the zoom plus button until you zoom all the way in.
5. Right-click and split at the point where there is no waveform. Try to be as exact as possible.
Notice that once you put your cursor in the clip, the white volume line appears and obscures the view of the waveform. Just make sure you're as exact as possible.
6. Repeat steps 3 through 5 for the end of the silence.
8. Move the second clip so it's almost touching the first clip.
9. Click at the end of the first clip to place the two notches at the end of the first section, so it will be ready to zoom into that area.
10. To get this as exact as possible, zoom almost all the way in.
11. Move the two clips so they are right up next to each other. Listen for any tempo change or static. Adjust as necessary.
12. Zoom out and repeat steps 3 through 11 for the second set of silence.
Objective: the student can crossfade two parts of a song without losing or gaining time, while also keeping the proper chord progression.
To do this:
1. The file should be located on your desktop.
2. Add the sound file by going to Mix > Add Sound File. Navigate to the desktop and find it.
3. Delete most of the silence (select what you want to delete and hit Delete).
8. At 42:244, put a marker by right-clicking in the timeline and selecting Add Marker.
You can put your marker in the general area, then zoom in and look at the time readout at the bottom to know where you are.
9. Title the marker "A" and press OK. This is the point where the chord changes.
10. Now you're going to have to use your musical ears to finish. The assignment is to move the second clip so the trumpet sixteenth notes lead into the chord change. The clips will overlap a bit.
Here's how I do this:
The high note on the trumpet needs to go where the marker is.
Listen and figure out where that is.
Then grab the clip at that point and put it where the marker is.
Finished product:
Notice the crossfade actually happens a little before the marker, allowing one to hear the trumpet sixteenth notes going into the chord change. Also notice the crossfade extends a little past the marker; this is the first clip getting softer while the second clip is almost at full volume.
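A crossfade like the one above is just two overlapping gain ramps: the tail of the first clip fades out while the head of the second fades in. The sketch below uses equal-power (sine/cosine) curves, a common choice because the combined level stays steady through the transition; the function name and sample-list representation are my own.

```python
import math

def equal_power_crossfade(clip_a, clip_b, overlap):
    """Crossfade the tail of clip_a into the head of clip_b over
    `overlap` samples. Equal-power curves (cos for the fade-out,
    sin for the fade-in) avoid the loudness dip a linear fade causes.
    """
    out = list(clip_a[:-overlap]) if overlap else list(clip_a)
    for i in range(overlap):
        t = (i + 0.5) / overlap * math.pi / 2.0
        fade_out = math.cos(t) * clip_a[len(clip_a) - overlap + i]
        fade_in = math.sin(t) * clip_b[i]
        out.append(fade_out + fade_in)
    out.extend(clip_b[overlap:])
    return out
```

Note that the result is shorter than the two clips laid end to end by exactly the overlap length, which is why a crossfade done this way neither loses nor gains time relative to the overlapped edit.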
You'll notice that many instrument sounds are already mixed to stereo. This is a lead sound in Rapture.
Mono:
This is a mono track. Notice that there is signal in both outputs, but it's the same signal.
Parametric EQ: a type of equalizer where you choose the frequency, gain, and bandwidth yourself (the greatest amount of control).
Graphic EQ: a type of equalizer where the bandwidth is set and you use faders at preset frequencies to adjust the levels.
The frequency boost in the picture above has a bandwidth of about 1 kHz to 5 kHz. The greatest gain is at 2.1 kHz.
Q: the sharpness of the bandwidth (whether the change is gradual or extreme).
Filter: the shape of the bandwidth.
Examples include:
Shelf filter: a shelf filter raises or lowers all the frequencies above or below a certain point. The icon to select this type of filter usually looks like a wishbone.
A low shelf filter has the icon below:
A high shelf filter has the icon below:
Example 1: the picture below shows a low shelf filter. All of the frequencies below 100 Hz are made softer.
Example 2: the picture below shows a high shelf filter. All of the frequencies above 10 kHz are made louder.
Low-pass filter: allows only low frequencies to be heard; the low frequencies pass through.
High-pass filter: allows only high frequencies to be heard; the high frequencies pass through.
Notch (or peak/bell) filter: raises or lowers a narrow band around a certain frequency.
You can combine multiple frequency adjustments on one track or over an entire
song. There is an art to creating good EQ for a track, mix, instrument, or song.
Period _____
Listen to the following examples of music and describe the aesthetic of the genre.

Rock: The Black Keys, "Run Right Back"
Prominence of the bass guitar -
Ambience (close up or far away) -
Prominence of vocals -
Sound of the snare -
Prominence of bass drum -
Prominence of cymbals -
Live sound or studio? -
Change of song from verse to chorus -
Audience -

_______________________________
Prominence of the bass guitar -
Ambience (close up or far away) -
Prominence of vocals -
Sound of the snare -
Prominence of bass drum -
Prominence of cymbals -
Live sound or studio? -
Change of song from verse to chorus -
Audience -

_______________________________
Prominence of the bass guitar -
Ambience (close up or far away) -
Prominence of vocals -
Sound of the snare -
Prominence of bass drum -
Prominence of cymbals -
Live sound or studio? -
Change of song from verse to chorus -
Audience -

_______________________________
Prominence of the bass guitar -
Ambience (close up or far away) -
Prominence of vocals -
Sound of the snare -
Prominence of bass drum -
Prominence of cymbals -
Live sound or studio? -
Change of song from verse to chorus -
Audience -

_______________________________
Prominence of the bass guitar -
Ambience (close up or far away) -
Prominence of vocals -
Sound of the snare -
Prominence of bass drum -
Prominence of cymbals -
Live sound or studio? -
Change of song from verse to chorus -
Audience -

_______________________________
Prominence of the bass guitar -
Ambience (close up or far away) -
Prominence of vocals -
Sound of the snare -
Prominence of bass drum -
Prominence of cymbals -
Live sound or studio? -
Change of song from verse to chorus -
Audience -
Remember, thanks to the work of many scientists, we have learned that humans hear certain frequencies louder than others, namely the 1 to 3 kHz range (the same range as a baby's cry). Make sure you listen to your songs at around 83 decibels to get the most accurate sense of the frequency range: if it's too soft, you won't hear the bass. You will learn various techniques to differentiate the sounds of the different instruments. You will also learn how to use the volume of the whole song to build and release tension.
Mixdown Project
Problem: make this song sound good. It is currently distorted.
Assignment: try to mix down this song using volumes so that the bass is as loud as possible without distorting, which is appropriate for this genre (a crossover metal/hip hop/electronic feel).
Technique:
1. Always keep the master fader at 100%. Do NOT try to compensate by turning down the master volume.
2. Adjust the individual track volumes to achieve the desired effect.
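The technique above can be sketched in code: the mix is just the sum of every track scaled by its own fader, with the master at unity, and "distortion" here means the summed peak exceeding full scale (1.0). This is a simplified model, with names of my own; a real DAW sums the same way but with more machinery around it.

```python
def mix_tracks(tracks, faders):
    """Sum tracks sample by sample, each scaled by its fader (0.0-1.0),
    with the master fader left at unity gain.

    Returns the mix and its peak level so you can check for clipping:
    a peak above 1.0 means the track faders, not the master, need to
    come down.
    """
    length = max(len(t) for t in tracks)
    mix = []
    for i in range(length):
        s = sum(f * (t[i] if i < len(t) else 0.0)
                for t, f in zip(tracks, faders))
        mix.append(s)
    peak = max(abs(s) for s in mix)
    return mix, peak
```

If the peak comes back above 1.0, lowering the master fader would only hide the overload on the meter; the fix is to rebalance the individual faders, which is exactly what step 2 asks for.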