Unit 1
MODULATION SYSTEMS
Modulation is the process of varying one or more properties of a high-frequency periodic waveform, called the carrier signal, in order to use that signal to convey a message, in a similar fashion as a musician may modulate the tone from a musical instrument by varying its volume, timing and pitch. Normally a high-frequency sinusoid waveform is used as carrier signal. The three key parameters of a sine wave are its amplitude ("volume"), its phase ("timing") and its frequency ("pitch"), all of which can be modified in accordance with a low-frequency information signal to obtain the modulated signal. A device that performs modulation is known as a modulator, and a device that performs the inverse operation is known as a demodulator (sometimes detector or demod). A device that can do both operations is a modem (short for "MOdulate-DEModulate").
A simple example: a telephone line is designed for transferring audible sounds, for example tones, and not digital bits (zeros and ones). Computers may nevertheless communicate over a telephone line by means of modems, which represent the digital bits by tones, called symbols. You could say that modems play music for each other. If there are four alternative symbols (corresponding to a musical instrument that can generate four different tones, one at a time), the first symbol may represent the bit sequence 00, the second 01, the third 10 and the fourth 11. If the modem plays a melody consisting of 1000 tones per second, the symbol rate is 1000 symbols/second, or baud. Since each tone represents a message consisting of two digital bits in this example, the bit rate is twice the symbol rate, i.e. 2000 bits per second.
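The modem arithmetic above can be sketched in a few lines of Python (a minimal illustration; the four-tone alphabet and the 1000 baud rate are taken from the example):

```python
import math

def bits_per_symbol(m_symbols: int) -> int:
    """Number of bits each symbol carries for an alphabet of M symbols."""
    return int(math.log2(m_symbols))

def bit_rate(symbol_rate_baud: float, m_symbols: int) -> float:
    """Bit rate = symbol rate x bits per symbol."""
    return symbol_rate_baud * bits_per_symbol(m_symbols)

# Four tones (00, 01, 10, 11) played at 1000 symbols/second:
print(bits_per_symbol(4))   # 2 bits per tone
print(bit_rate(1000, 4))    # 2000.0 bits per second
```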
The aim of digital modulation is to transfer a digital bit stream over an analog bandpass
channel, for example over the public switched telephone network (where a filter limits
the frequency range to between 300 and 3400 Hz) or a limited radio frequency band.
The aim of analog modulation is to transfer an analog lowpass signal, for example an audio signal or TV signal, over an analog bandpass channel, for example a limited radio frequency band or a cable network channel. Analog and digital modulation both facilitate frequency-division multiplexing (FDM), where several lowpass information signals are transferred simultaneously over the same shared physical medium, using separate passband channels (several different carrier frequencies).
The aim of digital baseband modulation methods, also known as line coding, is to transfer a digital bit stream over a lowpass channel, typically a non-filtered copper wire such as a serial bus or a wired local area network.
The aim of pulse modulation methods is to transfer a narrowband analog signal, for example a phone call, over a wideband lowpass channel or, in some of the schemes, as a bit stream over another digital transmission system, by varying a pulse train in accordance with the analog information signal.
In analog modulation, the modulation is applied continuously in response to the analog information signal. Common analog modulation techniques are:
• Amplitude modulation (AM) (here the amplitude of the carrier signal is varied)
• Angle modulation
o Frequency modulation (FM) (here the frequency of the modulated signal is varied)
o Phase modulation (PM) (here the phase shift of the modulated signal is varied)
In digital modulation, an analog carrier signal is modulated by a digital bit stream. Digital modulation methods can be considered as digital-to-analog conversion, and the corresponding demodulation or detection as analog-to-digital conversion. The changes in the carrier signal are chosen from a finite number of M alternative symbols (the modulation alphabet).
• In the case of PSK, a finite number of phases are used.
• In the case of FSK, a finite number of frequencies are used.
• In the case of ASK, a finite number of amplitudes are used.
• In the case of QAM, a finite number of at least two phases, and at least two amplitudes are used.
In QAM, an inphase signal (the I signal, for example a cosine waveform) and a
quadrature phase signal (the Q signal, for example a sine wave) are amplitude modulated
with a finite number of amplitudes, and summed. It can be seen as a two-channel system,
each channel using ASK. The resulting signal is equivalent to a combination of PSK and
ASK.
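This equivalence is easy to check numerically (an illustrative sketch; the I/Q amplitudes below are arbitrary): summing an amplitude-modulated cosine and sine is the same as transmitting one sinusoid whose amplitude and phase are set by the (I, Q) pair, i.e. ASK combined with PSK.

```python
import math

def qam_sample(i_amp, q_amp, t, fc=1.0):
    """One time sample of I*cos(2*pi*fc*t) + Q*sin(2*pi*fc*t)."""
    w = 2 * math.pi * fc * t
    return i_amp * math.cos(w) + q_amp * math.sin(w)

def as_amplitude_phase(i_amp, q_amp):
    """The same (I, Q) pair expressed as a single amplitude and phase."""
    return math.hypot(i_amp, q_amp), math.atan2(q_amp, i_amp)

# I*cos(w) + Q*sin(w) equals A*cos(w - phi), with A and phi from (I, Q):
i_amp, q_amp = 3.0, 1.0
amp, phi = as_amplitude_phase(i_amp, q_amp)
for t in (0.0, 0.1, 0.2, 0.3):
    w = 2 * math.pi * t
    assert abs(qam_sample(i_amp, q_amp, t) - amp * math.cos(w - phi)) < 1e-9
```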
In all of the above methods, each of these phases, frequencies or amplitudes is assigned a unique pattern of binary bits. Usually, each phase, frequency or amplitude encodes an equal number of bits. This number of bits comprises the symbol that is represented by the particular phase, frequency or amplitude.
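For instance, assigning 2-bit patterns to carrier phases as in QPSK can be sketched as follows (the Gray-coded phase assignment below is one of several possible conventions, chosen for illustration):

```python
import math

# Each 2-bit codeword is assigned a unique carrier phase (a Gray-coded
# QPSK mapping; adjacent phases differ by only one bit):
QPSK_PHASES = {
    "00": math.pi / 4,
    "01": 3 * math.pi / 4,
    "11": 5 * math.pi / 4,
    "10": 7 * math.pi / 4,
}

def phases_for_bits(bits: str):
    """Split a bit string into 2-bit symbols and look up each phase."""
    return [QPSK_PHASES[bits[k:k + 2]] for k in range(0, len(bits), 2)]

# 8 bits become 4 symbols, so the symbol rate is half the bit rate:
assert len(phases_for_bits("00011110")) == 4
```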
If the alphabet consists of M = 2^N alternative symbols, each symbol represents a message consisting of N bits. If the symbol rate (also known as the baud rate) is fS symbols/second, the data rate is N·fS bit/second. For example, with an alphabet consisting of 16 alternative symbols, each symbol represents 4 bits. Thus, the data rate is four times the baud rate.
In the case of PSK, ASK or QAM, where the carrier frequency of the modulated signal is constant, the modulation alphabet is often conveniently represented on a constellation diagram, showing the amplitude of the I signal at the x-axis, and the amplitude of the Q signal at the y-axis, for each symbol.
PSK and ASK, and sometimes also FSK, are often generated and detected using the principle of QAM. The I and Q signals can be combined into a complex-valued signal I+jQ (where j is the imaginary unit). The resulting so-called equivalent lowpass signal or equivalent baseband signal is a complex-valued representation of the real-valued modulated physical signal (the so-called passband signal or RF signal).
These are the general steps used by the modulator to transmit data:
1. Group the incoming data bits into codewords, one for each symbol that will be transmitted.
2. Map the codewords to attributes, for example amplitudes of the I and Q signals (the equivalent lowpass signal), or frequency or phase values.
3. Apply pulse shaping or some other filtering to limit the bandwidth and form the spectrum of the equivalent lowpass signal, typically using digital signal processing.
4. Perform digital-to-analog conversion (DAC) of the I and Q signals (since today all of the above is normally achieved using digital signal processing, DSP).
5. Generate a high-frequency sine wave carrier waveform, and perhaps also a cosine quadrature component. Carry out the modulation, for example by multiplying the sine and cosine waveforms with the I and Q signals, so that the equivalent lowpass signal is frequency shifted into a modulated passband signal or RF signal. Sometimes this step is also achieved using digital signal processing, in which case the above DAC step should be done after this step.
6. Amplification and analog bandpass filtering to avoid harmonic distortion and periodic spectrum.
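The mapping and mixing steps above can be sketched end to end in a toy example (hypothetical 4-point constellation and carrier parameters; pulse shaping and the DAC step are omitted for brevity):

```python
import math

# Steps 1-2: group bits into 2-bit codewords and map them to (I, Q)
# amplitudes of a hypothetical 4-point constellation.
CONSTELLATION = {"00": (1, 1), "01": (-1, 1), "11": (-1, -1), "10": (1, -1)}

def modulate(bits, fc=4.0, samples_per_symbol=16):
    """Step 5: multiply cos/sin carriers by the I and Q amplitudes and sum."""
    signal = []
    for k in range(0, len(bits), 2):
        i_amp, q_amp = CONSTELLATION[bits[k:k + 2]]
        for n in range(samples_per_symbol):
            t = n / samples_per_symbol
            w = 2 * math.pi * fc * t
            signal.append(i_amp * math.cos(w) + q_amp * math.sin(w))
    return signal

rf = modulate("0011")
print(len(rf))  # 32 samples: 2 symbols x 16 samples each
```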
At the receiver side, the demodulator typically performs:
1. Bandpass filtering.
2. Automatic gain control, AGC (to compensate for attenuation, for example fading).
3. Frequency shifting of the RF signal to the equivalent baseband I and Q signals, or to an intermediate frequency (IF) signal, by multiplying the RF signal with a local oscillator sine wave and cosine wave (see the superheterodyne receiver principle).
4. Sampling and analog-to-digital conversion (ADC).
5. Equalization filtering, for example a matched filter.
6. Detection of the amplitudes of the I and Q signals, or the frequency or phase of the IF signal.
7. Quantization of the amplitudes, frequencies or phases to the nearest allowed symbol values.
8. Mapping of the quantized amplitudes, frequencies or phases to codewords (bit groups).
9. Parallel-to-serial conversion of the codewords into a bit stream.
10. Pass the resultant bit stream on for further processing such as removal of any error-correcting codes.
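Step 7, quantizing a noisy (I, Q) measurement to the nearest allowed symbol value, can be sketched as follows (the 4-point alphabet is a hypothetical illustration):

```python
import math

# Allowed symbol values of a hypothetical 4-point (QPSK-like) alphabet:
SYMBOLS = {(1, 1): "00", (-1, 1): "01", (-1, -1): "11", (1, -1): "10"}

def decide(i_meas: float, q_meas: float) -> str:
    """Pick the constellation point closest to the noisy (I, Q) measurement."""
    nearest = min(SYMBOLS, key=lambda p: math.dist(p, (i_meas, q_meas)))
    return SYMBOLS[nearest]

# A noisy measurement near (1, 1) still decodes to the codeword "00":
print(decide(0.8, 1.3))  # "00"
```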
As is common to all digital communication systems, the design of both the modulator and demodulator must be done simultaneously. Digital modulation schemes are possible because the transmitter-receiver pair have prior knowledge of how data is encoded and represented in the communications system. In all digital communication systems, both the modulator at the transmitter and the demodulator at the receiver are structured so that they perform inverse operations.
Non-coherent modulation methods do not require a receiver reference clock signal that is phase synchronized with the sender carrier signal. In this case, modulation symbols (rather than bits, characters, or data packets) are asynchronously transferred. The opposite is coherent modulation.
Common digital modulation methods include variants of PSK (such as QPSK and π/4–QPSK), FSK (such as MSK and GMSK), ASK, QAM, OFDM and wavelet modulation.
MSK and GMSK are particular cases of continuous phase modulation (CPM). Indeed, MSK is a particular case of the sub-family of CPM known as continuous-phase frequency-shift keying (CPFSK), which is defined by a rectangular frequency pulse (i.e. a linearly increasing phase pulse) of one symbol-time duration (total response signaling).
OFDM is based on the idea of Frequency Division Multiplex (FDM), but is utilized as a
digital modulation scheme. The bit stream is split into several parallel data streams, each
transferred over its own sub-carrier using some conventional digital modulation scheme.
The modulated sub-carriers are summed to form an OFDM signal. OFDM is considered
as a modulation technique rather than a multiplex technique, since it transfers one bit
stream over one communication channel using one sequence of so-called OFDM
symbols. OFDM can be extended to a multi-user channel access method in the Orthogonal Frequency Division Multiple Access (OFDMA) and Multi-Carrier Code Division Multiple Access (MC-CDMA) schemes, allowing several users to share the same physical medium by giving different sub-carriers or spreading codes to different users.
Non-linear switching amplifiers cost less and use less battery power than linear amplifiers of the same output power. However, they only work with relatively constant-amplitude signals such as angle modulation (FSK or PSK) and CDMA, but not with QAM and OFDM. Nevertheless, even though switching amplifiers are completely unsuitable for normal QAM constellations, the QAM modulation principle is often used to drive switching amplifiers with these FM and other waveforms, and sometimes QAM demodulators are used to receive the signals put out by these switching amplifiers.
The term digital baseband modulation is synonymous with line coding: methods for transferring a digital bit stream over an analog lowpass channel using a pulse train, i.e. a discrete number of signal levels, by directly modulating the voltage or current on a cable.
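As a concrete instance of a line code, Manchester coding (one common choice, used here purely as an illustration) turns each bit into a pair of voltage levels:

```python
def manchester_encode(bits):
    """IEEE 802.3 convention: 0 -> high-then-low, 1 -> low-then-high."""
    levels = []
    for b in bits:
        levels += [+1, -1] if b == 0 else [-1, +1]
    return levels

# Every bit produces a mid-bit transition, which keeps the signal DC-free
# and lets the receiver recover the clock from the data itself:
print(manchester_encode([1, 0, 1]))  # [-1, 1, 1, -1, -1, 1]
```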
Pulse modulation schemes aim at transferring a narrowband analog signal over an analog lowpass channel as a two-level quantized signal, by modulating a pulse train. Some pulse modulation schemes also allow the narrowband analog signal to be transferred as a digital signal (i.e. as a quantized discrete-time signal) with a fixed bit rate, which can be transferred over an underlying digital transmission system, for example some line code. These are not modulation schemes in the conventional sense since they are not channel coding schemes, but should be considered as source coding schemes, and in some cases analog-to-digital conversion techniques.
TRANSMISSION MEDIUM
A transmission line is a material medium or structure that forms a path from one place to another for directing the transmission of energy, such as electromagnetic waves or electric power. Components of transmission lines include wires, coaxial cables, dielectric slabs, optical fibers and waveguides.
For the purposes of analysis, an electrical transmission line can be modelled as a two-port network (also called a quadrupole network), as follows:
In the simplest case, the network is assumed to be linear (i.e. the complex voltage across
either port is proportional to the complex current flowing into it when there are no
reflections), and the two ports are assumed to be interchangeable. If the transmission line
is uniform along its length, then its behaviour is largely described by a single parameter
called the characteristic impedance, symbol Z0. This is the ratio of the complex voltage
of a given wave to the complex current of the same wave at any point on the line. Typical
values of Z0 are 50 or 75 ohms for a coaxial cable, about 100 ohms for a twisted pair of wires, and about 300 ohms for a common type of untwisted pair used in radio transmission.
When sending power down a transmission line, it is usually desirable that as much power as possible will be absorbed by the load and as little as possible will be reflected back to the source. This can be ensured by making the source and load impedances equal to Z0, in which case the transmission line is said to be matched.
Some of the power that is fed into a transmission line is lost because of its resistance.
This effect is called ohmic or resistive loss (see ohmic heating). At high frequencies,
another effect called dielectric loss becomes significant, adding to the losses caused by
resistance. Dielectric loss is caused when the insulating material inside the transmission
line absorbs energy from the alternating electric field and converts it to heat (see
dielectric heating).
The total loss of power in a transmission line is often specified in decibels per metre (dB/m), and usually depends on the frequency of the signal. The manufacturer often supplies a chart showing the loss in dB/m at a range of frequencies.
High-frequency transmission lines can be defined as transmission lines that are designed
to carry electromagnetic waves whose wavelengths are shorter than or comparable to the
length of the line. Under these conditions, the approximations useful for calculations at
lower frequencies are no longer accurate. This often occurs with radio, microwave and
optical signals, and with the signals found in high-speed digital circuits.
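The matched condition described above can be quantified with the reflection coefficient Γ = (ZL − Z0)/(ZL + Z0): when the load impedance equals the characteristic impedance, nothing is reflected. A small sketch with illustrative values:

```python
def reflection_coefficient(z_load: complex, z0: complex) -> complex:
    """Gamma = (ZL - Z0) / (ZL + Z0) at the load end of the line."""
    return (z_load - z0) / (z_load + z0)

# Matched 50-ohm load on 50-ohm coax: no reflection.
print(abs(reflection_coefficient(50, 50)))        # 0.0
# 75-ohm load on 50-ohm coax: 20% of the wave's voltage is reflected.
print(abs(reflection_coefficient(75, 50)))        # 0.2
# The reflected power fraction is |Gamma|^2, here about 4%.
print(abs(reflection_coefficient(75, 50)) ** 2)
```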
LINE LOSSES. Some line losses occur in all lines. Line losses may be any of three types: COPPER, DIELECTRIC, and RADIATION or INDUCTION LOSSES.
Copper Losses
One type of copper loss is I²R LOSS. In rf lines the resistance of the conductors is never zero, so whenever current flows, some energy is dissipated as heat. This heat loss is a POWER LOSS. With copper braid, which has a higher resistance than solid wire, this power loss is higher. Another type of copper loss is due to SKIN EFFECT. As the frequency of the applied ac increases, the opposition to current flow in the center of the wire increases. Current in the center of the wire becomes smaller and most of the electron flow is on the wire surface. At high enough frequencies, the electron movement in the center is so small that the center of the wire could be removed without any noticeable effect on current flow; this is the skin effect. Because current is confined to a thin surface layer, the effective resistance can be lowered by plating the wire with silver; since silver is a better conductor than copper, most of the current will flow through the silver layer.
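The skin effect can be quantified by the skin depth δ = sqrt(2ρ/(ωμ)), the depth at which current density falls to 1/e of its surface value. A sketch using textbook constants for copper (the resistivity value is a commonly quoted room-temperature figure):

```python
import math

MU_0 = 4 * math.pi * 1e-7   # permeability of free space, H/m
RHO_COPPER = 1.68e-8        # resistivity of copper, ohm*m (room temperature)

def skin_depth(freq_hz: float, rho: float = RHO_COPPER, mu: float = MU_0) -> float:
    """Depth (m) at which current density drops to 1/e of the surface value."""
    omega = 2 * math.pi * freq_hz
    return math.sqrt(2 * rho / (omega * mu))

# At RF the current is confined to a very thin surface layer:
print(skin_depth(60))       # ~8.4 mm at mains frequency
print(skin_depth(100e6))    # ~6.5 micrometres at 100 MHz
```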
STANDING WAVES
The standing waves on a transmission line depend on how the line is terminated. Each type of termination has a characteristic effect on the standing waves on the line; from the nature of the standing waves, you can determine the type of termination that produces them. The voltage and current along the line can be measured with an ac meter as it is moved along the length of the line. As illustrated in figure 3-34, view A, the curve, provided there are no losses in the line, will be a straight line. If there are losses in the line, the amplitude of the voltage and current will diminish as they move down the line (view B). The losses are due to dc resistance in the line.
IMPEDANCE MATCHING
Impedance matching is the practice of making the output impedance ZS of a source equal to the input impedance ZL of the load to which it is ultimately connected, usually in order to maximize the power transfer and minimize reflections from the load. The concept of impedance matching was originally developed for electrical power, but can be applied to any other field where a form of energy (not just electrical) is transferred between a source and a load.
RADIO PROPAGATION
Radio propagation is a term used to explain how radio waves behave when they are transmitted, or propagated, from one point on the Earth to another.
[Figure: radio signals split into two components (the ordinary component in red and the extraordinary component in green) when penetrating the ionosphere, shown for two separate signals of differing transmitted elevation angles.]
In free space, all electromagnetic waves (radio, light, X-rays, etc.) obey the inverse-square law, which states that the power density of an electromagnetic wave is proportional to the inverse of the square of r (where r is the distance [radius] from the source):

power density ∝ 1/r²

Doubling the distance from a transmitter therefore reduces the power density of the radiated wave to one-quarter of its previous value. Far from the source, the electric and magnetic field components of the electromagnetic radiation are in a fixed ratio, and their field strengths are inversely proportional to distance. The power density per surface unit is proportional to the product of the two field strengths, which are expressed in linear units. Thus, doubling the propagation path distance from the transmitter reduces their received field strengths over a free-space path by one-half.
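The inverse-square relationship is easy to check numerically. For an isotropic source, the free-space power density is S = Pt/(4πr²); the 100 W transmit power below is an arbitrary illustration:

```python
import math

def power_density(p_tx_watts: float, r_m: float) -> float:
    """Free-space power density (W/m^2) of an isotropic radiator."""
    return p_tx_watts / (4 * math.pi * r_m ** 2)

s1 = power_density(100.0, 1000.0)   # 100 W transmitter observed at 1 km
s2 = power_density(100.0, 2000.0)   # same transmitter observed at 2 km
print(s2 / s1)  # 0.25 -- doubling the distance quarters the power density
```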
Radio propagation is also affected by the signal's path from point to point. This path can be a direct line of sight path or an over-the-horizon path aided by refraction in the ionosphere. Factors influencing ionospheric radio signal propagation can include sporadic-E, spread-F, solar flares and geomagnetic storms.
Lower frequencies (between 30 and 3,000 kHz) have the property of following the curvature of the earth via groundwave propagation in the majority of occurrences. The many variables involved make ionospheric propagation more complex to predict and analyze than propagation in free space (see image above). Partly for this reason, some long-distance shortwave broadcasting services have been moved to satellite transmitters. A satellite link, though expensive, can offer highly predictable and stable line of sight coverage of a given area.
X-ray radiation associated with a solar flare ionizes the ionospheric D-region. Enhanced ionization in that region increases the absorption of radio signals passing through it. During the strongest solar x-ray flares, complete absorption of virtually all ionospherically propagated radio signals in the sunlit hemisphere can occur. These solar flares can disrupt HF radio communication for hours at a time.
PROPAGATION
Space Waves, also known as direct waves, are radio waves that travel directly from the
transmitting antenna to the receiving antenna. In order for this to occur, the two antennas
must be able to “see” each other; that is there must be a line of sight path between them.
The diagram on the next page shows a typical line of sight. The maximum line of sight
distance between two antennas depends on the height of each antenna. If the heights are known, the maximum line of sight distance can be calculated. When obstacles lie near the path, it is possible for radio waves to be reflected by these obstacles, resulting in radio waves that
arrive at the receive antenna from several different directions. Because the length of each
path is different, the waves will not arrive in phase. They may reinforce each other or
cancel each other, depending on the phase differences. This situation is known as
multipath propagation. It can cause major distortion to certain types of signals. Ghost
images seen on broadcast TV signals are the result of multipath – one picture arrives
slightly later than the other and is shifted in position on the screen. Multipath is very
troublesome for mobile communications. When the transmitter and/or receiver are in
motion, the path lengths are continuously changing and the signal fluctuates wildly in amplitude.
When a satellite is placed in an orbit 22,000 miles above the equator, it appears to stand still in
the sky, as viewed from the ground. A high gain antenna can be pointed at the satellite to
transmit signals to it. The satellite is used as a relay station, from which approximately ¼
of the earth’s surface is visible. The satellite receives signals from the ground at one
frequency, known as the uplink frequency, and retransmits the signal at a different frequency, known as the downlink frequency. Because two frequencies are used, the reception and transmission can happen simultaneously. A
satellite operating in this way is known as a transponder. The satellite has a tremendous
line of sight from its vantage point in space, and many ground stations can communicate with each other through it.
In physics, surface wave can refer to a mechanical wave that propagates along the
interface between differing media, usually two fluids with different densities. A surface
wave can also be an electromagnetic wave guided by a refractive index gradient. In radio transmission, a ground wave is a guided wave that propagates close to the surface of the Earth.
PATH LOSS:
Path loss (or path attenuation) is the reduction in power density (attenuation) of an electromagnetic wave as it propagates through space. This term is commonly used in wireless communications and signal propagation. Path loss may be due to many effects, such as free-space loss, refraction, diffraction, reflection, aperture-medium coupling loss, and absorption. Path loss is also influenced by terrain contours, the environment (urban or rural, vegetation and foliage), the propagation medium (dry or moist air), the distance between the transmitter and the receiver, and the height and location of the antennas.
Path loss normally includes propagation losses caused by the natural expansion of the radio wave front in free space (which usually takes the shape of an ever-increasing sphere), absorption losses (sometimes called penetration losses) when the signal passes through media not transparent to electromagnetic waves, diffraction losses when part of the radio wave front is obstructed by an opaque obstacle, and losses caused by other phenomena.
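One contributor, free-space path loss, has a standard closed form in decibels: FSPL ≈ 20·log10(d) + 20·log10(f) + 32.44, with d in km and f in MHz (the constant follows from the Friis equation for isotropic antennas; a sketch):

```python
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB between isotropic antennas."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# 2.4 GHz over 1 km of free space is roughly 100 dB of loss;
# each doubling of distance adds about 6 dB:
print(round(fspl_db(1.0, 2400.0), 1))
print(round(fspl_db(2.0, 2400.0) - fspl_db(1.0, 2400.0), 2))
```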
The signal radiated by a transmitter may also travel along many different paths to a receiver simultaneously; this effect is called multipath. Multipath can either increase or decrease the received signal strength, depending on whether the individual multipath wavefronts interfere constructively or destructively. Slight changes in path length cause rapid fluctuations in received signal strength (known as small-scale fading), resulting in fast fades which are very sensitive to receiver position.
In communications, the additive white Gaussian noise (AWGN) channel model is one
in which the only impairment is the linear addition of wideband or white noise with a
constant spectral density (expressed as watts per hertz of bandwidth) and a Gaussian
distribution of amplitude. The model does not account for the phenomena of fading, frequency selectivity, interference, nonlinearity or dispersion. However, it produces simple, tractable mathematical models which are useful for gaining insight into the underlying behavior of a system before these other phenomena are considered.
Wideband Gaussian noise comes from many natural sources, such as the thermal vibrations of atoms in conductors (thermal or Johnson-Nyquist noise), shot noise, black body radiation from the earth and other warm objects, and from celestial sources such as the Sun. The AWGN channel is a good model for many satellite and deep-space links. It is not a good model for most terrestrial links because of multipath, terrain blocking, interference, etc. However, for terrestrial path modeling, AWGN is commonly used to simulate the background noise of the channel under study, in addition to the multipath, terrain blocking, interference, ground clutter and self-interference that modern radio systems encounter in terrestrial operation.
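An AWGN channel is straightforward to simulate: add an independent Gaussian sample to every transmitted value. A sketch (the +/-1 symbol stream and the noise standard deviation are arbitrary illustrations):

```python
import random

def awgn_channel(samples, noise_std, seed=0):
    """Add independent zero-mean Gaussian noise to each sample."""
    rng = random.Random(seed)
    return [s + rng.gauss(0.0, noise_std) for s in samples]

# A BPSK-like stream of +/-1 values through the channel:
tx = [1.0, -1.0] * 500
rx = awgn_channel(tx, noise_std=0.3)
# With moderate noise, simple sign-based detection recovers most symbols:
errors = sum(1 for t, r in zip(tx, rx) if (r > 0) != (t > 0))
print(errors / len(tx))  # low symbol error rate
```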
DIGITAL
COMMUNICATION
Pulse-code modulation (PCM) is a digital representation of an analog signal in which the magnitude of the signal is sampled regularly at uniform intervals, then quantized to a series of symbols in a digital (usually binary) code. PCM has been used in digital telephone systems and 1980s-era electronic musical keyboards. It is also the standard form for digital audio in computers and the compact disc "red book" format. It is also standard in digital video, for example, using ITU-R BT.601. However, straight PCM is not typically used for video in standard definition consumer applications such as DVD or DVR because the bit rate required is far too high. Very frequently, PCM encoding facilitates digital transmission from one point to another (within a given system, or geographically).
In the diagram, a sine wave (red curve) is sampled and quantized for PCM. The sine
wave is sampled at regular intervals, shown as ticks on the x-axis. For each sample, one
of the available values (ticks on the y-axis) is chosen by some algorithm (in this case, the
floor function is used). This produces a fully discrete representation of the input signal
(shaded area) that can be easily encoded as digital data for storage or manipulation. For
the sine wave example at right, we can verify that the quantized values at the sampling
moments are 7, 9, 11, 12, 13, 14, 14, 15, 15, 15, 14, etc. Encoding these values as binary
numbers would result in the following set of nibbles: 0111, 1001, 1011, 1100, 1101,
1110, 1110, 1111, 1111, 1111, 1110, etc. These digital values could then be further processed or analyzed by a purpose-specific digital signal processor or general-purpose CPU. Several Pulse Code Modulation streams could also be multiplexed into a larger
aggregate data stream, generally for transmission of multiple streams over a single
physical link. This technique is called time-division multiplexing, or TDM, and is widely used, notably in the modern public telephone system. In real systems, the sampling and quantization device is commonly implemented on a single integrated circuit that lacks only the clock necessary for sampling, and is generally referred to as an ADC (Analog-to-Digital Converter). These devices will produce on their output a binary representation of the input
whenever they are triggered by a clock signal, which would then be read by a processor
of some sort.
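The sampling-and-quantization procedure described above can be sketched directly (floor quantization to 4-bit values, as in the diagram's description; the signal scaling to the 0..15 range is an assumption for illustration):

```python
import math

def pcm_encode(num_samples=16, levels=16):
    """Sample one period of a sine wave and floor-quantize to `levels` values."""
    codes = []
    for n in range(num_samples):
        x = math.sin(2 * math.pi * n / num_samples)   # analog value in [-1, 1]
        scaled = (x + 1) / 2 * (levels - 1)           # map to [0, levels-1]
        codes.append(math.floor(scaled))              # quantization (floor)
    return codes

codes = pcm_encode()
print(codes)                               # 4-bit codes in the range 0..15
print([format(c, "04b") for c in codes])   # the same codes as binary nibbles
```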
Demodulation
To produce output from the sampled data, the procedure of modulation is applied in
reverse. After each sampling period has passed, the next value is read and the output of
the system is shifted instantaneously (in an idealized system) to the new value. As a result
of these instantaneous transitions, the discrete signal will have a significant amount of
inherent high frequency energy, mostly harmonics of the sampling frequency (see square
wave). To smooth out the signal and remove these undesirable harmonics, the signal would be passed through analog filters that suppress artifacts outside the expected frequency range (i.e., greater than fs/2, the maximum resolvable frequency, where fs is the sampling frequency). Some systems use digital filtering to remove some of these harmonics. In some systems, no
explicit filtering is done at all; as it's impossible for any system to reproduce a signal with
infinite bandwidth, inherent losses in the system compensate for the artifacts — or the
system simply does not require much precision. The sampling theorem suggests that
practical PCM devices, provided a sampling frequency that is sufficiently greater than that of the input signal, can operate without introducing significant distortions within their design bandwidth. The electronics involved in producing an accurate analog signal from the discrete data are similar to those used for generating the digital signal. These devices are DACs (digital-to-
analog converters), and operate similarly to ADCs. They produce on their output a
voltage or current (depending on type) that represents the value presented on their inputs.
This output would then generally be filtered and amplified for use.
Limitations
There are two sources of impairment implicit in any PCM system:
• Choosing a discrete value near the analog signal for each sample (quantization error)
• Between samples no measurement of the signal is made; by the sampling theorem this results in any frequency at or above fs/2 (fs being the sampling frequency) being distorted or lost completely (aliasing error). fs/2 is also called the Nyquist frequency.
PCM systems also rely on accurate clocks for proper reproduction. If either the encoding or decoding clock is not stable, its frequency drift
will directly affect the output quality of the device. A slight difference between the
encoding and decoding clock frequencies is not generally a major concern; a small
constant error is not noticeable. Clock error does become a major issue if the clock is not
stable, however. A drifting clock, even with a relatively small error, will cause very obvious distortions in audio and video signals.
Time-division multiplexing (TDM) is a type of digital (or rarely analog) multiplexing in which two or more signals or bit streams are transferred apparently simultaneously as
sub-channels in one communication channel, but are physically taking turns on the
channel. The time domain is divided into several recurrent timeslots of fixed length, one
for each sub-channel. A sample byte or data block of sub-channel 1 is transmitted during
timeslot 1, sub-channel 2 during timeslot 2, etc. One TDM frame consists of one timeslot
per sub-channel. After the last sub-channel the cycle starts all over again with a new
frame, starting with the second sample, byte or data block from sub-channel 1, etc.
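The round-robin framing described above can be sketched with toy byte streams (three hypothetical sub-channels of equal length):

```python
def tdm_multiplex(sub_channels):
    """Interleave equal-length sub-channel streams into TDM frames.

    Frame k holds one item from each sub-channel, in timeslot order.
    """
    return [list(frame) for frame in zip(*sub_channels)]

def tdm_demultiplex(frames):
    """Recover the original sub-channel streams from the frames."""
    return [list(chan) for chan in zip(*frames)]

chan1, chan2, chan3 = ["a1", "a2"], ["b1", "b2"], ["c1", "c2"]
frames = tdm_multiplex([chan1, chan2, chan3])
print(frames)  # [['a1', 'b1', 'c1'], ['a2', 'b2', 'c2']]
assert tdm_demultiplex(frames) == [chan1, chan2, chan3]
```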
In circuit switched networks such as the Public Switched Telephone Network (PSTN)
there exists the need to transmit multiple subscribers’ calls along the same transmission
medium.[2] To accomplish this, network designers make use of TDM. TDM allows switches to create channels, known as tributaries, within a transmission stream.[2] A standard DS0 voice signal has a data bit rate of 64 kbit/s, determined using Nyquist's sampling criterion.[2][3] TDM takes frames of the voice signals and multiplexes them into a TDM frame which runs at a higher bandwidth. So if the TDM frame consists of n voice frames, the bit rate will be n × 64 kbit/s.[2]
Each voice frame in the TDM frame is called a channel or tributary.[2] In European
systems, TDM frames contain 30 digital voice frames and in American systems, TDM
frames contain 24 digital voice frames.[2] Both of the standards also contain extra space for signalling and synchronisation data.[2]
Higher order multiplexing is accomplished by multiplexing four of the next-lower-order TDM frames.[2] For example, a European 120 channel TDM frame is formed by multiplexing four standard 30 channel TDM frames.[2] At each higher order multiplex, four TDM frames from the immediate lower order are combined, creating multiplexes of four times the capacity; this scheme is known as the Plesiochronous Digital Hierarchy (PDH).[2][3] PDH created larger numbers of channels by multiplexing the standard European 30 channel TDM frames.[2] This solution worked for a while; however PDH suffered from several inherent drawbacks which ultimately resulted in the development of the Synchronous Digital Hierarchy (SDH). The requirements driving SDH were that it:
• Be synchronous – All clocks in the system must align with a reference clock.
• Allow frames of any size to be removed or inserted into an SDH frame of any size.
• Be easily manageable, with the capability of transferring management data across links.
• Provide high data rates by multiplexing any size frame, limited only by
technology.
SDH has become the primary transmission protocol in most PSTN networks.[2][3] It was developed to allow streams of 1.544 Mbit/s and above to be multiplexed into larger SDH frames known as Synchronous Transport Modules (STM).[2] The STM-1 frame consists of smaller streams that are multiplexed to create a 155.52 Mbit/s frame.[2][3]
SDH can also multiplex packet based frames such as Ethernet, PPP and ATM.[2]
While SDH is considered a transmission protocol (Layer 1 in the OSI Reference Model), it also performs some switching functions, as stated in the third bullet point requirement listed above.[2] The most common SDH networking functions are as follows:
• SDH Crossconnect – The SDH Crossconnect is the SDH version of a Time-Space-Time crosspoint switch. It connects any channel on any of its inputs to any channel on any of its outputs. The SDH Crossconnect is used in Transit Exchanges, where all inputs and outputs are connected to other exchanges.[2]
• SDH Add-Drop Multiplexer – The SDH Add-Drop Multiplexer (ADM) can add or
remove any multiplexed frame down to 1.544Mb. Below this level, standard
TDM can be performed. SDH ADMs can also perform the task of an SDH
Crossconnect and are used in End Exchanges, where the channels from subscribers are connected to the core PSTN network.[2]
SDH network functions are connected using high-speed optic fibre. Optic fibre uses
light pulses to transmit data and is therefore extremely fast.[2] Modern optic fibre transmission makes use of Wavelength Division Multiplexing (WDM), where signals transmitted across the fibre are transmitted at different wavelengths, creating additional channels for transmission.[2][3] This increases the speed and capacity of the link, which in turn reduces both unit and total costs.[2]
STDM is an advanced version of TDM in which both the address of the terminal and the
data itself are transmitted together for better routing. Using STDM allows bandwidth to
be split over one line. Many college and corporate campuses use this type of TDM to distribute bandwidth. Assuming a 10 Mbit/s link into a building, STDM can be used to provide 178 terminals with a dedicated 56k connection (178 × 56k = 9.96Mb). A more common use, however, is to grant the bandwidth only when it is needed. STDM does not reserve a time slot for each terminal; rather, it assigns a slot when the terminal requires data to be sent or received.
The T-carrier system, introduced by the Bell System in the U.S. in the 1960s, was the
first successful system that supported digitized voice transmission. The original
transmission rate (1.544 Mbps) in the T1 line is in common use today in Internet service
provider (ISP) connections to the Internet. Another level, the T3 line, providing 44.736 Mbit/s, is also commonly used.
The T-carrier system is entirely digital, using pulse code modulation (PCM) and time-
division multiplexing (TDM). The system uses four wires and provides duplex capability
(two wires for receiving and two for sending at the same time). The T1 digital stream
consists of 24 64-Kbps channels that are multiplexed. (The standardized 64 Kbps channel
is based on the bandwidth required for a voice conversation.) The four wires were
originally a pair of twisted pair copper wires, but can now also include coaxial cable,
optical fiber, digital microwave, and other media. A number of variations on the number and use of channels are possible, the most common being the full 24-channel or channelized T1. Another commonly installed service is a fractional T1, which is the
rental of some portion of the 24 channels in a T1 line, with the other channels going
unused.
In the T1 system, voice or other analog signals are sampled 8,000 times a second and
each sample is digitized into an 8-bit word. With 24 channels being digitized at the same
time, a 192-bit frame (24 channels each with an 8-bit word) is thus being transmitted
8,000 times a second. Each frame is separated from the next by a single bit, making a
193-bit block. The 192 bit frame multiplied by 8,000 and the additional 8,000 framing
bits make up the T1's 1.544 Mbps data rate. The signaling bits are the least significant bits in each frame.
Earlier carrier systems used analog frequency-division multiplexing, which worked well between distant cities but required expensive modulators, demodulators and filters for every voice channel. For connections within metropolitan areas, Bell Labs in the late 1950s sought a cheaper terminal solution. Pulse-code modulation allowed sharing a coder and decoder among several voice trunks, so this method was chosen for the T1
system introduced into local use in 1961. In later decades, the cost of digital electronics
declined to the point that an individual codec per voice channel became commonplace,
but by then the other advantages of digital transmission had become entrenched.
The most common legacy of this system is the line rate speeds. "T1" now seems to mean
any data circuit that runs at the original 1.544 Mbit/s line rate. Originally the T1 format carried 24 pulse-code modulated, time-division multiplexed speech signals, each encoded at 64 kbit/s, plus 8 kbit/s of framing information. T2 and T3 circuits carry multiples of T1 channels, at 6.312 and 44.736 Mbit/s, respectively.
Supposedly, the 1.544 Mbit/s rate was chosen because tests done by AT&T Long Lines used repeaters placed in existing manholes. The manholes were physically 6600 feet apart, and so the optimum bit rate was chosen empirically: the capacity was increased until the failure rate was unacceptable, then reduced to leave a margin. Companding allowed acceptable audio performance with only
seven bits per PCM sample in this original T1/D1 system. The later D3 and D4 channel
banks had an extended frame format, allowing eight bits per sample, reduced to seven
every sixth sample or frame when one bit was "robbed" for signaling the state of the
channel. The standard does not allow an all zero sample which would produce a long
string of binary zeros and cause the repeaters to lose bit sync. However, when carrying
data (Switched 56) there could be long strings of zeroes, so one bit per sample is set to
"1" (jam bit 7) leaving 7 bits x 8000 frames per second for data.
A more common understanding of how the rate of 1.544 Mbit/s was achieved is as
follows. (This explanation glosses over T1 voice communications, and deals mainly with
the numbers involved.) Given that the highest voice frequency which the telephone
system transmits is 4000 Hz, the required digital sampling rate is 8000 Hz (see Nyquist
rate). Since each T1 frame contains 1 byte of voice data for each of the 24 channels, the system needs 8000 frames per second to maintain those 24 simultaneous voice channels. Because each frame contains 193 bits (8 bits/channel × 24 channels + 1 framing bit = 193 bits), multiplying 8000 frames per second by 193 bits yields the 1.544 Mbit/s rate.
Initially, T1 used Alternate Mark Inversion (AMI) to reduce bandwidth and eliminate
the DC component of the signal. Later B8ZS became common practice. For AMI, each
mark pulse had the opposite polarity of the previous one and each space was at a level of
zero, resulting in a three level signal which however only carried binary data. Similar
British 23 channel systems at 1.536 Mbaud in the 1970s were equipped with ternary
signal repeaters, in anticipation of using a 3B2T or 4B3T code to increase the number of
voice channels in future, but in the 1980s the systems were merely replaced with
European standard ones. American T-carriers could only work in AMI or B8ZS mode.
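AMI encoding as described above can be sketched directly: each mark (binary 1) takes the opposite polarity of the previous mark, and spaces (binary 0) stay at zero, giving a three-level line signal with no DC component as long as the marks keep alternating.

```python
def ami_encode(bits):
    """Alternate Mark Inversion: 1s alternate +1/-1 pulses, 0s stay at 0."""
    levels, polarity = [], +1
    for b in bits:
        if b == 1:
            levels.append(polarity)
            polarity = -polarity   # next mark gets the opposite polarity
        else:
            levels.append(0)
    return levels

print(ami_encode([1, 0, 1, 1, 0, 1]))  # [1, 0, -1, 1, 0, -1]
```

A received pulse with the same polarity as the previous mark is a "bipolarity violation", which is the error check mentioned below.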
The AMI or B8ZS signal allowed a simple error rate measurement. The D bank in the central office could detect a bit with the wrong polarity (a "bipolarity violation") and sound an alarm. Later systems could count the number of violations and reframes and so identify degraded lines.
The decision to use a 193-bit frame was made in 1958, during the early stages of T1
system design. To allow for the identification of information bits within a frame, two
alternatives were considered. Assign (a) just one extra bit, or (b) additional 8 bits per
frame. The 8-bit choice is cleaner, resulting in a 200-bit frame: 25 8-bit channels, of which 24 are traffic and one 8-bit channel is available for operations, administration, and maintenance. The one-bit choice won, not because it saves bandwidth (the difference, 1.544 vs 1.6 Mbit/s, is trivial), but because of a comment from AT&T Marketing. They claimed that "if 8 bits were chosen for OA&M function, someone would then try to sell this as a voice channel and you wind up with nothing."
Soon after commercial success of T1 in 1962, the T1 engineering team realized the
mistake of having only one bit to serve the increasing demand for housekeeping
functions. They petitioned AT&T management to change to 8-bit framing. This was flatly rejected.
With this hindsight, some ten years later, CEPT chose 8 bits for framing the European
E1.
Higher-rate T-carriers
In the late 1960s and early 1970s Bell Labs developed higher rate systems. T-1C with a
more sophisticated modulation scheme carried 3.152 Mbit/s, on those balanced pair cables
that could support it. T-2 carried 6.312 Mbit/s, requiring a special low-capacitance cable
with foam insulation. This was standard for Picturephone. T-4 and T-5 used coaxial
cables, similar to the old L-carriers used by AT&T Long Lines. TD microwave radio
relay systems were also fitted with high rate modems to allow them to carry a DS1 signal
in a portion of their FM spectrum that had too poor quality for voice service. Later they
carried DS3 and DS4 signals. Later, optical fiber, typically using SONET transmission
standards, superseded these systems.
In digital modulation, an analog carrier signal is modulated by a digital bit stream. Digital
modulation can be considered as digital-to-analog conversion, and the corresponding
demodulation as analog-to-digital conversion. The changes in the carrier signal are
chosen from a finite number of M alternative symbols (the modulation alphabet).
• In the case of QAM, an in-phase signal (the I signal, for example a cosine waveform) and a
quadrature phase signal (the Q signal, for example a sine wave) are amplitude modulated
with a finite number of amplitudes, and summed. It can be seen as a two-channel system,
each channel using ASK. The resulting signal is equivalent to a combination of PSK and
ASK.
In all of the above methods, each of these phases, frequencies or amplitudes is assigned
a unique pattern of binary bits. Usually, each phase, frequency or amplitude encodes an
equal number of bits. This number of bits comprises the symbol that is represented by the
particular phase, frequency or amplitude. If the alphabet consists of M = 2^N alternative
symbols, each symbol represents a message consisting of N bits. If the symbol rate (also
known as the baud rate) is fS symbols/second, the data rate is N·fS bit/second. For
example, with an alphabet consisting of 16 alternative symbols, each symbol
represents 4 bits. Thus, the data rate is four times the baud rate.
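The relationship between alphabet size, baud rate and bit rate can be verified directly (function names here are illustrative):

```python
import math

def bits_per_symbol(M):
    """Number of bits N carried by each symbol, for an alphabet of M = 2**N symbols."""
    return int(math.log2(M))

def data_rate(M, symbol_rate):
    """Gross data rate in bit/s for a given alphabet size and symbol (baud) rate."""
    return bits_per_symbol(M) * symbol_rate

# The four-tone telephone modem example: 1000 baud with M = 4 gives 2000 bit/s.
print(data_rate(4, 1000))   # 2000
# A 16-symbol alphabet carries 4 bits per symbol.
print(bits_per_symbol(16))  # 4
```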
In the case of PSK, ASK or QAM, where the carrier frequency of the modulated signal is
constant, the modulation alphabet is often conveniently represented on a constellation
diagram, showing the amplitude of the I signal on the x-axis, and the amplitude of the Q
signal on the y-axis, for each symbol.
PSK and ASK, and sometimes also FSK, are often generated and detected using the
principle of QAM. The I and Q signals can be combined into a complex-valued signal
I+jQ (where j is the imaginary unit). The resulting so-called equivalent lowpass signal or
equivalent baseband signal is a complex-valued representation of the real-valued
modulated physical signal (the so-called passband signal or RF signal).
These are the general steps used by the modulator to transmit data:
1. Group the incoming data bits into codewords, one for each symbol that will be
transmitted.
2. Map the codewords to attributes, for example amplitudes of the I and Q signals
(the equivalent lowpass signal), or to frequency or phase values.
3. Adapt pulse shaping or some other filtering to limit the bandwidth and form the
spectrum of the equivalent low pass signal, typically using digital signal
processing.
4. Perform digital-to-analog conversion (DAC) of the I and Q signals (since today
all of the above is normally achieved using digital signal processing, DSP).
5. Generate a high-frequency sine wave carrier waveform, and perhaps also a cosine
quadrature component. Carry out the modulation, for example by multiplying the
sine and cosine waveforms with the I and Q signals, so that the equivalent
lowpass signal is frequency-shifted into a modulated passband signal or RF
signal. Sometimes this is achieved using DSP technology, for example direct
digital synthesis using a waveform table, instead of analog signal
processing. In that case the above DAC step should be done after this step.
6. Amplification and analog bandpass filtering to avoid harmonic distortion and
periodic spectrum.
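Steps 1, 2 and 5 can be sketched for the QPSK case (M = 4). The Gray-coded symbol map, carrier frequency and sample counts below are illustrative choices, not values from the text:

```python
import math

# Hypothetical Gray-coded codeword -> (I, Q) map for QPSK; rectangular pulses
# stand in for real pulse shaping.
QPSK_MAP = {(0, 0): (1, 1), (0, 1): (-1, 1), (1, 1): (-1, -1), (1, 0): (1, -1)}

def qpsk_modulate(bits, fc=2.0, samples_per_symbol=8):
    """Return passband samples s(t) = I*cos(2*pi*fc*t) - Q*sin(2*pi*fc*t),
    with fc in cycles per symbol period."""
    assert len(bits) % 2 == 0
    samples = []
    for k in range(0, len(bits), 2):
        i_amp, q_amp = QPSK_MAP[(bits[k], bits[k + 1])]  # steps 1-2: codeword -> I/Q
        for n in range(samples_per_symbol):
            t = (k // 2) + n / samples_per_symbol        # time in symbol periods
            samples.append(i_amp * math.cos(2 * math.pi * fc * t)
                           - q_amp * math.sin(2 * math.pi * fc * t))  # step 5
    return samples

waveform = qpsk_modulate([0, 0, 1, 1])
print(len(waveform))  # 16 samples: 2 symbols of 8 samples each
```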
At the receiver side, the demodulator typically performs:
1. Bandpass filtering.
2. Automatic gain control, AGC (to compensate for attenuation, for example
fading).
3. Frequency shifting of the RF signal to the equivalent baseband I and Q signals,
or to an intermediate frequency (IF) signal, by multiplying the RF signal with a
local oscillator sinewave and cosine wave frequency (see the superheterodyne
receiver principle).
4. Sampling and analog-to-digital conversion (ADC), sometimes before or instead
of the above step, for example by means of undersampling.
5. Equalization filtering, for example a matched filter, to compensate for multipath
propagation, time spreading, phase distortion and frequency-selective fading,
and to avoid intersymbol interference and symbol distortion.
6. Detection of the amplitudes of the I and Q signals, or the frequency or phase of
the IF signal.
7. Quantization of the amplitudes, frequencies or phases to the nearest allowed
symbol values.
8. Mapping of the quantized amplitudes, frequencies or phases to codewords (bit
groups).
9. Parallel-to-serial conversion of the codewords into a bit stream.
10. Pass the resultant bit stream on for further processing such as removal of any
error-correcting codes.
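The receiver chain can be sketched in the same spirit. This toy coherent QPSK demodulator assumes perfect carrier and symbol synchronization; the Gray-coded map and all names are illustrative assumptions:

```python
import math

# Hypothetical Gray-coded codeword -> (I, Q) map and its inverse.
QPSK_MAP = {(0, 0): (1, 1), (0, 1): (-1, 1), (1, 1): (-1, -1), (1, 0): (1, -1)}
INVERSE_MAP = {iq: bits for bits, iq in QPSK_MAP.items()}

def qpsk_demodulate(samples, fc=2.0, samples_per_symbol=8):
    """Shift to baseband, integrate over each symbol, quantize to the nearest
    allowed symbol value, and map back to bit groups."""
    bits = []
    for s in range(len(samples) // samples_per_symbol):
        i_acc = q_acc = 0.0
        for n in range(samples_per_symbol):
            t = s + n / samples_per_symbol
            x = samples[s * samples_per_symbol + n]
            i_acc += x * math.cos(2 * math.pi * fc * t)   # shift to I baseband
            q_acc += x * -math.sin(2 * math.pi * fc * t)  # shift to Q baseband
        i_hat = 1 if i_acc > 0 else -1                    # quantize each component
        q_hat = 1 if q_acc > 0 else -1
        bits.extend(INVERSE_MAP[(i_hat, q_hat)])          # map back to codeword
    return bits

# One symbol period of s(t) = cos - sin, i.e. I = Q = 1, which maps to bits (0, 0).
samples = [math.cos(2 * math.pi * 2.0 * n / 8) - math.sin(2 * math.pi * 2.0 * n / 8)
           for n in range(8)]
print(qpsk_demodulate(samples))  # [0, 0]
```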
As is common to all digital communication systems, the design of both the modulator and
demodulator must be done simultaneously. Digital modulation schemes are possible
because the transmitter-receiver pair has prior knowledge of how data is encoded and
represented in the communications system. In all digital communication systems, both
the modulator at the transmitter and the demodulator at the receiver are structured so that
they perform inverse operations.
Non-coherent modulation methods do not require a receiver reference clock signal that is
phase synchronized with the sender carrier wave. In this case, modulation symbols (rather
than bits, characters, or data packets) are asynchronously transferred. The opposite is
coherent modulation.
Fundamental digital modulation methods include:
o PSK (phase-shift keying), with variants such as BPSK, QPSK and π/4–QPSK
o FSK (frequency-shift keying)
o ASK (amplitude-shift keying)
o QAM (quadrature amplitude modulation)
A code is a rule for converting a piece of information (for example, a
letter, word, or phrase) into another form or representation, not necessarily of the same
type. Encoding is the process by which information from a source is converted into
symbols to be communicated. Decoding is the reverse process, converting these code
symbols back into information understandable by a receiver.
One reason for coding is to enable communication in places where ordinary spoken or
written language is difficult or impossible. For example, a cable code replaces words
(e.g., ship or invoice) with shorter words, allowing the same information to be sent with
fewer characters, more quickly, and, most importantly, less expensively. Another example
is the use of semaphore, where the configuration of flags held by a signaller or the arms
of a semaphore tower encodes parts of the message, typically individual letters and
numbers. Another person standing a great distance away can interpret the flags and
reproduce the words sent.
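A cable code is, in effect, a shared lookup table. As a toy illustration, with an entirely hypothetical codebook (these code words are not from any real cable code):

```python
# Hypothetical codebook mapping common phrases to short code words,
# illustrating how coding reduces the number of characters sent.
CODEBOOK = {"cargo damaged": "CDMGD", "arriving tomorrow morning": "AMORN"}
DECODEBOOK = {code: phrase for phrase, code in CODEBOOK.items()}

def encode(message):
    """Replace a known phrase with its shorter code word."""
    return CODEBOOK.get(message, message)

def decode(code):
    """Reverse process: recover the phrase from the code word."""
    return DECODEBOOK.get(code, code)

print(encode("cargo damaged"))  # CDMGD (5 characters instead of 13)
print(decode("CDMGD"))          # cargo damaged
```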
SERIAL INTERFACE
VOCAL offers a comprehensive and fully optimized modem software library, based on over 20 years
of experience and proven success on a wide variety of platforms. Data modulations include V.92
(client/server) and V.90; rate determination procedures (Automode) include those of V.8, V.8bis
and PN-2330. A V.92/V.90 digital
client is also available for specialized server requirements (i.e. self-test), as well as analog client
V.92/V.90 support.
The higher data protocol layers include V.42 (including MNP2-4), V.42bis, V.44 and MNP-5. PPP
framing support is provided as a runtime option. The application interface of this software can support
an industry standard AT command set or may be used directly by an application. The modulation layer
may interface with an analog front end (codec and DAA) or a digital interface such as T1/E1, xDSL,
ATM, and ISDN.
VOCAL's embedded software libraries include a complete range of ETSI / ITU / IEEE compliant
algorithms, in addition to many other standard and proprietary algorithms. Our software is available
in ANSI C and optimized for leading DSP architectures (TI, ADI, AMD, ARM, MIPS, CEVA, LSI
Logic ZSP, etc.). These libraries are modular and can be executed as a single task under a variety of
operating systems.
The public switched telephone network (PSTN) is the network of the world's public
circuit-switched telephone networks, in much the same way that the Internet is the
network of the world's public IP-based packet-switched networks. Originally a network
of fixed-line analog telephone systems, the PSTN is now almost entirely digital, and now
includes mobile as well as fixed telephones.
The PSTN is largely governed by technical standards created by the ITU-T, and uses
E.163/E.164 addresses (telephone numbers) for addressing. The PSTN was an early
example of traffic engineering to deliver Quality of Service (QoS) guarantees: the amount
of equipment and the number of personnel required to deliver a specific level of service
were determined mathematically in advance.
In the 1970s the telecommunications industry conceived that digital services would
follow much the same pattern as voice services, and conceived a vision of end-to-end
circuit switched services, known as the Broadband Integrated Services Digital Network
(B-ISDN). The B-ISDN vision has been overtaken by the disruptive technology of the
Internet. Only the oldest parts of the telephone network still use analog technology for
anything other than the last mile loop to the end user, and in recent years digital services
have been increasingly rolled out to end users using services such as DSL, ISDN, FTTP
and cable modem systems.
Many observers believe that the long term future of the PSTN is to be just one application
of the Internet - however, the Internet has some way to go before this transition can be
made. The QoS guarantee is one aspect that needs to be improved in the Voice over IP
(VoIP) technology.
There are a number of large private telephone networks which are not linked to the
PSTN, usually for military purposes. There are also private networks run by large
companies which are linked to the PSTN only through limited gateways, such as a large
private branch exchange (PBX).
The first telephones had no network but were in private use, wired together in pairs.
Users who wanted to talk to different people had as many telephones as necessary for the
purpose. A user who wished to speak, whistled into the transmitter until the other party
heard. Soon, however, a bell was added for signalling, and then a switchhook, and
telephones took advantage of the exchange principle already employed in telegraph
networks. Each telephone was wired to a local telephone exchange, and the exchanges
were wired together with trunks. Networks were connected together in a hierarchical
manner until they spanned cities, countries, continents and oceans. This was the
beginning of the PSTN, though the term was unknown for many decades.
Automation introduced pulse dialing between the phone and the exchange, and then
among exchanges, followed by more sophisticated address signaling including multi-
frequency, culminating in the SS7 network that connected most exchanges by the end of
the 20th century.
Digital Channel
Although the network was created using analog voice connections through manual
exchanges, digital switching technologies were progressively adopted. Most switches
now use digital circuits between exchanges, with analog two-wire circuits still used to
connect to most telephones.
The basic digital circuit in the PSTN is a 64 kbit/s channel, originally designed by Bell
Labs, called Digital Signal 0 (DS0). To carry a typical phone call from a
calling party to a called party, the audio sound is digitized at an 8 kHz sample rate using
8-bit pulse code modulation (PCM). The call is then transmitted from one end to another
via telephone exchanges. The call is switched using a signaling protocol (SS7) between
the telephone exchanges under an overall routing strategy.
The DS0s are the basic granularity at which switching takes place in a telephone
exchange. DS0s are also known as timeslots because they are multiplexed together in a
time-division fashion. Multiple DS0s are multiplexed together on higher
capacity circuits into a DS1 signal, carrying 24 DS0s on a North American or Japanese
T1 line, or 32 DS0s (30 for calls plus two for framing and signalling) on an E1 line used
in most other countries. In modern networks, this multiplexing is moved as close to the
end user as possible, usually into cabinets at the roadside in residential areas, or into large
business premises.
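The DS0, T1 and E1 capacities described above follow directly from the PCM parameters (constant names below are illustrative):

```python
# DS0: 8-bit PCM samples taken 8000 times per second.
SAMPLE_RATE = 8000   # Hz
BITS_PER_SAMPLE = 8  # 8-bit PCM

ds0_rate = SAMPLE_RATE * BITS_PER_SAMPLE  # 64,000 bit/s per DS0

t1_payload = 24 * ds0_rate                # 24 DS0s multiplexed on a T1
t1_line_rate = t1_payload + SAMPLE_RATE   # plus one framing bit per frame
e1_rate = 32 * ds0_rate                   # 32 timeslots on an E1 (30 calls + 2)

print(ds0_rate)      # 64000
print(t1_line_rate)  # 1544000
print(e1_rate)       # 2048000
```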
The timeslots are conveyed from the initial multiplexer to the exchange over a set of
equipment collectively known as the access network. The access network and inter-
exchange transport of the PSTN use synchronous optical transmission (SONET and
SDH) technology, although some parts still use the older PDH technology.
Within the access network, there are a number of reference points defined. Most of these
are of interest mainly to ISDN but one – the V reference point – is of more general
interest. This is the reference point between a primary multiplexer and an exchange. The
protocols at this reference point were standardised in ETSI areas as the V5 interface.
LAN:
A local area network (LAN) is a computer network covering a small geographic area, like a
home, office, or group of buildings, e.g. a school. The defining characteristics of LANs, in
contrast to wide-area networks (WANs), include their much higher data-transfer rates,
smaller geographic range, and lack of a need for leased telecommunication lines.
Ethernet over unshielded twisted pair cabling, and Wi-Fi are the two most common
technologies currently, but ARCNET, Token Ring and many others have been used in the
past.
The first LAN put into service occurred in 1964 at the Livermore Laboratory to support
atomic weapons research. LANs spread to the public sector in the late 1970s and were
often used to create high-speed links between several large central computers at one site.
Of the many competing systems created at this time, Ethernet and ARCNET were the most
popular.
The development and proliferation of CP/M and then DOS-based personal computers
meant that a single site began to have dozens or even hundreds of computers. The initial
attraction of networking these was generally to share disk space and laser printers, which
were both very expensive at the time. There was much enthusiasm for the concept and for
several years, from about 1983 onward, computer industry pundits would regularly
declare the coming year to be "the year of the LAN".
In reality, the concept was marred by proliferation of incompatible physical layer and
network protocol implementations, and confusion over how best to share resources.
Typically, each vendor would have its own type of network card, cabling, protocol, and
network operating system. A solution appeared with the advent of Novell NetWare which
provided even-handed support for the 40 or so competing card/cable types, and a much
more sophisticated operating system than most of its competitors. Netware dominated[1]
the personal computer LAN business from early after its introduction in 1983 until the
mid 1990s when Microsoft introduced Windows NT Advanced Server and Windows for
Workgroups.
Of the competitors to NetWare, only Banyan Vines had comparable technical strengths,
but Banyan never gained a secure base. Microsoft and 3Com worked together to create a
simple network operating system which formed the base of 3Com's 3+Share, Microsoft's
LAN Manager and IBM's LAN Server. None of these were particularly successful.
During the same period, Unix workstations were
using TCP/IP based networking. Although this market segment is now much reduced, the
technologies developed in this area continue to be influential on the Internet and in both
Linux and Apple Mac OS X networking—and the TCP/IP protocol has now almost
completely replaced IPX, AppleTalk, NBF and other protocols used by the early PC
LANs.
Technical aspects
Although switched Ethernet is now the most common data link layer protocol, and IP the
most common network layer protocol, many different options have been used, and some
continue to be popular in niche areas. Smaller LANs generally consist of one or more
switches linked
to each other - often with one connected to a router, cable modem, or DSL modem for
Internet access.
Larger LANs are characterized by their use of redundant links with switches using the
spanning tree protocol to prevent loops, their ability to manage differing traffic types via
quality of service (QoS), and to segregate traffic via VLANs. Larger LANs also contain a
wide variety of network devices such as switches, firewalls, routers, load balancers, and
sensors.
LANs may have connections with other LANs via leased lines, leased services, or by
'tunneling' across the Internet using VPN technologies. Depending on how the
connections are established and secured, and the distances involved, a LAN may also be
classified as a Metropolitan Area Network (MAN), a Wide Area Network (WAN), or a part of
the internet.
Network communication between systems is commonly described in terms of seven
protocol layers.
Data Link
Transforms basic physical services to enable the transmission of units of data called
frames. Frames carry data between two points on the same type of physical network,
and maybe relayed if the network is extended. They normally contain low level
addressing information and some error checking. This layer may be involved in
controlling access to a shared transmission medium.
Network
Implements the network connections, i.e. specifies how addresses are assigned and
how packets are forwarded from one end of the network to the other.
Transport
Provides an interface for the upper layers to communications facilities. The presence
of this layer obscures the underlying network hardware and topology from the
applications. A very complex set of protocols is required for this layer!
Session
The protocols for this layer specify how to establish a communication session with a
remote system, for example how to log in. Provisions for security details such as
authentication using passwords are described in this layer.
Presentation
Layer 6 protocols specify how to represent data. Such protocols are needed because
different brands of computer use different internal representation for integer and
characters. Thus layer 6 protocols are needed to translate from the representation
used inside one computer to a common network representation.
Application
This is where the application using the network resides. Common network
applications include remote login, file transfer, e-mail, and web page browsing.
SATELLITE COMMUNICATIONS
A geosynchronous satellite is a satellite whose orbital track repeats
regularly over points on the Earth over time. If such a satellite's orbit lies over the equator
and the orbit is circular, it is called a geostationary satellite. The orbits of these satellites
are known as the geosynchronous orbit and the geostationary orbit, respectively.
Geostationary satellites appear to be fixed over one spot above the equator. Receiving
and transmitting antennas on the earth do not need to track such a satellite. These
antennas can be fixed in place and are much less expensive than tracking antennas. These
satellites have therefore become the workhorses of communications, broadcasting and
weather-observation applications.
One disadvantage of geostationary satellites is a result of their high altitude: radio signals
take approximately 0.25 of a second to reach and return from the satellite, resulting in a
small but significant signal delay. This delay increases the difficulty of telephone
conversation and reduces the performance of common network protocols such as TCP/IP,
but does not present a problem with non-interactive systems such as television
broadcasts. There are a number of proprietary satellite data protocols that are designed to
proxy TCP/IP connections over long-delay satellite links -- these are marketed as being a
partial solution to the poor performance of native TCP over satellite links. TCP presumes
that all loss is due to congestion, not errors, and probes link capacity with its "slow-start"
algorithm, which only sends packets once it is known that earlier packets have been
received. Slow start is very slow over a path using a geostationary satellite.
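The quarter-second figure follows from the orbit geometry. A quick check in Python, for the best case of a ground station directly below the satellite:

```python
GEO_ALTITUDE_KM = 35_786      # altitude of geostationary orbit above the equator
SPEED_OF_LIGHT_KM_S = 299_792

# One hop: ground -> satellite -> ground, straight up and back down.
hop_delay = 2 * GEO_ALTITUDE_KM / SPEED_OF_LIGHT_KM_S

print(round(hop_delay, 3))  # ~0.239 s, consistent with the ~0.25 s figure above
```

Slant paths to stations away from the sub-satellite point are longer, which pushes the practical delay toward 0.25 s.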
A further disadvantage of geostationary satellites is their incomplete geographical coverage,
since ground stations at higher than roughly 60 degrees latitude have difficulty reliably
receiving signals at low elevations. Satellite dishes in the Northern Hemisphere would
need to be pointed almost directly towards the horizon. The signals would have to pass
through the largest amount of atmosphere, and could even be blocked by land
topography, vegetation or buildings. In the USSR, a practical solution was developed for
this problem with the creation of special Molniya / Orbita inclined path satellite networks
with elliptical orbits. Similar elliptical orbits are used for the Sirius Radio satellites.
Fiber optics is the overlap of applied science and engineering concerned with the design
and application of optical fibers. Optical fibers are widely used in fiber-optic
communication, which permits transmission over longer distances and at higher data rates
than other forms of communications. Fibers are used instead of metal wires because
signals travel along them with less loss, and they are immune to electromagnetic
interference. Optical fibers are also used to form sensors, and in a variety of other
applications.
Light is kept in the "core" of the optical fiber by total internal reflection. This causes the
fiber to act as a waveguide. Fibers which support many propagation paths or transverse
modes are called multimode fibers (MMF). Fibers which support only a single mode are
called singlemode fibers (SMF). Multimode fibers generally have a large-diameter core,
and are used for short-distance communication links or for applications where high power
must be transmitted. Singlemode fibers are used for most communication links longer
than 550 meters.
Optical fiber can be used as a medium for telecommunication and networking because it
is flexible and can be bundled as cables. It is especially advantageous for long-distance
communications, because light propagates through the fiber with little attenuation
compared to electrical cables. This allows long distances to be spanned with few
repeaters. Additionally, the light signals propagating in the fiber can be modulated at
rates as high as 40 Gb/s [3], and each fiber can carry many independent channels, each
using a different wavelength of light (wavelength-division multiplexing, WDM). Over
short distances, such as networking within a building, fiber saves space in cable ducts
because a single
fiber can carry much more data than a single electrical cable. Fiber is also immune to
electrical interference, which prevents cross-talk between signals in different cables and
pickup of environmental noise. Wiretapping is also more difficult than with electrical
connections, and there are concentric dual-core fibers that are said to be tap-proof.
Because they are non-electrical, fiber cables can bridge very high electrical potential
differences and can be used in environments where explosive fumes are present, without
danger of ignition.
Although fibers can be made out of transparent plastic, glass, or a combination of the
two, the fibers used in long-distance telecommunications applications are always glass,
because of the lower optical attenuation. Both multi-mode and single-mode fibers are
used in communications, with multi-mode fiber used mostly for short distances (up to
500 m), and single-mode fiber used for longer distance links. Because of the tighter
tolerances required to couple light into and between single-mode fibers (core diameter
about 10 micrometers), single-mode transmitters, receivers, amplifiers and other
components are generally more expensive than multimode components.
Fiber benefits
While voice-grade copper systems longer than a couple of kilometers require in-line signal
repeaters for satisfactory performance, it is not unusual for optical systems to go over 100
kilometers with no active or passive processing. In congested ducts it is often necessary to
install new cabling within existing duct systems. The relatively small diameter and light
weight of optical cables makes such installations easy and practical, and saves valuable
conduit space.
Long lengths
Long, continuous lengths also provide advantages for installers and end-users. Small
diameters make it practical to manufacture and install much longer lengths than for
metallic cables: twelve-kilometer (12 km) continuous optical cable lengths are common,
whereas copper cables are typically limited to a maximum length of 2 km or less.
Multimode cable lengths are based on industry demand.
Long lengths make optical cable installation much easier and less expensive. Optical
fiber cables can be installed with the same equipment that is used to install copper and
coaxial cables, with some modifications due to the small size and the limited pull tension
and bend radius of optical cables. Optical cables can typically be installed in duct systems in
spans of 6000 meters or more depending on the duct's condition, layout of the duct
system, and installation technique. The longer cables can be coiled at an intermediate pull
point and pulled farther into the duct system as necessary.
Non-conductivity
Another advantage of optical fibers is their dielectric nature. Since optical fiber has no
metallic components, it can be installed in areas with electromagnetic interference (EMI),
including radio frequency interference (RFI). Areas with high EMI include utility lines,
power-carrying lines, and railroad tracks. All-dielectric cables are also ideal for areas of
high lightning-strike incidence.
Security
Unlike metallic-based systems, the dielectric nature of optical fiber makes it impossible
to remotely detect the signal being transmitted within the cable. The signal in a fiber can,
however, be "tapped" by bending the fiber and detecting light that then leaks from its
core. The resistance to remote signal interception makes fiber attractive to governmental
bodies, banks, and others with major security concerns.
Fiber optics is affordable today, as electronics prices fall and optical cable pricing
remains low. In many cases, fiber solutions are less costly than copper. As bandwidth
demands increase, fiber's advantage over copper continues to grow.
Optical fibers can be used as sensors to measure strain, temperature, pressure and other
parameters. The small size and the fact that no electrical power is needed at the remote
location gives the fiber optic sensor an advantage over a conventional electrical sensor in
certain applications.
Optical fibers are used as hydrophones for seismic or SONAR applications. Hydrophone
systems with more than 100 sensors per fiber cable have been developed. Hydrophone
sensor systems are used by the oil industry as well as a few countries' navies. Both
bottom mounted hydrophone arrays and towed streamer systems are in use. The German
company Sennheiser developed a microphone working with a laser and optical fibers[4].
Optical fiber sensors for temperature and pressure have been developed for downhole
measurement in oil wells. The fiber optic sensor is well suited for this environment as it
functions at temperatures too high for semiconductor sensors (distributed
temperature sensing).
Optical fibers can be made into interferometric sensors such as fiber optic gyroscopes,
which are used in the Boeing 767 and in some car models (for navigation purposes). They
can also interrogate many sensing points along a single fiber
simultaneously with very high accuracy[5]. This is particularly useful when acquiring
measurements from many locations at once.
A fiberoptic optical time-domain reflectometer can be used as the basis of a system for
distributed sensing. Fiber optic sensors fall into two categories:
1. Intrinsic, in which the fiber itself acts as the sensing medium, that is the
propagating light never leaves the fiber and is altered in some way by an external
phenomenon.
2. Extrinsic, in which the fiber merely acts as a light delivery and collection system;
the propagating light leaves the fiber, is altered in some way, and is collected by
a detector or a second fiber.
Extrinsic sensors
Extrinsic fiber optic sensors use a fiberoptic cable, normally a multimode one, to transmit
modulated light from a conventional sensor. A major feature of extrinsic sensors, which
makes them so useful in such a large number of applications, is their ability to reach
places which are otherwise inaccessible. One example of this is the insertion of fiber
optic cables into the jet engines of aircraft to measure temperature by transmitting
radiation into a radiation pyrometer located remotely from the engine. Fiber optic cable
can be used in the same way to measure the internal temperature of electrical
transformers, where the extreme electromagnetic fields present make other measurement
techniques impossible.
Extrinsic fiber optic sensors provide excellent protection of measurement signals against
noise corruption. Unfortunately, the output of many forms of conventional sensor is not
in a form which can be transmitted by a fiber optic cable. Conversion into a suitable form
must therefore take place prior to transmission. For example, in the case of a platinum
resistance thermometer (PRT), the temperature changes are translated into resistance
changes. The PRT must therefore have an electrical power supply. The modulated
voltage level at the output of the PRT can then be injected into the fiber optic cable via
the usual type of transmitter. This complicates the measurement process and means that
low-voltage power cables must be routed with the fiber optic cable to the transducer.
Intrinsic sensors
Intrinsic sensors can modulate the intensity, phase, polarization, wavelength or transit
time of light. A particularly useful feature of intrinsic fiber optic sensors is that they can,
if required, provide distributed sensing along the length of the fiber.
Light intensity is the simplest parameter to manipulate in intrinsic sensors because only a
simple source and detector are required.
Fibers are widely used in illumination applications. They are used as light guides in
medical and other applications where bright light needs to be shone on a target without a
clear line-of-sight path. In some buildings, optical fibers are used to route sunlight from
the roof to other parts of the building (see non-imaging optics). Optical fiber illumination
is also used for decorative applications, including signs, art, and artificial Christmas trees.
Swarovski boutiques use optical fibers to illuminate their crystal showcases from many
different angles while only employing one light source. Optical fiber is an intrinsic part
of the light-transmitting concrete building product LiTraCon.
Optical fiber is also used in imaging optics. A coherent bundle of fibers is used,
sometimes along with lenses, for a long, thin imaging device called an endoscope, which
is used to view objects through a small hole. Medical endoscopes are used for minimally
invasive exploratory or surgical procedures (endoscopy). Industrial endoscopes (see
fiberscope or borescope) are used for inspecting anything hard to reach, such as jet
engine interiors.
An optical fiber doped with certain rare-earth elements such as erbium can be used as the
gain medium of a laser or optical amplifier. Rare-earth doped optical fibers can be used to
provide signal amplification by splicing a short section of doped fiber into a regular
(undoped) optical fiber line. The doped fiber is optically pumped with a second laser
wavelength that is coupled into the line in addition to the signal wave. Both wavelengths
of light are transmitted through the doped fiber, which transfers energy from the second
pump wavelength to the signal wave. The process that causes the amplification is
stimulated emission.
Optical fibers doped with a wavelength shifter are used to collect scintillation light in
physics experiments.
Optical fiber can be used to supply a low level of power (around one watt) to electronics
situated in a difficult electrical environment. Examples of this are electronics in high-
powered antenna elements and measurement devices used in high-voltage transmission
equipment.
Principle of operation
An optical fiber is a cylindrical dielectric waveguide that transmits light along its axis, by
the process of total internal reflection. The fiber consists of a core surrounded by a
cladding layer. To confine the optical signal in the core, the refractive index of the core
must be greater than that of the cladding. The boundary between the core and cladding
may either be abrupt, in step-index fiber, or gradual, in graded-index fiber.
Index of refraction
The index of refraction is a way of measuring the speed of light in a material. Light
travels fastest in a vacuum, such as outer space. The speed of light in a vacuum is about
300,000 kilometers per second, or 186,000 miles per second. The index of refraction is
calculated by dividing the speed of light in a vacuum by the speed of light in some other
medium. A typical value for the cladding of an optical fiber is 1.46. The core value is
typically 1.48. The
larger the index of refraction, the more slowly light travels in that medium.
When light traveling in a dense medium hits a boundary at a steep angle (larger than the
"critical angle" for the boundary), the light will be completely reflected. This effect is
used in optical fibers to confine light in the core. Light travels along the fiber bouncing
back and forth off of the boundary. Because the light must strike the boundary with an
angle greater than the critical angle, only light that enters the fiber within a certain range of
angles can travel down the fiber without leaking out. This range of angles is called the
acceptance cone of the fiber. The size of this acceptance cone is a function of the
refractive index difference between the fiber's core and cladding.
In simpler terms, there is a maximum angle from the fiber axis at which light may enter
the fiber so that it will propagate, or travel, in the core of the fiber. The sine of this
maximum angle is the numerical aperture (NA) of the fiber. Fiber with a larger NA
requires less precision to splice and work with than fiber with a smaller NA. Single-mode
fiber has a small NA.
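The numerical aperture can be computed directly: for a step-index fiber, NA = sqrt(n_core^2 - n_clad^2), and the maximum acceptance half-angle is arcsin(NA). A sketch using the index values from the text (function names are illustrative):

```python
import math

def numerical_aperture(n_core: float, n_clad: float) -> float:
    """NA = sqrt(n_core^2 - n_clad^2): sine of the maximum acceptance half-angle."""
    return math.sqrt(n_core**2 - n_clad**2)

def acceptance_half_angle_deg(n_core: float, n_clad: float) -> float:
    """Maximum angle from the fiber axis (degrees) at which light is accepted."""
    return math.degrees(math.asin(numerical_aperture(n_core, n_clad)))

# Index values from the text: core 1.48, cladding 1.46.
print(round(numerical_aperture(1.48, 1.46), 3))            # ~0.242
print(round(acceptance_half_angle_deg(1.48, 1.46), 1))     # ~14.0 degrees
```

Even a small index step (here about 1.4 %) yields a usable acceptance cone of roughly 14 degrees around the axis.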
Multimode fiber
Fiber with a large core diameter (greater than about 10 micrometers) may be analyzed by
geometrical optics. Such fiber is called multimode fiber, from the electromagnetic analysis
(see below). In a
step-index multimode fiber, rays of light are guided along the fiber core by total internal
reflection. Rays that meet the core-cladding boundary at a high angle (measured relative
to a line normal to the boundary), greater than the critical angle for this boundary, are
completely reflected. The critical angle (minimum angle for total internal reflection) is
determined by the difference in index of refraction between the core and cladding
materials. Rays that meet the boundary at a low angle are refracted from the core into the
cladding, and do not convey light and hence information along the fiber. The critical
angle determines the acceptance angle of the fiber, often reported as a numerical aperture.
A high numerical aperture allows light to propagate down the fiber in rays both close to
the axis and at various angles, allowing efficient coupling of light into the fiber.
However, this high numerical aperture increases the amount of dispersion as rays at
different angles have different path lengths and therefore take different times to traverse
the fiber.
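The dispersion penalty of a high NA can be estimated with the standard step-index ray model (not given explicitly in the text): the steepest guided ray travels n_core/n_clad times farther than the axial ray, so the arrival-time spread per unit length is (n_core/c)(n_core/n_clad - 1). A sketch using the text's index values:

```python
# Rough modal-dispersion estimate for a step-index multimode fiber
# (standard ray-model formula; index values 1.48/1.46 are from the text).
C = 3.0e8  # speed of light in vacuum, m/s (approximate)

def modal_dispersion_ns_per_km(n_core: float, n_clad: float) -> float:
    """Arrival-time spread between axial and steepest guided ray, ns per km."""
    delta_t_per_m = (n_core / C) * (n_core / n_clad - 1.0)  # seconds per metre
    return delta_t_per_m * 1e3 * 1e9  # metres -> km, seconds -> ns

print(round(modal_dispersion_ns_per_km(1.48, 1.46), 1))  # ~67.6 ns/km
```

A spread of tens of nanoseconds per kilometre is why step-index multimode fiber is limited to short links or low bit rates.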
In graded-index fiber, the index of refraction in the core decreases continuously between
the axis and the cladding. This causes light rays to bend smoothly as they approach the
cladding, rather than reflecting abruptly from the core-cladding boundary. The resulting
curved paths reduce multi-path dispersion because high-angle rays pass more through the
lower-index periphery of the core, rather than through the high-index center. The index
profile is chosen to minimize the difference in axial propagation speeds of the various
rays in the fiber. This ideal index profile is very close to a parabolic relationship between
the index and the distance from the axis.
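The near-parabolic profile mentioned above is commonly written n(r) = n1·sqrt(1 − 2Δ(r/a)²) inside the core, where Δ = (n1² − n2²)/(2n1²) is the relative index difference; this standard form is an assumption, as the text gives no formula. A sketch with the text's index values and an illustrative 25 µm core radius:

```python
import math

def graded_index(r: float, a: float, n1: float, n2: float) -> float:
    """Parabolic graded-index profile: n1 on the axis, falling to n2 at r = a.
    n(r) = n1 * sqrt(1 - 2*delta*(r/a)**2) for r < a, n2 in the cladding."""
    delta = (n1**2 - n2**2) / (2 * n1**2)  # relative index difference
    if r >= a:
        return n2
    return n1 * math.sqrt(1 - 2 * delta * (r / a) ** 2)

# Index decreases smoothly from the core value to the cladding value.
print(round(graded_index(0.0, 25e-6, 1.48, 1.46), 3))    # 1.48 on the axis
print(round(graded_index(25e-6, 25e-6, 1.48, 1.46), 3))  # 1.46 at the boundary
```

Because n(a) equals the cladding index exactly, the profile is continuous at the core-cladding boundary, which is what bends rays smoothly instead of reflecting them abruptly.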
Single-mode fiber
Fiber with a core diameter less than about ten times the wavelength of the propagating
light cannot be modeled using geometric optics. Instead, it must be analyzed as an
electromagnetic structure, by solution of Maxwell's equations as reduced to the
electromagnetic wave equation. The electromagnetic analysis may also be required to
understand behaviors such as speckle that occur when coherent light propagates in multi-
mode fiber. As an optical waveguide, the fiber supports one or more confined transverse
modes by which light can propagate along the fiber. Fiber supporting only one mode is
called single-mode or mono-mode fiber. The behavior of larger-core multimode fiber can
also be modeled using the wave equation, which shows that such fiber supports more
than one mode of propagation (hence the name). The results of such modeling of
multimode fiber approximately agree with the predictions of geometric optics, if the fiber
core is large enough to support more than a few modes. The waveguide analysis shows
that the light energy in the fiber is not completely confined in the core. Instead,
especially in single-mode fibers, a significant fraction of the energy in the bound mode
travels in the cladding as an evanescent wave.
The most common type of single-mode fiber has a core diameter of 8 to 10 μm and is
designed for use in the near infrared. The mode structure depends on the wavelength of
the light used, so that this fiber actually supports a small number of additional modes at
visible wavelengths.
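The wavelength dependence of the mode structure can be checked with the standard normalized-frequency (V number) criterion, which the text does not introduce explicitly: a step-index fiber is single-mode when V = (π·d/λ)·NA is below about 2.405. The core diameter and index values below are illustrative assumptions typical of telecom single-mode fiber, not figures from the text:

```python
import math

def v_number(core_diameter_m: float, wavelength_m: float,
             n_core: float, n_clad: float) -> float:
    """Normalized frequency V = (pi * d / lambda) * NA.
    A step-index fiber is single-mode when V < 2.405."""
    na = math.sqrt(n_core**2 - n_clad**2)
    return math.pi * core_diameter_m / wavelength_m * na

# Assumed values: 9 um core, 1550 nm light, small index step (~0.37%).
v = v_number(9e-6, 1550e-9, 1.4500, 1.4447)
print(round(v, 2), v < 2.405)  # V ~ 2.26, single-mode at this wavelength
```

Shortening the wavelength raises V, which is why a fiber that is single-mode in the near infrared can support a few extra modes at visible wavelengths.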
Special-purpose fiber
Some special-purpose optical fiber is constructed with a non-cylindrical core and/or
cladding layer, usually with an elliptical or rectangular cross-section. These include
polarization-maintaining fiber and fiber designed to suppress whispering gallery mode
propagation.
Photonic crystal fiber is made with a regular pattern of index variation (often in the form
of cylindrical holes that run along the length of the fiber). Such fiber uses diffraction
effects instead of or in addition to total internal reflection, to confine light to the fiber's
core. The properties of the fiber can be tailored to a wide variety of applications.
Manufacturing
Glass optical fibers are almost always made from silica, but some other materials, such as
fluorozirconate, fluoroaluminate, and chalcogenide glasses, are used for longer-
wavelength infrared applications. Like other glasses, these glasses have a refractive index
of about 1.5. Typically the difference between core and cladding is less than one percent.
Plastic optical fibers (POF) are commonly step-index multimode fibers with a core
diameter of 0.5 mm or larger. POF typically have higher attenuation coefficients than
glass fibers, 1 dB/m or higher, and this high attenuation limits the range of POF-based
systems.
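The range limit follows directly from how attenuation in decibels accumulates with length: the remaining power fraction after length L is 10^(−αL/10). The 1 dB/m figure for POF is from the text; the 0.2 dB/km value for silica fiber is an assumed typical figure, for comparison:

```python
# Attenuation in dB accumulates linearly with length: loss_dB = alpha * L.
def remaining_power_fraction(alpha_db_per_m: float, length_m: float) -> float:
    """Fraction of launched optical power left after length_m of fiber."""
    return 10 ** (-alpha_db_per_m * length_m / 10)

# POF at 1 dB/m (from the text) vs silica at ~0.0002 dB/m (assumed 0.2 dB/km):
print(remaining_power_fraction(1.0, 30))     # POF, 30 m: only 0.1% remains
print(remaining_power_fraction(0.0002, 30))  # glass, 30 m: ~99.9% remains
```

After just 30 m the plastic fiber has lost 30 dB (a factor of 1000), which is why POF-based systems are confined to short links.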
Process
Standard optical fibers are made by first constructing a large-diameter preform, with a
carefully controlled refractive index profile, and then pulling the preform to form the
long, thin optical fiber. The preform is commonly made by three chemical vapor
deposition methods: inside vapor deposition, outside vapor deposition, and vapor axial
deposition.[6]
With inside vapor deposition, a hollow glass tube approximately 40 cm in length known
as a "preform" is placed horizontally and rotated slowly on a lathe, and gases such as
silicon tetrachloride (SiCl4) or germanium tetrachloride (GeCl4) are injected with oxygen
in the end of the tube. The gases are then heated by means of an external hydrogen
burner, bringing the temperature of the gas up to 1900 kelvins, where the tetrachlorides
react with oxygen to produce silica or germania (germanium dioxide) particles. When the
reaction conditions are chosen to allow this reaction to occur in the gas phase throughout
the tube volume, in contrast to earlier techniques where the reaction occurred only on the
glass surface, this technique is called modified chemical vapor deposition (MCVD).
The oxide particles then agglomerate to form large particle chains, which subsequently
deposit on the walls of the tube as soot. The deposition is due to the large difference in
temperature between the gas core and the wall causing the gas to push the particles
outwards (this is known as thermophoresis). The torch is then traversed up and down the
length of the tube to deposit the material evenly. After the torch has reached the end of
the tube, it is then brought back to the beginning of the tube and the deposited particles
are then melted to form a solid layer. This process is repeated until a sufficient amount of
material has been deposited. For each layer the composition can be modified by varying
the gas composition, resulting in precise control of the finished fiber's optical properties.
In outside vapor deposition or vapor axial deposition, the glass is formed by flame
hydrolysis, a reaction in which silicon tetrachloride and germanium tetrachloride are
oxidized by reaction with water in an oxyhydrogen flame. In outside vapor
deposition the glass is deposited onto a solid rod, which is removed before further
processing. In vapor axial deposition, a short seed rod is used, and a porous preform,
whose length is not limited by the size of the source rod, is built up on its end. The
porous preform is consolidated into a transparent, solid preform by heating to about 1800
kelvins.
The preform, however constructed, is then placed in a device known as a drawing tower,
where the preform tip is heated and the optical fiber is pulled out as a string. By
measuring the resultant fiber width, the tension on the fiber can be controlled to maintain
the fiber thickness.