COURSE FILE
Contents
1. Cover Page
2. Syllabus copy
3. Vision of the Department
4. Mission of the Department
5. PEOs and POs
6. Course objectives and outcomes
7. Brief notes on the importance of the course and how it fits into the curriculum
8. Prerequisites
9. Instructional Learning Outcomes
10. Course mapping with PEOs and POs
11. Class Time Table
12. Individual Time Table
13. Micro Plan with dates and closure report
14. Detailed notes
15. Additional topics
16. University Question papers of previous years
17. Question Bank
18. Assignment topics
19. Unit wise Quiz Questions
20. Tutorial problems
21. Known gaps, if any
22. Discussion topics
23. References, Journals, websites and E-links
24. Quality Control Sheets
25. Student List
26. Group-Wise students list for discussion topics
GEETHANJALI COLLEGE OF ENGINEERING AND TECHNOLOGY
Department of Electronics and Communication Engineering
(Name of the Subject / Lab Course): Digital Communications
(JNTU Code: A60420) Programme: UG
Distribution List:
Prepared by:
1) Name:
2) Sign:
3) Design:
4) Date:
2. Syllabus Copy
Jawaharlal Nehru Technological University Hyderabad, Hyderabad
DIGITAL COMMUNICATIONS
Advantages of digital communication systems, Bandwidth- S/N trade off, Hartley Shannon
law, Sampling theorem
Introduction, ASK, ASK Modulator, Coherent ASK detector, Non-Coherent ASK detector,
FSK, Band width frequency spectrum of FSK, Non-Coherent FSK detector, Coherent FSK
detector, FSK Detection using PLL, BPSK, Coherent PSK detection, QPSK, Differential
PSK
UNIT III: Base Band Transmission And Optimal Reception of Digital Signal
Pulse shaping for optimum transmission, A Base band signal receiver, Probability of error,
optimum receiver, Optimum of coherent reception, Signal space representation and
probability of error, Eye diagrams for ASK, FSK and PSK, cross talk,
Information Theory
Information and entropy, Conditional entropy and redundancy, Shannon Fano coding, mutual
information, Information loss due to noise, Source coding - Huffman code, variable length
coding, Source coding to increase average information per bit, Lossy source coding.
Matrix description of linear block codes, Error detection and error correction capabilities of
linear block codes,
Encoding, decoding using state, Tree and trellis diagrams, Decoding using Viterbi algorithm,
Comparison of error rates in coded and uncoded transmission.
Use of spread spectrum, direct sequence spread spectrum(DSSS), Code division multiple
access, Ranging using DSSS, Frequency Hopping spread spectrum, PN Sequences:
generation and characteristics, Synchronization in spread spectrum system
TEXT BOOKS:
REFERENCES:
Course Objectives
Design digital communication systems, given constraints on data rate, bandwidth,
power, fidelity, and complexity.
Analyze the performance of a digital communication link when additive noise is
present in terms of the signal-to-noise ratio and bit error rate.
Compute the power and bandwidth requirements of modern communication systems,
including those employing ASK, PSK, FSK, and QAM modulation formats.
Design a scalar quantizer for a given source with a required fidelity and determine the
resulting data rate.
Determine the auto-correlation function of a line code and determine its power
spectral density.
Determine the power spectral density of band pass digital modulation formats.
Course Outcomes:
Ability to understand the functions of the various parts, analyze theoretically the
performance of a modern communication system.
Ability to compare analog and digital communications in terms of noise, attenuation,
and distortion.
Ability to recognize the concepts of digital baseband transmission, optimum reception
analysis and band limited transmission.
Ability to characterize and analyze various pass band modulation techniques.
Ability to explain the basic concepts of error detection/correction coding and
perform error analysis.
8. PREREQUISITES:
Engineering Mathematics
Basic Electronics
Signals and systems
Analog Communications
DC 1: Analyse the elements of a digital communication system, and the importance and
applications of digital communication.
DC 3: Explain the conversion of an analog signal to a digital signal and the issues that
occur in digital transmission, such as the bandwidth vs. S/N trade-off.
DC 5: Analyse the importance of Hartley Shannon law in calculating the BER and the
channel capacity.
DC11: Describe and differentiate the different shift keying formats used in digital
communication.
DC 12: Compute the power and bandwidth requirements of modern communication
systems employing modulation formats like ASK, PSK, FSK, and QAM.
DC 13: Explain the different modulators and detectors - ASK modulator, coherent ASK
detector, non-coherent ASK detector, the bandwidth/frequency spectrum of FSK,
non-coherent FSK detector and coherent FSK detector.
DC 15: Differentiate the different keying schemes -BPSK, Coherent PSK detection,
QPSK & Differential PSK.
DC 16: Identify the need for pulse shaping for optimum transmission and understand the
baseband signal receiver model.
DC 19: Explain the Eye diagram and its importance in calculating error.
DC 20: Describe cross talk and its effect in the degradation of signal quality in digital
communication.
Information Theory
DC 21: Identify the basic terminology used in coding of Digital signals like Information
and entropy and calculate the Conditional entropy and redundancy.
DC 23: Solve problems based on mutual information and Information loss due to noise.
DC 24: Compute problems on Source coding methods like - Huffman code, variable
length codes used in digital communication.
DC 25: Explain Source coding and drawbacks of Lossy source Coding and how to
increase the average information per bit.
UNIT 4: Error control codes
DC 26: Illustrate the different types of codes used in digital communication and the
Matrix description of linear block codes.
DC 27: Analyze and find errors, solve the numerical in Error detection and error
correction of linear block codes.
DC 28: Explain cyclic codes, the difference between linear block codes and cyclic
codes.
DC 29: Compute problems based on the representation of cyclic codes and encoding and
decoding of cyclic codes.
DC 30: Solve problems to find the location of error in the codes i.e., syndrome
calculation.
Convolutional Codes
DC 31: Identify the difference between the different codes used in digital communication.
DC 33: Solve problems on error detection & correction using state Tree and trellis
diagrams.
DC 35: Compute numerical on error calculations and compare the error rates in coded
and uncoded transmission.
DC 36: Analyze the need and use of spread spectrum in digital communication and gain
knowledge of spread spectrum techniques like direct sequence spread spectrum (DSSS).
DC 37: Describe code division multiple access, ranging using DSSS, and frequency
hopping spread spectrum.
1  Digital Communications  56026  II  √  √
*When the course outcome weightage is < 40%, it will be given as moderately correlated (1).
*When the course outcome weightage is >40%, it will be given as strongly correlated (2).
POs 1 2 3 4 5 6 7 8 9 10 11 12 13
Digital Communications 2 2 1 1 1 1 2 2 2 2 2
CO 2: Demonstrate generation and reconstruction of different Pulse Code Modulation
schemes like PCM, DPCM etc. (correlation with POs: 2 2 2 1 1 1 2 2 2)
spread spectrum
58  PN sequences: generation and characteristics    1  Regular  BB
59  Synchronization in spread spectrum system       1  Regular  BB
60  Advancements in digital communication           1  Missing  BB
61  08  Tutorial Class-8                            1  Regular  BB
62  Solving University papers                       1  Regular  OHP, BB
61  Assignment test-4                               1
62  Total No. of classes                            62
14. Detailed Notes
UNIT 1 :
Elements Of Digital Communication Systems
Model of digital communication system,
Sampling theorem
The term communication (or telecommunication) means the transfer of some form of
information from one place (known as the source of information) to another place
(known as the destination of information) using some system to do this function
(known as a communication system).
In this course, we will study the basic methods that are used for communication in
today’s world and the different systems that implement these communication methods.
Upon the successful completion of this course, you should be able to identify the
different communication techniques, know the advantages and disadvantages of each
technique, and show the basic construction of the systems that implement these
communication techniques.
Pigeons
Horseback
Smoke
Fire
Post Office
Drums
Problems with Old Communication Methods
Slow
Difficult and relatively expensive
Limited amount of information can be sent
Some methods can be used at specific times of the day
Information is not secure.
Advantages of Today’s Communication Methods
Fast
Easy to use and very cheap
Huge amounts of information can be transmitted
Secure transmission of information can easily be achieved
Can be used 24 hours a day.
Basic Construction of Electrical Communication System

Input → Input Transducer → Transmitter → Channel (adds noise, which distorts the
transmitted signal) → Receiver → Output Transducer → Output
Analog Signals: signals whose amplitudes may take any real value out of an infinite
number of values in a specific range (examples: the height of mercury in a 10 cm long
thermometer over a period of time is a function of time that may take any value between
0 and 10 cm; the weight of people sitting in a classroom is a function of space (x and y
coordinates) that may take any real value between, typically, 30 kg and 200 kg).

Digital Signals: signals whose amplitudes may take only a specific number of values
(the number of possible values is finite) (examples: the number of days in a year is a
function that takes one of two values, 365 or 366 days; the number of people sitting on a
one-person chair at any instant of time is either 0 or 1; the number of students registered
in different classes at KFUPM is an integer between 1 and 100).
Noise: an undesired signal that gets added to (or sometimes multiplied with) the desired
transmitted signal at the receiver. The source of noise may be external to the
communication system (noise from electric machines, other communication systems, and
outer space) or internal to it (noise resulting from the collision of electrons with atoms in
wires and ICs).

Signal to Noise Ratio (SNR): the ratio of the power of the desired signal to the power of
the noise signal.

Bandwidth (BW): the width of the frequency range that the signal occupies. For example,
the bandwidth of a radio channel in the AM band is around 10 kHz, and the bandwidth of
a radio channel in the FM band is 150 kHz.
Since the introduction of digital communication a few decades ago, it has been gaining a
steady increase in use. Today, you can find a digital form of almost all types of analog
communication systems. For example, TV channels are now broadcast in digital form
(most if not all Ku-band satellite TV transmission is digital). Also, radio is now being
broadcast in digital form (see sirus.com and xm.com). Home phone systems are starting to
go digital (a digital phone system is available at KFUPM). Almost all cellular phones are
now digital, and so on. So, what makes digital communication more attractive compared
to analog communication?
Famous Types

Amplitude Modulation (AM): varying the amplitude of the carrier based on the
information signal, as done for radio channels transmitted in the AM radio band.

Phase Modulation (PM): varying the phase of the carrier based on the information signal.

Frequency Modulation (FM): varying the frequency of the carrier based on the
information signal, as done for channels transmitted in the FM radio band.
Purpose of Modulation
a) TV in the 1970s:
b) TV in the 2030s:
c) Fax machines
Disadvantages:
1. It can be unreliable, as messages cannot be verified by signatures. Though software can
be developed for this, such software can easily be hacked.
2. Sometimes the quickness of digital communication is harmful, as messages can be sent
with the click of a mouse; a person may send a message on impulse without thinking.
3. Digital communication loses the human touch. A personal touch cannot be established
because all the computers will have the same font!
4. The establishment of digital communication can degrade the environment in some
cases; "electronic waste" is an example. Radiation from telephone and cell phone towers
has also been blamed for harming small birds, and the decline of the common sparrow is
sometimes attributed to the proliferation of such towers.
5. Digital communication has made the whole world an "office": people carry their work
to places where they are supposed to relax. Even in the office, digital communication
causes problems, because personal messages can reach you on your cell phone, the
internet, etc.
6. Many people misuse the efficiency of digital communication. The sending of hoax
messages and usage intended to harm society cause harm to society as a whole.
Advantages of Digital -
Less expensive
More reliable
Easy to manipulate
Flexible
Compatibility with other digital systems
Only digitized information can be transported through a noisy channel without degradation
Integrated networks
Disadvantages of Digital -
Sampling Error
Digital communications require greater bandwidth than analogue to transmit the same
information.
The detection of digital signals requires the communications system to be synchronized,
whereas generally speaking this is not the case with analogue systems.
1. The first advantage of digital communication over analog is its noise immunity. In any
transmission path some unwanted voltage or noise is always present and cannot be
eliminated fully. When a signal is transmitted, this noise gets added to it, distorting the
signal. In digital communication, however, this additive noise can be largely eliminated at
the receiving end, resulting in better recovery of the actual signal; in analog
communication it is difficult to remove the noise once it has been added.
4. A signal travelling through its transmission path fades gradually, so along the path it
must be reconstructed to its actual form and re-transmitted many times. For this,
AMPLIFIERS are used in analog communication and REPEATERS in digital
communication. Amplifiers are needed every 2 to 3 km, whereas repeaters are needed only
every 5 to 6 km, so digital communication is cheaper. Amplifiers also often add
non-linearity that distorts the actual signal.
5. Bandwidth is another scarce resource. Various digital communication techniques are
available that use the available bandwidth more efficiently than analog communication
techniques.
6. When audio and video signals are transmitted digitally, an AD (analog-to-digital)
converter is needed at the transmitting side and a DA (digital-to-analog) converter at the
receiving side. In analog transmission these devices are not needed.
7. Digital signals are often an approximation of the analog data (like voice or video)
obtained through a process called quantization. The digital representation is never the
exact signal but its closest approximation, so its accuracy depends on the degree of
approximation taken in the quantization process.
Sampling Theorem:
6.1 Introduction
Quite a few of the information bearing signals, such as speech, music, video, etc., are analog
in nature; that is, they are functions of the continuous variable t and for any t = t1, their value
can lie anywhere in the interval, say − A to A. Also, these signals are of the baseband variety.
If there is a channel that can support baseband transmission, we can easily set up a baseband
communication system. In such a system, the transmitter could be as simple as just a power
amplifier so that the signal that is transmitted could be received at the destination with some
minimum power level, even after being subject to attenuation during propagation on the
channel. In such a situation, even the receiver could have a very simple structure; an
appropriate filter (to eliminate the out of band spectral components) followed by an amplifier.
If a baseband channel is not available but we have access to a passband channel (such as an
ionospheric channel, satellite channel etc.), an appropriate CW modulation scheme discussed
earlier could be used to shift the baseband spectrum to the passband of the given channel.
Interestingly enough, it is possible to transmit the analog information in a digital format.
Though there are many ways of doing it, in this chapter, we shall explore three such
techniques, which have found widespread acceptance. These are: Pulse Code Modulation
(PCM), Differential Pulse Code Modulation (DPCM)
and Delta Modulation (DM). Before we get into the details of these techniques, let us
summarize the benefits of digital transmission. For simplicity, we shall assume that
information is being transmitted by a sequence of binary pulses. i) During the course of
propagation on the channel, a transmitted pulse becomes gradually distorted due to the non-
ideal transmission characteristic of the channel. Also, various unwanted signals (usually
termed interference and noise) will cause further deterioration of the information bearing
pulse. However, as there are only two types of signals being transmitted, it is possible for
us to identify (with a very high probability) a given transmitted pulse at some appropriate
intermediate point on the channel and regenerate a clean pulse. In this way, the effect of
distortion and noise can be completely eliminated up to the point of regeneration. (In
long-haul PCM telephony, regeneration is done every few kilometers, with the help of
regenerative repeaters.) Clearly, such an operation is not possible if the transmitted signal
is analog, because there is nothing like a reference waveform that can be regenerated.
ii) Storing the messages in digital form and forwarding or redirecting them at a later point in
time is quite simple.
iii) Coding the message sequence to take care of the channel noise, encrypting for secure
communication can easily be accomplished in the digital domain.
iv) Mixing the signals is easy. All signals look alike after conversion to digital form
independent of the source (or language!). Hence they can easily be multiplexed (and
demultiplexed)
6.2 The PCM system
Two basic operations in the conversion of analog signal into the digital is time discretization
and amplitude discretization. In the context of PCM, the former is accomplished with the
sampling operation and the latter by means of quantization. In addition, PCM involves
another step, namely, conversion of quantized amplitudes into a sequence of simpler pulse
patterns (usually binary), generally called as code words. (The word code in pulse code
modulation refers
to the fact that every quantized sample is converted to an R -bit code word.)
Fig. 6.1 illustrates a PCM system. Here, m(t ) is the information bearing
message signal that is to be transmitted digitally. m(t ) is first sampled and then
quantized. The output of the sampler is
The quantizer converts each sample to one of the values that is closest to it from among a
pre-selected set of discrete amplitudes. The encoder represents each one of these quantized
samples by an R -bit code word. This bit stream travels on the channel and reaches the
receiving end. With fs as the sampling rate and R bits per code word, the bit rate of the
PCM system is R·fs bits per second.
The decoder converts the R -bit code words into the corresponding (discrete) amplitudes.
Finally, the reconstruction filter, acting on these discrete amplitudes, produces the analog
signal, denoted by m’(t). If there are no channel errors, then m’(t) ≈ m(t).
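A quick sketch of the PCM bit-rate relation described above; the 8 kHz / 8-bit figures below are assumed example values (the standard telephony parameters), not taken from the notes:

```python
# With fs samples per second and R bits per code word,
# the PCM bit rate is R * fs bits per second.

def pcm_bit_rate(fs_hz, r_bits):
    """Bit rate of a PCM system carrying R bits per sample at fs samples/s."""
    return r_bits * fs_hz

print(pcm_bit_rate(8000, 8))   # 64000 -- the familiar 64 kbps voice channel
```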
Pulse-Amplitude Modulation:
The flat-top pulse is

h(t) = 1 for 0 < t < T; h(t) = 1/2 for t = 0 and t = T; h(t) = 0 otherwise   (3.11)

The instantaneously sampled version of m(t) is

m_δ(t) = Σ_n m(nT_s) δ(t − nT_s)   (3.12)

Convolving m_δ(t) with h(t),

m_δ(t) ∗ h(t) = ∫ m_δ(τ) h(t − τ) dτ = Σ_n m(nT_s) ∫ δ(τ − nT_s) h(t − τ) dτ   (3.13)

Using the sifting property, we have

m_δ(t) ∗ h(t) = Σ_n m(nT_s) h(t − nT_s)   (3.14)

The PAM signal is

s(t) = Σ_n m(nT_s) h(t − nT_s) = m_δ(t) ∗ h(t)   (3.15)

so that

S(f) = M_δ(f) H(f)   (3.16)

Recall (3.2): g_δ(t) ⇌ f_s Σ_m G(f − m f_s)   (3.2)

M_δ(f) = f_s Σ_k M(f − k f_s)   (3.17)

S(f) = f_s Σ_k M(f − k f_s) H(f)   (3.18)
The instantaneous amplitude of the analog (voice) signal is held as a constant charge
on a capacitor for the duration of the sampling period Ts.
This technique is useful for holding the sample constant while other processing is
taking place, but it alters the frequency spectrum and introduces an error, called
aperture error, resulting in an inability to recover exactly the original analog signal.
The amount of error depends on how much the analog signal changes during the holding
time, called aperture time.
To estimate the maximum voltage error possible, determine the maximum slope of the
analog signal and multiply it by the aperture time ΔT.
Recovering the original message signal m(t) from PAM signal :
In pulse width modulation (PWM), the width of each pulse is made directly proportional
to the amplitude of the information signal.
In pulse position modulation, constant-width pulses are used, and the position or time of
occurrence of each pulse from some reference time is made directly proportional to the
amplitude of the information signal.
Pulse Code Modulation (PCM) :
As in the case of other pulse modulation techniques, the rate at which samples are
taken and encoded must conform to the Nyquist sampling rate.
The sampling rate must be greater than twice the highest frequency in the analog signal:

fs > 2fA(max)
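The condition fs > 2fA(max) can be illustrated numerically. The sketch below (assumed frequencies, not from the notes) computes the frequency that sampled data appears to have: sampling folds any input frequency into the band [0, fs/2], so sampling above the Nyquist rate preserves the signal while undersampling produces an alias.

```python
def apparent_frequency(f_sig, fs):
    """Frequency an f_sig-hertz sinusoid appears to have at sample rate fs."""
    f = f_sig % fs                 # sampling is periodic in frequency with fs
    return min(f, fs - f)          # fold into the baseband [0, fs/2]

print(apparent_frequency(1000.0, 8000.0))   # 1000.0 -> fs > 2*f_max, preserved
print(apparent_frequency(1000.0, 1500.0))   # 500.0  -> undersampled, aliased
```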
Quantization Process:
Quantization Noise:
Figure 3.11 Illustration of the quantization process
Let the quantization error be denoted by the random variable Q of sample value q:

q = m − v   (3.23)
Q = M − V,  with E[M] = 0   (3.24)

Assuming a uniform quantizer of the midrise type, the step size is

Δ = 2 m_max / L   (3.25)

where −m_max ≤ m ≤ m_max and L is the total number of levels. The error is then
uniformly distributed:

f_Q(q) = 1/Δ for −Δ/2 ≤ q ≤ Δ/2; 0 otherwise   (3.26)

σ_Q² = E[Q²] = ∫_{−Δ/2}^{Δ/2} q² f_Q(q) dq = (1/Δ) ∫_{−Δ/2}^{Δ/2} q² dq = Δ²/12   (3.28)
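For a uniform quantizer with step size Δ, the quantization-noise variance works out to Δ²/12 (equation 3.28). This can be verified by simulation; the step size below is an arbitrary assumed value:

```python
import random

# A quantization error uniformly distributed over (-delta/2, delta/2)
# should have variance close to delta**2 / 12.
random.seed(1)
delta = 0.5
samples = [random.uniform(-delta / 2, delta / 2) for _ in range(200_000)]
var_est = sum(q * q for q in samples) / len(samples)   # E[Q^2] (E[Q] = 0)

print(var_est, delta ** 2 / 12)   # both close to 0.0208
```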
μ-law

|v| = ln(1 + μ|m|) / ln(1 + μ)   (3.48)

d|m|/d|v| = [ln(1 + μ) / μ] (1 + μ|m|)   (3.49)

A-law

|v| = A|m| / (1 + ln A)            for 0 ≤ |m| ≤ 1/A
|v| = (1 + ln(A|m|)) / (1 + ln A)  for 1/A ≤ |m| ≤ 1   (3.50)

d|m|/d|v| = (1 + ln A) / A         for 0 ≤ |m| ≤ 1/A
d|m|/d|v| = (1 + ln A) |m|         for 1/A ≤ |m| ≤ 1   (3.51)
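The μ-law characteristic |v| = ln(1 + μ|m|)/ln(1 + μ) can be sketched directly; μ = 255 (the North American standard value) and a normalized input |m| ≤ 1 are assumed:

```python
import math

MU = 255.0

def mu_law_compress(m):
    """|v| = ln(1 + mu|m|) / ln(1 + mu), with the sign of m preserved."""
    return math.copysign(math.log(1 + MU * abs(m)) / math.log(1 + MU), m)

# Small amplitudes are boosted, large ones compressed:
print(round(mu_law_compress(0.01), 3))   # 0.228
print(round(mu_law_compress(1.0), 3))    # 1.0
```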
Figure 3.15 Line codes for the electrical representations of binary data.
Advantages of PCM
2. Efficient regeneration
4. Uniform format
6. Secure
The quantizer output (the accumulated staircase approximation) is

m_q[n] = Δ Σ_{i=1}^{n} sgn(e_i) = Σ_{i=1}^{n} e_q[i]   (3.55)
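The accumulator of equation (3.55) can be simulated: the staircase output is a running sum of ±Δ steps. The sinusoidal input and the step size below are assumed for illustration only:

```python
import math

delta = 0.2
signal = [math.sin(2 * math.pi * n / 40) for n in range(40)]

mq = 0.0                 # staircase approximation (the accumulator)
staircase = []
for m in signal:
    e = m - mq                           # error between input and staircase
    mq += delta if e >= 0 else -delta    # one +/- delta step per sample
    staircase.append(mq)

# With max input slope 2*pi/40 < delta there is no slope overload, so
# the staircase tracks the input to within roughly one step size:
max_err = max(abs(m - s) for m, s in zip(signal, staircase))
print(max_err)
```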
Two types of quantization errors:
3. Set of adders ( )
The mean-square prediction error is

J = E[e²[n]] = E[x²[n]] − 2 Σ_{k=1}^{p} w_k E[x[n] x[n−k]]
    + Σ_{j=1}^{p} Σ_{k=1}^{p} w_j w_k E[x[n−j] x[n−k]]

Assume X(t) is a stationary process with zero mean (E[x[n]] = 0).
Linear adaptive prediction:

w_k(n+1) = w_k(n) − (1/2) μ g_k,  k = 1, 2, ..., p   (3.69)

where μ is a step-size parameter and the 1/2 is for convenience of presentation.
Computing the optimum weights directly requires knowledge of the autocorrelation
values R_X(0), R_X(1), ..., R_X(p). Substituting (3.64) into (3.63) yields

J_min = σ_X² − 2 Σ_{k=1}^{p} w_k R_X(k) + Σ_{k=1}^{p} w_k R_X(k)
      = σ_X² − Σ_{k=1}^{p} w_k R_X(k)
      = σ_X² − r_Xᵀ w_0 = σ_X² − r_Xᵀ R_X⁻¹ r_X   (3.67)

Since r_Xᵀ R_X⁻¹ r_X ≥ 0, J_min is always less than σ_X².
The gradient is

g_k = ∂J/∂w_k = −2 R_X(k) + 2 Σ_{j=1}^{p} w_j R_X(k − j)
    = −2 E[x[n] x[n−k]] + 2 Σ_{j=1}^{p} w_j E[x[n−j] x[n−k]],  k = 1, 2, ..., p   (3.70)
j 1
p
ˆ k n 1 w
w ˆ k n xn k xn w ˆ j nxn j
j 1
wˆ k n xn k en , k 1,2, , p (3.72)
p
where en xn w
ˆ j n xn j by (3.59) (3.60) (3.73)
j 1
Usually PCM has a sampling rate higher than the Nyquist rate, so the encoded signal
contains redundant information; DPCM can efficiently remove this redundancy.
In coding speech at low bit rates, we have two aims in mind:
• The amplitude (or height) of the sine wave varies to transmit the ones and zeros
• One amplitude encodes a 0 while another amplitude encodes a 1 (a form of amplitude
modulation)
s(t) = A cos(2π f1 t)   binary 1
s(t) = A cos(2π f2 t)   binary 0
FSK Bandwidth:
DBPSK:
• Differential BPSK
– 0 = same phase as last signal element
– 1 = 180º shift from last signal element
s(t) = A cos(2π fc t + π/4)    dibit 11
s(t) = A cos(2π fc t + 3π/4)   dibit 01
s(t) = A cos(2π fc t − 3π/4)   dibit 00
s(t) = A cos(2π fc t − π/4)    dibit 10
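The four-phase constellation can be sketched as a Gray-coded mapping from dibits to complex baseband symbols with phases ±π/4 and ±3π/4. This is a minimal illustration, not a full modulator:

```python
import cmath
import math

PHASES = {'11': math.pi / 4, '01': 3 * math.pi / 4,
          '00': -3 * math.pi / 4, '10': -math.pi / 4}

def qpsk_map(bits, amplitude=1.0):
    """Map a bit string of even length to complex QPSK symbols."""
    return [amplitude * cmath.exp(1j * PHASES[bits[i:i + 2]])
            for i in range(0, len(bits), 2)]

symbols = qpsk_map('110100')
print([(round(s.real, 3), round(s.imag, 3)) for s in symbols])
# [(0.707, 0.707), (-0.707, 0.707), (-0.707, -0.707)]
```

Note that dibits mapped to adjacent phases differ in only one bit, so a single phase error causes only a single bit error.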
Concept of a constellation :
M-ary PSK:
Using multiple phase angles, each angle possibly having more than one amplitude,
multiple signal elements can be achieved. The modulation (baud) rate is then

D = R / L = R / log2 M

where D is the modulation rate, R the data rate, L the number of bits per signal element,
and M the number of different signal elements.
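A quick numerical check of D = R / log2(M); the data rate and the values of M below are assumed examples:

```python
import math

def baud_rate(data_rate_bps, m):
    """Signal-element (baud) rate D for data rate R with M-ary signalling."""
    return data_rate_bps / math.log2(m)

print(baud_rate(9600, 2))    # 9600.0 -- binary: one bit per element
print(baud_rate(9600, 16))   # 2400.0 -- 16-ary: four bits per element
```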
QAM:
Figure 6.26 Block diagrams for (a) binary FSK transmitter and (b) coherent binary FSK
receiver.
Fig. 6.28
Figure 6.30 (a) Input binary sequence. (b) Waveform of scaled time
function s1f1(t). (c) Waveform of scaled time function s2f2(t). (d)
Waveform of the MSK signal s(t) obtained by adding s1f1(t) and
s2f2(t) on a bit-by-bit basis.
Figure 6.29 Signal-space diagram for MSK system.
Duo-binary Signaling :
The binary sequence {bk} is first level-shifted:

a_k = +1 if symbol b_k is 1
a_k = −1 if symbol b_k is 0

and the duobinary coder output is c_k = a_k + a_{k−1}, transmitted through the ideal
Nyquist channel

H_Nyquist(f) = 1 for |f| ≤ 1/(2T_b); 0 otherwise

To avoid error propagation the data is precoded before level shifting: d_k = b_k ⊕ d_{k−1}.
With the precoder, c_k = a_k + a_{k−1} satisfies

c_k = 0 if data symbol b_k is 1
c_k = ±2 if data symbol b_k is 0

For modified duobinary signaling, c_k = a_k − a_{k−2} with the precoder d_k = b_k ⊕ d_{k−2},
i.e., d_k is symbol 1 if exactly one of b_k and d_{k−2} is 1, and symbol 0 otherwise.

The decision rule at the receiver is

If |c_k| < 1, say symbol b_k is 1
If |c_k| > 1, say symbol b_k is 0
|c_k| = 1: random guess in favor of symbol 1 or 0
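With the precoder d_k = b_k ⊕ d_{k−1} and c_k = a_k + a_{k−1}, each bit can be decided independently (c_k = 0 gives b_k = 1, |c_k| = 2 gives b_k = 0). A sketch assuming an ideal distortionless channel and an arbitrary bit pattern:

```python
def duobinary(bits, d0=0):
    """Return the duobinary coder output c_k for input bits b_k."""
    d, precoded = d0, []
    for b in bits:
        d = b ^ d                        # precoder removes error propagation
        precoded.append(d)
    a = [2 * di - 1 for di in precoded]  # binary 1 -> +1, binary 0 -> -1
    prev = 2 * d0 - 1                    # reference level before the block
    c = []
    for ak in a:
        c.append(ak + prev)              # c_k = a_k + a_(k-1)
        prev = ak
    return c

def decide(c):
    """c_k = 0 when b_k = 1; c_k = +/-2 when b_k = 0."""
    return [1 if abs(ck) < 1 else 0 for ck in c]

bits = [0, 0, 1, 0, 1, 1, 0]
print(decide(duobinary(bits)) == bits)   # True
```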
The equalizer is a tapped-delay-line filter with (2N + 1) tap weights w_k; its impulse
response is

Σ_{k=−N}^{N} w_k δ(t − kT)

so its response to the channel pulse c(t) is

p(t) = Σ_{k=−N}^{N} w_k c(t − kT)

and, sampling at t = nT,

p(nT) = Σ_{k=−N}^{N} w_k c((n − k)T)

Nyquist criterion for distortionless transmission, with T used in place of Tb and the
normalized condition p(0) = 1: p(nT) = 0 for all n ≠ 0.
Adaptive Equalizer :
Least-Mean-Square Algorithm:
The mean-square error is ε = E[e_n²].

∂ε/∂w_k = 2 E[e_n ∂e_n/∂w_k] = −2 E[e_n x_{n−k}] = −2 R_ex(k)

where the ensemble-averaged cross-correlation is R_ex(k) = E[e_n x_{n−k}]. At the optimum
tap weights,

R_ex(k) = 0 for k = 0, ±1, ..., ±N
The mean-square error is a second-order, parabolic function of the tap weights - a
multidimensional bowl-shaped surface. The adaptive process makes successive
adjustments to the tap weights, seeking the bottom of the bowl (the minimum value).
Steepest descent algorithm
– The successive adjustments to the tap weights are made in the direction opposite to
the gradient vector
– Recursive formula (μ: step-size parameter):

w_k(n+1) = w_k(n) − (1/2) μ ∂ε/∂w_k,  k = 0, ±1, ..., ±N
         = w_k(n) + μ R_ex(k)
Least-Mean-Square Algorithm
– The steepest-descent algorithm is not available in an unknown environment
– Approximation to the steepest descent algorithm using the instantaneous estimate
R̂_ex(k) = e_n x_{n−k}:

ŵ_k(n+1) = ŵ_k(n) + μ e_n x_{n−k}

LMS is a feedback system; in the case of small μ it behaves roughly like the steepest
descent algorithm.
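The LMS tap-weight update can be sketched on a toy problem: a 3-tap filter adapting to invert a short channel. The channel taps, training length and step size below are illustrative assumptions, not values from the notes:

```python
import random

random.seed(0)
h = [1.0, 0.4]                  # assumed channel introducing ISI
mu = 0.01                       # step-size parameter
w = [0.0, 0.0, 0.0]             # equalizer tap weights

# Known +/-1 training symbols and the corresponding channel output
a = [random.choice((-1.0, 1.0)) for _ in range(5000)]
x = [sum(h[j] * (a[n - j] if n - j >= 0 else 0.0) for j in range(len(h)))
     for n in range(len(a))]

for n in range(len(x)):
    y = sum(w[k] * (x[n - k] if n - k >= 0 else 0.0) for k in range(len(w)))
    e = a[n] - y                                          # error vs known symbol
    for k in range(len(w)):
        w[k] += mu * e * (x[n - k] if n - k >= 0 else 0.0)  # LMS update

print([round(wk, 2) for wk in w])   # approximates the inverse of the channel
```

With this channel the converged weights approximate the truncated inverse of 1 + 0.4 z⁻¹, i.e. roughly [1, −0.4, 0.16].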
Operation of the equalizer:
Implementation Approaches:
Analog
– CCD, Tap-weight is stored in digital memory, analog sample and
multiplication
– Symbol rate is too high
Digital
– Sample is quantized and stored in shift register
– Tap weight is stored in shift register, digital multiplication
Programmable digital
– Microprocessor
– Flexibility
– Same H/W may be time shared
The output of the tapped-delay-line filter can be written as

h_0 x_n + Σ_{k<0} h_k x_{n−k} + Σ_{k>0} h_k x_{n−k}

where the first sum involves the precursors and the second the postcursors.
Using data decisions made on the basis of the precursors to take care of the postcursors:
– The decisions would obviously have to be correct

With feedforward taps w_n^(1) acting on the received samples x_n and feedback taps
w_n^(2) acting on past decisions a_n (stacked as c_n acting on v_n), the error is
e_n = a_n − c_nᵀ v_n and the adaptation is

w^(1)_{n+1} = w^(1)_n + μ1 e_n x_n
w^(2)_{n+1} = w^(2)_n + μ2 e_n a_n
Eye Pattern:
In the case of an M-ary system, the eye pattern contains (M − 1) eye openings, where M
is the number of discrete amplitude levels.
Interpretation of Eye Diagram:
Information Theory
Information and entropy,
Conditional entropy and redundancy,
Shannon Fano coding,
mutual information,
Information loss due to noise,
Source coding - Huffman code, variable length coding
Source coding to increase average information per bit,
Lossy source Coding.
Information sources
Definition:
The set of source symbols is called the source alphabet, and the elements of the set are
called the symbols or letters.
Using this definition we can confirm that it has the wanted property of additivity:
The base ‘b’ of the logarithm is only a change of units, without actually changing the
amount of information it describes.
A discrete memoryless source (DMS) can be characterized by “the list of the symbols, the
probability assignment to these symbols, and the specification of the rate of generating
these symbols by the source”.
Let us consider a discrete memoryless source (DMS) denoted by U and having the
alphabet {U1, U2, U3, ..., Um}. The information content of a symbol u, denoted by I(u),
is defined as

I(u) = log_b (1 / P(u)) = −log_b P(u)
Units of I(u):
For two important and one unimportant special cases of b it has been agreed to use the
following names for these units:
b = 2 (log2): bit,
b = 10 (log10): Hartley.
A change of base is only a change of scale: log2 a = log a / log 2.
Definition:
The information content per symbol can fluctuate widely because of the randomness
involved in the selection of symbols, so we average it over the source alphabet to obtain
the entropy:

H(U) = E[I(U)] = Σ_u P_U(u) log_b (1 / P_U(u))

where P_U(·) denotes the probability mass function (PMF) of the RV U, and where the
sum is over the support of P_U. We will usually neglect to mention “support” when we
sum over u.
It may be noted that for a binary source U which generates independent symbols 0 and 1
with equal probability, the source entropy is H(U) = 1 bit per symbol.
Bounds on H(U)
If U has r possible values, then 0 ≤ H(U) ≤ log r, with H(U) = 0 when one symbol has
probability 1.
To derive the upper bound we use a trick that is quite common in information theory: we
take the difference H(U) − log r and try to show that it must be non-positive. Equality can
only be achieved if all r values are equally likely.
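The bounds 0 ≤ H(U) ≤ log r can be checked numerically; the PMFs below are assumed examples:

```python
import math

def entropy_bits(pmf):
    """H(U) = sum over the support of p * log2(1/p), in bits."""
    return sum(p * math.log2(1 / p) for p in pmf if p > 0)

print(entropy_bits([1.0]))        # 0.0 -- deterministic source, lower bound
print(entropy_bits([0.5, 0.5]))   # 1.0 -- fair binary source
print(entropy_bits([0.25] * 4))   # 2.0 -- uniform over r = 4: log2(4), upper bound
```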
Conditional Entropy
Similar to probabilities of random vectors, there is nothing really new about conditional
probabilities given that a particular event Y = y has occurred. Note that the definition is
identical to before, apart from everything being conditioned on the event Y = y.
Note that the conditional entropy given the event Y = y is a function of y. Since Y is also
a RV, we can now average over all possible events Y = y according to the probabilities of
each event. This will lead to the averaged conditional entropy H(U|Y).
Block Codes:
• The maximum number of correctable errors is given by

t = ⌊(d_min − 1) / 2⌋

where d_min is the minimum Hamming distance between 2 codewords and ⌊x⌋ means the
largest integer not greater than x
• As seen from the second Parity Code example, it is possible to use a table to hold all
the codewords for a code and to look-up the appropriate codeword based on the
supplied dataword
• Alternatively, it is possible to create codewords by addition of other codewords. This
has the advantage that there is now no longer the need to hold every possible
codeword in the table.
• If there are k data bits, all that is required is to hold k linearly independent codewords,
i.e., a set of k codewords none of which can be produced by linear combinations of 2
or more codewords in the set.
• The easiest way to find k linearly independent codewords is to choose those which
have ‘1’ in just one of the first k positions and ‘0’ in the other k-1 of the first k
positions.
• For example for a (7,4) code, only four codewords are required, e.g.,
1 0 0 0 1 1 0
0 1 0 0 1 0 1
0 0 1 0 0 1 1
0 0 0 1 1 1 1
• So, to obtain the codeword for dataword 1011, the first, third and fourth codewords in
the list are added together, giving 1011010
• This process will now be described in more detail
c = (c1 c2 …… cn)

If d3 = d1 ⊕ d2, then

c3 = Σ_{i=1}^{k} d3i a_i = Σ_{i=1}^{k} (d1i ⊕ d2i) a_i
   = Σ_{i=1}^{k} d1i a_i ⊕ Σ_{i=1}^{k} d2i a_i = c1 ⊕ c2
Error Correcting Power of LBC:
• The Hamming distance of a linear block code (LBC) is simply the minimum
Hamming weight (number of 1’s or equivalently the distance from the all 0
codeword) of the non-zero codewords
• Note d(c1,c2) = w(c1+ c2) as shown previously
• For an LBC, c1+ c2=c3
• So min (d(c1,c2)) = min (w(c1+ c2)) = min (w(c3))
• Therefore to find the min Hamming distance we just need to search among the 2^k
codewords to find the min Hamming weight - far simpler than doing a pairwise check
for all possible codewords.
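The weight search described above can be sketched for the four (7,4) generator codewords listed earlier: enumerate all non-zero datawords, form each codeword as a modulo-2 sum of generator rows, and take the minimum weight.

```python
from itertools import product

G = [(1, 0, 0, 0, 1, 1, 0),
     (0, 1, 0, 0, 1, 0, 1),
     (0, 0, 1, 0, 0, 1, 1),
     (0, 0, 0, 1, 1, 1, 1)]

def codeword(data):
    """Modulo-2 sum of the generator rows selected by the data bits."""
    return tuple(sum(d * g for d, g in zip(data, col)) % 2 for col in zip(*G))

weights = [sum(codeword(d)) for d in product((0, 1), repeat=4) if any(d)]
d_min = min(weights)
t = (d_min - 1) // 2
print(d_min, t)   # 3 1 -- this (7,4) code corrects single errors
```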
Linear Block Codes – example 1:
G = | 1 0 1 1 |
    | 0 1 0 1 |

a1 = [1011]
a2 = [0101]

For the dataword [11] the codeword is the modulo-2 sum of the rows:

  1 0 1 1
+ 0 1 0 1
---------
  1 1 1 0

c = [1110]
Systematic Codes:
• For a systematic block code the dataword appears unaltered in the codeword – usually
at the start
• The generator matrix has the structure,
000000 0
000001 1
000010 1
000011 0
……… .
• A linear block code is a linear subspace S sub of all length n vectors (Space S)
• Consider the subset S null of all length n vectors in space S that are orthogonal to all
length n vectors in S sub
• It can be shown that the dimensionality of S null is n-k, where n is the dimensionality
of S and k is the dimensionality of
S sub
• It can also be shown that S null is a valid subspace of S and consequently S sub is also
the null space of S null
• S null can be represented by its basis vectors; arranging these basis vectors as rows gives a matrix H, the generator matrix for S null – of dimension n-k = R
• This matrix H is called the parity check matrix of the code defined by G, where G is the generator matrix for S sub – of dimension k
• Note that the number of vectors in the basis defines the dimension of the subspace
• So the dimension of H is n-k (= R) and all vectors in the null space are orthogonal to
all the vectors of the code
• Since the rows of H, namely the vectors bi are members of the null space they are
orthogonal to any code vector
• So a vector y is a codeword only if yH^T = 0
• Note that a linear block code can be specified by either G or H
This is so since,
c = Σ(i=1..k) di ai
and so,
bj · c = bj · Σ(i=1..k) di ai = Σ(i=1..k) di (ai · bj) = 0
• This means that a codeword is valid (but not necessarily correct) only if cH^T = 0. To ensure this it is required that the rows of H are independent and are orthogonal to the rows of G
• That is the bi span the remaining R (= n - k) dimensions of the codespace
• For example consider a (3,2) code. In this case G has 2 rows, a1 and a2
• Consequently all valid codewords sit in the subspace (in this case a plane) spanned by
a1 and a2
• In this example the H matrix has only one row, namely b1. This vector is orthogonal
to the plane containing the rows of the G matrix, i.e., a1 and a2
• Any received codeword which is not in the plane containing a1 and a2 (i.e., an invalid
codeword) will thus have a component in the direction of b1 yielding a non- zero dot
product between itself and b1.
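The membership test yH^T = 0 can be checked numerically. A small sketch (the G and H pair below is the systematic (7,4) Hamming code used later in these notes):

```python
# Verify that every codeword c = dG satisfies cH^T = 0 over GF(2),
# while a vector with one flipped bit yields a non-zero syndrome.
G = [[1, 0, 0, 0, 0, 1, 1],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 1, 1, 1]]
H = [[0, 1, 1, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [1, 1, 0, 1, 0, 0, 1]]

def mat_vec(rows, v):
    """v * rows^T over GF(2): one dot product per row of the matrix."""
    return [sum(r[i] * v[i] for i in range(len(v))) % 2 for r in rows]

def encode(d):
    """Mod-2 combination of the rows of G selected by the dataword d."""
    c = [0] * 7
    for bit, row in zip(d, G):
        if bit:
            c = [x ^ y for x, y in zip(c, row)]
    return c

c = encode([1, 1, 0, 1])
print(mat_vec(H, c))   # [0, 0, 0] -> valid codeword
c[2] ^= 1              # flip one bit
print(mat_vec(H, c))   # non-zero syndrome -> not a codeword
```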
Error Syndrome:
• For error correcting codes we need a method to compute the required correction
• To do this we use the Error Syndrome, s of a received codeword, cr
s = cr H^T
and
s = (c + e)H^T = cH^T + eH^T = 0 + eH^T = eH^T
• That is, we can add the same error pattern to different code words and get the same
syndrome.
– There are 2^(n-k) syndromes but 2^n error patterns
– For example for a (3,2) code there are 2 syndromes and 8 error patterns
– Clearly no error correction possible in this case
– Another example. A (7,4) code has 8 syndromes and 128 error patterns.
– With 8 syndromes we can provide a different value to indicate single errors in
any of the 7 bit positions as well as the zero value to indicate no errors
• Now need to determine which error pattern caused the syndrome
Writing G = [I | P] and H = [P^T | I], where I is the k×k identity for G and the R×R identity for H.
For the systematic (7,4) Hamming code,

        | 0 1 1 |
        | 1 0 1 |
        | 1 1 0 |
H^T  =  | 1 1 1 |
        | 1 0 0 |
        | 0 1 0 |
        | 0 0 1 |

s = cr H^T = [1 1 0 1 0 0 1] H^T = [0 0 0]

Standard Array:
• The Standard Array is constructed as follows,
c1 (all zero) c2 …… cM s0
e1 c2+e1 …… cM+e1 s1
e2 c2+e2 …… cM+e2 s2
e3 c2+e3 …… cM+e3 s3
… …… …… …… …
eN c2+eN …… cM+eN sN
• The array has 2^k columns (i.e., equal to the number of valid codewords) and 2^R rows (i.e., the number of syndromes)
Hamming Codes:
• We will consider a special class of SEC codes (i.e., Hamming distance = 3) where,
– Number of parity bits R = n – k and n = 2^R – 1
– Syndrome has R bits
– 0 value implies zero errors
– 2^R – 1 other syndrome values, i.e., one for each bit that might need to be corrected
– This is achieved if each column of H is a different binary word – remember s
= eHT
• Systematic form of the (7,4) Hamming code:

G = [I | P] =
| 1 0 0 0 0 1 1 |
| 0 1 0 0 1 0 1 |
| 0 0 1 0 1 1 0 |
| 0 0 0 1 1 1 1 |

H = [P^T | I] =
| 0 1 1 1 1 0 0 |
| 1 0 1 1 0 1 0 |
| 1 1 0 1 0 0 1 |

• Non-systematic form, in which the columns of H form a binary count:

G =
| 1 1 1 0 0 0 0 |
| 1 0 0 1 1 0 0 |
| 0 1 0 1 0 1 0 |
| 1 1 0 1 0 0 1 |

H =
| 0 0 0 1 1 1 1 |
| 0 1 1 0 0 1 1 |
| 1 0 1 0 1 0 1 |
• Compared with the systematic code, the column orders of both G and H are swapped
so that the columns of H are a binary count
• The column order is now 7, 6, 1, 5, 2, 3, 4, i.e., col. 1 in the non-systematic H is col. 7
in the systematic H.
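Because the columns of the non-systematic H form a binary count, the syndrome read as a binary number gives the position of a single flipped bit directly. A sketch of this decoder:

```python
# Single-error correction with the non-systematic (7,4) Hamming H:
# the columns of H count 1..7 in binary, so the syndrome, read as a
# binary number, is the (1-based) position of the flipped bit.
H = [[0, 0, 0, 1, 1, 1, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [1, 0, 1, 0, 1, 0, 1]]

def syndrome(r):
    return [sum(h[i] * r[i] for i in range(7)) % 2 for h in H]

def correct(r):
    s = syndrome(r)
    pos = s[0] * 4 + s[1] * 2 + s[2]   # syndrome as a binary number
    if pos:                            # 0 means no detectable error
        r = r[:]
        r[pos - 1] ^= 1                # flip the offending bit back
    return r

r = [0, 0, 0, 1, 1, 1, 1]   # a valid codeword of this code
r[4] ^= 1                   # channel flips bit position 5
print(correct(r))           # [0, 0, 0, 1, 1, 1, 1] restored
```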
Introduction
◊ A major concern in designing digital data transmission and storage systems is the control of errors, so that reliable reproduction of the data can be obtained.
◊ In 1948, Shannon demonstrated that, by proper encoding of the information, errors induced
by a noisy channel or storage medium can be reduced to any desired level without sacrificing
the rate of information transmission or storage, as long as the information rate is less than the
capacity of the channel.
◊ A great deal of effort has been expended on the problem of devising efficient encoding and
decoding methods for error control in a noisy environment
Types of Codes
◊ Block codes
◊ Convolutional codes
◊ Turbo codes
Block Codes
◊ Corresponding to the 2k different possible messages, there are 2k different possible code
words at the encoder output.
◊ n-k redundant bits can be added to each message to form a code word
◊ Since the n-symbol output code word depends only on the corresponding k-bit input
message, the encoder is memoryless, and can be implemented with a combinational logic
circuit.
Block Codes
◊ Much of the theory of linear block codes is highly mathematical in nature, and requires an extensive background in modern algebra.
◊ Galois was a young French math whiz who developed a theory of finite fields, now known as Galois fields, before being killed in a duel at the age of 21.
◊ For well over 100 years, mathematicians looked upon Galois fields as elegant mathematics
but of no practical value.
Convolutional Codes
◊ The encoder for a convolutional code also accepts k-bit blocks of the information sequence
u and produces an encoded sequence (code word) v of n-symbol blocks.
◊ Each encoded block depends not only on the corresponding k-bit message block at the same time unit, but also on m previous message blocks. Hence, the encoder has a memory order of m.
◊ The set of encoded sequences produced by a k-input, n-output encoder of memory order m is called an (n, k, m) convolutional code.
◊ Since the encoder contains memory, it must be implemented with a sequential logic circuit.
Transition probability diagram for the binary symmetric channel (BSC).
1.5 Types of Errors
◊ On channels with memory, the noise is not independent from transmission to transmission.
◊ Channels with memory are called burst-error channels.
Simplified model of a channel with memory.
1.6 Error Control Strategies
◊ Error control for a one-way system must be accomplished using forward error correction (FEC), that is, by employing error-correcting codes that automatically correct errors detected at the receiver.
◊ Error control for a two-way system can be accomplished using error detection and
retransmission, called automatic repeat request (ARQ).
◊ In an ARQ system, when errors are detected at the receiver, a request is sent for the transmitter to repeat the message, and this continues until the message is received correctly.
◊ The major advantage of ARQ over FEC is that error detection requires much simpler
decoding equipment than does error correction.
◊ When the channel error rate is high, retransmissions must be sent too frequently, and the
system throughput, the rate at which newly generated messages are correctly received, is
lowered by ARQ.
◊ In general, wire-line communications (more reliable) adopt the backward-error-correction (BEC, i.e., ARQ) scheme, while wireless communications (relatively unreliable) adopt the FEC scheme.
◊ Cyclic Redundancy Code (CRC code) – also known as the polynomial code.
◊ Polynomial codes are based upon treating bit strings as representations of polynomials with
coefficients of 0 and 1 only.
◊ When the polynomial code method is employed, the sender and receiver must agree upon a
generator polynomial, G(x), in advance.
◊ To compute the checksum for some frame with m bits, corresponding to the polynomial
M(x), the frame must be longer than the generator polynomial.
◊ The idea is to append a checksum to the end of the frame in such a way that the polynomial represented by the checksummed frame is divisible by G(x).
◊ When the receiver gets the checksummed frame, it tries dividing it by G(x). If there is a
remainder, there has been a transmission error.
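The checksum computation and the receiver's divisibility check can be sketched with mod-2 long division (the generator G(x) = x^3 + x + 1, i.e., divisor bits 1011, is an illustrative choice, not one mandated by the text):

```python
# Polynomial (CRC) checksum by mod-2 long division: append R zero bits
# (R = degree of G(x)), divide, and replace the zeros with the remainder.

def crc_remainder(bits, divisor):
    bits = bits + [0] * (len(divisor) - 1)     # multiply by x^R
    for i in range(len(bits) - len(divisor) + 1):
        if bits[i]:                            # leading bit 1: subtract
            for j, d in enumerate(divisor):    # (XOR) the divisor here
                bits[i + j] ^= d
    return bits[-(len(divisor) - 1):]          # remainder = checksum

frame = [1, 1, 0, 1, 0, 1]
rem = crc_remainder(frame, [1, 0, 1, 1])       # checksum for the frame
sent = frame + rem
# Receiver side: dividing the checksummed frame leaves a zero remainder;
# any remainder would indicate a transmission error.
print(crc_remainder(sent, [1, 0, 1, 1]))       # [0, 0, 0]
```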
Calculation of the polynomial code checksum.
Convolution Codes
Encoding,
Decoding using state Tree and trellis diagrams,
Decoding using Viterbi algorithm,
Comparison of error rates in coded and uncoded transmission.
Introduction:
• Convolutional codes are applied in applications that require good performance with
low implementation cost. They operate on code streams (not in blocks)
• Convolutional codes have memory: previous bits are used to encode or decode the following bits (block codes are memoryless)
• Convolutional codes achieve good performance by expanding their memory depth
• Convolutional codes are denoted by (n,k,L), where L is code (or encoder) Memory
depth (number of register stages)
• Constraint length C = n(L+1) is defined as the number of successive encoded output bits that a single message bit can influence; e.g., for n = 3 and L = 1, each message bit influences a span of C = n(L+1) = 3(1+1) = 6 successive output bits
g(1) = [1 0 1 1]
g(2) = [1 1 1 1]
Representing convolutional codes: Code tree:
x'_j = m_(j-2) + m_(j-1) + m_j
x''_j = m_(j-2) + m_j
(mod-2 sums)
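The two output equations above can be implemented directly (a sketch; the two flushing zeros at the end return the register to the all-zero state):

```python
# (2,1,2) convolutional encoder implementing
#   x'_j  = m_{j-2} + m_{j-1} + m_j   (mod 2)
#   x''_j = m_{j-2} + m_j             (mod 2)
# m1 and m2 hold the register contents m_{j-1} and m_{j-2}.

def conv_encode(message):
    m1 = m2 = 0                       # zero initial state
    out = []
    for m in message + [0, 0]:        # two flushing zeros empty the register
        out.append((m2 ^ m1 ^ m, m2 ^ m))
        m2, m1 = m1, m                # shift the register
    return out

print(conv_encode([1, 0, 1, 1]))
# [(1, 1), (1, 0), (0, 0), (0, 1), (0, 1), (1, 1)]
```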
State diagram
• Each new block of k input bits causes a transition into a new state
• Hence there are 2^k branches leaving each state
• Assuming encoder zero initial state, encoded word for any input of k bits can thus be
obtained. For instance, below for u=(1 1 1 0 1), encoded word v=(1 1, 1 0, 0 1, 0 1, 1
1, 1 0, 1 1, 1 1) is produced:
•
- encoder state diagram for (n,k,L)=(2,1,2) code
• Assume for simplicity a convolutional code with k = 1, so that up to 2^k = 2 branches can enter each state in the trellis diagram
• Assume optimal path passes S. Metric comparison is done by adding the metric of S
into S1 and S2. At the survivor path the accumulated metric is naturally smaller
(otherwise it could not be the optimum path)
• For this reason the non-surviving path can be discarded -> not all path alternatives need to be considered
• Note that in principle the whole transmitted sequence must be received before a decision can be made. However, in practice, storing states for an input length of 5L is quite adequate
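The survivor-path pruning described above can be sketched as a small hard-decision Viterbi decoder for the same (2,1,2) encoder (Hamming distance as the branch metric; it assumes the encoder was flushed with two zeros back to the all-zero state):

```python
# Hard-decision Viterbi decoding for the (2,1,2) encoder with
# x' = m2^m1^m, x'' = m2^m. State = (m_{j-1}, m_{j-2}); keeping only
# the best path into each state caps the path count at 4 per step.

def branch_output(state, m):
    m1, m2 = state
    return (m2 ^ m1 ^ m, m2 ^ m)

def next_state(state, m):
    m1, m2 = state
    return (m, m1)

def viterbi(received):
    # paths: state -> (accumulated Hamming metric, decoded bits so far)
    paths = {(0, 0): (0, [])}
    for r in received:
        new = {}
        for state, (metric, bits) in paths.items():
            for m in (0, 1):
                out = branch_output(state, m)
                d = (out[0] ^ r[0]) + (out[1] ^ r[1])   # branch metric
                ns = next_state(state, m)
                cand = (metric + d, bits + [m])
                if ns not in new or cand[0] < new[ns][0]:
                    new[ns] = cand                      # survivor only
        paths = new
    metric, bits = paths[(0, 0)]   # encoder was flushed to state (0,0)
    return bits[:-2]               # drop the two flushing zeros

rx = [(1, 1), (1, 0), (0, 0), (0, 1), (0, 1), (1, 1)]
print(viterbi(rx))   # [1, 0, 1, 1]
```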
The maximum likelihood path:
(Black circles denote the deleted branches, dashed lines: '1' was applied)
• In the previous example it was assumed that the register was finally filled with zeros
thus finding the minimum distance path
• In practice with long code words zeroing requires feeding of long sequence of zeros to
the end of the message bits: this wastes channel capacity & introduces delay
• To avoid this path memory truncation is applied:
– Trace all the surviving paths to the
depth where they merge
– Figure right shows a common point
at a memory depth J
– J is a random variable whose applicable
magnitude shown in the figure (5L)
has been experimentally tested for
negligible error rate increase
– Note that this also introduces the
delay of 5L!
• Hamming code H(7,4)
• Generator matrix G: the first 4-by-4 block is the identity matrix
• Transmission vector x, received vector r, and error vector e
• Path-metric example for the convolutional code with
g(1) = [1 0 1 1]
g(2) = [1 1 1 1]
received sequence: 11 00 01 11 01 11 10 01
correct branches: 1+1+2+2+2 = 8; 8 × (0.11) = 0.88
false branches: 1+1+0+0+0 = 2; 2 × (2.30) = 4.6
total path metric: 0.88 + 4.6 = 5.48
Turbo Codes:
• Background
– Turbo codes were proposed by Berrou and Glavieux at the 1993 International Conference on Communications.
– Performance within 0.5 dB of the channel capacity limit for BPSK was
demonstrated.
• Features of turbo codes
– Parallel concatenated coding
– Recursive convolutional encoders
– Pseudo-random interleaving
– Iterative decoding
Motivation: Performance of Turbo Codes
• Comparison:
– Rate 1/2 Codes.
– K=5 turbo code.
– K=14 convolutional code.
• Plot is from:
– L. Perez, “Turbo Codes”, chapter 8 of Trellis Coding by C. Schlegel. IEEE
Press, 1997
Pseudo-random Interleaving:
• In coded systems:
– Performance is dominated by low weight code words.
• A “good” code:
– will produce low weight outputs with very low probability.
• An RSC code:
– Produces low weight outputs with fairly low probability.
– However, some inputs still cause low weight outputs.
• Because of the interleaver:
– The probability that both encoders have inputs that cause low
weight outputs is very low.
– Therefore the parallel concatenation of both encoders will produce
a “good” code.
Iterative Decoding:
The Turbo-Principle:
Turbo codes get their name because the decoder uses feedback, like a turbo engine
Performance as a Function of Number of Iterations:
BER versus Eb/No in dB, plotted for 1, 2, 3, 6, 10 and 18 decoder iterations; the BER falls from about 10^0 to 10^-7 over the 0.5-2 dB range as the number of iterations increases.
UNIT 5 :
Spread Spectrum Modulation
Gains:
Basic Operation:
Voice coders
Regenerative repeater
Feedback communications
Advancements in digital communication
Signal space representation
Turbo codes
Voice coders
A vocoder (short for voice encoder) is an analysis/synthesis system, used to reproduce
human speech. In the encoder, the input is passed through a multiband filter, each band is
passed through an envelope follower, and the control signals from the envelope followers are
communicated to the decoder. The decoder applies these (amplitude) control signals to
corresponding filters in the (re)synthesizer.
It was originally developed as a speech coder for telecommunications applications in the
1930s, the idea being to code speech for transmission. Its primary use in this fashion is for
secure radio communication, where voice has to be encrypted and then transmitted. The
advantage of this method of "encryption" is that no 'signal' is sent, but rather envelopes of the
bandpass filters. The receiving unit needs to be set up in the same channel configuration to
resynthesize a version of the original signal spectrum. The vocoder as
both hardware and software has also been used extensively as an electronic musical
instrument.
Whereas the vocoder analyzes speech, transforms it into electronically transmitted
information, and recreates it, The Voder (from Voice Operating Demonstrator) generates
synthesized speech by means of a console with fifteen touch-sensitive keys and a pedal,
basically consisting of the "second half" of the vocoder, but with manual filter controls,
needing a highly trained operator.
The human voice consists of sounds generated by the opening and closing of the glottis by
the vocal cords, which produces a periodic waveform with many harmonics. This basic sound
is then filtered by the nose and throat (a complicated resonant piping system) to produce
differences in harmonic content (formants) in a controlled way, creating the wide variety of
sounds used in speech. There is another set of sounds, known as
the unvoiced and plosive sounds, which are created or modified by the mouth in different
fashions.
The vocoder examines speech by measuring how its spectral characteristics change over time.
This results in a series of numbers representing these modified frequencies at any particular
time as the user speaks. In simple terms, the signal is split into a number of frequency bands
(the larger this number, the more accurate the analysis) and the level of signal present at each
frequency band gives the instantaneous representation of the spectral energy content. Thus,
the vocoder dramatically reduces the amount of information needed to store speech, from a
complete recording to a series of numbers. To recreate speech, the vocoder simply reverses
the process, processing a broadband noise source by passing it through a stage that filters the
frequency content based on the originally recorded series of numbers. Information about the
instantaneous frequency (as distinct from spectral characteristic) of the original voice signal
is discarded; it wasn't important to preserve this for the purposes of the vocoder's original use
as an encryption aid, and it is this "dehumanizing" quality of the vocoding process that has
made it useful in creating special voice effects in popular music and audio entertainment.
Since the vocoder process sends only the parameters of the vocal model over the
communication link, instead of a point by point recreation of the waveform, it allows a
significant reduction in the bandwidth required to transmit speech.
LPC-10, FIPS Pub 137, 2400 bit/s, which uses linear predictive coding
Code-excited linear prediction (CELP), 2400 and 4800 bit/s, Federal Standard 1016,
used in STU-III
Continuously variable slope delta modulation (CVSD), 16 kbit/s, used in wide band
encryptors such as the KY-57.
Mixed-excitation linear prediction (MELP), MIL STD 3005, 2400 bit/s, used in the
Future Narrowband Digital Terminal FNBDT, NSA's 21st century secure telephone.
Adaptive Differential Pulse Code Modulation (ADPCM), former ITU-T G.721, 32
kbit/s used in STE secure telephone
(ADPCM is not a proper vocoder but rather a waveform codec. ITU has gathered G.721
along with some other ADPCM codecs into G.726.)
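The analysis/synthesis idea described above (transmit per-band envelope levels, then re-impose them on an excitation source at the decoder) can be sketched as a toy in pure Python; here a single full-band "channel" and frame-RMS envelopes stand in for a real filter bank:

```python
import math, random

# Toy channel-vocoder sketch (illustrative only): measure a per-frame
# envelope for the input, then re-apply those envelopes to a noise
# excitation at the synthesizer.
FS, FRAME = 8000, 160                 # 8 kHz sampling, 20 ms frames

def envelopes(signal, frame=FRAME):
    """Per-frame RMS level: the control signal a real vocoder would send
    for each filter band (the whole signal stands in for one band)."""
    out = []
    for i in range(0, len(signal) - frame + 1, frame):
        chunk = signal[i:i + frame]
        out.append(math.sqrt(sum(x * x for x in chunk) / frame))
    return out

def resynthesize(envs, frame=FRAME):
    """Excite with white noise and impose the transmitted envelopes."""
    rng = random.Random(0)
    out = []
    for e in envs:
        out.extend(e * rng.uniform(-1, 1) for _ in range(frame))
    return out

# A 200 Hz tone as a stand-in for a voiced sound.
voiced = [math.sin(2 * math.pi * 200 * t / FS) for t in range(4 * FRAME)]
envs = envelopes(voiced)
print([round(e, 2) for e in envs])    # [0.71, 0.71, 0.71, 0.71]
speech_like = resynthesize(envs)      # noise shaped by the envelopes
```

Only the few envelope numbers per frame need to be transmitted, which is the bandwidth reduction the text describes.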
Vocoders are also currently used in developing psychophysics, linguistics, computational
neuroscience and cochlear implant research.
Modern vocoders that are used in communication equipment and in voice storage devices
today are based on the following algorithms:
Since the late 1970s, most non-musical vocoders have been implemented using linear
prediction, whereby the target signal's spectral envelope (formant) is estimated by an all-
pole IIR filter. In linear prediction coding, the all-pole filter replaces the bandpass filter bank
of its predecessor and is used at the encoder to whiten the signal (i.e., flatten the spectrum)
and again at the decoder to re-apply the spectral shape of the target speech signal.
One advantage of this type of filtering is that the location of the linear predictor's spectral
peaks is entirely determined by the target signal, and can be as precise as allowed by the time
period to be filtered. This is in contrast with vocoders realized using fixed-width filter banks,
where spectral peaks can generally only be determined to be within the scope of a given
frequency band. LP filtering also has disadvantages in that signals with a large number of
constituent frequencies may exceed the number of frequencies that can be represented by the
linear prediction filter. This restriction is the primary reason that LP coding is almost always
used in tandem with other methods in high-compression voice coders.
RALCWI vocoder
Robust Advanced Low Complexity Waveform Interpolation (RALCWI) technology uses
proprietary signal decomposition and parameter encoding methods to provide high voice
quality at high compression ratios. The voice quality of RALCWI-class vocoders, as
estimated by independent listeners, is similar to that provided by standard vocoders running
at bit rates above 4000 bit/s. The Mean Opinion Score (MOS) of voice quality for this vocoder is about 3.5-3.6. This value was determined by a paired-comparison method, through listening tests comparing the developed vocoder against standard vocoders.
The RALCWI vocoder operates on a “frame-by-frame” basis. The 20ms source voice frame
consists of 160 samples of linear 16-bit PCM sampled at 8 kHz. The Voice Encoder performs
voice analysis at the high time resolution (8 times per frame) and forms a set of estimated
parameters for each voice segment. All of the estimated parameters are quantized to produce
41-, 48- or 55-bit frames, using vector quantization (VQ) of different types. All of the vector
quantizers were trained on a mixed multi-language voice base, which contains voice samples
in both Eastern and Western languages.
Waveform-Interpolative (WI) vocoder was developed in AT&T Bell Laboratories around
1995 by W.B. Kleijn, and subsequently a low- complexity version was developed by AT&T
for the DoD secure vocoder competition. Notable enhancements to the WI coder were made
at the University of California, Santa Barbara. AT&T holds the core patents related to WI,
and other institutes hold additional patents. Using these patents as a part of WI coder
implementation requires licensing from all IPR holders.
The product is the result of a co-operation between CML Microcircuits and SPIRIT DSP. The
co-operation combines CML’s 39-year history of developing mixed-signal semiconductors
for professional and leisure communication applications, with SPIRIT’s experience
in embedded voice products.
Regenerative repeater
Introduction of on-board regeneration alleviates the conflict between enhanced traffic
capacity and moderate system cost by reducing the requirements of the radio front-ends, by
simplifying the ground station digital equipment and the satellite communication payload in
TDMA and Satellite-Switched-TDMA systems. Regenerative satellite repeaters can be
introduced in an existing system with only minor changes at the ground stations. In cases
where one repeater can be dedicated to each station a more favorable arrangement of the
information data than in SS-TDMA can be conceived, which eliminates burst transmission
while retaining full interconnectivity among spot-beam areas.
Practice also tells us that digital communication systems designed for HF are necessarily designed with one of two objectives in mind: slow and robust, to allow communications with weak signals embedded in noise and adjacent-channel interference; or fast and somewhat subject to failure under adverse conditions, but able to make the best use of the HF medium under good prevailing conditions.
Given that the average amateur radio transceiver has limited power output, typically 20 - 100
Watts continuous duty, poor or restricted antenna systems, fierce competition for a free spot
on the digital portion of the bands, adjacent channel QRM, QRN, and the marginal condition
of the HF bands, it is evident that for amateur radio, there is a greater need for a weak signal,
spectrally-efficient, robust digital communications mode, rather than another high speed,
wide band communications method.
It is difficult to see how true coherent demodulation of PSK could ever be achieved in any non-cabled system, since random phase changes would introduce uncontrolled phase ambiguities. Presently, we have the technology to match and track carrier frequencies exactly; however, tracking carrier phase is another matter. As a practical matter, then, we must revert to differentially coherent phase demodulation (DPSK).
Another practical matter concerns the symbol, or baud, rate; conventional RTTY runs at 45.45 baud (a symbol time of about 22 ms). This relatively long symbol time has been favored as being resistant to HF multipath effects, and the mode's robustness is attributed to it.
Symbol rate also plays an important part in determining spectral occupancy. In the case of a 45.45 baud RTTY waveform, the expected spectral occupancy is some 91 Hz for the major lobe, i.e., +/- 45.45 Hz on each side of each of the two data tones. For a two-tone continuous-phase frequency-shift-keying (CPFSK) signaling system with tones spaced at 170 Hz, the signal would occupy approximately 261 Hz.
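The occupancy figures quoted above follow from rule-of-thumb arithmetic (major lobe of roughly one baud rate either side of each tone; total occupancy of the tone shift plus one lobe either side), a coarse estimate rather than an exact spectrum:

```python
# Rule-of-thumb spectral occupancy for 45.45 baud, 170 Hz shift FSK.
baud = 45.45      # RTTY symbol rate, Hz
shift = 170.0     # FSK tone spacing, Hz

major_lobe = 2 * baud          # +/- baud around a single data tone
occupancy = shift + 2 * baud   # two tones 170 Hz apart, one lobe each side

print(round(major_lobe))  # 91  (Hz per tone)
print(round(occupancy))   # 261 (Hz total)
```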
Characteristics:
• Complex valued signal
Turbo codes
In information theory, turbo codes (originally in French Turbocodes) are a class of high-
performance forward error correction (FEC) codes developed in 1993, which were the first
practical codes to closely approach the channel capacity, a theoretical maximum for the code
rate at which reliable communication is still possible given a specific noise level. Turbo
codes are finding use in 3G mobile communications and (deep
space) satellite communications as well as other applications where designers seek to achieve
reliable information transfer over bandwidth- or latency-constrained communication links in
the presence of data-corrupting noise. Turbo codes are nowadays competing with LDPC
codes, which provide similar performance.
Prior to turbo codes, the best constructions were serial concatenated codes based on an
outer Reed-Solomon error correction code combined with an inner Viterbi-decoded short
constraint length convolutional code, also known as RSV codes.
In 1993, turbo codes were introduced by Berrou, Glavieux, and Thitimajshima (from
Télécom Bretagne, former ENST Bretagne, France) in their paper: "Near Shannon Limit
Error-correcting Coding and Decoding: Turbo-codes" published in the Proceedings of IEEE
International Communications Conference.[1] In a later paper, Berrou gave credit to the
"intuition" of "G. Battail, J. Hagenauer and P. Hoeher, who, in the late 80s, highlighted the
interest of probabilistic processing.". He adds "R. Gallager and M. Tanner had already
imagined coding and decoding techniques whose general principles are closely related,"
although the necessary calculations were impractical at that time.[2]
The first class of turbo code was the parallel concatenated convolutional code (PCCC). Since
the introduction of the original parallel turbo codes in 1993, many other classes of turbo code
have been discovered, including serial versions and repeat-accumulate codes. Iterative Turbo
decoding methods have also been applied to more conventional FEC systems, including
Reed-Solomon corrected convolutional codes
There are many different instantiations of turbo codes, using different component encoders,
input/output ratios, interleavers, and puncturing patterns. This example encoder
implementation describes a 'classic' turbo encoder, and demonstrates the general design of
parallel turbo codes.
This encoder implementation sends three sub-blocks of bits. The first sub-block is the m-bit
block of payload data. The second sub-block is n/2 parity bits for the payload data, computed
using a recursive systematic convolutional code (RSC code). The third sub-block is n/2 parity
bits for a known permutation of the payload data, again computed using an RSC
convolutional code. Thus, two redundant but different sub-blocks of parity bits are sent with
the payload. The complete block has m+n bits of data with a code rate of m/(m+n).
The permutation of the payload data is carried out by a device called an interleaver.
Hardware-wise, this turbo-code encoder consists of two identical RSC coders, С1 and C2, as
depicted in the figure, which are connected to each other using a concatenation scheme,
called parallel concatenation:
In the figure, M is a memory register. The delay line and interleaver force input bits dk to
appear in different sequences. At the first iteration, the input sequence dk appears at both outputs of the encoder, xk and y1k or y2k, due to the encoder's systematic nature. If the encoders C1 and C2 are used respectively in n1 and n2 iterations, their rates are respectively equal to
.
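The parallel concatenation just described can be sketched as follows (illustrative only: the tiny RSC encoder, with feedback polynomial 1 + D + D^2 and forward polynomial 1 + D^2, and the random interleaver are assumptions for the sketch, not the specific encoder of the figure):

```python
import random

# Sketch of a parallel turbo encoder: systematic bits, parity from RSC
# encoder C1 on the payload, and parity from C2 on the interleaved payload.

def rsc_parity(bits):
    """Parity stream of a 2-cell recursive systematic convolutional
    encoder: feedback 1 + D + D^2, forward 1 + D^2."""
    s1 = s2 = 0
    out = []
    for d in bits:
        a = d ^ s1 ^ s2        # feedback (recursive) term
        out.append(a ^ s2)     # forward polynomial 1 + D^2
        s2, s1 = s1, a         # shift the two memory cells
    return out

def turbo_encode(payload, perm):
    assert sorted(perm) == list(range(len(payload)))
    p1 = rsc_parity(payload)                       # parity sub-block 1
    p2 = rsc_parity([payload[i] for i in perm])    # parity on interleaved copy
    return payload, p1, p2    # rate m/(m+n) = 1/3 before any puncturing

rng = random.Random(1)
m = [rng.randint(0, 1) for _ in range(8)]
perm = list(range(8))
rng.shuffle(perm)             # the pseudo-random interleaver
x, y1, y2 = turbo_encode(m, perm)
print(len(x) + len(y1) + len(y2))   # 24 bits out for 8 in -> rate 1/3
```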
The decoder
The decoder is built in a similar way to the above encoder: two elementary decoders are interconnected to each other, but in a serial way, not in parallel. The first decoder operates at a lower speed and yields a soft decision, which introduces a delay; the same delay is caused by the delay line in the encoder. An interleaver installed between the two decoders is used here to scatter the error bursts coming from the first decoder's output. The DI block is a demultiplexing and insertion module; it works as a switch, redirecting input bits to one decoder at one moment and to the other at another. In the OFF state, it feeds both inputs with padding bits (zeros).
Consider a memoryless AWGN channel, and assume that at k-th iteration, the
decoder receives a pair of random variables:
2. (a) Explain with a neat block diagram the operation of a continuously variable slope delta
modulator (CVSD).
(b) Compare Delta modulation with Pulse code modulation technique. [8+8]
3. (a) Assume that 4800bits/sec. random data are sent over a band pass channel by BFSK
signaling scheme. Find the transmission bandwidth BT such that the spectral envelope is
down at least 35dB outside this band.
(b) Write the comparisons among ASK, PSK, FSK and DPSK. [8+8]
4. (a) What is meant by ISI? Explain how it differs from cross talk in the PAM.
(b) What is the ideal solution to obtain zero ISI, and what is the disadvantage of this solution?
[6+10]
5. A code is composed of dots and dashes. Assume that the dash is 3 times as long as the dot and has one-third the probability of occurrence.
Calculate
(a) the information in a dot and that in a dash.
(b) the average information in the dot-dash code.
(c) Assume that a dot lasts for 10 ms and that this same time interval is allowed between
symbols. Calculate average rate of Information. [16]
7. Explain about block codes in which each block of k message bits encoded into block of
n>k bits with an example. [16]
8. Sketch the Tree diagram of convolutional encoder shown in figure 8 with Rate= 1/2,
constraint length L = 2. [16]
9. (a) State and prove the sampling theorem for band pass signals.
(b) A signal m(t) = cos(200πt) + 2cos(320πt) is ideally sampled at fs = 300 Hz. If the sampled signal is passed through a low-pass filter with a cutoff frequency of 250 Hz, what frequency components will appear in the output? [6+10]
10. (a) Derive an expression for channel noise and quantization noise in DM system.
(b) Compare DM and PCM systems. [10+6]
13. Figure 5 illustrates a binary erasure channel with the transition probabilities P(0|0) = P(1|1) = 1 - p and P(e|0) = P(e|1) = p. The probabilities for the input symbols are P(X=0) = a and P(X=1) = 1 - a.
Determine the average mutual information I(X; Y) in bits. [16]
14. Show that H(X, Y) = H(X) + H(Y|X) = H(Y) + H(X|Y). [16]
15. Explain about block codes in which each block of k message bits encoded into block of
n>k bits with an example. [16]
17. The probability density function of the sampled values of an analog signal is shown in
figure 1.
(a) Design a 4 - level uniform quantizer.
(b) Calculate the signal power to quantization noise power ratio.
(c) Design a 4 - level minimum mean squared error non - uniform quantizer.
[6+4+6]
18. A DM system is tested with a 10kHz sinusoidal signal, 1V peak to peak at the input. The
signal is sampled at 10times the Nyquist rate.
(a) What is the step size required to prevent slope overload and to minimize the granular noise?
(b) What is the power spectral density of the granular noise?
(c) If the receiver input is band limited to 200kHz, what is the average (S/NQ).
[6+5+5]
19. (a) Write down the modulation waveform for transmitting binary information over base
band channels, for the following modulation schemes: ASK, PSK, FSK and DPSK.
(b) What are the advantages and disadvantages of digital modulation schemes?
(c) Discuss base band transmission of M-ary data. [4+6+6]
20. (a) Draw the block diagram of band pass binary data transmission system and explain
each block.
(b) A band pass data transmitter used a PSK signaling scheme with
s1(t) =?A coswct; 0 t Tb
s2(t) = +A coswct; 0 t Tb
Where Tb = 0.2msec; wc = 10p /Tb.
The carrier amplitude at the receiver input is 1mV and the power spectral density of the
additive white Gaussian noise at the input is 10^-11 W/Hz. Assume that an ideal correlation
receiver is used. Calculate the average bit error rate of the receiver. [8+8]
21. A Discrete Memoryless Source (DMS) has an alphabet of eight letters, xi, i = 1,2,…,8,
each occurring with probabilities 0.15, 0.30, 0.25, 0.15, 0.10, 0.08, 0.05, 0.05.
(a) Determine the entropy of the source and compare it with N.
(b) Determine the average number N of binary digits per source codeword. [16]
23. Explain about block codes in which each block of k message bits is encoded into a block of
n > k bits, with an example. [16]
24. Sketch the Tree diagram of convolutional encoder shown in figure 8 with Rate= 1/2,
constraint length L = 2. [16]
25. (a) State sampling theorem for low pass signals and band pass signals.
(b) What is aliasing effect? How it can be eliminated? Explain with neat diagram.
[4+4+8]
26. (a) Derive an expression for channel noise and quantization noise in DM system.
(b) Compare DM and PCM systems. [10+6]
27. Explain the design and analysis of M-ary signaling schemes. List the waveforms in
quaternary schemes. [16]
28. (a) Derive an expression for error probability of coherent PSK scheme.
(b) In a binary PSK scheme using a correlator receiver, the local carrier waveform is
A cos(wct + θ) instead of A cos(wct) due to poor carrier synchronization. Derive an expression
for the error probability and compute the increase in error probability when θ = 15° and
[A^2·Tb/η] = 10. [8+8]
29. Consider transmitting messages Q1, Q2, Q3, and Q4 by the symbols 0, 10, 110, 111.
(a) Is the code uniquely decipherable? That is, for every possible sequence, is there only one
way of interpreting the message?
(b) Calculate the average number of code bits per message. How does it compare with H =
1.8 bits per message? [16]
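Problem 29 can be checked mechanically. The message probabilities below are hypothetical (the question itself only states H = 1.8 bits/message); they are chosen so that the entropy comes out close to that figure.

```python
import math

# Hypothetical message probabilities (not given in the question);
# chosen so the entropy is close to the stated H = 1.8 bits/message.
probs = [0.4, 0.3, 0.2, 0.1]
codes = ["0", "10", "110", "111"]

# (a) Unique decipherability: a prefix-free code is uniquely decodable.
prefix_free = all(
    not b.startswith(a)
    for i, a in enumerate(codes) for j, b in enumerate(codes) if i != j
)

# (b) Average codeword length versus source entropy.
avg_len = sum(p * len(c) for p, c in zip(probs, codes))
entropy = -sum(p * math.log2(p) for p in probs)

print(prefix_free)        # True: no codeword is a prefix of another
print(avg_len)            # 1.9 bits/message on average
print(round(entropy, 3))  # entropy of the assumed distribution
```

Since no codeword is a prefix of another, every bit sequence parses one way only, and the average length (1.9 bits here) stays above the entropy, as source coding theory requires.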
30. Show that, when X and Y are statistically independent, H(X, Y) = H(X) + H(Y) and H(X|Y) = H(X). [16]
31. Explain about block codes in which each block of k message bits is encoded into a block of
n > k bits, with an example. [16]
Unit 1:
Unit 2:
Unit 3:
Unit 4:
1. Encoding,
2. decoding using state Tree and trellis diagrams
3. Decoding using Viterbi algorithm
Unit 5:
(a) Source coding improves the error performance of the communication system.
(b) Channel coding will reduce the average source code word length.
(c) Two different source codeword sets can be obtained using Huffman coding.
(d) Two different source codeword sets can be obtained using Shanon-Fano coding
2. A memoryless source emits 2000 binary symbols/sec and each symbol has a probability of
0.25 to be equal to 1 and 0.75 to be equal to 0. The minimum number of bits/sec required for
error free transmission of this source is
(a) 1500
(b) 1734
(c) 1885
(d) 1622
3. A system has a bandwidth of 3 KHz and an S/N ratio of 29 dB at the input of the receiver. If
the bandwidth of
(b) ARQ scheme of error control is applied after the receiver makes a decision about the received bit
(c) ARQ scheme of error control is applied when the receiver is unable to make a decision
about the received bit.
(b) In an (n,k) systematic cyclic code, the sum of two code words is another codeword of the
code.
(c) In a convolutional encoder, the constraint length of the encoder is equal to the tail of the
message sequence + 1.
(d) In an (n,k) block code, each code word is the cyclic shift of another code word of the code.
(a) 6
(b) 4
(c) 3
(d) 5
Answers
1.C
2.D
3.B
4.A
5.C
6.C
7.C
8.B
9.A
10.D
Unit 2
CHOOSE THE CORRECT ANSWER
(b) ARQ scheme of error control is applied after the receiver makes a decision about the
received bit
(d) ARQ scheme of error control is applied when the receiver is unable to make a decision
about the received bit.
(c) parity bits of the code word are the linear combination of the message bits
(d) the received power varies linearly with that of the transmitted power
7. Which of the following provides the facility to recognize the error at the receiver?
8. A system has a bandwidth of 3 KHz and an S/N ratio of 29dB at the input of the receiver.
If the bandwidth of
10. In a communication system, the average amounts of uncertainty associated with the
source, the sink, and the source and sink jointly are 1.0613, 1.5 and 2.432 bits/message
respectively. Then the information transferred by the channel connecting the source and sink in bits is
(a) 1.945
(b) 4.9933
(c) 2.8707
(d) 0.1293
11. A BSC has a transition probability of P. The cascade of two such channels is
Answers
1.A
2.D
3.D
4.C
5.D
6. C
7. A
8.C
9.D
10.D
11.A
Unit 2
2. A Field is
3. Under error free reception, the syndrome vector computed for the received cyclic code
word consists of
(a) 5
(b) 4
(c) 3
(d) 6
6. There are four binary words given as 0000, 0001, 0011, 0111. Which of these cannot be a
member of the parity check matrix of a (15,11) linear block code?
(a) 0011
(b) 0000,0001
(c) 0000
(d) 0111
g(x) = 1 + x^2 + x^3 is basically a
(b) H(X,Y)=2bits/message
(c) H(X/Y)=1bit/message
(d) H(X,Y)=0bits/message
9. A system has a bandwidth of 4 KHz and an S/N ratio of 28 at the input to the receiver. If
the bandwidth of the channel is doubled, then
10. The Parity Check Matrix of a (6,3) Systematic Linear Block code is
101100
011010
110001
If the Syndrome vector computed for the received code word is [010], then for error
correction, which of the bits of the received code word is to be complemented?
(a) 3
(b) 4
(c) 5
(d) 2
Answers
1.D
2.D
3.C
4.B
5.A
6.C
7.A
8.B
9.A
10.C
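The decoding rule behind Q10 of the quiz above can be sketched directly: for a single-bit error, the syndrome of a linear block code equals the column of H at the error position, so error correction is a column lookup. A minimal sketch with the (6,3) parity check matrix from the question:

```python
# Parity check matrix of the (6,3) systematic linear block code from Q10.
H = [
    [1, 0, 1, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 1, 0, 0, 0, 1],
]

def error_position(syndrome):
    """For a single-bit error, the syndrome equals the column of H at
    the error position; return that 1-based bit index (0 if no match)."""
    for col in range(6):
        if all(H[row][col] == syndrome[row] for row in range(3)):
            return col + 1
    return 0

print(error_position([0, 1, 0]))  # -> 5: bit 5 must be complemented
```

The syndrome [0 1 0] matches the fifth column of H, which is why the keyed answer is (c).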
Unit 3
CHOOSE THE CORRECT ANSWER
1. The minimum number of bits per message required to encode the output of a source
transmitting four different messages with probabilities 0.5, 0.25, 0.125 and 0.125 is
(a) 1.5
(b) 1.75
(c) 2
(d) 1
(a) p/q
(b) pq
(c) p
(d) q
(d) Remainder of g(x)/V(x)
(a) 2
(b) 8
(c) 4
(d) 1
6. In a (6,3) systematic linear block code, the number of 6-bit code words that are not
useful is
(a) 45
(b) 64
(c) 8
(d) 56
7. The output of a source is band limited to 6 KHz. It is sampled at a rate of 2 KHz above the
Nyquist rate. If the
(a) 12Kbps
(b) 32Kbps
(c) 28Kbps
(d) 24Kbps
(a) 2bits
(b) 0bits
(c) 1bit
(d) infinity
9. A communication channel is fed with an input signal x(t) and the noise in the channel is
negligible. The power received at the receiver input is
10. White noise of PSD η/2 is applied to an ideal LPF with one sided bandwidth of B Hz. The
filter provides a gain
(a) 8
(b) 2
(c) 6
(d) 4
Answers
1.B
2.C
3.C
4.A
5.D
6.C
7.B
8.B
9.B
10.A
Unit 4
(a) The syndrome of a received block coded word depends on the received code word
(b) The syndrome for a received block coded word under error free reception consists of all 1's.
(c) The syndrome of a received block coded word depends on the transmitted code word.
(d) The syndrome of a received block coded word depends on the error pattern
2. A Field is
3. Variable length source coding provides better coding efficiency, if all the messages of the
source are
(a) Equiprobable
(a) FEC and ARQ are not used for error correction
(b) ARQ is used for error control after the receiver makes a decision about the received bit
(c) FEC is used for error control when the receiver is unable to make a decision about the received bit
(d) FEC is used for error control after the receiver makes a decision about the received bit
6. A discrete source X is transmitting m messages and is connected to the receiver Y through a
symmetric channel. The capacity of the channel is given as
7. The time domain behavior of a convolutional encoder of code rate 1/3 is defined in terms
of a set of
(a) 3 ramp responses
(b) 3 step responses
(c) 3 sinusoidal responses
(d) 3 impulse responses
(a) H(X,Y)=2bits/message
(b) H(X/Y)=1bit/message
(c) H(X,Y)=0bits/message
(d) H(X/Y)=2bits/message
(a) 4
(b) 6
(c) 3
(d) 5
Answers
1.D
2.D
3.D
4.D
5.A
6.D
7.D
8.A
9.B
10.B
Unit 5
(a) -a
(b) 0
(c) a
(d) 1
3. Under error free reception, the syndrome vector computed for the received cyclic
codeword consists of
(a) H(X)+H(Y)-H(X,Y)bits/symbol
6. The encoder of a (7,4) systematic cyclic encoder with generating polynomial g(x) = 1 + x^2 + x^3 is
basically a
(a) 11 stage shift register
(b) 4 stage shift register
(c) 3 stage shift register
(d) 22 stage shift register
8. A system has a bandwidth of 4 KHz and an S/N ratio of 28 at the input to the receiver. If
the bandwidth of the channel is doubled, then
9. A source is transmitting four messages with equal probability. Then, for optimum Source
coding efficiency.
10. The maximum average amount of information content measured in bits/sec associated
with the output of a discrete Information source transmitting 8 messages and 2000
messages/sec is
(a) 16Kbps
(b) 4Kbps
(c) 3Kbps
(d) 6Kbps
Answers
1.A
2.A
3.B
4.B
5.C
6.C
7.D
8.A
9.B
10. D
Unit 5
1. Two binary random variables X and Y are distributed according to the joint distribution
given as P(X=Y=0) = P(X=0, Y=1) = P(X=Y=1) = 1/3. Then,
(a) H(X)+H(Y)=1.
(b) H(Y)=2.H(X)
(c) H(X)=H(Y)
(d) H(X)=2.H(Y)
4. Source 1 is transmitting two messages with probabilities 0.2 and 0.8 and Source 2 is
transmitting two messages with probabilities 0.5 and 0.5. Then
(a) Maximum uncertainty is associated with Source 1
(b) Both the sources 1 and 2 are having maximum amount of uncertainty associated
(d) Maximum uncertainty is associated with Source 2
(a) 5
(b) 4
(c) 2
(d) 3
6. If X is the transmitter and Y is the receiver and if the channel is the noise free, then the
mutual information I(X,Y) is equal to
(a) the received power varies linearly with that of the transmitted power
(b) parity bits of the code word are the linear combination of the message bits
9. A source is transmitting four messages with equal probability. Then for optimum Source
coding efficiency,
10. If a memoryless source of information rate R is connected to a channel with a channel
capacity C, then on which of the following statements is the channel coding for the output of
the source based?
(a) Minimum number of bits required to encode the output of the source is its entropy
Answers
1.C
2.C
3.C
4.D
5.B
6.C
7.B
8.D
9.B
10.B
A
Code No: 56026 :2: Set No. 3
II Fill in the blanks
11. The advantage of DPCM over Delta Modulation is _________________________
12. The phases in a QPSK system can be expressed as ______________________
13. The Synchronization is defined as _______________________
14. The sampling rate in Delta Modulation is _______________than PCM.
15. The bit error Probability of BPSK system is __________________that of QPSK.
16. Non-coherent detection of FSK signal results in ____________________
17. _____________ is used as a Predictor in a DPCM transmitter.
18. The Nyquist's rate of sampling of an analog signal S(t) for alias free reconstruction is
5000samples/sec. For a signal x(t) = [S(t)]2 ,the corresponding sampling rate in
samples/sec is __________________
19. A Matched filter is used to __________________________
20. A signal extending over -4v to +4v is quantized into 8 levels. The maximum possible
quantization error obtainable is _____________V.
-oOo-
A
Code No: 56026 :2: Set No. 2
II Fill in the blanks
11. The source coding efficiency can be increased by using _______________________
12. The advantage of Spread Spectrum Modulation schemes over other modulations is
_________________
13. Entropy coding is a _____________________
14. A convolutional encoder of code rate 1/2 is a 3 stage shift register with a message word
length of 6.The code word length obtained from the encoder ( in bits) is
_____________
15. Relative to Hard decision decoding, soft decision decoding results in _____________
16. If T is the code vector and H is the Parity check Matrix of a Linear Block code, then the
code is defined by the set of all code vectors for which H·T^T = ______________
17. The advantage of CDMA over Frequency hopping is ____________
18. The Parity check matrix of a linear block code is
101100
011010
110001
Its Hamming distance is ___________________
19. The significance of PN sequence in CDMA is ________________
20. The cascade of two Binary Symmetric Channels is a __________________________
-oOo-
A
Code No: 56026 :2: Set No. 3
II Fill in the blanks
11. Entropy coding is a _____________________
12. A convolutional encoder of code rate 1/2 is a 3 stage shift register with a message word
length of 6.The code word length obtained from the encoder ( in bits) is
_____________
13. Relative to Hard decision decoding, soft decision decoding results in _____________
14. If T is the code vector and H is the Parity check Matrix of a Linear Block code, then the
code is defined by the set of all code vectors for which H·T^T = ______________
15. The advantage of CDMA over Frequency hopping is ____________
16. The Parity check matrix of a linear block code is
101100
011010
110001
Its Hamming distance is ___________________
17. The significance of PN sequence in CDMA is ________________
18. The cascade of two Binary Symmetric Channels is a __________________________
19. The source coding efficiency can be increased by using _______________________
20. The advantage of Spread Spectrum Modulation schemes over other modulations is
_________________
Code No: 56026 Set No. 4
JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY HYDERABAD
III B.Tech. II Sem., II Mid-Term Examinations, April – 2012
DIGITAL COMMUNICATIONS
Objective Exam
Name: ______________________________ Hall Ticket No.
Answer All Questions. All Questions Carry Equal Marks. Time: 20 Min. Marks: 10.
I Choose the correct alternative:
1. A source emits messages A and B with probability 0.8 and 0.2 respectively. The
redundancy provided by the optimum source-coding scheme for the above Source is [
]
A) 27% B) 72% C) 55% D) 45%
2. A source X and the receiver Y are connected by a noise free channel. Its capacity is [ ]
A) Max H(Y/X) B) Max H(X) C) Max H(X/Y) D) Max H(X,Y)
3. Exchange between bandwidth and signal-to-noise ratio can be justified based on [ ]
A) Hartley–Shannon's Law B) Shannon's source coding Theorem
C) Shannon's limit D) Shannon's channel coding Theorem
4. Information rate of a source is [ ]
A) maximum when the source is continuous B) the entropy of the source measured in
bits/message
C) a measure of the uncertainty of the communication system
D) the entropy of the source measured in bits/sec.
5. The Hamming weight of the (6,3) linear block coded word 101011 is [ ]
A) 5 B) 4 C) 2 D) 3
6. Which of the following can be the generating polynomial for a (7,4) systematic Cyclic
code? [ ]
A) x^3 + x + 1 B) x^5 + x^2 + 1 C) x^4 + x^3 + 1 D) x^7 + x^4 + x^3 + 1
7. In a Linear Block code [ ]
A) the received power varies linearly with that of the transmitted power
B) parity bits of the code word are the linear combination of the message bits
C) the communication channel is a linear system
D) the encoder satisfies super position principle
8. The fundamental limit on the average number of bits/source symbol is [ ]
A) Mutual Information B) Channel capacity
C) Information content of the message D) Entropy of the source
9. A system has a bandwidth of 3 KHz and an S/N ratio of 29 dB at the input of the receiver.
If the bandwidth of the channel gets doubled, then [ ]
A) its capacity gets halved B) the corresponding S/N ratio gets doubled
C) the corresponding S/N ratio gets halved D) its capacity gets doubled
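Q9 rests on the Hartley–Shannon law C = B·log2(1 + S/N): doubling B also doubles the noise power N = ηB, so the input S/N is halved while the capacity grows but does not double. A quick numeric sketch with the question's values:

```python
import math

# Q9 sketch: B = 3 kHz, input S/N = 29 dB. Doubling the bandwidth
# doubles the noise power (N = eta*B), so the input S/N is halved.
B = 3000.0
snr = 10 ** (29 / 10)                  # 29 dB as a power ratio (~794)

c1 = B * math.log2(1 + snr)            # Hartley-Shannon capacity
c2 = (2 * B) * math.log2(1 + snr / 2)  # doubled bandwidth, halved S/N

print(round(c1), round(c2))  # capacity rises but does not double
```

This is why the S/N option, not the "capacity doubles" option, is the defensible choice.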
10. The Channel Matrix of a Noiseless channel [ ]
A) consists of a single nonzero number in each column
B) consists of a single nonzero number in each row
C) is a square Matrix
D) is an Identity Matrix
Code No: 56026 :2: Set No. 4
II Fill in the blanks
11. Relative to Hard decision decoding, soft decision decoding results in _____________
12. If T is the code vector and H is the Parity check Matrix of a Linear Block code, then the
code is defined by the set of all code vectors for which H·T^T = ______________
13. The advantage of CDMA over Frequency hopping is ____________
14. The Parity check matrix of a linear block code is
101100
011010
110001
Its Hamming distance is ___________________
15. The significance of PN sequence in CDMA is ________________
16. The cascade of two Binary Symmetric Channels is a __________________________
17. The source coding efficiency can be increased by using _______________________
18. The advantage of Spread Spectrum Modulation schemes over other modulations is
_________________
19. Entropy coding is a _____________________
20. A convolutional encoder of code rate 1/2 is a 3 stage shift register with a message word
length of 6.The code word length obtained from the encoder ( in bits) is
_____________
-oOo-
A
Code No: 07A5EC09 :2: Set No.2
II Fill in the blanks:
11) Data word length in DM is ---------------
12) Band width of PCM signals is -----------
13) A signal extending over -4V to +4V is quantized into 8 levels. The maximum possible
quantization error obtainable is-------------
14) Probability of error of PSK scheme is-----------------------
15) PSK and FSK have a constant--------------
16) Granular noise occurs when step size is--------------
17) Converting a discrete time continuous amplitude signal into a discrete amplitude discrete
time signal is called-----------------.
18) The minimum symbol rate of a PCM system transmitting an analog signal band limited to
2 KHz with 64 quantization levels is ------------------
19) In DM, granular noise occurs when the step size is -------------
20) The combination of compressor and expander is called---------------------
-oOo-
5) For the data word 1110 in a (7, 4) non-systematic cyclic code with the generator
polynomial 1 + x + x^3, the code polynomial is [ ]
a) 1 + x + x^3 + x^5 b) 1 + x^2 + x^3 + x^5 c) 1 + x^2 + x^3 + x^4 d) 1 + x^4 + x^5
6) A source X with entropy 2 bits/message is connected to the receiver Y through a noise free
channel. The conditional entropy of the source is H(X/Y) and the joint entropy of
the source and the receiver is H(X, Y). Then [ ]
a) H(X,Y)= 2 bits/message b) H(X/Y)= 2 bits/message
c) H(X, Y)= 0 bits/message d) H(X/Y)= 1 bit/message
7) Which of the following is a p(Y/X) matrix for a binary Erasure channel [ ]
a) [1−p  p; q  1−q] b) c) d) None
8) The channel matrix of a noiseless channel [ ]
a)consists of a single nonzero number in each column.
b) consists of a single nonzero number in each row.
c) is an Identity Matrix. d) is a square matrix.
9) Information content of a message [ ]
a) increases with its certainty of occurrence. b) is independent of the certainty of
occurrence.
c) increases with its uncertainty of occurrence. d) is the logarithm of its uncertainty of
occurrence.
1. (a) Explain the basic principles of sampling, and distinguish between ideal sampling and practical
sampling.
(b) A band pass signal has a centre frequency fo and extends from fo - 5 KHz to fo + 5 KHz. It is
sampled at a rate of fs = 25 KHz. As fo varies from 5 KHz to 50 KHz, find the ranges of fo for which
the sampling rate is adequate.
2. a) Describe the synchronization procedure for PAM, PWM and PPM signals. Also discuss
the spectra of PWM and PDM signals.
3. (a) Explain the method of generation and detection of PPM signals with neat sketches.
(c) Which analog pulse modulation can be termed as analogous to linear CW modulation and
why?
4. List out the applications, merits and demerits of PAM, PPM and PWM signals.
10. (a) Sketch and explain the typical waveforms of PWM signals, for leading edge, trailing edge and
symmetrical cases.
(b) Compare the analog pulse modulation schemes with CW modulation systems.
11. (a) Explain how the PPM signals can be generated and reconstructed through
PWM signals.
(b) Compare the merits and demerits of PAM, PDM and PPM signals. List out their applications.
12. (a) Define the Sampling theorem and establish the same for band pass signals, using neat
schematics.
(b) For the modulating signal m(t) = 2 cos(100πt) + 18 cos(2000πt), determine the allowable
sampling rates and sampling intervals.
13. Draw the block diagram of PCM generator and explain each block.
16. What are the applications of PCM? Give in detail any two applications.
18. Derive the expression for output Signal to noise ratio of PCM system.
20. Explain the working of DPCM system with neat block diagram.
21. Prove that the mean value of the quantization error is inversely proportional to the
22. Explain why quantization noise could affect small amplitude signals in a PCM system more
than large signals. With the aid of sketches, show how tapered quantizing levels could be
used to counteract this effect.
23. Explain the working of Delta modulation system with a neat block diagram.
24. Clearly bring out the difference between granular noise and slope overload error.
25. Consider a speech signal with maximum frequency of 3.4 KHz and maximum
amplitude of 1 V. This speech signal is applied to a DM whose bit rate is set at
20 kbps. Discuss the choice of appropriate step size for the modulator.
27. Explain with neat block diagram, Adaptive Delta Modulator transmitter and receiver.
28. Why is it necessary to use a greater sampling rate for DM than for PCM?
30. A delta modulator system is designed to operate at five times the Nyquist rate for a
signal with 3 KHz bandwidth. Determine the maximum amplitude of a 2 KHz input
sinusoid for which the delta modulator does not have slope overload. Quantization step
size is 250 mV. Derive the formula used.
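Q30 follows from the slope-overload condition: a delta modulator tracks a sinusoid A·sin(2πft) only while the maximum input slope A·2πf does not exceed δ·fs. A sketch with the question's numbers:

```python
import math

# Q30 sketch: delta modulator at 5x the Nyquist rate of a 3 kHz signal.
# Slope overload is avoided while A * 2*pi*f <= delta * fs.
bw = 3000.0                 # signal bandwidth (Hz)
fs = 5 * (2 * bw)           # sampling rate: 5x Nyquist = 30 kHz
f = 2000.0                  # input sinusoid frequency (Hz)
delta = 0.250               # step size (V)

a_max = delta * fs / (2 * math.pi * f)
print(round(a_max, 3))      # -> 0.597 V maximum amplitude
```

Any input sinusoid at 2 kHz above roughly 0.6 V would outrun the staircase and produce slope-overload distortion.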
31. Compare Delta modulation and PCM techniques in terms of bandwidth and signal to noise
ratio.
32. A signal m(t) is to be encoded using either Delta modulation or PCM technique. The signal
to quantization noise ratio (So/No) ≥ 30 dB. Find the ratio of bandwidth required for PCM to
Delta modulation.
33. What are the advantages and disadvantages of Digital modulation schemes?
35. Explain how the residual effects of the channel are responsible for ISI?
37. What is the ideal solution to obtain zero ISI, and what is the disadvantage of this
solution?
38. Explain the signal space representation of QPSK. Compare QPSK with all other digital
signaling schemes.
39. Write down the modulation waveform for transmitting binary information over
baseband channels for the following modulation schemes: ASK,PSK,FSK and DPSK.
40. Explain in detail the power spectra and bandwidth efficiency of M-ary signals.
42. Compare and discuss a binary scheme with M-ary signaling scheme.
45. Find the transfer function of the optimum receiver and calculate the error probability.
46. Derive an expression for probability of bit error of a binary coherent FSK receiver.
48. Show that the impulse response of a matched filter is a time reversed and delayed
version of the input signal, and briefly explain the properties of the matched filter.
49. Binary data has to be transmitted over a telephone link that has a usable bandwidth of 3000
Hz and a maximum achievable SNR of 6 dB at its output.
i) Determine the maximum signaling rate and error probability if coherent ASK is used.
ii) If the data rate is maintained at 300 bits/sec, calculate the error probability.
50. Binary data is transmitted over an RF band pass channel with a usable bandwidth of 10 MHz
at a rate of 4.8×10^6 bits/sec using an ASK signaling method. The carrier amplitude at the
receiver antenna is 1 mV and the noise power spectral density at the receiver input is 10^-15
W/Hz.
51. One of four possible messages Q1, Q2, Q3, Q4 having probabilities 1/8, 3/8, 3/8, and 1/8
respectively is transmitted. Calculate average information per message.
52. An ideal low pass channel of bandwidth B Hz with additive Gaussian white noise is
used for transmission of digital information.
a. Plot C/B versus S/N in dB for an ideal system using this channel
b. A practical signaling scheme on this channel uses one of two waveforms of duration
Tb sec to transmit binary information. The signaling scheme transmits data at the
rate of 2B bits/sec; the probability of error is given by P(error|1 sent) = Pe
c. Plot graphs of
i C/B
53. Define and explain the following in terms of the joint pdf p(x, y) and the marginal pdfs p(x) and p(y).
d. Mutual Information
e. Average Mutual Information
f. Entropy
54. Let X be a discrete random variable with equally probable outcomes X1 = A and X2 = -A, and
let the conditional pdfs p(y|xi), i = 1, 2 be Gaussian with mean xi and variance σ^2. Calculate the
average mutual information I(X; Y).
55. Write short notes on the following
a. Mutual Information
b. Self Information
c. Logarithmic measure for information
56. Write short notes on the following
a. Entropy
b. Conditional entropy
c. Mutual Information
d. Information
57. A DMS has an alphabet of eight letters, Xi, i = 1,2,…,8, with probabilities
0.36, 0.14, 0.13, 0.12, 0.10, 0.09, 0.04, 0.02.
i. Use the Huffman encoding procedure to determine a binary code for the
source output.
ii. Determine the entropy of the source and find the efficiency of the code
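Q57 can be verified with a minimal Huffman construction. The tree-building style below (a heap of merged-probability nodes carrying per-symbol code lengths) is one of several equivalent implementations, not the textbook's procedure verbatim.

```python
import heapq
import math

# Q57 sketch: Huffman code for the eight-letter DMS and its efficiency.
probs = [0.36, 0.14, 0.13, 0.12, 0.10, 0.09, 0.04, 0.02]

# Each heap entry: (probability, unique id, {symbol: code length so far}).
heap = [(p, i, {i: 0}) for i, p in enumerate(probs)]
heapq.heapify(heap)
uid = len(probs)
while len(heap) > 1:
    p1, _, d1 = heapq.heappop(heap)   # two least probable nodes
    p2, _, d2 = heapq.heappop(heap)
    merged = {k: v + 1 for k, v in {**d1, **d2}.items()}  # one level deeper
    heapq.heappush(heap, (p1 + p2, uid, merged))
    uid += 1
lengths = heap[0][2]                  # final code length per symbol

avg_len = sum(probs[i] * lengths[i] for i in lengths)
entropy = -sum(p * math.log2(p) for p in probs)
# average length 2.7 bits, entropy ~2.62 bits, efficiency ~97%
print(round(avg_len, 2), round(entropy, 3), round(entropy / avg_len, 3))
```

The average codeword length (2.7 bits) sits just above the source entropy, giving a code efficiency of about 97%.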
61. Explain about block codes in which each block of k message bits is encoded into
100011
G = 010101
001110
Find
64. Explain about the syndrome calculation, error correction and error detection in
65. Briefly discuss about the linear block code error control technique.
66. Show that if g(x) is a polynomial of degree (n-k) and is a factor of x^n + 1, then
g(x) generates an (n,k) cyclic code in which the code polynomial for a data vector D
is generated by v(x) = D(x)·g(x).
67. Briefly discuss about the parity check bit error control technique.
69. Draw and explain a decoder diagram for a (7,4) majority logic code whose generator
polynomial is g(x) = 1 + x + x^3.
message polynomial.
72. Design an encoder for the (7,4) binary cyclic code generated by g(x) = 1 + x + x^3
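A systematic encoder such as the one asked for in Q72 amounts to GF(2) polynomial division: shift the message by x^(n-k), divide by g(x), and append the remainder as parity bits. The sketch below uses LSB-first bit lists and an illustrative message (0101 read as m(x) = x + x^3); the helper name is ours, not from the question.

```python
# Q72 sketch: systematic (7,4) cyclic encoding with g(x) = 1 + x + x^3.
# Bit lists are LSB-first polynomial coefficients.

def poly_mod(dividend, divisor):
    """Remainder of GF(2) polynomial division (LSB-first bit lists)."""
    r = dividend[:]
    for i in range(len(r) - 1, len(divisor) - 2, -1):
        if r[i]:  # cancel the leading term by XOR-ing an aligned divisor
            for j, d in enumerate(divisor):
                r[i - len(divisor) + 1 + j] ^= d
    return r[: len(divisor) - 1]

g = [1, 1, 0, 1]          # 1 + x + x^3
m = [0, 1, 0, 1]          # illustrative message: x + x^3
shifted = [0, 0, 0] + m   # x^(n-k) * m(x), with n-k = 3
parity = poly_mod(shifted, g)
codeword = parity + m     # parity bits followed by the message bits
print(parity, codeword)   # -> [1, 1, 0] [1, 1, 0, 0, 1, 0, 1]
```

Every codeword produced this way is divisible by g(x), which is exactly the property the receiver's syndrome computation checks.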
1110100
H= 1101010
1011001
The code word received is 1000011 for a transmitted codeword C. Find the
77. (a) What is meant by random errors and burst errors? Explain about a coding
technique which can be used to correct both the burst and random errors simultaneously.
78. Draw the state diagram and tree diagram for the K=3, rate 1/3 code generated by
79. (a) Design an encoder for the (7,4) binary cyclic code generated by g(x) = 1 + x + x^3 and verify its
operation using the message vector (0101).
(b) What are the differences between block codes and the convolutional codes?
81. A convolutional encoder has two shift registers, two modulo-2 adders and an output
multiplexer. The generator sequences of the encoder are as follows: g(1) = (1,0,1);
g(2) = (1,1,1). Assuming a 5-bit message sequence is transmitted, use the state diagram to
find the message sequence when the received sequence is
(11,01,00,10,01,10,11,00,00,......)
82. (a) What is meant by random errors and burst errors? Explain about a coding technique which
can be used to correct both the burst and random errors simultaneously.
83. Find the output codeword for the following convolutional encoder for the message sequence
10011. (as shown in the figure).
84. Construct the state diagram for the following encoder. Starting with all zero state, trace the path
that correspond to the message sequence 1011101. Given convolutional encoder has a single shift
register with two stages,(K=3) three modulo-2 adders and an output multiplexer. The generator
sequence s of the encoder are as follows. g(1)=(1, 0, 1) ; g(2)=(1, 1, 0),g(3)=(1,1,1).
85. Draw and explain Tree diagram of convolutional encoder shown below with rate=1/3, L=3
86. For the convolutional encoder shown below, draw the trellis diagram for the message
sequence 110. Let the first six received bits be 11 01 11; then, using Viterbi decoding, find
the decoded sequence.
87. Explain the Direct sequence spread spectrum technique with neat diagram
92. Explain the operation of slow and fast frequency hoping technique.
95. Explain TDMA system with frame structure, frame efficiency and features.
96. Explain CDMA system with its features and list out various problems in CDMA systems.
Known gaps:
1. The Digital Communications syllabus as per the curriculum does not match real-time
applications.
2. The subject does not cover the coding techniques presently in use.
Action to be taken: following additional topics are taken to fill the known gaps
While analog transmission is the transfer of a continuously varying analog signal over an
analog channel, digital communications is the transfer of discrete messages over a digital or
an analog channel. The messages are either represented by a sequence of pulses by means of
a line code (baseband transmission), or by a limited set of continuously varying wave forms
(passband transmission), using a digital modulation method. The passband modulation and
corresponding demodulation (also known as detection) is carried out by modem equipment.
According to the most common definition of digital signal, both baseband and passband
signals representing bit-streams are considered as digital transmission, while an alternative
definition only considers the baseband signal as digital, and passband transmission of digital
data as a form of digital-to-analog conversion.
Data transmitted may be digital messages originating from a data source, for example a
computer or a keyboard. It may also be an analog signal such as a phone call or a video
signal, digitized into a bit-stream for example using pulse-code modulation (PCM) or more
advanced source coding (analog-to-digital conversion and data compression) schemes. This
source coding and decoding is carried out by codec equipment.
The term tele transmission involves the analog as well as digital communication. In most
textbooks, the term analog transmission only refers to the transmission of an analog message
signal (without digitization) by means of an analog signal, either as a non-modulated
baseband signal, or as a passband signal using an analog modulation method such as AM or
FM. It may also include analog-over-analog pulse-modulated baseband signals such as
pulse-width modulation. In a few books within the computer networking tradition, "analog
transmission" also refers to passband transmission of bit-streams using digital modulation
methods such as FSK, PSK and ASK. Note that these methods are covered in textbooks
named digital transmission or data transmission, for example.[1]
The theoretical aspects of data transmission are covered by information theory and coding
theory.
Protocol layers and sub-topics
Courses and textbooks in the field of data transmission typically deal with the following OSI
model protocol layers and topics: 7. Application; 6. Presentation; 5. Session; 4. Transport;
3. Network; 2. Data link; 1. Physical.
Data (mainly but not exclusively informational) has been sent via non-electronic (e.g. optical,
acoustic, mechanical) means since the advent of communication. Analog signal data has been
sent electronically since the advent of the telephone. However, the first electromagnetic data
transmission applications in modern times were telegraphy (1809) and teletypewriters (1906),
which are both digital signals. The fundamental theoretical work in data transmission and
information theory by Harry Nyquist, Ralph Hartley, Claude Shannon and others during the
early 20th century, was done with these applications in mind.
Data transmission is utilized in computers in computer buses and for communication with
peripheral equipment via parallel ports and serial ports such as RS-232 (1969), Firewire
(1995) and USB (1996). The principles of data transmission have also been utilized in storage
media for error detection and correction since 1951.
In telephone networks, digital communication is utilized for transferring many phone calls
over the same copper cable or fiber cable by means of Pulse code modulation (PCM), i.e.
sampling and digitization, in combination with Time division multiplexing (TDM) (1962).
Telephone exchanges have become digital and software controlled, facilitating many value
added services. For example, the first AXE telephone exchange was presented in 1976. Since
the late 1980s, digital communication to the end user has been possible using Integrated
Services Digital Network (ISDN) services. Since the end of the 1990s, broadband access
techniques such as ADSL, Cable modems, fiber-to-the-building (FTTB) and fiber-to-the-
home (FTTH) have become widespread to small offices and homes. The current tendency is
to replace traditional telecommunication services by packet mode communication such as IP
telephony and IPTV.
Transmitting analog signals digitally allows for greater signal processing capability. The
ability to process a communications signal means that errors caused by random processes can
be detected and corrected. Digital signals can also be sampled instead of continuously
monitored. The multiplexing of multiple digital signals is much simpler than the multiplexing of analog signals.
Because of all these advantages, and because recent advances in wideband communication channels and solid-state electronics have allowed scientists to fully realize these advantages, digital communications has grown rapidly. It is quickly edging out analog communication because of the vast demand to transmit computer data and the ability of digital communications to do so.
The digital revolution has also resulted in many digital telecommunication applications where
the principles of data transmission are applied. Examples are second-generation (1991) and
later cellular telephony, video conferencing, digital TV (1998), digital radio (1999),
telemetry, etc.
Baseband or passband transmission
Asynchronous transmission uses start and stop bits to signify the beginning and end of each character, so an ASCII character would actually be transmitted using 10 bits. For example, "0100 0001" would become "1 0100 0001 0". The extra one (or zero, depending on parity bit) at the start and end of the transmission tells the receiver first that a character is coming and secondly that the character has ended. This method of transmission is used when data are sent intermittently rather than in a solid stream. In the previous example the start and stop bits are shown at the ends of the frame. The start and stop bits must be of opposite polarity. This allows the receiver to recognize when the second packet of information is being sent.
Synchronous transmission uses no start and stop bits; instead, it synchronizes transmission speeds at both the receiving and sending ends using clock signals built into each component. A continual stream of data is then sent between the two nodes. Because there are no start and stop bits, the data transfer rate is higher, but more errors can occur: the clocks eventually drift out of sync, the receiving device then samples at times other than those agreed in the protocol, and some bytes can become corrupted by losing bits. Ways around this problem include periodic re-synchronization of the clocks and the use of check digits to ensure that each byte is correctly interpreted and received.
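The asynchronous framing rule described above can be sketched in a few lines. The polarity convention here (start = '1', stop = '0') follows the example in the text; practical UARTs typically use a 0 start bit and a 1 stop bit.

```python
def frame_char(data_bits: str) -> str:
    """Wrap 8 data bits with a start and a stop bit, per the text's example.
    Start and stop have opposite polarity so the receiver can detect
    character boundaries."""
    assert len(data_bits) == 8
    return "1" + data_bits + "0"

def deframe_char(frame: str) -> str:
    """Strip the start and stop bits, checking their polarity."""
    assert len(frame) == 10
    assert frame[0] == "1" and frame[-1] == "0", "bad framing"
    return frame[1:-1]

# "0100 0001" (ASCII 'A') becomes "1 0100 0001 0": 10 bits on the wire.
print(frame_char("01000001"))      # 1010000010
print(deframe_char("1010000010"))  # 01000001
```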
23. References, Journals, websites and E-links:
TEXT BOOKS
Websites:
1. http://en.wikipedia.org/wiki/digital_communications
2. http://www.tmworld.com/archive/2011/20110801.php
3. www.pemuk.com
4. www.site.uottawa.com
5. www.tews.elektronik.com
Journals:
1. Communications Journal
REFERENCES:
1. Digital Communications - John G. Proakis, Masoud Salehi, 5th edition, McGraw-Hill, 2008.
2. Digital Communication - Simon Haykin, John Wiley, 2005.
3. Digital Communications - Ian A. Glover, Peter M. Grant, 2nd edition, Pearson Education, 2008.
4. Communication Systems - B. P. Lathi, BS Publications, 2006.
SECTION-D:
S.No Roll number Student Name
8 13R11A04G3 G MANIDEEP
14 13R11A04G9 K DARSHAN
15 13R11A04H0 K. ANIRUDH
27 13R11A04J2 M TANVIKA
33 13R11A04J8 P G CHANDANA
39 13R11A04K4 RAMYA S
Group 1:
Group 2:
Group 3:
14 13R11A04G9 K DARSHAN
15 13R11A04H0 K. ANIRUDH
Group 4:
16 13R11A04H1 KOMIRISHETTY AKHILA
Group 5:
Group 6:
27 13R11A04J2 M TANVIKA
Group 7:
33 13R11A04J8 P G CHANDANA
39 13R11A04K4 RAMYA S
Group 9:
Group 10:
UNIT-1
SAMPLING:
Pulse-Amplitude Modulation:

In flat-top PAM the sampled message is shaped by a rectangular pulse

$$h(t) = \begin{cases} 1, & 0 < t < T \\ \tfrac{1}{2}, & t = 0,\ t = T \\ 0, & \text{otherwise} \end{cases} \tag{3.11}$$

The instantaneously sampled version of $m(t)$ is

$$m_\delta(t) = \sum_{n=-\infty}^{\infty} m(nT_s)\,\delta(t - nT_s) \tag{3.12}$$

Convolving $m_\delta(t)$ with $h(t)$ gives

$$m_\delta(t) \star h(t) = \int_{-\infty}^{\infty} m_\delta(\tau)\,h(t-\tau)\,d\tau = \sum_{n=-\infty}^{\infty} m(nT_s) \int_{-\infty}^{\infty} \delta(\tau - nT_s)\,h(t-\tau)\,d\tau \tag{3.13}$$

Using the sifting property, we have

$$m_\delta(t) \star h(t) = \sum_{n=-\infty}^{\infty} m(nT_s)\,h(t - nT_s) \tag{3.14}$$
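The flat-top PAM sum of Eq. (3.14) can be evaluated numerically. A minimal sketch; the sample values and pulse widths below are chosen arbitrarily for illustration:

```python
def rect_pulse(t: float, T: float) -> float:
    """h(t) of Eq. (3.11): 1 inside (0, T), 1/2 at the edges, 0 elsewhere."""
    if 0.0 < t < T:
        return 1.0
    if t == 0.0 or t == T:
        return 0.5
    return 0.0

def flat_top_pam(samples, Ts: float, T: float, t: float) -> float:
    """s(t) = sum_n m(n*Ts) * h(t - n*Ts), as in Eq. (3.14)."""
    return sum(m_n * rect_pulse(t - n * Ts, T) for n, m_n in enumerate(samples))

# With T < Ts the pulses do not overlap, so s(t) inside the n-th pulse
# simply equals the n-th sample value.
samples = [0.2, -0.5, 0.9]
print(flat_top_pam(samples, Ts=1.0, T=0.5, t=1.25))  # -0.5
```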
The instantaneous amplitude of the analog (voice) signal is held as a constant charge on a capacitor for the duration of the sampling period Ts. This technique is useful for holding the sample constant while other processing is taking place, but it alters the frequency spectrum and introduces an error, called aperture error, resulting in an inability to recover exactly the original analog signal. The amount of error depends on how much the analog signal changes during the holding time, called the aperture time. To estimate the maximum voltage error possible, determine the maximum slope of the analog signal and multiply it by the aperture time ΔT.
Unit 3
Differential Pulse Code Modulation (DPCM):
Usually PCM uses a sampling rate higher than the Nyquist rate, so the encoded signal contains redundant information. DPCM can efficiently remove this redundancy. The quantizer output is the sum of the original sample and the quantization error:

$$m_q(n) = m(n) + q(n) \tag{3.78}$$
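A minimal DPCM sketch with a first-order predictor (the previously reconstructed sample) and a uniform quantizer; the step size is an arbitrary choice for illustration:

```python
def dpcm_encode(samples, step=0.1):
    """Quantize the prediction error e(n) = m(n) - m_hat(n), where the
    predictor m_hat(n) is simply the previously reconstructed sample."""
    codes, prev = [], 0.0
    for m in samples:
        e = m - prev
        code = round(e / step)      # uniform quantizer index
        codes.append(code)
        prev = prev + code * step   # reconstruction matched to the decoder
    return codes

def dpcm_decode(codes, step=0.1):
    """Rebuild the waveform by accumulating the quantized differences."""
    out, prev = [], 0.0
    for code in codes:
        prev = prev + code * step
        out.append(prev)
    return out

x = [0.0, 0.12, 0.33, 0.41, 0.38]
y = dpcm_decode(dpcm_encode(x))
# Reconstruction error stays within half a quantizer step:
assert all(abs(a - b) <= 0.05 + 1e-9 for a, b in zip(x, y))
```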
Unit 4
Duo-binary Signaling :
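Duobinary signaling deliberately introduces controlled intersymbol interference: each transmitted level is the sum of the current and previous input symbols, y(n) = x(n) + x(n-1), which compresses the spectrum at the cost of a three-level alphabet. A minimal sketch, assuming bipolar inputs and no precoder:

```python
def duobinary(bits):
    """Map bits {0,1} to bipolar levels {-1,+1}, then form
    y[n] = x[n] + x[n-1]. The first output uses an assumed
    initial condition x[-1] = -1."""
    x = [1 if b else -1 for b in bits]
    prev = -1
    out = []
    for level in x:
        out.append(level + prev)
        prev = level
    return out  # three-level sequence drawn from {-2, 0, +2}

print(duobinary([1, 1, 0, 1]))  # [0, 2, 0, 0]
```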
MISSING TOPICS
UNIT 1
Hartley's law
During that same year, Hartley formulated a way to quantify information and its line rate R (also known as the data signalling rate, or gross bitrate inclusive of error-correcting code) across a communications channel.[1] This method, later known as Hartley's law, became an important precursor for Shannon's more sophisticated notion of channel capacity.
Hartley argued that the maximum number of distinct pulses that can be transmitted and
received reliably over a communications channel is limited by the dynamic range of the
signal amplitude and the precision with which the receiver can distinguish amplitude levels.
Specifically, if the amplitude of the transmitted signal is restricted to the range of [-A ... +A] volts, and the precision of the receiver is ±ΔV volts, then the maximum number of distinct pulses M is given by

M = 1 + A/ΔV

Taking the information per pulse to be log2(M) bits, Hartley's line rate is R = fp log2(M), where fp is the pulse rate, also known as the symbol rate, in symbols/second or baud.
Hartley then combined the above quantification with Nyquist's observation that the number
of independent pulses that could be put through a channel of bandwidth B hertz was
2B pulses per second, to arrive at his quantitative measure for achievable line rate.
Hartley's law is sometimes quoted as just a proportionality between the analog bandwidth, B, in hertz and what today is called the digital bandwidth, R, in bit/s.[3] Other times it is quoted in this more quantitative form, as an achievable line rate of R bits per second:[4]

R ≤ 2B log2(M)
Hartley did not work out exactly how the number M should depend on the noise statistics of
the channel, or how the communication could be made reliable even when individual symbol
pulses could not be reliably distinguished to M levels; with Gaussian noise statistics, system
designers had to choose a very conservative value of M to achieve a low error rate.
The concept of an error-free capacity awaited Claude Shannon, who built on Hartley's
observations about a logarithmic measure of information and Nyquist's observations about
the effect of bandwidth limitations.
Hartley's rate result can be viewed as the capacity of an errorless M-ary channel of
2B symbols per second. Some authors refer to it as a capacity. But such an errorless channel
is an idealization, and the result is necessarily less than the Shannon capacity of the noisy
channel of bandwidth B, which is the Hartley–Shannon result that followed later.
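Hartley's two relations, M = 1 + A/ΔV and R = 2B log2(M), can be combined in a short calculation; the amplitude, precision, and bandwidth values below are illustrative:

```python
import math

def hartley_levels(amplitude: float, precision: float) -> int:
    """Maximum number of distinguishable pulse levels, M = 1 + A/dV."""
    return int(1 + amplitude / precision)

def hartley_rate(bandwidth_hz: float, levels: int) -> float:
    """Achievable line rate R = 2*B*log2(M) in bit/s, using Nyquist's
    maximum pulse rate of 2B pulses per second."""
    return 2 * bandwidth_hz * math.log2(levels)

M = hartley_levels(7.0, 1.0)   # 8 distinguishable levels
R = hartley_rate(3000.0, M)    # 2 * 3000 * 3 = 18000 bit/s
print(M, R)
```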
Noisy channel coding theorem and capacity
Main article: noisy-channel coding theorem
Claude Shannon's development of information theory during World War II provided the next
big step in understanding how much information could be reliably communicated through
noisy channels. Building on Hartley's foundation, Shannon's noisy channel coding
theorem (1948) describes the maximum possible efficiency of error-correcting
methods versus levels of noise interference and data corruption.[5][6] The proof of the theorem
shows that a randomly constructed error correcting code is essentially as good as the best
possible code; the theorem is proved through the statistics of such random codes.
Shannon's theorem shows how to compute a channel capacity from a statistical description of
a channel, and establishes that given a noisy channel with capacity C and information
transmitted at a line rate R, then if

R < C

there exists a coding technique which allows the probability of error at the receiver to be made arbitrarily small. This means that, theoretically, it is possible to transmit information nearly without error at any rate below the limit of C bits per second.
The converse is also important. If

R > C,
the probability of error at the receiver increases without bound as the rate is increased. So no
useful information can be transmitted beyond the channel capacity. The theorem does not
address the rare situation in which rate and capacity are equal.
Shannon–Hartley theorem
The Shannon–Hartley theorem establishes what that channel capacity is for a finite-bandwidth continuous-time channel subject to Gaussian noise. It connects Hartley's result with Shannon's channel capacity theorem in a form that is equivalent to specifying the M in Hartley's line rate formula in terms of a signal-to-noise ratio, but achieving reliability through error-correction coding rather than through reliably distinguishable pulse levels. Stated mathematically, the capacity is

C = B log2(1 + S/N)

where C is the capacity in bits per second, B the bandwidth in hertz, and S/N the signal-to-noise power ratio.
If there were such a thing as an infinite-bandwidth, noise-free analog channel, one could
transmit unlimited amounts of error-free data over it per unit of time. Real channels,
however, are subject to limitations imposed by both finite bandwidth and nonzero noise.
So how do bandwidth and noise affect the rate at which information can be transmitted over
an analog channel?
Surprisingly, bandwidth limitations alone do not impose a cap on maximum information rate.
This is because it is still possible for the signal to take on an indefinitely large number of
different voltage levels on each symbol pulse, with each slightly different level being
assigned a different meaning or bit sequence. If we combine both noise and bandwidth
limitations, however, we do find there is a limit to the amount of information that can be
transferred by a signal of a bounded power, even when clever multi-level encoding
techniques are used.
In the channel considered by the Shannon-Hartley theorem, noise and signal are combined by
addition. That is, the receiver measures a signal that is equal to the sum of the signal
encoding the desired information and a continuous random variable that represents the noise.
This addition creates uncertainty as to the original signal's value. If the receiver has some
information about the random process that generates the noise, one can in principle recover
the information in the original signal by considering all possible states of the noise process. In
the case of the Shannon-Hartley theorem, the noise is assumed to be generated by a Gaussian
process with a known variance. Since the variance of a Gaussian process is equivalent to its
power, it is conventional to call this variance the noise power.
Such a channel is called the Additive White Gaussian Noise channel, because Gaussian noise
is added to the signal; "white" means equal amounts of noise at all frequencies within the
channel bandwidth. Such noise can arise both from random sources of energy and also from
coding and measurement error at the sender and receiver respectively. Since sums of
independent Gaussian random variables are themselves Gaussian random variables, this
conveniently simplifies analysis, if one assumes that such error sources are also Gaussian and
independent.
In Hartley's form, this capacity corresponds to M = √(1 + S/N) distinguishable levels. The square root effectively converts the power ratio back to a voltage ratio, so the number of levels is approximately proportional to the ratio of RMS signal amplitude to noise standard deviation.
This similarity in form between Shannon's capacity and Hartley's law should not be
interpreted to mean that M pulse levels can be literally sent without any confusion; more
levels are needed, to allow for redundant coding and error correction, but the net data rate that
can be approached with coding is equivalent to using that M in Hartley's law.
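The Shannon–Hartley capacity C = B log2(1 + S/N) is easy to evaluate; the figures below, a telephone-grade channel of 3 kHz bandwidth at 30 dB SNR, are an illustrative assumption:

```python
import math

def shannon_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Channel capacity C = B * log2(1 + S/N) for an AWGN channel."""
    return bandwidth_hz * math.log2(1.0 + snr_linear)

snr = 10 ** (30 / 10)            # 30 dB -> linear power ratio of 1000
C = shannon_capacity(3000.0, snr)
print(f"{C:.0f} bit/s")          # roughly 29.9 kbit/s
```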
ADDITIONAL TOPICS
Unit 4
• Bandpass Signal
Characteristics:
Let us consider D_N = {(x_i, y_i) : i = 1, ..., N}, i.i.d. realizations of the joint observation-class phenomenon (X(u), Y(u)) with true probability measure P_{X,Y} defined on (X × Y, σ(F_X × F_Y)). In addition, let us consider a family of measurable representation functions D, where any f(·) ∈ D is defined on X and takes values in X_f. Let us assume that any representation function f(·) induces an empirical distribution P̂_{X_f,Y} on (X_f × Y, σ(F_f × F_Y)), based on the training data and an implicit learning approach, where the empirical Bayes classification rule is given by ĝ_f(x) = arg max_{y ∈ Y} P̂_{X_f,Y}(x, y).
UNIT 6
Turbo codes
In information theory, turbo codes (originally in French Turbocodes) are a class of high-
performance forward error correction (FEC) codes developed in 1993, which were the first
practical codes to closely approach the channel capacity, a theoretical maximum for the code
rate at which reliable communication is still possible given a specific noise level. Turbo
codes are finding use in 3G mobile communications and (deep
space) satellite communications as well as other applications where designers seek to achieve
reliable information transfer over bandwidth- or latency-constrained communication links in
the presence of data-corrupting noise. Turbo codes are nowadays competing with LDPC
codes, which provide similar performance.
Prior to turbo codes, the best constructions were serial concatenated codes based on an
outer Reed-Solomon error correction code combined with an inner Viterbi-decoded short
constraint length convolutional code, also known as RSV codes.
In 1993, turbo codes were introduced by Berrou, Glavieux, and Thitimajshima (from
Télécom Bretagne, former ENST Bretagne, France) in their paper: "Near Shannon Limit
Error-correcting Coding and Decoding: Turbo-codes" published in the Proceedings of IEEE
International Communications Conference. In a later paper, Berrou gave credit to the
"intuition" of "G. Battail, J. Hagenauer and P. Hoeher, who, in the late 80s, highlighted the
interest of probabilistic processing". He adds "R. Gallager and M. Tanner had already
imagined coding and decoding techniques whose general principles are closely related,"
although the necessary calculations were impractical at that time.
The first class of turbo code was the parallel concatenated convolutional code (PCCC). Since
the introduction of the original parallel turbo codes in 1993, many other classes of turbo code
have been discovered, including serial versions and repeat-accumulate codes. Iterative Turbo
decoding methods have also been applied to more conventional FEC systems, including
Reed-Solomon corrected convolutional codes.
There are many different instantiations of turbo codes, using different component encoders,
input/output ratios, interleavers, and puncturing patterns. This example encoder
implementation describes a 'classic' turbo encoder, and demonstrates the general design of
parallel turbo codes.
This encoder implementation sends three sub-blocks of bits. The first sub-block is the m-bit
block of payload data. The second sub-block is n/2 parity bits for the payload data, computed
using a recursive systematic convolutional code (RSC code). The third sub-block is n/2 parity
bits for a known permutation of the payload data, again computed using an RSC
convolutional code. Thus, two redundant but different sub-blocks of parity bits are sent with
the payload. The complete block has m+n bits of data with a code rate of m/(m+n).
The permutation of the payload data is carried out by a device called an interleaver.
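The interleaver's job is just a deterministic permutation of the payload bits. A simple row-in, column-out block interleaver is sketched below; the permutation actually used by a given turbo code is specified separately by each standard:

```python
def block_interleave(bits, rows, cols):
    """Write bits row by row into a rows x cols array, read column by column."""
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def block_deinterleave(bits, rows, cols):
    """Inverse permutation: write column by column, read row by row."""
    assert len(bits) == rows * cols
    out = [0] * (rows * cols)
    i = 0
    for c in range(cols):
        for r in range(rows):
            out[r * cols + c] = bits[i]
            i += 1
    return out

data = [1, 0, 1, 1, 0, 0]
shuffled = block_interleave(data, 2, 3)   # read column-wise: [1, 1, 0, 0, 1, 0]
assert block_deinterleave(shuffled, 2, 3) == data
```

Spreading adjacent payload bits far apart is what lets the two decoders see statistically independent views of an error burst.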
Hardware-wise, this turbo-code encoder consists of two identical RSC coders, C1 and C2, as
depicted in the figure, which are connected to each other using a concatenation scheme,
called parallel concatenation:
In the figure, M is a memory register. The delay line and interleaver force input bits dk to appear in different sequences. At the first iteration, the input sequence dk appears at both outputs of the encoder, xk and y1k or y2k, due to the encoder's systematic nature. If the encoders C1 and C2 are used respectively in n1 and n2 iterations, their rates are respectively equal to

R1 = (n1 + n2) / (2·n1 + n2),  R2 = (n1 + n2) / (n1 + 2·n2).
The decoder
The decoder is built in a similar way to the encoder above: two elementary decoders are interconnected, but in a serial way, not in parallel. The first decoder, DEC1, operates at the lower rate R1; it is intended for the C1 encoder, and DEC2 is for C2 correspondingly. DEC1 yields a soft decision, which causes a delay; the same delay is caused by the delay line in the encoder. DEC2's operation causes a further delay.
An interleaver installed between the two decoders is used here to scatter error bursts coming from the DEC1 output. The DI block is a demultiplexing and insertion module. It works as a switch, redirecting input bits to DEC1 at one moment and to DEC2 at another. In the OFF state, it feeds both DEC1 and DEC2 inputs with padding bits (zeros).
Consider a memoryless AWGN channel, and assume that at the k-th iteration, the decoder receives a pair of random variables: the noisy observations of the systematic and parity bits.
Unit I:
Unit II:
Unit III:
Unit IV:
Unit V:
1. PPTs
2. OHP slides
5. Any simulations
(I)Aims
GUIDELINES:
Distribution of periods:
No. of classes required to cover Assignment tests (for every 2 units 1 test) : 4