
ANALOG AND DIGITAL COMMUNICATION

Prepared according to Anna university syllabus R-2017

(Common to III semester-CSE/IT )

G. Elumalai, M.E.,(Ph.D)

Assistant Professor (Grade I)

Department of Electronics and Communication Engineering

Panimalar Engineering College

Chennai.

M. Jaiganesh, M.E.

Assistant Professor

Department of Electronics and Communication Engineering

Panimalar Engineering College

Chennai.

CHENNAI

SREE KAMALAMANI PUBLICATIONS (P) Ltd.

New No. AJ. 21, old No. AJ. 52, Plot No. 2614,

4th Cross, 9th Main Road, Anna Nagar -600 040,

Chennai, Tamilnadu, India

Landline: 91-044-42170813,

Mobile: 91-9840795803

Email id: skmpulbicationsmumdad@gmail.com

1st Edition 2014

2nd Revised Edition 2016

Copyright © 2014, by Sree Kamalamani Publications.

No part of this publication may be reproduced or distributed in any form or by any

means, electronic, mechanical, photocopying, recording or otherwise or stored in a

database or retrieval system without the prior written permission of the publishers.

Information contained in this work has been obtained by Sree Kamalamani Publications from sources believed to be reliable. However, neither Sree Kamalamani Publications nor its authors guarantee the accuracy or completeness of any information published herein, and neither Sree Kamalamani Publications nor its authors shall be responsible for any errors, omissions, or damages arising out of the use of this information. This work is published with the understanding that Sree Kamalamani Publications and its authors are supplying information but are not attempting to render engineering or other professional services. If such services are required, the assistance of an appropriate professional should be sought.

Sree Kamalamani Publications

New No. AJ. 21, Old No. AJ. 52, Plot No 2614,

9th Main, 4th cross, Anna Nagar-600 040

Chennai, Tamilnadu, India.

Landline: 91-044-42170813,

Mobile: 91-9840795803

About the Author

G. Elumalai, M.E., is working as an Assistant Professor (Grade I) in the Department of Electronics and Communication Engineering, Panimalar Engineering College, Chennai. He obtained his B.E. in Electronics and Communication Engineering and his M.E. in Applied Electronics, and is pursuing his Ph.D. in Wireless Sensor Networks. His areas of interest are Communication Systems, Digital Communication, Digital Signal Processing and Wireless Sensor Networks. He has more than 13 years of experience.

M. Jaiganesh, M.E., is working as an Assistant Professor in the Department of Electronics and Communication Engineering, Panimalar Engineering College, Chennai. He obtained his B.E. in Electronics and Communication Engineering and his M.E. in Computer and Communication. His areas of interest are Communication Systems, Digital Communication, Optical Communication and Embedded Systems. He has more than 4 years of experience.

PREFACE

Dear Students,

We are extremely happy to present the book "Analog and Digital Communication" to you. This book has been written strictly as per the revised syllabus (R2013) of Anna University. We have divided the subject into five units so that the topics can be arranged and understood properly. The topics within the units have been arranged in a proper sequence to ensure a smooth flow of the subject.

Unit I - Introduces the basic concepts of communication, the need for modulation and the different types of analog modulation (Amplitude modulation, Frequency modulation and Phase modulation).

Unit II - Covers the digital modulation techniques, which include ASK, FSK, PSK, QPSK and QAM.

Unit III - Deals with data communication and the various pulse modulation techniques.

Unit IV - Concentrates on various techniques for error control coding.

Unit V - Gives an overview of mobile and satellite communication systems.

A large number of solved university examples and university questions have been included in each unit, so we are sure that this book will cater to all your needs for this subject.

We have made every possible effort to eliminate all the errors in this book. However, if you find any, please let us know, as that will help us improve further.

G.Elumalai

M.Jaiganesh

EC8394 ANALOG AND DIGITAL COMMUNICATION L T P C 3 0 0 3

UNIT I ANALOG COMMUNICATION

Noise: Sources of Noise - External Noise - Internal Noise - Noise Calculation. Introduction to Communication Systems: Modulation – Types - Need for Modulation. Theory of Amplitude Modulation - Evolution and Description of SSB Techniques - Theory of Frequency and Phase Modulation – Comparison of various Analog Communication Systems (AM – FM – PM).

UNIT II DIGITAL COMMUNICATION

Amplitude Shift Keying (ASK) – Frequency Shift Keying (FSK) – Minimum Shift Keying (MSK) – Phase Shift Keying (PSK) – BPSK – QPSK – 8 PSK – 16 PSK - Quadrature Amplitude Modulation (QAM) – 8 QAM – 16 QAM – Bandwidth Efficiency – Comparison of various Digital Communication Systems (ASK – FSK – PSK – QAM).

UNIT III DATA AND PULSE COMMUNICATION

Data Communication: Standards Organizations for Data Communication - Data Communication Circuits - Data Communication Codes - Error Detection and Correction Techniques - Data Communication Hardware - Serial and Parallel Interfaces. Pulse Communication: Pulse Amplitude Modulation (PAM) – Pulse Time Modulation (PTM) – Pulse Code Modulation (PCM) - Comparison of various Pulse Communication Systems (PAM – PTM – PCM).

UNIT IV SOURCE AND ERROR CONTROL CODING

Entropy, source coding, mutual information, channel capacity, channel coding theorem, Error Control Coding, linear block codes, cyclic codes, convolution codes, Viterbi decoding algorithm.

UNIT V MULTI-USER RADIO COMMUNICATION

Advanced Mobile Phone System (AMPS) - Global System for Mobile Communications (GSM) - Code Division Multiple Access (CDMA) – Cellular Concept and Frequency Reuse - Channel Assignment and Hand-off - Overview of Multiple Access Schemes - Satellite Communication - Bluetooth

TABLE OF CONTENTS


1.2 Noise 1.5

1.3 Introduction to communication system 1.12

1.4 Modulation 1.16

1.5 Need for modulation 1.17

1.6 Classifications of modulation 1.20

1.7 Some important definitions related to

communication 1.21

1.8 Theory of Amplitude modulation 1.24

1.9 Generation of SSB 1.58

1.10 AM – Transmitters 1.65

1.11 AM Super heterodyne receiver with its characteristic

Performance 1.68

1.12 Performance characteristics of a receiver 1.72

1.13 Theory of Frequency and Phase modulation 1.75

1.14 Comparison of various analog communication

system 1.103

Solved two mark questions 1.106

Review Questions 1.117-1.120

2.2 Digital Transmission system 2.3

2.3 Digital Radio 2.4

2.4 Information capacity 2.5

2.5 Trade of between, Bandwidth and SNR 2.7

2.6 M-ary encoding 2.10

2.7 Digital Continuous wave modulation technique 2.10

2.8 Amplitude shift keying (or) ON-OFF keying Modulation (or) OOK – System 2.12

2.9 Frequency shift keying 2.18

2.10 Minimum shift keying (or) continuous phase

frequency shift keying 2.26

2.11 Phase shift keying 2.27

2.12 Differential Phase shift keying 2.37

2.13 Quadrature Phase shift keying 2.41

2.14 8 PSK System 2.49

2.15 16 PSK System 2.56

2.16 Quadrature Amplitude modulation 2.57

2.17 16 - QAM 2.61

2.18 Carrier recovery (phase referencing) 2.66

2.19 Clock recovery circuit 2.70

2.20 Comparison of various digital

communication system 2.72

Solved two mark questions 2.74

Review Questions 2.83-2.84

3.2 History of data communication 3.3

3.3 Components of Data communication systems 3.4

3.4 Standard organization for data communication 3.6

3.5 Data communication circuits 3.7

3.6 Data transmission 3.8

3.7 Configurations 3.13

3.8 Topologies 3.14

3.9 Transmission modes 3.15

3.10 Data communication codes 3.18

3.11 Introduction to error detection and

correction techniques 3.25

3.12 Error detection techniques 3.28

3.13 Error correction techniques 3.45


3.15 Serial interface 3.63

3.16 Centronics – Parallel interface 3.72

3.17 Introduction to Pulse modulation 3.76

3.18 Pulse Amplitude Modulation (PAM) 3.80

3.19 Pulse Width Modulation (PWM) 3.83

3.20 Pulse Position Modulation (PPM) 3.84

3.21 Pulse Code Modulation (PCM) 3.84

3.22 Differential Pulse Code Modulation (DPCM) 3.104

3.23 Delta Modulation (DM) 3.107

3.24 Adaptive Delta Modulation (ADM) 3.111

3.25 Comparison of various pulse communication system 3.115

3.26 Comparison of various source coding methods 3.117

Solved two mark questions 3.119

Review Questions 3.126-3.128

4.2 Entropy (or) average information (H) 4.6

4.3 Source coding to increase average information

per bit 4.18

4.4 Data compaction 4.20

4.5 Shannon fano coding algorithm 4.20

4.6 Huffman coding algorithm 4.24

4.7 Mutual information 4.39

4.8 Channel capacity 4.45

4.9 Maximum entropy for continuous channel 4.46

4.10 Channel coding theorem 4.47

4.11 Error control codings 4.57

4.12 Linear Block codes 4.59

4.13 Hamming codes 4.61

4.14 Syndrome decoding for Linear block codes 4.69

4.15 Cyclic codes 4.88

4.16 Convolutional codes 4.100

4.17 Decoding methods of Convolutional codes 4.113


Review Questions 4.137-4.138

5.2 Advanced Mobile Phone Systems (AMPS) 5.4

5.3 Global system for mobile - GSM (2G) 5.8

5.4 CDMA 5.19

5.5 Cellular network 5.25

5.6 Multiple access techniques for wireless

Communication 5.37

5.7 Satellite communication 5.47

5.8 Satellite Link system Models 5.48

5.9 Earth station (or) ground station 5.52

5.10 Kepler's laws 5.54

5.11 Satellite Orbits 5.56

5.12 Satellite Elevation categories 5.58

5.13 Satellite frequency plans and allocation 5.60

5.14 5.60

5.75-5.77

5.78

UNIT I ANALOG COMMUNICATION

Noise: Sources of Noise - External Noise - Internal Noise - Noise Calculation. Introduction to Communication Systems: Modulation – Types - Need for Modulation. Theory of Amplitude Modulation - Evolution and Description of SSB Techniques - Theory of Frequency and Phase Modulation – Comparison of various Analog Communication Systems (AM – FM – PM).

ANALOG AND DIGITAL COMMUNICATION

1.1 INTRODUCTION

Communication is the process of establishing a connection (or link) between two points for information exchange.

The science of communication involving long distances is called telecommunication; the word "tele" stands for long distance.

The information can be of different types such as sound, picture, music, computer data etc.

The basic communication components are

• A transmitter
• A communication channel or medium and
• A receiver

The basic block diagram of a communication system is shown in figure 1.1.

[Figure 1.1: Source, Transmitter, Communication channel (with noise and distortion), Receiver]

ANALOG COMMUNICATION

The major elements of a communication system are:

1. Information source
2. Input transducer
3. Transmitter
4. Communication channel
5. Noise
6. Receiver
7. Output transducer

Information source

The purpose of a communication system is to transmit useful information from one place to the other. The information can be in the form of sound such as speech or music, or it can be in the form of pictures, or it can be data information coming from a computer.

Input transducer

The information produced by the source cannot be transmitted as it is. The input transducer converts the information from the source into a suitable electrical signal. Examples of input transducers used in communication systems are microphones, TV cameras etc.

Transmitter

The transmitter converts the electrical equivalent of the information into a form suitable for transmission through the communication medium (or) channel. It may consist of an amplifier, mixer, oscillator and power amplifier. The signal power level should be increased in order to cover a large range.

Communication channel

The communication channel is the medium used for the transmission of the electronic signal from one place to the other. The communication medium can be conducting wires, cables, optical fibre or free space. Depending on the type of communication medium, two types of communication systems exist: wireline and wireless communication.

Noise

Noise is an unwanted electrical signal which gets added to the transmitted signal when it is travelling towards the receiver. Once added, the noise cannot be separated out from the information; however, its effect can be reduced by using various techniques.

Receiver

The receiver reproduces the message signal in electrical form from the distorted received signal; that is, it extracts the original signal from the transmitted signal. It may consist of a demodulator, amplifier, mixer, oscillator and power amplifier.

Output transducer

The output transducer converts the electrical signal at the output of the receiver back to the original form, (i.e) sound, picture and data signals. Examples of output transducers are loudspeakers, picture tubes, computer monitors etc.

1.2 NOISE

Noise may be defined as any unwanted electrical signal which interferes with the desired signal.

In audio and video systems, the electrical disturbances appearing as interference are called noise.

In general, noise may be predictable or unpredictable (random) in nature.

Predictable noise

The predictable noise can be estimated and eliminated by proper engineering design. The predictable noise is generally man-made noise and it can be eliminated easily.

Examples: power supply hum, ignition radiation pickup, spurious oscillations in feedback amplifiers, fluorescent lighting.

Unpredictable noise

This type of noise varies randomly with time, and we have no control over it. The term noise is generally used to represent random noise. The presence of random noise complicates the design of a communication system.

Sources of noise

The noise sources are classified into two categories:

1. Internal noise
2. External noise

The internal noise is further classified as:

1. Shot noise
2. Thermal noise
3. Partition noise
4. Flicker noise

External noise

The external noise is generated outside the communication system, and is classified as:

1. Natural noise
2. Man-made noise

Natural noise is produced by lightning, electrical storms and other atmospheric disturbances. This noise is unpredictable in nature.

Man-made noise is produced by electrical appliances, such as motors, automobiles and aircraft ignition etc.; the electrical discharges associated with these appliances produce the noise. This noise is effective in the frequency range of 1 MHz - 500 MHz.

Internal noise

The internal noise is generated by the components present within the communication system.

Shot noise

Shot noise arises due to the random fluctuation of charge carriers crossing the potential barriers. In electron tubes, shot noise is generated due to random emission from cathodes. In semiconductor devices, it arises due to the random diffusion of minority carriers (or) the random generation and recombination of electron-hole pairs.

Generally, the shot noise may be neglected compared to the direct noise current, because it is small compared to the DC value.

The mean-square shot noise current component is proportional to the DC current flowing, and for most of the devices it is given by

    In² = 2 q I0 B    ...(1)

Where
    q = electron charge = 1.6 × 10⁻¹⁹ coulombs
    I0 = DC current in amperes
    B = bandwidth in Hz
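As a quick numerical check of equation (1), the sketch below (an illustrative example, not from the text) evaluates the rms shot-noise current for an assumed diode current and bandwidth:

```python
import math

Q = 1.6e-19  # electron charge in coulombs

def shot_noise_rms(i_dc, bandwidth):
    """RMS shot-noise current In = sqrt(2 q I0 B), equation (1)."""
    return math.sqrt(2 * Q * i_dc * bandwidth)

# Example: 1 mA DC current observed over a 1 MHz bandwidth
i_n = shot_noise_rms(1e-3, 1e6)
print(f"In = {i_n:.3e} A")  # tens of nanoamperes, tiny compared with the 1 mA DC value
```

This confirms the remark above: the shot-noise current is several orders of magnitude smaller than the DC value.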

Thermal noise

Thermal noise is produced by the random motion of free electrons in a conducting medium such as a resistor, and this motion in turn is randomized through collisions caused by imperfections in the structure of the conductor. The net effect of the motion of all electrons constitutes an electric current flowing through the resistor, causing the noise. It is also called Johnson noise.

The power spectral density of thermal noise is given by

    Si(ω) = 2KTG / [1 + (ω/α)²]    ...(2)

Where,
    K - Boltzmann constant
    T - absolute temperature
    G - conductance of the resistor
    α - constant
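In practical receiver calculations, the open-circuit thermal noise voltage of a resistor over a bandwidth B is usually computed as Vn = sqrt(4KTBR). The sketch below (illustrative, with assumed component values) applies that formula:

```python
import math

K = 1.38e-23  # Boltzmann constant in J/K

def thermal_noise_voltage(resistance, bandwidth, temperature=290.0):
    """RMS thermal (Johnson) noise voltage Vn = sqrt(4 K T B R)."""
    return math.sqrt(4 * K * temperature * bandwidth * resistance)

# Example: a 50-ohm resistor over a 10 kHz bandwidth at room temperature (290 K)
vn = thermal_noise_voltage(50, 10e3)
print(f"Vn = {vn * 1e9:.1f} nV")  # on the order of 0.1 microvolt
```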

Partition noise

Partition noise is generated when the current gets divided between two (or) more electrodes, and it results from the random fluctuations in this division. Hence a diode is less noisy than a transistor, whose third electrode draws current. The spectrum of the partition noise is flat.

Flicker noise

Flicker noise appears at the cathode surface in electron tubes and around the surface of the junctions of semiconductor devices. In semiconductors, flicker noise arises from fluctuations in the carrier density, which in turn give rise to fluctuations in the conductivity of the material. The power spectral density of the flicker noise is inversely proportional to frequency, (i.e) S(ω) ∝ 1/f.

Hence, this noise becomes significant at very low frequencies (below a few kHz).

Noise calculation

i. Signal-to-Noise Ratio (SNR)

The signal-to-noise ratio may be defined either at the input side (or) at the output side of the circuit (or) device.

    SNRi = Signal power at the input / Noise power at the input

    SNR0 = Output signal power / Output noise power

ii. Noise Figure

Noise figure is defined as the ratio of the signal-to-noise power ratio supplied to the input terminals of a receiver (SNRi) to the signal-to-noise power ratio supplied to the output terminal (or) load resistor (SNR0).

Therefore,

    Noise figure (F) = (SNR)i / (SNR)0

[Figure 1.1(a): A signal generator (antenna) with source voltage Vs and internal resistance Ra driving an amplifier (receiver) of voltage gain A and input impedance Rt, developing an output voltage V0 across the load RL]

To calculate the noise figure, consider the network shown in figure 1.1(a). The network has the following:

1. Input impedance Rt
2. Output impedance RL

The internal resistance Ra may or may not be equal to Rt. The figure 1.1(a) shows the block diagram of such a four-terminal network.

Step 1: Determination of input signal power Psi

From the figure 1.1(a), we can obtain the signal input voltage Vsi and power Psi as

    Vsi = Vs Rt / (Ra + Rt)    ...(1)

and

    Psi = Vsi² / Rt    ...(2)

Substituting equation (1) in (2),

    Psi = Vs² Rt² / [(Ra + Rt)² Rt]

Therefore,

    Psi = Vs² Rt / (Ra + Rt)²    ...(3)

Step 2: Determination of input noise power Pni

Similarly, the noise input voltage Vni and power Pni can be calculated. We know that Vni = √(4KTBR), where R is the parallel combination of Ra and Rt:

    Vni = √[4KTB Ra Rt / (Ra + Rt)]    ...(4)

and

    Pni = Vni² / Rt

    Pni = 4KTB Ra Rt / [(Ra + Rt) Rt]

    Pni = 4KTB Ra / (Ra + Rt)    ...(5)

Step 3: Determination of input signal-to-noise ratio SNRi

    SNRi = Psi / Pni

    SNRi = [Vs² Rt / (Ra + Rt)²] / [4KTB Ra / (Ra + Rt)]

    SNRi = Vs² Rt / [4KTB Ra (Ra + Rt)]    ...(6)

Step 4: Determination of signal output power Pso

    Pso = Vso² / RL = (A Vsi)² / RL

    Pso = A² Vsi² / RL    ...(7)

Substituting equation (1) in (7), we get

    Pso = A² [Vs Rt / (Ra + Rt)]² / RL

    Pso = A² Vs² Rt² / [RL (Ra + Rt)²]    ...(8)

Step 5: Determination of noise output power Pno

The noise output power depends on the internal noise of the network itself; for this instance, it can simply be written as Pno.    ...(9)

Step 6: Determination of output signal-to-noise ratio SNR0

    SNR0 = Pso / Pno

Using equations (8) and (9), we get

    SNR0 = A² Vs² Rt² / [(Ra + Rt)² RL Pno]    ...(10)

Step 7: Calculation of noise figure (F)

    F = SNRi / SNR0

Using equations (6) and (10), we get

    F = {Vs² Rt / [4KTB Ra (Ra + Rt)]} / {A² Vs² Rt² / [(Ra + Rt)² RL Pno]}

    F = (Ra + Rt) RL Pno / (4KTB Ra Rt A²)    ...(11)
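Since F is simply the ratio of input to output SNR, it can be checked numerically. The sketch below (an illustrative example with assumed SNR values, not taken from the text) computes the noise figure as a ratio and in decibels:

```python
import math

def noise_figure(snr_in, snr_out):
    """Noise figure F = SNRi / SNR0, both expressed as power ratios."""
    return snr_in / snr_out

def noise_figure_db(snr_in, snr_out):
    """Noise figure expressed in decibels: NF = 10 log10(F)."""
    return 10 * math.log10(noise_figure(snr_in, snr_out))

# Example: an amplifier degrades the SNR from 1000 (30 dB) to 250 (24 dB)
f = noise_figure(1000, 250)
print(f"F = {f}, NF = {noise_figure_db(1000, 250):.1f} dB")  # F = 4.0, NF = 6.0 dB
```

A noiseless network would give F = 1 (0 dB); any real network gives F > 1.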

1.3 INTRODUCTION TO COMMUNICATION SYSTEM

The communication systems are classified into different categories based on the following parameters:

1. Unidirectional/bidirectional communication
2. Nature of the information signal
3. Technique of transmission (with or without modulation)

[Figure 1.2 Classification of communication system: Communication system classified by (i) unidirectional/bidirectional transmission (simplex/duplex), (ii) nature of the information signal, and (iii) technique of transmission (with or without modulation)]

Based on whether the transmission is unidirectional (or) otherwise, the communication systems are classified as,

1. Simplex system
2. Duplex system

[Figure: Communication System, divided into Unidirectional (Simplex) and Bidirectional (Duplex)]


Simplex system

In a simplex system, the information can flow in only one direction; the receiving terminals cannot transmit back. Examples: radio and TV broadcasting, transmission from a satellite to earth.

Half duplex system

In a half duplex system, each terminal can transmit and receive, but not simultaneously. Example: a trans-receiver (or) walky-talky set.

Full duplex system

A full duplex system allows the communication to take place in both the directions simultaneously. Example: the telephone system.

[Figure 1.3 Basic block diagram of a full duplex system: Transmitter + Receiver 1 and Transmitter + Receiver 2 connected by a communication link carrying a bidirectional flow of information]

Based on the nature of the information signal, the communication systems are classified into two categories, namely:

Analog communication

In analog communication, the information is transmitted in the form of an analog (or) continuous signal through the communication channel (or) media.

Digital communication

In digital communication, the transmitted signal is in the form of digital pulses of constant amplitude, frequency and phase.

Based on the technique of transmission, the communication systems are classified into two categories, namely baseband transmission and transmission using modulation.

Base-band transmission

In baseband transmission, the baseband signals (original information signals) are directly transmitted.

Example: in local telephone calls, the sound signal converted into an electrical signal is placed directly on the telephone lines for transmission. Another example is transmission over co-axial cables in computer networks (eg. RS 232 cables).

Baseband transmission cannot be used to send the original information signal as it is over a long distance; (eg) it cannot be used for radio transmission where the medium is free space. This is because the voice signal (in the electrical form) cannot travel a long distance in air.

Transmission using modulation

For long-distance communication of baseband signals, a technique called "Modulation" is used.

Why modulation

The limitations of baseband transmission can be overcome using modulation.

If different baseband (message) signals are transmitted through a common medium, that is, open (free) space, this causes interference among the various signals, and no useful message is received by the receiver. Such interference is avoided by translating the different message signals to different radio frequency spectra. This is done at the transmitter by a process known as "Modulation".

1.4 MODULATION

Modulation is the process of varying some characteristic of a high-frequency carrier signal in accordance with the instantaneous value of the modulating (message) signal. Two signals are involved in the modulation process: the modulating signal and the carrier signal.

Carrier signal

The carrier is a high-frequency signal whose characteristics (such as amplitude, frequency and phase) are varied in accordance with the instantaneous value of the modulating signal.

Modulated signal

The modulated signal is the combination of the message signal and the carrier signal together. After modulation, the message signal is translated in frequency, (i.e) shifted from low frequency to high frequency.

1.5 NEED FOR MODULATION

The advantages of modulation are:

Since the modulated signal occupies a higher frequency range, it becomes relatively easier to design amplifier circuits as well as antenna systems at these frequencies.

The bandwidth of the modulated signal may be larger or smaller than that of the original signal. The signal-to-noise ratio can thus be improved by proper control of the bandwidth at the modulating stage.

Reduction in antenna height

For the transmission of radio signals, the signals are transmitted with the help of antennas. The minimum height of the antenna needed for an effective radiation would be of the order of half the wavelength, given as,

    Antenna height = λ/2 = c/2f    ...(1)

In broadcast systems, the maximum audio frequency transmitted from a radio station is 5 kHz. Therefore, the antenna height required is,

    λ/2 = c/2f = (3 × 10⁸)/(2 × 5 × 10³) = 30 km

An antenna of this height is practically impossible to install.

Now, if the audio signal is modulated onto a carrier of, say, 10 MHz, the antenna height is given by,

    Antenna height = λ/2 = c/2f = (3 × 10⁸)/(2 × 10 × 10⁶) = 15 metre
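Both antenna-height figures above follow directly from λ/2 = c/2f; a small check script (not part of the original text) reproduces them:

```python
C = 3e8  # velocity of light in m/s

def antenna_height(frequency_hz):
    """Minimum antenna height = half a wavelength = c / (2 f)."""
    return C / (2 * frequency_hz)

print(antenna_height(5e3))   # 5 kHz baseband -> 30000 m (30 km): impractical
print(antenna_height(10e6))  # 10 MHz carrier -> 15 m: easily installed
```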

Avoids mixing of signals

If each modulating signal (message signal) is modulated with a different carrier, the modulated signals occupy different slots in the frequency domain (different channels). Thus modulation avoids mixing of signals.

1.5.5 Increases the range of communication

The modulation process increases the frequency

of the signal to be transmitted. Hence, increases the range of

communication.

1.5.6 Multiplexing

If different message signals are transmitted without modulation through a single channel, they may cause interference with one another, (i.e) overlap with one another.

To overcome this interference means, we need n-number of

channels for n-message signals separately.

But different message signals can be transmitted over a same

channel (single channel) without interference using the techniques

“Multiplexing”.

Simultaneous transmission of multiple message (more than one

message) over a channel is known as “multiplexing”.

This reduces the cost of installation and maintenance of more channels.

This improves quality of reception.

1.6 CLASSIFICATIONS OF MODULATION

Depending upon the nature of the message and carrier signals, communication is classified as analog or digital.

Analog communication:
    Message - continuous signal
    Carrier - continuous signal

Digital communication:
    Message - discrete (digital) signal
    Carrier - continuous signal (analog)

[Figure 1.4 Classification of modulation: Modulation divided into continuous-wave modulation, comprising amplitude modulation (AM) and angle modulation (PM, FM), and pulse modulation, comprising PAM, PWM, PPM, PCM and DM]

Where,
    PAM – Pulse amplitude modulation
    PWM – Pulse width modulation
    PPM – Pulse position modulation
    PCM – Pulse code modulation
    DM – Delta modulation.

Linear modulation

The modulation system which obeys the superposition theorem of spectra is known as a linear modulation system.

Non-linear modulation

The modulation system which does not obey the superposition theorem of spectra is known as a non-linear modulation system.

1.7 SOME IMPORTANT DEFINITIONS RELATED TO

COMMUNICATION

1.7.1 Frequency(f)

per second. It is expressed in hertz (Hz).

as a sine wave of voltage (or) current, occurs in a given period of time.

Amp

Time

1 Cycle

Figure 1.5 One cycle

similar cycles of a periodic wave.

1.7.2 Wavelength

Wavelength (λ) is the distance occupied by one cycle of the wave. It is equal to the velocity of propagation divided by the frequency, λ = c/f.

1.7.3 Bandwidth

Bandwidth is the range of frequencies over which the information signal is transmitted. It is the difference between the upper and the lower frequency limits of the signal.

    Bandwidth (BW) = f2 - f1

Where,
    f2 – upper frequency
    f1 – lower frequency

[Figure 1.6: A frequency band of width BW between f1 and f2]

The total usable frequency spectrum is divided into narrower frequency bands, which are given descriptive names, and several of these bands are further broken down into various types of services. The International Telecommunication Union (ITU) is the international agency in control of allocating frequencies and services within the overall frequency spectrum.


Frequency designation             Frequency range    Wavelength range
Extremely High Frequency (EHF)    30 - 300 GHz       1 mm - 1 cm
Super High Frequency (SHF)        3 - 30 GHz         1 - 10 cm
Ultra High Frequency (UHF)        300 MHz - 3 GHz    10 cm - 1 m
Very High Frequency (VHF)         30 - 300 MHz       1 - 10 m
High Frequency (HF)               3 - 30 MHz         10 - 100 m
Medium Frequency (MF)             300 kHz - 3 MHz    100 m - 1 km

Solved Problem

Determine the wavelengths for the following frequencies: (1) 850 MHz (2) 1.9 GHz (3) 28 GHz.

Solution

Given data:
(1) f = 850 MHz, (2) f = 1.9 GHz and (3) f = 28 GHz.

    Wavelength λ = Velocity of light / Frequency = c/f

(i) λ = (3 × 10⁸)/(850 × 10⁶) = 0.35 m

(ii) λ = (3 × 10⁸)/(1.9 × 10⁹) = 0.158 m

(iii) λ = (3 × 10⁸)/(28 × 10⁹) = 0.0107 m
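The three results above can be verified with λ = c/f (an illustrative check, not part of the original text):

```python
C = 3e8  # velocity of light in m/s

def wavelength(frequency_hz):
    """Wavelength = velocity of light / frequency."""
    return C / frequency_hz

for f in (850e6, 1.9e9, 28e9):
    print(f"{f / 1e6:g} MHz -> {wavelength(f):.4f} m")
```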

1.7.5 Frequency spectrum

The frequency spectrum of a signal is its representation in the frequency domain. It can be obtained by using either the Fourier series (or) the Fourier transform. The frequency spectrum indicates the amplitude and phase of the various frequency components present in the given signal.

Demodulation

The process of recovering the message signal from the modulated signal is called "demodulation". It is the reverse process of modulation, by which the message signal is recovered from the modulated signal at the receiver.

1.8 THEORY OF AMPLITUDE MODULATION

Definition

Amplitude modulation is defined as the process by which the amplitude of the carrier signal is varied in accordance with the instantaneous value (amplitude) of the modulating signal, while the frequency and phase remain constant.

Let us consider,

    Modulating signal  vm(t) = Vm sin ωmt    ...(1)

    Carrier signal     vc(t) = Vc sin ωct    ...(2)

Where,
    Vm - peak amplitude of the modulating signal
    Vc - peak amplitude of the carrier signal

In amplitude modulation, the amplitude of the carrier signal is changed after modulation with respect to the message signal,

    VAM(t) = (Vc + Vm sin ωmt) sin ωct    ...(3)

           = Vc [1 + (Vm/Vc) sin ωmt] sin ωct

Where, Vm/Vc = ma = modulation index.

Modulation index is defined as the ratio of the amplitude of the message signal to the amplitude of the carrier signal.

    Modulation index ma = Vm/Vc

    VAM(t) = Vc [1 + ma sin ωmt] sin ωct    ...(4)

Equation (4) represents an AM-signal.
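Equation (4) can be synthesised sample by sample. The sketch below (with assumed, illustrative parameter values) generates an AM waveform and confirms that its peak stays within the expected envelope limit Vc(1 + ma):

```python
import math

def am_sample(t, vc, ma, fm, fc):
    """VAM(t) = Vc [1 + ma sin(2 pi fm t)] sin(2 pi fc t), as in equation (4)."""
    return vc * (1 + ma * math.sin(2 * math.pi * fm * t)) * math.sin(2 * math.pi * fc * t)

# Assumed values: a 500 Hz tone modulating a 10 kHz carrier at ma = 0.5
vc, ma, fm, fc = 10.0, 0.5, 500.0, 10e3
samples = [am_sample(n / 1e6, vc, ma, fm, fc) for n in range(4000)]  # 1 MHz sampling, 4 ms span
peak = max(abs(s) for s in samples)
print(f"peak = {peak:.2f} V")  # close to Vc (1 + ma) = 15 V
```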

We know that,

    sin A · sin B = [cos(A - B) - cos(A + B)]/2

Expanding equation (4) with this identity,

    VAM(t) = Vc sin ωct + (maVc/2) cos (ωc - ωm)t - (maVc/2) cos (ωc + ωm)t    ...(5)

In equation (5), the first term is the carrier, the second term is the lower sideband and the third term is the upper sideband of the AM wave (or) AM-signal.

In terms of frequencies,

    VAM(t) = Vc sin 2πfct + (maVc/2) cos 2π(fc - fm)t - (maVc/2) cos 2π(fc + fm)t    ...(6)
             [carrier]      [LSB]                       [USB]

[Figure 1.7: Voltage spectrum of the AM signal, showing the carrier Vc at fc and the sidebands of amplitude maVc/2 at fLSB and fUSB]


The (-) sign associated with the USB represents a phase shift of 180°. The figure 1.8 shows the frequency domain representation.

[Figure 1.8 Frequency spectrum of AM: carrier Vc at fc, sidebands of amplitude maVc/2 at (fc - fm) and (fc + fm), total bandwidth BW = 2fm]

The frequency spectrum of the AM-signal thus contains three components:

• The first term represents the unmodulated carrier signal with the frequency of fc.

• The second term represents the lower sideband signal with the frequency of (fc - fm).

• The third term represents the upper sideband signal with the frequency of (fc + fm).

1.8.4 Bandwidth of AM

The bandwidth is the difference between the highest frequency component and the lowest frequency component in the frequency spectrum.

    BW = (fc + fm) - (fc - fm)

       = fc + fm - fc + fm

    BW = 2fm

Thus, the bandwidth of the AM wave is twice the maximum frequency of the modulating signal.

Time domain representation of AM

The time domain description of the AM signal is probably the most commonly used. The figure 1.9 shows the graphical representation of the AM signal. The modulated wave contains the carrier and sideband frequencies and is used to transfer the information through the system. The positive half cycle of the modulating signal causes the amplitude of the carrier to increase, and the negative half cycle causes it to decrease, so that the envelope follows the modulating signal.

[Figure 1.9 AM envelope: (a) modulating signal Vm(t) with peak ±Vm, (b) carrier signal with peak ±Vc, (c) modulated signal whose envelope varies between Vc - Vm and Vc + Vm]

• The shape of the envelope is identical to the shape of the modulating

signal.

Phasor representation of AM

The amplitude modulated wave can also be described with the help of a phasor diagram, as shown in figure 1.10.

[Figure 1.10 Phasor representation of the AM wave: a carrier phasor of amplitude Vc with two sideband phasors of amplitude maVc/2, the USB rotating at +ωm and the LSB at -ωm relative to the carrier]

• The USB phasor rotates at the frequency (ωc + ωm), faster than the carrier frequency ωc by ωm.

• The LSB phasor rotates at the frequency (ωc - ωm), slower than the carrier frequency ωc by ωm.

• The resultant of the two sideband phasors is their vector sum; adding this to the carrier phasor gives the resulting phasor VAM(t).

• The phasors for the carrier and the upper and lower side frequencies combine, sometimes in phase (adding) and sometimes out of phase (subtracting).

Modulation index

The modulation index (or) depth of modulation is defined as the ratio of the maximum amplitude of the modulating signal to the maximum amplitude of the carrier signal.

    Modulation index ma = Vm/Vc

Modulation index is used to describe the amount of amplitude change (modulation) present in an AM waveform.

Percentage Modulation

When the modulation index is expressed in percentage, it is called percent modulation. Percentage modulation gives the percentage change in the amplitude of the output wave when the carrier is acted on by a modulating signal.

    Percentage modulation = (Peak amplitude of modulating signal / Peak amplitude of carrier signal) × 100

                          = (Vm/Vc) × 100

                          = ma × 100

[Figure 1.11 AM envelope for calculation of modulation index: the envelope maximum is Vmax = Vc + Vm and the envelope minimum is Vmin = Vc - Vm]

The graphical representation of the AM wave is also called the time domain representation of the AM signal.

From the AM signal,

    Vmax = Vc + Vm    ...(1)

    Vmin = Vc - Vm    ...(2)

Subtracting equation (2) from (1),

    Vmax - Vmin = 2Vm    ...(3)

    Vm = (Vmax - Vmin)/2    ...(4)

From equations (1) and (2), Vc can be calculated as,

    Vc = Vmax - Vm    ...(5)

    Vc = Vmin + Vm    ...(6)

Adding equations (5) and (6),

    2Vc = Vmax + Vmin

    ∴ Vc = (Vmax + Vmin)/2    ...(7)

The modulation index for AM is given by,

    ma = Vm/Vc    ...(8)

Substituting the Vm and Vc values in equation (8),

    ma = [(Vmax - Vmin)/2] / [(Vmax + Vmin)/2]

    ∴ ma = (Vmax - Vmin)/(Vmax + Vmin)    ...(9)

The percentage modulation is expressed as,

    % ma = [(Vmax - Vmin)/(Vmax + Vmin)] × 100    ...(10)
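Equation (9) lets an oscilloscope measurement of the envelope extremes Vmax and Vmin give ma directly. A small sketch (illustrative values, not from the text):

```python
def modulation_index_from_envelope(v_max, v_min):
    """ma = (Vmax - Vmin) / (Vmax + Vmin), from equation (9)."""
    return (v_max - v_min) / (v_max + v_min)

# Example: the envelope swings between 150 V and 50 V on the screen,
# so Vm = 50 V and Vc = 100 V
ma = modulation_index_from_envelope(150.0, 50.0)
print(ma)  # 0.5
```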

The peak amplitude of the AM wave is the carrier amplitude plus the sum of the voltages from the upper and lower side frequencies. Each side frequency has a peak voltage of

    VUSB = VLSB = Vm/2    ...(11)

    ∴ VUSB = VLSB = [(Vmax - Vmin)/2]/2

    VUSB = VLSB = (Vmax - Vmin)/4    ...(12)

Where, VUSB and VLSB are the peak voltages of the upper and lower side frequencies respectively.

When several modulating frequencies simultaneously amplitude modulate a single carrier, the overall modulation index is given by,

    mt = √(m1² + m2² + m3² + ...)

Where m1, m2, m3, ... are the modulation indices due to the individual frequency components.

Degrees of modulation

The degree of modulation depends upon the amplitude of the modulating signal relative to the carrier amplitude.

Under modulation (ma < 1)

When Vm < Vc, the modulation index is less than 1 and the envelope of the modulated signal does not reach the zero axis. Hence the message signal is fully preserved in the envelope of the AM wave.

• The message signal can be detected (or) recovered from the modulated signal without any distortion by an envelope detector for under modulation.

[Figure: envelope of an under-modulated AM wave]

Critical modulation (ma = 1)

When Vm = Vc, the modulation index equals 1 and the envelope of the modulated signal just reaches the zero amplitude axis. The message signal remains preserved.

[Figure: envelope of a critically modulated AM wave]

The message signal can still be recovered from the modulated signal without any distortion by an envelope detector.

Over modulation (ma > 1)

When Vm > Vc, the modulation index is greater than 1, and the portions of the envelope near the zero axis of the message signal are cancelled (or) clipped out.

[Figure: envelope of an over-modulated AM wave showing clipping]

Distortion due to Over Modulation

• The envelope and the message signal are no longer the same. Due to this, the envelope detector provides a distorted message signal.

The modulating signal is fully preserved, without any distortion, in the modulated signal if and only if Vm < Vc, (i.e) ma < 1.

Power distribution of AM

The AM wave contains three components, namely the carrier, the lower side band and the upper side band. The total power of the AM wave is the sum of the powers of all the three components.

    Pt = Pcarrier + PLSB + PUSB    ...(1)

Carrier power

The power dissipated in the load by the carrier is equal to the rms carrier voltage squared divided by the load resistance.

    Pc = V²carrier(rms) / R    ...(2)

    Pc = (Vc/√2)² / R

    Pc = Vc²/2R    ...(3)

Where, Vc is the peak carrier voltage and R is the load resistance.

The power in each sideband is,

    PUSB = PLSB = (VSB/√2)² / R    ...(4)

The peak voltage of the upper and lower side frequencies is,

    VSB = maVc/2    ...(5)

Substituting equation (5) in (4),

    PUSB = PLSB = (maVc/2)² / 2R

    PUSB = PLSB = ma²Vc²/8R    ...(6)

Where, ma is the modulation index.

Rearranging equation (6),

    PUSB = PLSB = (ma²/4)(Vc²/2R)    ...(7)

Since Pc = Vc²/2R,

    PUSB = PLSB = (ma²/4) Pc    ...(8)

The total power,

    Pt = Pc + PLSB + PUSB

       = Vc²/2R + (ma²/4) Pc + (ma²/4) Pc

    Pt = Pc + (ma²/4) Pc + (ma²/4) Pc

       = Pc [1 + ma²/4 + ma²/4]

    Pt = Pc [1 + ma²/2]    ...(9)

    Pt/Pc = 1 + ma²/2    ...(9a)

Equation (9) gives the total power required for transmission of the AM-signal (or) DSBFC. The figure 1.15 shows the power spectrum for amplitude modulation.

[Figure 1.15 Power spectrum of AM: carrier power Pc = Vc²/2R at fc, and sideband powers PLSB = PUSB = ma²Pc/4 at fLSB and fUSB]

For 100% modulation (ma = 1), the total power becomes

    Pt = Pc [1 + 1/2]

    Pt = 1.5 Pc    ...(10)
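The power relations in equations (8) and (9) are easy to tabulate in code. The sketch below (an illustrative check, not from the text) returns the power in each sideband and the total power:

```python
def am_powers(pc, ma):
    """Return (power per sideband, total power) for AM-DSBFC.

    PUSB = PLSB = ma^2 Pc / 4   (equation 8)
    Pt   = Pc (1 + ma^2 / 2)    (equation 9)
    """
    p_sb = (ma ** 2 / 4) * pc
    p_total = pc * (1 + ma ** 2 / 2)
    return p_sb, p_total

# 100% modulation: each sideband carries Pc/4, and the total power is 1.5 Pc
p_sb, p_t = am_powers(100.0, 1.0)
print(p_sb, p_t)  # 25.0 150.0
```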

The modulation index can be obtained from the power ratio as follows:

    Pt/Pc = 1 + ma²/2

    Pt/Pc - 1 = ma²/2

    2 [Pt/Pc - 1] = ma²

    ma = √(2 [Pt/Pc - 1])    ...(11)

Total transmission current

We know that,

    Pt = It² R and Pc = Ic² R    ...(1)

Where, It is the total (antenna) current of the AM wave and Ic is the unmodulated carrier current.

    Pt = Pc [1 + ma²/2]    ...(2)

Substituting the Pt and Pc values in equation (2),

    It² R = Ic² R [1 + ma²/2]

    It² = Ic² [1 + ma²/2]    ...(3)

    (i.e) It = Ic √(1 + ma²/2)

Equation (3) gives the total transmission current for AM-DSBFC.

The modulation index in terms of the currents:

We know that,

    It² = Ic² [1 + ma²/2]    ...(1)

    It²/Ic² = 1 + ma²/2

    It²/Ic² - 1 = ma²/2

    2 [It²/Ic² - 1] = ma²

    ma = √(2 [It²/Ic² - 1])    ...(2)
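Equation (2) is often used in the laboratory to estimate the modulation index from antenna current readings. A small sketch (with assumed, illustrative current values):

```python
import math

def modulation_index_from_currents(i_total, i_carrier):
    """ma = sqrt(2 (It^2 / Ic^2 - 1)), from equation (2)."""
    return math.sqrt(2 * ((i_total / i_carrier) ** 2 - 1))

# Example: an unmodulated carrier current of 8 A rises to 8.93 A under modulation
ma = modulation_index_from_currents(8.93, 8.0)
print(f"ma = {ma:.2f}")  # about 0.70
```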

Transmission efficiency

The transmission efficiency (η) of an AM wave is defined as the ratio of the transmitted power which contains the information (i.e. the sideband power) to the total transmitted power.

    % η = (Power in sidebands / Total power) × 100

        = [(PLSB + PUSB) / Pt] × 100

        = [(ma²Pc/4 + ma²Pc/4) / (Pc [1 + ma²/2])] × 100

        = [(ma²/2) / (1 + ma²/2)] × 100

        = [(ma²/2) / ((2 + ma²)/2)] × 100

    % η = [ma² / (2 + ma²)] × 100

For critical modulation, ma = 1; then the transmission efficiency becomes,

    % η = [1² / (2 + 1²)] × 100 = (1/3) × 100

    % η = 33.3 %

The maximum transmission efficiency of AM-DSBFC is 33.3%. This means that only one-third of the total power is carried by the sidebands, and the remaining two-thirds is wasted power.
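The efficiency formula η = ma²/(2 + ma²) can be checked for a few depths of modulation (an illustrative sketch, not from the text):

```python
def am_efficiency(ma):
    """Transmission efficiency of AM-DSBFC as a percentage: ma^2 / (2 + ma^2) * 100."""
    return ma ** 2 / (2 + ma ** 2) * 100

print(am_efficiency(1.0))  # ~33.3% : the best DSBFC can achieve
print(am_efficiency(0.5))  # ~11.1% : lighter modulation wastes even more power in the carrier
```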

Power distribution of DSBSC

In DSBSC, the total transmitted power depends only on the sideband power, because the carrier is suppressed.

    Pt = PLSB + PUSB    (∴ Carrier is suppressed)

Sideband powers

The upper and lower sidebands have equal power, which is equal to,

    PUSB = PLSB = (VSB/√2)² / R    ...(1)

The peak voltage of the upper and lower sideband frequencies is,

    VSB = maVc/2

∴ Equation (1) becomes

    PUSB = PLSB = (maVc/2)² / 2R

    PUSB = PLSB = ma²Vc²/8R    ...(2)

The total power,

    Pt = PLSB + PUSB

       = ma²Vc²/8R + ma²Vc²/8R

       = 2ma²Vc²/8R = ma²Vc²/4R

    Pt = (ma²/2)(Vc²/2R), and since Pc = Vc²/2R,

    Pt = (ma²/2) Pc    ...(3)

For critical modulation, ma = 1; substituting in equation (3),

    Pt = (1²/2) Pc

    Pt = 0.5 Pc    ...(4)

%η = (Power in sidebands / Pt) × 100

= [(Pc·ma²/4 + Pc·ma²/4) / (Pc·ma²/2)] × 100

= [(Pc·ma²/2) / (Pc·ma²/2)] × 100

%η = 1 × 100 = 100%

This means the total transmitted power is fully utilized by AM-DSBSC. No power is wasted.
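The DSBSC relations Pt = ma²Pc/2 and η = 100% can be checked with a short Python sketch (function names are my own):

```python
def dsbsc_total_power(Pc, ma):
    """Total DSBSC power: only the two sidebands remain, Pt = ma^2 * Pc / 2."""
    return ma**2 * Pc / 2

def dsbsc_efficiency(Pc, ma):
    """All transmitted power sits in the sidebands, so efficiency is always 1."""
    sidebands = 2 * (ma**2 * Pc / 4)          # PLSB + PUSB
    return sidebands / dsbsc_total_power(Pc, ma)

print(dsbsc_total_power(100, 1.0))   # 50.0
print(dsbsc_efficiency(100, 0.7))    # 1.0
```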

S.No | Parameter | DSBFC | DSBSC | SSB
1 | Carrier suppression | Not applicable | Fully suppressed | Fully suppressed
2 | Sideband suppression | Not applicable | Not suppressed | One sideband suppressed
3 | Bandwidth | 2fm | 2fm | fm
4 | Transmission efficiency | Minimum | Moderate | Maximum
5 | Number of modulating inputs | 1 | 1 | 1
6 | Complexity | Simple | Simple | Complex
7 | Power requirement to cover a given area | High | Medium | Very small
8 | Application | Radio broadcasting | Radio communication | Point-to-point communication

Advantages

• AM is used for long distance communications.

• AM is relatively inexpensive.

Disadvantages

• AM is more likely to be affected by noise than FM.


• 66.67% of transmitted power is wasted.

• Large bandwidth is required.

Applications

• Sound and audio broadcasting.

• Point to point link communication

• Aircraft communications in the VHF frequency range.

Problems

1. Determine the bandwidth of an AM wave whose maximum modulating signal frequency is 4 kHz.

Solution

BW = 2fm

BW = 2(4 kHz)

BW = 8 kHz

2. An AM transmitter radiates a carrier power of 10 kW and the modulation index is 0.5. Calculate the total RF power delivered.

Solution

Given data

Pc = 10 kW

ma = 0.5

The total RF power delivered, Pt = Pc(1 + ma²/2)

= 10 kW × (1 + 0.5²/2)

Pt = 11.25 kW


3. The equation of an AM wave is v(t) = 500[1 + 0.4 sin 3140t] sin(6.28 × 10⁷ t). Calculate (i) the modulating frequency (ii) the carrier frequency (iii) the modulation index (iv) the carrier power if the load is 600 Ω (v) the total power.

Solution

Comparing with the standard AM equation, we get,

Vc = 500 volts

ma = 0.4

ωm = 3140 rad/s

ωc = 6.28 × 10⁷ rad/s

(i) ωm = 2πfm = 3140

fm = 3140/2π = 499.7 Hz

fm ≈ 500 Hz

(ii) ωc = 2πfc = 6.28 × 10⁷

fc = 6.28 × 10⁷/2π

fc = 0.999 × 10⁷ Hz

fc ≈ 10 MHz

(iii) Modulation index ma = 0.4


(iv) Carrier power,

Pc = Vc²/2R = 500²/(2 × 600)

Pc = 208.3 watts

(v) Total power,

Pt = Pc(1 + ma²/2)

= 208.3 × (1 + 0.4²/2)

= 224.9

Pt ≈ 225 watts
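The whole of problem 3 can be reproduced with a few lines of Python, reading the parameters straight out of the AM equation:

```python
import math

# v(t) = Vc[1 + ma*sin(wm*t)] * sin(wc*t), with Vc = 500 V, ma = 0.4,
# wm = 3140 rad/s, wc = 6.28e7 rad/s and load R = 600 ohm
Vc, ma, wm, wc, R = 500, 0.4, 3140, 6.28e7, 600

fm = wm / (2 * math.pi)          # modulating frequency (Hz)
fc = wc / (2 * math.pi)          # carrier frequency (Hz)
Pc = Vc**2 / (2 * R)             # carrier power (W)
Pt = Pc * (1 + ma**2 / 2)        # total power (W)

print(round(fm), round(fc / 1e6, 1), round(Pc, 1), round(Pt))
```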

4. An AM modulator operates with a carrier frequency of 100 kHz and a maximum modulating signal frequency of 5 kHz. Determine the upper and lower side frequencies and the bandwidth.

Solution

Given data:

fc = 100 kHz

fm = 5 kHz

(1) fUSB = fc + fm = 105 kHz

(2) fLSB = fc − fm = 95 kHz

(3) Bandwidth BW = 2fm = 2(5 kHz) = 10 kHz

5. A 500 kHz carrier of peak amplitude 20 V is amplitude modulated by a 10 kHz modulating signal which causes a change in the output wave of ±7.5 V (so that Vm = 15 V). Determine (i) the upper and lower side frequencies (ii) the modulation index (iii) the peak amplitude of each side frequency (iv) the maximum and minimum amplitudes of the envelope.

Solution

Given data

fc = 500 kHz, fm = 10 kHz, Vc = 20 V, Vm = 15 V

(i) fUSB = fc + fm = 500 k + 10 k = 510 kHz

fLSB = fc − fm = 500 k − 10 k = 490 kHz

(ii) Modulation index ma = Vm/Vc = 15/20 = 0.75

(iii) Peak amplitude of each side frequency = maVc/2 = (0.75 × 20)/2 = ±7.5 V

(iv) Vmax = Vc + Vm = 20 + 15 = 35 V

Vmin = Vc − Vm = 20 − 15 = 5 V
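The envelope extremes also let us recover the modulation index; the relation ma = (Vmax − Vmin)/(Vmax + Vmin) is a standard identity not derived in the text. A short Python check using the values of problem 5:

```python
# Envelope of an AM wave with Vc = 20 V and Vm = 15 V (problem 5 above)
Vc, Vm = 20, 15
Vmax = Vc + Vm
Vmin = Vc - Vm

# Standard envelope relation (assumed here, not derived in the text):
ma = (Vmax - Vmin) / (Vmax + Vmin)
print(Vmax, Vmin, ma)   # 35 5 0.75
```

This is exactly how ma is measured from an oscilloscope display of the modulated wave.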

6. A 10 V carrier is simultaneously amplitude modulated by two sine waves with amplitudes 2 V and 3 V respectively. Find the total modulation index.

Solution

Given data

Vc = 10 V, Vm1 = 2 V, Vm2 = 3 V

Modulation index m1 = Vm1/Vc = 2/10 = 0.2

Modulation index m2 = Vm2/Vc = 3/10 = 0.3

Total modulation index ma = √(m1² + m2²)

= √(0.04 + 0.09) = √0.13

ma = 0.36
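As a check on the arithmetic, the multi-tone index formula ma = √(m1² + m2² + ...) can be evaluated directly (√0.13 ≈ 0.36):

```python
import math

def total_modulation_index(*indices):
    """Effective index for simultaneous tones: ma = sqrt(m1^2 + m2^2 + ...)."""
    return math.sqrt(sum(m * m for m in indices))

m1, m2 = 2 / 10, 3 / 10          # 2 V and 3 V tones on a 10 V carrier
print(round(total_modulation_index(m1, m2), 2))   # 0.36
```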

7. An AM signal has the equation v(t) = [15 + 4 sin(44 × 10³ t)] sin(46.5 × 10⁶ t) V. Find (i) the carrier frequency (ii) the modulating frequency (iii) the modulation index (iv) sketch the signal in the time domain, showing voltage and time scales (v) the peak voltage of the unmodulated carrier.

Solution

v(t) = 15[1 + (4/15) sin(44 × 10³ t)] sin(46.5 × 10⁶ t)

= 15[1 + 0.26 sin(44 × 10³ t)] sin(46.5 × 10⁶ t)    ...(1)

Comparing with the standard AM equation:

Vc = 15 V, ma = 0.26, ωm = 44 × 10³ rad/s, ωc = 46.5 × 10⁶ rad/s

(i) 2πfc = 46.5 × 10⁶

fc = 46.5 × 10⁶/2π = 7.4 × 10⁶ Hz

(ii) 2πfm = 44 × 10³

fm = 44 × 10³/2π = 7 × 10³ Hz

(iii) Modulation index ma = 0.26

(v) Peak voltage of the unmodulated carrier, Vc = 15 V

Peak amplitude of each side frequency = maVc/2 = 1.95 V

[Voltage spectrum: carrier of 15 V at fc, side frequencies of 1.95 V at fLSB and fUSB]

8. An AM DSBFC transmitter has an unmodulated carrier power Pc = 1000 W and modulation index ma = 0.2. Determine (i) the upper and lower sideband powers (ii) the total sideband power (iii) the total transmitted power (iv) the modulated carrier power recovered from the total transmitted power.

Solution

Given data

Pc = 1000 W, ma = 0.2

(i) PUSB = PLSB = ma²Pc/4 = (0.2²/4) × 1000

PUSB = PLSB = 10 watts

(ii) PTSB = PUSB + PLSB = ma²Pc/4 + ma²Pc/4 = ma²Pc/2

= (0.2²/2)(1000)

PTSB = 20 watts

(iii) The total power required for transmission of the AM wave,

Pt = Pc(1 + ma²/2)

= 1000 × (1 + 0.2²/2)

Pt = 1020 watts

(iv) Modulated carrier power, working back from the total power,

Pc = Pt/(1 + ma²/2) = 1020/(1 + 0.2²/2)

Pc = 1000 watts

9. A 200 W carrier is amplitude modulated to a depth of 75%. Calculate the total power in the modulated wave.

Solution

Given data

Pc = 200 W, ma = 0.75

Pt = Pc(1 + ma²/2)

= 200 × (1 + 0.75²/2)

Pt = 256.25 watts

10. An AM DSBFC modulator has an unmodulated carrier peak voltage of 18 V and a load resistance of 72 Ω. Determine (i) the carrier power (ii) the total sideband power (iii) the upper and lower sideband powers (iv) the total transmitted power.

Solution

Given

Vc = 18 V

RL = 72 Ω

(i) Carrier power Pc = Vc²/2R = 18²/(2 × 72) = 2.25 W

(ii) Total sideband power,

PTSB = ma²Pc/2 = 1² × (2.25)/2    (let ma = 1, critical modulation)

PTSB = 1.125 W

(iii) PUSB = PLSB = ma²Pc/4 = 1² × (2.25)/4

= 0.5625 watts

(iv) Total transmitted power,

Pt = Pc(1 + ma²/2) = 1.5 Pc = 1.5 × 2.25

Pt = 3.375 watts

11. Determine the carrier power of an AM transmitter whose total transmitted power is 20 kW, for a modulation index of 0.6.

Solution

Given

Pt = 20 kW

ma = 0.6

Carrier power Pc = Pt/(1 + ma²/2)

= 20 × 10³/(1 + 0.6²/2)

= 16.94 × 10³

Pc ≈ 17 kW

12. An AM modulator operates with a maximum modulating signal frequency of 5 kHz. Determine (i) the bandwidth (ii) sketch the output frequency spectrum.

Solution

(i) BW = 2fm = 2 × 5 kHz = 10 kHz

(ii) [Figure 1.17 Output Frequency Spectrum: carrier at fc with LSB and USB spaced 5 kHz on either side; BW = 10 kHz]


13. An AM DSBFC wave has an unmodulated carrier peak voltage Vc = 12 V, modulation coefficient m = 1 and load resistance RL = 12 Ω. Determine (i) Pc, PLSB, PUSB (ii) Pt (iii) draw the power spectrum.

Solution

(i) Carrier power Pc = Vc²/2R = 12²/(2 × 12)

Pc = 6 W

PUSB = PLSB = ma²Pc/4 = 1² × 6/4

= 1.5 watts

(ii) Total modulated power,

Pt = Pc(1 + ma²/2) = 6 × (1 + 1/2)

Pt = 9 watts

(iii) [Figure 1.18 Power Spectrum: carrier of 6 W at fc, sidebands of 1.5 W each at fLSB and fUSB]

14. A carrier signal 40 sin(2π × 10⁴ t) is amplitude modulated by a modulating signal 20 sin(2π × 10³ t). Find (i) the carrier and modulating frequencies (ii) the modulation index and percentage modulation (iii) the peak amplitude of each side frequency (iv) the bandwidth (v) the voltage spectrum.

Solution

Comparing with the standard forms: Vc = 40 V, Vm = 20 V, fc = 10 kHz, fm = 1 kHz.

(i) Carrier frequency fc = 10 kHz; modulating frequency fm = 1 kHz

(ii) Modulation index ma = Vm/Vc = 20/40 = 0.5

%ma = 0.5 × 100 = 50%

(iii) Peak amplitude of each side frequency = maVc/2 = (0.5 × 40)/2 = 10 volts

(iv) Bandwidth BW = 2fm = 2 × 1 kHz = 2 kHz

(v) [Figure 1.19 Voltage Spectrum: carrier of 40 V at 10 kHz, side frequencies of 10 V at 9 kHz and 11 kHz]

To generate an SSBSC signal, we have to suppress the carrier as well as one of the sidebands. The various techniques to suppress one of the sidebands are:

1. Frequency discrimination (filter) method
2. Phase discrimination (phase shift) method
3. The third method (two-stage phase shift method)

(i) Frequency discrimination method

In the frequency discrimination method, a DSBSC signal is first generated by using an ordinary product modulator or balanced modulator. Then this DSBSC signal is passed through a suitable band pass filter to obtain the SSBSC signal; one of the sidebands is filtered out.

The design of the band pass filter is very critical, and there are some limitations on the modulating and carrier frequencies. They are:

1. The sideband to be selected lies very close to the carrier frequency. It is very difficult to design the band pass filter if the carrier frequency is much greater than the bandwidth of the baseband or modulating signal.

2. The frequency discrimination method is useful only when the baseband is restricted at its lower edge, so that the upper and lower sidebands are non-overlapping.

The filter method is used in speech communication, where the lowest spectral component is 70 Hz and it may be taken as 300 Hz without affecting the intelligibility of the speech signal. However, the system is not useful for video communication, where the modulating signal starts from dc.

[Figure 1.20: Frequency discrimination method — the modulating signal x(t) and the carrier cos ωct feed a product (balanced) modulator; the resulting DSBSC signal x(t)·cos ωct is passed through a band pass filter to give the SSBSC signal]

Applications

This system is not used for commercial broadcasting. It is mainly used in wireless systems for ultra-high frequency (UHF) and very high frequency (VHF) communication.

(ii) Phase discrimination method

The block diagram of the phase discrimination method is shown in figure 1.21. In this method, two balanced modulators and two 90° phase shifters are used.

One modulator accepts the carrier with a 90° phase shift from the carrier oscillator and the modulating signal directly. The other modulator accepts the modulating signal with a 90° phase shift and the carrier signal directly.

Balanced modulator 1 accepts the direct modulating signal Vm(t) = Vm sin ωmt and the 90° phase-shifted carrier signal Vc(t) = Vc sin(ωct + 90°).

Balanced modulator 2 accepts the 90° phase-shifted modulating signal Vm(t) = Vm sin(ωmt + 90°) and the direct carrier signal Vc(t) = Vc sin ωct.

[Figure 1.21: Phase discrimination method — the audio amplifier feeds balanced modulator M1 directly and, through a 90° phase shifter, balanced modulator M2; the carrier source feeds M1 through a 90° phase shifter and M2 directly; the outputs of M1 and M2 are combined in an adder to give the SSB output]

The output of balanced modulator 1 is

V1 = Vm sin ωmt × Vc sin(ωct + 90°)

= (VmVc/2)[cos((ωct + 90°) − ωmt) − cos((ωct + 90°) + ωmt)]

V1 = (VmVc/2)[cos((ωc − ωm)t + 90°) − cos((ωc + ωm)t + 90°)]    ...(1)
          (LSB)                          (USB)

Similarly, the output of balanced modulator 2 is

V2 = Vm sin(ωmt + 90°) × Vc sin ωct

= (VmVc/2)[cos(ωct − (ωmt + 90°)) − cos(ωct + (ωmt + 90°))]

V2 = (VmVc/2)[cos((ωc − ωm)t − 90°) − cos((ωc + ωm)t + 90°)]    ...(2)
          (LSB)                          (USB)

Therefore, the output of the adder will be

V0 = V1 + V2

= (VmVc/2)[cos((ωc − ωm)t + 90°) − cos((ωc + ωm)t + 90°)]
+ (VmVc/2)[cos((ωc − ωm)t − 90°) − cos((ωc + ωm)t + 90°)]

The two LSB terms have a phase difference of 180° (+90° and −90°); hence they cancel each other. Only the upper sideband components remain, since they are in the same phase (+90°):

V0 = −VmVc cos((ωc + ωm)t + 90°) = VmVc sin((ωc + ωm)t)

Thus the SSBSC signal is obtained.
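The sideband cancellation above can be verified numerically. A minimal Python sketch (the 1 kHz tone and 100 kHz carrier are assumed values for the check):

```python
import math

Vm, Vc = 1.0, 1.0
wm, wc = 2 * math.pi * 1e3, 2 * math.pi * 1e5   # assumed tone and carrier

def v1(t):   # modulator 1: direct message x 90-degree-shifted carrier
    return Vm * math.sin(wm * t) * Vc * math.sin(wc * t + math.pi / 2)

def v2(t):   # modulator 2: 90-degree-shifted message x direct carrier
    return Vm * math.sin(wm * t + math.pi / 2) * Vc * math.sin(wc * t)

# The sum should equal the single upper-side-frequency term Vm*Vc*sin((wc+wm)t)
for t in (0.0, 1.3e-6, 7.7e-5, 4.2e-4):
    usb = Vm * Vc * math.sin((wc + wm) * t)
    assert abs(v1(t) + v2(t) - usb) < 1e-9
print("LSB cancels; only the USB remains")
```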

1. Each balanced modulator needs to be carefully balanced in order to suppress the carrier.

2. Each modulator should have equal sensitivity to the baseband signal.

3. It is difficult to design a wide-band phase shifting network.

4. The carrier phase-shifting network must provide an exact 90° phase shift at the carrier frequency.


Advantages

1. An SSB signal at any frequency, regardless of the carrier frequency, can be generated.

Disadvantages

1. It is difficult to design a phase shifting network that gives an exact shift over the whole range of the modulating signal.

2. The phase shifting network must provide an exact 90° phase shift.

(iii) Third method (two-stage phase shift method)

This method is used to generate SSB at any frequency and can thus use low audio frequencies. It is a modification of the phase shift method.

The block diagram of the third method is shown in the figure. The signal at points A and B is similar to that of the phase shift method. However, the manner in which the voltages are fed to balanced modulators 3 and 4 differs from the phase shift method.

Instead of phase shifting the whole range of audio frequencies (A.F.), this method combines them with a fixed sub-carrier frequency f0, so that the 90° phase shift has to be provided at one fixed frequency only.

The outputs of balanced modulators 1 and 2 are fed to low pass filters 1 and 2, which make sure that the input to the last-stage balanced modulators 3 and 4 results in a proper sideband suppression.

[Block diagram of the third method: the AF signal feeds balanced modulators 1 and 2; modulator 1 receives the sub-carrier 2 sin ω0t through a 90° phase shifter (point A) and modulator 2 receives it directly (point B); each modulator output is a DSBSC-AM signal; low pass filters 1 and 2 follow; balanced modulators 3 and 4 then mix with the RF carrier 2 sin ωct, directly and through a 90° phase shifter respectively; an adder combines them into the SSBSC-AM output]

Sub-carrier signal C0(t) = 2V0 sin ω0t

RF carrier signal Cc(t) = 2Vc sin ωct

The input to balanced modulator 1 is

Vm sin ωmt × 2V0 sin(ω0t + 90°)

The output of balanced modulator 1 is

= VmV0[cos((ω0 − ωm)t + 90°) − cos((ω0 + ωm)t + 90°)]    ...(1)

Similarly, the output of balanced modulator 2 is

= Vm sin ωmt × 2V0 sin ω0t

= VmV0[cos(ω0 − ωm)t − cos(ω0 + ωm)t]    ...(2)

LPF 1 and LPF 2 eliminate the upper sidebands ((ω0 + ωm) terms) of balanced modulators 1 and 2. Hence the output of LPF 1 is

= VmV0 cos[(ω0 − ωm)t + 90°]

and the output of LPF 2 is

= VmV0 cos(ω0 − ωm)t

Assume V0 = Vm = Vc = 1.

The output of balanced modulator 3 is

= cos[(ω0 − ωm)t + 90°] × 2 sin ωct

= sin[(ωc + ω0 − ωm)t + 90°] + sin[(ωc − ω0 + ωm)t − 90°]    ...(3)

The output of balanced modulator 4 is

= cos(ω0 − ωm)t × 2 sin(ωct + 90°)

= sin[(ωc + ω0 − ωm)t + 90°] + sin[(ωc − ω0 + ωm)t + 90°]    ...(4)

The output of the adder circuit is the sum of (3) and (4):

= sin[(ωc + ω0 − ωm)t + 90°] + sin[(ωc − ω0 + ωm)t − 90°]
+ sin[(ωc + ω0 − ωm)t + 90°] + sin[(ωc − ω0 + ωm)t + 90°]

= 2 sin[(ωc + ω0 − ωm)t + 90°]

The other two terms (the LSB of the final carrier) cancel each other because they are 180° out of phase with each other.

= 2 cos[(ωc + ω0 − ωm)t]

The final RF output frequency is fc + f0 − fm, which is essentially the lower sideband of the RF carrier fc + f0.

1.9.5 Power Calculation of SSBSC-AM

The total power in transmitted AM is,

Pt = (Ec²/2R)(1 + ma²/2)

Pt = Pc(1 + ma²/2)

If the carrier and one sideband are suppressed, then the total power in SSBSC-AM is the power of the single remaining sideband,

Pt′′ = PLSB (or PUSB)

= ma²Ec²/8R

= (ma²/4)(Ec²/2R)

Pt′′ = (1/4)ma²Pc

1.9.6 Power Saving in SSBSC-AM

Power saving = (Pt − Pt′′)/Pt

= [(1 + ma²/2)Pc − (ma²/4)Pc] / [(1 + ma²/2)Pc]

= [1 + ma²/2 − ma²/4] / [1 + ma²/2]

= [1 + ma²/4] / [1 + ma²/2]

= [(4 + ma²)/4] / [(2 + ma²)/2]

= (4 + ma²) / [2(2 + ma²)] = (4 + ma²)/(4 + 2ma²)

If ma = 1, the power saving in SSBSC-AM is 5/6 = 83.33%

If ma = 0.5, the power saving in SSBSC-AM is 94.44%
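The power saving ratio (4 + ma²)/(4 + 2ma²) reproduces both quoted figures; a quick Python check:

```python
def ssb_power_saving(ma):
    """Fraction of DSBFC power saved by SSBSC: (4 + ma^2) / (4 + 2*ma^2)."""
    return (4 + ma**2) / (4 + 2 * ma**2)

print(round(100 * ssb_power_saving(1.0), 2))   # 83.33
print(round(100 * ssb_power_saving(0.5), 2))   # 94.44
```

Note that the saving actually grows as the modulation index falls, because the suppressed carrier dominates the DSBFC power at low ma.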

1.9.7 Advantages of SSB over DSBFC

(i) Less bandwidth is required.

(ii) More power saving; at 100% modulation the power saving is 83.33%.

(iii) Reduced noise interference due to the reduced bandwidth.

1.9.8 Disadvantages of SSB

(i) The generation and reception of SSB signals is complicated.

Applications of SSB

(i) Where power saving and a low bandwidth requirement are important.

(ii) Land and air mobile communication, navigation and amateur radio.

1.10 AM TRANSMITTERS

There are two types of AM transmitters:

Low level transmitter

High level transmitter

1.10.1 Low level transmitter

The block diagram for a low-level AM DSBFC transmitter is shown in the figure.

Preamplifier

It is a linear voltage amplifier with high input impedance. It is used to raise the source signal amplitude to a usable level with minimum nonlinear distortion and as little thermal noise as possible.

[Block diagram, low-level AM DSBFC transmitter: RF carrier oscillator → buffer amplifier → carrier driver → modulator → linear intermediate power amplifier → linear final power amplifier → coupling network → antenna; modulating signal source → preamplifier → bandpass filter → modulating signal driver → modulator; bandpass filters shape both the carrier and the modulated output]

Modulating signal driver (linear amplifier)

Amplifies the information signal to a level adequate to drive the modulator.

RF carrier oscillator

It is used to generate the carrier signal. Usually crystal-controlled oscillators are used.

Buffer amplifier

It is a low-gain, high-input-impedance linear amplifier. It is used to isolate the oscillator from the high-power amplifiers.

Modulator

The modulator can be used either as an emitter or a collector modulator. The intermediate and final power amplifiers are linear push-pull amplifiers, required to maintain symmetry in the AM envelope.

Coupling network

It matches the output impedance of the final amplifier to the transmission line/antenna.

Applications

It is used in low-power, low-capacity systems: wireless intercoms, remote control units, pagers and short-range walkie-talkies.

1.10.2 High level transmitter

The block diagram for a high-level AM DSBFC transmitter is shown in the figure.

[Block diagram, high-level AM DSBFC transmitter: modulating signal source → preamplifier → modulating signal driver amplifier → modulating signal power amplifier → AM modulator; RF carrier oscillator → buffer amplifier → carrier driver → carrier power amplifier → AM modulator and output power amplifier → bandpass filter → antenna]

The circuit is similar to the low-level transmitter except for the addition of the power amplifiers. The modulating signal must be amplified to a high level to achieve 100% modulation (carrier power is maximum at the high-level modulation point).


• It is the final power amplifier.

• As a frequency up-converter, it translates the low-frequency information signals to radio frequency signals that can be efficiently radiated from an antenna and propagated through free space.

1.10.3 Comparison of High level and Low level modulation

S.No | Parameter | High-level modulation | Low-level modulation
1 | Power level | Modulation takes place at a high power level | Modulation takes place at a low power level
2 | Type of amplifiers | Highly efficient class C amplifiers | Linear amplifiers (class A, AB, B)
3 | Efficiency | Very high | Low
4 | Devices used | Vacuum tubes or power transistors | Transistors, JFETs, op-amps
5 | Design of AF amplifier | Complex, due to the very high power level | Easy, as it is done at a low power level
6 | Application | High-power broadcast transmitters | Sometimes used in TV transmitters (low-power modulation)

1.11 SUPERHETERODYNE RECEIVER AND ITS CHARACTERISTICS PERFORMANCE

1.11.1 Introduction

The problems in the TRF (tuned radio frequency) receiver are:

It is useful only with single tuned circuits and low frequency applications.

Instability.

Variation in gain.

Insufficient selectivity.

It was not possible to use double tuned RF amplifiers in this receiver.

Poor adjacent channel rejection.

The above-mentioned problems of the TRF receiver are solved in the superheterodyne receiver by converting every selected RF signal to a fixed lower frequency called the 'Intermediate Frequency (IF)'.

1.11.2 Principle (or) Definition of Heterodyning

The radio & TV receiver operates on the principle of super hetero-

dyning. The process of mixing two signals having different frequencies

to produce a new frequency is called as heterodyning.

1.11.3 Block diagram

[Block diagram: receiving antenna → RF stage (fs) → mixer → IF amplifier (f0 − fs) → detector → audio and power amplifiers → loudspeaker; a local oscillator (f0) feeds the mixer; AGC feeds back from the detector to the RF and IF amplifiers; the RF stage, mixer and local oscillator share ganged tuning]

Figure 1.25 Block diagram of Super heterodyne receiver

1.11.4 Operations

(i) Antenna Receiver

The DSBFC or AM signal transmitted by the transmitter travels through the air and reaches the receiving antenna. This signal is in the form of electromagnetic waves. It induces a very small voltage (a few μV) into the receiving antenna.

(ii) RF stage

The RF stage is an amplifier which is used to select the wanted signal and reject the others. It also reduces the effect of noise. At the output of the RF amplifier we get the desired signal at frequency fs.

(iii) Mixer

The mixer receives signals from the RF amplifier at frequency fs and from the local oscillator at frequency f0, such that f0 > fs.

(iv) IF amplifier

The mixer mixes these signals to produce signals having frequencies fs, f0, (f0 + fs) and (f0 − fs). Out of these, the difference frequency component (f0 − fs) is selected and all others are rejected. This frequency is called the intermediate frequency (IF). The IF signal is then amplified by the IF amplifier stages.

IF amplifiers provide most of the gain (and hence sensitivity) and the bandwidth requirements of the receiver. Therefore the sensitivity and selectivity of this receiver do not change much with changes in the incoming frequency.

Note: This frequency contains the same modulation as the original signal fs.

(v) Detector

The amplified IF signal is detected by the detector to recover the original modulating signal. This is then amplified and applied to the loudspeaker.

(vi) Audio and power amplifier

The demodulated output is not sufficient to drive the output device (loudspeaker); therefore the power level of the demodulated output is increased using audio and power amplifiers.

(vii) Loud speaker

The loudspeaker is an output transducer used to convert the electrical audio signal into sound.

(viii) Ganged Tuning

In order to maintain a constant difference between the local oscillator frequency and the incoming frequency, ganged tuning is used. This is the simultaneous tuning of the RF amplifier, mixer and local oscillator, achieved by using ganged tuning capacitors (the tuning control in a radio set).

(ix) Automatic Gain Control (AGC)

• This circuit controls the gains of the RF and IF amplifiers to

maintain a constant output voltage level even when the signal

level at the receiver input is fluctuating.

• This is done by feeding a controlling DC voltage to the RF and IF amplifiers.

• The amplitude of this dc voltage is proportional to the detector

output.

1.11.5 Summary of Super heterodyne Receiver

1. Select the desired station at frequency fs by tuning the RF amplifier and local oscillator.

2. The mixer translates the incoming signal to a fixed carrier equal to the IF. The IF amplifier amplifies this signal.

3. The detector recovers the modulating (AF) signal.

4. The audio and power amplifiers raise its level and apply it to the loudspeaker.


1.11.6 Advantages of the Superheterodyne Receiver

No variation in bandwidth; it remains constant over the entire operating range.

High sensitivity and selectivity.

High adjacent channel rejection.

Improved stability.

Higher gain per stage, because the IF amplifiers are operated at a lower frequency.

1.11.7 Frequency parameters of AM receiver

The AM receiver has the following frequency measurements:

1. Two frequency bands: medium wave (MW) band and short wave (SW) band.

2. RF carrier range: MW band, 535 kHz to 1650 kHz; SW band, 5 to 15 MHz.

3. IF: 455 kHz.

4. IF bandwidth: 10 kHz.
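The frequency plan above can be sketched in a few lines of Python, assuming the conventional high-side injection (local oscillator above the station, f0 − fs = IF):

```python
# Superheterodyne frequency plan for the MW band (IF = 455 kHz)
f_if = 455e3

def local_oscillator(fs):
    """High-side injection: f0 - fs = IF, so f0 = fs + IF."""
    return fs + f_if

def image_frequency(fs):
    """A signal at fs + 2*IF also mixes down to the IF (the image)."""
    return fs + 2 * f_if

fs = 1000e3                      # tuned station at 1000 kHz
print(local_oscillator(fs) / 1e3, image_frequency(fs) / 1e3)   # 1455.0 1910.0
```

The image frequency produced here is the one the preselector must reject, as discussed in section 1.12.4.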

1.12.1 Sensitivity

The sensitivity of a receiver is its ability to amplify weak signals. It is defined as the minimum RF signal voltage that must be applied at the input of the receiver to obtain a standard output power.

The sensitivity depends mainly on the gain of the RF and IF amplifier stages. By increasing the gains of these stages it is possible to increase the sensitivity of a receiver.

[Sensitivity curve: sensitivity in μV versus frequency over 600–1600 kHz; the receiver is most sensitive (about 10 μV) near 1000 kHz and least sensitive (about 13 μV) toward the band edges]

1.12.2 Selectivity

• The selectivity of a receiver is its ability to reject unwanted signals.

• The selectivity is obtained mainly from the tuned circuits of the IF amplifier. The responses of the mixer and RF amplifier stages also play a small but significant role.

• The higher the selectivity, the better the adjacent channel rejection and the less the adjacent channel interference.

• The higher the "Q" of the tuned circuits used in the IF amplifier, the better the selectivity.

[Selectivity curve: attenuation in dB versus deviation from the resonant frequency, for a receiver tuned at 950 kHz; attenuation increases as we go away from the tuned frequency]

1.12.3 Fidelity

The fidelity is the ability of a receiver to reproduce all the modulating frequencies equally. The fidelity basically depends on the frequency response of the AF amplifier. High fidelity is essential in order to reproduce good quality sound.

[Fidelity curve: receiver output in dB versus frequency (50 Hz to 10 kHz); the flat region of minimum attenuation is essentially the frequency response of the AF amplifier]

1.12.4 Image Frequency and its Rejection

Image Frequency: The image frequency is defined as the received signal frequency plus twice the intermediate frequency:

fIM = fs + 2fIF

Image frequency rejection ratio (IFRR): The IFRR is a numerical measure of the ability of a preselector to reject the image frequency. Mathematically, the IFRR is expressed as

IFRR = √(1 + Q²ρ²)

Where, Q = quality factor of the preselector tuned circuit, and

ρ = fIM/fs − fs/fIM

Note: To improve the capability of a receiver to reject the image frequency, the value of IFRR should be as high as possible.
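The IFRR formula is easy to evaluate; a minimal Python sketch (the station frequency, IF and preselector Q below are assumed example values):

```python
import math

def ifrr(fs, f_if, Q):
    """Image-frequency rejection ratio: IFRR = sqrt(1 + Q^2 * rho^2)."""
    f_im = fs + 2 * f_if                 # image frequency
    rho = f_im / fs - fs / f_im
    return math.sqrt(1 + (Q * rho) ** 2)

# Assumed values: 1000 kHz station, 455 kHz IF, preselector Q of 100
print(round(ifrr(1000e3, 455e3, 100), 1))
```

The result (roughly 139, i.e. about 43 dB of rejection) shows why a reasonably high-Q preselector is needed ahead of the mixer.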

1.12.5 Double Spotting

Double spotting means the same radio station being heard at two different points on the AM receiver dial.

Due to: poor front-end selectivity.

Reduced by: increasing the front-end selectivity, for example by introducing another RF amplifier stage.

1.13 THEORY OF FREQUENCY AND PHASE MODULATION

1.13.1 Introduction

In modulation, one of the parameters of the carrier (amplitude, frequency or phase) can be varied by the instantaneous value of the modulating signal. Varying the carrier amplitude in accordance with the instantaneous value of the message signal is known as amplitude modulation.

Varying the frequency or the phase of the carrier is collectively called angle modulation. In angle modulation, either the frequency or the phase of the carrier is varied in accordance with the instantaneous value of the modulating signal, but the carrier amplitude is constant.

Angle modulation has several advantages over amplitude modulation, such as noise reduction, improved system fidelity and more efficient use of power. Angle modulation is of two types: Frequency Modulation (FM) and Phase Modulation (PM).

There are disadvantages too, such as increased bandwidth and the use of more complex circuits. Angle modulation is used in:

1. Radio broadcasting.

2. TV sound transmission.

4. Cellular radio.

5. Microwave communication.

6. Satellite communication.

Frequency modulation

If the frequency of the carrier is varied in accordance with the instantaneous amplitude of the modulating signal, it is called frequency modulation (FM). The amplitude of the modulated carrier remains constant.

Phase Modulation

If the phase of the carrier is varied in accordance with the instantaneous amplitude of the modulating signal, it is called phase modulation (PM). The amplitude of the modulated carrier remains constant.

Angle modulation results when the phase angle (θ) of a sinusoidal carrier wave is varied with respect to time. An angle-modulated wave can be expressed mathematically as,

m(t) = Vc cos[ωct + θ(t)]

Where, m(t) = the angle-modulated wave, Vc = peak carrier amplitude (volts), ωc = carrier radian frequency (rad/s) and θ(t) = instantaneous phase deviation (radians).

In FM, the frequency of the carrier is varied by the modulating signal, whereas in PM, the phase of the carrier is varied by the modulating signal.

When the frequency of the carrier is varied, the phase also gets varied, and vice versa. Therefore, FM and PM both occur whenever either form of angle modulation is performed.

A direct FM is an indirect PM, whereas a direct PM is an indirect FM. Therefore, in angle modulation θ(t) is a function of the modulating signal. That means,

θ(t) ∝ Vm(t)

Where, Vm(t) = Vm sin ωmt is the modulating signal.

The relative angular displacement (shift) of the carrier phase in radians with respect to the reference phase is called phase deviation (Δθ). The change in the carrier phase produces a corresponding change in frequency.

Frequency deviation (Δf) is the amount by which the carrier frequency is varied from its unmodulated value. The magnitude of the frequency deviation is proportional to the amplitude of the modulating signal (Vm). The instantaneous frequency of the FM signal varies with time; the maximum change in instantaneous frequency from the value fc is known as the frequency deviation.

[The carrier frequency swings between fc − Δf (negative deviation) and fc + Δf (positive deviation)]

Instantaneous Phase Deviation

The instantaneous phase deviation θ(t) is the instantaneous change in the phase of the carrier at a given instant of time; it indicates how much the phase of the carrier is changing with respect to its reference phase.

Instantaneous phase

The instantaneous phase is the precise phase of the carrier at a given instant of time:

instantaneous phase = ωct + θ(t)

Instantaneous frequency deviation

The instantaneous frequency deviation is the instantaneous change in the frequency of the carrier; it is the first time derivative of the instantaneous phase deviation, θ'(t).

Instantaneous frequency

The instantaneous frequency is the precise frequency of the carrier at a given instant of time:

ωi(t) = ωc + θ'(t)

In FM, the instantaneous frequency deviation θ'(t) is directly proportional to the modulating signal voltage.

Since the frequency deviation is the derivative of the phase deviation, we have

dθ(t)/dt = θ'(t)

Integrating on both sides, we get

θ(t) = ∫θ'(t) dt    ...(3)

In FM, θ'(t) = kf·Vm(t). Substituting θ'(t) in equation (3), we get

θ(t) = kf ∫Vm(t) dt

Where, Vm(t) = Vm cos ωmt

θ(t) = kf ∫Vm cos ωmt dt

= kf (Vm/ωm) sin ωmt

= [kf Vm/(2πfm)] sin ωmt

= (Δf/fm) sin ωmt,   where Δf = kf Vm/2π    ...(4)

and Δf/fm = Mf, the FM modulation index.

In PM, the instantaneous phase deviation θ(t) is directly proportional to the modulating signal voltage, i.e. θ(t) = Kp·Vm(t).


[Waveforms of frequency modulation and phase modulation for a sine-wave modulating signal: in FM, the carrier frequency is maximum at the positive peak of the modulating signal and minimum at the negative peak; in PM, the maximum deviation occurs at the zero crossings of the modulating signal; in both cases the carrier amplitude stays constant at Vc]

Applications of FM

1. Radio broadcasting.

3. Satellite Communication.

4. Police wireless.


6. Ambulances.

7. Taxicabs.

In the FM and PM wave equations, Mf and Mp are called the modulation index. Note that the terms Mf sin ωmt and Mp cos ωmt indicate the instantaneous phase deviation θ(t). Hence Mf and Mp also indicate the maximum phase deviation; in other words, the modulation index can also be defined as the "maximum phase deviation".

Modulation index in PM

Mp = Kp·Vm rad

Where Kp is the deviation sensitivity (rad/V) and Vm is the peak modulating voltage; the unit of Mp is radians.

Modulation index in FM

Mf = Kf (Vm/ωm)    (unitless)

= Kf Vm/(2πfm)

Here, Kf Vm/2π = Δf is called the frequency deviation. It is denoted by Δf (or) 'd' and its unit is Hz.

Modulation index in FM

Mf = Δf/fm (or) d/fm

= Maximum frequency deviation / Modulating frequency

Percentage Modulation

The percentage modulation is the ratio of the actual frequency deviation to the maximum allowable frequency deviation, i.e.

% Modulation = Actual frequency deviation / Maximum allowable frequency deviation

Deviation Ratio (DR)

The deviation ratio is the ratio of the maximum frequency deviation to the maximum modulating signal frequency, i.e.

Deviation Ratio (DR) = Maximum frequency deviation / fm(max)

1.13.8 Frequency spectrum of Angle Modulated Waves

• In FM, the carrier frequency deviation is maximum when the modulating signal is at its positive and negative peaks.

• In PM, the maximum deviation occurs at the zero crossings of the modulating signal.

• Both FM and PM waveforms are identical except for the phase shift. From the modulated waveform alone it cannot be determined whether the modulation is FM or PM.

• An AM wave contains the carrier and only one pair of side frequencies. An angle-modulated signal, in contrast, contains a large number of sidebands depending upon the modulation index. Since FM and PM have identical modulated waveforms, their frequency content is the same.

• Theoretically, an angle-modulated wave contains an infinite number of pairs of side frequencies and thus has an infinite bandwidth.

• Since the FM and PM expressions have the same form, either can be used for analysis; here we consider the FM equation.

• When the modulating signal has zero amplitude, the carrier has a frequency of ωc (or fc).

• As the modulating signal amplitude increases, the frequency deviation of the carrier increases, and vice versa. Frequency analysis of an angle-modulated wave produced by a single-frequency sinusoid gives a peak phase deviation of Mf radians, where Mf is the modulation index.

The FM equation contains the sine of a sine function. The only way to solve this equation is by using Bessel functions, with which the FM wave can be expanded as follows,


eFM = Vc{J0(mf) sin ωct + J1(mf)[sin(ωc + ωm)t − sin(ωc − ωm)t]
+ J2(mf)[sin(ωc + 2ωm)t + sin(ωc − 2ωm)t]
+ J3(mf)[sin(ωc + 3ωm)t − sin(ωc − 3ωm)t] + ...}    ...(2)

(The first term is the carrier; the J1 term is the first pair of side frequencies.)

terms except the first one remaining are sidebands.

J-co-efficients. The values of these coefficients can be obtained from

the table 1.2 (or)from the graph shown in Figure 1.25

4. For example J1(mf) denotes the value of J1 for the particular value of

mf written inside the bracket.

5. To solve for the amplitude of the side frequencies Jn(mf) is given by,

( )[ ]

n

mf 1 (mf/2)2 (mf/2)4

Jn(mf) = - + -……. ...(4)

2 n! 1!(n+1)! 2!(n+2)!

Where,

! = Factorial (1 x 2 x 3 x 4....)

mf = modulation index.
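Equation (4) can be evaluated directly and compared against Table 1.2. A minimal Python sketch (the function name and the 20-term truncation are my own choices):

```python
import math

def Jn(n, mf, terms=20):
    """Bessel coefficient via the series of equation (4):
    Jn(mf) = (mf/2)^n * sum_k (-1)^k (mf/2)^(2k) / (k! (n+k)!)"""
    s = sum((-1) ** k * (mf / 2) ** (2 * k)
            / (math.factorial(k) * math.factorial(n + k))
            for k in range(terms))
    return (mf / 2) ** n * s

# Compare with the mf = 1.0 row of Table 1.2
print(round(Jn(0, 1.0), 2), round(Jn(1, 1.0), 2), round(Jn(2, 1.0), 2))   # 0.77 0.44 0.11
```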

[FM frequency spectrum: a carrier of amplitude J0(mf)·Vc at fc, with side frequencies of amplitudes J1(mf)·Vc, J2(mf)·Vc, ... spaced fm apart on either side; the theoretical bandwidth is infinite]

[Graph of the Bessel coefficients J0(mf) through J8(mf) versus modulation index mf, for mf from 0 to 12]


Modulation index m, carrier amplitude J0 and side frequency pair amplitudes J1 to J14:

m J0 J1 J2 J3 J4 J5 J6 J7 J8 J9 J10 J11 J12 J13 J14

0.00 1.00 ... ... ... ... ... ... ... ... ... ... ... ... ... ...

0.25 0.98 0.12 ... ... ... ... ... ... ... ... ... ... ... ... ...

0.5 0.94 0.24 0.03 ... ... ... ... ... ... ... ... ... ... ... ...

1.0 0.77 0.44 0.11 0.02 ... ... ... ... ... ... ... ... ... ... ...


1.5 0.51 0.56 0.23 0.06 0.01 ... ... ... ... ... ... ... ... ... ...

2.0 0.22 0.58 0.35 0.13 0.03 ... ... ... ... ... ... ... ... ...

2.4 0 0.52 0.43 0.20 0.06 0.02 ... ... ... ... ... ... ... ... ...

2.5 -0.05 0.50 0.45 0.22 0.07 0.02 0.01 ... ... ... ... ... ... ... ...

3.0 -0.26 0.34 0.49 0.31 0.13 0.04 0.01 ... ... ... ... ... ... ... ...

4.0 -0.40 -0.07 0.36 0.43 0.28 0.13 0.05 0.02 ... ... ... ... ... ... ...

5.0 -0.18 -0.33 0.05 0.36 0.39 0.26 0.13 0.05 0.02 ... ... ... ... ... ...

5.45 0 -0.34 -0.12 0.26 0.40 0.32 0.19 0.09 0.03 0.01 ... ... ... ... ...

6.0 0.15 -0.28 -0.24 0.11 0.36 0.36 0.25 0.13 0.06 0.02 ... ... ... ... ...

7.0 0.30 0.00 -0.30 -0.17 0.16 0.35 0.34 0.23 0.13 0.06 0.02 ... ... ... ...

8.0 0.17 0.23 -0.11 -0.29 -0.10 0.19 0.34 0.32 0.22 0.13 0.06 0.03 ... ... ...

8.65 0 0.27 0.06 -0.24 -0.23 0.03 0.26 0.34 0.28 0.18 0.10 0.05 0.02 ... ...

9.0 -0.09 0.25 0.14 -0.18 -0.27 -0.06 0.20 0.33 0.31 0.21 0.12 0.06 0.03 0.01 ...

10.0 -0.25 0.05 0.25 0.06 -0.22 -0.23 -0.01 0.22 0.32 0.29 0.21 0.12 0.06 0.03 0.01

Table 1.2 Bessel functions of the first kind Jn(m)


• When a carrier is angle modulated, sum and difference sideband frequencies are produced.

• In addition, second-order, third-order and higher-order upper and lower sidebands are also generated.

• Hence the angle-modulation spectrum differs from the spectrum of AM, which has only one pair of side frequencies.

Note that the sidebands are spaced from the carrier fc, and from each other, by a frequency equal to the modulating signal frequency fm.

Problem 1

(a) Determine the peak frequency deviation (Δf) and modulation index (mf) for an FM modulator with a deviation sensitivity Kf = 5 kHz/V and a modulating signal Vm(t) = 2 cos(2π × 2000t).

(b) Determine the peak phase deviation (mp) for a PM modulator with a deviation sensitivity Kp = 2.5 rad/V and a modulating signal Vm(t) = 2 cos(2π × 2000t).

Solution

Given

Vm = 2 V and

fm = 2000 Hz = 2 kHz

(a) Peak frequency deviation,

Δf = Kf·Vm

= (5 kHz/V) × 2 V

Δf = 10 kHz

Modulation index mf = Δf/fm = 10 kHz/2 kHz = 5

(b) Peak phase deviation mp = Kp·Vm = (2.5 rad/V) × 2 V = 5 rad
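The same computation in a few lines of Python:

```python
# FM: Kf = 5 kHz/V, vm(t) = 2 cos(2*pi*2000*t)  ->  Vm = 2 V, fm = 2 kHz
Kf, Vm, fm = 5e3, 2, 2e3
df = Kf * Vm                 # peak frequency deviation (Hz)
mf = df / fm                 # FM modulation index

# PM: Kp = 2.5 rad/V with the same modulating signal
Kp = 2.5
mp = Kp * Vm                 # peak phase deviation (rad)

print(df, mf, mp)            # 10000.0 5.0 5.0
```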

In angle modulation, the total transmitted power always remains constant. It does not depend on the modulation index. The reason is that the amplitude Vc of the FM (or PM) signal is always constant, and the power is given by

Pt = (Vc/√2)²/R = Vc²/2R

Pt = Vc²/2R

1.13.10 Bandwidth Requirement of Angle Modulation

• Theoretically, the BW of an angle-modulated wave is infinite. But practically it is calculated based on how many sidebands have significant amplitudes.

• The BW of angle modulation depends on the modulation index.

• Angle modulated waveforms are generally classified as either low,

medium or high modulation Index.

m<1 - called low modulation index

m = 1 to 10 - called medium modulation index

m>10 - called high modulation index.

Note: If m < 10, the system is called narrowband FM; otherwise it is wideband FM.

For low-index modulation, the frequency spectrum resembles

double-sideband AM and the minimum bandwidth is approximated by,

BW = 2 fm, (Hz) ... (1)

For high-index modulation, the minimum bandwidth is approximated by,

BW = 2Δf (Hz)    ...(2)

The actual bandwidth required to pass all the significant sidebands of an angle-modulated wave is:

BW = 2(n × fm) (Hz)    ...(3)

Where n = number of significant sidebands and fm = modulating signal frequency (Hz).

Carson’s Rule

• It is used to estimate the bandwidth for all angle-modulated

system regardless of the modulation index.

• It is the second method to find practical bandwidth .

• Carson’s rule state that the bandwidth of FM wave is twice the sum

of the peak frequency deviation and the highest modulating signal

frequency.

BW = 2 (Δf + fm) (Hz) ...(4)

Where Δf - peak frequency deviation (Hz), fm - modulating signal frequency (Hz).

BW = 2(Δf + fm) = 2Δf + 2fm

w.k.t Δf = K1 × Vm and mf = K1 × Vm / fm = Δf / fm, so fm = Δf / mf

BW = 2Δf + 2(Δf / mf)

BW = 2Δf (1 + 1/mf) Hz

Note: Carson's rule gives correct results if the modulation index is greater than 6.
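The three bandwidth estimates above — low-index, significant-sideband and Carson's rule — can be compared numerically. A minimal sketch (the function names are mine, not from the text):

```python
def carson_bandwidth(delta_f, fm):
    """Carson's rule: BW = 2*(delta_f + fm), in the units of the inputs."""
    return 2 * (delta_f + fm)

def sideband_bandwidth(n, fm):
    """Practical bandwidth BW = 2*n*fm for n significant sideband pairs."""
    return 2 * n * fm

# FM broadcast worst case: delta_f = 75 kHz, fm = 15 kHz
print(carson_bandwidth(75e3, 15e3))   # 180000.0 Hz, i.e. 180 kHz
print(sideband_bandwidth(8, 15e3))    # 240000.0 Hz, i.e. 240 kHz
```

With mf = Δf/fm = 5, the Bessel table gives n = 8 significant sideband pairs, which is why the two estimates differ.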


i) Narrow band FM (NBFM)

A narrowband FM is an FM wave with a small bandwidth. The modulation index mf of narrowband FM is small compared to one radian.

Hence the spectrum of narrowband FM consists of the carrier, an upper sideband and a lower sideband.

For small values of mf the values of the J coefficients are,

J0(mf) = 1, J1 (mf) = mf /2

Jn(mf) = 0 for n>1

Therefore a narrow band FM wave can be expressed mathematically

as follows,

eFM = S(t) = Ec sinωct + mf Ec/2 sin (ωc+ωm)t - mf Ec/2 sin (ωc-ωm)t

• The negative sign indicates that the LSB is 180° phase shifted

• Practically the narrow band FM systems have mf less than 1. The

maximum permissible frequency deviation is restricted to about

5kHz.

The system is used in FM mobile communications such as police

wireless, ambulances, taxicabs etc.

ii) Wide band FM (WBFM)

As discussed earlier, for large values of modulation index mf, the FM wave ideally contains the carrier and an infinite number of sidebands located symmetrically around the carrier.

Such an FM wave has infinite bandwidth and hence is called wideband FM.

The modulation index of wideband FM is higher than 1.

The maximum permissible deviation is 75kHz and it is used in the

entertainment broadcasting applications such as FM radio, TV etc.


Comparison of WBFM and NBFM

S.No | Parameter | WBFM | NBFM
1 | Modulation index | Greater than 1 | Less than 1
2 | Maximum deviation | 75 kHz | 5 kHz
3 | Range of modulating frequency | 30 Hz to 15 kHz | 30 Hz to 3 kHz
4 | Maximum modulation index | 5 to 2500 | Slightly greater than 1
5 | Bandwidth | Large (approximately 15 times > NBFM) | Small (equal to AM)
6 | Noise | More suppressed | Less suppressed
7 | Applications | Entertainment broadcasting | FM mobile communication

Advantages of FM

1. Improved noise immunity.

2. Low power is required to be transmitted to obtain the same quality

of received signal at the receiver.

3. Covers a large area with the same amount of transmitted power

4. Transmitted power remains constant.

5. All the transmitted power is useful.

Disadvantages of FM

1. Very large bandwidth is required.

2. Since space wave propagation is used, the radius of transmission is limited by the line of sight.


Applications of FM:

Radio broadcasting.

Sound broadcasting in TV.

Satellite communication

Police wireless

Point to point communication.

Problem 1

For an FM modulator with a peak frequency deviation Δf = 20 kHz, a modulating signal frequency fm = 10 kHz, Vc = 10 V and a 500-kHz carrier, determine
a. the modulation index,
b. the minimum bandwidth using the Bessel table,
c. the bandwidth using Carson's rule, and
d. the frequency spectrum, showing relative amplitudes.

Given

Δf = 20 kHz, fm = 10 kHz, Vc = 10 V, fc = 500 kHz

Solution

(a) Modulation index,
mf = Δf / fm = 20 / 10 = 2


(b) From the Bessel table, mf = 2 gives n = 4 significant sideband pairs:
B = 2 (n × fm) = 2 (4 × 10 kHz) = 80 kHz

(c) Using Carson's rule:
B = 2(Δf + fm) = 2(20 + 10) kHz = 60 kHz

(d) For the frequency spectrum, the Bessel coefficients for mf = 2 are:
J0(2) = 0.22, J1(2) = 0.58, J2(2) = 0.35, J3(2) = 0.13


Frequency spectrum (amplitudes Jn(2) × 10 V): carrier 2.2 V at fc, with side-frequency pairs of 5.8 V, 3.5 V and 1.3 V spaced fm apart on either side.
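The side-frequency amplitudes used in this problem are Jn(mf) × Vc; the Bessel coefficients themselves can be reproduced from the series Jn(m) = Σk (−1)^k (m/2)^(n+2k) / (k!(n+k)!). A standard-library sketch — a numerical check, not the book's method of reading Table 1.2:

```python
from math import factorial

def bessel_j(n, m, terms=25):
    """First-kind Bessel function J_n(m), summed from its power series."""
    return sum((-1) ** k * (m / 2) ** (n + 2 * k) / (factorial(k) * factorial(n + k))
               for k in range(terms))

mf, Vc = 2, 10
for n in range(4):
    amp = bessel_j(n, mf) * Vc
    print(f"J{n}({mf}) = {bessel_j(n, mf):.2f} -> side-frequency amplitude {amp:.1f} V")
```

Running this reproduces J0..J3 = 0.22, 0.58, 0.35, 0.13 and the 2.2, 5.8, 3.5 and 1.3 V amplitudes quoted above.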

Problem 2

Determine
a. the deviation ratio and bandwidth for the worst-case (widest bandwidth) modulation index for an FM broadcast-band transmitter with a maximum frequency deviation of 75 kHz and a maximum modulating signal frequency of 15 kHz;
b. the deviation ratio and maximum bandwidth for an equal modulation index with only half the peak frequency deviation and modulating signal frequency.

Given

Δf(max) = 75 kHz, fm(max) = 15 kHz

Solution

a. Deviation Ratio (DR) = Δf(max) / fm(max) = 75 kHz / 15 kHz = 5

From the Bessel table, mf = 5 gives n = 8 significant sideband pairs:
B = 2(n × fm) Hz = 2(8 × 15,000) = 240,000 Hz (or) 240 kHz

b. With half the peak deviation, Δf = 37.5 kHz, and half the modulating frequency, fm = 7.5 kHz, the modulation index is
mf = Δf / fm = 37.5 kHz / 7.5 kHz = 5

Bandwidth B = 2(n × fm) Hz = 2(8 × 7,500) = 120 kHz


Problem 3

The actual frequency deviation of an FM transmitter is 10 kHz, and the maximum deviation allowed is 25 kHz. Find out the percentage modulation.

Given
Δf(actual) = 10 kHz, Δf(max) = 25 kHz

Solution

% Modulation = (Actual frequency deviation / Maximum allowed deviation) × 100
= (Δf(actual) / Δf(max)) × 100
= (10 kHz / 25 kHz) × 100

% modulation = 40 %

Problem 4

In an FM system, the maximum frequency deviation is 75 kHz and the maximum modulating frequency is 10 kHz. Calculate the deviation ratio and the bandwidth of the system using Carson's rule.

Given

Df(max) =75kHz

Fm(max) =10kHz

Mp =5 rad

Solution

a. Deviation ratio (DR) = Δf(max) / fm(max) = 75 kHz / 10 kHz = 7.5

b. Bandwidth using Carson's rule,
B = 2(Δf(max) + fm(max)) = 2[75 + 10] kHz = 170 kHz

Problem 5

For an FM modulator with a modulation index mf = 1, a

modulating Vm(t) =Vm sin (2p x 1000t), and an unmodulated carrier

Vc(t)=15 sin (2p x 500t) determine

a. Number of sets of significant side frequencies,

b. Their amplitudes,

c. Draw the frequency spectrum showing their relative amplitude.

Given

Modulation index, mf =1

Modulating signal Vm(t) =Vm sin (2p x 1000t)

Carrier signal Vc(t) =15 sin (2px 500t)

Solution

a. A modulation index mf = 1 means there are three sets of significant side frequencies.

b. The relative amplitudes of the carrier and side frequencies are

J0 = J0(mf) x Vc =J0(1) x 15 = 0.77 x 15=11.55 V

J1 =J1(1) x 15 = 0.44 x 15 = 6.6 V

J2 =J2(1) x 15 = 0.11 x 15 =1.65 V

J3 =J3(1) x 15 =0.02 x 15 = 0.3 V


Frequency spectrum: carrier J0 = 11.55 V at fc, with side-frequency pairs J1 = 6.6 V, J2 = 1.65 V and J3 = 0.3 V spaced fm apart on either side.

Problem 6

An FM wave is represented by v(t) = A sin(8 × 10⁶t + 6 sin 3 × 10⁴t). Calculate
a. the modulating frequency,
b. the carrier frequency,
c. the modulation index, and
d. the frequency deviation.

Given
ωc = 8 × 10⁶ rad/s, ωm = 3 × 10⁴ rad/s

Solution

a. Modulating frequency, fm = ωm / 2π = 3 × 10⁴ / 2π


= 4.77 kHz

b. Carrier frequency, fc = ωc / 2π = 8 × 10⁶ / 2π = 1.27 MHz

c. Modulation index, mf = 6

d. Frequency deviation, Δf = mf × fm = 6 × 4.77 kHz
Δf = 28.62 kHz
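The same extraction works for any wave of the form v(t) = A sin(ωct + mf sin ωmt); a small sketch using the values of this problem:

```python
from math import pi

omega_c, omega_m, mf = 8e6, 3e4, 6   # from the angle 8e6*t + 6*sin(3e4*t)

fm = omega_m / (2 * pi)              # modulating frequency (Hz)
fc = omega_c / (2 * pi)              # carrier frequency (Hz)
delta_f = mf * fm                    # frequency deviation (Hz)

print(round(fm / 1e3, 2), "kHz")       # 4.77 kHz
print(round(fc / 1e6, 2), "MHz")       # 1.27 MHz
print(round(delta_f / 1e3, 2), "kHz")  # 28.65 kHz (28.62 kHz with the rounded fm)
```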

1.14 COMPARISON OF VARIOUS ANALOG COMMUNICATION SYSTEMS

S.No | Parameter | AM | FM | PM
1 | Definition | Amplitude of the carrier is varied according to the amplitude of the modulating signal | Frequency of the carrier is varied according to the amplitude of the modulating signal | Phase of the carrier is varied according to the amplitude of the modulating signal
2 | Output voltage | vAM(t) = Vc sin ωct + (maVc/2)[cos(ωc − ωm)t − cos(ωc + ωm)t] | vFM(t) = Vc sin(ωct + mf sin ωmt) | vPM(t) = Vc sin(ωct + mp sin ωmt)
3 | Number of sidebands | Only two sidebands | Infinite number of sidebands | Infinite number of sidebands
4 | Bandwidth | BW = 2fm; not dependent on the modulation index | BW = 2(Δf + fm); depends on the modulation index | BW = 2(Δf + fm); depends on the modulation index
5 | Transmitted power | Depends on the modulation index | Constant, independent of the modulation index | Constant, independent of the modulation index
6 | Modulation index | ma = Vm/Vc; proportional to both modulating and carrier voltage | mf = kf·Vm/fm; proportional to the modulating voltage as well as the modulating frequency | mp = kp·Vm (radians); proportional to the modulating voltage only
7 | Noise interference | More | Minimum | Less than AM, but more than FM
8 | Depth of modulation | Has a limitation; cannot be increased above 1 | No limitation; inversely proportional to the modulating frequency | Remains the same if the modulating frequency is changed
9 | Fidelity | Poor, due to narrow bandwidth | Better, due to wide bandwidth | Better, due to wide bandwidth
10 | Adjacent channel interference | Present | Avoided, due to the wide frequency spectrum | Avoided
11 | Signal to noise ratio | Less | Better than that of PM | Inferior to that of FM
12 | Equipment used | Transmission and reception equipment is simple | Transmission and reception equipment is more complex | More complex
13 | Applications | Radio and TV broadcasting | Radio and TV broadcasting, police wireless, point to point communication | Used in some mobile systems
14 | Frequency spectrum | Carrier Vc at fc with sidebands maVc/2 at fc ± fm | Carrier J0(mf)Vc with sideband pairs Jn(mf)Vc at fc ± nfm | Carrier J0(mp)Vc with sideband pairs Jn(mp)Vc at fc ± nfm
15 | Output waveform | (figure: amplitude-varying envelope) | (figure: constant-amplitude, frequency-varying wave) | (figure: constant-amplitude, phase-varying wave)

Varying the amplitude of the carrier signal according to the amplitude variations of the modulating signal is known as amplitude modulation.
Spectrum: the figure shows the spectrum of the AM signal. It consists of the carrier (ƒc) and two sidebands at ƒc ± ƒm.

Figure: carrier of amplitude Ec at fc, with sidebands of amplitude maEc/2 at fc − fm and fc + fm.

Why are signals not transmitted directly in the low frequency range?

For low frequency signals, the required antenna is practically not possible to fabricate. High carrier frequencies require a reasonable antenna size for transmission and reception. High frequencies can also be transmitted using tropospheric scatter propagation, which is used to travel long distances.

ANALOG COMMUNICATION

An AM wave is given by
eAM = [100 + 70cos(3000t/2π) + 30cos(6000t/2π)] sin(10⁶t/2π)
Find the amplitude and frequency of the various sideband terms.

Here
Em1 = 70 and ω1 = 3000/2π rad/sec
Em2 = 30 and ω2 = 6000/2π rad/sec
Ec = 100 and ωc = 10⁶/2π rad/sec

Hence

m1 = Em1/Ec = 70/100 = 0.7

m2 = Em2/Ec = 30/100 = 0.3

Ec = 100
Sideband amplitudes:
m1Ec/2 = 35 at (ωc ± ω1)
m2Ec/2 = 15 at (ωc ± ω2)

Calculate the modulation index if the carrier amplitude is 20 V and the modulating signal amplitude is 15 V.

Solution:

Here Em= 15V

Ec = 20V

Modulation index, m = Em / Ec = 15/20 = 0.75

Percentage modulation = m * 100

= 75%

5. Define detection (or) demodulation.

Demodulation or detection is the process by which the message signal (original information or baseband signal) is recovered from the modulated signal at the receiver. The devices used for demodulation or detection are called demodulators or detectors.

The modulation index of AM is the ratio of the amplitude of the modulating signal (Em) to the amplitude of the carrier (Ec), i.e. m = Em / Ec

Carson's rule states that the bandwidth required to transmit an angle modulated wave is twice the sum of the peak frequency deviation and the highest modulating signal frequency:

BW = 2(δ + ƒm(max))

Here δ is the maximum frequency deviation and ƒm(max) is the maximum signal frequency.

In narrowband FM, the frequency spectrum consists of two major sidebands, like AM. The other sidebands are negligible and hence can be neglected. Therefore the bandwidth of narrowband FM is limited to only twice the highest modulating frequency.

If the deviation in carrier frequency is large enough that the other sidebands cannot be neglected, it is called wideband FM. The bandwidth of wideband FM is calculated as per Carson's rule.

ANALOG COMMUNICATION

Frequency modulation is the process in which the frequency of the carrier wave is changed in accordance with the instantaneous value of the message signal.

The modulation index of FM is the ratio of the frequency deviation to the modulating frequency:
Modulation index mf = δ/ƒm

Frequency deviation is the maximum change in the frequency of the carrier when it is acted on by a modulating signal. The frequency deviation is typically given as the peak frequency shift in hertz (Δf).

In FM, the total transmitted power always remains constant.

But with increased depth of modulation, the required bandwidth

is increased.

i). In AM system the bandwidth is finite. But FM system has in-

finite number of sidebands in addition to a single carrier.

ii). In FM system all the transmitted power is useful whereas in

AM most of the transmitted power is used by the carrier.

iii). Noise is very less in FM; hence there is an increase in the

signal to noise ratio.

Phase deviation is the change in phase of the carrier at a given instant of time, indicated with respect to the reference phase.

Angle modulation is the process in which the frequency or phase of the carrier wave is changed in accordance with the instantaneous value of the message signal.

In phase modulation, the phase of the carrier is varied in proportion to the amplitude variations of the modulating signal. The PM signal can be expressed mathematically as,

ePM = Ec sin(ωct + mp sin ωmt)

Here mp is the modulation index for phase modulation. It is given as mp = Φm, where Φm is the maximum value of phase change.

Noise has a greater effect on the higher modulating frequencies than on the lower ones. This effect can be reduced by artificially boosting the higher frequencies at the transmitter and correspondingly attenuating them at the receiver. This is pre-emphasis.

a. The amplitude of FM is constant, independent of the depth of modulation. Hence transmitter power remains constant in FM whereas it varies in AM.

b. Since amplitude of FM is constant, the noise interference is

minimum in FM. Any noise superimposing amplitude can be

removed with the help of amplitude limits. Whereas it is difficult

to remove amplitude variations due to noise in AM.

ANALOG COMMUNICATION

c. In FM, the depth of modulation can be increased to any value by increasing the deviation. This does not cause any distortion in the FM signal.

d. Since guard bands are provided in FM, there is less possibility of

adjacent channel interference.

e. Since space waves are used for FM, the radius of propagation

is limited to line of sight. Hence it is possible to operate several

independent transmitters on same frequency with minimum in-

terference.

f. Since FM uses UHF and VHF ranges, the noise interference is

minimum compared to AM which uses MF and HF ranges.

20.

A 107.6 MHz carrier is frequency modulated by a 7 kHz sine wave. The resultant FM signal has a frequency deviation of 50 kHz. Determine the modulation index of the FM wave.

Here δ = 50 kHz and ƒm = 7 kHz.
Modulation index = δ/ƒm = 50/7 = 7.142

21.

If the rms value of the aerial current before modulation is 12.5 A and during modulation is 14 A, calculate the percentage of modulation employed, assuming no distortion.

Here Itotal = 14 A and Ic = 12.5 A.

m = √[2(I²total/I²c − 1)] = √[2(14²/12.5² − 1)] = 0.71

22.

An AM broadcast transmitter radiates 9.5 kW of power with the carrier unmodulated and 10.925 kW when it is sinusoidally modulated. Calculate the modulation index.

Ptotal = 10.925 kW, Pc = 9.5 kW

m = √[2(Ptotal/Pc − 1)] = √[2(10.925/9.5 − 1)] = 0.55

The total power of an AM transmitter is 5 kW and the modulation percentage is 60%. How much is the carrier power?

Ptotal = 5 kW, m = 0.6, Pc = ?

Ptotal = Pc (1 + m²/2)
5 kW = Pc (1 + 0.6²/2)
Pc = 4.237 kW
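The power relation Pt = Pc(1 + m²/2) used in these problems can be inverted either way; a quick check (the function names are mine, not from the text):

```python
def total_power(pc, m):
    """Total AM power from carrier power pc and modulation index m."""
    return pc * (1 + m ** 2 / 2)

def carrier_power(pt, m):
    """Carrier power recovered from the total power pt."""
    return pt / (1 + m ** 2 / 2)

print(round(carrier_power(5e3, 0.6), 1))   # 4237.3 W, i.e. Pc = 4.237 kW
print(total_power(10e3, 0.5))              # 11250.0 W for a 10 kW carrier at m = 0.5
```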

What are the disadvantages of amplitude modulation?

1) Most of the transmitted power is carried by the carrier, so AM is not power efficient.
2) Because of the amplitude variations in the AM signal, the effect of noise is more.

25.

The antenna current of an AM transmitter is 8 A when only

carrier is sent, but it increases to 8.96 A when the carrier

is modulated by a single tone sinusoid. Find the percentage

modulation.

Here Itotal = 8.96 A and Ic = 8 A.

Itotal = Ic √(1 + m²/2)
8.96 = 8 √(1 + m²/2)
m = 0.713
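Both current problems rest on Itotal = Ic√(1 + m²/2), which inverts to m = √[2((Itotal/Ic)² − 1)]; a sketch:

```python
from math import sqrt

def mod_index_from_currents(i_total, i_carrier):
    """Modulation index from antenna currents: m = sqrt(2*((It/Ic)**2 - 1))."""
    return sqrt(2 * ((i_total / i_carrier) ** 2 - 1))

print(round(mod_index_from_currents(14, 12.5), 2))   # 0.71
print(round(mod_index_from_currents(8.96, 8), 3))    # 0.713
```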

For an AM envelope with a 20 V carrier that changes in amplitude by ±5 V, determine the maximum and minimum envelope amplitudes and the modulation coefficient.

Emax = 20 + 5 = 25 V
Emin = 20 − 5 = 15 V

Modulation index = (Emax − Emin)/(Emax + Emin) = (25 − 15)/(25 + 15) = 0.25

A carrier is frequency modulated by a sinusoidal modulating signal of 2 kHz, resulting in a maximum frequency deviation of 5 kHz. Find 1) the modulation index and 2) the bandwidth of the modulated signal.

Maximum frequency deviation δ = 5 kHz

1) Modulation index mf = δ/ƒm = 5 × 10³ / 2 × 10³ = 2.5

2) BW = 2(δ + ƒm(max)), where ƒm(max) is the maximum modulating frequency, given as 2 kHz.
Hence, BW = 2(5 + 2) kHz = 14 kHz


28. Calculate the bandwidth of commercial FM transmission assuming Δƒ = 75 kHz and W = 15 kHz.

Here δ = Δƒ = 75 kHz and ƒm(max) = W = 15 kHz
BW = 2(δ + ƒm(max)) = 2[75 + 15] kHz = 180 kHz

It is the ratio of the transmission bit rate to the minimum bandwidth required for a particular modulation scheme. It is denoted as Bη and given by

Bη = Transmission bit rate (bps) / Minimum bandwidth (Hz)
= (bits/s)/(cycles/s) = bits/cycle

What is the bandwidth of an angle modulated wave in terms of the frequency deviation?

BW = 2(Δf + fm) Hz

Where,
Δf = peak frequency deviation (Hz)
fm = modulating signal frequency (Hz)

Sl. No | FM | PM
1 | vFM(t) = Vc cos(ωct + mf sin ωmt) | vPM(t) = Vc cos(ωct + mp cos ωmt)
2 | Associated with the change in fc there is some phase change | Associated with the change in phase there is some change in fc
3 | mf is proportional to the modulating voltage as well as the modulating frequency fm | mp is proportional to the modulating voltage only
4 | It is possible to receive FM on a PM receiver | It is possible to receive PM on an FM receiver

Figure: AM signal waveform — the envelope follows the modulating signal Vm around the carrier Vc; below it, the DSB-SC waveform vDSB(t).

For an AM modulator with a carrier frequency of 100 kHz and a maximum modulating signal frequency of 5 kHz, determine the upper and lower side band frequencies and the bandwidth.

Upper side band fUSB = fc + fm = 100 + 5 = 105 kHz
Lower side band fLSB = fc − fm = 100 − 5 = 95 kHz
Bandwidth (B) = 2fm = 10 kHz


Spectrum of an FM signal: carrier J0(mf)·Vc at fc, with upper and lower sideband pairs J1(mf)·Vc, J2(mf)·Vc, ... spaced fm apart; the theoretical bandwidth is infinite.

In an FM system, the message information is transmitted by

variations of the instantaneous frequency of a sinusoidal carrier

wave, and its amplitude is maintained constant.

Any variation of the carrier amplitude at the receiver input must

result from noise or interference.

An amplitude limiter, following the IF section is used to remove

amplitude variations by clipping the modulated wave at the IF

section.

36. In an AM transmitter, the carrier power is 10kW and the

modulation index is 0.5. Calculate the total RF power deliv-

ered.

Given:

Carrier power Pc = 10kW

Modulation Index (ma) = 0.5

Solution:

Pt = Pc (1 + ma²/2)
= 10 × 10³ (1 + 0.5²/2)
Pt = 11.25 kW


REVIEW QUESTIONS

PART - A

1. What is meant by noise?

2. What are the types of noise?

3. Define shot noise.

4. What is flicker noise?

5. Define Amplitude modulation.

6. Differentiate between narrow band and wide band FM signal

7. What is demodulation?

8. Draw the spectrum of FM signal.

9. State Shannon’s Limit for channel capacity theorem. Give an

example.

10. Define Bandwidth efficiency.

11. Distinguish between FM and PM.

12. What is the bandwidth needed to transmit a 4 kHz voice signal using AM?

13. Define modulation and modulation index.

14. What is the purpose of limiter in FM receiver?

15. What is modulation index and percentage modulation in AM?

16. Draw the frequency spectrum and mention the band-width of AM

signal.

17. In an AM transmitter, the carrier power is 10kw and the modulation

index is 0.5. Calculate the total RF power delivered.

18. For an AM DSBFC modulator with a carrier frequency of 100KHz

and maximum modulating signal frequency of 5 KHz, determine up-

per and lower side band frequency and the bandwidth.

19. State Carson's rule.

20. In an amplitude modulation system, the carrier frequency is fc = 100 kHz. The maximum frequency of the signal is 5 kHz. Determine the lower and upper sidebands and the bandwidth of the AM signal.


frequency is 10 KHz. Find out the bandwidth using Carson's rule and

the modulation index.

22. If a 10 V carrier is amplitude modulated by two different frequencies

with, amplitudes 2 V and 3V respectively. Find the modulation index:

23. Write down the mathematical expressions for angle modulated wave.

24. A 200W carrier is modulated to a depth of 75%. Calculate the total

power in the modulated wave.

25. Find the peak phase deviation for a PM modulator with a deviation sensitivity K = 5 rad/V and a modulating signal Vm(t) = 2 cos(2π × 2000t).

26. What is the basic difference between AM and FM receivers?

27. Draw the frequency spectrum of AM signals.

28. Give the expression that relates the power carried with the modula-

tion index of AM.

29. What are the basic building blocks of phase locked loops?

30. Define the term 'angle modulation'.

31. Find the carrier power of a broadcast radio transmitter radiates 20

KW for the modulation index is 0.6.

32. Mention the advantages of the super heterodyne receiver over TRF

receiver.

33. Define modulation index for FM and PM.

34. List the applications of phase locked loop.

35. Define phase modulation.

36. What is the approximate bandwidth required to transmit a signal at

4 kHz using FM with frequency deviation of 75 kHz?

37. Draw the amplitude modulation waveforms with modulation index

m = 1, m< l and m > 1.

PART – B

1. Draw the block diagram of AM super heterodyne receiver and explain


2. With the help of a block diagram and theory explain FM demodula-

tion employing PLL.

3. (i).What is the need for modulation?

(ii).Explain with necessary diagram any one method for generation

of AM waves.

4. (i) With neat block diagram describe AM transmitter.

(ii). Derive for carrier power and transmitter power in AM in terms of

modulation index.

5. Draw the block diagram and explain generation of DSB-SC signal

using balanced modulator. If the percentage modulation is 100%,

how much percentage of the total power is present in the signal when

DSB-SC is used.

6. Define FM and PM modulation and write their equations. Describe

the generation of FM wave using Armstrong method.

7. Write a note on frequency spectrum analysis of angle modulated

waves.

8. Explain the band width requirements of angle modulated waves.

9. Compare FM and PM.

10. The output of an AM transmitter is given by

Calculate

(1) Carrier frequency

(2) Modulating frequency.

(3) Modulation index

(4) Carrier power if the load is 600 Ω

(5) Total power.

11. In an AM modulator, 500 KHz carrier of amplitude 20 V is modulated

by 10 KHz modulating signal which causes a change in the output

wave of ± 7.5 V. Determine


(2) Modulation Index

(3) Peak amplitude of upper and lower side frequency

(4) Maximum and minimum amplitudes of envelope.

12. An AM signal has the equation

(1) Find the carrier frequency.

(2) Find the frequency of the modulating signal.

(3) Find the value. Of m

(4) Find the peak voltage of the unmodulated carrier.

(5) Sketch the signal in the time domain, showing voltage and time

scales.

13. For an AM DSBFC wave with an unmodulated carrier voltage of 18 V

and a load resistance of 72Ω, determine the following

(i) Unmodulated carrier power (ii) Modulated carrier power (iii) .Total

sideband power

(iv) Upper and lower sideband powers (v) Total transmitted power.

14. For an AM DSBSC modulator with a carrier frequency fc = 100 KHz

and a maximum modulating signal fm = 5 KHz.

Determine

(1) the frequency limits for the upper and lower sidebands

(2) bandwidth

(3) sketch the output frequency spectrum.

15. What is noise? Explain briefly about sources of noise.


Shift Keying (MSK) –Phase Shift Keying (PSK) – BPSK – QPSK – 8 PSK –

16 PSK - Quadrature Amplitude Modulation (QAM) – 8 QAM – 16 QAM

– Bandwidth Efficiency– Comparison of various Digital Communication

System (ASK– FSK – PSK – QAM).

Unit 2

DIGITAL COMMUNICATION

2.1 INTRODUCTION

Communication is the transmission, reception and processing of information. Information is defined as knowledge (or) intelligence communicated (or) received.

In an analog communication system the information can be voice, picture (or) music; in a data communication system it can be alphanumeric codes, graphic symbols, database information etc. But the information cannot be transmitted as it is. It should be converted into electrical signals and then modulated.

The analog modulation schemes such as amplitude modulation, frequency modulation and phase modulation are being replaced by the digital modulation schemes. Digital communication techniques include transmission of digital pulses.


Advantages of digital communication include ease of processing and ease of multiplexing.

Definition

Digital communication systems are the systems that use low frequency digital information signals to modulate high frequency carriers, and the transmission takes place in the form of digital pulses.

2.2 DIGITAL TRANSMISSION SYSTEM

The figure shows the block diagram of a digital transmission system. A digital input passes through a terminal interface and is transmitted as digital pulses over the physical medium; an analog input is first converted by an ADC at the transmitter and back to analog by a DAC at the receiver.

A digital input is transmitted through the transmission medium and received by a receiver directly; otherwise (i.e. for an analog signal) it is converted into a digital signal, transmitted, and converted back to analog by the DAC at the receiver.

The digital pulses are transmitted directly over a pair of wires, co-axial cable (or) fiber optic cable. Since this requires a physical medium (channel) to communicate data between two points, it is suitable for short distance communication only, e.g. data transmitted from computer to printer, LAN connections etc.

2.3 DIGITAL RADIO

The figure shows the block diagram of a digital radio system. At the transmitter, the input data is precoded, modulated onto an analog carrier, band-pass filtered and power amplified before transmission over the channel; at the receiver, the signal is band-pass filtered and amplified, demodulated with the help of carrier and clock recovery, and decoded to produce the output data.

A digital radio system transmits digitally modulated signals between two or more points. Digital transmission systems require a physical medium (channel) such as a pair of wires, co-axial cable, optical fiber etc., but digital radio generally uses free space as its transmission medium.

Bit rate: Bit rate is the number of bits transmitted in one second. It is

expressed in bits per second (bps).

If N bits are transmitted per symbol, then the symbol rate becomes,

Symbol rate = Bit rate / N

Baud rate : Baud rate is the rate of change of a signal on transmission

medium after encoding and modulation have occurred .Thus Baud rate

is basically symbol rate.

Baud = 1 / ts

Where, ts → time of one signalling element

Information capacity is a measure of how much information can be propagated through a communications system; it is a function of bandwidth and transmission time. It is expressed in bits/sec.

Hartley’s law

Hartley’s law relates the information capacity to the bandwidth and the transmission time. It is expressed as,

I ∝ B × t ...(1)

Where,


I = Information capacity(bits/sec)

B = Bandwidth (Hertz)

The information is transmitted from a transmitter to a receiver through a transmission channel (or) medium which possesses two important characteristics: (1) noise and (2) bandwidth. These limit the maximum capacity of a channel to carry information.

Shannon's limit for information capacity gives the relationship among bandwidth, signal to noise ratio and information capacity:

I = B log₂(1 + S/N) bits/sec
(or)
I = 3.32 B log₁₀(1 + S/N) bits/sec

Where,
B = Bandwidth (Hertz), S/N = signal to noise power ratio (unitless)


I = B log₂(1 + S/(N₀B)) ...(1)

• Let us try to find out the maximum possible value of ‘I’. From the

equation for ‘I’ it is evident that it depends on two factors, which are

the Bandwidth ‘B’ and the S/N ratio.

For a noiseless channel, (S/N) → ∞ and so 'I' also will tend to ∞. Thus the noiseless channel will have an infinite capacity.

Now consider that some white Gaussian noise is present hence (S/N)

is not infinite.

not become infinite since N = N0B, will increase with the bandwidth

B.

This will reduce the value of (S/N) with increase in B, assuming the

signal power ‘S’ to be constant.

Thus we conclude that an ideal system with infinite bandwidth has a finite channel capacity. It is denoted by 'I∞' and given by,

I∞ = 1.44 S/N₀ ...(2)

Shannon’s information Rate

The maximum rate at which information can be transmitted, according to Shannon's theorem, is

∴ Rmax = Imax = 1.44 S/N₀ ...(3)


Practically it is very difficult to achieve this rate, because to achieve it the channel bandwidth needs to be equal to ∞, and no practical channel has an infinite bandwidth.

Solved Problems

1. The capacity of a channel is 30 Mbps and the bandwidth of this channel is 5 MHz. What is the SNR required in order to achieve this capacity?

Given: I = 30 Mbps, B = 5 MHz

Solution

The information capacity is given by,
I = B log₂(1 + S/N) ...(1)
(or)
I = 3.32 B log₁₀(1 + S/N)

30 × 10⁶ / (3.32 × 5 × 10⁶) = log₁₀(1 + S/N)
1.807 = log₁₀(1 + S/N)
Antilog(1.807) = 1 + S/N
64 = 1 + S/N
S/N = 63

SNR = 63 = 17.99 dB
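The required SNR can be solved in one step by inverting Shannon's formula, S/N = 2^(I/B) − 1; a sketch of the computation above:

```python
from math import log10

def required_snr(capacity_bps, bandwidth_hz):
    """Invert I = B*log2(1 + S/N) to get the needed S/N power ratio."""
    return 2 ** (capacity_bps / bandwidth_hz) - 1

snr = required_snr(30e6, 5e6)
print(snr)                        # 63.0
print(round(10 * log10(snr), 2))  # 17.99 dB
```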

2. Calculate the maximum capacity of a channel with a bandwidth of 3500 Hz and SNR of 20 dB. Also calculate the number of levels required to transmit at the maximum bit rate.

Solution

SNR = 20 dB = 100 (as a power ratio)

Imax = B log₂(1 + S/N) = 3500 log₂(1 + 100) = 3500 log₂(101)
Imax = 23,290 bits/sec

To transmit at this rate with M levels, where N = log₂M bits per symbol, the Nyquist relation gives
Rmax = 2B × N
23,290 = 2 × 3500 × N
N = 3.33 bits per symbol
∴ M = 2^3.33 ≈ 10 levels
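As a cross-check, the capacity and the level count can be computed with the Nyquist relation Rmax = 2B log₂M — a standard alternative route; the exact figures differ slightly from the rounded ones above:

```python
from math import log2

B, snr = 3500, 100                     # 3500 Hz, 20 dB as a power ratio
capacity = B * log2(1 + snr)           # Shannon capacity in bits/sec
bits_per_symbol = capacity / (2 * B)   # N in Rmax = 2*B*N
levels = 2 ** bits_per_symbol          # M = 2**N

print(round(capacity))     # about 23304 bits/sec
print(round(levels))       # about 10 levels
```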


2.6 M-ARY ENCODING

M-ary is a term derived from the word binary. M is simply a digit that corresponds to the number of conditions, levels (or) combinations possible for a given number of binary variables. M-ary encoding is used when there are more than two conditions possible.

The number of bits necessary to produce a given number of conditions is expressed mathematically as,

N = log₂ M ...(1)

Where, N = number of bits and M = number of conditions.

Rearranging, the number of conditions possible with N bits is,

2^N = M ...(2)

For example, with N = 1 bit, 2¹ = 2 conditions are possible.
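Equations (1) and (2) in code:

```python
from math import log2

# number of bits N needed for M conditions, and the check 2**N == M
for M in (2, 4, 8, 16):
    N = int(log2(M))
    print(f"M = {M} conditions need N = {N} bits (2**{N} = {2 ** N})")
```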

2.7.1 Introduction


In baseband transmission the digital data is transmitted as it is; there is no carrier (or) any modulator. This is suitable for only short distance transmission.

In digital CW modulation the digital signal modulates a high frequency carrier. Hence it is also called "digital continuous wave modulation". It is suitable for transmission over long distances.

There are three basic methods for the modulated transmission of digital signals. These methods are based on the three characteristics of a sinusoidal signal: amplitude, frequency and phase.

Need of modulation

The modem will modulate the digital data signal from the DTE (computer) into an analog signal. This analog signal is then transmitted on the telephone lines.

The digital data cannot be transmitted as it is over the telephone lines, because it consists of binary 0's and 1's and therefore the waveform changes its value abruptly from high to low (or) low to high. To transmit such abrupt changes without distortion, the communication medium needs to have a large bandwidth.

Therefore we have to convert the digital signal first into an analog signal, which needs lower bandwidth, by means of the modulation process.

Advantages

The advantage of digital CW modulation techniques such as ASK, FSK, PSK etc. used for transmission of data is that we can use the telephone lines for transmission of high speed data. Due to the use of CW modulation the BW requirement is reduced.

Disadvantages

We have to use a MODEM along with every computer. This makes the system costly and complex.

2.8 AMPLITUDE SHIFT KEYING (OR) DIGITAL AMPLITUDE

MODULATION (OR) OOK- SYSTEM

Definition

signal directly modulates (or) alternate the amplitude of the carrier

between two distinct levels (1 and 0).This digital modulation method is

also referred to as ON-OFF Keying (00K).


Let the carrier be a sinusoidal wave of frequency 'fc'. The ASK signal can then be expressed as follows,

VASK(t) = [1 + Vm(t)] (Ac/2) cos ωct ...(2)

Where, Vm(t) = +1 for binary '1' and Vm(t) = −1 for binary '0'.

Case 1

For binary '1', Vm(t) = +1:
VASK(t) = [1 + 1] (Ac/2) cos ωct
VASK(t) = Ac cos ωct ...(3)

Case 2

For binary '0', Vm(t) = −1:
VASK(t) = [1 − 1] (Ac/2) cos ωct
VASK(t) = 0 ...(4)

Thus, VASK(t) is either Ac cos ωct (or) 0. Hence the carrier is either 'ON' or 'OFF'; therefore ASK is also called ON-OFF keying.

2.8.3 ASK generator

The binary data, in the form of a unipolar NRZ (non-return to zero) signal, acts as one input of a product modulator; the carrier signal is applied as the other input, followed by a band-pass filter.

The carrier is present at the output only when a binary '1' is to be transmitted; when a binary '0' is to be sent, no carrier is transmitted, as shown in Figure 2.5.

Figure 2.7 ASK - Demodulator Circuit: the received ASK signal is multiplied by the local carrier Ac cos ωct, integrated over one bit interval Tb, and applied to a decision making device that compares the result with a threshold to recover the original data.

Case 1

When binary '1' is received, the output of the multiplier is given by,
Ac cos ωct × Ac cos ωct = Ac² cos² ωct


The integrator acts as a LPF; therefore only the low frequency component is produced at the output.

Ac² cos² ωct = Ac² (1 + cos 2ωct)/2
= Ac²/2 + (Ac²/2) cos 2ωct ...(2)

The second term represents the second order harmonic, which the LPF filters out; only the first term is obtained at the output:

Ac²/2 ...(3)

The output of the integrator is given to the decision making device, which compares it with the threshold value and produces the output logic 1 (i.e. binary '1').

Case 2

integrator and decision making device is equal to zero. Therefore the

output is logic ‘0’ (i.e) binary ‘0’.
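The two cases above can be checked with a short simulation. This is a minimal sketch, not the book's circuit: the carrier frequency (8 cycles per bit) and sample count are illustrative assumptions.

```python
import math

def ask_demodulate(bits, fc=8, samples_per_bit=64, ac=1.0):
    # Modulate each bit as OOK, then coherently detect it:
    # multiply by the carrier, integrate over Tb, compare with a threshold.
    dt = 1.0 / samples_per_bit            # bit interval Tb = 1 (illustrative)
    recovered = []
    for bit in bits:
        # Transmitter: carrier present for '1', absent for '0'
        tx = [bit * ac * math.cos(2 * math.pi * fc * k * dt)
              for k in range(samples_per_bit)]
        # Receiver: product with the coherent carrier, then integrate over Tb
        integ = sum(v * math.cos(2 * math.pi * fc * k * dt) * dt
                    for k, v in enumerate(tx))
        # Decision device: the integral is about Ac^2/2 for '1' and 0 for '0'
        recovered.append(1 if integ > ac * ac / 4 else 0)
    return recovered

print(ask_demodulate([1, 0, 1, 1, 0]))  # → [1, 0, 1, 1, 0]
```

The threshold Ac²/4 sits midway between the two possible integrator outputs, mirroring the decision-making device of Figure 2.7.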

Bit interval

The bit interval is the time required to send one single bit. It is the reciprocal of the bit rate.

Bit rate

Bit rate is the number of bits transmitted (or sent) in one second. It is expressed in bits per second (bps).

Bit rate = 1 / Bit interval = 1 / Tb = fb

Baud rate

Baud rate = fb / N        ...(1)

Therefore, the rate of change of the ASK waveform (baud) is the same as the rate of change of the binary input (bps); thus the bit rate equals the baud rate:

Baud = fb / N = fb / 1 = fb

where N = number of bits per symbol = 1.

Bandwidth of ASK

fa = fundamental frequency of the binary input = fb/2

BW = (fc + fb/2) − (fc − fb/2)

BW = fb

Advantages

Disadvantages


2.9 FREQUENCY SHIFT KEYING (FSK)

Definition

In FSK, the frequency of the carrier is shifted between two discrete values according to the binary symbol (0 or 1). The expression for FSK is

VFSK(t) = Ac cos[2π(fc + Vm(t) Δf) t]        ...(1)

Where,

Vm(t) = ±1 is the binary input signal and Δf is the peak frequency deviation.

From equation (1) it can be seen that the peak shift in the carrier frequency (fc) is proportional to the amplitude of the binary input signal Vm(t).

Case 1

FSK

Case 2


The carrier frequency, shifted in the frequency domain by the binary input signal, is as shown in Figure 2.8: logic 1 shifts the carrier up by +Δf to the mark frequency fm, and logic 0 shifts it down by −Δf to the space frequency fs.

As the binary input changes from logic 0 to logic 1 and vice versa, the output frequency shifts between two frequencies, and the frequency deviation is

Δf = |fm − fs| / 2        ...(4)

Where,

fm and fs are the mark and space frequencies.

The FSK waveforms are shown in the figure: for the binary input 1 0 1 0 1, the output frequency of the modulated carrier alternates fm, fs, fm, fs, fm.

The FSK modulator is built around a voltage-controlled oscillator: NRZ binary input → FSK modulator (VCO) → FSK output.

The VCO acts as the FSK generator; the input binary data is applied as the control input of the VCO.

If no binary input is applied (i.e., there is no input signal), the VCO generates the centre frequency, equal to the carrier frequency.

When binary 1 is applied, the VCO output shifts to the mark frequency fm (i.e., fc + Δf).

When binary 0 is applied, the VCO output shifts to the space frequency fs (i.e., fc − Δf).

We conclude that the VCO output frequency changes back and forth between the space and mark frequencies.
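The VCO behaviour can be sketched in a few lines. The numeric values (fc = 10 Hz, Δf = 2 Hz, 100 samples per bit) are illustrative assumptions, not values from the text.

```python
import math

def fsk_waveform(bits, fc=10.0, df=2.0, sample_rate=100, tb=1.0):
    # VCO model: the instantaneous frequency is fc + df for a mark
    # (logic 1) and fc - df for a space (logic 0).
    samples, freqs = [], []
    for bit in bits:
        f = fc + df if bit == 1 else fc - df
        freqs.append(f)
        n = int(sample_rate * tb)          # samples per bit interval
        samples.extend(math.cos(2 * math.pi * f * k / sample_rate)
                       for k in range(n))
    return samples, freqs

_, freqs = fsk_waveform([1, 0, 1])
print(freqs)  # → [12.0, 8.0, 12.0]
```

The per-bit frequency list makes the back-and-forth switching between mark (12 Hz) and space (8 Hz) explicit.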

2.9.5 FSK-detection

Figure 2.11 Block diagram of the FSK demodulator (non-coherent): the FSK input is applied through a power splitter to two band-pass filters tuned to fs and fm; each BPF output is rectified by an envelope detector, and a comparator of the two detector outputs produces the output data (original data).


The block diagram of the non-coherent FSK demodulator is shown in Figure 2.11.

The FSK input is applied to the two band-pass filters (BPFs) through a power splitter.

Each filter passes only the mark (or only the space) frequency on to its respective envelope detector.

The envelope detectors, in turn, indicate the total power in each passband, and the comparator responds to the larger of the two powers. When the mark-frequency power is larger, the output is taken as logic 1, and vice versa for logic 0.
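The BPF-plus-envelope-detector chain can be approximated digitally by correlating one received bit with quadrature components of each tone and comparing the resulting envelope powers. This is a sketch under assumed parameters (mark 12 Hz, space 8 Hz, 100 samples per bit), not the book's analog circuit.

```python
import math

def fsk_detect(samples, f_mark, f_space, sample_rate):
    # Correlate the received bit with cos/sin at each tone; the squared
    # envelope replaces the BPF + envelope detector pair, and the final
    # comparison plays the role of the comparator.
    def tone_power(f):
        i = sum(v * math.cos(2 * math.pi * f * k / sample_rate)
                for k, v in enumerate(samples))
        q = sum(v * math.sin(2 * math.pi * f * k / sample_rate)
                for k, v in enumerate(samples))
        return i * i + q * q
    return 1 if tone_power(f_mark) > tone_power(f_space) else 0

# One bit of mark-frequency tone is detected as logic 1
rx = [math.cos(2 * math.pi * 12.0 * k / 100) for k in range(100)]
print(fsk_detect(rx, f_mark=12.0, f_space=8.0, sample_rate=100))  # → 1
```

Because the two tones are chosen with an integer number of cycles per bit, the cross-correlation between them vanishes and the decision is unambiguous.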

In coherent FSK detection, the locally generated carriers are synchronized either in phase, in frequency, or both with the incoming FSK signal. The block diagram below shows a coherent FSK receiver.

Figure: Coherent FSK receiver — the FSK input is applied through a power splitter to two multipliers; a local carrier is applied to each multiplier, each multiplier output passes through an LPF, and a comparator produces the output data (original data).

The incoming FSK signal is applied to the two multipliers through the power splitter, and a local carrier is applied to each multiplier. The two frequencies are not the same as the transmitter reference frequency, and it is impractical to reproduce a local reference that is coherent with both of them; so coherent FSK detection is seldom used.

The multiplier outputs are passed through low-pass filters, and the filter outputs are applied to a comparator. When the mark-channel output is larger, the comparator output is logic 1, and vice versa for logic 0.

The most common circuit used for demodulation of BFSK is the phase-locked loop (PLL), which is shown in Figure 2.13.

The VCO centre frequency is set equal to the centre frequency of the FSK modulator, i.e., the carrier frequency fc. The phase comparator has two inputs: one is the received BFSK signal and the other is the VCO output.

The comparator compares both values and produces an error voltage that depends on the received frequency (mark frequency or space frequency).

The dc error voltage at the output of the phase comparator follows the frequency shift. The binary data is recovered from the two levels of the dc error voltage (0 or 1). Thus we get binary data at the output, as shown in Figure 2.14.

The bit-error performance of FSK is inferior to that of PSK or QAM systems, so it is not used for high-performance digital radio systems.

FSK is used in low-speed modems, which are used for data communications over analog voice-band telephone lines.

Bit rate

Bit rate is the number of bits transmitted (or sent) in one second. It is expressed in bits per second (bps).

Bit rate = 1 / Bit interval = 1 / Tb = fb

Baud rate

Baud rate = fb / N

For BFSK we use one bit (0 or 1) to represent one symbol. Therefore, the rate of change of the FSK waveform (baud) is the same as the rate of change of the binary input (bps); thus the bit rate equals the baud rate:

Baud rate = fb / 1 = fb

where N → number of bits per symbol = 1.

Bandwidth of BFSK

fa = fb/2 = fundamental frequency of the binary input

BW = (fc + fb/2) − (fc − fb/2)

BW = fb        ...(2)

2.9.7 Advantages of BFSK

• Implementation is simple and inexpensive.
• FSK is affected less by noise than BASK.

2.9.8 Disadvantages of BFSK

• The probability of error for a BFSK signal is higher than for BPSK:

P(e) = (1/2) erfc(√(Eb / 2N0))

• More bandwidth is required.
• The error rate is higher compared to BPSK.

2.10 MINIMUM SHIFT KEYING (OR) CONTINUOUS-PHASE FREQUENCY SHIFT KEYING

Minimum shift keying is a binary FSK in which the mark and space frequencies are synchronized with the input binary bit rate. Synchronous simply implies that there is a precise time relationship between the two; it does not mean they are equal.

With CP-FSK, the mark and space frequencies are selected such that they are separated from the centre frequency by an exact multiple of one-half the bit rate [fm and fs = n(fb/2), where n is any integer]. This ensures a smooth phase transition in the analog output signal when it changes from a mark to a space frequency, or vice versa.
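The phase continuity can be demonstrated numerically by accumulating phase across bit boundaries instead of restarting the oscillator. This is a sketch with assumed values (fc = 4 Hz, fb = 1, n = 1 in the n(fb/2) rule, 64 samples per bit).

```python
import math

def cpfsk_samples(bits, fc=4.0, fb=1.0, n=64):
    # Mark/space offsets are +/- fb/2, and the phase is accumulated
    # continuously across bit boundaries, so the waveform never jumps.
    dt = 1.0 / (n * fb)                    # n samples per bit
    phase, out = 0.0, []
    for bit in bits:
        f = fc + fb / 2 if bit else fc - fb / 2
        for _ in range(n):
            out.append(math.cos(phase))
            phase += 2 * math.pi * f * dt  # integrate frequency into phase
    return out

wave = cpfsk_samples([1, 0, 1])
# With continuous phase, no sample-to-sample jump exceeds one phase step
max_jump = max(abs(b - a) for a, b in zip(wave, wave[1:]))
print(max_jump < 0.5)  # → True
```

Restarting each bit at zero phase instead (conventional FSK with independent oscillators) would produce jumps of up to the full carrier amplitude at the bit boundaries.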

In conventional binary FSK, when the input changes from a logic 1 to a logic 0 and vice versa, there is an abrupt phase discontinuity in the analog signal. When this occurs, the demodulator has trouble following the frequency shift; consequently, an error may occur.

In CP-FSK the frequency changes occur at a smooth transition. Consequently, there are no phase discontinuities, and CP-FSK has a better bit-error performance than conventional binary FSK for a given signal-to-noise ratio.

The disadvantage of CP-FSK is that it requires synchronization circuits and is, therefore, more expensive to implement.

2.11 PHASE SHIFT KEYING (PSK)

Of ASK, FSK and PSK, phase shift keying (PSK) is the most efficient of the three modulation methods; therefore, it is used for high bit rates. In PSK, the phase of the carrier is varied according to the data bit to be transmitted.

Binary PSK

BPSK is similar to conventional phase modulation except that with PSK the input is a binary digital signal and only a limited number of output phases are possible.

The number of output phase conditions is given by

M = 2^N        ...(1)

where N = 1 and M = 2. Therefore, with BPSK, two phases (2¹ = 2) are possible for the carrier: one phase represents logic 1 and the other phase represents logic 0.

¾¾ As the input digital signal changes state (i.e., from '1' to '0' or from '0' to '1'), the phase of the output carrier shifts between two angles that are separated by 180°.

¾¾ Hence, other names for BPSK are "phase reversal keying" (PRK) and "biphase modulation".

The BPSK output is the product of the digital data and the carrier (CW) signal:

VBPSK(t) = Vm(t) sin ωct

Where,

Vm(t) = ±1 is the bipolar digital data.

BPSK modulation (or) generation

The balanced modulator acts as a phase-reversing switch.

The binary data signal (0's and 1's) is converted from a unipolar signal into an NRZ (non-return to zero) bipolar signal by a level converter.

The bipolar signal is applied to the multiplier (balanced modulator) as one of the inputs. The other input to the multiplier is the carrier sin ωct.

Figure: BPSK modulator — the unipolar (UP) binary data input (+V/0) is converted to a bipolar (BP) NRZ signal (±V) by the level converter (UP to BP); the balanced modulator multiplies it with the carrier sin ωct supplied by the reference carrier oscillator through a buffer, and a band-pass filter gives the PSK-modulated output.

The output of the balanced modulator (multiplier) is either the carrier signal (sin ωct) or the phase-shifted carrier signal (−sin ωct), depending on whether the binary input is logic 1 or logic 0 respectively.

The operation of the balanced modulator is explained as follows for the binary inputs logic 1 and logic 0.

2.11.3.1 Balanced Ring Modulator (Balanced Modulator)

Figure 2.19 Shows the balanced ring modulator circuit

The operation is explained with the assumption that the diodes act as perfect switches and that they are switched ON and OFF by the digital data signal.

For the binary input '1', the circuit operates as shown in Figure 2.20: the diodes D1 and D2 are ON (forward biased) while diodes D3 and D4 are OFF (reverse biased).

With the polarity shown, the carrier voltage is developed across the

transformer T2 in phase with the carrier voltage across T1.

Hence, the output signal is in phase with the carrier input signal.

For the binary input '0', the circuit operates as shown in Figure 2.21: the diodes D1 and D2 are reverse biased and remain OFF, whereas D3 and D4 are forward biased and remain ON.

With the polarity shown, the carrier voltage developed across transformer T2 is out of phase with the carrier voltage across T1.

Hence, the output signal is out of phase with the carrier input signal.

Output Waveforms

Truth table

Binary input    Output phase
Logic 0         180°
Logic 1         0°

Phasor diagram

−sin ωct (180°, logic 0)        sin ωct (0°, logic 1)

Constellation diagram

180° (logic 0)        0° (logic 1)

Figure 2.24 Constellation diagram for BPSK

• Figure 2.25 below shows the simplified block diagram of a BPSK receiver.

• The input BPSK signal can be +sin ωct or −sin ωct.

• The coherent carrier recovery circuit recovers a carrier signal that is both frequency- and phase-coherent with the transmitted carrier.

• The balanced modulator produces the product of the two inputs (the BPSK signal and the recovered carrier).

• The low-pass filter (LPF) separates the recovered binary data from the complex demodulated signal.

The operation is explained mathematically as follows.

Case 1

For a BPSK input signal of +sin ωct (logic 1), the output of the balanced modulator is

(sin ωct)(sin ωct) = sin²ωct = (1/2)(1 − cos 2ωct)

= 1/2 − (1/2) cos 2ωct
   ↑           ↑
constant   second harmonic term

The second term is filtered out by the LPF, which leaves only the positive voltage (+1/2 V). A positive voltage represents a demodulated output of logic 1.

Case 2

• For a BPSK input of −sin ωct (logic 0), the output of the balanced modulator is

(−sin ωct)(sin ωct) = −sin²ωct = −(1/2)(1 − cos 2ωct)

= −1/2 + (1/2) cos 2ωct
    ↑           ↑
constant   second harmonic term

The second term is filtered out, which leaves only the negative voltage (−1/2 V). A negative voltage represents a demodulated output of logic 0.

In both cases, the LPF output is applied to the level detector and clock recovery circuit. At the output of the level detector we get:

+1/2 V → logic 1

−1/2 V → logic 0

Thus the binary signal is obtained at the output.
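The two cases can be reproduced numerically. This is a minimal sketch with assumed sampling parameters (4 carrier cycles over 128 samples per symbol): each symbol is multiplied by the recovered carrier and averaged, which plays the role of the LPF.

```python
import math

def bpsk_demodulate(symbols, fc=4, n=128):
    # Each symbol is +1 (transmitted +sin wct, logic 1) or -1
    # (transmitted -sin wct, logic 0); multiply by the recovered carrier
    # and average (the LPF step), then apply the level detector.
    bits = []
    for sign in symbols:
        avg = sum(sign * math.sin(2 * math.pi * fc * k / n) ** 2
                  for k in range(n)) / n
        bits.append(1 if avg > 0 else 0)   # avg is about +1/2 or -1/2
    return bits

print(bpsk_demodulate([+1, -1, -1, +1]))  # → [1, 0, 0, 1]
```

The averaged product is +1/2 or −1/2 exactly as in the derivation above, so the sign alone carries the data.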

Advantages

(i) BPSK has a bandwidth which is lower than that of a BFSK signal.

(ii) BPSK has the best performance of all the systems in the presence of noise; it gives the minimum probability of error.

Disadvantage

Generation and detection of BPSK are not easy; they are quite complicated, because synchronous (coherent) demodulation is used to recover the original signal from the BPSK signal.

Bit rate

Bit rate is the number of bits transmitted (or sent) in one second:

Bit rate = 1 / Bit interval = 1 / Tb = fb

Baud rate

Baud rate = fb / N

For BPSK we use one bit (0 or 1) to represent one symbol. Therefore, the rate of change of the PSK waveform (baud) is the same as the rate of change of the binary input (bps); thus the bit rate equals the baud rate:

Baud rate = fb / 1 = fb, where N = 1 is the number of bits per symbol.

Bandwidth of BPSK

BW = (fc + fb/2) − (fc − fb/2)

BW = fb        ...(2)

Comparison of ASK, FSK and PSK

S.No   Parameter                          ASK                        FSK                        PSK
i      Variable characteristic            Amplitude                  Frequency                  Phase
ii     Bandwidth (Hz)                     fb                         fb                         fb
iii    Noise immunity                     Low                        High                       High
iv     Error probability                  High                       Low                        Low
v      Performance in presence of noise   Poor                       Better than ASK            Better than FSK
vi     Complexity                         Simple                     Moderately complex         Very complex
vii    Bit rate                           Suitable up to 100 bps     Suitable up to 1200 bps    Suitable for high bit rates
viii   Detection method                   Envelope                   Envelope                   Coherent

2.12 DIFFERENTIAL PHASE SHIFT KEYING (DPSK)

DBPSK is a non-coherent version of PSK: DPSK does not need a synchronous (coherent) carrier at the demodulator.

The information in the binary input is contained in the difference between two successive signaling elements rather than in the absolute phase.

In the transmitter, the input binary data is differentially encoded by an XNOR gate prior to entering the BPSK modulator (balanced modulator).

For the first data bit, there is no preceding bit with which to compare

it . Therefore, an initial reference bit is assumed.

If the initial reference bit is assumed a logic 1, the output from the

XNOR circuit is simply the complement of that shown in timing

diagram.

Figure 2.27 shows the relationship between the input data , the XNOR

output data, and the phase at the output of the balanced modulator.

The first bit (data bit) is XNORed with the reference bit. If they are the same, the XNOR output is a logic 1; if they are different, the XNOR output is a logic 0.

The XNOR output is applied to the balanced modulator: a logic 1 produces +sin ωct at the output and a logic 0 produces −sin ωct at the output.

Figure 2.28 and Figure 2.29 shows the block diagram and timing

sequence for a DBPSK receiver.

In the receiver, the received signal is delayed by one bit time and then compared with the next signaling element in the balanced modulator. If they are the same, a logic 1 (+ voltage) is generated; if they are different, a logic 0 (− voltage) is generated.

(+sin ωct)(+sin ωct) = 1/2 − (1/2) cos 2ωct

(−sin ωct)(−sin ωct) = 1/2 − (1/2) cos 2ωct

(−sin ωct)(+sin ωct) = −1/2 + (1/2) cos 2ωct

(+sin ωct)(−sin ωct) = −1/2 + (1/2) cos 2ωct

If reference phase is incorrectly assumed, then only the first

demodulated bit is in error.
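The XNOR encoding rule and the delay-and-compare decoding rule can be checked with a short round-trip sketch; the bit patterns used are arbitrary examples.

```python
def dpsk_encode(data, reference=1):
    # Differential encoding: XNOR each data bit with the previously
    # encoded bit (the first comparison uses an assumed reference bit).
    encoded, prev = [], reference
    for bit in data:
        prev = 1 if bit == prev else 0    # XNOR with previous encoded bit
        encoded.append(prev)
    return encoded

def dpsk_decode(encoded, reference=1):
    # Receiver: compare each element with the one before it; equal
    # phases give logic 1, different phases give logic 0.
    decoded, prev = [], reference
    for bit in encoded:
        decoded.append(1 if bit == prev else 0)
        prev = bit
    return decoded

data = [1, 0, 1, 1, 0, 0]
assert dpsk_decode(dpsk_encode(data)) == data
print("round trip ok")
```

Decoding the same stream with the wrong reference bit corrupts only the first demodulated bit, which matches the remark above about an incorrectly assumed reference phase.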

Bandwidth

With DPSK, only one bit is transmitted at a time (i.e., N = 1), so the bandwidth is the same as for BPSK.

Advantages

Disadvantage

The bandwidth for BPSK and DPSK is the same. Mathematically, the output of the BPSK modulator is proportional to

Output = (sin ωat)(sin ωct)

where ωat = 2πfat = 2π(fb/2)t

Output = sin(2π(fb/2)t) · sin(2πfct)

= (1/2) cos 2π(fc − fb/2)t − (1/2) cos 2π(fc + fb/2)t

The output frequency spectrum extends from fc − fb/2 to fc + fb/2, and the minimum bandwidth is

BW = (fc + fb/2) − (fc − fb/2) = 2(fb/2)

BW = fb

2.13 QUADRATURE PHASE SHIFT KEYING (QPSK)

QPSK is an M-ary encoding technique in which two successive bits in a bit stream are combined together to form a message, and each message is represented by a distinct value of phase shift of the carrier.

N = 2

M = 2^N = 2² = 4

Therefore, with QPSK, the binary input data are combined into groups of two bits called dibits. There are four possible output phases (+45°, +135°, −45° and −135°).

2.13.2 QPSK-Transmitter

Two bits (a dibit) are clocked into the bit splitter. After both bits have been serially inputted, they are simultaneously outputted in parallel.

One bit is directed to the I-channel and the other to the Q-channel, each as one input to a balanced modulator.

The carrier sin ωct is applied to the I-balanced modulator as its other input, and a 90° phase-shifted carrier is applied as the other input to the Q-balanced modulator.

It can be seen that once a dibit has been split into the I and Q channels, the operation is the same as in a BPSK modulator.


I-balanced modulator

Case 1

Case 2

Q - Balanced modulator

Case 1

Case 2

Linear summer

When the linear summer combines the two quadrature (90° out of phase) signals, there are four possible resultant phasors, given by these expressions:

Truth table

I    Q    Output phase
0    0    −135°
0    1    +135°
1    0    −45°
1    1    +45°

Table 2.2 Truth Table for QPSK
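The phases in Table 2.2 follow directly from the summer output I·sin ωct + Q·cos ωct with I and Q mapped to ±1. A quick sketch (the mapping function name is ours, not the book's):

```python
import math

# Dibit-to-phase mapping from Table 2.2: (I, Q) -> output phase in degrees
QPSK_PHASE = {(0, 0): -135, (0, 1): 135, (1, 0): -45, (1, 1): 45}

def qpsk_phase(i_bit, q_bit):
    # I modulates sin wct (+/-1) and Q modulates cos wct (+/-1);
    # the resultant of the linear summer has phase atan2(Q, I).
    i_amp = 1 if i_bit else -1
    q_amp = 1 if q_bit else -1
    return math.degrees(math.atan2(q_amp, i_amp))

for dibit, expected in QPSK_PHASE.items():
    assert round(qpsk_phase(*dibit)) == expected
print("all four dibits match Table 2.2")
```

Writing a·sin ωct + b·cos ωct as R·sin(ωct + φ) gives φ = atan2(b, a), which is why the four phasors land at ±45° and ±135°.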

The output of the linear summer is transmitted directly through the channel.

Figure 2.32 (a) Phasor Diagram, (b) Constellation Diagram for QPSK

• Consider the received input (cos ωct − sin ωct); how the QPSK receiver operates on this input and detects the output dibit (01) is explained below.

Power splitter

The incoming QPSK signal may be any one of the four possible

output phases shown in equation 1,2, 3 and 4. For example consider

the QPSK signal is (cos ωct –sin ωct ) is received then the power splitter

directs the input QPSK signal to the I and Q product detectors and also

to the carrier recovery circuit.

The carrier recovery circuit reproduces the original transmit carrier oscillator signal. The recovered carrier must be frequency- and phase-coherent with the transmit reference carrier. This recovered carrier is applied directly to one input of the I-product detector, and shifted by 90° before being applied to one input of the Q-product detector.

Product detector

The operation of the product detectors, which recover the original I and Q data bits, is as follows.

The received QPSK- signal (cos wct-sin wct) is one of the inputs

to the I-product detector and the other input is the recovered carrier

(sin wct). The output of the I –product detector is,

I = (cos ωct − sin ωct)(sin ωct)

= (1/2) sin(ωc + ωc)t + (1/2) sin(ωc − ωc)t − (1/2)(1 − cos 2ωct)

= (1/2) sin 2ωct (filtered out) + (1/2) sin 0 (= 0) − 1/2 + (1/2) cos 2ωct (filtered out)

I = −(1/2) V

I = logic 0

Q-product detector

The received QPSK signal (cos ωct − sin ωct) is one of the inputs to the Q-product detector, and the other input is the recovered carrier shifted 90° in phase (cos ωct). The output of the Q-product detector is

Q = (cos ωct − sin ωct)(cos ωct)

= (1/2)(1 + cos 2ωct) − (1/2) sin(ωc + ωc)t − (1/2) sin(ωc − ωc)t

= 1/2 + (1/2) cos 2ωct (filtered out) − (1/2) sin 2ωct (filtered out) − (1/2) sin 0 (= 0)

Q = +(1/2) V

Q = logic 1
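The two product-detector results above can be verified numerically; averaging over whole carrier cycles plays the role of the LPF. The sampling choices (4 cycles over 256 samples) are illustrative assumptions.

```python
import math

n, fc = 256, 4
w = lambda k: 2 * math.pi * fc * k / n

def product_detect(received, carrier):
    # Multiply sample-wise with the recovered carrier and average (LPF)
    return sum(received(k) * carrier(k) for k in range(n)) / n

rx = lambda k: math.cos(w(k)) - math.sin(w(k))    # received (cos wct - sin wct)
I = product_detect(rx, lambda k: math.sin(w(k)))  # recovered carrier sin wct
Q = product_detect(rx, lambda k: math.cos(w(k)))  # 90-degree shifted carrier
print(round(I, 3), round(Q, 3))  # → -0.5 0.5
```

The −1/2 V and +1/2 V outputs reproduce the derivation: I = logic 0 and Q = logic 1 for the received dibit 01.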

The demodulated Q and I bits (1 & 0 respectively) corresponds

to the constellation diagram and truth table for the QPSK –modulator

shows in Figure 2.32.

The outputs of the product detectors are fed to the bit combining

circuit , where they are converted from parallel I and Q data channels to

a single binary output data stream.

Clock recovery

Clock recovery is the process of recovering the clock signal from the received signal; it is identical to the transmitter clock. With this clock signal, a single binary output data stream is obtained.

With QPSK, because the input data are divided into two channels, the bit rate in either the I or the Q channel is equal to one-half of the input data rate (fb/2).

The output of the balanced modulators can be expressed mathematically as

Output = (sin ωat)(sin ωct)

where ωat = 2πfat and ωct = 2πfct

For QPSK, fa = (fb/2)/2 = fb/4

∴ Output = sin(2π(fb/4)t) · sin(2πfct)

= (1/2) cos 2π(fc − fb/4)t − (1/2) cos 2π(fc + fb/4)t

The output frequency spectrum extends from fc − fb/4 to fc + fb/4, and the minimum bandwidth fN is

(fc + fb/4) − (fc − fb/4) = 2(fb/4) = fb/2

∴ Bandwidth = fb/2

Advantages of QPSK

2. The baud rate is half the bit rate; therefore, more effective utilization of the available bandwidth of the transmission channel is achieved.

3. Low error probability.

4. Due to these advantages, QPSK is used for very high bit-rate data transmission.

Disadvantages

More complex circuitry is needed, since QPSK allows the bit rate to be double the bit rate of PSK without increasing the bandwidth; QPSK is thus a more complex system than PSK.

Comparison of BPSK and QPSK

S.No   Parameter                  BPSK                                    QPSK
1      Variable characteristic    Phase                                   Phase
2      Type of modulation         Two-level                               Four-level (phase)
3      Type of representation     A binary bit is represented by one      A group of two binary bits is represented
                                  phase state                             by one phase state
4      Bit rate / baud rate       Bit rate = baud rate                    Bit rate = 2 × baud rate
5      Detection method           Coherent                                Coherent
6      Complexity                 Complex                                 Very complex
7      Applications               Suitable for applications that need     Very high bit rate
                                  high bit rate

2.14 8-PSK SYSTEM

8-PSK is an M-ary encoding technique producing eight different output phases. With 8-PSK, N = 3, M = 8 and there are eight possible output phases. To encode eight different phases, the incoming bits are encoded in groups of three, called tribits (2³ = 8).

2.14.1 8-PSK Transmitter

A block diagram of an 8-PSK modulator is as shown in Figure

2.34.

I-channel truth table:

I    C    Output
0    0    −0.541 V
0    1    −1.307 V
1    0    +0.541 V
1    1    +1.307 V

Q-channel truth table:

Q    C    Output
0    0    −1.307 V
0    1    −0.541 V
1    0    +0.541 V
1    1    +1.307 V

Table 2.5 Q-Channel Truth Table

The four possible PAM levels at the output of each 2-to-4 level converter are +1.307 V, +0.541 V, −0.541 V and −1.307 V.

The incoming serial bit stream is converted to a parallel, three-channel output: the I (or in-phase) channel, the Q (or in-quadrature) channel, and the C (or control) channel.

The bits in the I and C channels enter the I-channel 2-to-4 level converter, and the bits in the Q and C channels enter the Q-channel 2-to-4 level converter.

The 2-to-4 level converters are parallel-input digital-to-analog converters (DACs); with two input bits, four output voltages are possible.

The algorithm for the DACs is quite simple. The I (or Q) bit determines the polarity of the output analog signal (logic 1 = +V and logic 0 = −V), whereas the C (or C̄) bit determines the magnitude (logic 1 = 1.307 V and logic 0 = 0.541 V). Consequently, with two magnitudes and two polarities, four different output conditions are possible.

For example

Consider the tribit input Q = 0, I = 0, C = 0 (000).

I-channel

The inputs to the I-channel 2-to-4 level converter are I = 0 and C = 0; the output is −0.541 V.

The two inputs to the I-channel product modulator are −0.541 and sin ωct. The output is

I = (−0.541)(sin ωct) = −0.541 sin ωct

Q-channel

The inputs to the Q-channel 2-to-4 level converter are Q = 0 and C̄ = 1; the output is −1.307 V.

The two inputs to the Q-channel product modulator are −1.307 and cos ωct. The output is

Q = (−1.307)(cos ωct) = −1.307 cos ωct

The outputs of the I- and Q-channel product modulators are combined in the linear summer and produce a modulated output of

Summer output = −0.541 sin ωct − 1.307 cos ωct

For the remaining tribit codes (001, 010, 011, 100, 101, 110 and 111), the procedure is the same.

Phasor diagram

Figure 2.35 (a) Phasor diagram for 8-PSK — the eight phasors are of the form ±0.541 sin ωct ± 1.307 cos ωct and ±1.307 sin ωct ± 0.541 cos ωct (for example, tribit 000 corresponds to −0.541 sin ωct − 1.307 cos ωct, and tribit 111 to 1.307 sin ωct + 0.541 cos ωct).

Truth Table

Q    I    C    Output phase
0    0    0    −112.5°
0    0    1    −157.5°
0    1    0    −67.5°
0    1    1    −22.5°
1    0    0    +112.5°
1    0    1    +157.5°
1    1    0    +67.5°
1    1    1    +22.5°

Table 2.6 Truth table for 8-PSK
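The phases of Table 2.6 can be regenerated from the DAC rule (polarity from I/Q, magnitude from C, with C inverted before the Q-channel converter). This sketch assumes that inversion, consistent with the worked example above:

```python
import math

def tribit_phase(q, i, c):
    # Polarity from the I/Q bits; the C bit sets the I magnitude and the
    # inverted C bit (C-bar) sets the Q magnitude. The summer output
    # I*sin(wct) + Q*cos(wct) then has phase atan2(Q, I).
    mag = lambda bit: 1.307 if bit else 0.541
    i_amp = (1 if i else -1) * mag(c)
    q_amp = (1 if q else -1) * mag(1 - c)
    return round(math.degrees(math.atan2(q_amp, i_amp)), 1)

table_2_6 = {(0, 0, 0): -112.5, (0, 0, 1): -157.5, (0, 1, 0): -67.5,
             (0, 1, 1): -22.5, (1, 0, 0): 112.5, (1, 0, 1): 157.5,
             (1, 1, 0): 67.5, (1, 1, 1): 22.5}
assert all(tribit_phase(*t) == p for t, p in table_2_6.items())
print("all eight tribits match Table 2.6")
```

With two magnitudes and two polarities per rail, the eight resultants fall on the 22.5°/67.5°-family of angles, spaced 45° apart.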

Constellation Diagram

• The power splitter directs the input 8-PSK signal to the I and Q product detectors and to the carrier recovery circuit.

• The carrier recovery circuit recovers the reference carrier signal.

• The incoming 8-PSK signal is mixed with the recovered carrier in the I-product detector and with a quadrature carrier in the Q-product detector.

• The output of the product detectors are 4-level PAM-signals that are

fed to the 4-to-2 level analog to digital converters (ADCs).

• The outputs from the I-channel 4-to-2 level converter are the I and C

bits , whereas the outputs from the Q-channel 4-to-2 level converter

are the Q and C bits.

• The parallel –to-serial logic circuit converts the I/C and Q/C bit pairs

to serial Q, I and C output data streams.

For example

Consider the received 8-PSK signal −0.541 sin ωct − 1.307 cos ωct.

I-channel

One input to the I-product detector is the received 8-PSK signal (i.e., −0.541 sin ωct − 1.307 cos ωct) and the other input is the recovered carrier sin ωct. The output is

I = (−0.541 sin ωct − 1.307 cos ωct)(sin ωct)

= −0.541 (1 − cos 2ωct)/2 − 1.307 sin ωct · cos ωct

= −0.2705 + 0.2705 cos 2ωct − 0.6535 sin(ωc + ωc)t − 0.6535 sin(ωc − ωc)t

= −0.2705 + 0.2705 cos 2ωct (filtered out) − 0.6535 sin 2ωct (filtered out) − 0.6535 sin 0 (= 0)

= −0.2705 V

I = logic 0, C = logic 0

Q-channel

One input to the Q-product detector is the received 8-PSK signal (−0.541 sin ωct − 1.307 cos ωct) and the other input is the quadrature carrier cos ωct. The output is

Q = (−0.541 sin ωct − 1.307 cos ωct)(cos ωct)

= −0.541 (sin(ωc + ωc)t + sin(ωc − ωc)t)/2 − 1.307 (1 + cos 2ωct)/2

= −0.2705 sin 2ωct (filtered out) − 0.2705 sin 0 (= 0) − 0.6535 − 0.6535 cos 2ωct (filtered out)

= −0.6535 V

Q = logic 0, C = logic 1

For the remaining tribits, the procedure is the same. The bandwidth of 8-PSK can be expressed mathematically as follows.

For 8-PSK, the bit rate in each channel is fb/3; therefore, fa = (fb/3)/2 = fb/6.

Output = sin 2πfat · sin 2πfct

= (1/2) cos 2π(fc − fb/6)t − (1/2) cos 2π(fc + fb/6)t

The output frequency spectrum extends from fc − fb/6 to fc + fb/6, and the minimum bandwidth fN is

BW = (fc + fb/6) − (fc − fb/6)

= 2(fb/6)

BW = fb/3

2.15 16-PSK

16-PSK is an M-ary encoding technique with sixteen different output phases possible.

• With 16-PSK, four bits (called a quadbit) are combined, producing 16 different phases (where N = 4 and M = 2^N, so M = 16).

• The minimum bandwidth and baud equal one-fourth the bit rate (fb/4).

• For 16-PSK, the angular separation between adjacent output phases is only 22.5°. Therefore, a 16-PSK signal can undergo only up to a 11.25° phase shift during transmission and still retain its integrity.

• Figure 2.38 and Table 2.7 show the constellation diagram and the truth table for 16-PSK respectively.

Bit code    Phase
0000        11.25°
0001        33.75°
0010        56.25°
0011        78.75°
0100        101.25°
0101        123.75°
0110        146.25°
0111        168.75°
1000        191.25°
1001        213.75°
1010        236.25°
1011        258.75°
1100        281.25°
1101        303.75°
1110        326.25°
1111        348.75°

Figure 2.38 Constellation diagram for 16-PSK

2.16 QUADRATURE AMPLITUDE MODULATION (QAM)

• In all the PSK methods discussed so far, one symbol is distinguished from another in phase, but all the symbols transmitted using BPSK, QPSK or M-ary PSK have the same amplitude.

• The ability of a receiver to distinguish one signal vector from another in the presence of noise depends on the distance between the vector end points.

• This suggests that the noise immunity will improve if the signal vectors differ not only in phase but also in amplitude.

• Such a system is called an amplitude and phase shift keying system.

• In this system the direct modulation of carriers in quadrature (i.e., cos ωct and sin ωct) is involved; therefore this system is called quadrature amplitude phase shift keying, i.e., QAPSK, or simply QASK.

It is also known as Quadrature Amplitude Modulation (QAM).

Types of QAM

Depending on the number of bits per message, the QAM signals are classified as follows:

Name     Bits per symbol    Number of symbols
4-QAM    2                  2² = 4
8-QAM    3                  2³ = 8
16-QAM   4                  2⁴ = 16
32-QAM   5                  2⁵ = 32
64-QAM   6                  2⁶ = 64

2.16.1 8-QAM –Transmitter

¾¾ 8-QAM is an M-ary encoding technique where M=8, unlike 8-PSK,

the output signal from an 8-QAM modulator is not a constant

amplitude signal.

¾¾ Figure 2.39 shows the block diagram of an 8-QAM transmitter. The only

difference between the 8-QAM transmitter and the 8-PSK transmitter

is the omission of the inverter between the C-channel and the

Q-product modulator.

¾¾ As with 8-PSK , the incoming data are divided into groups of three

bits(tribits). The I,Q and C bit streams, each with a bit rate equal to

one-third of the incoming data rate (fb/3).


¾¾ The I and Q bits determine the polarity of the PAM –signal at the

output of the 2-to-4 level converters, and the c-bit determines the

magnitude. Because the C-bit is fed uninverted to both the I and the

Q-channel 2-to-4 level converters the magnitudes of the I and Q PAM

signals are always equal.

¾¾ Their polarities depends on the logic condition of the I and Q bits and

therefore, may be different.

¾¾ The truth table for the I and Q-channel 2 -to -4 level converters are

identical.

Truth table

I/Q    C    Output
0      0    −0.541 V
0      1    −1.307 V
1      0    +0.541 V
1      1    +1.307 V

Q    I    C    Amplitude    Phase
0    0    0    0.765 V      −135°
0    0    1    1.848 V      −135°
0    1    0    0.765 V      −45°
0    1    1    1.848 V      −45°
1    0    0    0.765 V      +135°
1    0    1    1.848 V      +135°
1    1    0    0.765 V      +45°
1    1    1    1.848 V      +45°

Table: Truth table for 8-QAM

For example

Consider the tribit input Q = 0, I = 0, C = 0 (000).

I-channel

The inputs to the I-channel 2-to-4 level converter are I = 0 and C = 0; the output is −0.541 V.

The two inputs to the I-channel product modulator are −0.541 and sin ωct. The output is

I = (−0.541)(sin ωct) = −0.541 sin ωct

Q-channel

The inputs to the Q-channel 2- to- 4 level converters are Q=0 &

C=0 .The output is -0.541 V.

The two inputs to the Q-channel product modulator are -0.541

and cos wct .The output is

Q=(-0.541) (cos ωct)= -0.541 cos ωct

The outputs from the I and Q –channel product modulators are

combined in the linear summer and produce a modulated output of

Summer output =-0.541 sin ωct -0.541 cos ωct

(or)

= 0.765 sin(ωct − 135°)

For the remaining tribit codes, the procedure is the same.
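The amplitude-and-phase form of the summer output can be checked numerically. This small helper (our own naming) converts the I·sin ωct + Q·cos ωct pair into the A·sin(ωct + φ) form used in the 8-QAM truth table:

```python
import math

def summer_output(i_amp, q_amp):
    # I*sin(wct) + Q*cos(wct) = A*sin(wct + phi)
    # with A = sqrt(I^2 + Q^2) and phi = atan2(Q, I).
    amplitude = math.hypot(i_amp, q_amp)
    phase_deg = math.degrees(math.atan2(q_amp, i_amp))
    return round(amplitude, 3), round(phase_deg, 1)

print(summer_output(-0.541, -0.541))  # → (0.765, -135.0)
```

This reproduces the worked result −0.541 sin ωct − 0.541 cos ωct = 0.765 sin(ωct − 135°).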

2.16.2 8-QAM Receiver

An 8-QAM receiver is almost identical to the 8-PSK receiver.

The differences are the PAM levels at the output of the product

detectors and the binary signals at the output of the analog –to digital

converters.

Because there are two transmit amplitudes possible with 8-QAM

that are different from those achievable with 8-PSK , the four demodulated

PAM –levels in 8-QAM are different from those in 8-PSK. Therefore,

the conversion factor for the analog-to- digital converters must also be

different.

Also, with 8-QAM the binary output signals from the I-channel

analog-to-digital converter are the I and C bits, and the binary output

signals from the Q-channel analog to digital converter are Q and C-bits.

Bandwidth consideration for 8-QAM

The bandwidth required for 8-QAM is the same as for 8-PSK:

Minimum BW = fb/3

2.17 16-QAM

16-QAM is an M-ary encoding technique where M = 16: the input data are acted on in groups of four (2⁴ = 16). As with 8-QAM, both the phase and magnitude of the transmit carrier are varied.

The block diagram of a 16-QAM transmitter is shown in Figure 2.43. The input binary data are divided into four channels: I, I', Q, Q'. The bit rate in each channel is equal to one-fourth of the input bit rate (fb/4).

Bit Splitter

Four bits are serially clocked into the bit splitter, then they are

outputted simultaneously and in parallel with I, I’, Q, Q’ -channels

2-to-4 level converter

The I and Q bits determine the polarity of the PAM signal at the output of the 2-to-4 level converters (a logic 1 = positive and a logic 0 = negative). The I' and Q' bits determine the magnitude (a logic 1 = 0.821 V and a logic 0 = 0.22 V). Consequently, each 2-to-4 level converter generates a 4-level PAM signal.

Four voltages are possible at the output of each 2-to-4 level converter: ±0.22 V and ±0.821 V.

The PAM signals modulate the in-phase and quadrature carriers in the balanced modulators (product modulators). Four outputs are possible for each product modulator.

For the I product modulator they are +0.821 sin ωct, −0.821 sin ωct, +0.22 sin ωct and −0.22 sin ωct.

Summer

The linear summer combines the outputs of the I and Q product modulators and produces the 16 output conditions necessary for 16-QAM.

The Figure 2.44 shows the truth table, Phasor diagram and

constellation diagram for 16-QAM.

For example

For a, quadbit input of 0000, determine the output amplitude and

phase for 16 QAM modulator.

For the Q product modulator they are +0.821 cos ωct, −0.821 cos ωct, +0.22 cos ωct and −0.22 cos ωct.

I-channel

The inputs to the I-channel 2-to-4 level converter are I = 0 and I' = 0; the output is −0.22 V.

The two inputs to the I-channel product modulator are −0.22 V and sin ωct; its output is −0.22 sin ωct.

Q-channel

The inputs to the Q-channel 2-to-4 level converter are Q = 0 and Q' = 0; the output is −0.22 V.

The two inputs to the Q-channel product modulator are −0.22 V and cos ωct; its output is −0.22 cos ωct.

The summer output is

−0.22 sin ωct − 0.22 cos ωct = 0.311 sin(ωct − 135°)

Truth table

Q    Q'   I    I'   16-QAM output
0    0    0    0    0.311 V    −135°
0    0    0    1    0.850 V    −165°
0    0    1    0    0.311 V    −45°
0    0    1    1    0.850 V    −15°
0    1    0    0    0.850 V    −105°
0    1    0    1    1.161 V    −135°
0    1    1    0    0.850 V    −75°
0    1    1    1    1.161 V    −45°
1    0    0    0    0.311 V    +135°
1    0    0    1    0.850 V    +165°
1    0    1    0    0.311 V    +45°
1    0    1    1    0.850 V    +15°
1    1    0    0    0.850 V    +105°
1    1    0    1    1.161 V    +135°
1    1    1    0    0.850 V    +75°
1    1    1    1    1.161 V    +45°
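The amplitude/phase pairs in the 16-QAM truth table follow from the 2-to-4 level converter rule. A sketch (function name ours) that regenerates two representative rows:

```python
import math

def quadbit_output(q, q2, i, i2):
    # I/Q set the polarity; I'/Q' pick the magnitude (1 -> 0.821 V,
    # 0 -> 0.22 V). The summer output I*sin(wct) + Q*cos(wct) is then
    # expressed as (amplitude, phase in degrees).
    mag = lambda b: 0.821 if b else 0.22
    i_amp = (1 if i else -1) * mag(i2)
    q_amp = (1 if q else -1) * mag(q2)
    return (round(math.hypot(i_amp, q_amp), 3),
            round(math.degrees(math.atan2(q_amp, i_amp)), 1))

print(quadbit_output(0, 0, 0, 0))  # → (0.311, -135.0)
print(quadbit_output(0, 0, 0, 1))  # → (0.85, -165.0)
```

The first call reproduces the worked example (quadbit 0000 → 0.311 V at −135°); the second matches the 0001 row of the truth table.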

Bandwidth of 16-QAM

With 16-QAM, because the input data are divided into four channels, the bit rate in the I, I', Q and Q' channels is equal to one-fourth of the binary input data rate (fb/4).

For 16-QAM, fa = (fb/4)/2 = fb/8

Output = x · sin(2π(fb/8)t) · sin(2πfct)

= (x/2) [cos 2π(fc − fb/8)t − cos 2π(fc + fb/8)t]

∴ The output frequency spectrum extends from fc − fb/8 to fc + fb/8.

∴ Minimum bandwidth = (fc + fb/8) − (fc − fb/8) = 2(fb/8)

BW = fb/4

Definition

Carrier recovery is the process of extracting a phase-coherent reference carrier from a received signal. This is sometimes called phase referencing.

In ASK and FSK the amplitude and the frequency of the carrier are changed, respectively. So, at the receiver, we can generate a separate carrier, multiply it with the received signal, and obtain the original information.

But in a phase shift keying (PSK) system, the phase of the carrier is changed according to the instantaneous value of the modulating signal, so the recovered carrier must be phase-coherent with the transmitted carrier.

In the receiver, the carrier is recovered and compared with the received carrier in a product detector.

To demodulate BPSK it is necessary to produce a carrier at the receiver that is phase-coherent with the transmit reference oscillator. This is the function of the carrier recovery circuit.

The carrier is suppressed in the balanced modulators and, therefore, is not transmitted. Consequently, at the receiver the carrier cannot simply be tracked with a standard PLL; a more sophisticated carrier recovery circuit is necessary, such as the squaring loop. The block diagram of the squaring-loop method of carrier recovery is shown in Figure 2.45.

The received BPSK signal is given to a BPF. The BPF reduces the spectral width of the received noise and delivers the required signal to the squarer circuit.

Squarer circuit

The squaring circuit removes the modulation and generates the

second harmonic of the carrier frequency. With BPSK , only two output

phases are possible + sin wct, - sin wct.

Mathematically, the squaring-loop operation can be described as follows.

Case 1: The received signal is +sin ωct; the output of the squaring circuit is

(+sin ωct)² = sin² ωct = ½(1 - cos 2ωct) = ½ - ½ cos 2ωct

Case 2: The received signal is -sin ωct; the output of the squaring circuit is

(-sin ωct)² = sin² ωct = ½(1 - cos 2ωct) = ½ - ½ cos 2ωct

In both cases, the squarer output consists of a constant voltage (+½ V) and a signal at twice the carrier frequency (cos 2ωct). The constant voltage is removed by filtering, leaving only cos 2ωct.
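The two cases can be verified numerically: squaring either BPSK phase gives the same DC term plus a double-frequency term. This is a sketch with an arbitrary 1 kHz example carrier.

```python
import math

wc = 2 * math.pi * 1000.0          # example carrier: 1 kHz (arbitrary)

def squarer(sample):
    """Square-law device of the squaring loop."""
    return sample * sample

# For both BPSK phases (+sin wct and -sin wct), the squarer output
# equals 1/2 - (1/2)cos(2*wc*t): the modulation (sign) is removed.
for sign in (+1.0, -1.0):
    for k in range(16):
        t = k / 16000.0
        out = squarer(sign * math.sin(wc * t))
        expected = 0.5 - 0.5 * math.cos(2 * wc * t)
        assert abs(out - expected) < 1e-12
```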

Phase-locked loop (PLL)

The PLL is tuned so that it tracks the second harmonic at the squarer output. The output of the VCO is taken and applied as an input to a frequency-divider network.

Frequency divider

The frequency divider divides the input frequency by the value of N. Here, only a divide-by-2 network is used, and the phase-reference carrier for the product detectors is obtained.


A second method of carrier recovery is the Costas (or quadrature) loop, shown in Figure 2.46 below.

Power splitter

The power splitter circuit directs the input received PSK signal to

the I and Q Balanced modulators (or) product detectors.

I - Balanced modulator

It has two inputs , one is the output of power splitter (i.e) received

signal, another is VCO output .

It produces In phase signal to Balanced product detector.

Q-balanced modulator

It also has two inputs: one is the received signal, and the other is the 90° phase-shifted VCO output. It produces the quadrature signal for the balanced product detector.

Loop filter

The balanced product detector output, the product of the I and Q signals, is applied as input to the loop filter. The loop filter is designed for the cut-off frequency of the carrier (i.e., ±ωc) and produces an error voltage.

VCO

The error voltage of the loop filter acts as the control voltage for the VCO. Once the frequency of the VCO is equal to the suppressed carrier frequency, the VCO is in the lock-in condition. The output of the VCO acts as the carrier signal and is applied to the I and Q balanced modulators in the receiver circuit.

2.19 Clock recovery

Clock recovery is the process of extracting the phase-coherent clock from the received signal.

2.19.1 Need of clock recovery circuit

In any digital system , digital radio requires precise timing (or)

clock synchronization between the transmit and receive circuitry.

Because of this, it is necessary to generate clocks at the receiver that are

synchronous with those at the transmitter, to reproduce original data.

Figure 2.48 shows a simple circuit for clock recovery.

Operation

The circuit recovers clocking information from the received data. The recovered data are delayed by one-half a bit time and then compared with the original data in an XOR circuit. The frequency of the clock recovered with this method is equal to the received data rate (fb). Figure 2.49 shows the relationship between the data and the recovered clock timing.

As long as the data contain a substantial number of transitions (1/0 sequences), the recovered clock is maintained. If the data were to contain a long string of successive 1's and 0's, the recovered clock would be lost. To prevent this, the data are scrambled at the transmit end and descrambled at the receive end. The scrambler introduces transitions into the signal using a prescribed algorithm, and the descrambler uses the same algorithm to remove the transitions.
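The delay-and-XOR idea above can be sketched in a few lines. The bit pattern is an arbitrary example, and the PLL that would lock to the pulse train is not modelled; the point is only that every data transition produces one XOR pulse.

```python
data = [1, 0, 1, 1, 0, 0, 1, 0]      # example received bit stream

# Oversample each bit by 2 so a half-bit delay is a one-sample shift.
wave = [b for b in data for _ in range(2)]
delayed = [wave[0]] + wave[:-1]       # delay by one-half a bit time
pulses = [a ^ b for a, b in zip(wave, delayed)]

# The XOR output pulses once per transition in the data; a PLL
# locked to these pulses reproduces the clock at rate fb.
transitions = sum(1 for a, b in zip(data, data[1:]) if a != b)
assert sum(pulses) == transitions     # here: 5 transitions, 5 pulses
```

A long run of identical bits yields no pulses, which is exactly why scrambling is needed to keep the recovered clock alive.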

2.20 Comparison of various digital communication systems

1. Variable characteristic:
   BASK - Amplitude; BFSK - Frequency; MSK - Frequency; BPSK - Phase; DPSK - Phase; QPSK - Phase; M-ary PSK - Phase; QAM - Amplitude and phase.
2. Bits per symbol:
   BASK - one; BFSK - one; MSK - two; BPSK - one; DPSK - one; QPSK - two; M-ary PSK - N bits; QAM - N bits.
3. Number of possible symbols:
   BASK - two; BFSK - two; MSK - four; BPSK - two; DPSK - two; QPSK - four; M-ary PSK - M = 2^N; QAM - M = 2^N.
4. Symbol duration:
   BASK - Tb; BFSK - Tb; MSK - 2Tb; BPSK - Tb; DPSK - 2Tb; QPSK - 2Tb; M-ary PSK - NTb; QAM - NTb.
5. Minimum bandwidth (BN):
   BASK - 2fb; BFSK - 4fb; MSK - 1.5fb; BPSK - 2fb; DPSK - fb; QPSK - fb; M-ary PSK - 2fb/N; QAM - 2fb/N.
6. Performance in the presence of noise:
   BASK - poor; BFSK - better than ASK; MSK - better; BPSK - better than ASK; DPSK - better; QPSK - better than FSK; M-ary PSK - better; QAM - better than ASK.
7. Error possibility:
   BASK - high; BFSK - low; MSK - low; BPSK - lower than FSK; DPSK - low; QPSK - low; M-ary PSK - low; QAM - lower than ASK.
8. Complexity:
   BASK - simple; BFSK - moderately complex; MSK - more complex; BPSK - complex; DPSK - more complex; QPSK - complex; M-ary PSK - more complex; QAM - more complex.
9. Detection method:
   BASK - non-coherent; BFSK - coherent; MSK - coherent; BPSK - coherent; DPSK - non-coherent; QPSK - coherent; M-ary PSK - coherent; QAM - coherent.
10. Minimum Euclidean distance:
   BASK - √Eb; BFSK - √(2Eb); MSK - √(2Eb); BPSK - 2√Eb; DPSK - 2√Eb; QPSK - 2√Eb; M-ary PSK - 2√Eb·sin(π/M); QAM - 0.4√Eb for M = 16.
11. Equation of transmitted signal:
   BASK - s(t) = √(2Ps)·cos 2πf0t for symbol '1'; s(t) = 0 for symbol '0'.
   BFSK - s(t) = √(2Ps)·cos 2π(f0 + d(t)·Ω)t, where Ω is the frequency shift and d(t) = ±1.
   MSK - s(t) = √(2Ps)·cos 2π(f0 + d(t)·Ω)t, with the minimum frequency shift.
   BPSK - s(t) = b(t)·√(2Ps)·cos 2πf0t.
   DPSK - s(t) = b(t)·√(2Ps)·cos 2πf0t, where b(t) is differentially encoded.
   QPSK - s(t) = √(2Ps)·cos(2πf0t + (2m+1)π/4), m = 0, 1, 2, 3.
   M-ary PSK - s(t) = √(2Ps)·cos(2πf0t + (2m+1)π/M), m = 0, 1, 2, ...
   QAM - s(t) = K1·√(0.2Ps)·cos 2πf0t + K2·√(0.2Ps)·sin 2πf0t, with K1, K2 = ±1 or ±3 for M = 16.

1. What is ASK?

Amplitude shift keying (ASK) is a digital modulation technique that converts digital data to an analog signal. In ASK, the two binary values (0, 1) are represented by two different amplitudes of the carrier signal.

2. What is FSK?

Frequency shift keying (FSK) is a digital modulation technique that converts digital data to an analog signal. In FSK, the two binary values (0, 1) are represented by two different frequencies near the carrier frequency.

3. What is PSK?

Phase shift keying (PSK) is a digital modulation technique that converts digital data to an analog signal. In PSK, the two binary values (0, 1) are represented by two different phases (0° or 180°) of the carrier.

4. What is BPSK?

In BPSK, the two symbols are transmitted with the help of the following signals:

Symbol '1':  s1(t) = √(2P) cos(2πf0t)
Symbol '0':  s2(t) = √(2P) cos(2πf0t + π)

Observe that the two signals differ only in a relative phase shift of 180°. Such signals are called antipodal signals.

5. What is correlator?

A correlator is a coherent detector that correlates the received noisy signal f(t) with a locally generated replica of the known signal x(t). Its output is given as

r(t) = ∫₀ᵀ f(t) x(t) dt

Minimum shift keying (MSK) uses two orthogonal signals to transmit binary '0' and '1'. The difference between these two frequencies is minimum. Hence, there are no abrupt changes in the amplitude, and the modulated signal is continuous and smooth.

The rate at which data (bits) are transmitted is called bit rate.

That is number of bits transmitted per second. Unit is bps(bits

per second).

The rate at which signal elements (pulses) are transmitted is

called baud rate (modulation rate). This means number of signal

elements(pulses) transmitted per second. Unit is bauds.

Binary PSK
1. Two different phases are used to represent the two binary values.
2. Each signal element represents only one bit.

QPSK
1. Four different phases are used, one for each of the four possible two-bit combinations.
2. Each signal element represents two bits.

FSK
1. The two frequencies are integer multiples of the baseband frequency and at the same time they are orthogonal.
2. BW = 4fb.
3. This is binary modulation.

MSK
1. The difference between the two frequencies is minimum and at the same time they are orthogonal.
2. BW = 1.5fb.
3. This is quadrature modulation.

BPSK
1. One bit forms a symbol.
2. Two possible symbols.
3. Minimum bandwidth is twice fb.
4. Symbol duration = Tb.

QPSK
1. Two bits form a symbol.
2. Four possible symbols.
3. Minimum bandwidth is equal to fb.
4. Symbol duration = 2Tb.

S.No  Parameter                  QPSK                                 QASK
1     Modulation                 Quadrature phase                     Quadrature amplitude and phase
2     Location of signal points  All signal points placed on the      Signal points placed symmetrically
                                 circumference of a circle            about the origin
3     Distance between two       0.15√Eb for 16 symbols and           0.4√Eb for 16 symbols
      signal points              √(2Eb) for 4 symbols
4     Complexity                 Relatively simpler                   Relatively complex
5     Noise immunity             Better than QASK                     Poorer than QPSK, but better
                                                                      than M-ary PSK
6     Error probability          Less than QASK                       Higher than QPSK, lower than
                                                                      M-ary PSK
7     Type of demodulation       Coherent                             Coherent

Coherent (synchronous) detection

In coherent detection, the local carrier generated at the receiver is phase locked with the carrier at the transmitter. The detection is done by correlating the received noisy signal with the locally generated carrier. Coherent detection is a synchronous detection.

Non-coherent (envelope) detection

This type of detection does not need the receiver carrier to be phase locked with the transmitter carrier. The advantage of such a system is that it is simple, but the drawback is that the error probability increases.

The different digital modulation techniques are used for specific application areas. The choice is made such that the transmitted power and channel bandwidth are best exploited.

Advantages of QPSK
1) For the same bit error rate, the bandwidth required by QPSK is reduced to half as compared to BPSK.
2) Because of the reduced bandwidth, the information transmission rate of QPSK is higher.
3) The variation in OQPSK amplitude is not much; hence the carrier power remains almost constant.

The information capacity of a channel is 30 Mbps and the bandwidth of this channel is 5 MHz. What is the signal to noise ratio required in order to achieve this capacity?

Given
I = 30 Mbps
B = 5 MHz

Solution
According to the Shannon-Hartley theorem,

I = B log2(1 + S/N)
30 × 10^6 = 5 × 10^6 × log2(1 + S/N)
(30 × 10^6)/(5 × 10^6) = log2(1 + S/N)
6 = log2(1 + S/N)
1 + S/N = 2^6 = 64
S/N = 63, or 17.99 dB
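The result can be checked in a couple of lines by solving the Shannon-Hartley formula for S/N:

```python
import math

def required_snr(capacity_bps, bandwidth_hz):
    """Solve I = B*log2(1 + S/N) for the S/N power ratio."""
    return 2 ** (capacity_bps / bandwidth_hz) - 1

snr = required_snr(30e6, 5e6)          # 2^6 - 1 = 63
snr_db = 10 * math.log10(snr)          # about 17.99 dB
print(snr, round(snr_db, 2))           # 63.0 17.99
```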

15. For an 8-PSK system operating with an information bit rate of 48 kbps, determine (a) baud, (b) minimum bandwidth, (c) bandwidth efficiency.

Given
fb = 48 kbps; for 8-PSK, N = 3 bits per symbol.

Solution

(a) Baud = fb/N = 48000/3 = 16000 baud

(b) Minimum bandwidth B = fb/N = 48000/3 = 16000 Hz

(c) Bandwidth efficiency = Transmission bit rate (bps) / Minimum bandwidth (Hz)
                         = 48000 bps / 16000 Hz
                         = 3 bits/cycle
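The three answers follow directly from fb and N:

```python
fb = 48_000                  # information bit rate (bps)
N = 3                        # bits per symbol for 8-PSK (2^3 = 8 phases)

baud = fb / N                # symbols transmitted per second
min_bw = fb / N              # minimum Nyquist bandwidth (Hz)
efficiency = fb / min_bw     # bits per cycle

print(baud, min_bw, efficiency)   # 16000.0 16000.0 3.0
```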

The minimum bandwidth for FSK is given by

B = |(fs + fb) - (fm - fb)|
  = |fs - fm| + 2fb                ...(1)

The frequency deviation is

Δf = |fm - fs| / 2                 ...(2)

From (2), 2Δf = |fs - fm|          ...(3)

Substituting (3) in (1),

B = 2Δf + 2fb
B = 2(Δf + fb)

where
B  - minimum Nyquist bandwidth (Hz)
fb - input bit rate (bps)
Δf - frequency deviation (Hz)

17. Draw the phasor diagram of QPSK.

(Phasor diagram: the dibits 01, 00, 10 and 11 are represented by four phasors spaced 90° apart.)

Information capacity

It is a measure of how much information can be propagated

through a communications system and it is a function of

bandwidth and transmission time. Information capacity

represents the number of independent symbols that can be

carried through a system in a given unit of time.

Bit rate

It is the number of bits transmitted during one second. It is

expressed in bits per second (bps).


19. What is the relation between bit rate and baud for a FSK sys-

tem?

The bit time equals the time of an FSK signaling element, and

the bit rate equals the baud.

Baud = fb / N

Number of bits encoded into each signaling element N = 1

Baud = fb

Where, fb - Input bit rate (bps)

20. Draw ASK and PSK waveforms for a data stream 01101001.

Advantages of QPSK:

(i) Low error probability,

(ii) Very good noise immunity,

(iii) For the same bit error rate, the bandwidth required by QPSK

is reduced to half as compared to BPSK.

(iv) Because of reduced bandwidth, the information transmission

rate of QPSK is higher.

A channel has a bandwidth of 50 kHz and produces an SNR of 1023 at the output. Calculate the maximum data rate.

Given
S/N = 1023
B = 50 kHz

Solution

Rmax = B log2(1 + S/N)
     = 50000 × log2(1 + 1023)
     = 50000 × 10
     = 500000 bits/sec
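This Shannon-limit computation can be checked in a couple of lines; note that the logarithm must be base 2, and log2(1 + 1023) = log2(1024) = 10 exactly:

```python
import math

B = 50_000            # bandwidth (Hz)
snr = 1023            # S/N as a power ratio

r_max = B * math.log2(1 + snr)   # Shannon limit
print(r_max)                      # 500000.0 bits/sec
```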

23. What is the purpose of limiter in FM receiver?

In an FM system, the message information is transmitted by

variations of the instantaneous frequency of a sinusoidal carrier

wave, and its amplitude is maintained constant.

Any variation of the carrier amplitude at the receiver input must

result from noise or interference.

An amplitude limiter, following the IF section is used to remove

amplitude variations by clipping the modulated wave at the IF

section.


Determine the peak frequency deviation and the minimum bandwidth for a BFSK signal with a mark frequency of 49 kHz, a space frequency of 51 kHz and an input bit rate of 3 kbps.

Given
fm = 49 kHz
fs = 51 kHz
fb = 3 kbps

Solution

(a) Peak frequency deviation

Δf = |fm - fs| / 2
   = |49 × 10^3 - 51 × 10^3| / 2
   = 1 kHz

(b) Minimum bandwidth

B = 2(Δf + fb)
  = 2(1000 + 3000) = 8000
  = 8 kHz
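The same two results in code form:

```python
fm = 49e3            # mark frequency (Hz)
fs = 51e3            # space frequency (Hz)
fb = 3e3             # input bit rate (bps)

df = abs(fm - fs) / 2        # peak frequency deviation
bw = 2 * (df + fb)           # minimum bandwidth, B = 2(df + fb)

print(df, bw)                # 1000.0 8000.0
```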



REVIEW QUESTIONS:

PART - A

1. What do you mean by FSK?

2. What is M-ary encoding?

3. State Shannon’s Limit for channel capacity theorem. Give an

example.

4. Draw the block diagram of BFSK transmitter.

5. Define Bandwidth efficiency.

6. Draw the constellation diagram of QPSK signal.

7. Draw 8-QAM phasor diagram.

8. Determine the peak frequency deviation and minimum bandwidth

for a binary FSK signal with a mark frequency of 49 KHz, a space

frequency of 51 KHz.

9. What is Shannon limit for information capacity?

10. What is binary phase shift keying?

11. Draw ASK and PSK waveforms forms for a data stream 110101.

12. What are the advantages of QPSK?

13. What is the relation between bit rate and baud for a FSK system?

14. Draw the ASK and FSK signals for the binary signal s(t) = 1011001.
15. A typical dial-up telephone connection has a bandwidth of 3 kHz and
a signal to noise ratio of 30 dB. Calculate the Shannon limit.

16. What are the two types of carrier recovery circuit?

17. Write down the expression for peak frequency deviation of FSK.

18. What is the need for synchronization?

19. What is the bandwidth requirement of FSK?

20. Write down the bit error rate expression of a QPSK system.

21. Draw the block diagram of QPSK transmitter.

22. Differentiate between PSK from DPSK

23. What are the advantages of PSK over FSK?

24. Determine the bandwidth and baud for the FSK signal with a mark

frequency of 49 kHz and a space frequency of 51 kHz and a bit rate

of 2 kbps.

25. Write the differences between PSK and FSK.

PART – B

1. Draw the block diagram of a QPSK transmitter and explain. Derive the

bandwidth requirement of a QPSK system.

2. Draw the block diagram of a non-coherent receiver for detection of binary FSK signals and derive the probability of symbol error for a non-coherent FSK system.

3. (i) Determine the baud rate and minimum bandwidth necessary to

pass a 10 Kbps binary signal using amplitude shift keying.

(ii) Explain quadrature amplitude modulation with the help of

relevant diagrams.

4. (i) Derive an expression for baud rate in PSK and FSK systems.

(ii) Explain the generation and detection of QPSK signals.

5. With neat schematic diagram, explain the balanced ring modulator of

BPSK.

6. (i). Describe the two techniques of achieving carrier recovery circuit.

(ii). Explain in detail about 8 – QAM transmitter and receiver.

7. With relevant diagram explain the method of synchronous detection

of FSK signal. What should be the relationship between bit rate and

frequency shift for a better performance.

8. With neat diagram explain the working of a DPSK transmitter. What

are the advantages of DPSK over PSK.

9. Explain the generation and detection of PSK system with the help of

block diagrams

10. Describe the coherent detection procedure of M-ary PSK and obtain

the expression for the probability of symbol error.

11. (i) Discuss the principle of operation of FSK- Transmitter.

(ii). Write a note on QPSK.

12. (i). Discuss the principle of operation of FSK - receiver.

(ii). Write a note on DPSK.

13. What is carrier recovery? Discuss how carrier recovery is achieved by

the squaring loop and Costas loop circuits.

14. (i) Compare the different digital modulation schemes in terms of

bandwidth bit error rate and efficiency.

(ii) For the DPSK modulator, determine the output phase sequence for the following input bit sequence: 1100 1100 1110 10. Assume that the reference bit = 1.

15. (i). Determine the minimum bandwidth and baud for a BPSK

modulator with a carrier frequency of 80 MHz and an input bit rate

fb= 1 Mbps. Sketch the output spectrum.

(ii) Discuss the differences between PSK and differential PSK

Data Communication: History of Data Communication - Standards

Organizations for Data-Communication- Data Communication

Circuits - Data Communication Codes - Error Detection and

Correction Techniques - Data communication Hardware - serial and

parallel interfaces. Pulse Communication: Pulse Amplitude Modulation

(PAM) – Pulse Time Modulation (PTM) – Pulse code Modulation (PCM)

- Comparison of various Pulse Communication System (PAM – PTM –

PCM).


3.1 INTRODUCTION

The word data is plural; a single data unit is called a datum. Data communication is the process of transferring digital information between two points. The information that is transmitted consists of any one or a combination of the following: binary-coded alphabets, microprocessor op-codes, control codes, user addresses, program data or database information. The data may be analog or digital during the transmission. A data communication network can be as simple as two computers connected to each other through a public telecommunication network, as shown in Figure 3.1, or as complex as a network of many digital computing equipments, the internet etc.

DATA COMMUNICATION

Data communication is also referred to as computer communication or networking. It is the exchange of data, such as audio, text and video, between any two points in the world. The exchange takes place over a network; a network is like a highway over which the data travels smoothly. The data is represented in a form that is agreed upon by the users and creators of the data, and it is carried over some form of transmission medium, such as a co-axial cable.

3.2 HISTORY OF DATA COMMUNICATIONS

Data communications began in 1837, when the telegraph was invented and the Morse code was developed. The basic symbols of the Morse code were dots and dashes; various combinations of dots and dashes were used for representing letters, numbers, punctuation marks etc.

The telegraph was developed by Sir Charles Wheatstone and Sir William Cooke. In 1844 the first telegraph line, between Baltimore and Washington D.C., was established. A high-speed version was developed in 1860. A later development allowed the simultaneous transmission of up to six different messages from telegraph machines on the same line.

The telephone was invented in 1876 by Alexander Graham Bell. In 1899 Marconi was successful in sending radio telegraph messages. In 1920 the first commercial radio stations were installed.

The first special-purpose computer was developed by Bell Laboratories in 1940 using electromechanical relays. The first general-purpose computer was developed jointly by Harvard University and IBM. In 1951 the first mass-produced electronic computer was launched. Since then, the number of mainframe computers, small business computers and personal computers has increased rapidly, and so the need for data communication has increased exponentially.

3.3 COMPONENTS OF DATA COMMUNICATION SYSTEM

If we specifically consider the communication between two computers, then the data communication system is as shown in Figure 3.2. It has five components:

1. Message   2. Sender   3. Medium   4. Receiver   5. Protocol

Description

1. Message
The message is the information (data) to be communicated from one point to the other. It can be text, numbers, pictures, audio, video or a combination of them.

2. Sender
The sender is the device that sends the message. Examples of a sender are: computer, workstation, video camera, telephone handset etc.

3. Medium
It is the physical path over which the message travels from the sender to the receiver. Examples of media are: twisted-pair wire, co-axial cable, fiber optic cable and radio waves (used in terrestrial or satellite communication).

4. Receiver
The receiver is the device that receives the message. Examples of a receiver are: computer, TV receiver, workstation, telephone handset etc.

5. Protocol
A protocol is a set of rules that governs the data communication. The sender and receiver may be connected by a medium, but the actual communication between them will take place with the help of a protocol.

3.4 STANDARDS ORGANIZATIONS FOR DATA COMMUNICATION

Some of the standards organizations for data communication are as follows.

1. International Standards Organization (ISO): creates sets of rules and standards for graphics, document exchange etc., and co-ordinates the activities of the other standards organizations.

2. Consultative Committee for International Telephony and Telegraphy (CCITT), now known as the ITU-T: develops recommended standards for telephone and telegraph communications.

3. Institute of Electrical and Electronics Engineers (IEEE): the largest professional society of computer and communications engineers; it develops data communication standards.

4. American National Standards Institute (ANSI): the United States standards organization; it develops standards for data and telecommunications.

5. Standards Council of Canada (SCC): has similar responsibilities to those of ANSI.

3.5 DATA COMMUNICATION CIRCUITS

The underlying purpose of a data communication circuit is to transfer digital information from one place to another within a data communication network. A simple circuit consists of a source called the primary station, a transmission medium and a destination called the secondary station. The primary station is usually a host computer with its own peripherals and local terminals. The data is transferred from the primary station to the secondary station via a transmission medium. The medium may be a metallic cable, such as a twisted pair or coaxial cable, an optical fiber, or a wireless link, which includes terrestrial microwave or satellite communication. The term data modem is used to describe the interface equipment that takes digital signals from computers and terminals and converts them into a form which is more suitable for transmission; the modem converts digital signals to analog signals and interfaces the data terminal equipment to the analog transmission medium.

Data within a computer is transferred using either of the following techniques:
1. Serial transmission
2. Parallel transmission

In parallel transmission all the bits of a word are transferred simultaneously. Parallel transmission takes place only between the host and the DTE. In serial transmission the bits are transferred one at a time.

3.6 DATA TRANSMISSION

Data transmission is the transfer of bits between two or more digital devices. The data transmission takes place over some physical medium from one computer to the other.

The data transmission between two devices can take place in two ways:
1. Parallel transmission
2. Serial transmission

                 Data Transmission
                 /               \
        Parallel                 Serial
      transmission            transmission
                              /           \
                      Synchronous     Asynchronous

Parallel and serial transmission are the two basic types of transmission. Serial transmission is further classified into two types, namely synchronous and asynchronous transmission.

In parallel transmission, all the bits of a byte are transmitted simultaneously on separate wires, as shown in Figure 3.5.

Figure 3.5 Parallel transmission: eight separate wires carry the bits from the transmitter to the receiver.

A separate wire is required for each bit to interconnect the two devices, so parallel transmission is used only when the two devices are close to each other, for example between a computer and its printer. For longer distances and more users, the number of wires between a transmitter and a receiver would increase to an unmanageable number.

(i) The advantage of parallel transmission is that all the data bits are transmitted simultaneously; therefore the time required for the transmission of a word is only one clock cycle.
(ii) Serial transmission requires N clock cycles for the transmission of the same N-bit word.
(iii) Due to this, the clock frequency can be kept low without affecting the speed of operation. For serial transmission, the clock frequency cannot be low.

Disadvantage

To transmit an N-bit word, we need N wires. With an increase in the number of users, these wires will be too many to handle. Serial transmission uses only one wire for connecting the transmitter to the receiver; hence, practically, serial transmission is always preferred.

3.6.3 Serial Transmission

In serial transmission, the bits of a byte are serially transmitted one

after the other as shown in figure 3.6

The byte to be transmitted is first stored in a shift register. Then

these bits are shifted from MSB to LSB bit by bit in synchronization

with the clock . Bits are shifted right by one position per clock cycle.

The bit which falls out of the shift register is transmitted. Hence LSB

is transmitted first.

For serial transmission only one wire is needed between the

transmitter and the receiver. Hence serial transmission is preferred

for long distance data communication. This is the advantage of serial

transmission over parallel transmission.

Figure 3.6 Serial transmission: the byte stored in the transmitter's shift register is shifted out and sent over a single wire, LSB first.

Since only one bit is transmitted per clock cycle, a time corresponding to 8 clock cycles is required to transmit one byte (parallel transmission needs only one clock cycle to transmit a byte). The time can be reduced by increasing the clock frequency.
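The LSB-first shifting described above can be sketched as a few lines of code (the example byte is arbitrary):

```python
def serialize_lsb_first(byte):
    """Shift a byte out one bit per 'clock cycle', LSB first, as a
    shift register does: the bit that falls out is transmitted."""
    bits = []
    for _ in range(8):
        bits.append(byte & 1)   # bit that falls out of the register
        byte >>= 1              # shift right by one position
    return bits

# 0b10001011: the LSB (1) is transmitted first, the MSB (1) last.
assert serialize_lsb_first(0b10001011) == [1, 1, 0, 1, 0, 0, 0, 1]
```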

Advantages of serial transmission

1. Only one wire is required.

2. Reduction in cost due to less number of conductors.

Disadvantages

1. The speed of data transfer is low

2. To increase the speed of data transfer, it is necessary to increase the

clock frequency.

Application

It is used for computer to computer communication , specially

long distance communication.

Types of serial transmission

There are two types of serial transmission namely,

1. Asynchronous transmission.

2. Synchronous transmission.

3.6.4 Comparison of Serial and Parallel Transmission

S.No  Parameter                                    Parallel transmission          Serial transmission
1     Number of wires required to transmit N bits  N wires                        1 wire
2     Number of bits transmitted simultaneously    N bits                         1 bit
3     Speed of data transfer                       Fast                           Slow
4     Cost                                         Higher, due to more            Low, since only one
                                                   conductors                     wire is used
5     Application                                  Short distance communication,  Long distance computer-to-
                                                   such as computer to printer    computer communication

3.7 CONFIGURATIONS

Data communication circuits can be generally categorized as two-point and multipoint circuits. The devices in a network are connected via communication links. There are two possible ways to connect the devices:
1. Point-to-point connection
2. Multipoint connection

A point-to-point connection provides a dedicated link between two devices, as shown in Figure 3.7. The entire capacity of the link is reserved for transmission between those two devices only. The devices can be connected using a wire or cable (see Figure 3.7(a)) or using a microwave or satellite link as shown in Figure 3.7(b).

In a multipoint connection, more than two devices share a single link, as shown in Figure 3.8. Here the channel capacity is shared. If many devices share the link simultaneously, it is called a spatially shared connection; but if the users take turns, it is a time-shared connection.

3.8 TOPOLOGIES

The topology of a data communication network identifies the manner in which the various locations within the network are interconnected.

Most commonly used topologies are

1. Point-to-point

2. The star

3. The bus or the multidrop

4. A ring or a loop

5. Mesh

These topologies are shown in figure 3.9

Figure 3.9 Various topologies: (a) point to point, (b) star, (c) bus or multidrop, (d) ring or loop, (e) mesh

3.9 TRANSMISSION MODES

Four transmission modes are possible for data communication circuits: simplex, half duplex, full duplex and full/full duplex.

3.9.1 Simplex mode

In the simplex mode, transmission can take place in only one direction. For example, the radio or TV broadcasting systems can only transmit; they cannot receive.

Only one-way communication takes place, as shown in Figure 3.10; the communication is unidirectional.

3.9.2 Half duplex mode

In the half duplex mode, each station can transmit and receive, but not simultaneously. An example is a transceiver or walkie-talkie set. The half duplex mode is shown in Figure 3.11. When one device is sending, the other one is receiving, and vice versa.

In half duplex, the entire capacity of the channel is taken over by the transmitting (sending) station.

3.9.3 Full duplex mode

The full duplex mode allows the communication to take place in both directions simultaneously, for example in the telephone systems. In full duplex mode, signals going in either direction share the full capacity of the link. The link may contain two physically separate transmission paths, one for sending and another for receiving, or the capacity of a single channel may be shared by signals travelling in both directions.

3.9.4 Full/full Duplex Mode

In this mode , the transmission is possible in both the directions at

the same time but not between the same two stations.

That means say one station is transmitting to a second station and

receiving from a third station at the same time.

(Examples: conference calls, video downloading using torrents)

3.10 DATA COMMUNICATION CODES

Data communication codes are the bit sequences used for encoding characters and symbols. The codes are also called character sets, character codes, symbol codes or character languages.

The three most common codes are

1. Baudot code

2. ASCII code and

3. EBCDIC code

Need of Code

A binary digit or bit can represent only two symbols, as it has only two states, "0" and "1". A single bit is not enough for communication between computers, because there we need many more symbols. These symbols are required to represent:
1. The 26 alphabets with capital and small letters
2. Numbers from 0 to 9
3. Punctuation marks and other symbols

Therefore, instead of using only single binary bits, a group of bits is used as a code to represent a symbol.

3.10.1 ASCII-(American Standard Code for Information Interchange)

It was defined by American National standards Institute (ANSI). It

is a 7 bit code with 27 i.e. 128 possible combinations and all of them

have defined meanings.

The ASCII code set consists of 94 printable characters. SPACE and

DEL characters, and 32 control symbols. These are all showed in

Tables 3.1 and 3.2.

Table 3.1 ASCII code set

Bits 765:     000   001   010   011   100   101   110   111
Bits 4321     (0)   (1)   (2)   (3)   (4)   (5)   (6)   (7)
0000 (0)      NUL   DLE   SP    0     @     P     `     p
0001 (1)      SOH   DC1   !     1     A     Q     a     q
0010 (2)      STX   DC2   "     2     B     R     b     r
0011 (3)      ETX   DC3   #     3     C     S     c     s
0100 (4)      EOT   DC4   $     4     D     T     d     t
0101 (5)      ENQ   NAK   %     5     E     U     e     u
0110 (6)      ACK   SYN   &     6     F     V     f     v
0111 (7)      BEL   ETB   '     7     G     W     g     w
1000 (8)      BS    CAN   (     8     H     X     h     x
1001 (9)      HT    EM    )     9     I     Y     i     y
1010 (A)      LF    SUB   *     :     J     Z     j     z
1011 (B)      VT    ESC   +     ;     K     [     k     {
1100 (C)      FF    FS    ,     <     L     \     l     |
1101 (D)      CR    GS    -     =     M     ]     m     }
1110 (E)      SO    RS    .     >     N     ^     n     ~
1111 (F)      SI    US    /     ?     O     _     o     DEL

The coordinates of the character "K", for example, are (4, B); therefore its code is 1001011.
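The table lookup can be confirmed with Python's built-in character functions:

```python
# The 7-bit ASCII code for "K": column 4 gives bits 765 = 100, and
# row B gives bits 4321 = 1011, so the full code is 100 1011.
code = ord("K")
assert format(code, "07b") == "1001011"
assert code == 0x4B
```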


The control symbols are codes reversed for special functions. The

control symbols are as listed in Table 3.2 CR(carriage Return) and

LF(line feed) are the symbols used for basic operations of printers or

displays.

The symbols ACK (Acknowledgement) and NAK (Negative Acknowledgement) are used for error control.

The symbols STX (Start of Text) and ETX (End of Text) are used for grouping of data characters. The other symbols and their meanings are listed in Table 3.2.

ACK Acknowledgement FF Form Feed

BEL Bell FS File Separator

BS Backspace GS Group Separator

CAN Cancel HT Horizontal Tabulation

CR Carriage Return LF Line Feed

DC1 Device Control 1 NAK Negative Acknowledgement

DC2 Device Control 2 NUL Null

DC3 Device Control 3 RS Record Separator

DC4 Device Control 4 SI Shift-In

DEL Delete SO Shift-Out

DLE Data Link Escape SOH Start of Heading

EM End of Medium STX Start of Text

ENQ Enquiry SUB Substitute Character

EOT End of Transmission SYN Synchronous Idle

ESC Escape US Unit separator

ETB End of Transmission Block VT Vertical Tabulation

ETX End of Text

Use of parity bit in ASCII code

ASCII is a 7 bit code but the eighth bit is often used. This is called

as parity bit. The parity bit is used to detect the errors introduced

during transmission.

The parity bit is generally added in the most significant bit (MSB)

position.

The 8-bit ASCII code word format is shown in Figure 3.13.

Figure 3.13 An ASCII word (8 bits) with the parity bit included in the MSB position
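Computing a parity bit for an ASCII character can be sketched as follows; even parity is used here as an example (the text does not fix the parity sense).

```python
def add_even_parity(seven_bit_code):
    """Return an 8-bit word with an even-parity bit in the MSB position.

    The parity bit is chosen so the total number of 1s is even, which
    lets the receiver detect any single-bit error.
    """
    ones = bin(seven_bit_code).count("1")
    parity = ones % 2                     # 1 only if the count of 1s is odd
    return (parity << 7) | seven_bit_code

word = add_even_parity(ord("K"))          # "K" = 1001011 has four 1s
assert format(word, "08b") == "01001011"  # parity bit 0 keeps the count even
```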

3.10.2 EBCDIC (Extended Binary Coded Decimal Interchange Code)

This is an 8-bit code; however, all of the possible 256 combinations are not used. There is no parity bit for error checking in the basic code set. The EBCDIC code set is shown in Table 3.3.

0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1

1 0 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1

Bit

2 0 0 1 1 0 0 1 1 0 0 1 1 0 0 1 1

Numbers

3 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1

4567 0 1 2 3 4 5 6 7 8 9 A B C D E F

0000 0 NUL DLE SP & 0

0001 1 SOH SBA / a j A J 1

0010 2 STX EUA SYN b k s B K S 2

0011 3 ETX IC c / t C L T 3

0100 4 d m u D M U 4

0101 5 HT NL e n v E N V 5

0110 6 ETB f o w F O W 6

0111 7 ESC EOT g p x G P X 7

1000 8 h q y H Q Y 8

1001 9 EM i r z I R Z 9

1010 A ! !

∩

1011 B $ #

1100 C DUP RA < * % @

1101 D SF ENQ NAK ( ) --

1110 E FM + ; > =

1111 F ITB SUB | ⌐ ? -


In EBCDIC, the bits are numbered b0 to b7, with b0 as the MSB and b7 as the LSB:

b0 b1 b2 b3 b4 b5 b6 b7
(MSB)             (LSB)

For example, SP = 1 0 1 0 0 0 0 0

3.10.3 Baudot code

The Baudot code is a 5-bit code, so only 2^5 = 32 combinations are possible, but the code contains 58 symbols. Therefore the same code is used for two symbols, using the letter-shift and figure-shift keys, which change the meaning of a code. The Baudot code is transmitted asynchronously.

No No

case case 5 4 3 2 1 case case 5 4 3 2 1

1 A - 0 0 0 1 1 17 Q 1 1 0 1 1 1

2 B ? 1 1 0 0 1 18 R 4 0 1 0 1 0

3 C : 0 1 1 1 0 19 S . 0 0 1 0 1

4 D WRU 0 1 0 0 1 20 T 5 1 0 0 0 0

5 E 3 0 0 0 0 1 21 U 7 0 0 1 1 1

6 F 1 0 1 1 0 1 22 V - 1 1 1 1 0

7 G & 1 1 0 1 0 23 W 2 1 0 0 1 1

8 H # 1 0 1 0 0 24 X / 1 1 1 0 1

9 I 8 0 0 1 1 0 25 Y 6 1 0 1 0 1

Au-

10 J dible 0 1 0 1 1 26 Z + 1 0 0 0 1

signal


Catridge re-

11 k ( 0 1 1 1 1 27 0 1 0 0 0

turn

12 l ) 1 0 0 1 0 28 Line feed 0 0 0 1 0

13 M . 1 1 1 0 0 29 Letters 1 1 1 1 1

14 N . 0 1 1 0 0 30 Figures 1 1 0 1 1

15 O 9 1 1 0 0 0 31 Space 0 0 1 0 0

16 P 0 1 0 1 1 0 32 Not used 0 0 0 0 0

3.10.4 Extended ASCII

In order to make the size of each code word 1 byte (8 bits), the basic ASCII patterns are augmented by an extra 0 at the left (MSB position). This extra bit position is the one used as the parity bit. The first byte in the extended ASCII is 0000 0000 and the last one is 0111 1111.

3.10.5 Unicode

Unicode is a 16-bit code, which can represent up to 2^16 (65,536) symbols, so the symbols of most of the world's languages can be represented directly. Different sections of the code are allocated to different languages, and some sections are reserved for graphical and special symbols.

3.10.6 ISO

The International Organization for Standardization (ISO) has designed a code of 32 bits, which can represent up to 2^32 symbols.

Bar codes are seen on almost every consumer item sold in modern stores across the world. A bar code is a series of black bars separated by white spaces. The widths of the bars represent binary 1's and 0's, and the bar pattern represents the cost of that item.

4. Security access

5. Document

6. Production counting

7. Automatic billing

Figure 3.14 (a) shows a typical bar code and Figure 3.14 (b) shows

the bar code structure.

Referring to Figure 3.14 (b), the various fields in the bar code structure are as follows:

1. Start Margin


The start margin is a unique sequence of bars and spaces which identifies the beginning of the data field.

2. Data Characters

This field contains the data characters in the format used by the bar code. The data in this field is serially encoded and is extracted from the label with an optical scanner. The photodetector in the scanner senses the reflected light and converts it into binary signals.

3. Check Character

The values of the characters within a message are added together and the sum is divided by a constant to obtain the check character. The stop character field indicates the end of the data, and the stop margin contains a unique pattern of black and white bars to indicate that the bar code has ended. A number of bar code formats are in use, of which the most common is known as Code 39.

3.11 INTRODUCTION TO ERROR DETECTION AND CORRECTION

TECHNIQUES (ERROR CONTROL)

3.11.1 Need for Error Control

Data is transmitted between digital systems such as computers, as shown in Figure 3.15. The signal gets contaminated due to the addition of "noise" to it.

The noise can introduce an error in the binary bits travelling from

one system to the other . That means 0 may change to 1 or 1 may

change to 0.

Such errors can badly affect the performance of a digital system. Therefore it is necessary to detect and correct the errors. This process is called error control.


One or more extra bits are added to the data bits at the time of transmission. These extra bits are called parity bits. They allow the detection, and sometimes the correction, of the errors.

The data bits along with the parity bits form a code word.

Error-detecting codes can only detect errors; they cannot correct them. Error-correcting codes are capable of detecting as well as correcting the errors.

The errors can be categorized as:

1. Content errors

2. Flow integrity errors


1. Content errors

Content errors are errors in the content of a message, e.g. a "0" may be received as "1" or vice versa. Such errors are introduced due to noise added into the data signal during its transmission.

2. Flow integrity errors

Flow integrity errors mean that a data block may be lost in the network, for example because it has been delivered to a wrong destination.

Depending on the number of bits in error, we can classify the errors into two types: single bit errors and burst errors.

The term single bit error suggests that only one bit in the given

data unit such as byte is in error.

A single bit error is shown in Figure 3.16:

Sent: 0 1 1 1 0 0 1 1  →  Medium  →  Received: 0 1 1 1 0 0 1 0
                                                            ↑ error

Figure 3.16 Single bit error

Burst errors

If two or more bits from a data unit such as a byte change from

1 to 0 or from 0 to 1 then burst errors are said to have occurred.


The length of the burst is measured from the first corrupted bit

to the last corrupted bit. Some of the bits in between may not have been

corrupted.

Sent: 0 1 1 1 0 0 1 1  →  Medium  →  Received: 0 1 0 1 1 0 0 1
                                                   └─ length of burst error (5 bits) ─┘

Figure 3.17 Burst errors

The receiver must be able to tell when a transmission error has occurred. Some of the transmitted bits will be reversed (0 to 1 or vice versa) due to transmission impairments.

It is possible for the receiver to detect these errors if the received code

word (corrupted) is not one of the valid code words.

The distance between the transmitted and received code words will be equal to the number of errors, as illustrated in Figure 3.18.

Received code word    1110 1100    0111 1011    1011 0001
Number of errors          1            2            3
Distance                  1            2            3

Figure 3.18

Hence, to detect the errors at the receiver, the valid code words should be separated by a distance of more than 1. Otherwise an incorrectly received code word may itself be some other valid code word, and error detection will be impossible. The minimum distance of a code is the smallest distance between any two valid code words. The aim is not to prevent the errors from occurring but to prevent the undetected errors from occurring.
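The notion of distance between code words can be sketched in Python (a minimal illustration; the function name is ours, not from the book):

```python
def hamming_distance(a, b):
    """Number of bit positions in which two equal-length code words differ."""
    assert len(a) == len(b), "code words must be the same length"
    return sum(x != y for x, y in zip(a, b))

print(hamming_distance("10010110", "00010110"))  # one flipped bit  -> 1
print(hamming_distance("10010110", "01000110"))  # three flipped bits -> 3
```

A received word containing errors is detectable only when it lands at a non-zero distance from every valid code word.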

Error Detection Techniques

The commonly used error detection techniques are redundancy, echoplex, parity checking (VRC), checksum encoding and LRC/CRC.

In the redundancy technique each character is transmitted twice. If the same character is not received even after transmitting it twice, then a transmission error is said to have occurred.


Echoplex is used in systems where human operators enter data manually from a keyboard. Each character is transmitted immediately after it has been typed into the transmit terminal. The receive terminal transmits it back to the transmitter, where it appears on the screen. This serves as a verification that the character has been received by the destination terminal.

If an error occurs, the erroneous character appears on the transmit screen. If this happens, the operator sends a backspace, removes the erroneous character and then retypes the correct character.

The advantage of echoplex is that it does not need any special error-detection circuitry. Its disadvantage is that an error may creep in when a correctly received character becomes erroneous on its journey back to the transmitter. Echoplex also depends on human operators to detect and correct errors.

Parity checking can be used even when information is flowing only in one direction. In this method a redundant bit, known as the parity bit, is added to each word being transmitted. In an 8-bit word the MSB is used as the parity bit and the remaining 7 bits are used as data or message bits.

MSB                         LSB
 P   d6 d5 d4 d3 d2 d1 d0
 │   └────── 7 data bits ──┘
 └ parity bit

The parity can be of two types: even parity or odd parity.

¾¾ Even parity means the number of 1’s in the given word including

the parity bit should be even (2,4,6…).

¾¾ Odd parity means the number of 1’s in the given word including

the parity bit should be odd(1,3,5..).

Only one extra bit per word is required.

For odd parity this bit is set to 1 or 0 at the transmitter such that

the number of “1 bits” in the entire word is odd. This is illustrated

in Figure 3.21(b).

For even parity this bit is set 1 or 0 such that the number of “1 bits”

in the entire word is even. This is illustrated in Figure 3.21(a).

(a) Even parity           (b) Odd parity
 P   Data bits             P   Data bits
 0   1001011               1   1001011
 1   0010011               0   1000110

The parity bit P is set to obtain an even parity in (a) and an odd parity in (b).

Figure 3.21


error if the parity of the received signal is different from the expected

parity. That means if it is known that the parity of the transmitted

signal is always going to be “even” and if the received signal has an

odd parity then the receiver can conclude that the received signal is

not correct. This is as shown in figure 3.22

If an error occurs during transmission, the parity of the code word changes. The parity of the received code word is checked at the receiver, and a change in parity indicates that an error is present in the received word. This is shown in Figure 3.22.

The receiver can then discard the received byte and request retransmission of the same byte from the transmitter.

                           P   Message bits   Parity   Receiver's decision
Transmitted code           0   10010110       Even     Correct word
Received code with
one error                  0   00010110       Odd      Incorrect word
Received code with
three errors               0   01000110       Odd      Incorrect word

Figure 3.22 The receiver can detect the presence of errors if the number of errors is odd, i.e. 1, 3, 5, …

But if the number of errors is two or any even number, then the parity of the received code word will not change. It will still remain even, as shown in Figure 3.23, and the receiver will fail to detect the presence of errors.


Transmitted code : 0 10010110  (even parity)

                           P   Message bits   Parity   Receiver's decision
Received code with
two errors                 0   11110110       Even     No error

Figure 3.23 The receiver cannot detect the presence of errors if the number of errors is even, i.e. 2, 4, 6, …

Thus simple parity checking is not suitable for the detection of an even number of errors (two, four, six, etc.), i.e. it is not useful in detecting burst errors. Moreover, parity checking can only detect an error; it cannot reveal the location of the erroneous bit, nor can it correct the error.
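The even/odd parity rules above, and their blindness to an even number of errors, can be captured in a short sketch (illustrative Python; the function names are our own):

```python
def parity_bit(data, even=True):
    """Return the parity bit that gives the word even (or odd) overall parity."""
    ones = data.count("1")
    if even:
        return "0" if ones % 2 == 0 else "1"
    return "1" if ones % 2 == 0 else "0"

def parity_ok(word, even=True):
    """Check a received word (parity bit included) against the expected parity."""
    return (word.count("1") % 2 == 0) == even

data = "1001011"                   # four 1s
word = parity_bit(data) + data     # even parity -> "01001011"
print(word, parity_ok(word))       # True: no error detected
bad = "0" + "0001011"              # one bit flipped in transit
print(bad, parity_ok(bad))         # False: single error detected
bad2 = "0" + "0101011"             # two bits flipped in transit
print(bad2, parity_ok(bad2))       # True: an even number of errors slips through
```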

P 7 6 5 4 3 2 1

H 0 1 0 0 1 0 0 0

O 1 1 0 0 1 1 1 1

L 1 1 0 0 1 1 0 0

E 1 1 0 0 0 1 0 1

Note that the parity bits are selected in order to obtain an even

parity for each row (i.e. for each letter)

However, this scheme cannot detect an even number of errors within the same word, or burst errors, since such errors do not change the parity.

In the checksum method, the words of a data block are added together and the sum is retained at the transmitter, as shown in Figure 3.24.


Word A 1 0 1 1 0 1 1 1

+

Word B 0 0 1 0 0 0 1 0

Sum 1 1 0 1 1 0 0 1

Each transmitted word is added to the running sum. At the end of the transmission the sum (called a checksum) up to that time is sent.

The parity method is not useful in detecting errors under burst-error conditions; the checksum error detection method can be used successfully in detecting such errors.

The checksum is computed over a block of data bytes. In this method an eight-bit accumulator is used to add the 8-bit bytes of a block of data to find the "checksum byte". The carries out of the MSB are ignored while finding the checksum byte. The generation of the checksum will be clear from example 3.1.

Example 3.1 Find the checksum byte for the data bytes 10110001, 10101011, 00110101 and 10100001.

Solution

             1 0 1 1 0 0 0 1
Data      +  1 0 1 0 1 0 1 1
bytes     +  0 0 1 1 0 1 0 1
          +  1 0 1 0 0 0 0 1
          ───────────────────
 (1 0)       0 0 1 1 0 0 1 0   ← Checksum byte

Note that the carries out of the MSB (1 0) have been ignored while writing the checksum byte.


Along with the block of data bytes, the "checksum" byte is also transmitted. The checksum byte is regenerated at the receiver separately by adding the received bytes. The regenerated checksum is then compared with the transmitted one. If both are identical, there is no error; if they are different, errors are present in the block of received data bytes.

Sometimes the 2's complement of the checksum is transmitted instead of the checksum itself. The receiver then accumulates all the bytes, including the 2's complement of the checksum. If there is no error, the contents of the accumulator should be zero after accumulation of the 2's complement of the checksum byte.

An advantage of the checksum method is that the data bits are "mixed up" by the 8-bit addition, so the checksum represents the overall data block. With an 8-bit checksum there is therefore a 255-to-1 chance of detecting random errors.
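The working of example 3.1, including the 2's-complement variant just described, can be reproduced in a few lines (a sketch; `checksum_byte` is our own name):

```python
def checksum_byte(data_bytes):
    """8-bit checksum: add the bytes, ignoring carries out of the MSB."""
    return sum(data_bytes) & 0xFF

# The four data bytes of example 3.1
block = [0b10110001, 0b10101011, 0b00110101, 0b10100001]
cs = checksum_byte(block)
print(f"{cs:08b}")                       # 00110010, as in example 3.1

# Variant: transmit the 2's complement of the checksum, so that the
# receiver's accumulator ends at zero when the block is error free.
twos = (-cs) & 0xFF
print(checksum_byte(block + [twos]))     # 0
```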

When several characters are received in succession, the resulting collection of bits can be considered as a block of data with rows and columns, as shown in Figure 3.25. Parity bits are then produced for each row and each column of the block.

Characters:          C  O  M  P  U  T  E  R    LRC bits
                                               (even parity)
              b1     1  1  1  0  1  0  1  0        1
              b2     1  1  0  0  0  0  0  1        1
7-bit ASCII   b3     0  1  1  0  1  1  1  0        1
codes (mes-   b4     0  1  1  0  0  0  0  0        0
sage bits)    b5     0  0  0  1  1  1  0  1        0
              b6     0  0  0  0  0  0  0  0        0
              b7     1  1  1  1  1  1  1  1        0
VRC bits
(even parity)        1  1  0  0  0  1  1  1        1

The VRC bits make the parity of each column even; the LRC bits make the parity of each row even.


The LRC bits indicate the parity of rows and VRC bits indicate the

parity of columns as shown in figure 3.25

The VRC bits are the parity bits associated with the ASCII code for each character. Each VRC bit makes the parity of its corresponding column an even parity. For example, consider column 1, corresponding to the character "C". The ASCII code for the character C is,

Character C

b1 1

b2 1 → Column - 1 of the data block

b3 0

b4 0

b5 0

b6 0

b7 1

VRC bit → 1

→ VRC bit = 1 to make the parity of first

column even

Figure 3.26

Therefore the eighth bit b8, which is the VRC bit, is made "1" to make the parity even.

The LRC bits are parity bits associated with the data block of

figure 3.25 Each LRC bit will make the parity of the corresponding row,

an even parity. For example, consider row 1 of Figure 3.27, where the LRC bit is chosen to make the row parity even.

Figure 3.27

A single bit error results in an incorrect LRC in one of the rows and an incorrect VRC in one of the columns. The bit which is common to that row and column is the bit in error.

A limitation of this method is that multiple errors in rows and columns can only be detected; they cannot be corrected, because it is not possible to locate the bits which are in error. This will be clear when you solve example 3.2.

Example 3.2 The following bit stream is encoded using VRC,LRC and

even parity. Locate and correct the error if it is present.

11100001

Solution

1. Figure 3.28 shows the received data block along with the LRC and VRC bits.

2. The LRC of the first row and the VRC of the fifth column indicate wrong parity. Therefore the fifth bit in the first row (encircled bit) is incorrect. Thus, using VRC and LRC, it is possible to locate and correct the bit in error.

Column:         1  2  3  4  5  6  7  8    LRC bits (even parity)
        b1      1  1  1  0 (0) 0  1  0        1   ← wrong parity
        b2      1  1  0  0  0  0  0  1        1
        b3      0  1  1  0  1  1  1  0        1
Data    b4      0  1  1  0  0  0  0  0        0
block   b5      0  0  0  1  1  1  0  1        0
        b6      0  0  0  0  0  0  0  0        0
        b7      1  1  1  1  1  1  1  1        0
VRC bits
(even parity)   1  1  0  0  0  1  1  1        1
                            ↑
                      wrong parity

The encircled bit (row b1, column 5) is the bit in error.

Figure 3.28 Received data block along with the VRC and LRC bits

This technique is more powerful than the parity check and checksum error detection.
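The VRC/LRC construction of Figure 3.25 can be reproduced as a sketch (our own helper, not from the book; b1 is taken as the LSB of each 7-bit ASCII code):

```python
def vrc_lrc(chars):
    """Even-parity VRC (one bit per character column) and LRC (one bit per
    bit row, plus a final bit over the VRC row), as in Figure 3.25."""
    # one 7-bit ASCII code per character, reversed so index 0 is b1 (LSB)
    cols = [format(ord(c), "07b")[::-1] for c in chars]
    vrc = ["1" if col.count("1") % 2 else "0" for col in cols]
    rows = ["".join(col[i] for col in cols) for i in range(7)]
    lrc = ["1" if row.count("1") % 2 else "0" for row in rows]
    lrc.append("1" if "".join(vrc).count("1") % 2 else "0")  # LRC of VRC row
    return vrc, lrc

vrc, lrc = vrc_lrc("COMPUTER")
print("VRC:", "".join(vrc))   # 11000111, as in Figure 3.25
print("LRC:", "".join(lrc))   # 11100001
```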

In the CRC method a sequence of redundant bits, called the CRC or CRC remainder, is appended at the end of a data unit such as a byte. The appended bits are chosen so that the resulting data unit becomes exactly divisible by another predetermined binary number. At the receiver, the incoming data unit is divided by the same number.

There is no error if this division does not yield any remainder. But a

non-zero remainder indicates presence of errors in the received data

unit.


The CRC is generated using the procedure given below. The CRC bits must satisfy one requirement: when appended to the data unit, they must result in a bit sequence which is exactly divisible by the divisor.

   Data | 00……0 (n zeros)  →  divide by the (n+1)-bit divisor  →  Remainder = CRC
   Transmitted code word = Data | CRC

Step 1:

Append a string of n 0s to the data unit, where n is one less than the number of bits in the predecided divisor (which is n+1 bits long).


Step 2:

Divide the newly generated data unit in step 1 by the divisor .This

is a binary division.

Step 3:

The remainder obtained from the division of step 2 is the CRC.

Step 4:

This CRC will replace the n 0s appended to the data unit in step

1, to get the code word to be transmitted as shown in figure 3.29

At the receiver:

   Received code word = Data | CRC  →  divide by the same divisor
   If the remainder is 0, then there are no errors.

¾¾ The code word received at the receiver consists of data and CRC.

¾¾ The receiver treats it as one unit and divides it by the same (n+1) bit

divisor which was used at the transmitter.

¾¾ If the remainder is zero, then the received code word is error free and

hence should be accepted.


If the remainder is non-zero, errors are present and the corresponding code word should be rejected.
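The mod-2 division used in the worked examples below can be sketched directly (illustrative Python, not from the book; five 0s are appended exactly as in the book's working for the 5-bit divisor):

```python
def mod2_div(bits, divisor):
    """Mod-2 (XOR) long division; returns the dividend reduced to its
    remainder, so only the trailing positions can still hold 1s."""
    b = list(bits)
    for i in range(len(b) - len(divisor) + 1):
        if b[i] == "1":                      # leading 1 -> subtract (XOR) divisor
            for j, d in enumerate(divisor):
                b[i + j] = "0" if b[i + j] == d else "1"
    return "".join(b)

# Example 3.4 below: data word 110010101, divisor 10101, five 0s appended
data, divisor = "110010101", "10101"
remainder = mod2_div(data + "00000", divisor)[-5:]
print(remainder)                 # 00011
codeword = data + remainder
print(codeword)                  # 11001010100011
# Receiver check: the code word divides exactly (zero remainder -> no errors)
print("1" not in mod2_div(codeword, divisor))   # True
# Example 3.3 below: the received word leaves a non-zero remainder -> errors
print("1" in mod2_div("1100100101011", divisor))  # True
```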

Example 3.3 The code word is received as 1100 1001 01011 .Check

whether there are errors in the received code word , if the divisor is

10101 .(The divisor corresponds to the generator polynomial).

Solution

As we know the code word is formed by adding the dividend and the

remainder

This code word will have an important property that it will be

completely divisible by the divisor.

Thus at the receiver we have to divide the received code word by, the

same divisor and check for the remainder.

If there is no remainder, then there are no errors. But if there is a remainder after the division, then there are errors in the received code word.

Let us use this technique and find if there are errors.

Received code word : 1100 1001 01011

Divisor : 10101

Dividing the received code word 1 1 0 0 1 0 0 1 0 1 0 1 1 by 1 0 1 0 1, using mod-2 (XOR) subtraction at each step:

   1 1 0 0 1 ⊕ 1 0 1 0 1 = 0 1 1 0 0
   1 1 0 0 0 ⊕ 1 0 1 0 1 = 0 1 1 0 1
   1 1 0 1 0 ⊕ 1 0 1 0 1 = 0 1 1 1 1
   1 1 1 1 1 ⊕ 1 0 1 0 1 = 0 1 0 1 0
   1 0 1 0 0 ⊕ 1 0 1 0 1 = 0 0 0 0 1
   (the next bits 1, 0, 1 are brought down without subtraction)
   1 1 0 1 1 ⊕ 1 0 1 0 1 = 0 1 1 1 0   ← Remainder

Quotient = 1 1 1 1 1 0 0 0 1, Remainder = 0 1 1 1 0

Since the remainder (01110) is non-zero, there are errors in the received code word.

The generation of the CRC code will be clear after solving example 3.4.

Example 3.4 Generate the CRC code for the data word of 110010101.

The divisor is 10101.

Solution

Divisor : 10101

Five 0s are appended to the data word:

Dividend : 1 1 0 0 1 0 1 0 1 0 0 0 0 0

Dividing by 1 0 1 0 1, using mod-2 (XOR) subtraction at each step:

   1 1 0 0 1 ⊕ 1 0 1 0 1 = 0 1 1 0 0
   1 1 0 0 0 ⊕ 1 0 1 0 1 = 0 1 1 0 1
   1 1 0 1 1 ⊕ 1 0 1 0 1 = 0 1 1 1 0
   1 1 1 0 0 ⊕ 1 0 1 0 1 = 0 1 0 0 1
   1 0 0 1 1 ⊕ 1 0 1 0 1 = 0 0 1 1 0
   (the next bit is brought down without subtraction)
   1 1 0 0 0 ⊕ 1 0 1 0 1 = 0 1 1 0 1
   1 1 0 1 0 ⊕ 1 0 1 0 1 = 0 1 1 1 1
   1 1 1 1 0 ⊕ 1 0 1 0 1 = 0 1 0 1 1
   1 0 1 1 0 ⊕ 1 0 1 0 1 = 0 0 0 1 1   ← Remainder = 1 1

The code word is the data word followed by the remainder:

Code word = 1 1 0 0 1 0 1 0 1 0 0 0 1 1

The error detection capability of the CRC method depends on the choice of divisor.


Example 3.5 Calculate CRC for the frame 110101011 and the generator

polynomial =x4 + x + 1 and write the transmitted frame.

Solution

The generator polynomial is first converted into its binary equivalent, which is used as the divisor in the process of CRC generation.

Divisor : x⁴ + 0·x³ + 0·x² + x + 1 = 1 0 0 1 1

Five 0s are appended to the frame:

Dividend : 1 1 0 1 0 1 0 1 1 0 0 0 0 0

Dividing by 1 0 0 1 1, using mod-2 (XOR) subtraction at each step:

   1 1 0 1 0 ⊕ 1 0 0 1 1 = 0 1 0 0 1
   1 0 0 1 1 ⊕ 1 0 0 1 1 = 0 0 0 0 0
   (five more bits are brought down until the leading bit is again 1)
   1 1 0 0 0 ⊕ 1 0 0 1 1 = 0 1 0 1 1
   1 0 1 1 0 ⊕ 1 0 0 1 1 = 0 0 1 0 1
   (the last 0 is brought down)           Remainder = 0 1 0 1 0

The transmitted frame is the data word followed by the remainder:

   1 1 0 1 0 1 0 1 1   0 1 0 1 0
   └── Data word ──┘   └ Remainder ┘

Error correction

Error correction techniques can be classified as symbol substitution, retransmission (ARQ) and forward error correction (FEC).

In symbol substitution, if a character is received in error, a special symbol such as a reverse question mark (¿) is substituted for the erroneous character. For example, if the word "some" were received with an error in its first character, it would be displayed as "¿ome".

If the operator can understand the message by inspection, retransmission is not necessary. If not, the operator cannot determine the correct character, so retransmission is essential.

In forward error correction, errors are corrected by adding a group of parity bits or check bits, as shown in Figure 3.31. The source generates the data in the form of binary symbols. The encoder accepts these bits and adds the check bits to them to produce the code words. These code words are transmitted towards the receiver. The check bits are used by the decoder to detect and correct the errors.

Data source → Encoder → code words (data bits + check bits) → Decoder → Destination

Figure 3.31

The encoder of figure 3.31 adds the check bits to the data bits,

according to the prescribed rule. This rule will be dependent on the

type of code being used.

The decoder separates out the data and check bits. It uses the

parity bits to detect and correct errors if they are present in the

received code words.

In FEC the receiver searches for the most likely correct word. The distance between the received invalid code word and each of the possible valid code words is measured.

The nearest valid code word (the one having minimum distance)

is the most likely correct version of the received code word as shown in

Figure 3.32.

Received code word: 1100 1100

   Valid code word 1: 1101 1100   — distance 1
   Valid code word 2: 1110 1101   — distance 2
   Valid code word 3: 1111 0100   — distance 3

Figure 3.32


In Figure 3.32 the valid code word 1 has the minimum distance

(1), hence it is the most likely correct code word.
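The minimum-distance decoding of Figure 3.32 can be sketched as follows (an illustration; the function name is ours):

```python
def nearest_codeword(received, valid_codewords):
    """FEC decoding: choose the valid code word at minimum Hamming distance."""
    distance = lambda a, b: sum(x != y for x, y in zip(a, b))
    return min(valid_codewords, key=lambda v: distance(received, v))

valid = ["11011100", "11101101", "11110100"]   # the valid words of Figure 3.32
print(nearest_codeword("11001100", valid))     # 11011100 (distance 1)
```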

There are two basic systems of error detection and correction: the first one is the forward error correction (FEC) system, which has been discussed in the previous section; the second one is the automatic repeat request (ARQ) system.

In an ARQ system, when an error is detected, a request is made for the retransmission of that signal. Therefore a feedback channel is required for sending the request for retransmission.

The ARQ system differs from the FEC system in the following respects:

1. In an ARQ system a smaller number of check bits (parity bits) needs to be sent. This increases the (k/n) ratio for an (n, k) block code transmitted using the ARQ system.

2. Additional hardware to implement repeat transmission of code words will be needed.

3. The bit rate of forward transmission must make allowance for the

backward repeat transmission.

The general ARQ system is shown in Figure 3.33.

Message input → Encoder → Input buffer and controller → Forward transmission channel → Detector → Output buffer and controller
       ACK/NAK ← Return transmission channel (feedback path)

Figure 3.33


¾¾ The encoder produces code words for each message signal at its

input. Each code word at the encoder output is stored temporarily

and transmitted over the forward transmission channel.

¾¾ At the destination a decoder will decode the code words and look for errors.

¾¾ The decoder will output a positive acknowledgement (ACK) if no errors are detected, and a negative acknowledgement (NAK) if errors are detected.

¾¾ When a NAK is received over the return transmission path, the "controller" will retransmit the appropriate word from the words stored in the input buffer. A particular word may have to be retransmitted twice or more times.

¾¾ The output controller and buffer on the receiver side assemble the

output bit stream from the code words accepted by the decoder.

The bit rate of the return transmission which involves the return

transmission of ACK/NAK signal is low as compared to the bit rate of

the forward transmission . Therefore the error probability of the return

transmission is negligibly small.


The block diagram of the stop-and-wait ARQ system is shown in Figure 3.34. This method is the simplest ARQ system.

The transmitter sends the first code word X1 during time Tw. This code word reaches the receiver after a delay time td, which is proportional to the distance between the transmitter and the receiver.

Transmitter:  X1 ──Tw──→        X2 ──→            X2 (retransmitted) ──→
Receiver:        X1 → ACK          X2 → NAK (error detected)

Figure 3.34 Stop-and-wait ARQ

At the receiver the detector searches for error. As error is not found

it sends the positive acknowledgement signal ACK back to the

transmitter.

After receiving this ACK signal, the transmitter will send the next

code word X2. The receiver detects the error and sends a negative

acknowledgement signal NAK to the transmitter.

On receiving the NAK signal, the transmitter retransmits the same code word X2.

The drawback of this system is that the transmitter remains idle while waiting for the ACK and NAK signals. This wastes a lot of transmitter time.
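The stop-and-wait behaviour can be simulated with a toy model (entirely illustrative: the channel function and its 30 % error rate are invented for this sketch):

```python
import random

def stop_and_wait(codewords, channel):
    """Stop-and-wait ARQ: send a word, wait for ACK/NAK, resend on NAK.
    `channel(word)` returns True when the word arrives intact."""
    delivered, sent = [], 0
    for word in codewords:
        while True:
            sent += 1
            if channel(word):           # receiver's detector finds no error
                delivered.append(word)  # receiver returns ACK
                break                   # transmitter moves to the next word
            # receiver returned NAK: retransmit the same word

    return delivered, sent

random.seed(1)
noisy = lambda w: random.random() > 0.3   # about 30 % of words arrive corrupted
delivered, sent = stop_and_wait(["X1", "X2", "X3", "X4"], noisy)
print(delivered, sent)   # all four delivered, at the cost of extra transmissions
```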


The go-back-N ARQ system is illustrated in Figure 3.35.

Transmitter: X1 X2 X3 X4 X5 X6 X7 | X3 X4 X5 X6 X7 X8 …   ← retransmission begins at X3
Receiver:    X1 X2 X3 X4 X5 X6 X7 | X3 X4 X5 X6 X7 X8 …
                   ↑ error detected in X3, NAK sent; X4–X7 are discarded

Figure 3.35

¾¾ The major difference between this and the previous system is that

the transmitter does not wait for ACK signal for the transmission of

next code word.

¾¾ It keeps transmitting code words continuously until it receives the "NAK" signal.

¾¾ When the receiver detects an error in the third code word X3 as shown

in Figure 3.35. The receiver sends the NAK signal.

¾¾ But this signal takes some time to reach the transmitter by that time

the transmitter has transmitted code words upto X7.

¾¾ On reception of the NAK signal the transmitter will retransmit all the

code words from X3 onwards. The receiver discards all the code words

it has received after X3 i.e. X3 to X7. It will then receive all the code

words that are retransmitted by the transmitter.

The selective repeat ARQ system is illustrated in Figure 3.36.

Transmitter: X1 X2 X3 X4 X5 X6 X7 X3 X8 X9 X10 …   ← only X3 is retransmitted
Receiver:    X1 X2 X3 X4 X5 X6 X7 X3 X8 X9 X10 …
                   ↑ error detected in X3, NAK sent

Figure 3.36 Selective repeat ARQ system

In this system as well, the transmitter does not wait for the ACK signal before transmitting the next code word. It transmits the code words continuously till it receives the "NAK" signal from the receiver.

The receiver sends the “NAK” signal back to the transmitter as soon

as it detects an error in the received code word. For example the

receiver detects an error in the third code word X3.

By the time the NAK signal reaches the transmitter, it has already transmitted the code words up to X7, as shown in Figure 3.36. The transmitter then retransmits only the code word X3 and continues with the sequence X8, X9, … as shown in Figure 3.36.

The code words X4,X5,X6 and X7 received by the receiver are not

discarded by the receiver . The receiver receives the retransmitted code

word in between the regular code words. Therefore the receiver will have

to maintain the code words sequentially.

This is the most efficient but also the most complex of all the ARQ systems.

3.14 Data Communication Hardware

Figure 3.37 shows a typical data communication circuit using the bus topology. In this arrangement, the transmission path from the host computer to a remote computer terminal is called a data communication link.

The station that controls the circuit is called the primary, and the other stations are called secondaries or remotes. Each station consists of data terminal equipment, a line control unit (LCU) and a modem. The data terminal equipment includes equipment such as computer terminals and printers.

Primary station: Mainframe computer → Mux channel (parallel interface) → Front-end processor (FEP) [DTE] → Data modem [DCE], with the RS-232 serial interface between the DTE and DCE.

Secondary stations 1 and 2: Data modem [DCE] — RS-232 — DTE (computer terminal CT and printer ROP).

Figure 3.37


The application program is stored on the mainframe computer at the primary station. The mainframe processes the data received from the secondary stations, and it also stores software for database management.

The LCU at the primary station is more complicated than the LCU at the secondary stations.

The primary LCU controls the data traffic to and from various

circuits having different characteristics.

The secondary LCU is used for directing the data traffic between one

data link and a few terminal devices. All these devices operate at the

same speed using the same character code.

The LCU at the primary station is generally a front-end processor (FEP).

Functions of LCU

The important functions of the LCU are as follows:

1. It provides an interface between the data link and other circuits.

2. The LCU directs the flow of input and output data between different links and their application programs.

5. The data link control (DLC) characters are inserted and deleted in the LCU.


The LCU together with its terminal devices forms the Data Terminal Equipment (DTE).

±± Inside LCU, there is one circuit which performs most of the tasks

mentioned above. It is called as UART when the transmission is of

asynchronous type and it is called as USRT when synchronous

transmission is being used.

The UART is used for data transmission between the DTE and DCE. The asynchronous data format is used, and no clock signal is sent along with the data.

Functions of UART

Before beginning the transmission, the DTE loads a control word into the UART control register. This control word indicates the nature of the data: the number of data bits, whether parity is used and, if used, whether it is an even or an odd parity.

Figure 3.38 shows how to program the control word for various

functions. The control word set up the data, parity and stop bit

steering logic circuit.

NPB   1 = no parity bit,  0 = parity bit
POE   1 = parity even,    0 = parity odd
NSB   1 = 2 stop bits,    0 = 1 stop bit

NDB2  NDB1   Bits/word
 0     0        5
 0     1        6
 1     0        7
 1     1        8

Figure 3.38 Control word

The UART sends a transmit buffer empty (TNMT) signal to the DTE to

indicate that it is ready to receive data.

The DTE then applies the data character to the transmit data lines TD0–TD7 and strobes it into the transmit buffer register with the transmit data strobe signal (TDS).

The contents of the transmit buffer register are then transferred to the transmit shift register.

The data pass through the steering logic circuit where the start, stop

and parity bits are picked up.

The data is loaded into the transmit shift register and then serially output on the transmit serial output (TSO) pin. The bit rate of the output data is equal to the transmit clock frequency (TCP). While the present character is being clocked out, the DTE loads the next character into the buffer register.

This process continues till the DTE transfers all its data. This is

shown in Figure 3.39 (b).


Parallel input data from the LCU enters the control register (CR, loaded via CS with NPB, POE, NSB, NDB1 and NDB2) and the transmit buffer register (TBR). The parity generator (PG) and the timing and shift-bit-generation circuit (clocked by TCP) feed the transmit shift register, which produces the serial data out. The status word register (read via SWE) provides the TBMT flag.

Figure 3.39 (a) Simplified block diagram of a UART transmitter

Figure 3.40 (a) Shows the simplified block diagram of a UART receiver.

The receive serial input (RSI) passes through the start-bit verify circuit and the receive timing circuit into the receive shift register (RSR). A parity checker validates each character before it is transferred to the receive buffer register (RBR), whose parallel output goes to the LCU. Status flags are held in the status word register (SWR) and read via SWE and RDE.


The control word is used by the receiver to determine the number of stop bits, the number of data bits and the parity bit information for the UART receiver. The UART receiver ignores the idle-line 1's.

After a valid start bit is detected by the start bit verification circuit, the data character is clocked serially into the receive shift register.

When one complete character is loaded, into the shift register, the

complete character is transferred to the buffer register using parallel

data transfer.

The receive data available (RDA) flag is set in the status word

register.

In order to read the status register, the DTE observes the status word

enable (SWE). If this line is found active the character is read from

the buffer register by placing an active condition on the receive data

enable (RDE) pin.

Once the data reading is over, the DTE places an active signal on the receive data available reset (RDAR) pin, which resets the RDA flag.

The next character is then clocked into the receiver shift register, and this process is repeated until all the data have been received.


The USRT is used for synchronous data transmission between the DTE and DCE. In synchronous transmission, clocking information is transferred between the USRT and the modem. Each transmission begins with a unique SYN character.

Functions of USRT

2. Error detection.

The USRT operates very similarly to the UART; hence we will discuss only the differences.

The USRT does not use start and stop bits. Instead, unique SYN characters are loaded into the transmit and receive SYN registers before the data is transmitted.

Figure 3.41 (a) shows the USRT control word:

NPB   0 = parity bit
POE   0 = parity odd

NDB2  NDB1   Bits/word
 0     0        5
 0     1        6
 1     0        7
 1     1        8


The bit rate of the transmit clock is adjusted as per requirement. The desired SYN character is loaded from the parallel input pins (DB0–DB7) into the transmit SYN register with the help of the transmit SYN strobe (TSS).

Data is loaded into the transmit data register from DB0-DB7 with the

help of transmit data strobe (TDS) signal.

The next character is taken from the transmit data register if the TDS pulse comes during the transmission of the present character.

But if TDS pulse does not come, the next transmitted character is

extracted from the transmit SYN register and the SYN character

transmitted signal (SCT) is set.

The transmit buffer empty (TBMT) signal is used for requesting the

next character from the DTE.

The serial output data appears on the transmit serial out (TSO) pin.


The bit rate of the receiver clock signal (RCP) is adjusted as per

requirement and the desired SYN character is loaded into the receive

SYN register from DB0-DB7 with the help of receive SYN strobe (RSS).

When the receiver rest input goes from high to low , the receiver is

placed in the search mode in which serially received data is checked

bit by bit to find the SYN character.

After clocking each bit into the receive shift register, its contents are compared with those of the receive SYN register. If they are identical, a SYN character has been found and the SYN character received (SCR) output is set.

This character is transferred into the receiver buffer register and the

receiver is placed into the character mode.

the receive flags are set accordingly.

To standardize the connection between the DTE and DCE, a serial interface is introduced. The serial interface carries data, control signals and timing information between the DTE and DCE.

In this section we are going to discuss the most widely used serial interface, RS-232.


"RS" means Recommended Standard. The RS-232 standard is published by the Electronic Industries Association (EIA). The latest revision is known as the RS-232C standard; it includes the original RS-232 standards and is similar to the interfaces recommended by the ITU-T (CCITT) standards.

The terminal is called the Data Terminal Equipment (DTE), and the modem is called the Data Circuit-Terminating Equipment (DCE).

To specify the exact nature of interface between the DTE and DCE

following characteristics are used:

1. Mechanical

2. Electrical

3. Functional

4. Procedural

The mechanical characteristics specify the physical connection between the DTE and DCE. Typically, the data signals, control signals, timing signals and ground signal are bundled into a cable with a terminating connector, male or female, at each end. The pin connector is shown in Figures 3.42 (a) and 3.42 (b).


The electrical characteristics specify the voltage levels and the timing of voltage changes. Both DTE and DCE must use the same code (e.g. NRZ or RZ). These characteristics also determine the data rates and distances that can be achieved.

The functional characteristics specify the functions assigned to each of the pins in the 9-pin or 25-pin connector, and what they mean. Figure 3.43 shows the 9 pins that are nearly always used.


These pins carry data, control and timing signals, and electrical ground.

The procedural characteristics specify the sequence of events. The protocol is based on action-reaction pairs. For example, when the DTE (terminal) asserts Request to Send, the DCE (modem) replies with Clear to Send. Similar action-reaction pairs exist for other circuits as well.

In RS-232, a binary "1" is called a mark and a binary "0" is called a space. As per the RS-232C standard, the voltage ranges representing mark and space are well defined. A mark corresponds to a voltage between −3 V and −25 V, and a space to a voltage between +3 V and +25 V, as shown in Figure 3.44.


The region between −3 V and +3 V is an undefined transition range; there should not be any mark or space voltages within this 6-volt-wide zone at any time. Thus, to send a "mark" the voltage level should be between −3 V and −25 V, and to send a "space" it should be between +3 V and +25 V.
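These voltage ranges translate directly into a small classifier (an illustrative sketch, not part of the RS-232 standard itself):

```python
def rs232_level(volts):
    """Classify an RS-232C line voltage using the ranges quoted above."""
    if -25.0 <= volts <= -3.0:
        return "mark (binary 1)"
    if 3.0 <= volts <= 25.0:
        return "space (binary 0)"
    return "undefined"            # the 6 V wide transition zone, or out of range

print(rs232_level(-12.0))   # mark (binary 1)
print(rs232_level(9.0))     # space (binary 0)
print(rs232_level(1.5))     # undefined
```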

In principle, data could be sent using only a single wire and a ground wire. The RS-232 standard, however, makes provision for various handshaking arrangements: there are actually 19 signals used for interfacing, and the RS-232 standard uses at most 25 wires in a cable and connector. The standard divides these signals into four groups:

1. Data group

2. Control group

3. Timing group

4. Ground


Table 3.6 shows the RS-232 signals divided into the four

categories.

Name  Circuit  Description                               To     Function

Data signals
BA    103   Transmitted data                             DCE    Transmitted by DTE
BB    104   Received data                                DTE    Received by DTE
SBA   118   Secondary transmitted data                   DCE    Transmitted by DTE
SBB   119   Secondary received data                      DTE    Received by DTE

Control signals
CA    105   Request to send                              DCE    DTE wishes to transmit
CB    106   Clear to send                                DTE    DCE is ready to receive; DTE may send
CC    107   DCE ready                                    DTE    DCE is ready to operate
CD    108.2 DTE ready                                    DCE    DTE is ready to operate
CE    125   Ring indicator                               DTE    DCE is receiving a ringing signal on the channel line
CF    109   Received line signal detector                DTE    DCE is receiving a signal within appropriate limits
CG    110   Signal quality detector                      DTE    Indicates whether there is a high probability of error
CH    111   Data signal rate selector                    DCE    Selects one of two data rates
CI    112   Data signal rate selector                    DTE    Selects one of two data rates
CJ    133   Ready for receiving                          DCE    On/off flow control
SCA   120   Secondary request to send                    DCE    DTE wishes to transmit on reverse channel
SCB   121   Secondary clear to send                      DTE    Same as 106, for reverse channel
SCF   122   Secondary received line signal detector      DTE    Same as 109, for reverse channel
RL    140   Remote loopback                              DCE    Instructs remote DCE to loop back signals
LL    141   Local loopback                               DCE    Instructs DCE to loop back signals
TM    142   Test mode                                    DTE    Local DCE in a test condition

Timing signals
DA    113   Transmitter signal element timing            DCE    Clocking signal; transitions occur at the centre of each signal element
DB    114   Transmitter signal element timing            DTE    Clocking signal; both 113 and 114 relate to signals on circuit 103
DD    115   Receiver signal element timing               DTE    Clocking signal for the received data

Ground
AB    102   Signal ground/common return                  —      Common ground reference for all circuits

Table 3.6 Signals of the RS-232 standard

Data

These are the lines which actually carry the message bits between the DTE and DCE ends. The transmitted data line is used to carry a signal from the DTE end to the DCE end.


The received data line is used to carry a signal from the DCE end to the DTE end. The data is transmitted serially on these lines.

The secondary lines are provided to transmit and receive data in those applications in which there are two channels in each direction. Of the two, the first is a primary high-speed, high-performance channel, and the second is used for carrying less critical messages, such as how many errors have been detected or the condition of the data link. Most applications do not use these lines, but some DTE and DCE equipment can make use of them.

Control lines

There are in all 12 control lines, nine of which are used for the control of the primary channel and three for the secondary channel. Of the nine primary control lines, six are for the DCE-to-DTE direction and the remaining three are for the DTE-to-DCE direction.

Timing lines

In synchronous transmission, a clocking signal must be sent along with the data. These timing signals are used for receiver synchronization. Most RS-232 installations are asynchronous and hence do not use these lines.

Ground

As seen from Table 3.6, there are only two ground lines, namely signal ground and protective ground. The protective ground is connected to the chassis of the equipment; it provides protection to the user against shocks.

The signal ground acts as the common return for transmission. The voltages that are transmitted on the other wires are measured with respect to this signal ground, which provides a better performance and minimizes the noise.

The control lines for handshaking are request to send (RTS), clear to

send (CTS),data set ready (DSR) and data terminal ready (DTR).

Here “data set” is a communication box which acts as DCE and “data

terminal” is the computer or terminal which is a DTE.

The other three control lines are “ring indicator”, “received line signal

detector” and the “signal quality detector”.

The received line signal detector shows the presence of data coming into the DCE from the telephone lines. The DCE also shows whether the quality of the received signal is adequate for low-error performance, and indicates this via the “signal quality detector”.

If a telephone line is being used, the DCE can tell the DTE that someone is calling: when it detects the bell ringing signal on the line, it uses the “ring indicator” to show this.

3. It is suitable for low-baud-rate, slow systems, typically up to 20,000 bauds.

communication.

5. The standard voltage levels for mark and space make it possible to reduce the interference due to noise.


6. It is cost effective and easy to use.

Although RS-232 is widely used, it has the following limitations:

1. The maximum cable length is restricted to about 50 feet; to increase the distance, the baud rate has to be reduced.

2. The noise interference for the single-ended signals is very high. To improve noise immunity it is necessary to have the mark and space voltages close to ±25 Volts. This is difficult in modern digital systems, which use a 5 Volt supply.

3. The highest baud rate is 20,000 baud for distance less than 50 ft.

This is too slow.

4. It is a point-to-point standard, suitable only for two users connected directly to each other. Ideally, many interconnections should be allowed, and the one best suited to the given application selected.

Several other interface standards have been developed to overcome the limitations of the RS-232 system. These other interfaces are designed to provide superior performance to RS-232 in one or more areas.

A parallel interface transfers data between devices eight or more bits at a time. Since several bits travel simultaneously, the speed of data transfer is increased as compared to serial transmission. Moreover, computer terminals and peripheral equipment process data internally in parallel, so no conversion of data from serial to parallel form, or vice versa, is needed. Parallel interfaces are used when the devices are in close proximity to each other. A common example is the Centronics interface between a computer and a printer.

Figure: Centronics interface between computer and printer, showing the data circuits and the STB, ACK, BUSY, PO, SLCT, AF, PRIME and ERROR lines.

Data lines

These are eight parallel data lines d0 to d7. All of them are unidirectional lines taking data from the computer to the printer. The data can be in the form of seven-bit ASCII (with the eighth bit reserved for parity), or in the form of eight-bit extended ASCII or EBCDIC code.

Control lines

The control lines carry control information from the computer to the printer. The control lines are:

1. Strobe (STB)

2. Autofeed (AF)

3. Prime (PRIME)

4. Select in (SLCTIN)

1. Strobe (STB)

This is an active low pulse used by the computer to direct the printer to accept data.

2. Autofeed (AF)

This line controls whether the printer performs an automatic line feed function after receiving a carriage return character from the computer. If AF is low (active), the printer responds to the carriage return character by performing both a carriage return and a line feed.

3. Prime (PRIME)

This line is also called the initialize line. It is an active low line used by the computer to clear the printer’s memory, which includes the printer programming and print buffer, and to reset the printer to its original state.

4. Select in (SLCTIN)

When used, a low signal on this line should be seen by the printer before it accepts data from the computer.


Status lines

The status lines carry status information from the printer to the computer. Via these lines, the printer tells the computer what it is doing.

1. Acknowledge (ACK)

The printer activates this line after it receives an active STB signal from the computer. It tells the computer that the printer is ready to receive another character.

2. Busy (BUSY)

This is an active high line which shows that the printer is busy and cannot accept data from the computer. The conditions that make a printer busy include an error condition and a print buffer that is full.

3. Paper out (PO)

This is an active high line indicating that the printer is out of paper. When PO is activated high, the ERROR line is also activated low.

4. Select (SLCT)

This is an active high line which indicates whether the printer is selected or not.

5. Error (ERROR)

This is an active low line which indicates a printer problem. It is activated under various error conditions.

Introduction

In pulse modulation, the carrier is a train of pulses rather than being a sine wave.

Definition

In pulse modulation, some parameter of a pulse train is varied in accordance with the message signal.

The pulse modulation family includes:

PTM - Pulse Time Modulation
PWM - Pulse Width Modulation
PPM - Pulse Position Modulation
DPCM - Differential Pulse Code Modulation
DM - Delta Modulation
ADM - Adaptive Delta Modulation

In analog pulse modulation, a periodic pulse train is used as the carrier wave, and some characteristic of each pulse (e.g., amplitude, duration or position) is varied in a continuous manner in accordance with the corresponding sample value of the message signal. Transmission therefore takes place at discrete times.

In digital pulse modulation, the message signal is represented in a form that is discrete in both time and amplitude, thereby permitting its transmission in digital form as a sequence of coded pulses.

Sampling converts a continuous-time signal into an equivalent discrete-time signal, i.e., a sequence of samples that are usually spaced uniformly in time. Sampling is the basic operation of digital communication.


In the sampling circuit, the message signal is applied to one input of a multiplier. The other input to the multiplier is a train of pulses, called the sampling signal, with a period of ‘Ts’ seconds. The time ‘Ts’ is called the sampling time (or sampling period), and its reciprocal fs = 1/Ts is called the sampling frequency (or sampling rate). This ideal form of sampling is called instantaneous sampling.


Sampling Theorem

A band-limited continuous signal can be completely recovered from its samples if the sampling frequency is at least twice the highest frequency content of the signal,

(i.e.) fs ≥ 2 fm

Where,

fs = Sampling frequency
fm = Highest frequency of the message signal

Nyquist rate

The minimum sampling rate fs = 2 fm samples/sec for a signal of bandwidth fm is called the ‘Nyquist rate’. Sampling at this rate is sufficient to represent the continuous signal faithfully in its sampled form:

fs = 2 fm samples/sec

Nyquist Interval

The maximum time interval between samples when the sampling rate is the Nyquist rate is

Ts = 1/(2 fm) sec

Aliasing

When the signal is sampled at a rate lower than the Nyquist rate, a distortion called “aliasing” is introduced in the spectrum of the sampled signal. The aliasing effect is clearly shown in Figure 3.49.
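The effect can be demonstrated numerically: a 5 kHz tone sampled at 8 kHz (below its Nyquist rate of 10 kHz) produces exactly the same sample values as a 3 kHz alias. This is an illustrative sketch using only the standard library:

```python
import math

fm = 5000.0           # signal frequency (Hz)
fs = 8000.0           # sampling rate (Hz) -- below the Nyquist rate 2*fm
alias = abs(fs - fm)  # the folded (alias) frequency, here 3000 Hz

# Samples of the 5 kHz tone and of a 3 kHz tone taken at fs
# are indistinguishable:
for n in range(8):
    t = n / fs
    s_orig = math.cos(2 * math.pi * fm * t)
    s_alias = math.cos(2 * math.pi * alias * t)
    assert abs(s_orig - s_alias) < 1e-9

print(alias)  # 3000.0
```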


Definition

In pulse amplitude modulation (PAM), the amplitude of the pulses is varied in accordance with the instantaneous amplitude of the modulating signal.

Types of PAM

Depending upon the shape of the pulses, there are two types of PAM:

1. Natural PAM

2. Flat-top PAM


i. Flat-top PAM is the most popular and widely used form. The reason is that during transmission, noise interferes with the tops of the transmitted pulses, and this noise can easily be removed if the PAM pulse has a flat top.

ii. In flat-top PAM, the amplitude detection of the received pulse is exact, because the noise or distortion is removed easily.

iii. Reconstruction of natural PAM is somewhat complicated because the pulse top shape has to be maintained. These complications are reduced by flat-top PAM.

The generation of natural PAM and flat-top PAM is shown in Figures 3.50 and 3.51 respectively. The modulating signal x(t) is passed through a LPF which band-limits the signal to fm. Band limiting is necessary to avoid the “aliasing effect” in the sampling process.


The pulse train generator generates a train of pulses with frequency fs such that fs ≥ 2fm; thus the Nyquist criterion is satisfied. The multiplier multiplies the band-limited signal with the pulse train to generate the PAM signal.
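The multiplication of the band-limited signal by a pulse train can be sketched directly. The function name and the rectangular pulse model are illustrative assumptions:

```python
import math

def natural_pam(signal, points_per_period, duty=0.25):
    """Natural PAM: multiply the message samples by a rectangular pulse
    train.  Within each sampling period, the first `duty` fraction of
    points is gated through (pulse high); the rest are zeroed."""
    high = max(1, int(points_per_period * duty))
    return [x if (n % points_per_period) < high else 0.0
            for n, x in enumerate(signal)]

# A 1 kHz sine on a dense grid, gated by a pulse train:
dense = [math.sin(2 * math.pi * 1000 * n / 100000) for n in range(100)]
pam = natural_pam(dense, points_per_period=20, duty=0.25)
# During the pulse the output follows the signal; between pulses it is zero.
```

Because the output follows the signal shape during each pulse, this models natural (not flat-top) PAM.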

Flat-top PAM is generated using a sample and hold circuit, as shown in Figure 3.52.

A sample and hold circuit consists of two field effect transistor (FET) switches and a capacitor. The analog signal x(t) is applied to the input terminal, and the sample and discharge pulses are connected to the gate terminals G1 and G2 of the two FET (charging and discharging) switches.

The sampling switch is closed for a short duration by a short pulse applied to gate G1 of the transistor. During this period, the capacitor ‘C’ quickly charges up to a voltage equal to the instantaneous sample value of the incoming signal x(t).

Now the sampling switch opens and capacitor ‘C’ holds the charge. The discharge switch is then closed by a pulse applied to gate G2. Due to this, capacitor ‘C’ is discharged to zero volts. The discharge switch then opens, leaving the capacitor with no voltage. This process is repeated, and the flat-top PAM signal is generated as shown in Figure 3.53.
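The sample-and-hold behaviour can be mimicked in software with a zero-order hold: take a sample every Ts and hold it flat until the next one. This is an illustrative sketch (the function and signal names are ours):

```python
import math

def flat_top_pam(signal, fs_in, fs_sample):
    """Flat-top (sample-and-hold) PAM: each output value is held at the
    most recent sample of the input, mimicking the capacitor in the
    sample-and-hold circuit.  `signal` holds input values taken at rate
    fs_in; a new sample is taken every fs_in/fs_sample points."""
    step = int(fs_in // fs_sample)   # input points per sampling period Ts
    out = []
    held = 0.0
    for n, x in enumerate(signal):
        if n % step == 0:
            held = x                  # capacitor charges to the sample value
        out.append(held)              # top of the pulse stays flat
    return out

# 1 kHz sine observed on a dense 100 kHz grid, sampled at 8 kHz:
dense = [math.sin(2 * math.pi * 1000 * n / 100000) for n in range(200)]
pam = flat_top_pam(dense, 100000, 8000)
```

Between sampling instants the output stays constant, which is exactly the flat top that makes amplitude detection noise-tolerant.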

Definition

In pulse width modulation (PWM), the width of the pulses is varied in accordance with the instantaneous amplitude of the modulating signal.

3.19.1 PWM-Generator

Figure 3.54 shows the circuit that generates the PWM signal. As the amplitude of the modulating signal varies, the width of the pulses is also varied: when the amplitude is more positive, the width of the pulse is large, and when the amplitude of the modulating signal is more negative, the width of the pulse is narrow.


Amplitude variations due to additive noise do not affect the performance of PWM generation. Thus PWM is more immune to noise than PAM.

In pulse position modulation (PPM), the amplitude and width of the pulses are kept constant, but the position of each pulse is varied in accordance with the amplitude of the sampled values of the modulating signal. The circuit shown in Figure 3.54 is also used to generate the PPM signal.

The PPM signal can be generated from the PWM signal. To generate the PPM signal, the PWM pulses obtained at the output of the comparator are used as the trigger input to a monostable multivibrator. The monostable is triggered on the negative edge of the PWM pulse. The output of the monostable goes high, remains high for a fixed period, and then turns low.

The highest amplitude of the modulating signal produces the largest width of the PWM signal; therefore, the PPM pulse moves to the far right. The lowest amplitude of the modulating signal produces the narrowest PWM signal; therefore, the PPM pulse moves to the far left.
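The relationships just described — larger amplitude, wider PWM pulse, PPM pulse further right — can be modelled with a simple linear mapping. The function names and the width range are illustrative assumptions, not part of any standard circuit:

```python
def pwm_width(sample, s_min, s_max, t_min, t_max):
    """Map a sample amplitude linearly to a pulse width: the most
    positive amplitude gives the widest pulse, the most negative the
    narrowest (a model of the comparator-based generator)."""
    frac = (sample - s_min) / (s_max - s_min)
    return t_min + frac * (t_max - t_min)

def ppm_position(sample, s_min, s_max, t_min, t_max):
    """The PPM pulse fires on the trailing edge of the PWM pulse, so
    its position within the slot equals the PWM width: large amplitudes
    push the pulse right, small amplitudes push it left."""
    return pwm_width(sample, s_min, s_max, t_min, t_max)

# Amplitudes from -1 to +1 mapped onto widths of 10..90 microseconds:
print(pwm_width(+1.0, -1.0, 1.0, 10e-6, 90e-6))  # widest pulse (90 us)
print(pwm_width(-1.0, -1.0, 1.0, 10e-6, 90e-6))  # narrowest pulse (10 us)
```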

PPM is also more immune to noise than PAM.

PAM is used as an intermediate form of modulation with PSK, QAM and PCM, although it is seldom used by itself. PWM and PPM are used in special-purpose communication systems, mainly for the military, but are seldom used for commercial digital transmission systems. PCM is by far the most prevalent form of pulse modulation.

Introduction

PCM is the preferred method of communication within the public switched telephone network, because with PCM it is easy to combine digitized voice and digital data into a single, high-speed digital signal and propagate it over either metallic or optical fibre cables.


PCM is the technique most commonly used for digital transmission. In PCM, the analog signal is sampled and the amplitude of each sample is rounded off to the nearest one of a finite set of allowable values, known as “quantization” levels, so that both time and amplitude are in discrete form. The information contained in the instantaneous samples of the analog signal is represented by digital codes and transmitted as a serial bit stream.

The PCM system described below is a simplex (one-way only) PCM system.

3.21.1 PCM-Transmitter


In the transmitter, the sampling, quantizing and encoding operations are usually performed in the same circuit, which is called an analog-to-digital converter.

The input low-pass filter limits the bandwidth of the message signal to fm, so that a proper sampling rate can be obtained at the PCM transmitter. According to the sampling theorem, the sampling frequency fs must be at least twice the highest frequency component fm of the message signal, i.e., fs ≥ 2fm.

The sample and hold circuit periodically samples the analog input signal and converts those samples to a multilevel PAM signal.

Quantizer

The process of approximating each sampled value to the nearest predefined representation level is called quantization. When the step size is the same throughout the signal range, it is called uniform quantization. Non-uniform quantization is preferred for practical purposes because it provides protection for low-level signals, which are more precious than large-amplitude samples.

Encoder

The process of allocating a digital code to each quantization level is called encoding. The obtained codes are transmitted as a bit stream. For transmission, the Gray code is preferred because it has only a 1-bit change for each step in the quantized levels, which minimizes the error in the reconstructed signal due to a single bit error in the received PCM code word.

If the n-bit codes were transmitted in parallel, we would need n parallel wires for n-bit data, which increases the transmission cost; long-distance parallel communication is also impractical because implementing n wires adds complexity. Hence the parallel bits are passed through a parallel-to-serial converter and serial communication is carried out. This reduces the complexity and also the transmission cost, because only one wire is used (e.g., an RS-232 cable).

Transmission path

The transmission path is the path between the PCM transmitter and PCM receiver over which the PCM signal travels. PCM signals are transmitted over long distances with the help of regenerative repeaters. The three basic functions of a regenerative repeater are:

i. Equalization
ii. Timing
iii. Decision making

To perform well, the repeaters placed on the transmission path must be reasonably close to each other. The equalizer shapes the received pulses so as to compensate for the effects of amplitude and phase distortion produced by the non-ideal transmission characteristics of the channel. The timing circuit derives a periodic pulse train from the received pulses, for sampling the equalized pulses at the instants of time where the signal-to-noise ratio is a maximum. The decision device decides whether the PCM wave at its input has a ‘0’ value or a ‘1’ value at the instant of sampling, and regenerates clean pulses without any trace of noise.

The regeneration circuit at the receiver input separates the PCM pulses from noise and reconstructs the original PCM signal. The reconstructed PCM signal is then passed through a serial-to-parallel converter, which converts it into n-bit parallel binary data applied to a DAC circuit as input.

The DAC performs the opposite of the ADC operation and produces a multilevel output signal. The hold circuit produces the PAM signal corresponding to the analog input. The PAM signal then passes through a LPF to recover the analog signal x(t). This low-pass filter is called the reconstruction filter, and its cut-off frequency is equal to the message signal bandwidth fm.

3.21.4 Sampling

Two basic techniques are used to perform the sampling function:

1. Natural sampling

2. Flat-top sampling.

Natural sampling

In natural sampling, the tops of the sample pulses retain their natural shape during the sampling interval, so a sample is different from an ideal sample. The amplitude of the frequency components produced from narrow, finite-width sample pulses decreases for the higher harmonics in a (sin x)/x manner. This alters the information frequency spectrum, requiring the use of frequency equalizers before recovery by a low-pass filter.


Figure: Natural sampling — (a) input waveform, (b) sample waveform, (c) output waveform.

Flat Top Sampling

Flat-top sampling is sometimes called rectangular sampling.

“In flat-top sampling, top of the samples remains constant and is

equal to the instantaneous value of the analog signal.”

Flat- top sampling is most commonly used for sampling voice signals

in PCM systems.

Flat -top sampling is accomplished in a sample-and-hold circuit.

In Flat-top sampling, the input voltage is sampled with a narrow

pulse and then kept relatively constant until the next sample is tak-

en.

In the flat top sampling, the sampling process alters the frequency

spectrum and introduces an error called aperture error.

When the amplitude of the sampled signal changes during the sample

pulse time, aperture error may occur. This aperture error prevents

the recovery circuit in the PCM receiver from exactly reproducing the

original analog signal voltage.

Flat-top sampling introduces less aperture distortion than natural sampling.

Aperture error is compensated by using equalizers.

Figure 3.59 Flat-top sampling: (a) input waveform, (b) sample waveform, (c) output waveform.

The schematic diagram of a sample-and-hold circuit is shown in figure 3.60.

Figure 3.60 Sample-and-hold circuit: input voltage follower Z1, FET switch Q1 driven by the sampling pulse, hold capacitor C1, and output voltage follower Z2 producing the PAM output.

The FET acts as a simple analog switch. When Q1 is turned ON, it offers a low-impedance path to deposit the analog sample voltage across capacitor C1. The time during which Q1 is ON is called the aperture time or acquisition time.


When Q1 is OFF, the capacitor does not have a complete path to discharge through, and hence it stores the sampled voltage.

The storage time of the capacitor is known as the A/D conversion

time because, during this time that the ADC converts the sample

voltage to a PCM code.

The acquisition time should be very short to ensure that a

minimum change occurs in the analog signal while it is being depos-

ited across C1.

If the input to the ADC is changing while it is performing the conversion, aperture distortion results.

Aperture distortion is reduced by sample and hold circuit by having

a short aperture time and keeping the input to the ADC relatively

constant.

Figure below shows the input analog signal, the sampling pulse, and

the waveform developed across C1.

Figure 3.61 Input and output waveforms of the sample-and-hold circuit: during the aperture time (Q1 on) the capacitor charges to the sample value; during the conversion time (Q1 off) the held voltage droops slightly; the capacitor then discharges before the next sample pulse.

It is important that the output impedance of voltage follower Z1, and the

resistance of Q1 be as small as possible. So that the RC charging time

constant of the capacitor is kept very short, allowing the capacitor to

charge or discharge rapidly during the short acquisition time.

DATA COMMUNICATION

The inter- electrode capacitance between the gate and drain of the

FET is placed in series with C1 when the FET is OFF, so that it acts as a capacitive voltage-divider network.

The gradual discharge across the capacitor during the

conversion time is called droop and is caused by the capacitor

discharging through its own leakage resistance and the input impedance of voltage follower Z2. Hence, the input impedance of Z2 and the leakage resistance of C1 should be as high as possible.

The voltage followers Z1 and Z2 isolate the sample and-hold circuit

from the input and output circuitry.

Sampling Rate:

The Nyquist sampling theorem establishes the minimum sampling rate (fs) that can be used for a given PCM system.

For a sample to be recovered accurately in a PCM receiver, each

cycle of the analog input signal (fa) must be sampled at least twice.

Consequently, the minimum sampling rate is equal to twice the

highest audio input frequency.

If fs is less than two times fa, an impairment called alias or foldover distortion occurs. The minimum Nyquist sampling rate is expressed

mathematically as,

fs ≥ 2fa

Where , fs - minimum Nyquist sample rate (Hz).

fa - maximum analog input frequency (Hz)

Figure 3.62 Output spectrum of a sample-and-hold circuit: the audio component fa, the sampling harmonics fs, 2fs, 3fs, and the sum and difference components fs ± fa, 2fs ± fa, 3fs ± fa along the frequency axis.

Figure 3.63 Output spectrum when fa exceeds fs/2: the side frequencies fs − fa, fs + fa, 2fs − fa, … fold over into the sidebands of adjacent harmonics.

A sample-and-hold circuit is a nonlinear device (mixer) which has

two inputs: one is the sampling pulse and another one is analog

input signal. Hence, nonlinear mixing (heterodyning) occurs between

these two signals.

Figure 3.62 shows the frequency-domain representation of the

output spectrum from a sample-and-hold circuit. The output in-

cludes the two original inputs (the audio and the fundamental fre-

quency of the sampling pulse), their sum and difference frequencies

(fs ± fa), all the harmonics of fs and fa (2fs, 2fa, 3fs, 3fa and so on), and

their associated cross products (2fs ± fa, 3fs ± fa and so on).

The sampling pulse is made up of a series of harmonically relat-

ed sine waves. Each of these sine waves is amplitude modulated

by the analog signal and generates sum and difference frequencies

symmetrical around each of the harmonics of fs.

Each sum and difference frequency generated is separated from its respective center frequency by fa. As long as fs is at least twice fa, none of the side frequencies from one harmonic will spill into the sidebands of another harmonic, and the aliasing effect does not occur.

Figure 3.63 shows the results when an analog input frequency

greater than fs/2 modulates fs. The side frequencies from one

harmonic fold over into the sideband of another harmonic. The

frequency which folds over is an alias of the input signal (hence

the names “aliasing” or “foldover distortion”), it cannot be removed

through filtering or any other technique.

For this reason, the input to the sampler is band-limited by an antialiasing or anti-foldover filter. Its upper cutoff frequency is chosen such that no frequency higher than one-half the sampling rate is allowed to enter the sample-and-hold circuit, hence preventing foldover distortion from occurring.

Each quantized sample is converted into a binary code. The binary code is transmitted to the receiver, where it is converted back to the original analog signal. The binary codes used for PCM are n-bit code words, where n may be any positive integer greater than 1. The codes currently used for PCM are sign-magnitude codes, where the most significant bit (MSB) is the sign bit and the remaining bits are used for magnitude.

3.21.5 Resolution

The resolution is equal to the voltage of the minimum step size, which is equal to the voltage of the least significant bit (Vlsb) of the PCM code. For the 3-bit code of Figure 3.64, the resolution is 1 V. The smaller the magnitude of a quantum, the better (smaller) the resolution, and the more accurately the quantized signal will resemble the original analog sample.


PCM code   Voltage
111        +3 V
110        +2 V
101        +1 V
100         0 V
000         0 V
001        -1 V
010        -2 V
011        -3 V

Figure 3.64 (a) Analog input signal, (b) Sample pulse, (c) PAM signal, (d) PCM code. The samples at t1, t2 and t3 (+2 V, -1 V and +2.6 V) give the codes 110, 001 and 111.

As figure 3.64 shows, each sample voltage is rounded off (quantized) to the closest available level and then converted to its corresponding PCM code. The PAM signal regenerated in the receiver is the quantized PAM signal from the transmitter. Therefore, any round-off errors in the transmitted signal are reproduced when the code is converted back to analog in the receiver. This error is called the quantization error (Qe).

The quantization error is equivalent to additive white noise as it al-

ters the signal amplitude. This quantization error is also called quan-

tization noise (Qn).

In Figure 3.64 above, the first sample occurs at time t1, when the input voltage is exactly +2 V. The PCM code that corresponds to +2 V is 110, and there is no quantization error. Sample 2 occurs at time t2, when the input voltage is -1 V. The PCM code that corresponds to -1 V is 001, and again there is no quantization error.

To determine the PCM code for a particular sample voltage, simply divide the sample voltage by the resolution, convert the quotient to an n-bit binary code, and then add the sign bit. In Figure 3.64 above, for sample 3 the voltage at t3 is approximately +2.6 V. The folded PCM magnitude is

sample voltage / resolution = 2.6 / 1 = 2.6

There is no PCM code for +2.6; therefore, the magnitude of the sample is rounded off to the nearest valid level, which is +3 V, or 111. The rounding-off process results in a quantization error of 0.4 V.
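The divide-round-and-add-sign-bit procedure can be written out directly. The sketch below follows the 3-bit sign-magnitude convention of Figure 3.64 (sign bit 1 = positive); the function name is ours:

```python
def pcm_encode(sample, resolution, n_bits):
    """Sign-magnitude PCM encoding: divide the sample by the resolution,
    round to the nearest level, clamp to the top code, and prepend a
    sign bit (1 = positive).  Returns (code_string, quantization_error)."""
    max_level = 2 ** (n_bits - 1) - 1            # magnitude bits only
    level = round(abs(sample) / resolution)
    level = min(level, max_level)                # clip to the top code
    sign = "1" if sample >= 0 else "0"
    code = sign + format(level, "0{}b".format(n_bits - 1))
    quantized = (level if sample >= 0 else -level) * resolution
    return code, quantized - sample

# The +2.6 V sample of Figure 3.64 with 1 V resolution and 3-bit codes:
code, qe = pcm_encode(2.6, 1.0, 3)
print(code, round(qe, 1))  # 111 0.4  (rounded up by 0.4 V)
```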

3.21.6 Quantization

Quantization is the process of limiting an infinite number of amplitude possibilities to a finite number of conditions. Analog signals contain an infinite number of amplitude possibilities; thus, converting an analog signal to a PCM code with a limited number of combinations requires quantization.

Figure 3.65 below shows the transfer characteristic for a linear analog-to-digital converter, which is also called a linear quantizer. For a linear analog input signal (i.e., a ramp), the quantized signal is a staircase function; thus the maximum quantization error is the same for any magnitude of input signal, as shown in the figure.


The step size of the staircase signal is constant for linear Quantizer.

Figure 3.65 Linear quantizer characteristic: the analog input, the staircase quantized signal, and the quantization error. The maximum positive and negative quantizing error is Qe = 1/2 LSB.

Classifications of the quantization process

Quantization is classified into two types:

(i) Uniform Quantization

(ii) Non-Uniform Quantization.


Uniform Quantization

In uniform Quantization the ‘step size’ is same (or) constant

throughout the input range.

There are two types of uniform Quantizer

1. Symmetric Quantizer of midtread type

2. Symmetric Quantizer of midrise type.

Midtread Quantizer

In midtread Quantizer, the origin lies in the middle of a tread of stair

case like graph.

The figure 3.66 below shows the corresponding input-output

characteristics of a uniform Quantizer midtread type.

Midrise Quantizer

In midrise Quantizer, the origin lies in the middle of a rising part of

the stair case like graph.

The figure 3.66 below shows the corresponding input-output

characteristics of a uniform Quantizer midrise type.

Non-Uniform Quantization:

In non-uniform Quantization, a Quantizer characteristic is

non-linear.

If the step size is not constant but variable, dependent on the amplitude of the input signal, then the quantization is known as non-uniform quantization.

The step size is varied according to signal level to keep signal to noise

ratio high.
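The two uniform quantizer shapes differ only in where the staircase levels sit relative to the origin. A minimal sketch (function names are ours):

```python
import math

def midrise_quantize(x, step):
    """Uniform midrise quantizer: output levels sit at odd multiples of
    step/2, so the origin falls in the middle of a rising edge of the
    staircase.  Maximum quantization error is step/2 (before overload)."""
    return (math.floor(x / step) + 0.5) * step

def midtread_quantize(x, step):
    """Uniform midtread quantizer: output levels are integer multiples
    of the step, so the origin lies in the middle of a flat tread."""
    return round(x / step) * step

print(midrise_quantize(0.2, 1.0))   # 0.5  (nearest midrise level)
print(midtread_quantize(0.2, 1.0))  # 0.0  (nearest midtread level)
```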

Figure 3.66 Input-output characteristics of uniform quantizers: (a) midtread type, with output levels Xq(nTs) at 0, ±δ, ±2δ, …, decision levels at odd multiples of δ/2, maximum quantization error δ/2, and overload beyond the outermost levels — the lower trace shows the quantization error ε bounded by ±δ/2; (b) midrise type (L = 9 levels shown), with output levels at ±a/2, ±3a/2, … and the origin in the middle of a rising edge of the staircase.

Dynamic Range (DR)

DR is the ratio of the largest possible magnitude to the smallest possible magnitude that can be decoded by the digital-to-analog converter in the receiver.

Dynamic range = Vmax / Vmin

Where,

Vmin = the quantum value (resolution)
Vmax = the maximum voltage magnitude

In a 3-bit PCM code scheme, when the input signal is at its minimum amplitude (101 = +1 or 001 = -1), the worst possible signal voltage to quantization noise voltage ratio occurs. The maximum quantization noise is the resolution divided by 2, so

(SQR)min = resolution / Qe = 1 / 0.5 = 2

For the maximum-amplitude input signal of 3 V (either 111 or 011), the maximum quantization noise is also equal to the resolution divided by 2. Hence the SQR for a maximum-amplitude input signal is

(SQR)max = 3 / 0.5 = 6

or, in dB, 20 log 6 = 15.6 dB

From a practical standpoint, the SQR is better expressed as the ratio of average signal power to average noise power. For linear PCM codes, the signal power to quantizing noise power ratio (sometimes called the signal-to-distortion ratio or signal-to-noise ratio) is

SQR(dB) = 10 log [ (v^2 / R) / ((q^2 / 12) / R) ]

where,

R = resistance (ohms)
v = rms signal voltage (volts)
q = quantization interval (volts)
v^2 / R = average signal power (watts)
(q^2 / 12) / R = average quantization noise power (watts)

If the resistances are equal,

SQR(dB) = 10 log (v^2 / (q^2 / 12)) = 10 log 12 + 20 log (v/q) = 10.8 + 20 log (v/q)

since 10 log 12 = 10 × 1.079 = 10.8 dB.
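The power-ratio formula can be evaluated directly; the helper below is an illustrative sketch and reproduces the simplified 10.8 + 20 log(v/q) form:

```python
import math

def sqr_db(v_rms, q):
    """Signal-to-quantization-noise ratio for linear PCM with equal
    resistances: SQR(dB) = 10*log10( v^2 / (q^2/12) )."""
    return 10 * math.log10(12 * v_rms ** 2 / q ** 2)

# Consistent with the simplified form 10.8 + 20*log10(v/q):
v, q = 1.0, 0.01
simplified = 10 * math.log10(12) + 20 * math.log10(v / q)
assert abs(sqr_db(v, q) - simplified) < 1e-9
print(round(sqr_db(v, q), 1))  # 50.8
```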

Solved problems

1. For a PCM system with a maximum audio input frequency of 4 kHz, determine the minimum sample rate and the alias frequency produced if a 5 kHz audio signal were allowed to enter the sample-and-hold circuit.

Solution

Given that fa = 4 kHz; audio signal frequency = 5 kHz

Using the Nyquist sampling theorem,

sampling rate fs ≥ 2 fa; since fa = 4 kHz, fs = 8 kHz

Alias frequency = fs − (audio signal frequency entering the sample-and-hold circuit)
= 8 kHz − 5 kHz
= 3 kHz

2. For a PCM system with a resolution of 1 V and an analog sample voltage of +1.8 V, determine the quantized voltage and the quantization error (Qe).

Solution

Given that analog sample voltage = 1.8 V; resolution = 1 V

Quantization level = analog sample voltage / resolution
= 1.8 V / 1 V = 1.8
= 2 (rounded off), i.e., a quantized voltage of 2 V

Quantization error, Qe = (quantized level) − (original sample voltage)
Qe = 2 − 1.8 = 0.2 V

3. For a PCM system with the following parameters, determine (a) minimum sampling rate, (b) minimum number of bits used in the PCM code, (c) resolution, (d) quantization error, and (e) coding efficiency.

Maximum audio input frequency = 4 kHz
Maximum decoded voltage at the receiver = ±2.55 V
Minimum dynamic range = 46 dB

Solution

Given that: fa = 4 kHz; DR = 46 dB; Vmax = 2.55 V

(a) Minimum sample rate:

fs ≥ 2 fa
fs = 2 (4 kHz) = 8 kHz

(b) Dynamic range:

DR = 20 log (Vmax / Vmin)
46 dB = 20 log (Vmax / Vmin)
2.3 = log (Vmax / Vmin)
10^2.3 = Vmax / Vmin
DR = 199.5

We know that, for the minimum number of bits, 2^n − 1 ≥ DR. Then the minimum number of bits (n) can be calculated as

n = log(199.5 + 1) / log 2 = 7.63

n = 8

Therefore 8 bits must be used for the magnitude; including the sign bit, a total of 9 bits is required.

(c) Resolution:

Resolution = Vmax / (2^n − 1) = 2.55 / (2^8 − 1) = 2.55 / 255 = 0.01 V

(d) Maximum quantization error:

Qe = Resolution / 2 = 0.01 V / 2 = 0.005 V

(e) Coding efficiency:

coding efficiency = (minimum number of bits, including sign bit) / (actual number of bits, including sign bit) × 100

= (7.63 + 1) / (8 + 1) × 100 = 8.63 / 9 × 100 = 95.89%
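The steps of this solution can be checked in a few lines (using 2^n − 1 ≥ DR; small differences from the hand-rounded values above are expected):

```python
import math

v_max = 2.55            # maximum decoded voltage (V)
dr_db = 46.0            # required dynamic range (dB)

dr = 10 ** (dr_db / 20)             # voltage ratio, ~199.5
n_exact = math.log2(dr + 1)         # from 2**n - 1 >= DR
n_bits = math.ceil(n_exact)         # magnitude bits -> 8
resolution = v_max / (2 ** n_bits - 1)
q_error = resolution / 2
efficiency = (n_exact + 1) / (n_bits + 1) * 100  # sign bit included

assert n_bits == 8
print(round(resolution, 3), round(q_error, 4), round(efficiency, 1))
```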

In typical speech signals there is often little difference between the amplitudes of two successive samples. This necessitates transmitting several identical PCM codes, which is redundant. Differential PCM (DPCM) was introduced to exploit this redundancy: in DPCM, the difference in the amplitude of two successive samples is transmitted rather than the actual sample.

Transmitter

The analog input signal is first band-limited to avoid the aliasing effect and then applied to one input of a differentiator (or subtractor). The previous sampled signal value, obtained from the integrator, is applied as the other input to the subtractor. Initially the integrator output is assumed to be zero.


The difference signal at the subtractor output is analog; it is PCM encoded and transmitted. The sampler produces the PAM signal and the analog-to-digital converter produces the parallel binary bits. These parallel binary bits are converted into a serial PCM stream and then transmitted through the channel.

The number of bits required is less when compared to PCM, because only the difference is transmitted instead of the actual samples.

The transmitted difference is also used locally to predict the next sample: it is applied as input to a binary adder, converted to analog by a DAC, and then passed through the integrator.

ANALOG AND DIGITAL COMMUNICATION

The integrator output, which is staircase-like in nature, is fed back as the other input of the subtractor. The same process is repeated until all the samples are transmitted.

DPCM receiver

At the receiver, the serial PCM stream is passed through a serial-to-parallel converter and a digital-to-analog converter. Initially the hold circuit output is zero; the adder circuit adds the current sample value to the previous value and produces the analog output. Each received sample is converted back to analog, stored, and then summed with the next sample received; the adder output is thus the original analog output.
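The transmitter/receiver loop above can be sketched as a toy DPCM codec. The quantizer here is a plain rounding to an assumed step size; the names are ours, and the sketch omits the PCM bit-encoding of the differences:

```python
def dpcm_encode(samples, step):
    """Toy DPCM encoder: quantize the difference between each sample and
    the predicted (previously reconstructed) value to an integer number
    of steps.  Returns the list of transmitted difference codes."""
    codes, predicted = [], 0.0          # integrator starts at zero
    for x in samples:
        d = round((x - predicted) / step)
        codes.append(d)
        predicted += d * step           # same accumulation the receiver does
    return codes

def dpcm_decode(codes, step):
    """Receiver: accumulate the decoded differences (adder + integrator)."""
    out, acc = [], 0.0
    for d in codes:
        acc += d * step
        out.append(acc)
    return out

samples = [0.0, 0.4, 1.1, 1.9, 2.2, 2.0]
codes = dpcm_encode(samples, 0.5)
recon = dpcm_decode(codes, 0.5)
# Reconstruction error never exceeds half a step:
assert all(abs(a - b) <= 0.25 for a, b in zip(samples, recon))
```

Because the encoder predicts with the same accumulator the receiver uses, quantization errors do not pile up from sample to sample.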


PCM and DPCM use multiple-bit codes for the transmission of analog signals: a code must convey both the sign and the magnitude of a particular sample. Therefore, multiple-bit codes are required to represent the many values that the sample can take.

With delta modulation, rather than transmitting a coded representation of the sample, only a single bit is transmitted, which simply indicates whether that sample is larger or smaller than the previous sample. If the current sample is smaller than the previous sample, a logic 0 is transmitted; if the current sample is larger than the previous sample, a logic 1 is transmitted.

Transmitter

The input analog signal is sampled and converted to a PAM signal, which is compared with the output of the DAC. The output of the DAC is a voltage equal to the regenerated magnitude of the previous sample, which was stored in the up-down counter as a binary number.


The output of the comparator indicates whether the previous sample is larger or smaller than the current sample. Therefore, the up-down counter is updated after each comparison.

• Initially, the up-down counter is zero and the DAC outputs 0 V. The first sample is taken, converted to a PAM signal, and compared with zero volts.

• The comparator output goes to a logic 1, indicating that the current sample is larger in amplitude than the previous sample. On the next clock pulse, the up-down counter is incremented to a count of 1. The DAC now outputs a voltage equal to the magnitude of the minimum step size.

• The steps change value at a rate equal to the clock frequency (sample

rate)

• The up-down counter continues counting up until the DAC output exceeds the analog sample; then the up-down counter begins counting down until the output of the DAC drops below the sample amplitude.

In the idealized situation, the DAC output follows the input signal.

Each time the up-down counter is incremented, a logic 1 is transmitted, and each time the up-down counter is decremented, a logic 0 is transmitted.
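The comparator and up-down-counter behaviour described above can be sketched in a few lines of Python. This is only an illustrative model, not the book's circuit: the function names, the step size and the test ramp are assumptions, and the final low-pass filter is omitted.

```python
def delta_modulate(samples, step=1.0):
    """Linear delta modulator: transmit 1 when the current sample exceeds
    the DAC estimate (counter counts up), otherwise transmit 0."""
    counter = 0                           # up-down counter, initially zero (DAC at 0 V)
    bits, estimate = [], []
    for x in samples:
        if x > counter * step:            # comparator: sample vs regenerated previous value
            bits.append(1)
            counter += 1                  # count up -> logic 1 transmitted
        else:
            bits.append(0)
            counter -= 1                  # count down -> logic 0 transmitted
        estimate.append(counter * step)   # DAC output after the update
    return bits, estimate

def delta_demodulate(bits, step=1.0):
    """Receiver: an identical counter/DAC driven by the received bits."""
    counter, out = 0, []
    for b in bits:
        counter += 1 if b else -1
        out.append(counter * step)
    return out

# A slowly rising ramp: the staircase estimate tracks it to within about one step.
xs = [0.3 * n for n in range(20)]
bits, est = delta_modulate(xs, step=0.5)
rx = delta_demodulate(bits, step=0.5)
```

Because transmitter and receiver run the same counter, `rx` reproduces the transmitter's staircase exactly; in a real system a low-pass filter would then smooth it back to the analog signal.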


Receiver

The receiver is almost identical to the transmitter except for the comparator. As the logic 1s and 0s are received, the up-down counter is incremented or decremented accordingly. Consequently, the output of the DAC in the decoder is identical to the output of the DAC in the transmitter. The output of the LPF is the original transmitted signal.

Delta modulation uses only one bit per sample; therefore, the bit rates associated with delta modulation are lower than in conventional PCM systems.

However, delta modulation suffers from two types of noise that do not occur with conventional PCM.

Slope overload distortion occurs when the step size (d) is too small compared to large variations in the input signal. Figure 3.72 below shows the distortion in delta modulation.

• Let x(t) be the analog signal and x'(t) the approximated (staircase) signal.


• Due to the small step size (d), the slope of the approximated signal x'(t) will be small.

The slope of x'(t) is d/Ts = d·fs

• If the slope of the analog signal x(t) is much higher than that of x'(t) over a long duration, then x'(t) will not be able to follow x(t) at all.

• The difference between x(t) and x’(t) is called as the slope overload

distortion.

• Thus the slope overload error occurs when slope of x(t) is much larger

than slope of x’(t).

• Slope overload error can be reduced by,

i. Increasing the step size of ‘d’ or increasing slope of the

approximated signal x’(t).

ii. Increasing the sampling frequency ‘fs’

However, with an increase in 'd' the granular noise increases, and if fs is increased, the signalling rate and bandwidth requirement are increased.

One of the best ways to reduce slope overload error is to detect the overload condition and increase the step size when overloading is detected.
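The overload condition above (signal slope exceeding d·fs) can be checked numerically. The sketch below uses assumed example values; the fact that a sinusoid A·sin(2πfm·t) has maximum slope 2π·fm·A is standard calculus, while the function name and the numbers are mine.

```python
import math

def slope_overload(max_signal_slope, d, fs):
    """Slope overload occurs when the modulator's maximum slope d*fs
    (step size d times sampling frequency fs, as in the text) cannot
    keep up with the input signal's maximum slope."""
    return max_signal_slope > d * fs

# For a sinusoid A*sin(2*pi*fm*t) the maximum slope is 2*pi*fm*A, so
# avoiding overload requires d*fs >= 2*pi*fm*A.
A, fm = 1.0, 1000.0           # 1 V, 1 kHz tone (assumed values)
fs = 64000.0                  # 64 kHz sampling
print(slope_overload(2 * math.pi * fm * A, 0.05, fs))   # True: a 50 mV step is too small
print(slope_overload(2 * math.pi * fm * A, 0.10, fs))   # False: doubling d removes the overload
```

This mirrors the remedies listed above: either increase d or increase fs until d·fs exceeds the worst-case signal slope.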


Granular noise

• Granular noise occurs when the step size (d) is too large compared to

small variations in the input signal. That is for very small variations

in the input signal, the staircase signal is changed by large amount

‘d’ because of large step size.

• When the input signal is almost flat, the staircase signal x'(t) keeps on oscillating by ±d around the signal. This error between the input and the approximated signal is called granular noise. To reduce the granular noise, the step size should be as small as possible.

¾¾ In the linear delta modulator the step size 'd' is not variable; if it is made variable, then both the slope overload distortion and the granular noise can be controlled.

¾¾ A delta modulator in which the step size is varied as per the level of the input signal is known as an "adaptive delta modulator".

¾¾ There are two types of scheme used for adjusting the step size: in one type a discrete set of values is provided for the step size, whereas in the other type a continuous range of step size variation is provided.


Figure shows the block diagram of the adaptive delta modulator. If you compare this block diagram with that of the linear delta modulator, you will find that, except for the counter being replaced by the digital processor, the remaining blocks are identical.

Operation

• In response to the kth clock pulse, the processor generates a step which is equal in magnitude to the step generated in response to the previous, (k-1)th, clock pulse.

• If the direction of both steps is the same, then the processor will increase the magnitude of the present step by 'd'. If the directions are opposite, then the processor will decrease the magnitude of the present step by 'd'.


For example, if two successive steps are in the same direction, the step size grows to d(1) + d(1) = 2d.

Thus both the slope overload and granular noise problems are reduced. Hence the ADM system has a lower bit rate than the PCM system; therefore, the bandwidth required is also less than that of a comparable PCM system.


In the receiver, the received bits are applied to the step size control logic and then to the DM receiver. For a logic 1 the step size is incremented by 'd'; otherwise, for a logic 0, the step size is decremented by 'd'. The corresponding analog output is generated by the DAC.

• At the output of low pass filter we get the original signal back.
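The step-adaptation rule just described (same direction: grow by d; reversal: shrink by d) can be sketched as follows. This is an assumed minimal model, not the book's processor: the function name, the test ramp, and the floor of one basic step d (to keep the step positive) are my choices.

```python
def adm_encode(samples, d=0.5):
    """Adaptive delta modulation sketch: the step grows by d while the
    output bits keep the same direction, and shrinks by d on a reversal
    (never below the basic step d)."""
    bits, ests = [], []
    estimate, step, prev_bit = 0.0, d, None
    for x in samples:
        bit = 1 if x > estimate else 0
        if prev_bit is not None and bit == prev_bit:
            step += d                    # same direction as previous step: grow
        else:
            step = max(d, step - d)      # reversal (or first bit): shrink
        estimate += step if bit else -step
        bits.append(bit)
        ests.append(estimate)
        prev_bit = bit
    return bits, ests

# A ramp steeper than the basic step: the growing step lets the estimate
# catch up where a fixed-step delta modulator would slope-overload.
bits, ests = adm_encode([float(n) for n in range(10)], d=0.5)
```

Note how a run of identical bits makes the step accumulate (d, 2d, 3d, ...), which is exactly the 2d example given above.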

3.25 Comparison of various pulse communication systems

| S.No | Parameter | Pulse Amplitude Modulation (PAM) | Pulse Width Modulation (PWM) | Pulse Position Modulation (PPM) |
|------|-----------|----------------------------------|------------------------------|---------------------------------|
| 1 | Variable characteristic of carrier pulse | Amplitude of the carrier pulse is varied by the modulating voltage | Width of the carrier pulse is varied by the modulating voltage | Position of the carrier pulse is varied by the modulating voltage |
| 2 | Bandwidth | Depends on the width of the pulse: BW ≥ 1/2t (t = width of the pulse) | Depends on the rise time of the pulse: BW ≥ 1/2tr (tr = rise time of the pulse) | Depends on the rise time of the pulse: BW ≥ 1/2tr (tr = rise time of the pulse) |
| 3 | Noise interference | Maximum | Minimum | Minimum |
| 4 | Information is contained in | Amplitude variations | Width variations | Position variations |
| 5 | Necessity of synchronization pulse | Not necessary | Not necessary | Necessary |
| 6 | Complexity of generation and detection | Complex | Simple | Simple |
| 7 | Transmitted power | Varies with amplitude of the pulse | Varies with width of the pulse | Constant |
| 8 | Similarity with other modulation system | Amplitude modulation | Frequency modulation | Phase modulation |
| 9 | Output waveform | | | |


3.26 Comparison of Source Coding methods

| S.No | Parameter | PCM | Delta Modulation (DM) | Adaptive Delta Modulation (ADM) | Differential PCM (DPCM) |
|------|-----------|-----|-----------------------|---------------------------------|-------------------------|
| 1 | Number of bits | Can use 4, 8 or 16 bits per sample | Only one bit per sample | Only one bit is used to encode one sample | More than one bit, but fewer than PCM |
| 2 | Levels, step size | The number of levels depends on the number of bits | Step size is fixed and cannot be varied | Step size varies (adapts) according to the signal variation | Fixed number of levels; step size is fixed |
| 3 | Quantization error and distortion | Quantization error depends on the number of levels used | Slope overload distortion and granular noise are present | Quantization error is present, but other errors are absent | Slope overload distortion and quantization noise are present |
| 4 | Bandwidth of transmission channel | Highest bandwidth is required, since the number of bits is high | Lowest bandwidth is required | Lowest bandwidth is required | Bandwidth required is lower than PCM |
| 5 | Feedback | No feedback in transmitter or receiver | Feedback exists in transmitter | Feedback exists | Feedback exists |
| 6 | Complexity of implementation | System is complex | Simple | Simple | Simple |
| 7 | Signal to noise ratio | Good | Poor | Better than DM | Fair |
| 8 | Areas of application | Audio and video telephony | Speech and images | Speech and images | Speech and video |



PART - A

1. What is Pulse Amplitude modulation?

In pulse amplitude modulation, the amplitude of a carrier consisting of a periodic train of rectangular pulses is varied in proportion to the sample values of a message signal.

2. What is Pulse Position Modulation?

The position of the pulse within its time slot is varied according to the amplitude of the sample of the analog signal. This is known as pulse position modulation (PPM).

3. What is Pulse Width Modulation?

The width of the pulse is varied in proportion to the amplitude of the analog signal at the time the signal is sampled. This is known as pulse width modulation. PWM is also called pulse duration modulation (PDM) or pulse length modulation (PLM).

4. What is Pulse Code Modulation?

Pulse code modulation is a form of modulation in which the message signal is sampled, and the amplitude of each sample is rounded off to the nearest one of a finite set of discrete levels and encoded, so that both time and amplitude are represented in discrete form. This allows the message to be transmitted by means of a digital waveform.

It is a type of pulse modulation technique where the information

is transmitted in the form of code words. The essential

operations in PCM transmitter are sampling, quantizing and


encoding.

Advantages:

(i) High noise immunity.,

(ii) Private and secured communication is possible through the

use of encryption.

Disadvantages:

(i) Increased transmission bandwidth.

(ii) Increased system complexity.

(i) Transmission errors can be detected and corrected in the receivers.

(ii) Because of the advances in digital IC technologies and high speed computers, digital communication systems are simpler and cheaper compared to analog systems.

When an analog signal is sampled at a rate higher than the Nyquist rate, there is little difference between the amplitudes of two successive samples. This necessitates transmitting several identical PCM codes, which is redundant.

In DPCM, the difference in the amplitude of two successive samples is transmitted rather than the actual sample.


9. A PCM system uses sampling frequency of 16 k samples/s. Then,

find out the maximum frequency of the signal up to which the

signal can be perfectly reconstructed.

A continuous time signal can be completely represented in its samples and recovered back if the sampling frequency is at least twice the highest frequency content of the signal, i.e., fs ≥ 2fm.

The minimum sampling rate is 2fm samples/sec for a signal of bandwidth fm; hence it is called the Nyquist rate. At this rate the continuous signal can be recovered faithfully from its sampled form:

fs = 2fm samples/sec

Here fs = 16 k samples/sec, so fm = fs/2 = 8 kHz.

11. A channel of bandwidth 50 kHz produces an SNR of 1023 at the output. Calculate the maximum information rate.

Given

S/N = 1023
B = 50 kHz

Solution

Rmax = B log2(1 + S/N) = 50000 × log2(1 + 1023)
     = 50000 × log2(1024) = 50000 × 10
     = 500000 bits/sec
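The arithmetic is easy to check directly, since 1 + S/N = 1024 = 2^10 makes log2(1024) exactly 10. A quick numerical check of the Shannon-Hartley formula (the variable names are mine):

```python
import math

# Channel capacity (Shannon-Hartley): C = B * log2(1 + S/N)
B = 50_000            # bandwidth in Hz
snr = 1023            # signal-to-noise ratio (linear, not dB)
C = B * math.log2(1 + snr)    # log2(1024) = 10
print(C)              # 500000.0 -> Rmax = 500000 bits/sec
```

Note that the SNR here is a linear ratio; an SNR quoted in dB must first be converted with 10^(dB/10).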

12. A PCM system uses sampling frequency of 16 k samples/s.

Then, find out the maximum frequency of the signal upto

which the signal can be perfectly reconstructed.

A continuous time signal can be completely represented in its

samples and received back .If the sampling frequency is twice of

the highest frequency content of the signal. ie.,

fs ≥ 2 fm

Here,

fs = sampling frequency = 16 k samples/sec


16 × 10³ ≥ 2fm

fm ≤ 8 × 10³ Hz = 8 kHz

13. Determine the Nyquist rate for analog input frequency of (a)

4 kHz (b) 10 kHz

Solution

Nyquist rate = 2 fm

Where fm - Signal frequency

(a) Nyquist rate = 2 × 4 = 8 kHz

(b) Nyquist rate = 2 × 10 = 20 kHz

14. What is meant by fading?

Fading is the decrease in signal strength that a signal suffers as it propagates through the channel.

The phenomenon of a high-frequency component in the spectrum of the original signal g(t) seemingly taking on the identity of a lower frequency in the spectrum of the sampled signal is called aliasing or fold-over.

The effect of aliasing at the output of the reconstruction filter depends on both the amplitude and phase components of the original spectrum G(f), making an exact analysis of the output difficult and resulting in distortion.

The process of converting the continuous amplitude samples into a discrete set of levels is called the quantizing process. Graphically, the quantizing process means that the straight line representing the relation between the input and the output of a linear analog system is replaced by a staircase characteristic.


The minimum sampling rate of 2W samples per second, for a signal bandwidth of W hertz, is called the Nyquist rate.

19. How many Hamming bits are required for an ASCII character

'D'

Given

For the character 'D', the ASCII code (from the table) is

100 0100

m = 7

Formula: 2^r ≥ m + r + 1 (r = number of redundant bits)

Let r = 1:  2¹ ≥ 7 + 1 + 1, i.e. 2 ≥ 9 (false)
Let r = 2:  2² ≥ 7 + 2 + 1, i.e. 4 ≥ 10 (false)
Let r = 3:  2³ ≥ 7 + 3 + 1, i.e. 8 ≥ 11 (false)
Let r = 4:  2⁴ ≥ 7 + 4 + 1, i.e. 16 ≥ 12 (true)

Hence the number of Hamming bits is r = 4
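The trial-and-error search above is simply finding the smallest r with 2^r ≥ m + r + 1, which a short loop can perform (the function name is mine):

```python
def hamming_bits(m):
    """Smallest number of redundant bits r satisfying 2**r >= m + r + 1,
    the condition used in the worked example above."""
    r = 1
    while 2 ** r < m + r + 1:
        r += 1
    return r

print(hamming_bits(7))    # 4, matching the trial-and-error result for a 7-bit character
```

The same loop answers the question for any message length, e.g. a 4-bit word needs 3 Hamming bits.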

20. Calculate odd and even parity bits for the EBCDIC character

'G'.

The Hex value for the character 'G' is C7 (from the EBCDIC

table), whose binary equivalent is,

C 7

1100 0111 (8bits)


The parity bit at the MSB position is given as, p 1100 0111

For odd parity: 011000111

For even parity: 111000111

21.

Calculate the odd and even parity bits for ASCII character

W.

Solution:

The hex value for the character W is 57 (from the ASCII table), whose binary equivalent is

5 7
101 0111 (7 bits)

During transmission, p101 0111 will be transmitted. If odd parity is used, the value of p will be 0, since the number of 1's in 101 0111 is 5 (odd). For even parity, the value of p will be 1, i.e., to make an even number of 1's in 101 0111:

Odd parity: 0 101 0111    Even parity: 1 101 0111
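The parity computation generalizes to any character; a small helper following the convention of the worked example (parity bit prepended at the MSB; the function name is mine):

```python
def parity_bit(data_bits, odd=True):
    """Return the parity bit that makes the total number of 1s odd
    (odd parity) or even (even parity)."""
    ones = data_bits.count('1')
    if odd:
        return '0' if ones % 2 == 1 else '1'
    return '1' if ones % 2 == 1 else '0'

data = format(0x57, '07b')                   # ASCII 'W' -> '1010111' (five 1s)
print(parity_bit(data, odd=True) + data)     # 01010111
print(parity_bit(data, odd=False) + data)    # 11010111
```

Running it on 0x57 reproduces the two codewords shown above.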

Error-Detecting Codes: include enough redundant information with each block of data for the receiver to deduce whether an error has occurred or not.

Error-Correcting Codes: include extra information which helps the receiver to determine both when an error has occurred and which bit is in error.

Data Terminal Equipment (DTE) is any digital device that transmits, receives, or interprets data messages.

establish and control communications between endpoints in a

data communications system.

Examples of DTEs: video display terminals, printers, and

personal computers.


Forward error correction is the type of error correction scheme in which errors are detected and corrected without retransmission, by adding redundant bits to the message before transmission commences.

• When digital signals are transmitted from one place to another, transmission errors will occur; they can be caused by electrical interference from natural sources, such as lightning, as well as from man-made sources, such as motors, generators, power lines and fluorescent lights.

• To maintain the quality of the signals transmitted, error

control mechanism has to be implemented.

In source encoding, the information is converted into digital data (code words).

In line encoding, the digital data is converted into waveforms.

Cyclic Redundancy Check (CRC): an error-detecting code in which the data bits are treated as a polynomial and divided by a generator polynomial; the remainder is appended to the message as check bits and verified by the receiver.

Sampling theorem: a continuous time signal can be completely represented in its samples and recovered back if the sampling frequency is at least twice the highest frequency content of the signal:

fs ≥ 2 fm

Where fs = sampling frequency

fm = modulating frequency


REVIEW QUESTIONS

PART – A

1. What do you mean by non-linear encoding in PCM system?

2. What is the advantage of differential PCM?

3. What are the types of data transmission?

4. Mention the usage of Scrambler and Descrambler

5. Differentiate between error detection and correction

6. Find the minimum sampling frequency for a signal having fre-

quency for a signal having frequency from 10MHz to 10.2MHz,

in order to avoid aliasing.

7. What are the types of pulse modulation systems?

8. List the methods for error correction.

9. What is pulse stuffing?

10. What is meant by fading?

11. Mention any two error control codes.

12. Define sampling theorem.

13. Determine the Nyquist rate for analog input frequency of

(a) 4KHz (b) 10 KHz.

14. List any two data communication standard organization.

15. What is the need for error control coding?

16. What is meant by differential pulse code modulation?

17. What are the advantages of digital transmission?

18. What is data terminal equipment? Give examples.

19. What is forward error correction?

20. What is meant by ASCII code?

21. Which error detection technique is simple and which one is more

reliable?

22. Give some of the alternative names for data communication codes.

23. What are the two types of noises present in Delta modulation

system?

24. Explain why the quantization noise cannot be removed

completely in PCM. How do you reduce this noise?

25. What are the two types of noises present in Delta modulation

system?


PART – B

1. With the neat block diagram, explain the concept of UART

transceiver operation.

2. What are the parallel interfaces? Describe in detail about

centronics parallel interfaces.

3. With block diagram explain the PCM transmitter and receiver.

4. Describe delta modulation system. What are its limitations? How

can they be overcome?

5. (i). Explain any two data communication codes presently used

for character encoding.

(ii). Give brief notes on error detection.

6. With neat block diagram explain the data communication

hardware.

7. Define PWM and explain one method of generating PWM.

8. Describe the processing steps to convert a k-bit message word to an n-bit code word (n > k). Introduce an error and demonstrate how the error can be corrected, with an example.

9. Draw the block diagram and explain the principle of operation of a PCM system. A binary channel with bit rate = 36000 bits/sec is available for PCM voice transmission. Find the number of

bits per sample, number of quantization levels and sampling fre-

quency assuming highest frequency component of voice signal is

3.2 KHz.

10. (i) Write a note on data communication codes.

(ii) Explain serial and parallel interfaces in detail.

11. Explain in detail about error detection and correction.

12. Explain the standard organization for data communication.

13. Describe the mechanical, electrical and functional characteristics of the RS-232 interface.

14. Draw the block diagram and describe the operation of a del-

ta modulator. What are its advantages and disadvantages

compared to a PCM system?

15. Draw the transmitter and receiver block diagram of differential

PCM and describe its operation.

16. The PCM system has the following parameters: maximum analog input frequency is 4 kHz, maximum decoded voltage at the re-


Determine: (i) the minimum sample rate, (ii) the minimum number of bits used in the PCM code, (iii) the resolution, and (iv) the quantization error.

17. (i) Draw the block diagram of a typical DPCM system and explain.

(ii) In a binary PCM system, the output signal to quantization noise ratio is to be held to a minimum of 40 dB. Determine the number of required levels, and find the corresponding output signal to quantization noise ratio.

Entropy, source encoding theorem, Shannon-Fano coding, Huffman coding, mutual information, channel capacity, channel coding theorem, error control coding, linear block codes, cyclic codes, convolution codes, Viterbi decoding algorithm.


ERROR CONTROL CODING Unit 4

4.1 INTRODUCTION TO INFORMATION THEORY

Information theory deals with the measure of the information content in a message signal, leading to different source coding techniques for efficient transmission of the message. Information theory is used for the mathematical modelling and analysis of communication systems.

With information theory and the modelling of communication systems, the following two main questions are resolved:

i. The irreducible complexity below which the signal cannot be compressed.

ii. The transmission rate for reliable communication over a noisy channel.

In this chapter the concept of information entropy, channel

capacity, information rate etc., and source coding techniques

are discussed.

Discrete information source

A discrete information source emits symbols from a finite set called the alphabet; the elements of the set are called symbols or letters.

A Discrete Memoryless Source (DMS) can be characterized by the list of symbols, the probabilities assigned to these symbols, and the specification of the rate at which the source generates these symbols.

Uncertainty

Information is related to the probability of occurrence of an event: the more the uncertainty, the more the information associated with it. The following examples relate to uncertainty (or surprise).

Example

1. Sun rises in east

Here the uncertainty is zero, because there is no surprise in the statement. The probability of occurrence of the sun rising in the east is always 1.

SOURCE CODING AND ERROR CONTROL CODING

2. Sun does not rise in the east

Here the uncertainty is high: since the event is not possible, the statement carries maximum surprise (information).

Consider a communication system which transmits messages m1, m2, ... with probabilities P1, P2, .... The amount of information transmitted through the message mk with probability Pk is given as

Amount of information, Ik = log2(1/Pk)

Properties of information

1. If there is more uncertainty about the message, the information carried is also more.

2. If the receiver knows the message being transmitted, the amount of information carried is zero.

3. If I1 is the information carried by message m1 and I2 is the information carried by message m2, then the amount of information carried due to m1 and m2 together is I1 + I2.

4. If there are M = 2^N equally likely messages, then the amount of information carried by each message will be N bits.
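The definition Ik = log2(1/Pk) is easy to evaluate; a one-line helper (the name is mine) reproduces the properties above, in particular that rarer messages carry more information and a certain message carries none:

```python
import math

def information(p):
    """Amount of information I = log2(1/p), in bits, for an event of
    probability p (0 < p <= 1)."""
    return math.log2(1 / p)

print(information(1 / 4))   # 2.0 bits: less likely, more information
print(information(1 / 2))   # 1.0 bit
print(information(1.0))     # 0.0 bits: a certain message carries no information
```

Additivity for independent messages also follows: information(p1) + information(p2) equals information(p1 * p2).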

Let us assume a communication system in which the allowable messages are m1, m2, ... with probabilities of occurrence p1, p2, .... Let the transmitter select the message mk of probability pk.

Assume that the receiver has correctly identified the message. Then, by the definition of the term information, the system has conveyed an amount of information Ik given by

Ik = log2(1/pk)

To appreciate the concept of amount of information, it is essential to examine the above equation with some care. First note that while Ik is an entirely dimensionless number, by convention the unit assigned is the bit.

For example, if pk = 1/4, then Ik = log2 4 = 2 bits. The unit bit is employed principally as a reminder that in the above equation the base of the logarithm is 2. (When the natural logarithmic base is used,


the unit is the nat, and when the base is 10, the unit is the hartley or decit. The use of such units is analogous to the radian used in angle measure and the decibel used in connection with power ratios.)

The use of base 2 is especially convenient when binary PCM is employed. If the two possible binary digits (bits) occur with equal likelihood, each with probability 1/2, then the correct identification of a binary digit conveys an amount of information I = log2 2 = 1 bit.

In the past, the term bit was used as an abbreviation for the phrase binary digit. When there is uncertainty whether the word bit is intended as an abbreviation for binary digit, the term binit is used.

Assume there are M equally likely and independent messages such that M = 2^N, with N an integer. In this case the information in each message is

I = log2 M = log2 2^N = N log2 2 = N bits

To identify each message by a binary PCM code word, the number of binary digits required for each of the 2^N messages is also N. Hence, in this case, the information in each message, as measured in bits, is numerically the same as the number of binits needed to encode the messages.

When pk = 1, only one possible message is allowed. In this instance, since the receiver knows the message, there is really no need for transmission. We find that I = log2 1 = 0. As pk decreases from 1 to 0, Ik increases monotonically, going from 0 to infinity. Therefore, a greater amount of information has been conveyed when the receiver correctly identifies a less likely message.

When two independent messages mk and mj are correctly identified, we can readily prove that the amount of information conveyed is the sum of the information associated with each of the messages individually. The individual information amounts are

Ik = log2(1/pk)
Ij = log2(1/pj)

Since the messages are independent, the probability of the composite message is pk·pj, and the corresponding information content of messages mk and mj together is

Ikj = log2(1/(pk·pj)) = Ik + Ij


Problem 1

A source produces one of the four possible symbols

during each interval having probabilities p1=1/2,P2=1/4,P3=P 4=1/8.

obtain the information content of each of these symbols.

Solution

We know that the information content of each symbol is given as

Ik = log2(1/Pk)

Thus we can write

I1 = log2(1/p1) = log2(1/(1/2)) = log2 2 = 1 bit
I2 = log2(1/p2) = log2(1/(1/4)) = log2 4 = 2 bits
I3 = log2(1/p3) = log2(1/(1/8)) = log2 8 = 3 bits
I4 = log2(1/p4) = log2(1/(1/8)) = log2 8 = 3 bits

Problem 2

Calculate the amount of information,if it is given that pk=1/2.

Solution

The amount of information is

Ik = log2(1/pk) = log10(1/pk) / log10 2 = log10 2 / log10 2 = 1 bit

or

Ik = log2(1/(1/2)) = log2 2 = 1 bit


Problem 3

Calculate the amount of information ,if binary digits occur with

equal likelihood in a binary pcm system.

Solution

We know that in binary PCM there are two binary levels, i.e., 1 or 0. Therefore the probabilities are

p1 (0 level) = p2 (1 level) = 1/2

Here the amount of information carried is given as

I1 = log2(1/(1/2)) = log2 2 = 1 bit
I2 = log2(1/(1/2)) = log2 2 = 1 bit

I1 = I2 = 1 bit

Thus, the correct identification of the binary digits in binary PCM carries 1 bit of information.

• In a practical communication system, entropy is defined as the average information per message. It is denoted by 'H' and its units are bits/message.

• Entropy must be as high as possible in order to ensure maximum transfer of information.

• Thus, for a quantitative representation of the average information per symbol, we make the following assumptions:

i. The source is stationary, so that the probabilities remain constant with time.

ii. Successive symbols are statistically independent and come from the source at an average rate of r symbols per second.


• Consider that we have M different messages.

• Let these message be m1m2m3,...mM and their probabilities p1p2p3....

pM respectively.

• Suppose that a sequence of L messages is transmitted. Then, if L is very large, we may say that:

p1L messages of m1 are transmitted,
p2L messages of m2 are transmitted,
...
pML messages of mM are transmitted.

• The information due to message m1 will be

I1 = log2(1/p1)

• Since there are p1L occurrences of m1, the total information due to all occurrences of m1 will be

I1(total) = p1L log2(1/p1)
I2(total) = p2L log2(1/p2), and so on

• The total information due to all L messages will be

I(total) = I1(total) + I2(total) + ... + IM(total)
I(total) = p1L log2(1/p1) + p2L log2(1/p2) + ... + pML log2(1/pM)   ... (1)

Average information = Total information / Number of messages = I(total)/L

• The average information is represented by the entropy, which is denoted by H. Thus,

Entropy, H = I(total)/L


Substituting equation (1),

Entropy, H = Σ (k=1 to M) pk log2(1/pk)

or

H = - Σ (k=1 to M) pk log2 pk
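The entropy sum derived above can be evaluated directly; a minimal helper (the name is mine), with the convention that a zero-probability term contributes nothing, since p·log2(1/p) → 0 as p → 0:

```python
import math

def entropy(probs):
    """H = sum over k of p_k * log2(1/p_k), in bits per symbol."""
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

print(entropy([0.25] * 4))                 # 2.0  (equally likely symbols: log2 M)
print(entropy([1/2, 1/4, 1/8, 1/8]))       # 1.75
print(entropy([1.0]))                      # 0.0  (a certain event)
```

The three printed values illustrate the entropy properties discussed in this section: H reaches log2 M for equally likely symbols, falls below it for a skewed distribution, and is zero for a sure event.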

1. Entropy (H) is zero if the event is sure or impossible, i.e., H = 0 if pk = 0 or 1.

2. When pk = 1/M for all M symbols, the symbols are equally likely. For such a source the entropy is given by H = log2 M.

3. The upper bound on entropy is given by Hmax ≤ log2 M.

These properties can be proved as follows.

Property 1

Calculate the entropy when pk = 0 and when pk = 1.

Proof

We know that

H = Σ (k=1 to M) pk log2(1/pk)

• Since pk = 1, the above equation becomes

H = log2 1 = 0

• Next consider the second case, pk = 0. Instead of putting pk = 0 directly, consider the limiting case:

H = Σ (k=1 to M) lim (pk→0) pk log2(1/pk)

• The right hand side of the above equation goes to zero as pk → 0. Hence the entropy will be zero, i.e.,

H = 0

Therefore, the entropy is zero both for a certain message and for an impossible message.

Property 2

When pk = 1/M, all M symbols are equally likely. For such a source the entropy is given by H = log2 M.

Proof

We know that the probability of each of M equally likely messages is

P = 1/M

• This probability is the same for all M messages, i.e.,

p1 = p2 = p3 = ... = pM = 1/M   ... (1)

H = Σ (k=1 to M) pk log2(1/pk) = p1 log2(1/p1) + p2 log2(1/p2) + ... + pM log2(1/pM)

Putting the probabilities from equation (1) we get

H = (1/M) log2 M + (1/M) log2 M + ... + (1/M) log2 M   (M terms)

Hence, after adding these M terms, the above equation becomes

H = log2 M

ANALOG AND DIGITAL COMMUNICATION

Property 3

The upper bound on entropy is given as Hmax ≤ log2 M. Here M is the number of messages emitted by the source.

Proof

• To prove this property, the following property of the natural logarithm is used:

ln x ≤ x - 1

• Consider two probability distributions {p1, p2, ..., pM} and {q1, q2, ..., qM} on the alphabet X = {x1, x2, ..., xM} of the discrete memoryless source.

• Consider the quantity Σ (k=1 to M) pk log2(qk/pk). Changing the base of the logarithm, it can be written as

Σ pk log2(qk/pk) = log2 e · Σ pk ln(qk/pk)

• Applying ln(qk/pk) ≤ (qk/pk) - 1,

Σ pk log2(qk/pk) ≤ log2 e · Σ pk [(qk/pk) - 1]
                 ≤ log2 e · [Σ qk - Σ pk]

• Note that Σ qk = 1 as well as Σ pk = 1. Hence the above equation becomes

Σ (k=1 to M) pk log2(qk/pk) ≤ 0

which can be rearranged as

Σ pk log2(1/pk) ≤ Σ pk log2(1/qk)

• Now consider qk = 1/M for all k, i.e., all symbols in the alphabet are equally likely. Then

Σ (k=1 to M) pk log2(1/pk) ≤ Σ pk log2 M = log2 M · Σ pk

We know that Σ pk = 1, hence

Σ (k=1 to M) pk log2(1/pk) ≤ log2 M

• Thus the entropy is maximum for the equally likely distribution, i.e.,

H(X) ≤ log2 M
Hmax(X) = log2 M

Problem 1

In binary PCM, if '0' occurs with probability 1/4 and '1' occurs with probability 3/4, calculate the amount of information carried by each binit.

Solution

Binary '0' has P(x1) = 1/4 and binary '1' has P(x2) = 3/4.

I(xi) = log2(1/P(xi))

With P(x1) = 1/4:

I(x1) = log2 4 = log10 4 / log10 2 = 2 bits

With P(x2) = 3/4:

I(x2) = log2(4/3) = log10(4/3) / log10 2 = 0.415 bits

Here it may be observed that binary '0' has probability 1/4 and carries 2 bits of information, whereas binary '1' has probability 3/4 and carries 0.415 bits of information.

This reveals the fact that if the probability of occurrence is less, then the information carried is more, and vice versa.

Problem 2

If there are M equally likely and independent symbols, then prove that the amount of information carried by each symbol will be

I(xi) = N bits, where M = 2^N and N is an integer.

Solution

Since it is given that all the M symbols are equally likely and independent, the probability of occurrence of each symbol must be 1/M.

We know that the amount of information is given as

I(xi) = log2(1/P(xi))   ... (1)

P(xi) = 1/M

Hence, equation (1) will be

I(xi) = log2 M   ... (2)

Since M = 2^N, equation (2) will be

I(xi) = log2 2^N = N log2 2 = N bits   [since log2 2 = 1]

Hence, the amount of information carried by each symbol will be N bits. We know that M = 2^N; this means that there are N binary digits (binits) in each symbol. This indicates that when the symbols are equally likely and coded with an equal number of binary digits (binits), the information carried by each symbol (measured in bits) is numerically the same as the number of binits used for each symbol.

Problem 3

Prove the statement stated as under “if a receiver knows the

message being transmitted,the amount of information carried will be

zero”.

Solution

Here it is stated that the receiver "knows" the message. This means that only one message is transmitted. Thus, the probability of occurrence of this message will be P(xi) = 1, because there is only one message and its occurrence is certain (the probability of a certain event is 1). The amount of information carried by this type of message will be,

I(xi) = log2(1/P(xi)) = log10(1/P(xi)) / log10 2

Substituting P(xi) = 1,

I(xi) = log10 1 / log10 2 = 0

or

I(xi) = 0 bits

This proves the statement if the receiver knows message,the

amount of information carried will be zero.

Also,as P(xi) is decreased from 1 to 0,I(xi ) increased monotonically

from 0 to infinity. This shows that the amount of information conveyed

is greater when receiver correctly identifies less likely messages.

Problem 4

Verify the following expression: if xi and xj are independent, then I(xi xj) = I(xi) + I(xj).

Solution

If xi and xj are independent, then we know that

P(xi xj) = P(xi) P(xj)

Also,

I(xi xj) = log2 [1/P(xi xj)]

         = log2 [1/(P(xi) P(xj))]

         = log2 [1/P(xi)] + log2 [1/P(xj)]

Therefore, I(xi xj) = I(xi) + I(xj)

Problem 5

A discrete source emits one of five symbols once every millisecond with probabilities 1/2, 1/4, 1/8, 1/16 and 1/16 respectively. Determine the source entropy and information rate.

Solution

We know that the source entropy is given as

H(X) = Σ (i=1 to 5) P(xi) log2 [1/P(xi)] bits/symbol

(or) H(X) = (1/2) log2 2 + (1/4) log2 4 + (1/8) log2 8 + (1/16) log2 16 + (1/16) log2 16

(or) H(X) = 1/2 + 1/2 + 3/8 + 1/4 + 1/4 = 15/8

(or) H(X) = 1.875 bits/symbol

The symbol rate r = 1/Ts = 1/10^-3 = 1000 symbols/sec.

Therefore, the information rate is

R = r H(X) = 1000 × 1.875 = 1875 bits/sec
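The entropy and information-rate computation above is easy to reproduce in a few lines of Python (the helper name `entropy` is our own):

```python
import math

def entropy(probs):
    """Source entropy H(X) = sum of p*log2(1/p), in bits/symbol."""
    return sum(p * math.log2(1.0 / p) for p in probs)

probs = [1/2, 1/4, 1/8, 1/16, 1/16]
H = entropy(probs)          # 1.875 bits/symbol
r = 1 / 1e-3                # one symbol every millisecond -> 1000 symbols/sec
R = r * H                   # information rate in bits/sec
print(H, R)                 # 1.875, 1875.0
```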

Problem 6

The probabilities of the five possible outcomes of an experiment are given as

P(x1) = 1/2, P(x2) = 1/4, P(x3) = 1/8, P(x4) = P(x5) = 1/16

Determine the entropy and information rate if there are 16 outcomes per second.

Solution

The entropy of the system is given as

H(X) = Σ (i=1 to 5) P(xi) log2 [1/P(xi)] bits/symbol

(or) H(X) = (1/2) log2 2 + (1/4) log2 4 + (1/8) log2 8 + (1/16) log2 16 + (1/16) log2 16 = 15/8

H(X) = 1.875 bits/outcome

Therefore, the rate of information R will be

R = 16 outcomes/sec × 1.875 bits/outcome = 30 bits/sec

Problem 7

An analog signal is band limited to fm Hz and sampled at the Nyquist rate. The samples are quantized into four levels. Each level represents one symbol; thus there are four symbols. The probabilities of these four levels (symbols) are P(x1) = P(x4) = 1/8 and P(x2) = P(x3) = 3/8. Obtain the information rate of the source.

Solution

We are given four symbols with probabilities P(x1) = P(x4) = 1/8 and P(x2) = P(x3) = 3/8. The average information H(X) (or entropy) is expressed as

H(X) = P(x1) log2 [1/P(x1)] + P(x2) log2 [1/P(x2)] + P(x3) log2 [1/P(x3)] + P(x4) log2 [1/P(x4)]

H(X) = (1/8) log2 8 + (3/8) log2 (8/3) + (3/8) log2 (8/3) + (1/8) log2 8

(or) H(X) = 1.8 bits/symbol

It is given that the signal is sampled at the Nyquist rate, which for an fm Hz band limited signal is

Nyquist rate = 2 fm samples/sec

Since every sample generates one source symbol,

symbols per second, r = 2 fm symbols/sec

The information rate is given by R = r H(X).

Putting the values of r and H(X) in this equation, we get

R = 2 fm symbols/sec × 1.8 bits/symbol = 3.6 fm bits/sec.

In this example there are four levels. Those four levels may be coded using binary PCM as shown in Table 6.1.

S.No   Symbol or level   Probability   Binary digits
1      Q1                1/8           00
2      Q2                3/8           01
3      Q3                3/8           10
4      Q4                1/8           11

Table 6.1

Hence, two binary digits (binits) are required to send each symbol, and symbols are sent at the rate of 2 fm symbols/sec. Therefore, the transmission rate of binary digits will be binit rate = 2 binits/symbol × 2 fm symbols/sec = 4 fm binits/sec. Because one binit is capable of conveying one bit of information, the above coding scheme is capable of conveying 4 fm bits of information per second. But in this example we have obtained that we are transmitting 3.6 fm bits of information per second. This means that the information carrying ability of binary PCM is not completely utilized by this transmission scheme.
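The comparison above can be verified numerically; note the entropy is ≈1.811 bits/symbol, which the text rounds to 1.8. A small sketch (the value of fm below is an arbitrary illustrative choice, not from the problem):

```python
import math

p = [1/8, 3/8, 3/8, 1/8]                       # quantizer level probabilities
H = sum(q * math.log2(1.0 / q) for q in p)     # ~1.811 bits/symbol (text rounds to 1.8)

fm = 4000.0                 # assumed bandwidth in Hz, for illustration only
r = 2 * fm                  # Nyquist sampling -> symbol rate, symbols/sec
R = r * H                   # information rate, ~3.62*fm bits/sec
binit_rate = 2 * r          # 2 binits/symbol -> 4*fm binits/sec
print(H, R, binit_rate)     # R < binit_rate: PCM capacity not fully used
```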

4.3 SOURCE CODING TO INCREASE AVERAGE INFORMATION PER BIT

• Coding offers the most significant application of information theory.

• The main purpose of coding is to improve the efficiency of the communication system.

• Coding is a procedure for mapping a given set of messages {m1, m2, ..., mN} into a new set of encoded messages {c1, c2, ..., cN} in such a way that the mapping is one-to-one, i.e., for each message there is only one encoded message. This is called "source coding".

• The device which performs source coding is called a source encoder.

• The main problem of a coding technique is the development of an efficient source encoder.

The primary requirements are:

1. The code words produced by the source encoder should be binary in nature.

2. The source code should be unique in nature, i.e., every code word should be uniquely decodable.

Let there be L number of messages emitted by the source. The probability of the kth message is pk and the number of bits assigned to this message is nk. Then the average number of bits (N) in the code word of the message is given by

N = Σ (k=0 to L−1) pk nk    ...(1)

Let Nmin be the minimum value of N. Then the coding efficiency of the source encoder is defined as

η = Nmin / N    ...(2)

A source encoder whose efficiency approaches unity is called efficient.

• In other words, Nmin ≤ N, and the coding efficiency is maximum when Nmin ≈ N.

• The value of Nmin can be determined with the help of Shannon's first theorem, called the source coding theorem.

Statement

Given a discrete memoryless source of entropy H, the average code word length N for any distortionless source encoding is bounded as

N ≥ H    ...(3)

Note:

i. The entropy H indicates the fundamental limit on the average number of bits per symbol N; this limit says that the average number of bits per symbol cannot be made smaller than the entropy H.

ii. Hence Nmin = H, and we can write the efficiency of the source encoder from equation (2) as

η = H / N    ...(4)

• Redundancy is a measure of the redundant bits in the encoded message sequence. It is given by

Redundancy γ = 1 − code efficiency

NOTE: Redundancy should be as low as possible.

• The redundant information should be removed from the signal prior to transmission. This operation, with no loss of information, is ordinarily performed on a signal in digital form. It is referred to as data compaction or lossless data compression.

• Basically, data compression is achieved by assigning short descriptions to the most frequent outcomes of the source output and longer descriptions to the less frequent outcomes.

• The various source coding schemes for data compaction are:

(i) Prefix coding.
(ii) Shannon–Fano coding.
(iii) Huffman coding.
(iv) Lempel–Ziv coding.

• There are two algorithms of variable length coding techniques which are used to increase the efficiency of the source encoder. They are

1. Shannon–Fano algorithm
2. Huffman coding.

Need

(i) If the probabilities of occurrence of all the messages are not equally likely, then the average information or entropy is reduced, and as a result the information rate is reduced.

(ii) This problem can be solved by coding the messages with different numbers of bits.

NOTE

(i) Shannon–Fano coding is used to encode the messages depending upon their probabilities.

(ii) This algorithm assigns a smaller number of bits for highly probable messages and a larger number of bits for rarely occurring messages.

Procedure

The following procedure is known as the Shannon–Fano coding algorithm.

Step 1: List the source symbols in order of decreasing probability.

Step 2: Partition the set into two sets that are as close to equiprobable as possible, and assign 0 to the upper set and 1 to the lower set.

Step 3: Continue this process, each time partitioning the sets with as nearly equal probabilities as possible, until further partitioning is not possible.
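The three steps above can be sketched directly in Python (the helper name `shannon_fano` is our own). Note that when two partitions are equally balanced, different tie-breaking choices give different but equally efficient codes — exactly the Method I / Method II situation in the problem that follows:

```python
def shannon_fano(symbols):
    """symbols: list of (name, probability), sorted by decreasing probability.
    Returns {name: codeword} built by recursive equiprobable partitioning."""
    codes = {name: "" for name, _ in symbols}

    def split(group):
        if len(group) <= 1:
            return
        total = sum(p for _, p in group)
        # Find the split point making the two halves closest to equiprobable.
        run, best_k, best_diff = 0.0, 1, float("inf")
        for k in range(1, len(group)):
            run += group[k - 1][1]
            diff = abs(run - (total - run))
            if diff < best_diff:
                best_diff, best_k = diff, k
        for name, _ in group[:best_k]:
            codes[name] += "0"          # upper set gets 0
        for name, _ in group[best_k:]:
            codes[name] += "1"          # lower set gets 1
        split(group[:best_k])
        split(group[best_k:])

    split(symbols)
    return codes

src = [("x1", 0.4), ("x2", 0.2), ("x3", 0.2), ("x4", 0.1), ("x5", 0.1)]
codes = shannon_fano(src)
avg_len = sum(p * len(codes[n]) for n, p in src)
print(codes, avg_len)    # this tie-breaking gives lengths 1,2,3,4,4; average 2.2
```

This particular tie-breaking produces yet another valid prefix-free code with the same average length, 2.2 bits/symbol, as the two hand-built codes below.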

Problem 1

A discrete memoryless source has five symbols x1, x2, x3, x4, x5 with probabilities 0.4, 0.2, 0.1, 0.2, 0.1 respectively. Construct a Shannon–Fano code for the source and calculate the code efficiency η.

Solution

Given probabilities P1 = 0.4, P2 = 0.2, P3 = 0.1, P4 = 0.2, P5 = 0.1.

Step 1: Arrange the symbols in order of decreasing probability.

Symbols   Probabilities
x1        0.4
x2        0.2
x3        0.2
x4        0.1
x5        0.1

Step 2: The set can be partitioned as equiprobable in two methods.

Method I

Symbol   Probability   Stage 1   Stage 2   Stage 3   Code word   No. of bits per message (nk)
x1       0.4           0         0                   00          2
x2       0.2           0         1                   01          2
x3       0.2           1         0                   10          2
x4       0.1           1         1         0         110         3
x5       0.1           1         1         1         111         3

N = Σ (k=1 to 5) pk nk
  = (0.4 × 2) + (0.2 × 2) + (0.2 × 2) + (0.1 × 3) + (0.1 × 3)
  = 0.8 + 0.4 + 0.4 + 0.3 + 0.3
  = 2.2 bits/symbol.

Method II

Symbol   Probability   Stage 1   Stage 2   Stage 3   Code word   No. of bits per message (nk)
x1       0.4           0                             0           1
x2       0.2           1         0         0         100         3
x3       0.2           1         0         1         101         3
x4       0.1           1         1         0         110         3
x5       0.1           1         1         1         111         3

Table 6.3

N = Σ (k=1 to 5) pk nk
  = (0.4 × 1) + (0.2 × 3) + (0.2 × 3) + (0.1 × 3) + (0.1 × 3)
  = 0.4 + 0.6 + 0.6 + 0.3 + 0.3
  = 2.2 bits/symbol.

Entropy

H = Σ (k=1 to 5) Pk log2 (1/Pk)
  = 0.4 log2 (1/0.4) + 0.2 log2 (1/0.2) + 0.2 log2 (1/0.2) + 0.1 log2 (1/0.1) + 0.1 log2 (1/0.1)
  = (0.4 × 1.3219) + (0.2 × 2.3219) + (0.2 × 2.3219) + (0.1 × 3.3219) + (0.1 × 3.3219)
  = 0.5288 + 0.4644 + 0.4644 + 0.3322 + 0.3322
  = 2.1219 bits/symbol.

(iii) Efficiency

η = H / N = 2.1219 / 2.2 = 0.9645

∴ η = 96.45 %

• Huffman coding assigns different numbers of binary digits to the messages according to their probabilities of occurrence.

• In Huffman coding, one binary digit carries almost one bit of information, which is the maximum information that can be conveyed by one digit.

Procedure

Step 1: The messages are arranged in an order of decreasing probabilities. For example, if x3 and x4 have the lowest probabilities, they are put at the bottom of the column of stage I.

Step 2: The two messages of lowest probabilities are assigned binary '0' and '1'.

Step 3: The two lowest probabilities in stage I are added and the sum is placed in stage II, such that the probabilities are in descending order.

Step 4: Now the last two probabilities are assigned 0 and 1 and they are added. The sum of the last two probabilities is placed in stage III such that the probabilities are in descending order. Again '0' and '1' are assigned to the last two probabilities.

Step 5: This process is continued till the last stage contains only two values. These two values are assigned digits 0 and 1 and no further repetition is required. This results in the construction of a tree known as the Huffman tree.

Step 6: Start encoding with the last stage, which consists of exactly two ordered probabilities. Assign 0 as the first digit in the code words for all the source symbols associated with the first probability; assign 1 to the second probability.

Step 7: Now go back and assign 0 and 1 to the second digit for the two probabilities that were combined in the previous step, retaining all assignments made in that stage.
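The stage-by-stage combining procedure above is exactly what a priority queue implements. A compact Python sketch (the helper name `huffman` is our own; ties may be broken differently from the hand-drawn tables, but the code-word lengths and average length come out the same):

```python
import heapq
import math

def huffman(probs):
    """probs: {symbol: probability}. Returns {symbol: codeword}."""
    # Each heap entry: (probability, tie-breaker, {symbol: partial code}).
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)   # lowest probability  -> prefix bit 0
        p1, _, c1 = heapq.heappop(heap)   # next lowest         -> prefix bit 1
        merged = {s: "0" + c for s, c in c0.items()}
        merged.update({s: "1" + c for s, c in c1.items()})
        heapq.heappush(heap, (p0 + p1, count, merged))
        count += 1
    return heap[0][2]

# Probabilities from Problem 1 below
probs = {"x1": 0.30, "x2": 0.25, "x3": 0.20, "x4": 0.12, "x5": 0.08, "x6": 0.05}
codes = huffman(probs)
avg = sum(p * len(codes[s]) for s, p in probs.items())
H = sum(p * math.log2(1 / p) for p in probs.values())
print(codes, avg, round(H / avg, 2))   # average 2.38 bits/symbol, efficiency ~0.99
```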

Problem 1

A discrete memoryless source has 6 symbols x1, x2, x3, x4, x5, x6 with probabilities 0.30, 0.25, 0.20, 0.12, 0.08, 0.05 respectively. Construct a Huffman code, calculate its efficiency and also calculate the redundancy of the code.

Solution

The Huffman tree is constructed stage by stage as follows:

Xi   P(xi)   Stage I   Stage II   Stage III   Stage IV   Stage V
x1   0.30    0.30      0.30       0.30        0.45       0.55
x2   0.25    0.25      0.25       0.25        0.30       0.45
x3   0.20    0.20      0.20       0.25        0.25
x4   0.12    0.12      0.13       0.20
x5   0.08    0.08      0.12
x6   0.05    0.05

Reading back through the stages, we can write the code words for the respective probabilities as follows:

Message   Probability   Code word   Number of bits nk
x1        0.30          00          2
x2        0.25          01          2
x3        0.20          11          2
x4        0.12          101         3
x5        0.08          1000        4
x6        0.05          1001        4

Table 6.5

To find the efficiency η, we have to calculate the average code word length (N) and the entropy (H).

N = Σ (k=1 to 6) pk nk, where nk is the code word length
  = (0.30 × 2) + (0.25 × 2) + (0.20 × 2) + (0.12 × 3) + (0.08 × 4) + (0.05 × 4)
  = 0.60 + 0.50 + 0.40 + 0.36 + 0.32 + 0.20
  = 2.38 bits/symbol

Entropy

H = Σ (k=1 to 6) Pk log2 (1/Pk)
  = 0.30 log2 (1/0.30) + 0.25 log2 (1/0.25) + 0.20 log2 (1/0.20) + 0.12 log2 (1/0.12) + 0.08 log2 (1/0.08) + 0.05 log2 (1/0.05)
  = 0.521 + 0.5 + 0.4643 + 0.367 + 0.2915 + 0.216
  = 2.3598 bits of information/message

η = H / N = 2.3598 / 2.38 = 0.99

∴ η = 99 %

Redundancy of the code (γ)

γ = 1 − η = 1 − 0.99 = 0.01

Problem 2

A discrete memoryless source X has four symbols x1, x2, x3 and x4 with probabilities P(x1) = 1/2, P(x2) = 1/4, P(x3) = P(x4) = 1/8. Construct a Shannon–Fano code for X, and show that this code has the optimum property that ni = I(xi) and that the code efficiency is 100 %.

Solution

Given

P(x1) = 1/2, P(x2) = 1/4, P(x3) = P(x4) = 1/8, and ni = I(xi).

We know that I(xi) = log2 [1/P(xi)].

I(x1) = log2 [1/(1/2)] = log2 2 = 1

I(x2) = log2 [1/(1/4)] = log2 4 = 2

I(x3) = log2 [1/(1/8)] = log2 8 = 3

I(x4) = log2 [1/(1/8)] = log2 8 = 3

The Shannon–Fano code is constructed as follows:

Symbol   Probability   Stage 1   Stage 2   Stage 3   Code word   No. of bits per message (nk)
x1       1/2           0                             0           1
x2       1/4           1         0                   10          2
x3       1/8           1         1         0         110         3
x4       1/8           1         1         1         111         3

We know that the entropy is

H(X) = Σ (i=1 to 4) P(xi) log2 [1/P(xi)]  (or)  Σ (i=1 to 4) P(xi) I(xi)

= P(x1) I(x1) + P(x2) I(x2) + P(x3) I(x3) + P(x4) I(x4)
= (1/2 × 1) + (1/4 × 2) + (1/8 × 3) + (1/8 × 3)
= 1/2 + 1/2 + 3/8 + 3/8
= 1.75 bits/message

The average code word length is

N = Σ (i=1 to 4) P(xi) ni
  = P(x1) n1 + P(x2) n2 + P(x3) n3 + P(x4) n4
  = (1/2 × 1) + (1/4 × 2) + (1/8 × 3) + (1/8 × 3)
  = 1.75 bits/symbol

Code efficiency

η = H(X) / N = 1.75 / 1.75 = 1

∴ η = 100 %

Problem 3

A DMS has five equally likely symbols. Construct a Shannon–Fano code for X and calculate the efficiency of the code. Construct another Shannon–Fano code and compare the results. Repeat for the Huffman code and compare the results.

Solution

(i) A Shannon–Fano code [by choosing two approximately equiprobable (0.4 versus 0.6) sets] is constructed as follows:

Symbol   Probability   Stage 1   Stage 2   Stage 3   Code word   No. of bits per message (nk)
x1       0.2           0         0                   00          2
x2       0.2           0         1                   01          2
x3       0.2           1         0                   10          2
x4       0.2           1         1         0         110         3
x5       0.2           1         1         1         111         3

Entropy

H(X) = Σ (i=1 to 5) P(xi) log2 [1/P(xi)]

Here all five probabilities are the same (i.e., 0.2), so we can write

H(X) = 5 × 0.2 × log2 (1/0.2)

H(X) = 2.32 bits/message

Average code word length

N = Σ (k=1 to 5) pk nk
  = 0.4 + 0.4 + 0.4 + 0.6 + 0.6
  = 2.4 bits/symbol.

Coding efficiency

η = H(X) / N = 2.32 / 2.4 = 0.967

∴ η = 96.7 %

(ii) Another Shannon–Fano code [by choosing another two approximately equiprobable (0.6 versus 0.4) sets] is constructed as follows:

Symbol   Probability   Stage 1   Stage 2   Stage 3   Code word   No. of bits per message (nk)
x1       0.2           0         0                   00          2
x2       0.2           0         1         0         010         3
x3       0.2           0         1         1         011         3
x4       0.2           1         0                   10          2
x5       0.2           1         1                   11          2

H(X) = Σ (i=1 to 5) P(xi) log2 [1/P(xi)] = 2.32 bits/message

N = Σ (i=1 to 5) P(xi) ni
  = P(x1) n1 + P(x2) n2 + P(x3) n3 + P(x4) n4 + P(x5) n5
  = (0.2 × 2) + (0.2 × 3) + (0.2 × 3) + (0.2 × 2) + (0.2 × 2)
  = 0.4 + 0.6 + 0.6 + 0.4 + 0.4
  = 2.4 bits/symbol

Coding efficiency (η)

η = H(X) / N = 2.32 / 2.4 = 0.967

∴ η = 96.7 %

Since the average code word length is the same as that for the code of part (i), the efficiency is the same.

(iii) The Huffman code is constructed as follows:

Xi   P(xi)   Stage I     Stage II    Stage III   Stage IV
x1   0.2     0.2 (01)    0.4 (1)     0.4 (1)     0.6 (0)
x2   0.2     0.2 (000)   0.2 (01)    0.4 (00)    0.4 (1)
x3   0.2     0.2 (001)   0.2 (000)   0.2 (01)
x4   0.2     0.2 (10)    0.2 (001)
x5   0.2     0.2 (11)

Symbol   Probability   Code word   No. of bits (nk)
x1       0.2           01          2
x2       0.2           000         3
x3       0.2           001         3
x4       0.2           10          2
x5       0.2           11          2

N = Σ (i=1 to 5) P(xi) ni
  = P(x1) n1 + P(x2) n2 + P(x3) n3 + P(x4) n4 + P(x5) n5

Here all probabilities have the same value (0.2), so

N = 0.2 × [n1 + n2 + n3 + n4 + n5]
  = 0.2 × [2 + 3 + 3 + 2 + 2]
  = 0.2 × 12
  = 2.4 bits/symbol

The entropy and efficiency are also the same as those of the Shannon–Fano code, due to the same average code word length.

Entropy

H(X) = Σ (i=1 to 5) P(xi) log2 [1/P(xi)]

Here all five probabilities have the same value 0.2, so we can write

H(X) = 5 × 0.2 × log2 (1/0.2) = 2.32 bits/message

Coding efficiency (η)

η = H(X) / N = 2.32 / 2.4 = 0.967

∴ η = 96.7 %

Problem 4

A discrete memoryless source (DMS) has five symbols x1, x2, x3, x4, x5 with probabilities 0.4, 0.19, 0.16, 0.15, 0.1 respectively.

(i) Construct the Shannon–Fano code for X and calculate the efficiency of the code.

(ii) Repeat for the Huffman code and compare the results.

Solution

(i) The Shannon–Fano code is constructed as follows:

Symbol   Probability   Stage 1   Stage 2   Stage 3   Code word   No. of bits per message (nk)
x1       0.4           0         0                   00          2
x2       0.19          0         1                   01          2
x3       0.16          1         0                   10          2
x4       0.15          1         1         0         110         3
x5       0.1           1         1         1         111         3

Average code word length

N = (0.4 × 2) + (0.19 × 2) + (0.16 × 2) + (0.15 × 3) + (0.1 × 3) = 2.25 bits/symbol

Entropy

H(X) = Σ (i=1 to 5) P(xi) log2 [1/P(xi)]
     = 0.4 log2 (1/0.4) + 0.19 log2 (1/0.19) + 0.16 log2 (1/0.16) + 0.15 log2 (1/0.15) + 0.1 log2 (1/0.1)

H(X) = 2.15 bits/symbol

Code efficiency (η)

η = H(X) / N = 2.15 / 2.25 = 0.956

∴ η = 95.6 %

(ii) The Huffman code is constructed as follows:

Xi   P(xi)   Stage I      Stage II     Stage III   Stage IV
x1   0.4     0.4 (1)      0.4 (1)      0.4 (1)     0.6 (0)
x2   0.19    0.19 (000)   0.25 (01)    0.35 (00)   0.4 (1)
x3   0.16    0.16 (001)   0.19 (000)   0.25 (01)
x4   0.15    0.15 (010)   0.16 (001)
x5   0.1     0.1 (011)

Entropy H(X)

The entropy H(X) of the Huffman code is the same as that for the Shannon–Fano code.

H(X) = Σ (i=1 to 5) P(xi) log2 [1/P(xi)]
     = 0.4 log2 (1/0.4) + 0.19 log2 (1/0.19) + 0.16 log2 (1/0.16) + 0.15 log2 (1/0.15) + 0.1 log2 (1/0.1)

H(X) = 2.15 bits/message

Symbol   Probability   Code word   No. of bits (nk)
x1       0.4           1           1
x2       0.19          000         3
x3       0.16          001         3
x4       0.15          010         3
x5       0.1           011         3

N = Σ (i=1 to 5) P(xi) ni
  = (0.4 × 1) + (0.19 × 3) + (0.16 × 3) + (0.15 × 3) + (0.1 × 3)

N = 2.2 bits/symbol

Code efficiency (η)

η = H / N = 2.15 / 2.2 = 0.977

∴ η = 97.7 %

Problem

Construct a Huffman code for a discrete memoryless source whose seven symbols have the probabilities given below, and calculate the average code word length, entropy and coding efficiency.

x1     x2     x3    x4     x5     x6    x7
0.05   0.15   0.2   0.05   0.15   0.3   0.1

Solution

Arranging the symbols in decreasing order, we obtain the Huffman code as follows:

Xi   P(Xi)   Stage I       Stage II      Stage III     Stage IV    Stage V     Stage VI
x6   0.3     0.3 (00)      0.3 (00)      0.3 (00)      0.3 (00)    0.4 (1)     0.6 (0)
x3   0.2     0.2 (10)      0.2 (10)      0.2 (10)      0.3 (01)    0.3 (00)    0.4 (1)
x2   0.15    0.15 (010)    0.15 (010)    0.2 (11)      0.2 (10)    0.3 (01)
x5   0.15    0.15 (011)    0.15 (011)    0.15 (010)    0.2 (11)
x7   0.1     0.1 (110)     0.1 (110)     0.15 (011)
x1   0.05    0.05 (1110)   0.1 (111)
x4   0.05    0.05 (1111)

Symbol   Probability   Code word   No. of bits (nk)
x1       0.05          1110        4
x2       0.15          010         3
x3       0.2           10          2
x4       0.05          1111        4
x5       0.15          011         3
x6       0.3           00          2
x7       0.1           110         3

Table 6.15

Average code word length (N)

N = Σ (i=1 to 7) P(xi) ni
  = (0.05 × 4) + (0.15 × 3) + (0.2 × 2) + (0.05 × 4) + (0.15 × 3) + (0.3 × 2) + (0.1 × 3)

N = 2.6 bits/symbol

Entropy H(X)

H(X) = Σ (i=1 to 7) P(xi) log2 [1/P(xi)]
     = 0.05 log2 (1/0.05) + 0.15 log2 (1/0.15) + 0.2 log2 (1/0.2) + 0.05 log2 (1/0.05)
       + 0.15 log2 (1/0.15) + 0.3 log2 (1/0.3) + 0.1 log2 (1/0.1)

H(X) = 2.57 bits/message

Coding efficiency (η)

η = H / N = 2.57 / 2.6 = 0.9885

∴ η = 98.85 %

Problem 7

A discrete memoryless source has the alphabet given below. Compute two different Huffman codes for this source. Hence, for each of the two codes, find

(i) The average code-word length.

(ii) The variance of the code-word length over the ensemble of source symbols.

Symbol        S0     S1     S2     S3     S4
Probability   0.55   0.15   0.15   0.10   0.05

Solution

The two different Huffman codes are obtained by placing the combined probability as high as possible or as low as possible.

1. Placing combined probability as high as possible

Symbol   P(xi)   Stage I      Stage II     Stage III   Stage IV
s0       0.55    0.55 (0)     0.55 (0)     0.55 (0)    0.55 (0)
s1       0.15    0.15 (100)   0.15 (11)    0.3 (10)    0.45 (1)
s2       0.15    0.15 (101)   0.15 (100)   0.15 (11)
s3       0.10    0.1 (110)    0.15 (101)
s4       0.05    0.05 (111)

Symbol   Probability   Code word   No. of bits (nk)
s0       0.55          0           1
s1       0.15          100         3
s2       0.15          101         3
s3       0.1           110         3
s4       0.05          111         3

(i) Average code-word length

N = Σ (k=0 to 4) pk nk
  = (0.55 × 1) + (0.15 × 3) + (0.15 × 3) + (0.1 × 3) + (0.05 × 3)
  = 1.9 bits/symbol

(ii) Variance of the code

σ² = Σ (k=0 to 4) pk (nk − N)²
   = 0.55 [1 − 1.9]² + 0.15 [3 − 1.9]² + 0.15 [3 − 1.9]² + 0.1 [3 − 1.9]² + 0.05 [3 − 1.9]²
   = 0.99

2. Placing combined probability as low as possible

Symbol   P(Xi)   Stage I       Stage II      Stage III   Stage IV
s0       0.55    0.55 (0)      0.55 (0)      0.55 (0)    0.55 (0)
s1       0.15    0.15 (11)     0.15 (11)     0.3 (10)    0.45 (1)
s2       0.15    0.15 (100)    0.15 (100)    0.15 (11)
s3       0.1     0.1 (1010)    0.15 (101)
s4       0.05    0.05 (1011)

(i) Average code-word length

N = Σ (k=0 to 4) pk nk
  = (0.55 × 1) + (0.15 × 2) + (0.15 × 3) + (0.1 × 4) + (0.05 × 4)
  = 1.9 bits/symbol

(ii) Variance of the code

σ² = Σ (k=0 to 4) pk (nk − N)²
   = 0.55 [1 − 1.9]² + 0.15 [2 − 1.9]² + 0.15 [3 − 1.9]² + 0.1 [4 − 1.9]² + 0.05 [4 − 1.9]²
   = 1.29

Method                 Average code-word length   Variance
As high as possible    1.9                        0.99
As low as possible     1.9                        1.29
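The two length/variance computations above can be checked numerically. A small sketch, with the code-word lengths read off the two tables:

```python
probs = [0.55, 0.15, 0.15, 0.10, 0.05]
lengths_high = [1, 3, 3, 3, 3]   # combined probability placed as high as possible
lengths_low  = [1, 2, 3, 4, 4]   # combined probability placed as low as possible

def stats(p, n):
    """Average code-word length and its variance over the ensemble."""
    avg = sum(pi * ni for pi, ni in zip(p, n))
    var = sum(pi * (ni - avg) ** 2 for pi, ni in zip(p, n))
    return avg, var

print(stats(probs, lengths_high))   # average 1.9, variance 0.99
print(stats(probs, lengths_low))    # average 1.9, variance 1.29
```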

• Mutual information I(xi, yj) is a measure of the amount of information transferred when xi is transmitted and yj is received. It is represented by

I(xi, yj) = log2 [P(xi/yj) / P(xi)] bits    ...(1)

Here P(xi/yj) is the conditional probability of the transmitted symbol xi given that yj is received.

The average mutual information is represented by I(X;Y). It is calculated in bits/symbol. Average mutual information is defined as the amount of source information gained per received symbol. It is given by

I(X;Y) = Σ (i=1 to m) Σ (j=1 to n) P(xi, yj) I(xi, yj)    ...(2)

I(X;Y) = Σ (i=1 to m) Σ (j=1 to n) P(xi, yj) log2 [P(xi/yj) / P(xi)]

Properties of mutual information

(i) The mutual information of a channel is symmetric:

I(X;Y) = I(Y;X)

(ii) The mutual information can be expressed in terms of the entropies of the channel input or channel output and the conditional entropies:

I(X;Y) = H(X) − H(X/Y)
I(Y;X) = H(Y) − H(Y/X)

where H(X/Y) and H(Y/X) are conditional entropies.

(iii) The mutual information is always non-negative:

I(X;Y) ≥ 0

(iv) The mutual information is related to the joint entropy H(X,Y) by the following relation:

I(X;Y) = H(X) + H(Y) − H(X,Y)

Property 1

The mutual information of a channel is symmetric, i.e., I(X;Y) = I(Y;X).

Proof

Let us consider some standard relationships from probability theory. These are as follows:

P(Xi, Yj) = P(Xi/Yj) P(Yj)    ...(1)

and P(Xi, Yj) = P(Yj/Xi) P(Xi)    ...(2)

From equations (1) and (2) we can write

P(Xi/Yj) P(Yj) = P(Yj/Xi) P(Xi), i.e., P(Xi/Yj) / P(Xi) = P(Yj/Xi) / P(Yj)    ...(3)

Now

I(X;Y) = Σ (i=1 to m) Σ (j=1 to n) P(Xi, Yj) log2 [P(Xi/Yj) / P(Xi)]

Using equation (3),

I(X;Y) = Σ (i=1 to m) Σ (j=1 to n) P(Xi, Yj) log2 [P(Yj/Xi) / P(Yj)]

The right-hand side is, by definition,

I(Y;X) = Σ (i=1 to m) Σ (j=1 to n) P(Xi, Yj) log2 [P(Yj/Xi) / P(Yj)]

Hence I(X;Y) = I(Y;X), i.e., the mutual information of a channel is symmetric.

Property 2

I(X;Y) = H(X) − H(X/Y)
I(Y;X) = H(Y) − H(Y/X)

Proof

H(X/Y) is the conditional entropy and it is given as

H(X/Y) = Σ (i=1 to m) Σ (j=1 to n) P(Xi, Yj) log2 [1/P(Xi/Yj)]    ...(1)

In other words, H(X/Y) is the information lost in the noisy channel. It is the average conditional self-information.

We know that the average mutual information is given as

I(X;Y) = Σ (i=1 to m) Σ (j=1 to n) P(Xi, Yj) log2 [P(Xi/Yj) / P(Xi)]

Splitting the logarithm,

I(X;Y) = Σ (i=1 to m) Σ (j=1 to n) P(Xi, Yj) log2 [1/P(Xi)]
         − Σ (i=1 to m) Σ (j=1 to n) P(Xi, Yj) log2 [1/P(Xi/Yj)]

I(X;Y) = Σ (i=1 to m) Σ (j=1 to n) P(Xi, Yj) log2 [1/P(Xi)] − H(X/Y)    ...(2)

By using the standard probability relation

Σ (j=1 to n) P(Xi, Yj) = P(Xi)

equation (2) becomes

I(X;Y) = Σ (i=1 to m) P(Xi) log2 [1/P(Xi)] − H(X/Y)    ...(3)

The first term of the above equation represents the entropy, i.e.,

H(X) = Σ (i=1 to m) P(Xi) log2 [1/P(Xi)]    ...(4)

Hence equation (3) becomes

I(X;Y) = H(X) − H(X/Y)    ...(5)

I(X;Y) is the average information transferred per symbol across the channel. It is equal to the source entropy minus the information lost in the noisy channel, as given by the above equation.

Similarly, using the symmetry of mutual information,

I(Y;X) = Σ (i=1 to m) Σ (j=1 to n) P(Xi, Yj) log2 [P(Yj/Xi) / P(Yj)]

       = Σ (j=1 to n) Σ (i=1 to m) P(Xi, Yj) log2 [1/P(Yj)]
         − Σ (i=1 to m) Σ (j=1 to n) P(Xi, Yj) log2 [1/P(Yj/Xi)]    ...(6)

The conditional entropy H(Y/X) is given as

H(Y/X) = Σ (i=1 to m) Σ (j=1 to n) P(Xi, Yj) log2 [1/P(Yj/Xi)]    ...(7)

With this result, equation (6) becomes

I(Y;X) = Σ (j=1 to n) Σ (i=1 to m) P(Xi, Yj) log2 [1/P(Yj)] − H(Y/X)    ...(8)

By using the standard probability equation

Σ (i=1 to m) P(Xi, Yj) = P(Yj)    ...(9)

we get

I(Y;X) = Σ (j=1 to n) P(Yj) log2 [1/P(Yj)] − H(Y/X)

We know that

H(Y) = Σ (j=1 to n) P(Yj) log2 [1/P(Yj)]

Hence the first term of the above equation represents H(Y), and the equation becomes

I(Y;X) = H(Y) − H(Y/X)    ...(10)

Property 3

I(X;Y) ≥ 0

Proof

The average mutual information can be written as

−I(X;Y) = Σ (i=1 to m) Σ (j=1 to n) P(Xi, Yj) log2 [P(Xi) / P(Xi/Yj)]    ...(1)

Since P(Xi, Yj) = P(Xi/Yj) P(Yj), we have

P(Xi) / P(Xi/Yj) = P(Xi) P(Yj) / P(Xi, Yj)

and, using log2 p = ln p / ln 2, equation (1) can be written as

−I(X;Y) = (1/ln 2) Σ (i=1 to m) Σ (j=1 to n) P(Xi, Yj) ln [P(Xi) P(Yj) / P(Xi, Yj)]    ...(2)

Also we know that

ln α ≤ α − 1

Therefore we have

−I(X;Y) ≤ (1/ln 2) Σ (i=1 to m) Σ (j=1 to n) P(Xi, Yj) [P(Xi) P(Yj) / P(Xi, Yj) − 1]

−I(X;Y) ≤ (1/ln 2) [Σ (i=1 to m) Σ (j=1 to n) P(Xi) P(Yj) − Σ (i=1 to m) Σ (j=1 to n) P(Xi, Yj)]    ...(3)

Since

Σ (i=1 to m) Σ (j=1 to n) P(Xi) P(Yj) = [Σ (i=1 to m) P(Xi)] [Σ (j=1 to n) P(Yj)] = (1)(1) = 1

and

Σ (i=1 to m) Σ (j=1 to n) P(Xi, Yj) = Σ (i=1 to m) P(Xi) = 1

equation (3) gives

−I(X;Y) ≤ 0

I(X;Y) ≥ 0

Hence proved.

Property 4

I(X;Y) = H(X) + H(Y) − H(X,Y)

Proof

We know the relation

H(X/Y) = H(X,Y) − H(Y)    ...(1)

The mutual information is given by

I(X;Y) = H(X) − H(X/Y)    ...(2)

Substituting equation (1) in (2),

I(X;Y) = H(X) − H(X,Y) + H(Y)

I(X;Y) = H(X) + H(Y) − H(X,Y)

Thus the required relation is proved.
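The four properties can be verified numerically on a small example, e.g. equiprobable binary input through a binary symmetric channel with crossover probability 0.1. The joint distribution below is our own illustrative choice:

```python
import math

def H(probs):
    """Entropy in bits of a distribution given as a flat list of probabilities."""
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

# Joint P(x, y): equiprobable binary input, BSC with error probability 0.1
joint = [[0.45, 0.05],
         [0.05, 0.45]]
px = [sum(row) for row in joint]            # marginal P(x)
py = [sum(col) for col in zip(*joint)]      # marginal P(y)

Hx  = H(px)
Hy  = H(py)
Hxy = H([p for row in joint for p in row])  # joint entropy H(X,Y)
I   = Hx + Hy - Hxy                         # Property 4

print(I)   # ~0.531 bits/symbol, and I >= 0 as Property 3 requires
```

Since H(X,Y) = H(X) + H(Y/X) here, I reduces exactly to 1 − H(0.1 binary entropy), the familiar BSC result.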

Problem 1

Verify the expression H(X,Y) = H(X/Y) + H(Y).

Solution

We know that

P(Xi, Yj) = P(Xi/Yj) P(Yj) and Σ (i=1 to m) P(Xi, Yj) = P(Yj)

Now

H(X,Y) = −Σ (j=1 to n) Σ (i=1 to m) P(Xi, Yj) log P(Xi, Yj)

       = −Σ (j=1 to n) Σ (i=1 to m) P(Xi, Yj) log [P(Xi/Yj) P(Yj)]

       = −Σ (j=1 to n) Σ (i=1 to m) P(Xi, Yj) log P(Xi/Yj)
         − Σ (j=1 to n) Σ (i=1 to m) P(Xi, Yj) log P(Yj)

       = H(X/Y) − Σ (j=1 to n) P(Yj) log P(Yj)

       = H(X/Y) + H(Y)

Hence the expression is verified.

• The average mutual information I(X;Y) gives the amount of information per symbol transmitted in the system.

• A suitable measure for the efficiency of transmission of information may be introduced by comparing the actual rate and the upper bound of the rate of information transmission for a given channel.

• Shannon has introduced a significant concept of channel capacity, defined as the maximum of the mutual information.

• Thus, the channel capacity C is given by

C = Max I(X;Y) = Max [H(X) − H(X/Y)]    ...(1)

Sometimes the unit of I(X;Y) and C is taken as bits/sec.

• The transmission efficiency or channel efficiency is defined as

η = actual transinformation / maximum transinformation

(or) η = I(X;Y) / Max I(X;Y) = I(X;Y) / C    ...(2)

• The redundancy of the channel is

R = 1 − η = [C − I(X;Y)] / C    ...(3)

4.9 MAXIMUM ENTROPY FOR CONTINUOUS CHANNEL OR GAUSSIAN CHANNEL

• The probability density function of a Gaussian source is given as

P(x) = [1/(σ√(2π))] e^(−x²/2σ²)

where σ² = average power of the source.

The maximum entropy is computed as follows:

h(x) = ∫ (−∞ to ∞) P(x) log2 [1/P(x)] dx

     = −∫ (−∞ to ∞) P(x) log2 P(x) dx

     = −∫ (−∞ to ∞) P(x) log2 {[1/(σ√(2π))] e^(−x²/2σ²)} dx

Using log2 (AB) = log2 A + log2 B, and log2 e^(−x²/2σ²) = −(x²/2σ²) log2 e [since n log m = log m^n],

     = ∫ (−∞ to ∞) P(x) [log2 (σ√(2π)) + (x²/2σ²) log2 e] dx

     = (1/2) log2 (2πσ²) ∫ (−∞ to ∞) P(x) dx + (log2 e / 2σ²) ∫ (−∞ to ∞) x² P(x) dx

Now

∫ (−∞ to ∞) P(x) dx = 1, from the properties of a pdf

∫ (−∞ to ∞) x² P(x) dx = σ², from the definition of variance

Therefore

h(x) = (1/2) log2 (2πσ²) + (1/2) log2 e

h(x) = (1/2) log2 (2πeσ²)
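The closed-form result h(x) = (1/2) log2(2πeσ²) can be sanity-checked against direct numerical integration of −∫ P(x) log2 P(x) dx. A rough midpoint Riemann-sum sketch (σ = 2 is an arbitrary choice):

```python
import math

sigma = 2.0
closed_form = 0.5 * math.log2(2 * math.pi * math.e * sigma ** 2)

# Midpoint Riemann sum of -P(x) log2 P(x) over [-10*sigma, 10*sigma]
dx, lim = 1e-3, 10 * sigma
n = int(2 * lim / dx)
total = 0.0
for i in range(n):
    x = -lim + (i + 0.5) * dx
    p = math.exp(-x * x / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))
    total += -p * math.log2(p) * dx

print(closed_form, total)   # both ~3.047 bits
```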

• The average information transmitted per unit time is called the information rate.

• Shannon's theorem says that it is possible to transmit information with an arbitrarily small probability of error provided that the information rate R is less than or equal to a rate C, called the channel capacity.

• Thus the channel capacity is the maximum information rate at which the error probability is within tolerable limits.

Statement

• There exists a coding technique such that the output of the source may be transmitted over the channel with a probability of error in the received message which may be made arbitrarily small.

Explanation

• This theorem says that if R ≤ C, it is possible to transmit information without any error even if noise is present.

Negative statement of channel coding theorem

• Given a source of M equally likely messages, with M >> 1, which is generating information at a rate R, then if R > C, the probability of error is close to unity for every possible set of M transmitter signals.

• Hence, the negative statement of Shannon's theorem says that if R > C, then every message will be in error.

(or) Information Capacity Theorem

• Shannon's channel capacity theorem applied to a channel in which the noise is Gaussian is known as the Shannon–Hartley theorem.

• It is also called the information capacity theorem.

Statement of theorem

• The channel capacity of a white, band limited Gaussian channel is

C = B log2 (1 + S/N) bits/sec

where

B → the channel bandwidth,
S → the signal power,
N → the total noise power within the channel bandwidth.

The noise power is obtained by integrating the power spectral density over the bandwidth:

Power (P) = ∫ (−B to B) (power spectral density) df

Here B is the bandwidth and the power spectral density of white noise is No/2; hence the noise power N becomes

N = ∫ (−B to B) (No/2) df

Noise power N = No B

[Figure: source X and destination Y connected by a channel with additive noise N; Y = X + N]

Consider a source x and receiver y. As x and y are dependent,

H[x, y] = H[y] + H[x/y]    ...(2)

The noise added to the system is Gaussian in nature. As the source is independent of the noise,

H[x, y] = H[x] + H[N]    ...(3)

As y depends on x and N, y = f(x, N) and y = x + N. Therefore

H[x, y] = H[x, N]    ...(4)

Combining equations (2), (3) and (4),

H[y] + H[x/y] = H[x] + H[N]

H[x] − H[x/y] = H[y] − H[N]    ...(5)

Since H[x] − H[x/y] = I[x; y], equation (5) becomes

I[x; y] = H[y] − H[N]

Channel capacity ⇒ C = max I[x; y] = max H(y) − max H(N)    ...(6)

As the noise is Gaussian,

max H(N) = H(N) = (1/2) log2 (2πe σN²)    ...(7)

where σN² = N = noise power.

max H(y) = (1/2) log2 (2πe σy²)

where σy² = power at the receiver = S + N = signal power + noise power.

Therefore, per sample,

C = (1/2) log2 (2πe σy²) − (1/2) log2 (2πe σN²)

  = (1/2) log2 [2πe (S + N) / (2πe N)]

  = (1/2) log2 (1 + S/N)

With 2B samples per second (the Nyquist rate for bandwidth B),

C = 2B × (1/2) log2 (1 + S/N)

C = B log2 (1 + S/N) bits/sec

where B is the channel bandwidth. We know from the power spectral density of the noise that N = No B, so

C = B log2 [1 + S/(No B)] bits/sec

4.10.2 Tradeoff Between Bandwidth and Signal to Noise Ratio

• Channel capacity of the Gaussian channel is given as,

S

C = B log 2 1 +

N

Above equation shows that the channel capacity depends on two factors.

i. Band width(B) of the channel

ii. Signal to Noise ratio(S/N)

Noiseless channel has infinite capacity

If there is no noise in the channel, then N = 0 and hence S/N = ∞. Such a channel is called a noiseless channel. The capacity of such a channel will be

C = B log2(1 + ∞) = ∞


• Now if the bandwidth B is infinite, the channel capacity remains limited. This is because, as the bandwidth increases, the noise power N = No B also increases, so the signal-to-noise ratio S/N decreases. Hence even if B approaches infinity, the capacity does not approach infinity. As B → ∞, the capacity approaches an upper limit, given as

C∞ = lim (B → ∞) C = 1.44 S/No
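This saturation can be checked numerically. The sketch below holds S and No fixed (hypothetical values) and lets B grow: C rises but never exceeds the limit 1.44 S/No = (1/ln 2)(S/No).

```python
import math

S, No = 1e-3, 1e-9                      # hypothetical signal power (W) and noise PSD (W/Hz)
C_inf = (S / No) / math.log(2)          # upper limit, 1.44 * S/No

# Capacity for increasing bandwidths: it grows, but saturates below C_inf
caps = [B * math.log2(1 + S / (No * B)) for B in (1e6, 1e8, 1e10)]
```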

Problem 1

The data is to be transmitted at the rate of 10000 bits/sec over a

channel having band width B=3000 Hz .Determine the signal to noise

ratio required. If the band width is increased to 10000 Hz ,then determine

the signal to noise ratio.

Solution

The data is to be transmitted at the rate of 10,000 bits /sec.Hence

channel capacity must be atleast 10000 bits/sec for error-free

transmission.

Channel capacity(C) =10000 bits/sec

Band width(B) =3000Hz

The channel capacity of a Gaussian channel is

C = B log2(1 + S/N)

Putting the values,

10000 = 3000 log2(1 + S/N)

∴ S/N = 2^(10/3) − 1 ≈ 9

If the bandwidth is increased to 10000 Hz, then

10000 = 10000 log2(1 + S/N)

∴ S/N = 1

Here, for B = 3000 Hz, S/N = 9; and for B = 10000 Hz, S/N = 1. Thus, when the bandwidth is increased from 3000 Hz to 10000 Hz, the required signal to noise ratio is reduced by nine times.

• This means the required signal power is reduced when the bandwidth is increased.
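Both cases of this problem amount to inverting the capacity formula for S/N, which can be sketched in a couple of lines:

```python
def required_snr(R, B):
    """Minimum S/N for error-free transmission at rate R over bandwidth B:
    solve R = B*log2(1 + S/N) for S/N."""
    return 2 ** (R / B) - 1

snr_3k = required_snr(10000, 3000)     # B = 3000 Hz  -> S/N of about 9
snr_10k = required_snr(10000, 10000)   # B = 10000 Hz -> S/N = 1
```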

Problem 2

The channel capacity is given by

C = B log2(1 + S/N) bits/sec          ... (1)

In the above equation, when the signal power is fixed and white Gaussian noise is present, the channel capacity approaches an upper limit with increase in bandwidth B. Prove that this upper limit is given as

C∞ = lim (B → ∞) C = 1.44 S/No = (1/ln 2)(S/No)

This is the expression for the limiting capacity of the channel.

Solution

We know that the noise power is given as N = No B. Putting this value in equation (1), we get

C = B log2(1 + S/(No B))

Multiplying and dividing by S/No,

C = (S/No) · (No B/S) · log2(1 + S/(No B))
  = (S/No) log2 [1 + S/(No B)]^(No B/S)

C∞ = lim (B → ∞) C = (S/No) · lim (B → ∞) log2 [1 + S/(No B)]^(No B/S)

In the above equation, put x = S/(No B). Then as B → ∞, x → 0, i.e.,

C∞ = (S/No) · lim (x → 0) log2 (1 + x)^(1/x)

Here let us use the standard relation lim (x → 0) (1 + x)^(1/x) = e. Then the above equation becomes

C∞ = (S/No) log2 e = (S/No) · (log10 e / log10 2)

C∞ = 1.44 S/No

This is the expression for the limiting capacity as the bandwidth B approaches infinity.


Problem 3

A black and white TV picture consists of about 2 × 10⁶ picture elements with 16 different brightness levels, with equal probabilities. If pictures are repeated at the rate of 32 per second, calculate the average rate of information conveyed by this TV picture source. If the SNR is 30 dB, what is the minimum bandwidth required to support the transmission of the resultant video signal?

Solution

Given
Picture elements = 2 × 10⁶
Source levels (symbols) = 16, i.e., M = 16
Picture repetition rate = 32/sec
(S/N) in dB = 30 dB

(i) The source symbol entropy(H)

Source emits any one of the 16 brightness levels. Here M=16. These

levels are equiprobable. Hence entropy of such source is given by,

H=log2M

=log216

=4 bits/symbol(level)

(ii)Symbol rate(r)

Each picture consists of 2 × 10⁶ picture elements, and 32 such pictures are transmitted per second. Hence the number of picture elements per second will be

r = 2 × 10⁶ × 32 = 64 × 10⁶ symbols/sec

(iii) Information rate (R)

The information rate of the source is given by

R = r × H = 64 × 10⁶ × 4 = 2.56 × 10⁸ bits/sec

This is the average rate of information conveyed by the TV picture source.

(iv) Minimum bandwidth

Given (S/N) in dB = 30 dB


We know that (S/N) dB = 10 log10 (S/N)

∴ 30 = 10 log10 (S/N)

∴ S/N = 1000

Channel coding theorem states that information can be received

without error if,

R≤C

Here R = 2.56 × 10⁸ bits/sec and C = B log2(1 + S/N). Hence

2.56 × 10⁸ ≤ B log2(1 + S/N)

i.e., 2.56 × 10⁸ ≤ B log2(1 + 1000)

(or) B ≥ 2.56 × 10⁸ / log2(1001), i.e., B ≥ 25.68 MHz

Therefore, the transmission channel must have a bandwidth of at least 25.68 MHz to transmit the resultant video signal.
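The chain of steps in this problem (entropy, symbol rate, information rate, minimum bandwidth) can be reproduced in a few lines:

```python
import math

M = 16                          # equiprobable brightness levels
H = math.log2(M)                # entropy = 4 bits/symbol
r = 2e6 * 32                    # picture elements transmitted per second
R = r * H                       # information rate = 2.56e8 bits/sec
snr = 10 ** (30 / 10)           # 30 dB -> S/N = 1000
B_min = R / math.log2(1 + snr)  # minimum bandwidth from R <= B*log2(1 + S/N)
```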

Problem 4

A voice grade telephone channel has a bandwidth of 3400 Hz. If the signal to noise ratio (SNR) on the channel is 30 dB, determine the capacity of the channel. If the above channel is to be used to transmit 4.8 kbps of data, determine the minimum SNR required on the channel.

Solution:

Given data: Channel bandwidth B = 3400 Hz, (S/N) in dB = 30 dB

We know that (S/N) dB = 10 log10 (S/N)

∴ 30 = 10 log10 (S/N)

log10 (S/N) = 3

∴ S/N = 1000


The capacity of the channel is given as

C = B log2(1 + S/N)
  = 3400 log2(1 + 1000)
  = 33.888 kbits/sec

The channel is to be used to transmit data at the rate of 4.8 kbps. From the channel coding theorem,

R≤C

Here R = 4.8 kbps and C = B log2(1 + S/N). Hence the above equation becomes

4.8 kbps ≤ B log2(1 + S/N)

i.e., 4800 ≤ 3400 log2(1 + S/N)

i.e., log2(1 + S/N) ≥ 1.41176

log10(1 + S/N) / log10 2 ≥ 1.41176

∴ S/N ≥ 1.66

This means (S/N)min = 1.66 to transmit data at the rate of 4.8 kbps.

Problem 5

For an AWGN channel with 4.0 kHz bandwidth, the noise spectral density No/2 is 1.0 picowatt/Hz and the signal power at the receiver is 0.1 mW. Determine the maximum capacity, as also the actual capacity, for the above AWGN channel.

Solution :

Given: B = 4000 Hz, S = 0.1 × 10⁻³ W, No/2 = 10⁻¹² W/Hz


C = B log2(1 + S/N) bits/sec

Noise power N = No B = 2 × 10⁻¹² × 4000 = 8 × 10⁻⁹ W

C = 4000 log2(1 + 0.1 × 10⁻³ / (8 × 10⁻⁹))
  = 4000 log2(12501)
  = 4000 × log10(12501) / log10 2

C = 54.44 kbits/sec

This is the actual capacity. The maximum (limiting) capacity is

C∞ = lim (B → ∞) C = 1.44 S/No

Here No/2 = 1 × 10⁻¹² W/Hz. Hence the above equation becomes

C∞ = 1.44 × 0.1 × 10⁻³ / (2 × 10⁻¹²) = 72 × 10⁶ bits/sec, or 72 Mbits/sec.

• When data is passed through the channel, errors are introduced in the data because the channel noise interferes with the signal. Hence the signal power is reduced and errors are introduced.


[Figure: error control coding system — message bits → channel encoder → modulator → channel (noise added) → demodulator → channel decoder → message bits]

• The channel encoder adds extra (redundant bits) to message bits.

The encoded signal is transmitted through the noisy channel.

• The channel decoder identifies the extra bits (or) redundant bits and

using that the decoder will detect and correct the errors presence in

the received message bits if any

• The data rate is increased due to the extra redundant bits. The system becomes slightly complex because of the coding techniques.

i. Code word: The encoded block of ‘n’ bits is called a code word. It

consists of message bits and redundant bits.

ii. Block length: The number of bits ‘n’ after coding is called as block

length of the code.

iii. Code rate: The ratio of the number of message bits (k) to the number of encoded output bits (n) is called the code rate (r):

r = k/n

iv. Channel data rate: It is the bit rate at the output of the encoder. If the bit rate at the input of the encoder is Rs, then the channel data rate, i.e., the bit rate at the output of the encoder, will be

channel data rate (Ro) = (n/k) Rs
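For instance, the code rate and channel data rate definitions compute directly; the (7,4) code and the 4000 bps input rate below are illustrative values only.

```python
def code_rate(k, n):
    """Code rate r = k/n."""
    return k / n

def channel_data_rate(k, n, Rs):
    """Bit rate at the encoder output: Ro = (n/k)*Rs, since n bits leave for every k in."""
    return (n / k) * Rs

r = code_rate(4, 7)                  # (7,4) Hamming code -> 4/7
Ro = channel_data_rate(4, 7, 4000)   # 4000 bps in -> 7000 bps out
```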

v. Hamming distance: The Hamming distance between two code vectors is defined as the number of positions in which they differ.

For example, X = 110 and Y = 101. The two code vectors differ in the second and third bits. Hence the Hamming distance between X and Y is '2', (i.e.,) d(X, Y) = d = 2.

vi. Minimum distance (dmin): The minimum distance of linear block

code is defined as the smallest hamming distance between any pair

of code words. In the code (or) minimum distance is the same as the

smallest hamming weight of the difference between any pair of code

words.

The following list gives some of the requirements on the error capability of the code.

1. Detect up to 's' errors per word: dmin ≥ s + 1
2. Correct up to 't' errors per word: dmin ≥ 2t + 1
3. Correct up to 't' errors and detect s > t errors per word: dmin ≥ t + s + 1
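These definitions translate directly into code; the sketch below uses the example X = 110, Y = 101 from the text.

```python
def hamming_distance(x, y):
    """Number of bit positions in which two equal-length words differ."""
    return sum(a != b for a, b in zip(x, y))

d = hamming_distance("110", "101")   # differ in the 2nd and 3rd bits -> 2

def capability(dmin):
    """Detectable errors s (dmin >= s+1) and correctable errors t (dmin >= 2t+1)."""
    return dmin - 1, (dmin - 1) // 2

s, t = capability(3)                 # dmin = 3 -> detect 2, correct 1
```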

• In an (n, k) linear block code, 'k' message bits and (n − k) parity bits or check bits are transmitted.

• The below figure illustrates this concept

[Figure: a k-bit message block enters the channel encoder, which outputs an n-bit code word (k message bits plus (n − k) check bits)]

Figure 4.3 Linear Block codes

Systematic code: In a systematic block code, the message bits appear at the beginning of the code block output, followed by the parity/check bits, as shown in the figure. In a non-systematic code it is not possible to differentiate the message bits from the parity bits; they are mixed together.

Linear code: A code is said to be linear if the sum of any two code vectors produces another code vector.

• A code word consists of k message bits, which are denoted by m1, m2, ..., mk, and (n − k) parity bits (or check bits), denoted by c1, c2, ..., c(n−k).

• The sequence of message bits is applied to a linear block encoder to produce an n-bit code word. The elements of this code word are x1, x2, ..., xn.

• We can express this code word mathematically as

X = [M : C]                            ... (2)

where M = the k-bit message vector and C = the (n − k) (or q) parity vector, with q = n − k.

• A block code generator generates the parity vector (or parity bits) required to be added to the message bits to generate the code words. The code vector X can also be represented as

X = MG                                 ... (3)

where X = code vector of size 1 × n
      M = message vector of size 1 × k
      G = generator matrix of size k × n

Representation of the generator matrix

To generate the code vector X = MG, the generator matrix G is used. The generator matrix is generally represented as under:

[G] = [Ik : P]                         ... (5)

where Ik = k × k identity matrix
and P = k × (n − k) coefficient matrix, (or) P = k × q coefficient matrix, where q = n − k


Therefore,

Ik = | 1 0 ... 0 |
     | 0 1 ... 0 |
     | .  .    . |
     | 0 0 ... 1 |  (k × k)

and

P = | P11 P12 ... P1q |
    | P21 P22 ... P2q |
    |  .   .       .  |
    | Pk1 Pk2 ... Pkq |  (k × q)       ... (6)

The parity bits are obtained from

C = MP                                 ... (7)

Substituting the matrix form, we obtain

[c1, c2, ..., cq] (1 × q) = [m1, m2, ..., mk] (1 × k) × P (k × q)     ... (8)

Carrying out the multiplication, with modulo-2 addition,

c1 = m1P11 ⊕ m2P21 ⊕ m3P31 ⊕ ... ⊕ mkPk1
c2 = m1P12 ⊕ m2P22 ⊕ m3P32 ⊕ ... ⊕ mkPk2      ... (9)
c3 = m1P13 ⊕ m2P23 ⊕ m3P33 ⊕ ... ⊕ mkPk3

Similarly, we can obtain the expressions for the remaining parity bits.
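The matrix product C = MP over GF(2) (modulo-2 arithmetic) can be sketched as follows; the 3×3 coefficient matrix shown is just an example.

```python
def parity_bits(m, P):
    """C = M*P over GF(2): each XOR of equation (9) becomes a mod-2 sum."""
    q = len(P[0])
    return [sum(m[i] * P[i][j] for i in range(len(m))) % 2 for j in range(q)]

# Example 3x3 coefficient matrix (giving c1 = m2+m3, c2 = m1+m3, c3 = m1+m2)
P = [[0, 1, 1],
     [1, 0, 1],
     [1, 1, 0]]
c = parity_bits([0, 0, 1], P)   # -> [1, 1, 0]
```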

Hamming codes are linear block codes. The family of (n, k)

Hamming codes for q ≥ 3 is defined by the following expressions:


[Figure 4.4 Format of Hamming code — k = 2^q − q − 1 message bits followed by q = (n − k) parity bits]

1. Block length: n = 2^q − 1
2. Number of message bits: k = 2^q − q − 1 = n − q
3. Number of parity bits: (n − k) = q, where q ≥ 3 (i.e., the minimum number of parity bits is 3)
4. The minimum distance: dmin = 3
5. The code rate (efficiency): r = k/n = (2^q − q − 1)/(2^q − 1) = 1 − q/(2^q − 1)
   If q >> 1, then the code rate r ≈ 1

Error detection and correction capability of the Hamming code

Considering the minimum distance, we have dmin = 3.

1. The number of errors that can be detected per word = 2,
   since dmin ≥ (s + 1), ∴ 3 ≥ s + 1, ∴ s ≤ 2
2. The number of errors that can be corrected per word = 1,
   since dmin ≥ (2t + 1), ∴ 3 ≥ 2t + 1, ∴ t ≤ 1

Hence it is possible to correct up to only 1 error.
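The defining expressions n = 2^q − 1 and k = 2^q − q − 1 generate the whole Hamming family:

```python
def hamming_params(q):
    """(n, k, r) of the Hamming code with q >= 3 parity bits."""
    n = 2 ** q - 1
    k = n - q
    return n, k, k / n

params = [hamming_params(q) for q in (3, 4, 5)]
# q = 3 -> (7, 4); q = 4 -> (15, 11); q = 5 -> (31, 26); the rate r tends to 1 as q grows
```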

There is another way of expressing the relationship between the message bits and the parity check bits of a linear block code. Let H denote an (n − k) × n matrix, defined as under:

H = [P^T : I(n−k)]

where P^T is the transpose of the coefficient matrix P, obtained by interchanging the rows and columns of P:

P^T = | P11 P21 ... Pk1 |
      | P12 P22 ... Pk2 |
      |  .   .       .  |
      | P1q P2q ... Pkq |  (q × k)     ... (1)

Hence,

H = | P11 P21 ... Pk1 : 1 0 0 ... 0 |
    | P12 P22 ... Pk2 : 0 1 0 ... 0 |
    | P13 P23 ... Pk3 : 0 0 1 ... 0 |
    |  .   .       .  :  . .      . |
    | P1q P2q ... Pkq : 0 0 0 ... 1 |  ((n − k) × n)

Problem 1

The generator matrix for a (6,3) block code is given below. Find all

the code vectors of this code.

G = | 1 0 0 : 0 1 1 |
    | 0 1 0 : 1 0 1 |
    | 0 0 1 : 1 1 0 |

Solution

Comparing with the general pattern of (n, k) block codes, we have, in this case, n = 6 and k = 3. This means that the message block size k is 3 and the length of the code vector, i.e., n, is 6. For obtaining the code vectors, we shall follow the steps given below:

(i) First, we separate out the identity matrix I and the coefficient matrix P.

We know that the generator matrix is given by

G = [Ik : P]

Comparing this with the given generator matrix, we obtain


Ik = I(3×3) = | 1 0 0 |
              | 0 1 0 |
              | 0 0 1 |

and P(k×q) = P(3×3) = | 0 1 1 |
                      | 1 0 1 |
                      | 1 1 0 |

(ii) Since k = 3, there are 2³ = 8 possible message blocks:

(0,0,0), (0,0,1), (0,1,0), (0,1,1), (1,0,0), (1,0,1), (1,1,0), (1,1,1)

(iii) The parity bits are obtained by multiplying the message vectors M and the coefficient matrix P as under:

[c1, c2, c3] = [m1, m2, m3] × | 0 1 1 |
                              | 1 0 1 |
                              | 1 1 0 |     ... (1)

Now let us obtain the code words for all the message words.

The solution of equation (1) is given by

c1 = (m1 × 0 ) ⊕ (m2 × 1) ⊕ (m3 × 1) = m2 ⊕ m3 ... ( 2)

c 2 = (m1 × 1) ⊕ (m2 × 0 ) ⊕ (m3 × 1) = m1 ⊕ m3 ... ( 3 )

c 3 = (m1 × 1) ⊕ (m2 × 1) ⊕ (m3 × 0 ) = m1 ⊕ m2 ... ( 4 )

Using these equations, it is possible for us to obtain the parity bits c1, c2 and c3.

1. For the message word (m1,m2 , m3 ) = ( 0, 0, 0 ) we have

c1 = 0 ⊕ 0 = 0

c2 = 0 ⊕ 0 = 0

c3 = 0 ⊕ 0 = 0

∴ c1,c 2 ,c 3 = ( 0, 0, 0 )

The complete code word for this message word is given by

m1 m2 m3 c1 c2 c3

Code word = 0 0 0 0 0 0

message parity

2. For the second message vector (i.e.,) (m1,m2 , m3 ) = ( 0, 0,1) we have


c1 = 0 ⊕ 1 = 1

c2 = 0 ⊕1 = 1

c3 = 0 ⊕ 0 = 0

∴ c1,c 2 ,c 3 = (1,1, 0 )

The complete code for this message word is given by

m1 m2 m3 c1 c2 c3

Code word = 0 0 1 1 1 0

message parity

3. Similarly, we can obtain the code words for the remaining message

words. All these code words have been given in table below

S.No Message vectors Parity bits Complete code vector

m1 m2 m3 c1 c2 c3 m1 m2 m3 c1 c2 c3

1 0 0 0 0 0 0 0 0 0 0 0 0

2 0 0 1 1 1 0 0 0 1 1 1 0

3 0 1 0 1 0 1 0 1 0 1 0 1

4 0 1 1 0 1 1 0 1 1 0 1 1

5 1 0 0 0 1 1 1 0 0 0 1 1

6 1 0 1 1 0 1 1 0 1 1 0 1

7 1 1 0 1 1 0 1 1 0 1 1 0

8 1 1 1 0 0 0 1 1 1 0 0 0
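The whole table can be regenerated mechanically from X = MG over GF(2); a sketch using the generator matrix of this example:

```python
from itertools import product

# Generator matrix of the (6,3) code from this example
G = [[1, 0, 0, 0, 1, 1],
     [0, 1, 0, 1, 0, 1],
     [0, 0, 1, 1, 1, 0]]

def encode(m, G):
    """X = M*G over GF(2)."""
    return [sum(m[i] * G[i][j] for i in range(len(m))) % 2 for j in range(len(G[0]))]

codewords = [encode(list(m), G) for m in product([0, 1], repeat=3)]
```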


Problem 2

The parity check matrix of a particular (7,4) linear block code is

given by,

[H] = | 1 1 1 0 1 0 0 |
      | 1 1 0 1 0 1 0 |
      | 1 0 1 1 0 0 1 |

(i) Find the generator matrix
(ii) List all the code vectors
(iii) What is the minimum distance between the code vectors?
(iv) How many errors can be detected? How many errors can be corrected?

Solution

First, let us obtain the P^T matrix.

P^T is the transpose of the coefficient matrix P. The given parity check matrix H is an (n − k) × n matrix, (or) a q × n matrix, where q = n − k.

It is given that the code is a (7,4) Hamming code. Therefore, we have n = 7, k = 4 and q = 3.

[H] = [P^T : I(n−k)]

where P^T is an (n − k) × k matrix and I(n−k) is (n − k) × (n − k). We have

[H] (3×7) = | 1 1 1 0 : 1 0 0 |
            | 1 1 0 1 : 0 1 0 |
            | 1 0 1 1 : 0 0 1 |

∴ P^T = | 1 1 1 0 |
        | 1 1 0 1 |
        | 1 0 1 1 |  (3 × 4)


Hence we have

P = | 1 1 1 |
    | 1 1 0 |
    | 1 0 1 |
    | 0 1 1 |  (4 × 3)

The generator matrix G is a k × n matrix; here, it will be a 4 × 7 matrix. Thus, we have G = [Ik : P], where Ik is a k × k (i.e., 4 × 4) identity matrix. Substituting the 4 × 4 identity matrix and the coefficient matrix, we obtain

G = | 1 0 0 0 : 1 1 1 |
    | 0 1 0 0 : 1 1 0 |
    | 0 0 1 0 : 1 0 1 |
    | 0 0 0 1 : 0 1 1 |

This is the required generator matrix.

Now, let us obtain the parity bits for each message vector. The parity bits can be obtained by using the following expression:

C = MP

[c1, c2, c3] (1×3) = [m1, m2, m3, m4] (1×4) × P (4×3)

Solving, we obtain

c1 = (m1 × 1) ⊕ (m2 × 1) ⊕ (m3 × 1) ⊕ (m4 × 0) = m1 ⊕ m2 ⊕ m3
c2 = (m1 × 1) ⊕ (m2 × 1) ⊕ (m3 × 0) ⊕ (m4 × 1) = m1 ⊕ m2 ⊕ m4
c3 = (m1 × 1) ⊕ (m2 × 0) ⊕ (m3 × 1) ⊕ (m4 × 1) = m1 ⊕ m3 ⊕ m4

For example, for the message word (m1, m2, m3, m4) = (0, 1, 0, 1):

c1 = 0 ⊕ 1 ⊕ 0 = 1
c2 = 0 ⊕ 1 ⊕ 1 = 0
c3 = 0 ⊕ 0 ⊕ 1 = 1

Hence, the parity bits are c1c2c3 = 101.

Therefore, the complete code word for the message word 0101 is 0101 101 (message bits followed by parity bits).


Similarly, we can obtain the code words for the other message

vectors and the corresponding parity bits and code words are given in

table given below. The weight of the code word is also given.

Code words for the (7,4) Hamming code

S.No | m1 m2 m3 m4 | c1 c2 c3 | x1 x2 x3 x4 x5 x6 x7 | Weight of the code vector

1. 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

2. 0 0 0 1 0 1 1 0 0 0 1 0 1 1 3

3. 0 0 1 0 1 0 1 0 0 1 0 1 0 1 3

4. 0 0 1 1 1 1 0 0 0 1 1 1 1 0 4

5. 0 1 0 0 1 1 0 0 1 0 0 1 1 0 3

6. 0 1 0 1 1 0 1 0 1 0 1 1 0 1 4

7. 0 1 1 0 0 1 1 0 1 1 0 0 1 1 4

8. 0 1 1 1 0 0 0 0 1 1 1 0 0 0 3

9. 1 0 0 0 1 1 1 1 0 0 0 1 1 1 4

10. 1 0 0 1 1 0 0 1 0 0 1 1 0 0 3

11. 1 0 1 0 0 1 0 1 0 1 0 0 1 0 3

12. 1 0 1 1 0 0 1 1 0 1 1 0 0 1 4

13. 1 1 0 0 0 0 1 1 1 0 0 0 0 1 3

14. 1 1 0 1 0 1 0 1 1 0 1 0 1 0 4

15. 1 1 1 0 1 0 0 1 1 1 0 1 0 0 4

16. 1 1 1 1 1 1 1 1 1 1 1 1 1 1 7

The minimum distance dmin of a linear code equals the minimum weight of any non-zero code vector. Looking at the table above, we obtain

dmin = 3

Number of errors that can be detected is given by

dmin ≥ s + 1

3 ≥ s +1

or s≤2

Thus for the (7,4) linear block code, at the most two errors can be

detected and at the most only one error can be corrected
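Since the code is linear, dmin can be found as the minimum weight over all non-zero code words, as the table's weight column suggests. A sketch using the generator matrix obtained above:

```python
from itertools import product

# Generator matrix of the (7,4) Hamming code from this example
G = [[1, 0, 0, 0, 1, 1, 1],
     [0, 1, 0, 0, 1, 1, 0],
     [0, 0, 1, 0, 1, 0, 1],
     [0, 0, 0, 1, 0, 1, 1]]

def encode(m):
    return [sum(m[i] * G[i][j] for i in range(4)) % 2 for j in range(7)]

# dmin of a linear code = smallest weight of a non-zero codeword
weights = [sum(encode(list(m))) for m in product([0, 1], repeat=4) if any(m)]
d_min = min(weights)   # -> 3
```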


[Figure 4.5: Encoder for the (7,4) Hamming code — a message register holding (m4 m3 m2 m1), three modulo-2 adders generating the parity (check) bits (c3 c2 c1) into a parity bit register, and an output switch S selecting first the message bits and then the check bits to form the code word]

The encoder for (7,4) Hamming code is shown in figure 4.5. This

encoder produces all the code words corresponding to various message

words listed in table given above

The parity check or check bits (c 3 ,c 2 ,c1 ) are being generated for

each message word (m 4 ,m3 ,m2 ,m1 ) . The parity bits are obtained from

the message bits by means of modulo-2-adders. The output switch S is

first connected to the message register to transmit all the message bits

in a code word. Then it is connected to the parity bit register to transmit

the corresponding parity bits. Thus, we get a 7 bit code word at the out-

put of the switch.

1. Basic concept

The decoding of linear block codes is done by using a special technique called syndrome decoding, which reduces the memory requirement of the decoder. The two operations performed are: (i) error detection in the received code word, and (ii) error correction. The syndrome decoding technique is explained as follows.

2. Practical Assumptions

(i) Let X represent the transmitted code word and Y represent the received code word.

(ii) Then if X = Y, there are no errors in the received signal, and if X ≠ Y, then some errors are present.

3. Detection of Error

a. For an (n, k) linear block code, there exists a parity check matrix of size (n − k) × n. We know that the parity check matrix is

H = [P^T : I(n−k)]  ((n − k) × n)   (or)   H = [P^T : Iq]  (q × n)

b. The transpose of the parity check matrix is given by

H^T = | P      |           | P  |
      | I(n−k) |   (or)    | Iq |   (n × q)

c. The transpose of the parity check matrix (H^T) exhibits a very important property, namely

XH^T = (0, 0, ..., 0)

This means that the product of any code vector X and the transpose of the parity check matrix will always be 0. We shall use this property for the detection of errors in received code words as under. At the receiver:

If YH^T = (0, 0, ..., 0), then Y = X, i.e., there is no error.

But if YH^T ≠ (0, 0, ..., 0), then Y ≠ X, i.e., an error exists in the received code word.

4. Syndrome and its use for error detection

The syndrome is defined as the non-zero output of the product YH^T. Thus, a non-zero syndrome represents the presence of errors in the received code word Y. The syndrome is represented by S and is mathematically given as

S = YH^T

or [S] (1 × (n−k)) = [Y] (1 × n) × [H^T] (n × (n−k))


Thus, when S = 0, there are two possibilities: either there is no error in the received signal (Y = X), or Y happens to be some other valid code word (other than X).

[Figure 4.6 Two different possibilities for S = 0]

Then, all-zero elements of the syndrome represent that there is no error, and a non-zero element in the syndrome represents the presence of errors. But sometimes, even if all the syndrome elements have zero value, an error exists. This has been shown in the figure.

5. Error Vector (E)

i. For the n-bit transmitted and received code words X and Y

respectively, let us define an n-bit error vector E such that its non-

zero elements represents the locations of errors in the received code

vector Y as shown in figure 4.7

ii. The marked entries in the figure indicate the presence of errors.

Transmitted code vector, X:  0 0 1 1 1 1 0
Received code vector,    Y:  1 0 0 1 0 1 0
Error vector,            E:  1 0 1 0 1 0 0

Figure 4.7 The non-zero elements of E mark the locations of errors in the received code vector Y

The elements in the code word vector Y can be obtained by using

the modulo 2 additions, as under:

Y = X ⊕E

From figure 4.7 we can write that,

Y = [0 ⊕ 1, 0 ⊕ 0,1 ⊕ 1,1 ⊕ 0,1 ⊕ 1,1 ⊕ 0, 0 ⊕ 0]

or Y= [1, 0, 0,1, 0,1, 0]

iii. The principle of modulo-2 addition can be applied in a slightly different way as under:

X = Y ⊕ E

From figure 4.7 we can write that


X = [1 ⊕ 1, 0 ⊕ 0, 0 ⊕ 1,1 ⊕ 0, 0 ⊕ 1,1 ⊕ 0, 0 ⊕ 0]

X = [0, 0,1,1,1,1, 0]

iv. Relation between the syndrome and the error vector:

S = YH^T

We know that Y = X ⊕ E. Then

S = [X ⊕ E] H^T = XH^T ⊕ EH^T

But XH^T = 0. Therefore, S = 0 ⊕ EH^T,

or S = EH^T

This is the relation between the syndrome S and the error vector.

Problem 1

For a code vector and the parity check matrix H given below,

prove that

XH T = ( 0, 0....0 )

H = | 1 1 1 0 1 0 0 |
    | 1 1 0 1 0 1 1 |
    | 1 0 1 1 0 0 1 |  (3 × 7)

Solution

The transpose of the given parity check matrix H can be obtained by interchanging the rows and columns as under:

H^T = | 1 1 1 |
      | 1 1 0 |
      | 1 0 1 |
      | 0 1 1 |
      | 1 0 0 |
      | 0 1 0 |
      | 0 1 1 |  (7 × 3)

Also, the product XH^T is given by

XH^T = (0 1 1 1 0 0 0) (1×7) × H^T (7×3)

= [(0×1)⊕(1×1)⊕(1×1)⊕(1×0)⊕(0×1)⊕(0×0)⊕(0×0),
   (0×1)⊕(1×1)⊕(1×0)⊕(1×1)⊕(0×0)⊕(0×1)⊕(0×1),
   (0×1)⊕(1×0)⊕(1×1)⊕(1×1)⊕(0×0)⊕(0×0)⊕(0×1)]

or XH^T = [0⊕1⊕1⊕0⊕0⊕0⊕0, 0⊕1⊕0⊕1⊕0⊕0⊕0, 0⊕0⊕1⊕1⊕0⊕0⊕0]

or XH^T = [0, 0, 0] (1×3)

Thus, for the given parity check matrix, it is proved that for a valid code word the product XH^T = (0, 0, 0).

Problem 2

The parity check matrix of a (7,4) Hamming code is as under:

H = | 1 1 0 1 1 0 0 |
    | 1 1 1 0 0 1 1 |
    | 1 0 1 1 0 0 1 |

Calculate the syndrome vector for single bit errors.

Solution

We know that syndrome vector is given by

S = EH T = [E ]1×7 H T

7×3

Therefore, syndrome vector will be represented by a 1 × 3 matrix

Thus, S1×3 = [E ]1×7 H

T

7×3

Now let us write the various error vectors. Various error vectors with single-bit errors are shown in the table below; the single 1 marks the location of the error.

S.No | Error vector  | Position of error
1.   | 1 0 0 0 0 0 0 | First
2.   | 0 1 0 0 0 0 0 | Second
3.   | 0 0 1 0 0 0 0 | Third
4.   | 0 0 0 1 0 0 0 | Fourth
5.   | 0 0 0 0 1 0 0 | Fifth
6.   | 0 0 0 0 0 1 0 | Sixth
7.   | 0 0 0 0 0 0 1 | Seventh


(i) We have [S] (1×3) = [E] (1×7) × [H^T] (7×3). For the first bit in error,

[S] = [1 0 0 0 0 0 0] × H^T

Therefore, [S] = [1⊕0⊕0⊕0⊕0⊕0⊕0, 1⊕0⊕0⊕0⊕0⊕0⊕0, 1⊕0⊕0⊕0⊕0⊕0⊕0]

Here, [S] = [1, 1, 1]

This is the syndrome for the first bit in error.

(ii) For the second bit in error, we have

[S] = [0 1 0 0 0 0 0] × H^T

Therefore, [S] = [0⊕1⊕0⊕0⊕0⊕0⊕0, 0⊕1⊕0⊕0⊕0⊕0⊕0, 0⊕0⊕0⊕0⊕0⊕0⊕0]

Here, [S] = [1, 1, 0]

(iii) Similarly, the syndromes for the remaining single-bit error vectors can be obtained; they are listed in the table below.

(iv) Here, it may be noted that the first row of the table represents an error vector with no errors. The corresponding syndrome is (0, 0, 0).

(v) The table below shows the syndrome vectors for the various error vectors.


S.No | Error vectors with single-bit errors | Syndrome vector "S"
1.   | 0 0 0 0 0 0 0 | 0 0 0
2.   | 1 0 0 0 0 0 0 | 1 1 1 ← 1st row of H^T
3.   | 0 1 0 0 0 0 0 | 1 1 0 ← 2nd row of H^T
4.   | 0 0 1 0 0 0 0 | 1 0 1 ← 3rd row of H^T
5.   | 0 0 0 1 0 0 0 | 0 1 1 ← 4th row of H^T
6.   | 0 0 0 0 1 0 0 | 1 0 0 ← 5th row of H^T
7.   | 0 0 0 0 0 1 0 | 0 1 0 ← 6th row of H^T
8.   | 0 0 0 0 0 0 1 | 0 1 1 ← 7th row of H^T
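Such a syndrome table can be built programmatically. The sketch below uses the systematic (7,4) matrix H = [P^T : I3] from the earlier worked example (P^T rows 1110, 1101, 1011) rather than the matrix of this problem; with that choice, every single-bit error produces a distinct syndrome equal to one row of H^T.

```python
HT = [[1, 1, 1],   # rows of H^T = columns of the systematic H = [P^T : I3]
      [1, 1, 0],
      [1, 0, 1],
      [0, 1, 1],
      [1, 0, 0],
      [0, 1, 0],
      [0, 0, 1]]

def syndrome(e):
    """S = E*H^T over GF(2)."""
    return [sum(e[i] * HT[i][j] for i in range(7)) % 2 for j in range(3)]

# Map each single-bit error pattern to its error position
table = {}
for pos in range(7):
    e = [0] * 7
    e[pos] = 1
    table[tuple(syndrome(e))] = pos   # syndrome equals row `pos` of H^T
```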

Let's see how single-bit errors can be corrected using syndrome decoding.

Error correction using the syndrome vector

(i) For a transmitted code vector X = (0 1 0 0 1 1 0), we obtain the received code vector Y = (0 1 1 0 1 1 0); let there be an error in the 3rd position.
(ii) We calculate the corresponding syndrome vector S = YH^T.
(iii) From the syndrome vector, we obtain the error vector.
(iv) From the error vector, we obtain the transmitted (or correct) vector as under:

X = Y ⊕ E


Problem 2

To make the above concept of error correction using the syndrome vector clear, let us consider one particular example. For this, let us use the following parity check matrix:

H = | 1 1 1 0 1 0 0 |
    | 1 1 0 1 0 1 1 |
    | 1 0 1 1 0 1 1 |  (3 × 7)

Solution

(i) First, we obtain the received code vector Y.

Assuming X = (0 1 0 0 1 1 0) to be the transmitted code vector, let the received code vector be obtained by assuming that the third bit is in error:

Y = (0 1 1 0 1 1 0)

Here, the third bit represents the error.

(ii) Next, let us determine the corresponding syndrome vector. We know that

Syndrome S = YH^T

S = [0 1 1 0 1 1 0] × | 1 1 1 |
                      | 0 1 1 |
                      | 1 0 1 |
                      | 1 1 0 |
                      | 0 0 1 |
                      | 0 1 0 |
                      | 1 1 0 |

We have S = [0⊕0⊕1⊕0⊕0⊕0⊕0, 0⊕1⊕0⊕0⊕0⊕1⊕0, 0⊕1⊕1⊕0⊕1⊕0⊕0]

or S = [1, 0, 1]

This is the syndrome vector for the given received signal. It corresponds to the 3rd row of the transpose matrix H^T.

(iii) But S = YH^T = EH^T. Therefore, EH^T = [1, 0, 1].

(iv) Let us obtain the error vector for the syndrome vector S = [1, 0, 1]. From the table, the error vector corresponding to the syndrome (1, 0, 1) is given by

E = (0 0 1 0 0 0 0)


This shows that the error is present in the third position of the

received code vector Y.

(v) To obtain the correct vector, the vector X can be obtained as under:

X = Y ⊕ E

Substituting the values of Y and E, we obtain

X = [0 1 1 0 1 1 0] ⊕ [0 0 1 0 0 0 0]

or X = [0 1 0 0 1 1 0]
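The complete receive-side procedure (compute S, look up E, form X = Y ⊕ E) can be sketched as below. Note that this sketch uses the systematic (7,4) matrix H^T = [P ; I3] from the earlier example rather than the exact matrix of this problem, but the same transmitted word X = (0 1 0 0 1 1 0) and received word Y with an error in the 3rd bit.

```python
HT = [[1, 1, 1], [1, 1, 0], [1, 0, 1], [0, 1, 1],
      [1, 0, 0], [0, 1, 0], [0, 0, 1]]   # systematic (7,4) H^T = [P ; I3]

def syndrome(y):
    return tuple(sum(y[i] * HT[i][j] for i in range(7)) % 2 for j in range(3))

def correct(y):
    """Single-error correction: X = Y xor E, with E located via the syndrome."""
    s = syndrome(y)
    if s == (0, 0, 0):
        return y[:]                              # no detectable error
    pos = [tuple(row) for row in HT].index(s)    # row of H^T matching S
    x = y[:]
    x[pos] ^= 1                                  # flip the erroneous bit
    return x

Y = [0, 1, 1, 0, 1, 1, 0]   # received word, error in the 3rd bit
X = correct(Y)              # -> [0, 1, 0, 0, 1, 1, 0]
```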

4.14.2 Syndrome Vector for (n,k) block codes

1. Block Diagram

The Block diagram of a syndrome decoder for (n, k) block codes

for correcting errors is shown in figure 4.8

[Figure 4.8: Syndrome decoder for (n, k) block codes for correcting errors — the received code word Y feeds a syndrome calculator (S = YH^T); the syndrome S indexes a look-up table of error patterns E, and the selected E is modulo-2 added to Y to give the corrected code vector X = Y ⊕ E]


2. Working Operation

The received n-bit code word Y is stored in an n-bit register.

This code vector is then applied to a syndrome calculator to calculate

syndrome S = YHT . In order to obtain the syndrome, the transposed

parity check matrix, HT is stored in the syndrome calculator. The (n-k)

bit syndrome vector S is applied to the look-up table containing the error patterns. An error pattern is selected corresponding to the particular syndrome S generated at the output of the syndrome calculator. The selected error pattern E is then added (modulo-2 addition) to the received signal Y to generate the corrected code vector X.

Therefore, X = Y ⊕ E

Problem 1

An error control code has the following parity check matrix

H = | 1 0 1 1 0 0 |
    | 1 1 0 0 1 0 |
    | 0 1 1 0 0 1 |

(i) Find the generator matrix
(ii) Find the code word that begins with 1 0 1
(iii) Decode the received code word 1 1 0 1 1 0. Comment on the error detection capability of this code.

Solution

From the parity check matrix [H] (3×6), it is obvious that this is a (6,3) linear block code. Therefore, n = 6, k = 3 and (n − k) = q = 3.

(i) We know that the parity check matrix is given by

[H] (q×n) = [P^T : Iq], i.e., [H] (3×6) = [P^T : I3]

H = | 1 0 1 : 1 0 0 |
    | 1 1 0 : 0 1 0 |
    | 0 1 1 : 0 0 1 |

or P^T = | 1 0 1 |
         | 1 1 0 |
         | 0 1 1 |  (3 × 3)

Hence

P = | 1 1 0 |
    | 0 1 1 |
    | 1 0 1 |

We know that the generator matrix is given by

G = [Ik : P (k×q)] = | 1 0 0 : 1 1 0 |
                     | 0 1 0 : 0 1 1 |
                     | 0 0 1 : 1 0 1 |

This is the required generator matrix.

(ii) The message vector is given by M = [1 0 1]. The three parity bits are obtained by using the standard expression C = MP:

[c1 c2 c3] = [m1 m2 m3] × | 1 1 0 |
                          | 0 1 1 |
                          | 1 0 1 |  (3 × 3)

or c1 = (m1 × 1) ⊕ (m2 × 0) ⊕ (m3 × 1) = m1 ⊕ m3

Substituting m1 = 1 and m3 = 1, we obtain c1 = 1 ⊕ 1 = 0.

Similarly, c2 = (m1 × 1) ⊕ (m2 × 1) ⊕ (m3 × 0) = m1 ⊕ m2

Substituting m1 = 1 and m2 = 0, we obtain c2 = 1 ⊕ 0 = 1.

And c3 = (m1 × 0) ⊕ (m2 × 1) ⊕ (m3 × 1) = m2 ⊕ m3, or c3 = 0 ⊕ 1 = 1.

Therefore the parity word is C = [0, 1, 1]. Hence, the complete code word is given by

X = 1 0 1 0 1 1 (message bits 101 followed by parity bits 011)

(iii) The received code word is Y = 1 1 0 1 1 0. Therefore, the syndrome is given by S = YH^T. Substituting for Y and H^T, we obtain

S = [1 1 0 1 1 0] × | 1 1 0 |
                    | 0 1 1 |
                    | 1 0 1 |
                    | 1 0 0 |
                    | 0 1 0 |
                    | 0 0 1 |

or S = [1⊕0⊕0⊕1⊕0⊕0, 1⊕1⊕0⊕0⊕1⊕0, 0⊕1⊕0⊕0⊕0⊕0]

or S = [0, 1, 1]

This is the same as the second row of the transpose matrix H^T, which indicates that there is an error in the second bit of the received signal. The correct code word is obtained by replacing the second bit by a 0, giving X = 1 0 0 1 1 0.

(iv) It is possible to verify that this code has minimum distance dmin = 3. The relation between dmin and the number of errors s that can be detected is

dmin ≥ s + 1

For dmin = 3, we have 3 ≥ s + 1, or s ≤ 2. This means that up to two errors can be detected. Also, dmin ≥ 2t + 1, or 3 ≥ 2t + 1, or t ≤ 1. This means that up to one error can be corrected.

Problem 2

Given a (7,4) Hamming code whose generator matrix is given by

G = | 1 0 0 0 1 0 1 |
    | 0 1 0 0 1 1 1 |
    | 0 0 1 0 1 1 0 |
    | 0 0 0 1 0 1 1 |

(i) Find all the code vectors of this code
(ii) Find the parity check matrix


Solution

(i) First, we obtain the P matrix from the generator matrix.

(ii) Then, we obtain the parity bits for each message vector using the

expression,

C = MP

(iii) Next, we obtain all the possible code words as

X = [M : C]

(iv) Lastly we obtain the transpose of P matrix (i.e.,) PT and obtain the

parity check matrix as : [H] = [PT | In-k]

Given the generator matrix

G = | 1 0 0 0 : 1 0 1 |
    | 0 1 0 0 : 1 1 1 |
    | 0 0 1 0 : 1 1 0 |
    | 0 0 0 1 : 0 1 1 |

Therefore, the P matrix is given by

P = | 1 0 1 |
    | 1 1 1 |
    | 1 1 0 |
    | 0 1 1 |  (4 × 3)

(ii) Next, we obtain the parity check bits. The parity bits can be obtained using the following expression:

C = MP

or [c1 c2 c3] = [m1 m2 m3 m4] × P (4×3)

Solving, we obtain

c1 = m1 ⊕ m2 ⊕ m3
c2 = m2 ⊕ m3 ⊕ m4
c3 = m1 ⊕ m2 ⊕ m4

Using these equations, we can obtain the parity bits for each message vector. For example, let the message word be m1m2m3m4 = 0101. Therefore, we write

c1 = 0 ⊕ 1 ⊕ 0 = 1
c2 = 1 ⊕ 0 ⊕ 1 = 0
c3 = 0 ⊕ 1 ⊕ 1 = 0

Hence, the corresponding parity bits are c1c2c3 = 100. Therefore, the complete code word for the message word 0101 is

0101 100 (message bits followed by parity bits)

Similarly, we can obtain the code words for the remaining message words. All the message vectors, the corresponding parity bits and code words are given in the table below.

S.No   m1 m2 m3 m4   c1 c2 c3   Codeword  X6 X5 X4 X3 X2 X1 X0
 1.    0  0  0  0    0  0  0              0  0  0  0  0  0  0
 2.    0  0  0  1    0  1  1              0  0  0  1  0  1  1
 3.    0  0  1  0    1  1  0              0  0  1  0  1  1  0
 4.    0  0  1  1    1  0  1              0  0  1  1  1  0  1
 5.    0  1  0  0    1  1  1              0  1  0  0  1  1  1
 6.    0  1  0  1    1  0  0              0  1  0  1  1  0  0
 7.    0  1  1  0    0  0  1              0  1  1  0  0  0  1
 8.    0  1  1  1    0  1  0              0  1  1  1  0  1  0
 9.    1  0  0  0    1  0  1              1  0  0  0  1  0  1
10.    1  0  0  1    1  1  0              1  0  0  1  1  1  0
11.    1  0  1  0    0  1  1              1  0  1  0  0  1  1
12.    1  0  1  1    0  0  0              1  0  1  1  0  0  0
13.    1  1  0  0    0  1  0              1  1  0  0  0  1  0
14.    1  1  0  1    0  0  1              1  1  0  1  0  0  1
15.    1  1  1  0    1  0  0              1  1  1  0  1  0  0
16.    1  1  1  1    1  1  1              1  1  1  1  1  1  1
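The table above can be cross-checked mechanically. Below is a hedged Python sketch (not from the book); `encode` is a hypothetical helper that rebuilds the parity bits from the P matrix derived earlier, via C = M.P (mod 2).

```python
# Hedged sketch (assumed helper, not the book's notation): computing the
# (7,4) Hamming codewords X = [M : C] with C = M.P (mod 2).
P = [[1, 0, 1],
     [1, 1, 1],
     [1, 1, 0],
     [0, 1, 1]]

def encode(msg):
    """Return the message bits followed by the parity bits C = M.P (mod 2)."""
    parity = [sum(m * P[i][j] for i, m in enumerate(msg)) % 2
              for j in range(3)]
    return msg + parity

# All 16 message vectors m1 m2 m3 m4 (m1 is the leftmost bit)
codewords = [encode([(n >> 3) & 1, (n >> 2) & 1, (n >> 1) & 1, n & 1])
             for n in range(16)]
print(encode([0, 1, 0, 1]))  # the worked example: 0101 -> 0101100
```

Running `encode` over all 16 messages reproduces the table row by row.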


(iv) Lastly, let us obtain the parity check matrix

The parity check matrix [H] is a 3 × 7 matrix, (i.e.,)

    H = [PT : In−k]

The transpose matrix PT is given by

         1 1 1 0
    PT = 0 1 1 1
         1 1 0 1   (3×4)

Therefore, we have

                        1 1 1 0 1 0 0
    H = [PT : In−k] =   0 1 1 1 0 1 0
                        1 1 0 1 0 0 1   (3×7)

This is the required parity check matrix.
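A quick sanity check on the pair (G, H) is the standard identity G.HT = 0 over GF(2) (an assumption of linear block code theory, not computed numerically in the book). A minimal sketch:

```python
# Illustrative check: every row of G is orthogonal (mod 2) to every row
# of H, i.e. G.H^T = 0, which is what makes H a valid parity check matrix.
G = [[1, 0, 0, 0, 1, 0, 1],
     [0, 1, 0, 0, 1, 1, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 0, 1, 1]]
H = [[1, 1, 1, 0, 1, 0, 0],
     [0, 1, 1, 1, 0, 1, 0],
     [1, 1, 0, 1, 0, 0, 1]]

def dot_mod2(a, b):
    return sum(x * y for x, y in zip(a, b)) % 2

ok = all(dot_mod2(g, h) == 0 for g in G for h in H)
print(ok)  # True
```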

Problem 3

For a systematic linear block code, the three parity check digits c4, c5 and c6 are given by

    c4 = m1 ⊕ m2 ⊕ m3
    c5 = m1 ⊕ m2
    c6 = m1 ⊕ m3

(i) Construct the generator matrix
(ii) Construct the code generated by this matrix
(iii) Determine the error correcting capability
(iv) Decode the received words 101100 and 000110

Solution

(i) First, we obtain the parity matrix P and the generator matrix G.

(ii) Then we obtain the values c4, c5, c6 for various combinations of m1, m2, m3, obtain dmin, and from the value of dmin calculate the error detecting and correcting capability.

(iii) Lastly, we decode the received words with the help of the syndromes listed in the decoding table.

(i) First, let us obtain the parity matrix P and the generator matrix G.

We know that the relation between the check (parity) bits, the message bits and the parity matrix P is given by:


                                         P11 P12 P13
    [c4 c5 c6]1×3 = [m1 m2 m3]1×3        P21 P22 P23      ... (2)
                                         P31 P32 P33

Therefore, we have

    c4 = P11 m1 ⊕ P21 m2 ⊕ P31 m3
    c5 = P12 m1 ⊕ P22 m2 ⊕ P32 m3       ... (3)
    c6 = P13 m1 ⊕ P23 m2 ⊕ P33 m3

Comparing equation (3) with the given equations for c4, c5, c6, we obtain

    P11 = 1    P12 = 1    P13 = 1
    P21 = 1    P22 = 1    P23 = 0
    P31 = 1    P32 = 0    P33 = 1

Hence, the parity matrix is as below:

        1 1 1
    P = 1 1 0
        1 0 1   (3×3)

This is the required parity matrix. The generator matrix is given by:

    G = [Ik : P] = [I3 : P3×3]

        1 0 0 1 1 1
    G = 0 1 0 1 1 0
        0 0 1 1 0 1          [Ans]

(ii ) Now let us obtain the codewords

It is given that,

c 4 = m1 ⊕ m2 ⊕ m3

c 5 = m1 ⊕ m2

c 6 = m1 ⊕ m3

Using these equations, we can obtain the check bits for various combinations of the bits m1, m2 and m3. After that, the corresponding codewords are obtained as shown in the table.

For m1m2m3 = 0 0 1, we have

c 4 = m1 ⊕ m2 ⊕ m3 = 0 ⊕ 0 ⊕ 1 = 1

c 5 = m1 ⊕ m2 = 0 ⊕ 0 = 0


    c6 = m1 ⊕ m3 = 0 ⊕ 1 = 1

Therefore c4 c5 c6 = 1 0 1, and the codeword is given by

    Codeword for m1 m2 m3 = 0 0 1  →  m1 m2 m3 c4 c5 c6 = 0 0 1 1 0 1

Similarly, the other codewords are obtained. They are listed in the table below.

S.No   Message     Check      Code Vector (or) Codeword   Weight
       m1 m2 m3    c4 c5 c6   m1 m2 m3 c4 c5 c6
 1.    0  0  0     0  0  0    0  0  0  0  0  0            0
 2.    0  0  1     1  0  1    0  0  1  1  0  1            3
 3.    0  1  0     1  1  0    0  1  0  1  1  0            3
 4.    0  1  1     0  1  1    0  1  1  0  1  1            4
 5.    1  0  0     1  1  1    1  0  0  1  1  1            4
 6.    1  0  1     0  1  0    1  0  1  0  1  0            3
 7.    1  1  0     0  0  1    1  1  0  0  0  1            3
 8.    1  1  1     1  0  0    1  1  1  1  0  0            4

The error correcting capability depends on the minimum distance dmin. From the table, dmin = 3 (the smallest weight of any non-zero codeword).

Therefore, the number of detectable errors s satisfies dmin ≥ s + 1

or  3 ≥ s + 1,  or  s ≤ 2

So, at the most, two errors can be detected,

and  dmin ≥ 2t + 1

    3 ≥ 2t + 1,  or  t ≤ 1

Thus, at the most, one error can be corrected.

(iv) Let us obtain the decoding of the received words.

The first received word is 101100. But this codeword does not exist in the codeword table. This shows that an error must be present in the received code vector. Let us represent the received code word as under:

    Y1 = [1 0 1 1 0 0]

The syndrome for this codeword is given by,

    S = Y1 HT


                         1 1 1
                         1 1 0
                         1 0 1
or  S = [1 0 1 1 0 0]    1 0 0
                         0 1 0
                         0 0 1

    S = [1 ⊕ 0 ⊕ 1 ⊕ 1 ⊕ 0 ⊕ 0,  1 ⊕ 0 ⊕ 0 ⊕ 0 ⊕ 0 ⊕ 0,  1 ⊕ 0 ⊕ 1 ⊕ 0 ⊕ 0 ⊕ 0]

or  S = [1 1 0]

Thus, the syndrome of the received word is [110], which is the same as the second syndrome in the decoding table. Hence the corresponding error pattern is given by

    E = [0 1 0 0 0 0]

and the correct word can be obtained as under:

    X1 = Y1 ⊕ E = [1 0 1 1 0 0] ⊕ [0 1 0 0 0 0]

    X1 = [1 1 1 1 0 0]

This is the corrected transmitted word.

Similarly, we can perform decoding of 000110.

Let Y2 = [0 0 0 1 1 0] be the second received codeword. Even this is not a valid codeword listed in the codeword table. The syndrome for this can be obtained as under:

    S = Y2 HT

                         1 1 1
                         1 1 0
                         1 0 1
or  S = [0 0 0 1 1 0]    1 0 0
                         0 1 0
                         0 0 1

or  S = [1 1 0]

The corresponding error pattern from the decoding table is as under:

    E = [0 1 0 0 0 0]

    X2 = Y2 ⊕ E = [0 0 0 1 1 0] ⊕ [0 1 0 0 0 0]

or  X2 = [0 1 0 1 1 0]
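The syndrome decoding procedure used above can be sketched in a few lines of Python. This is a hedged illustration (the book's decoding table is not reproduced; here it is rebuilt from H itself, since for a single-bit error the syndrome equals the corresponding column of H):

```python
# Hedged sketch: single-error syndrome decoding for the (6,3) code above.
H = [[1, 1, 1, 1, 0, 0],
     [1, 1, 0, 0, 1, 0],
     [1, 0, 1, 0, 0, 1]]
n = 6

def syndrome(y):
    return tuple(sum(y[i] * H[r][i] for i in range(n)) % 2 for r in range(3))

# Decoding table: each single-bit error pattern maps to a column of H.
table = {}
for pos in range(n):
    e = [0] * n
    e[pos] = 1
    table[syndrome(e)] = e

def correct(y):
    s = syndrome(y)
    e = table.get(s, [0] * n)     # all-zero syndrome means no error
    return [a ^ b for a, b in zip(y, e)]

print(correct([1, 0, 1, 1, 0, 0]))  # -> [1, 1, 1, 1, 0, 0]
print(correct([0, 0, 0, 1, 1, 0]))  # -> [0, 1, 0, 1, 1, 0]
```

Both received words of the problem are corrected to the codewords obtained above.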


Problem 4

For a (6,3) code, the generator matrix G is given by

1 0 0 1 0 1

G = 0 1 0 0 1 1

0 0 1 1 1 0

(i) Realize an encoder for this code.

Solution

(i) First, we obtain the expression for the parity bits.

The parity bits can be obtained by using the expression:

    C = MP

The parity matrix can be obtained from the generator matrix as under:

        1 0 0 1 0 1
    G = 0 1 0 0 1 1
        0 0 1 1 1 0

                      1 0 1
Therefore,    [P] =   0 1 1
                      1 1 0   (3×3)

                                     1 0 1
Also,   [c1 c2 c3] = [m1 m2 m3]      0 1 1
                                     1 1 0

From above, we get

    c1 = m1 ⊕ m3
    c2 = m2 ⊕ m3        ... (1)
    c3 = m1 ⊕ m2

Now, let us draw the encoder.

The encoder is obtained to implement the expressions given in equation (1), as shown in the figure below.

[Figure: (6,3) encoder — the input sequence is shifted into a message register (m3 m2 m1); modulo-2 adders form the parity (check) bits c3, c2, c1 in a parity bit register, and an output switch assembles the codeword from the message bits followed by the check bits.]


Definition

Cyclic codes are also linear block codes. A binary code is said to

be a cyclic code if it exhibits the following properties:

(i) Linearity Property

(ii) Cyclic Property

(i) Linearity Property

A code is said to be linear if the sum of any two code words is also a code word. This property shows that the cyclic codes are linear block codes.

(ii) Cyclic Property

A linear block code is said to be cyclic if every cyclic shift of a code word produces some other code word. Let (x0, x1, ......, xn−1) be a code word of an (n, k) linear block code. This code word is shifted right by 1 bit each time in order to get the other code words; all the n-bit words obtained by such circular right shifts are again code words. This is called the cyclic property of the cyclic codes.

4.15.1 Code Word Polynomial

The cyclic property suggests that it is possible to treat the ele-

ments of code word of length n as the coefficient of a polynomial of a

degree (n-1). Thus, the code word:

    [x0, x1, ......, xn−1]

can be expressed in the form of a code word polynomial as under:

    X(p) = x0 + x1 p + x2 p^2 + ...... + xn−1 p^(n−1)

where   p = an arbitrary real variable
        X(p) = polynomial of degree (n − 1)

4.15.2 Generator Polynomial

The generator polynomial of a cyclic code is represented by G(p). It is used for generation of the cyclic code words from the message bits. The code word polynomial can be written as,

    X(p) = M(p) . G(p)

where M(p) = message signal polynomial of degree ≤ (k − 1):

    M(p) = m0 + m1 p + m2 p^2 + .... + m(k−1) p^(k−1)

The generator polynomial of an (n, k) cyclic code is given by,

    G(p) = 1 + g1 p + g2 p^2 + ..... + g(n−k−1) p^(n−k−1) + p^(n−k)

The generator polynomial can be expressed in the summation form as under:

    G(p) = 1 + Σ (i = 1 to n−k−1) gi p^i + p^(n−k)

This holds for the cyclic codes. The other important point about the generator polynomial is that the degree of the generator polynomial is equal to the number of parity bits (check bits) in the code word.

4.15.3 Generation of Non-Systematic code words

The non-systematic code words can be obtained by multiplication of the message polynomial with the generator polynomial as under:

    X1(p) = M1(p) . G(p)
    X2(p) = M2(p) . G(p)
    X3(p) = M3(p) . G(p) .............and so on.

4.15.4 Generation of Systematic code words

There are three steps involved in the encoding process for a systematic (n, k) cyclic code. They are as under:

(i) We multiply the message polynomial M(p) by p^(n−k). This multiplication is equivalent to shifting the message bits by (n − k) bit positions.

(ii) We divide the shifted message polynomial p^(n−k) M(p) by the generator polynomial G(p) to obtain the remainder C(p):

    C(p) = rem [ p^(n−k) M(p) / G(p) ]


(iii) We add the remainder polynomial C(p) to the shifted message polynomial p^(n−k) M(p) to obtain the code word polynomial X(p):

    X(p) = p^(n−k) M(p) ⊕ C(p)

Therefore, the code word polynomial X(p) is exactly divisible by the generator polynomial G(p).
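The three steps above can be sketched in Python. This is a hedged illustration (the helper names `gf2_mod` and `systematic_encode` are mine, not the book's), storing each GF(2) polynomial as an integer whose bit i is the coefficient of p^i:

```python
# Hedged sketch of systematic cyclic encoding via GF(2) polynomial division.
def gf2_mod(num, div):
    """Remainder of GF(2) polynomial division num / div."""
    dlen = div.bit_length()
    while num.bit_length() >= dlen:
        num ^= div << (num.bit_length() - dlen)
    return num

def systematic_encode(m_poly, g_poly, n, k):
    shifted = m_poly << (n - k)          # step (i):  p^(n-k) M(p)
    c_poly = gf2_mod(shifted, g_poly)    # step (ii): remainder C(p)
    return shifted ^ c_poly              # step (iii): X(p)

# Example from the problem below: M = 0101 -> M(p) = p^2 + 1 (0b0101),
# G(p) = p^3 + p + 1 (0b1011), n = 7, k = 4.
x = systematic_encode(0b0101, 0b1011, 7, 4)
print(format(x, '07b'))                  # -> 0101100
print(gf2_mod(x, 0b1011))                # -> 0 (X(p) is divisible by G(p))
```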

Problem 1

The generator polynomial of a (7,4) cyclic code is given by G(p) = p^3 + p + 1. Determine all the systematic and non-systematic code vectors.

Solution

(i) This is a (7,4) cyclic Hamming code. Therefore, the message vectors are going to be 4 bits long. There will be in total 2^4 = 16 message vectors. Let us consider one message vector as under:

    M = (m3 m2 m1 m0) = (0 1 0 1)

    M(p) = m3 p^3 + m2 p^2 + m1 p + m0     ... (1)

    M(p) = p^2 + 1

    G(p) = p^3 + p + 1


    X(p) = M(p) . G(p) = (p^2 + 1)(p^3 + p + 1) = p^5 + p^3 + p^2 + p^3 + p + 1

or  X(p) = p^5 + (1 ⊕ 1) p^3 + p^2 + p + 1

But, 1 ⊕ 1 = 0

∴   X(p) = 0p^6 + p^5 + 0p^4 + 0p^3 + p^2 + p + 1

Note that the degree of the code word polynomial is at most 6, (i.e.,) (n − 1). The corresponding code word is

    X = (0 1 0 0 1 1 1)

This is the non-systematic code word for the given message word. Similarly, we can obtain the other non-systematic code words.

The systematic code words are obtained as follows.

(i) We multiply M(p) by p^(n−k):

    p^(n−k) M(p) = p^3 (p^2 + 1) = p^5 + p^3

Therefore,

    p^(n−k) M(p) / G(p) = (p^5 + 0p^4 + p^3 + 0p^2 + 0p + 0) / (p^3 + 0p^2 + p + 1)

The division is carried out as under:

                         p^2 + 0p + 0                  ← Quotient polynomial Q(p)
    p^3 + 0p^2 + p + 1 ) p^5 + 0p^4 + p^3 + 0p^2 + 0p + 0
                         p^5 + 0p^4 + p^3 + p^2
                         ——————————————————————        (mod-2 additions)
                                           p^2 + 0p + 0   ← Remainder polynomial C(p)

Thus the results of the division are as under:

    Quotient polynomial  Q(p) = p^2 + 0p + 0
    Remainder polynomial C(p) = p^2 + 0p + 0   ← represents the parity bits

(i.e.,)  C = (1 0 0)

(ii) We obtain the code word polynomial X(p)

The code word polynomial can be obtained by adding p^(n−k) M(p) to the remainder polynomial C(p):

    X(p) = p^(n−k) M(p) ⊕ C(p)
         = (0p^6 + p^5 + 0p^4 + p^3 + 0p^2 + 0p + 0) ⊕ (p^2 + 0p + 0)

or  X(p) = 0p^6 + p^5 + 0p^4 + p^3 + p^2 + 0p + 0

The code word vector is given by: (0 1 0 1 : 1 0 0)

or  X = (m(k−1) .... m1 m0 : c(q−1) c(q−2) ... c1 c0) = (m3 m2 m1 m0 : c2 c1 c0)
      = (0 1 0 1 : 1 0 0)


Similarly, the remaining systematic code words are obtained. They are listed in the table below.

S.No   M = m3 m2 m1 m0    X = m3 m2 m1 m0 c2 c1 c0
 1     0  0  0  0         0  0  0  0  0  0  0
 2     0  0  0  1         0  0  0  1  0  1  1
 3     0  0  1  0         0  0  1  0  1  1  0
 4     0  0  1  1         0  0  1  1  1  0  1
 5     0  1  0  0         0  1  0  0  1  1  1
 6     0  1  0  1         0  1  0  1  1  0  0
 7     0  1  1  0         0  1  1  0  0  0  1
 8     0  1  1  1         0  1  1  1  0  1  0
 9     1  0  0  0         1  0  0  0  1  0  1
10     1  0  0  1         1  0  0  1  1  1  0
11     1  0  1  0         1  0  1  0  0  1  1
12     1  0  1  1         1  0  1  1  0  0  0
13     1  1  0  0         1  1  0  0  0  1  0
14     1  1  0  1         1  1  0  1  0  0  1
15     1  1  1  0         1  1  1  0  1  0  0
16     1  1  1  1         1  1  1  1  1  1  1

4.15.5 Generator and parity check matrices of the cyclic codes

The cyclic codes are linear block codes. Therefore, we can define the generator and parity check matrices for the cyclic codes as well. The generator matrix has a size of k × n, (i.e.,) it has k rows and n columns. Let the generator polynomial G(p) be given by

    G(p) = p^q + g(q−1) p^(q−1) + ... + g2 p^2 + g1 p + 1    where q = n − k

We multiply both sides by p^i, (i.e.,)

    p^i G(p) = p^(i+q) + g(q−1) p^(i+q−1) + .... + g1 p^(i+1) + p^i

where i = (k − 1), (k − 2), ...., 2, 1, 0

The above equation represents the polynomials for the rows of the generator matrix. It is possible to obtain the generator matrix from this equation.


Problem 1

For a (7,4) cyclic code, determine the generator matrix if

G(p) = 1+ p+ p3

Solution

Here, n = 7 and k = 4, hence q = n − k = 3

    G(p) = 1 + p + p^3

(i) We multiply both sides of G(p) by p^i, i = (k − 1), ....., 1, 0:

    p^i G(p) = p^(i+3) + p^(i+1) + p^i,   i = (k − 1), ....., 1, 0

But k = 4, ∴ i = 3, 2, 1, 0

(ii) By substituting these values of i into the above equation, we get four different polynomials as under:

These polynomials correspond to the four rows of the generator matrix as under:

    Row No.1 : i = 3 → p^3 G(p) = p^6 + p^4 + p^3
    Row No.2 : i = 2 → p^2 G(p) = p^5 + p^3 + p^2
    Row No.3 : i = 1 → p G(p)   = p^4 + p^2 + p
    Row No.4 : i = 0 → G(p)     = p^3 + p + 1

The generator matrix for an (n, k) code is of size k × n. Therefore, for the (7,4) cyclic code, the generator matrix will be a 4 × 7 matrix. The polynomials corresponding to the four rows, written out in full, are therefore as under:

    Row No.1 : i = 3 → p^6 + 0p^5 + p^4 + p^3 + 0p^2 + 0p + 0
    Row No.2 : i = 2 → 0p^6 + p^5 + 0p^4 + p^3 + p^2 + 0p + 0
    Row No.3 : i = 1 → 0p^6 + 0p^5 + p^4 + 0p^3 + p^2 + p + 0
    Row No.4 : i = 0 → 0p^6 + 0p^5 + 0p^4 + p^3 + 0p^2 + p + 1

These polynomials can be converted into the generator matrix G as under:

        p^6 p^5 p^4 p^3 p^2 p^1 p^0
        1   0   1   1   0   0   0
    G = 0   1   0   1   1   0   0
        0   0   1   0   1   1   0
        0   0   0   1   0   1   1     (4×7)

This is the required generator matrix.
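The row construction above (row i holds p^i G(p)) mechanizes neatly. A hedged sketch, with `cyclic_generator_matrix` as my own helper name and each polynomial stored as an int (bit b = coefficient of p^b):

```python
# Hedged sketch: build the k x n generator matrix of a cyclic code from
# the shifted generator polynomials p^i G(p), i = k-1, ..., 1, 0.
def cyclic_generator_matrix(g_poly, n, k):
    rows = []
    for i in range(k - 1, -1, -1):       # i = k-1, ..., 1, 0
        shifted = g_poly << i            # p^i G(p)
        rows.append([(shifted >> b) & 1 for b in range(n - 1, -1, -1)])
    return rows

G = cyclic_generator_matrix(0b1011, 7, 4)    # G(p) = p^3 + p + 1
for row in G:
    print(row)
```

The printed rows match the 4 × 7 matrix derived above.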

The cyclic codes are a subclass of linear block codes. Therefore, the code vectors can be obtained by using the generator matrix as under:

    X = MG

where M = 1 × k message vector


Problem 2

For the generator matrix of the previous example, determine all

the possible code vectors

Solution:

All the code vectors can be obtained by using the following expression:

    X = MG

Let us consider the 4-bit message vector M = (m3 m2 m1 m0) = (1 0 1 0)

                          1 0 1 1 0 0 0
                          0 1 0 1 1 0 0
Therefore, X = [1 0 1 0]  0 0 1 0 1 1 0
                          0 0 0 1 0 1 1

Therefore,

    X = [1⊕0⊕0⊕0, 0⊕0⊕0⊕0, 1⊕0⊕1⊕0, 1⊕0⊕0⊕0, 0⊕0⊕1⊕0, 0⊕0⊕1⊕0, 0⊕0⊕0⊕0]

Therefore, we have X = [1 0 0 1 : 1 1 0]

Similarly, the other code vectors can be obtained.

We know that the generator matrix in the systematic form is given by

    G = [Ik : P(k×(n−k))]

Here, P is a k × (n − k) matrix. Let us represent the row number (in general) by i. The ith row of the generator matrix is represented by

    ith row of G = p^(n−i) + Ri(p), where i = 1, 2, ......., k    ... (1)

Now, we divide p^(n−i) by the generator polynomial G(p). The result of the division is expressed as,

    p^(n−i) / G(p) = Quotient + Remainder / G(p)    ... (2)

Let Remainder = Ri(p) and Quotient = Qi(p).

Substituting these into equation (2), we obtain

    p^(n−i) / G(p) = Qi(p) + Ri(p) / G(p)    ... (3)


    p^(n−i) = Qi(p) G(p) ⊕ Ri(p),  where i = 1, 2, ...., k    ... (4)

In mod-2 arithmetic, addition and subtraction yield the same result.

    ∴ p^(n−i) ⊕ Ri(p) = Qi(p) G(p)

From equation (1), the above expression represents the ith row of the systematic generator matrix.

Problem 1

For systematic (7,4) cyclic code, determine the generator matrix

and parity check matrix. Given; G(p)=p3+p+1

Solution

(i) The ith row of the generator matrix is given by the equation

    p^(n−i) ⊕ Ri(p) = Qi(p) G(p); where i = 1, 2, ...., k    ... (1)

(ii) It is given that the cyclic code is a systematic (7,4) code.

Therefore, n = 7, k = 4 and (n − k) = 3

Substituting these values into the above expression, we obtain

    p^(7−i) ⊕ Ri(p) = Qi(p) . (p^3 + p + 1),   i = 1, 2, ...., 4

(iii) With i = 1, the above equation becomes

    p^6 ⊕ R1(p) = Q1(p) (p^3 + p + 1)    ... (2)

Let us obtain the value of Q1(p). The quotient Q1(p) can be obtained by dividing p^(n−i) = p^6 by G(p), as per equation (2). The division takes place as under:

                         p^3 + p + 1                   ← Quotient polynomial Q1(p)
    p^3 + 0p^2 + p + 1 ) p^6 + 0p^5 + 0p^4 + 0p^3 + 0p^2 + 0p + 0
                         p^6 + 0p^5 + p^4  + p^3
                         ———————————————————————       (mod-2 additions)
                               p^4 + p^3 + 0p^2 + 0p
                               p^4 + 0p^3 + p^2 + p
                               —————————————————————   (mod-2 additions)
                                     p^3 + p^2 + p + 0
                                     p^3 + 0p^2 + p + 1
                                     ——————————————————
                                           p^2 + 1     ← Remainder


Hence the quotient polynomial Q1(p) = p^3 + p + 1 and the remainder polynomial R1(p) = p^2 + 0p + 1.

Substituting these values into equation (2), we obtain

    p^6 ⊕ R1(p) = (p^3 + p + 1)(p^3 + p + 1)
                = p^6 + p^4 + p^3 + p^4 + p^2 + p + p^3 + p + 1
                = p^6 + 0p^5 + (1 ⊕ 1)p^4 + (1 ⊕ 1)p^3 + p^2 + (1 ⊕ 1)p + 1
                = p^6 + 0p^5 + 0p^4 + 0p^3 + p^2 + 0p + 1

∴ 1st Row polynomial ⇒ p^6 + 0p^5 + 0p^4 + 0p^3 + p^2 + 0p + 1
∴ 1st Row elements   ⇒ 1 0 0 0 1 0 1

Using the same procedure, we can obtain the polynomials for the other rows of the generator matrix as under:

    i = 2 :  2nd Row polynomial ⇒ p^5 + p^2 + p + 1
    i = 3 :  3rd Row polynomial ⇒ p^4 + p^2 + p
    i = 4 :  4th Row polynomial ⇒ p^3 + p + 1

These polynomials can be transformed into the generator matrix as under:

              p^6 p^5 p^4 p^3 p^2 p^1 p^0
    Row 1 →   1   0   0   0   1   0   1
    Row 2 →   0   1   0   0   1   1   1
    Row 3 →   0   0   1   0   1   1   0
    Row 4 →   0   0   0   1   0   1   1     (4×7)

This is the required generator matrix.

The parity check matrix is given by:

    H = [PT : I(3×3)]

The transpose matrix PT is obtained by interchanging the rows and columns of the P matrix:

         1 1 1 0
    PT = 0 1 1 1
         1 1 0 1   (3×4)

Hence the parity check matrix is given by

        1 1 1 0 1 0 0
    H = 0 1 1 1 0 1 0
        1 1 0 1 0 0 1   (3×7)

This is the required parity check matrix.
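The remainder construction of the systematic rows can also be sketched in Python. This is a hedged illustration (my own helper names, polynomials stored as ints with bit b = coefficient of p^b): row i of the systematic generator matrix is p^(n−i) ⊕ (p^(n−i) mod G(p)).

```python
# Hedged sketch: systematic generator matrix of a cyclic code from the
# remainders Ri(p) = p^(n-i) mod G(p).
def gf2_mod(num, div):
    """Remainder of GF(2) polynomial division num / div."""
    dlen = div.bit_length()
    while num.bit_length() >= dlen:
        num ^= div << (num.bit_length() - dlen)
    return num

def systematic_rows(g_poly, n, k):
    rows = []
    for i in range(1, k + 1):
        shifted = 1 << (n - i)                      # p^(n-i)
        row_poly = shifted ^ gf2_mod(shifted, g_poly)
        rows.append([(row_poly >> b) & 1 for b in range(n - 1, -1, -1)])
    return rows

G = systematic_rows(0b1011, 7, 4)                   # G(p) = p^3 + p + 1
print(G[0])  # -> [1, 0, 0, 0, 1, 0, 1], the 1st row derived above
```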


The encoder for an (n, k) cyclic code is shown in the figure. This encoder is useful for generating the systematic cyclic codes.

Working Operation of the encoder

The flip-flops (F/F) are used for construction of a shift

register. Operation of all these flip-flops is controlled by an external

clock. The flip-flop contents will get shifted in the direction of the arrow

corresponding to each clock pulse. The feedback switch is connected to

the message input. All the flip-flops are initialized to zero state. First k

message bits are shifted to the transmitter and also shifted into the shift

register. After shifting the k message bits the shift register will contain

the (n-k) parity (or check) bits. Hence after shifting k message bits, the

feedback switch is open circuited and the output switch is thrown to

parity bit position. Now with every shift, the parity bits are transmitted

over the channel. Thus, the encoder generates the code words in the format shown in figure 4.9:

    |<—————————— n bits ——————————>|
    | message bits  | parity bits  |
    |    k bits     | (n − k) bits |

The encoder thus performs the division operations and generates

the remainder. The remainder is nothing but the parity bits. When all

the message bits are shifted out, what remains inside the shift register is

remainder. The encoder also consists of modulo 2 adders. The output

of the coefficient multipliers (i.e.,) g1, g2....etc are added to the flip-flop

outputs to generate the parity bits.
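The shift-register division described above can be simulated directly. A hedged sketch (a standard CRC-style linear feedback shift register; `lfsr_parity` is my own helper name and the tap convention is an assumption consistent with G(p) = 1 + g1 p + ... + p^(n−k)):

```python
# Hedged simulation of the systematic cyclic encoder's shift register.
def lfsr_parity(message_bits, g_taps):
    """message_bits: MSB first; g_taps = [g0, g1, ..., g(n-k-1)], g0 = 1."""
    reg = [0] * len(g_taps)                  # reg[j] holds coefficient of p^j
    for m in message_bits:
        fb = m ^ reg[-1]                     # feedback = input xor last stage
        for j in range(len(reg) - 1, 0, -1): # shift, adding fb at taps gi = 1
            reg[j] = reg[j - 1] ^ (g_taps[j] & fb)
        reg[0] = g_taps[0] & fb
    return reg[::-1]                         # parity bits, highest power first

# (7,4) code with G(p) = p^3 + p + 1, message 0101:
print(lfsr_parity([0, 1, 0, 1], [1, 1, 0]))  # -> [1, 0, 0]
```

After the k message bits have been clocked in, the register holds the remainder, i.e. the parity bits — matching the polynomial-division result (parity 100 for message 0101).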

[Figure: systematic cyclic encoder — an (n−k)-stage shift register with feedback through coefficient multipliers g1, g2, ..., g(n−k−1) and modulo-2 adders; a feedback switch on the message input and an output switch that sends first the message bits and then the parity bits of the code word to the transmitter.]


Problem 1

Draw the encoder for a (7,4) cyclic Hamming code generated by

the generator polynomial G(p) = 1 + p + p^3

Solution

The generator polynomial is given by

    G(p) = p^3 + 0p^2 + p + 1    ... (1)

The generator polynomial of an (n, k) cyclic code is expressed as under:

    G(p) = 1 + Σ (i = 1 to n−k−1) gi p^i + p^(n−k)    ... (2)

Therefore,

    G(p) = 1 + Σ (i = 1 to 7−4−1) gi p^i + p^(7−4)

Hence,

    G(p) = p^3 + g2 p^2 + g1 p + 1    ... (3)

Comparing equations (1) and (3), we obtain

    g1 = 1  and  g2 = 0

Thus, the encoder for the (7,4) Hamming code is shown in the figure below.

[Figure: encoder for the cyclic Hamming code — a 3-stage shift register with feedback taps g1 = 1 and g2 = 0, a feedback switch on the message input, and an output switch that sends the message bits followed by the parity bits of the code word to the transmitter.]

4.15.8 Syndrome calculator for cyclic codes

The figure shows the syndrome calculator, where there are (n−k) stages of the feedback shift register to generate the (n−k)-bit syndrome vector. Initially, the output switch is connected to position 1 and all the flip-flops are in their Reset mode. As soon as all the received bits are shifted into the shift register, its contents will be the desired (n−k)-bit syndrome vector S. Once we know the syndrome S, we can determine the corresponding error pattern E and then make the appropriate


corrections. After shifting all the incoming bits of signal Y, the output switch is transferred to position 2 and clock pulses are applied to the shift register to shift out the syndrome. The following example will make the concept of syndrome calculation obvious.

[Figure: syndrome calculator — the received code vector Y is shifted into an (n−k)-stage feedback shift register (stages S0, S1, ..., S(n−k−1)) with taps g1, g2, ..., g(n−k−1); the syndrome is then shifted out at the output.]

4.15.9 Decoder for cyclic codes

Once the syndrome is calculated, an error pattern E is detected

corresponding to this syndrome.

[Figure 4.12 General form of a decoder for cyclic codes — the received vector is fed, through switches Sin, into a buffer register and a syndrome register; the syndrome drives an error-pattern detector (combinational logic circuit), and a modulo-2 adder combines the buffer contents with the error pattern to give the corrected code vector X.]

This error vector is then added (modulo-2-addition) to the received

code word Y, to get the corrected code vector X at the output.

Therefore, corrected code vector X ′ = Y ⊕ E

Working operation of the decoder

The switches Sin are closed and Sout are opened. The received

code vector Y is then shifted to the buffer register and syndrome

register. After shifting all the n bits of received code vector Y, the

syndrome register holds the corresponding syndrome vector. Then,

the contents of the syndrome register are given to the error pattern

detector. A particular error pattern will be detected for each syndrome

vector present in the syndrome register. Then the switches Sin are opened

and Sout are closed. The contents of the buffer register, error register and


syndrome register are then shifted. The received code vector Y (which is

stored in the buffer register), is then added with the error vector E (which

is stored in the error register) bit by bit to obtain the corrected code word

X at decoder output.

Advantages of Cyclic codes

The advantage of cyclic codes over most of the other codes are as

under:

(i) They are easy to encode

(ii) They possess a well defined mathematical structure which has led to

development of very efficient decoding schemes for them

(iii) The methods that are to be used for error detection and correction

are simpler and easy to implement.

(iv) These methods do not require look-up table decoding

(v) It is possible to detect the error bursts using the cyclic codes.

Drawbacks of cyclic codes

Even though the error detection is simpler, the error correction is slightly more complicated. This is due to the complexity of the combinational logic circuit used for error correction.

4.16 Convolutional Codes

The main differences between the block codes and the convolutional (or recurrent) codes may be listed below:

(i) Block codes

In block codes, the block of n bits generated by the encoder in a particular time unit depends only on the block of k message bits within that time unit; the encoder is memoryless. Generally, the values of k and n will be large.

(ii) Convolutional code

In the convolutional codes, the block of n bits generated by the

encoder at given time depends not only on the k message bits within that

time unit, but also on the preceding ‘L’ blocks of the message bits (L>1).

Generally, the values of k and n will be small.

Application of convolutional code

Like block codes, the convolutional codes can be designed to

either detect or correct errors. However, because data is usually

retransmitted in blocks, the block codes are more suitable for error

detection and the convolutional codes are more suitable for error

correction.


Encoding of the convolutional codes can be accomplished using

simple shift register. Several practical methods have been developed for

decoding. The convolutional codes perform as well or better than the

block codes in many error control applications

4.16.1 Convolutional Encoder

[Figure 4.13: convolutional encoder — the message input enters a register holding the current bit m followed by the two state bits m1 and m2; two modulo-2 adders form x1 and x2, and a commutator switch interleaves them into the stream of encoded bits.]

• Whenever a message bit enters position m, the new values of x1 and x2 are computed depending upon m, m1 and m2. Here m1 and m2 store the previous two successive message bits.

• The convolutional encoder of figure 4.13 has n = 2, k = 1 and L = 2. It therefore generates n = 2 encoded bits as under:

    x1 = m ⊕ m1 ⊕ m2
    x2 = m ⊕ m2        ... (1)

• The commutator switch selects these encoded bits alternately to produce the stream of encoded bits.

• The output bit rate is twice the input bit rate.

4.16.2 Important Definitions

1. The code rate (r)

The code rate of the encoder of the figure is expressed as

    r = k / n

Here, k = number of message bits taken at a time = 1
      n = number of encoded bits per message bit = 2

Therefore, r = 1/2


2. Constraint length (K)

It is defined as the number of shifts over which a single message bit can influence the output of the encoder. For the encoder of the figure, the constraint length is K = 3 bits, since a single message bit influences the encoder output for three successive shifts. At the fourth shift, it has no effect on the output.

3. Code Dimension

The code dimension of a convolutional code depends on n and

k. Here k represents the number of message bits taken at a time by

the encoder, n is the number of encoded bits per message bit. The code

dimension therefore represented by (n,k).

4.16.3 Analysis of convolutional encoder

4.16.3.1 Time Domain approach

The time-domain behaviour of a binary convolutional encoder may be defined in terms of n impulse responses. Let the impulse response of the adder generating x1 in the figure be given by the sequence {g0(1), g1(1), ....., gL(1)}. Similarly, let the sequence {g0(2), g1(2), ....., gL(2)} denote the impulse response of the adder generating x2. These impulse responses are also called the generator sequences of the code.

Let (m0, m1, m2, .....) denote the message sequence entering the encoder one bit at a time (starting from m0). The encoder generates two output sequences by performing convolutions of the message sequence with the impulse responses. The bit sequence x(1) is given by

    xi(1) = Σ (l = 0 to L) gl(1) m(i−l),   i = 0, 1, 2, ....    ... (1)

and similarly,

    xi(2) = Σ (l = 0 to L) gl(2) m(i−l),   i = 0, 1, 2, ....    ... (2)

These bit sequences are then multiplexed with the help of the commutator switch to produce the following output:

    X = {x0(1) x0(2), x1(1) x1(2), x2(1) x2(2), ...}

where   x(1) = {x0(1), x1(1), x2(1), .....}
and     x(2) = {x0(2), x1(2), x2(2), .....}


Problem 1

From the convolutional figure below determine the following

(i) Dimension of the code

(ii) Code rate

(iii) Constraint length

(iv) Generating sequence

(v) The encoded sequence for the input message (10011)

[Figure: given convolutional encoder — the message input feeds two flip-flops FF1 (m1) and FF2 (m2); a mod-2 adder over all three bits feeds commutator position 1, and a mod-2 adder over the first and last bits feeds position 2.]

Solution

The given encoder can be drawn in standard form as shown.

Given message sequence (m0 m1 m2 m3 m4) = (1 0 0 1 1)

[Figure: encoder in standard form — taps g0(1) = g1(1) = g2(1) = 1 into the top adder producing x1 = xi(1), and taps g0(2) = 1, g2(2) = 1 into the bottom adder producing x2 = xi(2); the commutator interleaves the two outputs.]

Note that encoder takes one message bit at a time. Hence k =1. It

generates two bits for every message bit

Therefore, n = 2 so dimension = (n, k) = (2,1).


    Code rate r = k / n = 1/2

Observe that every message bit affects the output bits for three successive shifts; hence the constraint length is K = 3 bits.

In the figure, x1 (or xi(1)) is obtained by adding all three bits; hence the generating sequence gi(1) is given by

    g(1) = {1, 1, 1}

where g0(1) = 1 indicates the connection of bit m
      g1(1) = 1 indicates the connection of bit m1
      g2(1) = 1 indicates the connection of bit m2

Similarly, x2 (or xi(2)) is obtained by adding the first and last bits. Hence the generating sequence is given by

    g(2) = {1, 0, 1}

where g0(2) = 1 indicates the connection of bit m
      g2(2) = 1 indicates the connection of bit m2

(v) The output sequence may be obtained as follows:

(a) To obtain the bit stream x(1)

We have

    xi(1) = Σ (l = 0 to L) gl(1) m(i−l),   i = 0, 1, 2, ...

Substituting i = 0, we get

    x0(1) = g0(1) m0 = 1 × 1 = 1          (here g0(1) = 1 and m0 = 1)

Similarly, substituting i = 1, we get

    x1(1) = g0(1) m1 + g1(1) m0 = (1 × 0) + (1 × 1) = 1    (mod-2 addition)

We can obtain the other code bits in a similar manner as under:

    x2(1) = g0(1) m2 + g1(1) m1 + g2(1) m0


          = (1 × 0) + (1 × 0) + (1 × 1)
          = 0 + 0 + 1 = 1    (mod-2 addition)

    x3(1) = g0(1) m3 + g1(1) m2 + g2(1) m1
          = (1 × 1) + (1 × 0) + (1 × 0) = 1

    x4(1) = g0(1) m4 + g1(1) m3 + g2(1) m2
          = (1 × 1) + (1 × 1) + (1 × 0)
          = 1 + 1 + 0 = 0    (mod-2 addition)

    x5(1) = g0(1) m5 + g1(1) m4 + g2(1) m3
          = g1(1) m4 + g2(1) m3
          = (1 × 1) + (1 × 1)
          = 1 + 1 = 0        (m5 and m6 are not available)

    x6(1) = g0(1) m6 + g1(1) m5 + g2(1) m4
          = g2(1) m4 = 1 × 1 = 1

Hence, the code bits at the output of the top adder are

    x0(1) x1(1) x2(1) x3(1) x4(1) x5(1) x6(1) = (1 1 1 1 0 0 1)

(b) The bit stream x(2) is obtained at the output of the bottom adder. It is given by,

    xi(2) = Σ (l = 0 to L) gl(2) m(i−l),   i = 0, 1, 2, 3, ....

Substituting the values of i, we get,

    x0(2) = g0(2) m0 = 1 × 1 = 1

Similarly, substituting i = 1, we get

    x1(2) = g0(2) m1 + g1(2) m0 = (1 × 0) + (0 × 1) = 0 + 0 = 0

    x2(2) = g0(2) m2 + g1(2) m1 + g2(2) m0
          = (1 × 0) + (0 × 0) + (1 × 1) = 1

    x3(2) = g0(2) m3 + g1(2) m2 + g2(2) m1
          = (1 × 1) + (0 × 0) + (1 × 0) = 1

    x4(2) = g0(2) m4 + g1(2) m3 + g2(2) m2
          = (1 × 1) + (0 × 1) + (1 × 0) = 1

    x5(2) = g1(2) m4 + g2(2) m3
          = (0 × 1) + (1 × 1) = 1

    x6(2) = g2(2) m4 = 1 × 1 = 1


Hence, the code bits obtained at the output of the bottom adder are given by,

    x0(2) x1(2) x2(2) x3(2) x4(2) x5(2) x6(2) = (1 0 1 1 1 1 1)

By interleaving the code bits at the outputs of the two adders, we

can get the encoded sequence at the encoder output as under

    Encoded sequence = x0(1) x0(2)  x1(1) x1(2)  ...  x6(1) x6(2)

Substituting the values, we get

Codeword = [11 10 11 11 01 01 11]

Note that a message sequence of length 5 bits produces an output coded sequence of length 14 bits.
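The worked example can be re-computed with a direct convolution. A hedged sketch (`conv_encode` is my own helper name, implementing equations (1) and (2) plus the commutator interleaving):

```python
# Hedged re-computation of the worked example: rate-1/2 convolutional
# encoding by mod-2 convolution with the generator sequences.
def conv_encode(msg, g1, g2):
    """Interleave the outputs of two mod-2 convolutions of msg with g1, g2."""
    L = len(g1) - 1
    out = []
    for i in range(len(msg) + L):            # encoder flushes through its memory
        x1 = sum(g1[l] * msg[i - l] for l in range(L + 1)
                 if 0 <= i - l < len(msg)) % 2
        x2 = sum(g2[l] * msg[i - l] for l in range(L + 1)
                 if 0 <= i - l < len(msg)) % 2
        out += [x1, x2]
    return out

code = conv_encode([1, 0, 0, 1, 1], [1, 1, 1], [1, 0, 1])
print(code)  # -> [1,1, 1,0, 1,1, 1,1, 0,1, 0,1, 1,1]
```

This reproduces the codeword 11 10 11 11 01 01 11 obtained above.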

4.16.3.2 Transform domain approach

We know that the convolution in time domain is transformed into

the multiplication of Fourier transforms in the frequency domain. We

can use this principle in the transform domain approach.

In this approach, the first step is to replace each path in the encoder by a polynomial whose coefficients are the respective elements of the impulse response.

The input-to-top-adder output path of the encoder can be expressed in terms of the polynomial as under:

    G(1)(p) = g0(1) + g1(1) p + g2(1) p^2 + .... + gL(1) p^L    ... (1)

Here, the power of p defines the number of time units by which the associated bit in the impulse response has been delayed with respect to the first bit, (i.e.,) g0(1).

Similarly, the polynomial corresponding to the input-to-bottom-adder output path of the encoder is given by

    G(2)(p) = g0(2) + g1(2) p + g2(2) p^2 + .... + gL(2) p^L    ... (2)

The polynomials G(1)(p) and G(2)(p) are called the generator polynomials of the code.

From the generator polynomials, we can obtain the code word polynomials as under.

The code word polynomial corresponding to the top adder is given by

    x(1)(p) = G(1)(p) . m(p)

where m(p) = message polynomial = m0 + m1 p + m2 p^2 + ....


Similarly, the code word polynomial corresponding to the bottom adder is given by x(2)(p) = G(2)(p) . m(p). Once we get the code word polynomials, it is easy to read off the individual coefficients. This has been illustrated in the following problem.

Problem 1

Determine the codeword for the convolutional encoder of the figure for the message sequence (1 0 0 1 1), using the transform domain approach. The impulse response of the input-to-top-adder output path is (1, 1, 1) and that of the input-to-bottom-adder path is (1, 0, 1).

Solution

First, let us write the generator polynomial G(1)(p).

The impulse response of the input-to-top-adder output path of the convolutional encoder is (1, 1, 1). Therefore, we have

    g0(1) = 1,  g1(1) = 1  and  g2(1) = 1

    G(1)(p) = g0(1) + g1(1) p + g2(1) p^2

or  G(1)(p) = 1 + p + p^2    ... (1)

The message polynomial is given by,

M(p) = m0 + m1p + m2p2 + m3p3 + m4p4

or M(p) = 1 + p3 + p4 ... (2)

Now, we find the code word polynomial for the top adder

x(1)(p) = G(1)(p) . M(p)

x(1)(p) = (1 + p + p2)(1 + p3 + p4)

= 1 + p3 + p4 + p + p4 + p5 + p2 + p5 + p6

or x(1)(p) = 1 + p + p2 + p3 + (1 + 1)p4 + (1 + 1)p5 + p6

or x(1)(p) = 1 + p + p2 + p3 + p6 ...... (addition is Mod-2) ... (3)

ANALOG AND DIGITAL COMMUNICATION

The impulse response of the input bottom adder output path of the convolutional encoder of figure is (1, 0, 1). Therefore,

g0(2) = 1, g1(2) = 0, and g2(2) = 1

or G(2)(p) = 1 + p2 ... (4)

x(2)(p) = G(2)(p) . M(p) = (1 + p2)(1 + p3 + p4)

or x(2)(p) = 1 + 0p + p2 + p3 + p4 + p5 + p6

Next, let us obtain the code sequences from the codeword polynomials.

x(1)(p) = 1 + p + p2 + p3 + p6 gives the top adder output sequence (1 1 1 1 0 0 1), and x(2)(p) = 1 + p2 + p3 + p4 + p5 + p6 gives the bottom adder output sequence (1 0 1 1 1 1 1). Multiplexing the two sequences pair by pair gives

Codeword = 11 10 11 11 01 01 11

It may be observed that the codeword obtained using the time domain approach and the transform domain approach is the same. However, the computation using the transform domain approach demands less effort than the time domain approach.
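The transform-domain computation above is just polynomial multiplication over GF(2), i.e., convolution of the coefficient sequences with mod-2 addition. A minimal sketch (not from the text; the helper name `poly_mul_gf2` is ours) that reproduces the worked example:

```python
# Sketch (illustrative, not the author's code): transform-domain
# encoding as GF(2) polynomial multiplication. Coefficient lists are
# ordered by ascending powers of p.

def poly_mul_gf2(a, b):
    """Multiply two polynomials over GF(2) (coefficients mod 2)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj   # mod-2 addition is XOR
    return out

# Generator polynomials and message of the worked problem:
g1 = [1, 1, 1]        # G1(p) = 1 + p + p^2   (top adder)
g2 = [1, 0, 1]        # G2(p) = 1 + p^2       (bottom adder)
m  = [1, 0, 0, 1, 1]  # M(p)  = 1 + p^3 + p^4 (message 1 0 0 1 1)

x1 = poly_mul_gf2(g1, m)   # [1,1,1,1,0,0,1] -> 1 + p + p^2 + p^3 + p^6
x2 = poly_mul_gf2(g2, m)   # [1,0,1,1,1,1,1] -> 1 + p^2 + ... + p^6

# Multiplex the two output streams to form the codeword
codeword = " ".join(f"{a}{b}" for a, b in zip(x1, x2))
print(codeword)   # 11 10 11 11 01 01 11
```

The k = 5 message bits produce 14 coded bits, matching the length noted at the start of this section.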

4.16.4 Graphical representation for convolutional encoding

For convolutional encoding, there are three different graphical representations that are widely used. They are related to each other. The graphical representations are as under:

(i) The code tree

(ii) The code trellis

(iii) The state diagram


The two successive message bits m1 and m2 decide the state. The incoming message bit m changes the state of the encoder and hence the outputs x1 and x2. When a new message bit enters m, the contents of m1 and m2 define the new state, and according to the new state the outputs x1 and x2 also change.

Assume the initial values of m1 and m2 to be zero, i.e., m2 m1 = 0 0 initially.

m2 m1    State of encoder
0  0     a
0  1     b
1  0     c
1  1     d

4.16.4.1 THE CODE TREE

Let us draw the code tree for the (2, 1) encoder. We assume that the register has been cleared so that it contains all zeros, i.e., the initial state is m2 m1 = 0 0. Let us consider the input message sequence m = 0 1 0. When m = 0, x1 and x2 can be determined as,

(i) When the input message bit m = 0 (first bit)

x1 = m ⊕ m1 ⊕ m2 = 0 ⊕ 0 ⊕ 0 = 0

x2 = m ⊕ m2 = 0 ⊕ 0 = 0

Therefore x1 x2 = 0 0

Register contents before shift: m m1 m2 = 0 0 0; output x1 x2 = 0 0; after the shift the contents are again 0 0 0 and the bit shifted out of m2 is discarded, giving the new state.

The values x1 x2 = 0 0 are transmitted to the output and the register contents are shifted to the right by one bit. The new state is formed. The code tree has been drawn in figure 4.14. It begins at a branch point on node 'a' which represents the initial state. Hence if m = 0, we should take the upper branch from node 'a' to obtain the output x1 x2 = 0 0. The new state of the encoder is m2 m1 = 0 0 (or) a.

(ii) When the input message bit m =1 (second bit)

When m = 1, x1 and x2 can be determined as,

x1 = m ⊕ m1 ⊕ m2 = 1 ⊕ 0 ⊕ 0 = 1

x2 = m ⊕ m2 = 1 ⊕ 0 = 1

Register contents before shift: m m1 m2 = 1 0 0; output x1 x2 = 1 1; after the shift m1 m2 = 1 0 and the bit shifted out of m2 is discarded, giving the new state.

The values x1 x2 = 1 1 are transmitted and the contents of the register are shifted to the right by one bit. The next state is formed. Now m2 m1 = 0 1, i.e., state b. Hence we should take the lower branch (since m = 1) from a to b to obtain the output x1 x2 = 1 1. This operation is illustrated in the second row of the table.

(iii) When the input message bit m = 0 (Third bit)

x1 = m ⊕ m1 ⊕ m2 = 0 ⊕ 1 ⊕ 0 = 1

x2 = m ⊕ m2 = 0 ⊕ 0 = 0

Register contents before shift: m m1 m2 = 0 1 0; output x1 x2 = 1 0; after the shift m1 m2 = 0 1 and the bit shifted out of m2 is discarded, giving the new state.


Figure 4.15 Code tree for the (2, 1) convolutional encoder. The tree starts at node a; the upper branch from a node is taken when the message bit is 0 and the lower branch when it is 1, and each branch is labelled with the corresponding output x1 x2.

The values x1 x2 = 1 0 are transmitted and the register contents are shifted right by one bit. The next state is formed. Now m2 m1 = 1 0, i.e., state c. For the 3rd bit, as m = 0, we should take the upper branch from node b to node c to obtain the output x1 x2 = 1 0. The complete code tree for the convolutional encoder is shown in figure 4.15.
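The bit-by-bit encoding steps above can be sketched as follows (an illustrative sketch, not the author's code; `encode_step` and the state naming follow the table in this section):

```python
# Sketch (illustrative): the (2, 1) encoder of this section, stepped
# one message bit at a time. The register state is (m1, m2); after
# each bit the register shifts right and the old m2 is discarded.

def encode_step(m, m1, m2):
    x1 = m ^ m1 ^ m2          # top adder
    x2 = m ^ m2               # bottom adder
    return (x1, x2), (m, m1)  # outputs, new (m1, m2) after the shift

# keyed on (m1, m2); the textbook names states by m2 m1: 00=a, 01=b, ...
state_name = {(0, 0): "a", (1, 0): "b", (0, 1): "c", (1, 1): "d"}

state = (0, 0)                 # register cleared: initial state a
for m in [0, 1, 0]:            # the message sequence of this example
    out, state = encode_step(m, *state)
    print(m, out, state_name[state])
# 0 (0, 0) a
# 1 (1, 1) b
# 0 (1, 0) c
```

The printed states a, b, c and outputs 00, 11, 10 match the path traced through the code tree above.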


4.16.4.2 The Code Trellis

Figure 4.16 shows a more compact graphical representation which is popularly known as the code trellis. Here, the nodes on the left denote the four possible current states and the nodes on the right are the resulting next states.

Figure 4.16 Code trellis for the (2, 1) convolutional encoder

A solid line represents the state transition for m = 0 and a dotted line represents the transition for m = 1. Each branch is labelled with the resulting output bits x1 x2.

4.16.4.3 State Diagram

The figure below shows the state diagram for the encoder. We can obtain this state diagram from the code trellis by coalescing the left and right sides of the trellis. The self loops at the nodes a and d represent the state transitions a-a and d-d.

Figure State diagram for the (2, 1) convolutional encoder. Each branch is labelled with the output code word x1 x2; nodes a and d have self loops.


A solid line represents the state transition for m0 = 0 and a dotted line represents the transition for m0 = 1. Each branch is labelled with the resulting output bits x1 x2.

There are three important methods for decoding convolutional codes. They are as under:

(i) Viterbi algorithm

(ii) Feedback decoding

(iii) Sequential decoding

The table below summarises the encoder operation for the message sequence m = 0 1 0. In the code tree, an upward branch indicates message bit 0 and a downward branch indicates message bit 1.

S.No  Input bit m  Register after entry of m  Calculation of outputs             New state   Transmitted
                   (m m1 m2)                                                     (m2 m1)     output x1 x2
1     0            0 0 0                      x1 = 0 ⊕ 0 ⊕ 0 = 0, x2 = 0 ⊕ 0 = 0  0 0, i.e., a   0 0
2     1            1 0 0                      x1 = 1 ⊕ 0 ⊕ 0 = 1, x2 = 1 ⊕ 0 = 1  0 1, i.e., b   1 1
3     0            0 1 0                      x1 = 0 ⊕ 1 ⊕ 0 = 1, x2 = 0 ⊕ 0 = 0  1 0, i.e., c   1 0


The Viterbi algorithm operates on the principle of maximum likelihood decoding and achieves optimum performance. The maximum likelihood decoder has to examine the entire received sequence Y and find a valid path which has the smallest Hamming distance from Y. But there are 2N possible paths for a message sequence of N bits. Before describing the Viterbi algorithm for the decoding of convolutional codes, it is necessary to define certain important terms.

Metric

It is defined as the Hamming distance of each branch of each surviving path from the corresponding branch of Y (the received signal). The metric is defined by assuming that 0's and 1's have the same transmission error probability. It is the discrepancy between the received signal Y and the decoded signal at a particular node.

Surviving path

(i) Let the received signal be represented by Y. The Viterbi decoder assigns a metric to each branch of each surviving path.

(ii) By summing the branch metrics we get the path metric. To understand more about the Viterbi algorithm, let us solve the example given below.

Problem 1

Given the convolutional encoder of figure and a received signal Y = 11 01 11, show the first three branches of the valid paths emerging from the initial node a0 in the code trellis.

Solution

The received or input signal Y = 11 01 11.

Let us consider the trellis diagram of figure for the convolutional

encoder. It shows that for the current a, the next state will be a or b

depending on the message bit m0 = 0 or 1. We have redrawn these two

branches in figure where a0 represents the initial state and a1 and b1

represent the next possible states. The solid line represents the branch

for m0=0 and dotted line represents the branch for m0=1.


(Received signal Y = 11: the branch a0 - a1 with output 00 has branch metric (2); the branch a0 - b1 with output 11 has branch metric (0). Branch metric = difference between the encoded signal and the corresponding received signal Y.)

Figure 4.18 First step in Viterbi’s algorithm

In the figure, the numbers in brackets written below the branches represent the branch metrics, obtained by taking the difference between the encoded signal and the corresponding received signal Y. For example, for the branch a0 to a1, the encoded signal is 00 and the received signal is 11; the discrepancy between these two signals is 2, hence the branch metric is (2). For the path a0 b1, the encoded signal 11 matches the received signal, hence the branch metric is (0).

The encircled numbers at the right hand end of each branch represent the running path metric, obtained by summing the branch metrics from a0. For example, the running path metric for the branch a0 a1 is 2 and that for the branch a0 b1 is 0.

When the next pair of input bits, i.e., Y = 01, is received at the nodes a1 and b1, four possible branches emerge from these two nodes, as shown in the figure. The next four possible states are a2, b2, c2 and d2.

The numbers in the brackets written below each branch represent the branch metric. For example, for branch a1 a2, the branch metric is (1), obtained by taking the difference between the encoded signal 00 and the received signal 01. The running path metric for the same branch is 3, obtained by adding the branch metrics from a0 [(2) + (1) = 3, from a0 to a1 and a1 to a2].


Figure Branches emerging from a1 and b1 for y = 01, with branch metrics in brackets and running path metrics at the node ends; the next four possible states are a2, b2, c2 and d2.

Similarly, the path metric for the path a0 a1 b2 is 3, that of the path a0 b1 d2 is 0, and so on. The Viterbi algorithm for all the input bits is as shown in figure 4.19.

Figure 4.20 Paths and their path metrics for the Viterbi algorithm (Y = 11 01 11; branch metrics in brackets, running path metrics at the nodes)


1. There are different paths shown in the figure. We must carefully note the path metric of each path. For example, the path a0 a1 a2 a3 has a path metric equal to 5. The other paths and their path metrics are listed in the table.

Table 4.1

1. a0 a1 a2 a3               5    ✗
2. a0 a1 a2 b3 (survivor)    3    ✓
3. a0 a1 b2 c3               4    ✗
4. a0 a1 b2 d3               4    ✗
5. a0 b1 c2 a3 (survivor)    2    ✓
6. a0 b1 c2 b3               4    ✗
7. a0 b1 d2 c3 (survivor)    1    ✓
8. a0 b1 d2 d3 (survivor)    1    ✓

2. The path a0 a1 a2 b3 is a survivor because it has a smaller Hamming distance from Y than the other paths arriving at b3. Thus, this path is more likely to represent the actual transmitted sequence. Therefore, we discard the larger-metric paths arriving at the nodes a3, b3, c3 and d3, leaving in total 2^(kL) = 4 surviving paths, marked with the (✓) sign in the table.

The paths marked by an (✗) in the table and the figure are the larger-metric paths and hence they are discarded; at each node the path with the smaller metric is declared the survivor. Note that there is one survivor for each node (table).

Important Point: Thus the surviving paths are a0 a1 a2 b3, a0 b1 c2 a3, a0 b1 d2 c3 and a0 b1 d2 d3. None of the surviving path metrics is equal to zero. This shows the presence of detectable errors in the received signal Y.

Figure 4.22 depicts the continuation of the decoding for a complete message of N = 12 bits. All the discarded branches and all labels except the running path metrics have been omitted for the sake of simplicity.

If two paths have the same metric, either of them is continued; under such circumstances the choice of survivor is arbitrary. The maximum likelihood path follows the thick line from a0 to a12, as shown in figure 4.22. The final value of the path metric is 2, which shows that there are at least two transmission errors present in Y.

Figure 4.22 Complete Viterbi trellis for the N = 12 bit message; the maximum likelihood path is shown as a thick line and its final path metric is 2.

Y = 11 01 11 00 01 10 00 11 11 10 11 00

Y + E = 11 01 01 00 01 10 01 11 11 10 11 00 (decoded transmitted sequence)

M = 1 1 0 1 1 1 0 0 1 0 0 0 (decoded signal)

The decoder takes Y + E as the corresponding transmitted sequence; the message sequence M has been written below the trellis.

From figure 4.22, we observe that at node a12 only one path has arrived with metric 2. This path is shown by a dark line. Since this path has the lowest metric, it is the surviving path, and the signal Y is decoded from it. Wherever this path is a solid line the message bit is 0, and wherever it is a dotted line the message bit is 1.
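The whole procedure — branch metrics, one survivor per state, and finally picking the minimum-metric path — can be sketched as a small hard-decision Viterbi decoder for this section's (2, 1) encoder. This is our own illustrative implementation, not the author's:

```python
# Sketch (illustrative): hard-decision Viterbi decoder for the (2, 1)
# encoder of this section (x1 = m^m1^m2, x2 = m^m2). States are
# (m1, m2); branch metric = Hamming distance between a branch output
# and the received pair; path metric = running sum of branch metrics.

def branches(state):
    """Yield (message_bit, output_pair, next_state) for a state."""
    m1, m2 = state
    for m in (0, 1):
        yield m, (m ^ m1 ^ m2, m ^ m2), (m, m1)

def viterbi_decode(received_pairs):
    # survivor per state: (path metric, decoded message so far)
    paths = {(0, 0): (0, [])}
    for r in received_pairs:
        new_paths = {}
        for state, (metric, msg) in paths.items():
            for m, out, nxt in branches(state):
                bm = (out[0] != r[0]) + (out[1] != r[1])  # branch metric
                cand = (metric + bm, msg + [m])
                # keep only the smallest-metric path per next state
                if nxt not in new_paths or cand[0] < new_paths[nxt][0]:
                    new_paths[nxt] = cand
        paths = new_paths
    return min(paths.values())      # (final metric, decoded message)

Y = [(1,1),(0,1),(1,1),(0,0),(0,1),(1,0),(0,0),(1,1),
     (1,1),(1,0),(1,1),(0,0)]      # received signal of figure 4.22
metric, message = viterbi_decode(Y)
print(metric, message)
```

Decoding the 12-pair received signal of figure 4.22 yields metric 2 and the message 1 1 0 1 1 1 0 0 1 0 0 0, matching the trellis result above.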

4.17.2 Metric Diversion Effect

For a large number of message bits to be decoded, the storage requirement becomes extremely large. This problem can be reduced by exploiting the metric diversion effect, which is used to reduce the required memory storage.

4.17.3 Free distance and coding gain

The error detection and correction capability of the block and cyclic codes is dependent on the minimum distance, dmin, between the code

vectors. But in the case of a convolutional code, the entire transmitted sequence is to be considered as a single code vector. Therefore, the free distance (dfree) is defined as the minimum distance between the code vectors. Since the code is linear, the minimum distance between code vectors is the same as the minimum weight of a non-zero code vector. Hence, the free distance is equal to the minimum weight of the code vector.

Therefore,

Free distance dfree = Minimum distance = Minimum weight of the code vectors

If X represents the transmitted signal, then the free distance is given by

dfree = [W(X)]min, X non-zero

Just as the minimum distance decides the capacity of block or cyclic codes to detect and correct errors, the free distance decides the error control capacity of a convolutional code.
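Since dfree is the minimum weight of a non-zero terminated code sequence, it can be found by a brute-force search over short messages. A sketch (ours, not the text's) for this section's (2, 1) encoder:

```python
# Sketch (illustrative): brute-force free-distance search for the
# (2, 1) encoder of this section. dfree is taken as the minimum weight
# over all non-zero messages, each flushed back to the zero state.

from itertools import product

def encode(bits):
    m1 = m2 = 0
    out = []
    for m in bits:
        out += [m ^ m1 ^ m2, m ^ m2]   # top adder, bottom adder
        m1, m2 = m, m1                 # shift register
    return out

d_free = min(
    sum(encode(list(msg) + [0, 0]))    # flush with K - 1 = 2 zeros
    for n in range(1, 7)
    for msg in product([0, 1], repeat=n)
    if any(msg)                        # non-zero messages only
)
print(d_free)   # 5
```

The minimum is attained already by the single-bit message 1, whose flushed output 11 10 11 has weight 5.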

Coding gain (A)

The coding gain (A) is defined as the ratio of (Eb/N0) of the uncoded signal to (Eb/N0) of the coded signal. The coding gain is used for comparing different coding techniques.

Coding gain A = (Eb/N0) uncoded / (Eb/N0) coded = r dfree / 2

where r = code rate and dfree = the free distance
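As a worked instance of the formula (illustrative numbers of our choosing: a rate r = 1/2 code with dfree = 5, the values for the encoder used in this chapter):

```latex
A = \frac{(E_b/N_0)_{\text{uncoded}}}{(E_b/N_0)_{\text{coded}}}
  = \frac{r\, d_{\text{free}}}{2}
  = \frac{(1/2)(5)}{2} = 1.25
  \;\Rightarrow\; 10\log_{10}(1.25) \approx 0.97\ \text{dB}
```

That is, this code buys roughly 1 dB of Eb/N0 compared with uncoded transmission.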

Problem 1

Using the encoder of the figure given below, an all-zero sequence is generated and sent over a binary symmetric channel. The received sequence is 01 00 10 00... There are two errors in this sequence (at the second and fifth positions). Show that this double error can be detected and corrected by application of the Viterbi algorithm.

Figure Encoder for Problem 1: input shift register with stages s2 and s3 and two mod-2 adders producing the output.


Solution

The trellis diagram for the encoder is shown in the figure below.

Figure Code trellis for the encoder of Problem 1 (solid lines for input 0, dotted lines for input 1; each branch labelled with its output).

Using this trellis, the Viterbi decoding diagram for the received sequence 01 00 10 00 is drawn.

Figure Viterbi decoding trellis for Y = 01 00 10 00, with branch metrics in brackets and running path metrics at the nodes; the maximum likelihood path is a0 - a1 - a2 - a3 - a4.

From the Viterbi diagram, we can write the possible paths for each state a4, b4, c4 and d4 and the running path metric for each path, as shown below.

State a4:
a0 - a1 - a2 - a3 - a4    2    ✓ (survivor)
a0 - a1 - b2 - c3 - a4    5    ✗

State b4:
a0 - a1 - a2 - a3 - b4    4    ✗
a0 - b1 - c2 - a3 - b4    5    ✗
a0 - b1 - d2 - a3 - b4    5    ✗
a0 - a1 - b2 - c3 - b4    3    ✓ (survivor)

State c4:
a0 - a1 - a2 - b3 - c4    3    ✓ (survivor)
a0 - b1 - c2 - b3 - c4    4    ✗
a0 - b1 - d2 - d3 - c4    3    ✗
a0 - a1 - b2 - d3 - c4    6    ✗

State d4:
a0 - a1 - a2 - b3 - d4    3    ✓ (survivor)
a0 - b1 - c2 - b3 - d4    4    ✗
a0 - b1 - d2 - d3 - d4    3    ✗
a0 - a1 - b2 - d3 - d4    9    ✗

At each state, the survivor is the path having the minimum value of running path metric (when two paths tie, one of them is chosen arbitrarily). The survivor paths are marked with the (✓) sign.

They are as under:

1. a4 → a0 - a1 - a2 - a3 - a4 2

2. b4 → a0 - a1 - b2 - c3 - b4 3

3. c4 → a0 - a1 - a2 - b3 - c4 3

4. d4 → a0 - a1 - a2 - b3 - d4 3

Out of these survivor paths, the path having the minimum running path metric, equal to 2, is the path a0 - a1 - a2 - a3 - a4. Hence the encoded signal corresponding to this path is given by

a4 → 00 00 00 00

This corresponds to the received signal 01 00 10 00.

Therefore, Received signal → 01 00 10 00

Encoded signal → 00 00 00 00

This shows that Viterbi algorithm can correct the errors present

in the received signal.

Problem 2

For the convolutional encoder arrangement shown in the figure, draw the state diagram and hence the trellis diagram. Determine the output digit sequence for the data digits 1 1 0 1 0 1 0 0. What are the dimensions of the code (n, k) and the constraint length?

Solution

(i) To obtain dimension of the code:

Observe that one message bit is taken at a time in the encoder of the figure. Hence the dimension of the code is (n, k) = (3, 1).

Figure Encoder for Problem 2: shift register stages m, m1, m2 with mod-2 adders producing the output sequences xi(1), xi(2) and xi(3).

(ii) To obtain the constraint length

Here note that every message bit affects three output bits. Hence the constraint length is K = 3 bits.

(iii) To obtain the code trellis and state diagram

Let the states of the encoder be as given in the table. The figure below shows the code trellis of the given encoder.


Figure Code trellis for the (3, 1) encoder of Problem 2 (solid lines for input 0, dotted lines for input 1; each branch labelled with its three output bits).

If we coalesce the left and right nodes of the trellis, we get the state diagram as shown in the below figure.

Figure State diagram for the encoder of Problem 2. Each branch is labelled with the output code word; bold lines represent input 0 and dotted lines represent input 1.

(a) Obtain generator polynomials

The generating sequence for xi(1) can be written from the given figure:

gi(1) = {1, 0, 0}, since only m is connected.

Similarly, the generating sequence for xi(2) is gi(2) = {1, 0, 1}, since m and m2 are connected, and that for xi(3) is gi(3) = {1, 1, 0}, since m and m1 are connected.


Hence the corresponding generating polynomials can be written as,

G(1) (p) = 1

G(2) (p) = 1+p2

G(3) (p) = 1+p

(b) Obtain message polynomials

The given message sequence is

m = { 1 1 0 1 0 1 0 0}

Hence the message polynomial will be,

M (p) = 1 + p + p3 + p5

(c) Obtain output for gi(1)

The first sequence xi(1) is given as,

xi(1) = G(1)(p) . M(p) = 1 . (1 + p + p3 + p5) = 1 + p + p3 + p5

Hence the corresponding sequence will be

{xi(1)} = {1 1 0 1 0 1}

(d) Obtain output for gi(2)

The second sequence xi(2) is given as,

xi(2) = G(2)(p) . M(p) = (1 + p2) . (1 + p + p3 + p5) = 1 + p + p2 + p7

Hence the corresponding sequence will be

xi(2) = {1 1 1 0 0 0 0 1}

(e) Obtain output for gi(3)

The third sequence xi(3) is given as,

xi(3) = G(3)(p) . M(p) = (1 + p) . (1 + p + p3 + p5) = 1 + p2 + p3 + p4 + p5 + p6

Hence the corresponding sequence is

xi(3) = {1 0 1 1 1 1 1}

(f) To multiplex the three output sequences

The three sequences xi(1), xi(2) and xi(3) are made equal in length, i.e., 8 bits, by appending zeros. These sequences are shown below:

xi(1) = {1 1 0 1 0 1 0 0}

xi(2) = {1 1 1 0 0 0 0 1}

xi(3) = {1 0 1 1 1 1 1 0}

The bits from the above three sequences are multiplexed:

{xi} = {111 110 011 101 001 101 001 010}
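The polynomial products and the multiplexing of Problem 2 can be checked mechanically. A sketch (ours; `poly_mul_gf2` and `pad` are illustrative helper names):

```python
# Sketch (illustrative): verifying the multiplexed output of Problem 2
# by GF(2) polynomial multiplication, padding each output stream to
# the 8-bit message length before interleaving.

def poly_mul_gf2(a, b):
    """Multiply two polynomials over GF(2), ascending powers of p."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj
    return out

def pad(seq, n):
    """Truncate or zero-pad a coefficient list to length n."""
    return (seq + [0] * n)[:n]

m  = [1, 1, 0, 1, 0, 1, 0, 0]       # M(p) = 1 + p + p^3 + p^5
g1 = [1]                            # G1(p) = 1
g2 = [1, 0, 1]                      # G2(p) = 1 + p^2
g3 = [1, 1]                         # G3(p) = 1 + p

x1, x2, x3 = (pad(poly_mul_gf2(g, m), len(m)) for g in (g1, g2, g3))
out = " ".join(f"{a}{b}{c}" for a, b, c in zip(x1, x2, x3))
print(out)   # 111 110 011 101 001 101 001 010
```

The result matches the multiplexed sequence {xi} obtained above.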


Problem 3

A rate-1/3 convolutional encoder has the generating sequences g1 = (1 0 0), g2 = (1 1 1) and g3 = (1 0 1).

(i) Draw the encoder. (ii) Draw the code tree, state diagram and trellis diagram.

Solution

We know that rate = k/n = 1/3, ∴ k = 1, n = 3

Here k = 1 and n = 3. This means each message bit generates three output bits. There will be a three-stage shift register containing m, m1 and m2.

Since g1 = (1 0 0), x1 = m

Since g2 = (1 1 1), x2 = m ⊕ m1 ⊕ m2

Since g3 = (1 0 1), x3 = m ⊕ m2

Figure Encoder for Problem 3: shift register stages m, m1, m2 with mod-2 adders producing the three output sequences.

(a) To obtain the trellis diagram

The two bits m2 m1 in the shift register indicate the state of the encoder, as listed in the table below.

m2 m1 State

0 0 a

0 1 b

1 0 c

1 1 d

The table below shows the state transition calculations, with x1 = m, x2 = m ⊕ m1 ⊕ m2 and x3 = m ⊕ m2.

Current state    Input m    Outputs x1 x2 x3    Next state (m2 m1)
(m2 m1)
a = 0 0          0          0 0 0               0 0, i.e., a
                 1          1 1 1               0 1, i.e., b
b = 0 1          0          0 1 0               1 0, i.e., c
                 1          1 0 1               1 1, i.e., d
c = 1 0          0          0 1 1               0 0, i.e., a
                 1          1 0 0               0 1, i.e., b
d = 1 1          0          0 0 1               1 0, i.e., c
                 1          1 1 0               1 1, i.e., d

Figure Code trellis for the encoder of Problem 3 (solid lines for input 0, dotted lines for input 1; each branch labelled with its outputs x1 x2 x3).


If we combine the nodes in the trellis diagram, then we get the state diagram. It is shown below.

Figure State diagram for the encoder of Problem 3.

The code tree can be constructed with the help of the state diagram. The following steps are performed:

i. Start with any node (normally node a)

ii. Draw its next states for m = 0 and m = 1

iii. For every state, determine the next states for m = 0 and m = 1

iv. Repeat step iii till the code tree starts repeating

Assumptions: an upward movement in the code tree indicates m = 0 and a downward movement indicates m = 1.

Based on the above procedure, the code tree is developed as shown in the figure below.


Figure Code tree for the encoder of Problem 3, starting at node a; upper branches correspond to m = 0 and lower branches to m = 1, and each branch is labelled with its three output bits.
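The state-transition table of Problem 3 can be generated directly from the output equations x1 = m, x2 = m ⊕ m1 ⊕ m2, x3 = m ⊕ m2. A sketch (ours, not the author's):

```python
# Sketch (illustrative): enumerate the state transitions of the
# Problem 3 encoder. The state is (m2, m1), named a..d as in the table.

names = {(0, 0): "a", (0, 1): "b", (1, 0): "c", (1, 1): "d"}

transitions = {}
for (m2, m1), cur in names.items():
    for m in (0, 1):
        x1, x2, x3 = m, m ^ m1 ^ m2, m ^ m2    # output equations
        nxt = names[(m1, m)]                   # shift: m2 <- m1, m1 <- m
        transitions[(cur, m)] = (f"{x1}{x2}{x3}", nxt)

for (cur, m), (out, nxt) in sorted(transitions.items()):
    print(f"{cur} --{m}/{out}--> {nxt}")
```

The printed transitions (e.g. a --1/111--> b, d --0/001--> c) are exactly the rows of the state transition table above, and they are the raw data from which the trellis, state diagram and code tree are drawn.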
