
DC (DIGITAL COMMUNICATIONS)

COURSE FILE

Contents

1. Cover Page
2. Syllabus copy
3. Vision of the Department
4. Mission of the Department
5. PEOs and POs
6. Course objectives and outcomes
7. Brief notes on the importance of the course and how it fits into the curriculum
8. Prerequisites
9. Instructional Learning Outcomes
10. Course mapping with PEOs and POs
11. Class Time Table
12. Individual Time Table
13. Micro Plan with dates and closure report
14. Detailed notes
15. Additional topics
16. University Question papers of previous years
17. Question Bank
18. Assignment topics
19. Unit wise Quiz Questions
20. Tutorial problems
21. Known gaps, if any
22. Discussion topics
23. References, Journals, websites and E-links
24. Quality Control Sheets
25. Student List
26. Group-Wise students list for discussion topics
GEETHANJALI COLLEGE OF ENGINEERING AND TECHNOLOGY
Department Of Electronics and Communication Engineering
(Name of the Subject / Lab Course): Digital Communications
(JNTU CODE – A60420) Programme: UG

Branch: ECE Version No: 03

Year: III Year ECE Document Number: GCET/ECE/03
Semester: II No. of pages :
Classification status (Unrestricted / Restricted): Unrestricted

Distribution List :

Prepared by :

1) Name : Ms. M. Hemalatha 1) Name : Mrs. S. Krishna Priya


2) Sign : 2) Sign :
3) Design : Asst. Prof. 3) Design : Assoc. Prof.
4) Date :28/11/2015 4) Date: 28/11/2015

Verified by : 1) Name: Mr. D. Venkata Rami Reddy * For Q.C Only.

2) Sign : 1) Name :
3) Design : 2) Sign :

4) Date : 30/11/2015 3) Design :


4) Date :

Approved by : (HOD ) 1) Name: Dr. P. Srihari

2) Sign :
3) Date:
2. Syllabus Copy
Jawaharlal Nehru Technological University Hyderabad, Hyderabad

DIGITAL COMMUNICATIONS

Programme: B.Tech (ECE) Year & Sem: III B.Tech II Sem

UNIT I : Elements Of Digital Communication Systems

Advantages of digital communication systems, Bandwidth- S/N trade off, Hartley Shannon
law, Sampling theorem

Pulse Coded Modulation

PCM generation and reconstruction , Quantization noise, Non-uniform Quantization and


Companding, Differential PCM systems (DPCM), Adaptive DPCM, Delta modulation,
adaptive delta modulation, Noise in PCM and DM systems

UNIT II : Digital Modulation Techniques

Introduction, ASK, ASK Modulator, Coherent ASK detector, Non-Coherent ASK detector,
FSK, Bandwidth and frequency spectrum of FSK, Non-Coherent FSK detector, Coherent FSK
detector, FSK Detection using PLL, BPSK, Coherent PSK detection, QPSK, Differential
PSK

UNIT III: Base Band Transmission And Optimal Reception of Digital Signal

Pulse shaping for optimum transmission, A base band signal receiver, Probability of error,
Optimum receiver, Optimum coherent reception, Signal space representation and
probability of error, Eye diagrams for ASK, FSK and PSK, Cross talk

Information Theory

Information and entropy, Conditional entropy and redundancy, Shannon–Fano coding, Mutual
information, Information loss due to noise, Source coding: Huffman code, variable length
coding, Source coding to increase average information per bit, Lossy source coding

UNIT IV : Error Control Codes

Matrix description of linear block codes, Error detection and error correction capabilities of
linear block codes,

Cyclic codes: Algebraic structure, encoding, Syndrome calculation, decoding


Convolution Codes:

Encoding, decoding using state, Tree and trellis diagrams, Decoding using Viterbi algorithm,
Comparison of error rates in coded and uncoded transmission.

UNIT V : Spread Spectrum Modulation

Use of spread spectrum, Direct sequence spread spectrum (DSSS), Code division multiple
access, Ranging using DSSS, Frequency hopping spread spectrum, PN sequences:
generation and characteristics, Synchronization in spread spectrum systems

TEXT BOOKS:

1. Principles of Communication Systems – Herbert Taub, Donald L. Schilling, Goutam
   Saha, 3rd edition, McGraw-Hill, 2008
2. Digital and Analog Communication Systems – Sam Shanmugam, John Wiley, 2005
3. Digital Communications – John G. Proakis, Masoud Salehi, 5th edition, McGraw-Hill,
   2008

REFERENCES:

1. Digital Communications – Simon Haykin, John Wiley, 2005
2. Digital Communications – I. A. Glover, P. M. Grant, 2nd edition, Pearson Education,
   2008
3. Communication Systems – B. P. Lathi, BS Publications, 2006
4. A First Course in Digital Communications – Nguyen, Shwedyk, Cambridge
5. Digital Communication – Theory, Techniques, and Applications – R. N. Mutagi, 2nd
   edition, 2013
3. Vision of the Department

To impart quality technical education in Electronics and Communication Engineering


emphasizing analysis, design/synthesis and evaluation of hardware/embedded software using
various Electronic Design Automation (EDA) tools with accent on creativity, innovation and
research thereby producing competent engineers who can meet global challenges with
societal commitment.

4. Mission of the Department


i. To impart quality education in fundamentals of basic sciences, mathematics, electronics
and communication engineering through innovative teaching-learning processes.
ii. To facilitate graduates to define, design, and solve engineering problems in the field of
Electronics and Communication Engineering using various Electronic Design Automation
(EDA) tools.
iii. To encourage research culture among faculty and students thereby facilitating them to be
creative and innovative through constant interaction with R & D organizations and
Industry.
iv. To inculcate teamwork, imbibe leadership qualities, professional ethics and social
responsibilities in students and faculty.

5. Program Educational Objectives and Program outcomes of B. Tech (ECE)


Program

Program Educational Objectives of B. Tech (ECE) Program :

I. To prepare students with excellent comprehension of basic sciences, mathematics and


engineering subjects facilitating them to gain employment or pursue postgraduate
studies with an appreciation for lifelong learning.
II. To train students with problem solving capabilities such as analysis and design with
adequate practical skills wherein they demonstrate creativity and innovation that
would enable them to develop state of the art equipment and technologies of
multidisciplinary nature for societal development.
III. To inculcate positive attitude, professional ethics, effective communication and
interpersonal skills which would facilitate them to succeed in the chosen profession
exhibiting creativity and innovation through research and development both as team
member and as well as leader.

Program Outcomes of B.Tech ECE Program:

1. An ability to apply knowledge of Mathematics, Science, and Engineering to solve


complex engineering problems of Electronics and Communication Engineering
systems.
2. An ability to model, simulate and design Electronics and Communication Engineering
systems, conduct experiments, as well as analyze and interpret data and prepare a
report with conclusions.
3. An ability to design an Electronics and Communication Engineering system,
component, or process to meet desired needs within the realistic constraints such as
economic, environmental, social, political, ethical, health and safety,
manufacturability and sustainability.
4. An ability to function on multidisciplinary teams involving interpersonal skills.
5. An ability to identify, formulate and solve engineering problems of multidisciplinary
nature.
6. An understanding of professional and ethical responsibilities involved in the practice
of Electronics and Communication Engineering profession.
7. An ability to communicate effectively with a range of audience on complex
engineering problems of multidisciplinary nature both in oral and written form.
8. The broad education necessary to understand the impact of engineering solutions in a
global, economic, environmental and societal context.
9. A recognition of the need for, and an ability to engage in life-long learning and
acquire the capability for the same.
10. A knowledge of contemporary issues involved in the practice of Electronics and
Communication Engineering profession
11. An ability to use the techniques, skills and modern engineering tools necessary for
engineering practice.
12. An ability to use modern Electronic Design Automation (EDA) tools, software and
electronic equipment to analyze, synthesize and evaluate Electronics and
Communication Engineering systems for multidisciplinary tasks.
13. Apply engineering and project management principles to one's own work and also to
manage projects of multidisciplinary nature

6. COURSE OBJECTIVES AND OUTCOMES

Course Objectives
 Design digital communication systems, given constraints on data rate, bandwidth,
power, fidelity, and complexity.
 Analyze the performance of a digital communication link when additive noise is
present in terms of the signal-to-noise ratio and bit error rate.
 Compute the power and bandwidth requirements of modern communication systems,
including those employing ASK, PSK, FSK, and QAM modulation formats.
 Design a scalar quantizer for a given source with a required fidelity and determine the
resulting data rate.
 Determine the auto-correlation function of a line code and determine its power
spectral density.
 Determine the power spectral density of band pass digital modulation formats.
Course outcomes :

Upon successful completion of this course, students will have the ability to:


1. Analyze digital and analog signals with respect to various parameters like bandwidth,
noise etc.
2. Demonstrate generation and reconstruction of different Pulse Code Modulation
schemes like PCM, DPCM etc.
3. Acquire the knowledge of different pass band digital modulation techniques like
ASK, PSK etc.
4. Calculate different parameters like power spectrum density, probability of error etc of
Base Band signal for optimum transmission.
5. Analyze the concepts of Information theory, Huffman coding etc to increase average
information per bit.
6. Generate and retrieve data using block codes and analyze their error detection and
correction capabilities.
7. Generate and decode data using convolution codes and compare error rates for coded
and uncoded transmission.
8. Become familiar with the different criteria in spread spectrum modulation schemes and
their applications.
7. Importance of the course and how it fits into the curriculum:
7.1 Introduction to the subject

7.2. Objectives of the subject

1. Design digital communication systems, given constraints on data rate, bandwidth,


power, fidelity, and complexity.
2. Analyze the performance of a digital communication link when additive noise is
present in terms of the signal-to-noise ratio and bit error rate.
3. Compute the power and bandwidth requirements of modern communication systems,
including those employing ASK, PSK, FSK, and QAM modulation formats.
4. To provide the students with a basic understanding of telecommunications.
5. To develop technical expertise in various modulation techniques.
6. Provide basic understanding of information theory and error correction codes.

7.3. Outcomes of the subject

 Ability to understand the functions of the various parts, analyze theoretically the
performance of a modern communication system.
 Ability to compare analog and digital communications in terms of noise, attenuation,
and distortion.
 Ability to recognize the concepts of digital baseband transmission, optimum reception
analysis and band limited transmission.
 Ability to characterize and analyze various pass band modulation techniques
 Ability to explain the basic concepts of error detection/correction coding and
perform error analysis
8. PREREQUISITES:

 Engineering Mathematics
 Basic Electronics
 Signals and systems
 Analog Communications

9. Instructional learning outcomes:

Subject: Digital Communications


UNIT 1: Elements of Digital Communication Systems

DC1: Analyse the elements of a digital communication system, and the importance and
applications of Digital Communication.

DC 2: Differentiate analog and digital systems, explain the advantages of digital

communication systems over analog systems, and recognize the importance and need of the
sampling theorem in digital communication systems.

DC 3: Convert an analog signal to a digital signal and describe the issues that occur in
digital transmission, such as the Bandwidth–S/N trade-off.

DC 4: Compute the power and bandwidth requirements of modern communication systems.

DC 5: Analyse the importance of Hartley Shannon law in calculating the BER and the
channel capacity.

Pulse Code Modulation

DC 6: Explain the generation and reconstruction of PCM.

DC 7: Analyze the effect of Quantization noise in Digital Communication.

DC 8: Analyse the different digital communication schemes like Differential PCM


systems (DPCM), Delta modulation, and adaptive delta modulation.

DC 9: Compare the digital communication schemes like Differential PCM systems


(DPCM), Delta modulation, and adaptive delta modulation.

DC 10: Illustrate the effect of Noise in PCM and DM systems.

UNIT 2: Digital Modulation Techniques

DC11: Describe and differentiate the different shift keying formats used in digital
communication.
DC 12: Compute the power and bandwidth requirements of modern communication
systems modulation formats like those employing ASK, PSK, FSK, and QAM.

DC 13: Explain the different modulators like ASK Modulator, Coherent ASK detector,
non-Coherent ASK detector, Band width frequency spectrum of FSK, Non-Coherent FSK
detector, Coherent FSK detector.

DC 14: Analyze the need and use of PLL in FSK Detection.

DC 15: Differentiate the different keying schemes -BPSK, Coherent PSK detection,
QPSK & Differential PSK.

UNIT 3: Base Band Transmission and Optimal reception of Digital Signal

DC16: Identify the need of pulse shaping for optimum transmission and get the
knowledge of Base band signal receiver model.

DC 17: Analyze different pulses and their power spectrum densities.

DC 18: Calculate the probability of error for the optimum receiver and for optimum

coherent reception, and understand the signal space representation used in calculating the
probability of error.

DC 19: Explain the Eye diagram and its importance in calculating error.

DC 20: Describe cross talk and its effect in the degradation of signal quality in digital
communication.

Information Theory

DC 21: Identify the basic terminology used in coding of Digital signals like Information
and entropy and calculate the Conditional entropy and redundancy.

DC 22: Solve problems based on Shannon Fano coding.

DC 23: Solve problems based on mutual information and Information loss due to noise.

DC 24: Compute problems on Source coding methods like - Huffman code, variable
length codes used in digital communication.

DC 25: Explain Source coding and drawbacks of Lossy source Coding and how to
increase the average information per bit.
UNIT 4: Error control codes

Linear Block Codes

DC 26: Illustrate the different types of codes used in digital communication and the
Matrix description of linear block codes.

DC 27: Analyze and find errors, solve the numerical in Error detection and error
correction of linear block codes.

DC 28: Explain cyclic codes, the difference between linear block codes and cyclic
codes.

DC 29: Compute problems based on the representation of cyclic codes and encoding and
decoding of cyclic codes.

DC 30: Solve problems to find the location of error in the codes i.e., syndrome
calculation.

Convolution Codes

DC 31: Identify the differences between the different codes used in digital communication.

DC 32: Describe Encoding & decoding of Convolutional Codes.

DC 33: Solve problems on error detection & correction using state Tree and trellis
diagrams.

DC 34: Solve problems based on Viterbi algorithm.

DC 35: Compute numerical on error calculations and compare the error rates in coded
and uncoded transmission.

UNIT 5: Spread Spectrum Modulation

DC 36: Analyze the need and use of spread spectrum in digital communication and gain
knowledge of spread spectrum techniques like direct sequence spread spectrum (DSSS).

DC 37: Describe Code division multiple access, ranging using DSSS, and Frequency Hopping
spread spectrum.

DC 38: Generate PN sequences and solve problems based on sequence generation.

DC 39: Explain the need of synchronization in spread spectrum system.

DC 40: Identify the advancements in digital communication.


10. Course mapping with PEOs and POs:
Mapping of Course with Programme Educational Objectives:

S.No | Course component | Course code | Semester | PEO 1 | PEO 2 | PEO 3

1 | Digital Communications | 56026 | II | √ | √ |

Mapping of Course outcomes with Programme outcomes:

*When the course outcome weightage is < 40%, it will be given as moderately correlated (1).

*When the course outcome weightage is >40%, it will be given as strongly correlated (2).

POs                                1 2 3 4 5 6 7 8 9 10 11 12 13

Digital Communications (overall):  2 2 1 1 1 1 2 2 2 2 2

CO 1: To state the function of Analog-to-Digital Converters (ADCs) and vice versa, and to
recognize the concepts of digital baseband transmission, optimum reception analysis and
band-limited transmission. (Correlation levels: 2 2 1 1 1 1 1 2)

CO 2: Demonstrate generation and reconstruction of different Pulse Code Modulation
schemes like PCM, DPCM etc. (Correlation levels: 2 2 2 1 1 1 2 2 2)

CO 3: Compare different pass band digital modulation techniques like ASK, PSK etc. and
compute the probability of error in each scheme. (Correlation levels: 2 2 2 1 1 2 2 2 2)

CO 4: Calculate different parameters like power spectral density, probability of error etc. of
the base band signal for optimum transmission. (Correlation levels: 2 2 2 1 1 1 2 2 2 2)

CO 5: Analyze the concepts of information theory, Huffman coding etc. to increase average
information per bit. (Correlation levels: 1 1 1 1 1 1 2 2)

CO 6: Generate and retrieve data using block codes and solve numerical problems on error
detection and correction capabilities. (Correlation levels: 2 1 1 2 1 1 2 1 1 2)

CO 7: Solve problems on generation and decoding of data using convolution codes and
compare error rates for coded and uncoded transmission. (Correlation levels: 2 1 1 2 1 1 2 1 1 2)

CO 8: Describe the different criteria in spread spectrum modulation schemes and their
applications. (Correlation levels: 2 2 2 1 1 1 1 2 2 2 2)

11. Class Time Tables:

12. Individual time table:

13. Micro plan:

Sl. no | Unit | Topics to be covered | No. of periods | Total no. of hours | Date |
Regular/Additional | Teaching aids used (LCD/OHP/BB) | Remarks
1 Elements Of Digital Communication 1 Regular OHP,BB


Systems: Model of digital communication
system
2 Model of digital communication system 1 Regular OHP,BB
Digital representation of analog signal
3 Certain issues of digital transmission 1 Regular OHP,BB
UNIT - I

5 Advantages of digital communication 1 Regular OHP,BB

systems, Bandwidth–S/N trade off, Hartley
Shannon law
6 1 Additional BB
7 Sampling theorem 1 Regular OHP,BB
8 Tutorial class-1 1 BB
07

9 Pulse Coded Modulation: PCM generation 1 Regular BB


and reconstruction , Quantization noise
10 Differential PCM systems (DPCM), Delta 1 Regular OHP,BB
modulation,
11 adaptive delta modulation, Noise in PCM 1 Regular OHP,BB
and DM systems
12 Voice Coders 1 Additional BB
13 Tutorial Class-2 1 Regular BB
14 07 Solving University papers 1 OHP,BB
15 Assignment test-1 1
16 Digital Modulation Techniques: 1 Regular BB
introduction , ASK, ASK Modulator
17 Coherent ASK detector, non-Coherent ASK 1 Regular
detector
18 Band width frequency spectrum of FSK, 1 Regular OHP,BB
UNIT-II

Non-Coherent FSK detector


19 Coherent FSK detector, FSK Detection 1 Regular OHP,BB
08 using PLL
20 BPSK, Coherent PSK detection, 1 Regular BB
21 QPSK, Differential PSK 1 Regular BB
22 Regenerative Repeater 1 Additional OHP,BB
23 Tutorial class-3 1 Regular BB
24 Base Band Transmission And Optimal 1 Regular OHP,BB
reception of Digital Signal: pulse shaping
for optimum transmission
25 A Base band signal receiver, Different 1 Regular OHP,BB
UNIT - III

08 pulses and power spectrum densities


26 Probability of error, optimum receiver 1 Regular BB
27 Optimum of coherent reception, 1 Regular OHP,BB
28 Signal space representation and probability 1 Regular OHP,BB
of error, Eye diagram, cross talk
29 Tutorial Class-4 1 Regular BB
30 Solving University papers 1 Regular OHP,BB
31 Assignment test-2 1
32 Information Theory: Information and 1 Regular BB
entropy
33 Conditional entropy and redundancy 1 Regular OHP,BB
34 Shannon Fano coding, mutual information 1 Regular OHP,BB
35 Information loss due to noise, 1 Regular BB
36 Source codings,- Huffman code, variable 1 BB
08 length coding
37 Lossy source Coding , Source coding to 1 Regular BB
increase average information per bit
38 Feedback communications 1 Additional BB
39 Tutorial Class-5 1 Regular OHP,BB
40 Linear Block Codes: Matrix description of 1 Regular BB
linear block codes
41 Matrix description of linear block codes 1 Regular BB
42 Error detection and error correction 1 Regular BB
capabilities of linear block codes
UNIT-IV

43 Error detection and error correction 1 Regular BB


capabilities of linear block codes
44 Cyclic codes: algebraic structure, encoding, 1 Regular OHP,BB
45 08 Syndrome calculation, decoding 1 Regular OHP,BB
46 Turbo codes 1 Additional OHP,BB
47 Tutorial Class-6 1 Regular OHP,BB
48 Solving University papers 1 OHP,BB
49 Assignment test-3 1
50 Convolution Codes: Encoding, decoding 1 Regular BB
using state
51 Tree and trellis diagrams 1 Regular BB
52 Decoding using Viterbi algorithm 1 Regular BB
53 08 Comparison of error rates in coded and 1 Regular OHP,BB
uncoded transmission
54 Tutorial Class-7 1 Regular OHP,BB
55 Spread Spectrum Modulation: Use of 1 Regular OHP,BB
spread spectrum, direct sequence spread
spectrum(DSSS)
56 Code division multiple access 1 Regular OHP,BB
57 Ranging using DSSS Frequency Hopping 1 Regular
UNIT- V

spread spectrum
58 PN sequences: generation and 1 Regular BB
characteristics
59 Synchronization in spread spectrum system 1 Regular BB
60 Advancements in the digital communication 1 Missing BB
61 08 Tutorial Class-8 1 Regular BB
62 Solving University papers 1 Regular OHP,BB
63 Assignment test-4 1
Total No. of classes 62
14. Detailed Notes
UNIT 1 :
Elements Of Digital Communication Systems
Model of digital communication system,

Digital representation of analog signal,

Certain issues of digital transmission,

advantages of digital communication systems,

Bandwidth- S/N trade off,

Hartley Shannon law,

Sampling theorem

What Does Communication (or Telecommunication) Mean?

The term communication (or telecommunication) means the transfer of some form of
information from one place (known as the source of information) to another place
(known as the destination of information) using some system to do this function
(known as a communication system).

So What Will we Study in This Course?

In this course, we will study the basic methods that are used for communication in
today’s world and the different systems that implement these communication methods.
Upon the successful completion of this course, you should be able to identify the
different communication techniques, know the advantages and disadvantages of each
technique, and show the basic construction of the systems that implement these
communication techniques.

Old Methods of Communication

 Pigeons
 Horseback
 Smoke
 Fire
 Post Office
 Drums
Problems with Old Communication Methods

 Slow
 Difficult and relatively expensive
 Limited amount of information can be sent
 Some methods can be used at specific times of the day
 Information is not secure.
Examples of Today’s Communication Methods

All of the following are electric (or electromagnetic) communication systems

 Satellite (Telephone, TV, Radio, Internet, … )


 Microwave (Telephone, TV, Data, …)
 Optical Fibers (TV, Internet, Telephone, … )
 Copper Cables (telephone lines, coaxial cables, twisted pairs, … etc)
Advantages of Today’s Communication Systems

 Fast
 Easy to use and very cheap
 Huge amounts of information can be transmitted
 Secure transmission of information can easily be achieved
 Can be used 24 hours a day.
Basic Construction of an Electrical Communication System

[Block diagram: input signal (sound, picture, ...) → Input Transducer → Transmitter →
Channel (with added noise) → Receiver → Output Transducer → output signal (sound,
picture, ...)]

 Input Transducer: converts the input signal from its original form (sound, picture,
etc.) to an electric signal (like the audio and video outputs of a video camera).
 Transmitter: adapts the electric signal to the channel (changes the signal to a form
that is suitable for transmission).
 Channel: the medium through which the information is transmitted; added noise
distorts the transmitted signal.
 Receiver: extracts the original electric signal from the received signal.
 Output Transducer: converts the electric signal back to its original form (sound,
picture, etc.).
A communication system may transmit information in one direction such as TV and radio
(simplex), two directions but at different times such as the CB (half-duplex), or two
directions simultaneously such as the telephone (full-duplex).

Basic Terminology Used in this Communications Course

A Signal: is a function that specifies how a specific variable changes versus an


independent variable such as time, location, height (examples: the age of
people versus their coordinates on Earth, the amount of money in your
bank account versus time).

A System: operates on an input signal in a predefined way to generate an output


signal.

Analog Signals: are signals with amplitudes that may take any real value out of an infinite
number of values in a specific range (examples: the height of mercury in
a 10cm–long thermometer over a period of time is a function of time that
may take any value between 0 and 10 cm; the weight of people sitting in a
classroom is a function of space (x and y coordinates) that may take any
real value between 30 kg and 200 kg (typically)).

Digital Signals: are signals with amplitudes that may take only a specific number of
values (number of possible values is less than infinite) (examples: the
number of days in a year versus the year is a function that takes one of
two values of 365 or 366 days, number of people sitting on a one-person
chair at any instant of time is either 0 or 1, the number of students
registered in different classes at KFUPM is an integer number between 1
and 100).

Noise: is an undesired signal that gets added to (or sometimes multiplied with) a
desired transmitted signal at the receiver. The source of noise may be
external to the communication system (noise resulting from electric
machines, other communication systems, and noise from outer space) or
internal to the communication system (noise resulting from the collision
of electrons with atoms in wires and ICs).

Signal to Noise Ratio (SNR):is the ratio of the power of the desired signal to the power of
the noise signal.

Bandwidth (BW): is the width of the frequency range that the signal occupies. For example,
the bandwidth of a radio channel in the AM band is around 10 kHz, and the
bandwidth of a radio channel in the FM band is 150 kHz.

Rate of Communication: is the speed at which DIGITAL information is transmitted. The


maximum rate at which most of today’s modems receive digital
information is around 56 k bits/second and transmit digital information is
around 33 k bits/second. A Local Area Network (LAN) can theoretically
receive/transmit information at a rate of 100 M bits/s. Gigabit networks
would be able to receive/transmit information at least 10 times that rate.

Modulation: is changing one or more of the characteristics of a signal (known as the


carrier signal) based on the value of another signal (known as the
information or modulating signal) to produce a modulated signal.

Analog and Digital Communications

Since the introduction of digital communication few decades ago, it has been gaining a steady
increase in use. Today, you can find a digital form of almost all types of analog
communication systems. For example, TV channels are now broadcasted in digital form
(most if not all Ku–band satellite TV transmission is digital). Also, radio now is being
broadcasted in digital form (see sirus.com and xm.com). Home phone systems are starting to
go digital (a digital phone system is available at KFUPM). Almost all cellular phones are now
digital, and so on. So, what makes digital communication more attractive compared to analog
communication?

Advantages of Digital Communication over Analog Communication

 Immunity to Noise (possibility of regenerating the original digital signal if signal


power to noise power ratio (SNR) is relatively high by using of devices called
repeaters along the path of transmission).
 Efficient use of communication bandwidth (through use of techniques like
compression).
 Digital communication provides higher security (data encryption).
 The ability to detect errors and correct them if necessary.
 Design and manufacturing of electronics for digital communication systems is
much easier and much cheaper than the design and manufacturing of electronics
for analog communication systems.
Modulation

Famous Types

 Amplitude Modulation (AM): varying the amplitude of the carrier based on the
information signal as done for radio channels that
are transmitted in the AM radio band.
 Phase Modulation (PM): varying the phase of the carrier based on the
information signal.
 Frequency Modulation (FM): varying the frequency of the carrier based on the
information signal as done for channels transmitted
in the FM radio band.
Purpose of Modulation

 For a signal (like the electric signals coming out of a microphone) to be

transmitted by an antenna, the antenna length has to be comparable to the signal
wavelength (the antenna should be about one-tenth of the wavelength or longer). If
the wavelength is extremely long, modulation must be used to reduce the
wavelength of the signal to make the length of the required antenna practical (see
the short calculation after this list).
 To receive transmitted signals from multiple sources without interference between
them, they must be transmitted at different frequencies (frequency multiplexing)
by modulating carriers that have different frequencies with the different
information signals.
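To make the antenna-length argument concrete, here is a minimal sketch comparing the antenna size needed for a raw 3 kHz audio signal against a 100 MHz FM carrier. The specific frequencies and the one-tenth-of-wavelength rule of thumb are illustrative assumptions, not values fixed by these notes.

import numpy as np  # not strictly needed; kept for consistency with later sketches

C = 3e8  # speed of light, m/s

def min_antenna_length(freq_hz: float) -> float:
    # Assumed rule of thumb: practical antenna length ~ wavelength / 10
    wavelength = C / freq_hz
    return wavelength / 10

print(min_antenna_length(3e3))    # 3 kHz audio: ~10,000 m -- impractical
print(min_antenna_length(100e6))  # 100 MHz FM carrier: ~0.3 m -- practical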
Exercise 1–1: Specify if the following communication systems are (A)nalog or (D)igital:

a) TV in the 1970s:

b) TV in the 2030s:

c) Fax machines

d) Local area networks (LANs):

e) First–generation cellular phones

f) Second–generation cellular phones

g) Third–generation cellular phones


These are the basic elements of any digital communication system, and together they give a
basic understanding of how such systems work.

Basic elements of a digital communication system

1. Information Source and Input Transducer:

The source of information can be analog or digital, e.g. analog: audio or video signal;
digital: teletype signal. In digital communication the signal produced by this
source is converted into a digital signal consisting of 1s and 0s. For this we need a
source encoder.

2. Source Encoder:
In digital communication we convert the signal from the source into a digital signal as
mentioned above. The point to remember is that we should use as few binary digits as
possible to represent the signal, so that this efficient representation of the source
output results in little or no redundancy. This sequence of binary digits is
called the information sequence.
Source Encoding or Data Compression: the process of efficiently converting the output
of either an analog or a digital source into a sequence of binary digits is known as source
encoding.
3. Channel Encoder:
The information sequence is passed through the channel encoder. The purpose of the
channel encoder is to introduce, in a controlled manner, some redundancy in the binary
information sequence that can be used at the receiver to overcome the effects of noise and
interference encountered in the transmission of the signal through the channel.
e.g. take k bits of the information sequence and map those k bits to a unique n-bit sequence
called a code word. The amount of redundancy introduced is measured by the ratio n/k, and
the reciprocal of this ratio (k/n) is known as the code rate.
4. Digital Modulator:
The binary sequence is passed to the digital modulator, which in turn converts the sequence
into electric signals so that we can transmit them on the channel (we will see the channel
later). The digital modulator maps the binary sequences into signal waveforms; for example,
if we represent 1 by sin x and 0 by cos x, then we will transmit sin x for a 1 and cos x for a 0
(a case similar to BPSK).
5. Channel:
The communication channel is the physical medium that is used for transmitting signals
from transmitter to receiver. In a wireless system, this channel consists of the atmosphere; for
traditional telephony, this channel is wired; there are also optical channels, underwater
acoustic channels, etc.
We further discriminate these channels on the basis of their properties and characteristics,
like the AWGN channel etc.
6. Digital Demodulator:
The digital demodulator processes the channel corrupted transmitted waveform and
reduces the waveform to the sequence of numbers that represents estimates of the
transmitted data symbols.
7. Channel Decoder:
This sequence of numbers is then passed through the channel decoder, which attempts to
reconstruct the original information sequence from knowledge of the code used by the
channel encoder and the redundancy contained in the received data.
The average probability of a bit error at the output of the decoder is a measure of the
performance of the demodulator–decoder combination. This is the most important
figure of merit; we will discuss this BER (Bit Error Rate) at length later in the course.
8. Source Decoder:
At the end, if an analog signal is desired, the source decoder tries to decode the sequence
from knowledge of the encoding algorithm, which results in an approximate
replica of the input at the transmitter end.
9. Output Transducer:
Finally we get the desired signal in the desired format, analog or digital.
The points worth noting are (a minimal end-to-end simulation of this chain follows below):
1. the source coding algorithm plays an important role in achieving a higher code rate
2. the channel encoder introduces redundancy in the data
3. the modulation scheme plays an important role in deciding the data rate and the immunity
of the signal towards the errors introduced by the channel
4. the channel introduces many types of errors, like multipath errors, errors due to thermal
noise, etc.
5. the demodulator and decoder should provide a low BER.
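The sketch below strings the blocks above into a minimal end-to-end simulation: random bits are BPSK-modulated (1 → +1, 0 → −1), passed through an AWGN channel, hard-detected, and the BER is measured. It is a toy illustration under assumed parameters (10^5 bits, Eb/N0 of 6 dB), not an example from these notes.

import numpy as np

rng = np.random.default_rng(0)
n_bits = 100_000
ebn0_db = 6.0                      # assumed Eb/N0 for the demo

bits = rng.integers(0, 2, n_bits)  # information source
symbols = 2 * bits - 1             # digital modulator: BPSK, 0 -> -1, 1 -> +1

ebn0 = 10 ** (ebn0_db / 10)
noise_std = np.sqrt(1 / (2 * ebn0))                           # unit symbol energy assumed
received = symbols + noise_std * rng.standard_normal(n_bits)  # AWGN channel

detected = (received > 0).astype(int)        # demodulator: hard decision
ber = np.mean(detected != bits)              # bit error rate
print(f"simulated BER at {ebn0_db} dB: {ber:.5f}")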

What are the advantages and disadvantages of Digital Communication?


Advantages of digital communication:

1. It is fast and easier.


2. No paper is wasted.
3. The messages can be stored in the device for longer times without being damaged, unlike
paper files that easily get damaged or attacked by insects.
4. Digital communication can be done over large distances through the internet and other means.
5. It is comparatively cheaper, and work which requires a lot of people can be done simply
by one person, as folders and other such facilities can be maintained.
6. It removes semantic barriers, because the written data can be easily translated into different
languages using software.
7. It provides facilities like video conferencing which save a lot of time, money and effort.

Disadvantages:
1. It is unreliable, as messages cannot be recognised by signatures. Though software can
be developed for this, such software can be hacked.
2. Sometimes the quickness of digital communication is harmful, as messages can be sent
with the click of a mouse: the person does not think and sends the message on an impulse.
3. Digital communication ignores the human touch. A personal touch cannot
be established because all the computers will have the same font!
4. The establishment of digital communication causes degradation of the environment in
some cases. "Electronic waste" is an example. It is also claimed that emissions from telephone
and cell phone towers are strong enough to harm small birds, and that the common sparrow
has declined as so many towers have come up.
5. Digital communication has made the whole world an "office". People carry their
work to places where they are supposed to relax. Even in the office, digital communication
causes problems, because personal messages can come on your cell phone, internet, etc.
6. Many people misuse the efficiency of digital communication. The sending of hoax
messages and usage by people to harm society cause harm to society as a
whole.

Definition of Digital – A method of storing, processing and transmitting information through


the use of distinct electronic or optical pulses that represent the binary digits 0 and 1.

Advantages of Digital -
Less expensive
More reliable
Easy to manipulate
Flexible
Compatibility with other digital systems
Only digitized information can be transported through a noisy channel without degradation
Integrated networks

Disadvantages of Digital -
Sampling Error
Digital communications require greater bandwidth than analog to transmit the same
information.
The detection of digital signals requires the communication system to be synchronized,
whereas generally speaking this is not the case with analog systems.

Some more explanation of advantages and disadvantages of analog vs digital


communication.

1. The first advantage of digital communication over analog is its noise immunity. In any
transmission path some unwanted voltage or noise is always present and cannot be
eliminated fully. When the signal is transmitted, this noise gets added to the original signal,
causing distortion of the signal. However, in digital communication this additive noise can
easily be eliminated to a great extent at the receiving end, resulting in better recovery of the
actual signal. In analog communication it is difficult to remove the noise once it has been
added to the signal.

2. Security is another priority of messaging services in modern days. Digital communication

provides better security for messages than analog communication. It can be achieved
through various coding techniques available in digital communication.

3. In digital communication the signal is digitized to a stream of 0s and 1s, so at the

receiver side a simple decision has to be made as to whether the received signal is a 0 or a
1. Accordingly, the receiver circuit becomes simpler compared to an analog receiver
circuit.

4. A signal travelling along its transmission path fades gradually, so on its path
it needs to be reconstructed to its actual form and re-transmitted many times. For that reason
AMPLIFIERS are used in analog communication and REPEATERS are used in digital
communication. Amplifiers are needed every 2 to 3 km, whereas repeaters are needed
every 5 to 6 km, so digital communication is definitely cheaper. Amplifiers also often
add non-linearity that distorts the actual signal.
5. Bandwidth is another scarce resource. Various digital communication
techniques are available that use the available bandwidth much more efficiently than analog
communication techniques.

6. When audio and video signals are transmitted digitally, an AD (Analog to Digital)
converter is needed at the transmitting side and a DA (Digital to Analog) converter is again
needed at the receiver side. In analog communication these devices are not
needed.

7. Digital signals are often an approximation of the analog data (like voice
or video) that is obtained through a process called quantization. The digital representation is
never the exact signal but its most closely approximated digital form, so its accuracy
depends on the degree of approximation taken in the quantization process.

Sampling Theorem:

A band-limited signal of finite energy that has no frequency components higher than W Hz
can be completely recovered from its samples taken at a rate of at least 2W samples per
second (the Nyquist rate, fs >= 2W). A short numerical illustration follows below.

There are 3 cases of sampling: ideal impulse sampling, natural sampling, and flat-top
sampling.

Ideal impulse sampling:
Consider an arbitrary lowpass signal x(t) as shown in Fig. 6.2(a).
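As a quick check of the sampling theorem, the hedged sketch below samples a 3 kHz tone at 8 kHz (above the Nyquist rate, as in telephony) and at 4 kHz (below it), and shows that the under-sampled tone is indistinguishable from a 1 kHz alias. The tone and rates are illustrative assumptions.

import numpy as np

f_tone = 3000.0                       # 3 kHz test tone (assumed)
for fs in (8000.0, 4000.0):           # above and below the Nyquist rate 2*f_tone
    n = np.arange(16)
    samples = np.cos(2 * np.pi * f_tone * n / fs)
    # Apparent frequency after sampling: distance to the nearest multiple of fs
    f_alias = abs(f_tone - fs * round(f_tone / fs))
    alias = np.cos(2 * np.pi * f_alias * n / fs)
    same = np.allclose(samples, alias)
    print(f"fs={fs:.0f} Hz: apparent frequency {f_alias:.0f} Hz, matches samples: {same}")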
Pulse Code Modulation
 PCM generation and reconstruction,
 Quantization noise,
 Differential PCM systems (DPCM),
 Delta modulation, adaptive delta modulation,
 Noise in PCM and DM systems

Digital Transmission of Analog Signals:


PCM, DPCM and DM

6.1 Introduction
Quite a few of the information bearing signals, such as speech, music, video, etc., are analog
in nature; that is, they are functions of the continuous variable t and for any t = t1, their value
can lie anywhere in the interval, say − A to A. Also, these signals are of the baseband variety.
If there is a channel that can support baseband transmission, we can easily set up a baseband
communication system. In such a system, the transmitter could be as simple as just a power
amplifier so that the signal that is transmitted could be received at the destination with some
minimum power level, even after being subject to attenuation during propagation on the
channel. In such a situation, even the receiver could have a very simple structure; an
appropriate filter (to eliminate the out of band spectral components) followed by an amplifier.
If a baseband channel is not available but we have access to a passband channel (such as an
ionospheric channel, satellite channel etc.), an appropriate CW modulation scheme discussed
earlier could be used to shift the baseband spectrum to the passband of the given channel.
Interestingly enough, it is possible to transmit the analog information in a digital format.
Though there are many ways of doing it, in this chapter, we shall explore three such
techniques, which have found widespread acceptance. These are: Pulse Code Modulation
(PCM), Differential Pulse Code Modulation (DPCM)
and Delta Modulation (DM). Before we get into the details of these techniques, let us
summarize the benefits of digital transmission. For simplicity, we shall assume that
information is being transmitted by a sequence of binary pulses. i) During the course of
propagation on the channel, a transmitted pulse becomes gradually distorted due to the non-
ideal transmission characteristic of the channel. Also, various unwanted signals (usually
termed interference and noise) will cause further deterioration of the information bearing
pulse. However, as there are only two types of signals that are being transmitted, it is possible
for us to identify (with a very high probability) a given transmitted pulse at some appropriate
intermediate point on the channel and regenerate a clean pulse. In this way, be completely
eliminating the effect of distortion and noise till the point of regeneration. (In long-haul PCM
telephony, regeneration is done every few Kilometers, with the help of regenerative
repeaters.) Clearly, such an operation is not possible if the transmitted signal was analog
because there is nothing like a reference waveform that can be regenerated.
ii) Storing the messages in digital form and forwarding or redirecting them at a later point in
time is quite simple.
iii) Coding the message sequence to take care of the channel noise, encrypting for secure
communication can easily be accomplished in the digital domain.
iv) Mixing the signals is easy. All signals look alike after conversion to digital form
independent of the source (or language!). Hence they can easily be multiplexed (and
demultiplexed)
6.2 The PCM system
Two basic operations in the conversion of analog signal into the digital is time discretization
and amplitude discretization. In the context of PCM, the former is accomplished with the
sampling operation and the latter by means of quantization. In addition, PCM involves
another step, namely, conversion of quantized amplitudes into a sequence of simpler pulse
patterns (usually binary), generally called as code words. (The word code in pulse code
modulation refers
to the fact that every quantized sample is converted to an R -bit code word.)

Fig. 6.1 illustrates a PCM system. Here, m(t) is the information bearing
message signal that is to be transmitted digitally. m(t) is first sampled and then
quantized. The output of the sampler is

x[n] = m(nT_s), \quad n = 0, \pm 1, \pm 2, \ldots

where T_s is the sampling period and n is the appropriate integer.

f_s = 1/T_s is called the sampling rate or sampling frequency.

The quantizer converts each sample to one of the values that is closest to it from among a
pre-selected set of discrete amplitudes. The encoder represents each one of these quantized
samples by an R-bit code word. This bit stream travels on the channel and reaches the
receiving end. With f_s as the sampling rate and R bits per code word, the bit rate of the PCM
system is

R_b = R f_s \ \text{bits per second.}

The decoder converts the R-bit code words into the corresponding (discrete) amplitudes.
Finally, the reconstruction filter, acting on these discrete amplitudes, produces the analog
signal, denoted by m'(t). If there are no channel errors, then m'(t) ≈ m(t). A small
end-to-end sketch of these operations follows below.
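The following is a minimal sketch of the PCM chain just described: sample, uniformly quantize to R bits, encode/decode, and compare. The 8 kHz rate, R = 8 bits, and the test tone are assumed values for illustration; the resulting bit rate R·fs = 64 kb/s matches standard telephony.

import numpy as np

fs, R = 8000, 8                     # assumed sampling rate and bits per sample
L = 2 ** R                          # number of quantization levels
m_max = 1.0                         # assumed full-scale amplitude
step = 2 * m_max / L                # uniform quantizer step size

t = np.arange(0, 0.01, 1 / fs)              # 10 ms of samples
m = 0.9 * np.sin(2 * np.pi * 440 * t)       # test tone (assumed)

# Quantize (midrise), encode to integer code words, then decode
codes = np.clip(np.floor(m / step) + L // 2, 0, L - 1).astype(int)
m_hat = (codes - L // 2) * step + step / 2  # decoder output amplitudes

print("bit rate:", R * fs, "b/s")
print("max quantization error:", np.max(np.abs(m - m_hat)), "<= step/2 =", step / 2)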
Pulse-Amplitude Modulation:

Let s(t) denote the sequence of flat-top pulses:

s(t) = \sum_{n=-\infty}^{\infty} m(nT_s)\, h(t - nT_s)        (3.10)

h(t) = \begin{cases} 1, & 0 < t < T \\ \tfrac{1}{2}, & t = 0,\ t = T \\ 0, & \text{otherwise} \end{cases}        (3.11)

The instantaneously sampled version of m(t) is

m_\delta(t) = \sum_{n=-\infty}^{\infty} m(nT_s)\, \delta(t - nT_s)        (3.12)

m_\delta(t) \star h(t) = \int_{-\infty}^{\infty} m_\delta(\tau)\, h(t - \tau)\, d\tau
= \sum_{n=-\infty}^{\infty} m(nT_s) \int_{-\infty}^{\infty} \delta(\tau - nT_s)\, h(t - \tau)\, d\tau        (3.13)

Using the sifting property, we have

m_\delta(t) \star h(t) = \sum_{n=-\infty}^{\infty} m(nT_s)\, h(t - nT_s)        (3.14)

The PAM signal is

s(t) = m_\delta(t) \star h(t)        (3.15)

\Rightarrow S(f) = M_\delta(f)\, H(f)        (3.16)

Recall (3.2):

g_\delta(t) \rightleftharpoons f_s \sum_{m=-\infty}^{\infty} G(f - m f_s)        (3.2)

M_\delta(f) = f_s \sum_{k=-\infty}^{\infty} M(f - k f_s)        (3.17)

S(f) = f_s \sum_{k=-\infty}^{\infty} M(f - k f_s)\, H(f)        (3.18)

Pulse Amplitude Modulation – Natural and Flat-Top Sampling:

 The most common technique for sampling voice in PCM systems is to use a sample-and-
hold circuit.

 The instantaneous amplitude of the analog (voice) signal is held as a constant charge
on a capacitor for the duration of the sampling period Ts.

 This technique is useful for holding the sample constant while other processing is
taking place, but it alters the frequency spectrum and introduces an error, called
aperture error, resulting in an inability to recover exactly the original analog signal.
 The amount of error depends on how much the analog signal changes during the holding
time, called the aperture time.
 To estimate the maximum voltage error possible, determine the maximum slope of the
analog signal and multiply it by the aperture time ΔT.
Recovering the original message signal m(t) from the PAM signal:

Pass s(t) through a low-pass filter whose bandwidth is W. The filter output is f_s M(f) H(f).
Note that the Fourier transform of h(t) is given by

H(f) = T\, \mathrm{sinc}(fT)\, \exp(-j \pi f T)        (3.19)

which introduces amplitude distortion and a delay of T/2 (the aperture effect).

Let the equalizer response be

\frac{1}{|H(f)|} = \frac{1}{T\, \mathrm{sinc}(fT)} = \frac{\pi f}{\sin(\pi f T)}        (3.20)

Ideally, the original signal m(t) can then be recovered completely.

Other Forms of Pulse Modulation:

In pulse width modulation (PWM), the width of each pulse is made directly proportional
to the amplitude of the information signal.

 In pulse position modulation, constant-width pulses are used, and the position or time of
occurrence of each pulse from some reference time is made directly proportional to the
amplitude of the information signal.
Pulse Code Modulation (PCM) :

 Pulse code modulation (PCM) is produced by analog-to-digital conversion process.

 As in the case of other pulse modulation techniques, the rate at which samples are
taken and encoded must conform to the Nyquist sampling rate.
 The sampling rate must be greater than twice the highest frequency in the analog
signal,

fs > 2fA(max)

Quantization Process:

Define the partition cell

J_k : m_k < m \le m_{k+1}, \quad k = 1, 2, \ldots, L        (3.21)

where m_k is the decision level or decision threshold.

Amplitude quantization: the process of transforming the message sample amplitude into a
discrete amplitude taken from a finite set of possible amplitudes.

Figure 3.10 Two types of quantization: (a) midtread and (b) midrise.

Quantization Noise:

Figure 3.11 Illustration of the quantization process

Let the quantization error be denoted by the random variable Q of sample value q:

q = m - v        (3.23)

Q = M - V, \quad (E[M] = 0)        (3.24)

Assuming a uniform quantizer of the midrise type, the step size is

\Delta = \frac{2 m_{max}}{L}, \quad -m_{max} \le m \le m_{max}, \ L: \text{total number of levels}        (3.25)

f_Q(q) = \begin{cases} \dfrac{1}{\Delta}, & -\dfrac{\Delta}{2} < q \le \dfrac{\Delta}{2} \\ 0, & \text{otherwise} \end{cases}        (3.26)

\sigma_Q^2 = E[Q^2] = \int_{-\Delta/2}^{\Delta/2} q^2 f_Q(q)\, dq = \frac{1}{\Delta} \int_{-\Delta/2}^{\Delta/2} q^2\, dq = \frac{\Delta^2}{12}        (3.28)

When the quantized sample is expressed in binary form,

L = 2^R        (3.29)

where R is the number of bits per sample:

R = \log_2 L        (3.30)

\Delta = \frac{2 m_{max}}{2^R}        (3.31)

\sigma_Q^2 = \frac{1}{3}\, m_{max}^2\, 2^{-2R}        (3.32)

Let P denote the average power of m(t). Then

(SNR)_o = \frac{P}{\sigma_Q^2} = \left( \frac{3P}{m_{max}^2} \right) 2^{2R}        (3.33)

(SNR)_o increases exponentially with increasing R (traded off against bandwidth).
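As a worked check of (3.33), assume a full-scale sinusoidal message (an illustrative assumption), so that P = m_{max}^2 / 2. Then

(SNR)_o = \frac{3}{m_{max}^2} \cdot \frac{m_{max}^2}{2} \cdot 2^{2R} = 1.5 \times 2^{2R}

In decibels, (SNR)_o = 1.76 + 6.02R dB, the familiar "6 dB per bit" rule; for example, R = 8 bits gives about 49.9 dB.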
Pulse Code Modulation (PCM):

Figure 3.13 The basic elements of a PCM system


Quantization (nonuniform quantizer):

Compression laws. (a) μ-law. (b) A-law.

μ-law:

|v| = \frac{\log(1 + \mu |m|)}{\log(1 + \mu)}        (3.48)

\frac{d|m|}{d|v|} = \frac{\log(1 + \mu)}{\mu}\, (1 + \mu |m|)        (3.49)

A-law:

|v| = \begin{cases} \dfrac{A|m|}{1 + \log A}, & 0 \le |m| \le \dfrac{1}{A} \\[2mm] \dfrac{1 + \log(A|m|)}{1 + \log A}, & \dfrac{1}{A} \le |m| \le 1 \end{cases}        (3.50)

\frac{d|m|}{d|v|} = \begin{cases} \dfrac{1 + \log A}{A}, & 0 \le |m| \le \dfrac{1}{A} \\[2mm] (1 + \log A)\, |m|, & \dfrac{1}{A} \le |m| \le 1 \end{cases}        (3.51)
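Below is a minimal sketch of μ-law companding per (3.48), assuming the standard μ = 255 used in North American telephony; the compressor would be applied before a uniform quantizer and its inverse (the expander) after decoding.

import numpy as np

MU = 255.0  # assumed standard mu-law parameter

def compress(m):
    # mu-law compressor, eq. (3.48); m normalized to [-1, 1]
    return np.sign(m) * np.log1p(MU * np.abs(m)) / np.log1p(MU)

def expand(v):
    # Inverse of the compressor (the expander)
    return np.sign(v) * np.expm1(np.abs(v) * np.log1p(MU)) / MU

m = np.linspace(-1, 1, 5)
v = compress(m)
print(np.allclose(expand(v), m))   # True: the expander inverts the compressor
print(v)                           # small amplitudes are boosted before quantizing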
Figure 3.15 Line codes for the electrical representations of binary data.

(a) Unipolar NRZ signaling. (b) Polar NRZ signaling.

(c) Unipolar RZ signaling. (d) Bipolar RZ signaling.

(e) Split-phase or Manchester code.
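To connect the line-code figure to the earlier course objective of analyzing a line code's spectrum, here is a hedged sketch that maps a bit sequence into three of the waveforms of Figure 3.15, at an assumed 8 samples per bit. The Manchester polarity shown is one common convention, not the only one.

import numpy as np

bits = np.array([1, 0, 1, 1, 0])
spb = 8  # assumed samples per bit

def unipolar_nrz(b):
    return np.repeat(b.astype(float), spb)            # 1 -> +1, 0 -> 0

def polar_nrz(b):
    return np.repeat(2.0 * b - 1.0, spb)              # 1 -> +1, 0 -> -1

def manchester(b):
    # Split-phase: 1 -> +1 then -1; 0 -> -1 then +1 (one common convention)
    half = spb // 2
    out = []
    for bit in b:
        lvl = 1.0 if bit else -1.0
        out += [lvl] * half + [-lvl] * half
    return np.array(out)

for name, f in [("unipolar NRZ", unipolar_nrz), ("polar NRZ", polar_nrz),
                ("Manchester", manchester)]:
    print(name, f(bits)[:spb * 2])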

Noise consideration in PCM systems:

(Channel noise, quantization noise)


Time-Division Multiplexing(TDM):
Digital Multiplexers :

Virtues, Limitations and Modifications of PCM:

Advantages of PCM

1. Robustness to noise and interference

2. Efficient regeneration

3. Efficient SNR and bandwidth trade-off

4. Uniform format

5. Ease of adding and dropping channels

6. Secure

Delta Modulation (DM):

Let m[n] = m(nT_s), \ n = 0, \pm 1, \pm 2, \ldots

where T_s is the sampling period and m(nT_s) is a sample of m(t).

The error signal is

e[n] = m[n] - m_q[n-1]        (3.52)

e_q[n] = \Delta\, \mathrm{sgn}(e[n])        (3.53)

m_q[n] = m_q[n-1] + e_q[n]        (3.54)

where m_q[n] is the quantizer output, e_q[n] is the quantized version of e[n], and \Delta is the
step size.

The modulator consists of a comparator, a quantizer, and an accumulator.

The output of the accumulator is

m_q[n] = \Delta \sum_{i=1}^{n} \mathrm{sgn}(e[i]) = \sum_{i=1}^{n} e_q[i]        (3.55)
Two types of quantization errors: slope overload distortion and granular noise.

Slope Overload Distortion and Granular Noise:

Denote the quantization error by q[n]:

m_q[n] = m[n] + q[n]        (3.56)

Recalling (3.52), we have

e[n] = m[n] - m[n-1] - q[n-1]        (3.57)

Except for q[n-1], the quantizer input is a first backward difference of the input signal.

To avoid slope-overload distortion, we require

\frac{\Delta}{T_s} \ \text{(slope)} \ \ge \ \max \left| \frac{dm(t)}{dt} \right|        (3.58)

On the other hand, granular noise occurs when the step size \Delta is too large relative to the
local slope of m(t).
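The sketch below implements equations (3.52)–(3.54) directly: a one-bit delta modulator tracking a sine wave, with an assumed step size small enough to exhibit slope overload on a faster input. All waveform parameters are illustrative.

import numpy as np

def delta_modulate(m, step):
    # One-bit DM per (3.52)-(3.54): returns bits and the staircase approximation
    mq = np.zeros(len(m))
    bits = np.zeros(len(m), dtype=int)
    prev = 0.0
    for n, sample in enumerate(m):
        e = sample - prev                  # (3.52)
        eq = step if e >= 0 else -step     # (3.53): one-bit quantizer
        prev += eq                         # (3.54): accumulator
        mq[n] = prev
        bits[n] = 1 if eq > 0 else 0
    return bits, mq

t = np.arange(0, 1, 1 / 1000)              # assumed 1 kHz sampling
m = np.sin(2 * np.pi * 5 * t)              # slow 5 Hz tone: slope within step/Ts
bits, mq = delta_modulate(m, step=0.05)
print("max tracking error (slow input):", np.max(np.abs(m - mq)))

# A faster tone with the same step violates (3.58) and shows slope overload:
m_fast = np.sin(2 * np.pi * 40 * t)
_, mq_fast = delta_modulate(m_fast, step=0.05)
print("max tracking error (fast input):", np.max(np.abs(m_fast - mq_fast)))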
Delta-Sigma modulation (sigma-delta modulation):

A modulator which includes an integrator can relieve the drawback of delta modulation (the
differentiator).

Beneficial effects of using the integrator:

1. It pre-emphasizes the low-frequency content.

2. It increases the correlation between adjacent samples
(reducing the variance of the error signal at the quantizer input).

3. It simplifies receiver design.

Because the transmitter has an integrator, the receiver consists simply of a low-pass filter.

(The differentiator in the conventional DM receiver is cancelled by the integrator.)

Linear Prediction (to reduce the sampling rate):

Consider a finite-duration impulse response (FIR) discrete-time filter which consists of three
blocks:

1. A set of p (p: prediction order) unit-delay elements (z^{-1})

2. A set of multipliers with coefficients w_1, w_2, \ldots, w_p

3. A set of adders

The filter output (the linear prediction of the input) is

\hat{x}[n] = \sum_{k=1}^{p} w_k\, x[n-k]        (3.59)

The prediction error is

e[n] = x[n] - \hat{x}[n]        (3.60)

Let the index of performance be

J = E[e^2[n]] \quad \text{(mean-square error)}        (3.61)

Find w_1, w_2, \ldots, w_p to minimize J. From (3.59), (3.60) and (3.61) we have

J = E[x^2[n]] - 2 \sum_{k=1}^{p} w_k E[x[n]\, x[n-k]] + \sum_{j=1}^{p} \sum_{k=1}^{p} w_j w_k E[x[n-j]\, x[n-k]]

Assume X(t) is a stationary process with zero mean (E[x[n]] = 0).
Linear adaptive prediction:

The predictor is adaptive in the following sense:

1. Compute w_k, k = 1, 2, \ldots, p, starting from any initial values.
2. Iterate using the method of steepest descent.

Define the gradient vector components

g_k = \frac{\partial J}{\partial w_k}, \quad k = 1, 2, \ldots, p        (3.68)

Let w_k[n] denote the value at iteration n. Then update w_k[n+1] as

w_k[n+1] = w_k[n] - \frac{1}{2}\, \mu\, g_k, \quad k = 1, 2, \ldots, p        (3.69)

where \mu is a step-size parameter and the factor 1/2 is for convenience of presentation.

If R_X^{-1} exists, the optimum weight vector is

\mathbf{w}_0 = R_X^{-1} \mathbf{r}_X        (3.66)

where

\mathbf{w}_0 = [w_1, w_2, \ldots, w_p]^T

\mathbf{r}_X = [R_X[1], R_X[2], \ldots, R_X[p]]^T

R_X = \begin{bmatrix} R_X[0] & R_X[1] & \cdots & R_X[p-1] \\ R_X[1] & R_X[0] & \cdots & R_X[p-2] \\ \vdots & \vdots & & \vdots \\ R_X[p-1] & R_X[p-2] & \cdots & R_X[0] \end{bmatrix}

Substituting (3.64) into (3.63) yields

J_{min} = \sigma_X^2 - 2 \sum_{k=1}^{p} w_k R_X[k] + \sum_{k=1}^{p} w_k R_X[k]
= \sigma_X^2 - \sum_{k=1}^{p} w_k R_X[k]
= \sigma_X^2 - \mathbf{r}_X^T \mathbf{w}_0
= \sigma_X^2 - \mathbf{r}_X^T R_X^{-1} \mathbf{r}_X        (3.67)

Since \mathbf{r}_X^T R_X^{-1} \mathbf{r}_X \ge 0, J_{min} is always less than \sigma_X^2.

g_k = \frac{\partial J}{\partial w_k} = -2 R_X[k] + 2 \sum_{j=1}^{p} w_j R_X[k-j]
= -2 E[x[n]\, x[n-k]] + 2 \sum_{j=1}^{p} w_j E[x[n-j]\, x[n-k]], \quad k = 1, 2, \ldots, p        (3.70)

To simplify the computation we use x[n] x[n-k] in place of E[x[n] x[n-k]] (i.e., we ignore
the expectation):

\hat{g}_k[n] = -2\, x[n]\, x[n-k] + 2 \sum_{j=1}^{p} \hat{w}_j[n]\, x[n-j]\, x[n-k], \quad k = 1, 2, \ldots, p        (3.71)

\hat{w}_k[n+1] = \hat{w}_k[n] + \mu\, x[n-k] \left( x[n] - \sum_{j=1}^{p} \hat{w}_j[n]\, x[n-j] \right)
= \hat{w}_k[n] + \mu\, x[n-k]\, e[n], \quad k = 1, 2, \ldots, p        (3.72)

where, by (3.59)–(3.60),

e[n] = x[n] - \sum_{j=1}^{p} \hat{w}_j[n]\, x[n-j]        (3.73)

The above equations are called the least-mean-square (LMS) algorithm.


Figure 3.27

Block diagram illustrating the linear adaptive prediction process
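Below is a minimal sketch of the LMS update (3.72)–(3.73): a p-tap adaptive predictor run over a correlated AR(1) test signal. The signal model, order p = 2, and step size μ are assumed for illustration.

import numpy as np

rng = np.random.default_rng(1)
N, p, mu = 5000, 2, 0.05          # assumed length, prediction order, step size

# Correlated test signal: AR(1) process x[n] = 0.9 x[n-1] + noise (assumed)
x = np.zeros(N)
for n in range(1, N):
    x[n] = 0.9 * x[n - 1] + 0.1 * rng.standard_normal()

w = np.zeros(p)                   # predictor weights w_1..w_p
errors = np.zeros(N)
for n in range(p, N):
    past = x[n - p:n][::-1]               # x[n-1], ..., x[n-p]
    e = x[n] - w @ past                   # prediction error, (3.73)
    w = w + mu * e * past                 # LMS update, (3.72)
    errors[n] = e

print("learned weights:", w)              # first tap approaches ~0.9
print("mean-square error, last 1000 samples:", np.mean(errors[-1000:] ** 2))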

Differential Pulse-Code Modulation (DPCM):

Usually PCM has a sampling rate higher than the Nyquist rate, so the encoded signal contains
redundant information. DPCM can efficiently remove this redundancy.

Figure 3.28 DPCM system. (a) Transmitter. (b) Receiver.

The input signal to the quantizer is defined by

e[n] = m[n] - \hat{m}[n]        (3.74)

where \hat{m}[n] is a prediction value. The quantizer output is

e_q[n] = e[n] + q[n]        (3.75)

where q[n] is the quantization error. The prediction filter input is

m_q[n] = \hat{m}[n] + e_q[n] = \hat{m}[n] + e[n] + q[n]        (3.77)

From (3.74),

m_q[n] = m[n] + q[n]        (3.78)
Processing Gain:

The (SNR)_o of the DPCM system is

(SNR)_o = \frac{\sigma_M^2}{\sigma_Q^2}        (3.79)

where \sigma_M^2 and \sigma_Q^2 are the variances of m[n] (E[m[n]] = 0) and q[n].

(SNR)_o = \left( \frac{\sigma_M^2}{\sigma_E^2} \right) \left( \frac{\sigma_E^2}{\sigma_Q^2} \right) = G_p\, (SNR)_Q        (3.80)

where \sigma_E^2 is the variance of the prediction error, and the signal-to-quantization-noise
ratio is

(SNR)_Q = \frac{\sigma_E^2}{\sigma_Q^2}        (3.81)

The processing gain is

G_p = \frac{\sigma_M^2}{\sigma_E^2}        (3.82)

Design the prediction filter to maximize G_p (i.e., minimize \sigma_E^2).
Adaptive Differential Pulse-Code Modulation (ADPCM):

Given the need for coding speech at low bit rates, we have two aims in mind:

1. Remove redundancies from the speech signal as far as possible.

2. Assign the available bits in a perceptually efficient manner.

Figure 3.29 Adaptive quantization with backward estimation (AQB).

Figure 3.30 Adaptive prediction with backward estimation (APB).


UNIT 2
Digital Modulation Techniques
 Introduction, ASK, ASK Modulator, Coherent ASK detector, Non-Coherent ASK
detector,
 Bandwidth and frequency spectrum of FSK,
 Non-Coherent FSK detector,
 Coherent FSK detector,
 FSK Detection using PLL,
 BPSK, Coherent PSK detection, QPSK, Differential PSK
ASK, OOK, MASK:

• The amplitude (or height) of the sine wave varies to transmit the ones and zeros
• One amplitude encodes a 0 while another amplitude encodes a 1 (a form of amplitude
modulation)

Binary amplitude shift keying, Bandwidth:

B = (1 + d) × S = (1 + d) × N × (1/r)

where S is the signal rate (baud), N is the data rate (bps), r is the number of data bits per
signal element, and d (d ≥ 0) is a factor related to the condition of the line.

Implementation of binary ASK:


Frequency Shift Keying:

• One frequency encodes a 0 while another frequency encodes a 1 (a form of frequency

modulation)

s(t) = \begin{cases} A \cos(2\pi f_1 t), & \text{binary 1} \\ A \cos(2\pi f_2 t), & \text{binary 0} \end{cases}
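To tie the three basic keying formats together, the sketch below generates one bit period of ASK, FSK, and BPSK waveforms for a given bit. The carrier frequencies, amplitude, and sample rate are assumed for illustration only.

import numpy as np

fs, Tb = 100_000, 1e-3                  # assumed sample rate and bit duration
t = np.arange(0, Tb, 1 / fs)
f1, f2, fc, A = 5000, 2000, 5000, 1.0   # assumed carrier frequencies and amplitude

def ask(bit):   # amplitude keyed: carrier on for 1, off for 0 (OOK)
    return (A if bit else 0.0) * np.cos(2 * np.pi * fc * t)

def fsk(bit):   # frequency keyed: f1 for 1, f2 for 0
    return A * np.cos(2 * np.pi * (f1 if bit else f2) * t)

def bpsk(bit):  # phase keyed: 0 or 180 degrees
    return A * np.cos(2 * np.pi * fc * t + (0.0 if bit else np.pi))

for name, mod in [("ASK", ask), ("FSK", fsk), ("BPSK", bpsk)]:
    print(name, "bit 1 ->", mod(1)[:3], "... bit 0 ->", mod(0)[:3])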
FSK Bandwidth:

• Limiting factor: Physical capabilities of the carrier


• Not susceptible to noise as much as ASK
• Applications
– On voice-grade lines, used up to 1200bps
– Used for high-frequency (3 to 30 MHz) radio transmission
– used at higher frequencies on LANs that use coaxial cable

DBPSK:

• Differential BPSK
– 0 = same phase as last signal element
– 1 = 180º shift from last signal element

For QPSK, each pair of bits (dibit) selects one of four carrier phases:

s(t) = \begin{cases} A \cos\!\left(2\pi f_c t + \dfrac{\pi}{4}\right), & \text{dibit } 11 \\[1mm] A \cos\!\left(2\pi f_c t + \dfrac{3\pi}{4}\right), & \text{dibit } 01 \\[1mm] A \cos\!\left(2\pi f_c t - \dfrac{3\pi}{4}\right), & \text{dibit } 00 \\[1mm] A \cos\!\left(2\pi f_c t - \dfrac{\pi}{4}\right), & \text{dibit } 10 \end{cases}

Concept of a constellation :
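As a concrete illustration of a constellation, the sketch below places the four QPSK phases from the table above as points in the I/Q plane and Gray-maps the dibits onto them; the dibit-to-phase mapping matches that table, and the rest of the parameters are assumed.

import numpy as np

A = 1.0
# Dibit -> phase mapping taken from the table above (Gray coded)
phases = {"11": np.pi / 4, "01": 3 * np.pi / 4,
          "00": -3 * np.pi / 4, "10": -np.pi / 4}

# Constellation points: complex I + jQ representation of each phase
constellation = {d: A * np.exp(1j * p) for d, p in phases.items()}
for dibit, point in constellation.items():
    print(dibit, "-> I =", round(point.real, 3), ", Q =", round(point.imag, 3))
# Note: adjacent points differ in exactly one bit, which is the Gray-coding property.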
M-ary PSK:

Using multiple phase angles, with each angle possibly having more than one amplitude,
multiple signal elements can be achieved:

D = \frac{R}{L} = \frac{R}{\log_2 M}

where

– D = modulation rate, baud
– R = data rate, bps
– M = number of different signal elements = 2^L
– L = number of bits per signal element
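As a quick worked instance of this relation (with an assumed data rate): at R = 9600 bps using 16-PSK, M = 16 so L = log2(16) = 4 bits per signal element, and the modulation rate is D = 9600/4 = 2400 baud.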

QAM:

– As an example of QAM, 12 different phases are combined with two different


amplitudes
– Since only 4 phase angles have 2 different amplitudes, there are a total of 16
combinations
– With 16 signal combinations, each baud equals 4 bits of information (2 ^ 4 =
16)
– Combine ASK and PSK such that each signal corresponds to multiple bits
– More phases than amplitudes
– Minimum bandwidth requirement same as ASK or PSK
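For illustration, the sketch below builds the common rectangular 16-QAM constellation (4 levels per rail) rather than the exact 12-phase/2-amplitude scheme described above; both carry 4 bits per baud:

import numpy as np

LEVELS = np.array([-3, -1, 1, 3])

def qam16(bits):
    """Map groups of 4 bits to one complex 16-QAM symbol (rectangular grid)."""
    assert len(bits) % 4 == 0
    out = []
    for i in range(0, len(bits), 4):
        i_idx = 2 * bits[i] + bits[i + 1]      # 2 bits -> in-phase level
        q_idx = 2 * bits[i + 2] + bits[i + 3]  # 2 bits -> quadrature level
        out.append(complex(LEVELS[i_idx], LEVELS[q_idx]))
    return np.array(out)                       # one symbol per 4 bits

print(qam16([1, 0, 0, 1, 1, 1, 0, 0]))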
QAM and QPR:

• QAM is a combination of ASK and PSK


– Two different signals sent simultaneously on the same carrier frequency

– M=4, 16, 32, 64, 128, 256


• Quadrature Partial Response (QPR)
– 3 levels (+1, 0, -1), so 9QPR, 49QPR
Offset quadrature phase-shift keying (OQPSK):

• QPSK can have 180 degree jump, amplitude fluctuation


• By offsetting the timing of the odd and even bits by one bit-period, or half a symbol-
period, the in-phase and quadrature components will never change at the same time.
Generation and Detection of Coherent BPSK:

Figure 6.26 Block diagrams for (a) binary FSK transmitter and (b) coherent binary FSK
receiver.

Figure 6.30 (a) Input binary sequence. (b) Waveform of scaled time
function s1f1(t). (c) Waveform of scaled time function s2f2(t). (d)
Waveform of the MSK signal s(t) obtained by adding s1f1(t) and
s2f2(t) on a bit-by-bit basis.
Figure 6.29 Signal-space diagram for MSK system.

Generation and Detection of MSK Signals:


Figure 6.31 Block diagrams for (a) MSK transmitter and (b) coherent MSK receiver.
UNIT 3
Base Band Transmission And Optimal reception of
Digital Signal
• Pulse shaping for optimum transmission
• A baseband signal receiver
• Different pulses and power spectral densities
• Probability of error, optimum receiver
• Optimum coherent reception
• Signal space representation and probability of error
• Eye diagram
• Cross talk

BASEBAND FORMATTING TECHNIQUES

CORRELATIVE LEVEL CODING:


• Correlative-level coding (partial-response signaling)
– adds ISI to the transmitted signal in a controlled manner
• Since the ISI introduced into the transmitted signal is known, its effect can be interpreted at the receiver
• A practical method of achieving the theoretical maximum signaling rate of 2W symbols per second in a bandwidth of W Hertz
• Uses realizable and perturbation-tolerant filters

Duo-binary Signaling :

Duo : doubling of the transmission capacity of a straight binary system

• Binary input sequence {b_k}: uncorrelated binary symbols 1, 0

a_k = +1 if symbol b_k is 1, a_k = −1 if symbol b_k is 0

c_k = a_k + a_{k−1}

H_I(f) = H_Nyquist(f)[1 + exp(−j2πfT_b)]
       = H_Nyquist(f)[exp(jπfT_b) + exp(−jπfT_b)] exp(−jπfT_b)
       = 2 H_Nyquist(f) cos(πfT_b) exp(−jπfT_b)

H_Nyquist(f) = 1 for |f| ≤ 1/2T_b, 0 otherwise

so that

H_I(f) = 2 cos(πfT_b) exp(−jπfT_b) for |f| ≤ 1/2T_b, 0 otherwise

h_I(t) = sinc(t/T_b) + sinc((t − T_b)/T_b)
       = T_b² sin(πt/T_b) / [πt(T_b − t)]
• The tails of h_I(t) decay as 1/|t|², a faster rate of decay than the 1/|t| encountered in the ideal Nyquist channel
• Let â_k represent the estimate of the original pulse a_k as conceived by the receiver at time t = kT_b
• Decision feedback: technique of using a stored estimate of the previous symbol
• Error propagation: drawback; once errors are made, they tend to propagate through the output
• Precoding: a practical means of avoiding the error-propagation phenomenon, applied before the duobinary coding

d_k = b_k ⊕ d_{k−1}

d_k = symbol 1 if either symbol b_k or d_{k−1} (but not both) is 1
d_k = symbol 0 otherwise

• {d_k} is applied to a pulse-amplitude modulator, producing a corresponding two-level sequence of short pulses {a_k}, where a_k = +1 or −1 as before

c_k = a_k + a_{k−1}

c_k = 0 if data symbol b_k is 1
c_k = ±2 if data symbol b_k is 0

The decision rule at the receiver is then:

If |c_k| < 1, say symbol b_k is 1
If |c_k| > 1, say symbol b_k is 0
|c_k| = 1: make a random guess in favor of symbol 1 or 0
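The precoder, duobinary coder and decision rule above can be checked end to end in a few lines; this sketch assumes a reference bit d₀ = 0:

def duobinary(bits):
    """Precoded duobinary: d_k = b_k XOR d_{k-1}, a_k = 2d_k - 1,
    c_k = a_k + a_{k-1}; decide b_k = 1 when |c_k| < 1, else 0."""
    d_prev, a_prev = 0, -1           # reference bit d_0 = 0 (assumption)
    decoded = []
    for b in bits:
        d = b ^ d_prev               # precoder (modulo-2 addition)
        a = 2 * d - 1                # two-level sequence: +1 / -1
        c = a + a_prev               # duobinary coder output: 0 or +/-2
        decoded.append(1 if abs(c) < 1 else 0)
        d_prev, a_prev = d, a
    return decoded

bits = [0, 0, 1, 0, 1, 1, 0]
assert duobinary(bits) == bits       # memoryless decision recovers b_k

Note that the decision on b_k needs no knowledge of previous decisions, which is exactly the error-propagation immunity that precoding buys.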

Modified Duo-binary Signaling:

• The duobinary spectrum is nonzero at the origin, which is undesirable
• Obtained by subtracting amplitude-modulated pulses spaced 2T_b seconds apart:

c_k = a_k − a_{k−2}

H_IV(f) = H_Nyquist(f)[1 − exp(−j4πfT_b)]
        = 2j H_Nyquist(f) sin(2πfT_b) exp(−j2πfT_b)

so that

H_IV(f) = 2j sin(2πfT_b) exp(−j2πfT_b) for |f| ≤ 1/2T_b, 0 elsewhere

h_IV(t) = sinc(t/T_b) − sinc((t − 2T_b)/T_b)

• Precoding:

d_k = b_k ⊕ d_{k−2}
    = symbol 1 if either symbol b_k or d_{k−2} (but not both) is 1
    = symbol 0 otherwise

• Decision rule:

If |c_k| > 1, say symbol b_k is 1
If |c_k| < 1, say symbol b_k is 0
|c_k| = 1: make a random guess in favor of symbol 1 or 0

Generalized form of correlative-level coding:

The overall impulse response is a superposition of N sinc pulses weighted by tap coefficients w_n:

h(t) = Σ_{n=0}^{N−1} w_n sinc(t/T_b − n)

Baseband M-ary PAM Transmission:

• Each symbol produces one of M possible amplitude levels
• T: symbol duration
• 1/T: signaling rate in symbols per second (bauds)
– Each symbol carries log₂M bits, so the equivalent bit rate is (1/T)·log₂M bits per second
• T_b: bit duration of the equivalent binary PAM
• To realize the same average probability of symbol error, the transmitted power must be increased by a factor of M²/log₂M compared to binary PAM
Tapped-delay-line equalization :

 Approach to high speed transmission


– Combination of two basic signal-processing operation
– Discrete PAM
– Linear modulation scheme
 The number of detectable amplitude levels is often limited by ISI
 Residual distortion for ISI : limiting factor on data rate of the system
 Equalization : to compensate for the residual distortion
 Equalizer : filter
– A device well-suited for the design of a linear equalizer is the tapped-delay-line filter
– The total number of taps is chosen to be (2N+1), giving the impulse response

h(t) = Σ_{k=−N}^{N} w_k δ(t − kT)

• p(t) is equal to the convolution of c(t) and h(t):

p(t) = c(t) ⋆ h(t) = c(t) ⋆ Σ_{k=−N}^{N} w_k δ(t − kT)
     = Σ_{k=−N}^{N} w_k c(t − kT)

• Sampling at t = nT gives the discrete convolution sum

p(nT) = Σ_{k=−N}^{N} w_k c((n − k)T)
• Nyquist criterion for distortionless transmission, with T used in place of T_b and the normalized condition p(0) = 1:

p(nT) = 1 for n = 0
p(nT) = 0 for n = ±1, ±2, …, ±N
• Zero-forcing equalizer
– Optimum in the sense that it minimizes the peak distortion (ISI) – the worst case
– Simple implementation
– The longer the equalizer, the more closely it approximates the ideal condition for distortionless transmission
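A zero-forcing design reduces to solving a small linear system for the (2N+1) tap weights. A sketch (the dict-based channel description and its sample values are illustrative assumptions):

import numpy as np

def zero_forcing(c, N):
    """Solve for (2N+1) tap weights forcing p(nT) = delta[n], |n| <= N.

    `c` maps integer sample offsets k to channel samples c(kT)."""
    ks = range(-N, N + 1)
    C = np.array([[c.get(n - k, 0.0) for k in ks] for n in ks])
    e = np.zeros(2 * N + 1); e[N] = 1.0        # desired p(nT) = delta[n]
    return np.linalg.solve(C, e)

# Channel with a main tap and two small ISI taps
channel = {-1: 0.1, 0: 1.0, 1: -0.2}
w = zero_forcing(channel, N=2)
print(np.round(w, 4))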

Adaptive Equalizer :

• The channel is usually time-varying
– Differences in the transmission characteristics of the individual links that may be switched together
– Differences in the number of links in a connection
• Adaptive equalization
– The equalizer adjusts itself by operating on the input signal
• Training sequence
– Precall equalization
– The channel changes little during an average data call
• Prechannel equalization
– Requires a feedback channel
• Postchannel equalization
• Synchronous
– Tap spacing is the same as the symbol duration of the transmitted signal

Least-Mean-Square Algorithm:

• Adaptation may be achieved
– by observing the error between the desired pulse shape and the actual pulse shape
– and using this error to estimate the direction in which the tap weights should be changed
• Mean-square error criterion
– More general in application
– Less sensitive to timing perturbations
• Let d_n denote the desired response, y_n the actual response, and e_n = d_n − y_n the error signal
• The mean-square error is defined by the cost function

ε = E[e_n²]

• Differentiating with respect to the tap weight w_k gives the ensemble-averaged cross-correlation

∂ε/∂w_k = 2 E[e_n ∂e_n/∂w_k] = −2 E[e_n x_{n−k}] = −2 R_ex(k)

R_ex(k) = E[e_n x_{n−k}]

• Optimality condition for minimum mean-square error:

∂ε/∂w_k = 0 for k = 0, ±1, …, ±N

• The mean-square error is a second-order (parabolic) function of the tap weights, i.e., a multidimensional bowl-shaped surface
• The adaptive process makes successive adjustments of the tap weights, seeking the bottom of the bowl (the minimum value)
• Steepest-descent algorithm
– Successive adjustments to each tap weight are made in the direction opposite to the gradient vector
– Recursive formula (μ: step-size parameter):

w_k(n+1) = w_k(n) − (1/2) μ ∂ε/∂w_k
         = w_k(n) + μ R_ex(k),  k = 0, ±1, …, ±N

• Least-mean-square algorithm
– The steepest-descent algorithm is not available in an unknown environment
– LMS approximates it using the instantaneous estimate

R̂_ex(k) = e_n x_{n−k}
w_k(n+1) = w_k(n) + μ e_n x_{n−k}

• LMS is a feedback system
• For small μ, LMS behaves roughly like the steepest-descent algorithm
Operation of the equalizer:

• Training mode
– A known sequence is transmitted and a synchronized version is generated in the receiver
– The training sequence is typically a pseudo-noise (PN) sequence
• Decision-directed mode
– After the training sequence is completed
– Tracks relatively slow variations in the channel characteristic
• Large μ: fast tracking, but larger excess mean-square error

Implementation Approaches:

• Analog
– CCD; tap weights stored in digital memory, analog samples and multiplication
– Used when the symbol rate is too high for digital processing
• Digital
– Samples are quantized and stored in a shift register
– Tap weights stored in a shift register, digital multiplication
• Programmable digital
– Microprocessor
– Flexibility
– Same H/W may be time-shared

Decision-Feedback Equalization:

• Baseband channel impulse response: {h_n}; input: {x_n}

y_n = Σ_k h_k x_{n−k} = h_0 x_n + Σ_{k<0} h_k x_{n−k} + Σ_{k>0} h_k x_{n−k}

(the first sum collects the precursors, the second the postcursors)

• Idea: use data decisions made on the basis of the precursors to take care of the postcursors
– The decisions would obviously have to be correct

• Feedforward section: a tapped-delay-line equalizer
• Feedback section: the decision is made on previously detected symbols of the input sequence
– A nonlinear feedback loop through the decision device

The combined tap vector and input vector are

c_n = [w_n⁽¹⁾; w_n⁽²⁾],  v_n = [x_n; ã_n]

with error e_n = a_n − c_nᵀ v_n and updates

w_{n+1}⁽¹⁾ = w_n⁽¹⁾ + μ₁ e_n x_n
w_{n+1}⁽²⁾ = w_n⁽²⁾ + μ₂ e_n ã_n
Eye Pattern:

• An experimental tool for evaluating intersymbol interference in an insightful manner
– Synchronized superposition of all realizations of the signal of interest viewed within a particular signaling interval
• Eye opening: interior region of the eye pattern
• In the case of an M-ary system, the eye pattern contains (M − 1) eye openings, where M is the number of discrete amplitude levels
Interpretation of Eye Diagram:
Information Theory
• Information and entropy
• Conditional entropy and redundancy
• Shannon-Fano coding
• Mutual information
• Information loss due to noise
• Source coding: Huffman code, variable-length coding
• Source coding to increase average information per bit
• Lossy source coding

INFORMATION THEORY AND CODING TECHNIQUES

Information sources

Definition:
The set of source symbols is called the source alphabet, and the elements of the set are
called the symbols or letters.

The number of possible answers 'r' should be linked to "information," and "information" should be additive in some sense. We therefore define the following measure of information:

I(U) = log_b r

where 'r' is the number of all possible outcomes of a random message U.

Using this definition we can confirm that it has the wanted property of additivity: for two independent messages with r₁ and r₂ possible outcomes, log_b(r₁r₂) = log_b r₁ + log_b r₂.

The base 'b' of the logarithm is only a change of units without actually changing the amount of information described.

Classification of information sources

1. Discrete memoryless sources.
2. Sources with memory.

A discrete memoryless source (DMS) can be characterized by "the list of the symbols, the probability assignment to these symbols, and the specification of the rate of generating these symbols by the source".

Two requirements on an information measure:

1. Information should be proportional to the uncertainty of an outcome.
2. Information contained in independent outcomes should add.

Information content of a symbol:

Let us consider a discrete memoryless source (DMS) denoted by X and having the alphabet {U₁, U₂, …, U_m}. The information content of a symbol U, denoted by I(U), is defined as

I(U) = log_b (1/P(U)) = −log_b P(U)

where P(U) is the probability of occurrence of symbol U.

Units of I(U):

For two important and one unimportant special cases of b it has been agreed to use the following names for these units:

b = 2 (log₂): bit,

b = e (ln): nat (natural logarithm),

b = 10 (log₁₀): Hartley.

The conversion between these units is given by

log₂ a = ln a / ln 2 = log₁₀ a / log₁₀ 2

Uncertainty or Entropy (i.e., Average information)

Definition:

The information content of individual symbols can fluctuate widely because of the randomness involved in the selection of symbols, so we average it over the alphabet.

The uncertainty or entropy of a discrete random variable (RV) U is defined as

H(U) = E[I(U)] = −Σ_{u ∈ supp(P_U)} P_U(u) log_b P_U(u)

where P_U(·) denotes the probability mass function (PMF) of the RV U, and where the support of P_U is the set of u with P_U(u) > 0. We will usually neglect to mention "support" when we sum over P_U(u) · log_b P_U(u), i.e., we implicitly assume that we exclude all u with zero probability P_U(u) = 0.

Entropy for a binary source

It may be noted that for a binary source U which generates independent symbols 0 and 1 with equal probability, the source entropy H(U) is

H(U) = −(1/2) log₂(1/2) − (1/2) log₂(1/2) = 1 bit/symbol
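The entropy definition translates directly into code; a small sketch that skips zero-probability symbols, i.e., sums only over supp(P_U):

import numpy as np

def entropy(p, b=2):
    """H(U) = -sum P_U(u) log_b P_U(u) over the support of P_U."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                              # restrict to supp(P_U)
    return -np.sum(p * np.log(p)) / np.log(b)

print(entropy([0.5, 0.5]))     # binary equiprobable source: 1.0 bit/symbol
print(entropy([0.25] * 4))     # 4 equally likely symbols: log2(4) = 2 bits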

Bounds on H(U)

If U has r possible values, then

0 ≤ H(U) ≤ log r

where

H(U) = 0 if, and only if, P_U(u) = 1 for some u,

H(U) = log r if, and only if, P_U(u) = 1/r for all u.

Hence, H(U) ≥ 0. Equality can only be achieved if −P_U(u) log₂ P_U(u) = 0 for all u ∈ supp(P_U), i.e., P_U(u) = 1 for all u ∈ supp(P_U).

To derive the upper bound we use a trick that is quite common in information theory: we take the difference H(U) − log r and try to show that it must be non-positive. Equality can only be achieved if

1. the IT inequality holds with ξ = 1, i.e., if (1/r)/P_U(u) = 1, so that P_U(u) = 1/r for all u;

2. |supp(P_U)| = r.

Note that if Condition 1 is satisfied, Condition 2 is also satisfied.

Conditional Entropy

Similar to probabilities of random vectors, there is nothing really new about conditional probabilities given that a particular event Y = y has occurred.

The conditional entropy or conditional uncertainty of the RV X given the event Y = y is defined as

H(X | Y = y) = −Σ_x P_{X|Y}(x | y) log_b P_{X|Y}(x | y)

Note that the definition is identical to before, apart from that everything is conditioned on the event Y = y.

Note that the conditional entropy given the event Y = y is a function of y. Since Y is also a RV, we can now average over all possible events Y = y according to the probabilities of each event. This leads to the averaged conditional entropy H(X | Y).

• Forward Error Correction (FEC)


– Coding designed so that errors can be corrected at the receiver
– Appropriate for delay sensitive and one-way transmission (e.g., broadcast TV)
of data
– Two main types, namely block codes and convolutional codes. We will only
look at block codes
UNIT 4
Linear Block Codes

• Matrix description of linear block codes
• Error detection and error correction capabilities of linear block codes
• Cyclic codes: algebraic structure, encoding, syndrome calculation, decoding

Block Codes:

• We will consider only binary data
• Data is grouped into blocks of length k bits (dataword)
• Each dataword is coded into a block of length n bits (codeword), where in general n > k
• This is known as an (n,k) block code
• A vector notation is used for the datawords and codewords:
– Dataword d = (d1 d2 … dk)
– Codeword c = (c1 c2 … cn)
• The redundancy introduced by the code is quantified by the code rate:
– Code rate = k/n
– i.e., the higher the redundancy, the lower the code rate
Hamming Distance:

• Error control capability is determined by the Hamming distance


• The Hamming distance between two codewords is equal to the number of differences
between them, e.g.,
10011011

11010010 have a Hamming distance = 3

• Alternatively, can compute by adding codewords (mod 2)


=01001001 (now count up the ones)

• The maximum number of detectable errors is

d_min − 1

• The maximum number of correctable errors is given by

t = ⌊(d_min − 1) / 2⌋

where d_min is the minimum Hamming distance between two codewords and ⌊·⌋ denotes the largest integer not exceeding its argument.
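These definitions are one-liners in code. A sketch using the example codewords above (the d_min = 3 in the printout is assumed purely for illustration):

def hamming_distance(c1, c2):
    """Distance = number of differing positions = weight of the mod-2 sum."""
    return sum(b1 != b2 for b1, b2 in zip(c1, c2))

d = hamming_distance("10011011", "11010010")
assert d == 3

d_min = 3                                       # assumed minimum distance
print("detectable errors:", d_min - 1)          # 2
print("correctable errors:", (d_min - 1) // 2)  # t = 1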

Linear Block Codes:

• As seen from the second Parity Code example, it is possible to use a table to hold all
the codewords for a code and to look-up the appropriate codeword based on the
supplied dataword
• Alternatively, it is possible to create codewords by addition of other codewords. This has the advantage that there is no longer any need to hold every possible codeword in the table.
• If there are k data bits, all that is required is to hold k linearly independent codewords,
i.e., a set of k codewords none of which can be produced by linear combinations of 2
or more codewords in the set.
• The easiest way to find k linearly independent codewords is to choose those which
have ‘1’ in just one of the first k positions and ‘0’ in the other k-1 of the first k
positions.

• For example for a (7,4) code, only four codewords are required, e.g.,

1 0 0 0 1 1 0
0 1 0 0 1 0 1
0 0 1 0 0 1 1
0 0 0 1 1 1 1

• So, to obtain the codeword for dataword 1011, the first, third and fourth codewords in
the list are added together, giving 1011010
• This process will now be described in more detail

• An (n,k) block code has code vectors


d=(d1 d2….dk) and

c=(c1 c2……..cn)

• The block coding process can be written as c = dG, where G is the generator matrix

G = [a_ij], a k × n matrix with rows a₁, a₂, …, a_k

• Thus,

c = Σ_{i=1}^{k} d_i a_i

• The a_i must be linearly independent: since codewords are given by summations of the a_i vectors, to avoid two datawords having the same codeword the a_i vectors must be linearly independent.

• The sum (mod 2) of any two codewords is also a codeword: for datawords d₁ and d₂ with d₃ = d₁ + d₂,

c₃ = Σ_{i=1}^{k} d₃ᵢ aᵢ = Σ_{i=1}^{k} (d₁ᵢ + d₂ᵢ) aᵢ = Σ_{i=1}^{k} d₁ᵢ aᵢ + Σ_{i=1}^{k} d₂ᵢ aᵢ

so c₃ = c₁ + c₂
Error Correcting Power of LBC:

• The Hamming distance of a linear block code (LBC) is simply the minimum
Hamming weight (number of 1’s or equivalently the distance from the all 0
codeword) of the non-zero codewords
• Note d(c1,c2) = w(c1+ c2) as shown previously
• For an LBC, c1+ c2=c3
• So min (d(c1,c2)) = min (w(c1+ c2)) = min (w(c3))
• Therefore, to find the minimum Hamming distance we just need to search among the 2^k codewords for the minimum Hamming weight – far simpler than doing a pairwise check of all possible codeword pairs.

Linear Block Codes – example 1:

• For example a (4,2) code, suppose;

1 0 1 1
G  
0 1 0 1

a1 = [1011]

a2 = [0101]

• For d = [1 1]:

c = dG = a₁ + a₂ = [1 0 1 1] + [0 1 0 1] = [1 1 1 0]

Linear Block Codes – example 2:

• A (6,5) code with


1 0 0 0 0 1
0 1 0 0 0 1
 
G  0 0 1 0 0 1
 
0 0 0 1 0 1
0 0 0 0 1 1
• is an even single-parity-check code

Systematic Codes:

• For a systematic block code the dataword appears unaltered in the codeword – usually
at the start
• The generator matrix has the structure

G = [I | P] =
[1 0 .. 0 | p11 p12 .. p1R]
[0 1 .. 0 | p21 p22 .. p2R]
[.. .. .. | .. .. .. ..]
[0 0 .. 1 | pk1 pk2 .. pkR]

where R = n − k.

• P generates the R bits that are often referred to as parity bits
• I is the k×k identity matrix; it ensures the dataword appears at the beginning of the codeword. P is a k×R matrix.

Decoding Linear Codes:

• One possibility is a ROM look-up table


• In this case received codeword is used as an address
• Example – Even single parity check code;
Address Data

000000 0

000001 1

000010 1

000011 0

……… .

• Data output is the error flag, i.e., 0 – codeword OK, 1 – error detected


• If no error, data word is first k bits of codeword
• For an error correcting code the ROM can also store data words
• Another possibility is algebraic decoding, i.e., the error flag is computed from the
received codeword (as in the case of simple parity codes)
• How can this method be extended to more complex error detection and correction
codes?

Parity Check Matrix:

• A linear block code is a linear subspace S sub of all length n vectors (Space S)
• Consider the subset S null of all length n vectors in space S that are orthogonal to all
length n vectors in S sub
• It can be shown that the dimensionality of S null is n-k, where n is the dimensionality
of S and k is the dimensionality of
S sub

• It can also be shown that S null is a valid subspace of S and consequently S sub is also
the null space of S null
• S null can be represented by its basis vectors. The matrix H whose rows are these basis vectors is the generator matrix for S null, of dimension n-k = R
• This matrix is called the parity check matrix of the code defined by G, where G is the generator matrix for S sub, of dimension k
• Note that the number of vectors in the basis defines the dimension of the subspace
• So the dimension of H is n-k (= R) and all vectors in the null space are orthogonal to
all the vectors of the code
• Since the rows of H, namely the vectors bi are members of the null space they are
orthogonal to any code vector
• So a vector y is a codeword only if yHT=0
• Note that a linear block code can be specified by either G or H

Parity Check Matrix:

H = [b_ij], an R × n matrix with rows b₁, b₂, …, b_R, where R = n − k

• So H is used to check if a codeword is valid,


• The rows of H, namely, bi, are chosen to be orthogonal to rows of G, namely ai
• Consequently the dot product of any valid codeword with any bi is zero

This is so since

c = Σ_{i=1}^{k} d_i a_i

and so

b_j · c = b_j · Σ_{i=1}^{k} d_i a_i = Σ_{i=1}^{k} d_i (a_i · b_j) = 0
• This means that a codeword is valid (but not necessarily correct) only if cHT = 0. To
ensure this it is required that the rows of H are independent and are orthogonal to the
rows of G
• That is the bi span the remaining R (= n - k) dimensions of the codespace

• For example consider a (3,2) code. In this case G has 2 rows, a1 and a2
• Consequently all valid codewords sit in the subspace (in this case a plane) spanned by
a1 and a2
• In this example the H matrix has only one row, namely b1. This vector is orthogonal
to the plane containing the rows of the G matrix, i.e., a1 and a2
• Any received codeword which is not in the plane containing a1 and a2 (i.e., an invalid
codeword) will thus have a component in the direction of b1 yielding a non- zero dot
product between itself and b1.

Error Syndrome:

• For error correcting codes we need a method to compute the required correction
• To do this we use the Error Syndrome, s of a received codeword, cr
s = crHT

• If cr is corrupted by the addition of an error vector, e, then


cr = c + e

and

s = (c + e) HT = cHT + eHT

s = 0 + eHT

Syndrome depends only on the error

• That is, we can add the same error pattern to different codewords and get the same syndrome.
– There are 2^(n−k) syndromes but 2^n error patterns
– For example, for a (3,2) code there are 2 syndromes and 8 error patterns
– Clearly no error correction is possible in this case
– Another example: a (7,4) code has 8 syndromes and 128 error patterns
– With 8 syndromes we can provide a different value to indicate single errors in any of the 7 bit positions, as well as the zero value to indicate no errors
• We now need to determine which error pattern caused the syndrome

• For systematic linear block codes, H is constructed as follows,


G = [ I | P] and so H = [-PT | I]

where I is the k*k identity for G and the R*R identity for H

• Example, (7,4) code, d_min = 3:

G = [I | P] =
[1 0 0 0 | 0 1 1]
[0 1 0 0 | 1 0 1]
[0 0 1 0 | 1 1 0]
[0 0 0 1 | 1 1 1]

H = [−Pᵀ | I] =
[0 1 1 1 | 1 0 0]
[1 0 1 1 | 0 1 0]
[1 1 0 1 | 0 0 1]

(in binary arithmetic −Pᵀ = Pᵀ)

Error Syndrome – Example:

• For the received codeword c_r = [1 1 0 1 0 0 1]:

s = c_r Hᵀ = [1 1 0 1 0 0 1] Hᵀ = [0 0 0]

so c_r is a valid codeword (zero syndrome).
Standard Array:

• The standard array is constructed as follows:

c1 (all zero)  c2      ……  cM      s0
e1             c2+e1   ……  cM+e1   s1
e2             c2+e2   ……  cM+e2   s2
e3             c2+e3   ……  cM+e3   s3
…              ……      ……  ……      …
eN             c2+eN   ……  cM+eN   sN

• The array has 2k columns (i.e., equal to the number of valid codewords) and 2R rows
(i.e., the number of syndromes)

Hamming Codes:

• We will consider a special class of SEC codes (i.e., Hamming distance = 3) where,
– Number of parity bits R = n – k and n = 2R – 1
– Syndrome has R bits
– 0 value implies zero errors
– 2R – 1 other syndrome values, i.e., one for each bit that might need to be
corrected
– This is achieved if each column of H is a different binary word – remember s
= eHT
• The systematic form of the (7,4) Hamming code is

G = [I | P] =
[1 0 0 0 | 0 1 1]
[0 1 0 0 | 1 0 1]
[0 0 1 0 | 1 1 0]
[0 0 0 1 | 1 1 1]

H = [−Pᵀ | I] =
[0 1 1 1 | 1 0 0]
[1 0 1 1 | 0 1 0]
[1 1 0 1 | 0 0 1]

• The original form is non-systematic:

G = [1 1 1 0 0 0 0]      H = [0 0 0 1 1 1 1]
    [1 0 0 1 1 0 0]          [0 1 1 0 0 1 1]
    [0 1 0 1 0 1 0]          [1 0 1 0 1 0 1]
    [1 1 0 1 0 0 1]
• Compared with the systematic code, the column orders of both G and H are swapped
so that the columns of H are a binary count
• The column order is now 7, 6, 1, 5, 2, 3, 4, i.e., col. 1 in the non-systematic H is col. 7
in the systematic H.

Transmission and Storage

Introduction

◊ A major concern in designing digital data transmission and storage systems is the control of errors, so that reliable reproduction of data can be obtained.

◊ In 1948, Shannon demonstrated that, by proper encoding of the information, errors induced
by a noisy channel or storage medium can be reduced to any desired level without sacrificing
the rate of information transmission or storage, as long as the information rate is less than the
capacity of the channel.

◊ A great deal of effort has been expended on the problem of devising efficient encoding and
decoding methods for error control in a noisy environment

Typical Digital Communications Systems

◊ Block diagram of a typical data transmission or storage system

Types of Codes

◊ There are four types of codes in common use today:

◊ Block codes

◊ Convolutional codes

◊ Turbo codes

◊ Low-Density Parity-Check (LDPC) Codes

◊ Block codes

◊ The encoder for a block code divides the information sequence


into message blocks of k information bits each.

◊ A message block is represented by the binary k-tuple u = (u1, u2, …, uk), called a message.

◊ There are a total of 2k different possible messages.

Block Codes

◊ Block codes (cont.)

◊ The encoder transforms each message u into an n-tuple v = (v1, v2, …, vn) of discrete symbols called a code word.

◊ Corresponding to the 2k different possible messages, there are 2k different possible code
words at the encoder output.

◊ This set of 2k code words of length n is called an (n,k) block code.

◊ The ratio R=k/n is called the code rate.

◊ n-k redundant bits can be added to each message to form a code word

◊ Since the n-symbol output code word depends only on the corresponding k-bit input
message, the encoder is memoryless, and can be implemented with a combinational logic
circuit.

Block Codes

◊ Binary block code with k=4 and n=7

Finite Field (Galois Field)

◊ Much of the theory of linear block codes is highly mathematical in nature and requires an extensive background in modern algebra.

◊ The finite field was invented by the early 19th-century mathematician Évariste Galois, a young French math whiz who developed the theory of finite fields, now known as Galois fields, before being killed in a duel at the age of 21.

◊ For well over 100 years, mathematicians looked upon Galois fields as elegant mathematics but of no practical value.
Convolutional Codes

◊ The encoder for a convolutional code also accepts k-bit blocks of the information sequence u and produces an encoded sequence (code word) v of n-symbol blocks.

◊ Each encoded block depends not only on the corresponding k-bit message block at the same time unit, but also on m previous message blocks. Hence, the encoder has a memory order of m.

◊ The set of encoded sequences produced by a k-input, n-output encoder of memory order m is called an (n, k, m) convolutional code.

◊ The ratio R = k/n is called the code rate.

◊ Since the encoder contains memory, it must be implemented with a sequential logic circuit.

◊ Binary convolutional encoder with k = 1, n = 2, and m = 2

Types of Errors

◊ Memoryless channels are called random-error channels (see the transition probability diagram for the binary symmetric channel, BSC).

◊ On channels with memory, the noise is not independent from transmission to transmission.

◊ Channels with memory are called burst-error channels (see the simplified model of a channel with memory).

Error Control Strategies

◊ Error control for a one-way system must be accomplished using forward error correction (FEC), that is, by employing error-correcting codes that automatically correct errors detected at the receiver.

◊ Error control for a two-way system can be accomplished using error detection and retransmission, called automatic repeat request (ARQ). This is also known as backward error correction (BEC).

◊ In an ARQ system, when errors are detected at the receiver, a request is sent for the transmitter to repeat the message, and this continues until the message is received correctly.

◊ The major advantage of ARQ over FEC is that error detection requires much simpler decoding equipment than does error correction.

◊ ARQ is adaptive in the sense that information is retransmitted only when errors occur.

◊ When the channel error rate is high, retransmissions must be sent too frequently, and the
system throughput, the rate at which newly generated messages are correctly received, is
lowered by ARQ.

◊ In general, wire-line communications (more reliable) adopts BEC scheme, while wireless
communications (relatively unreliable) adopts FEC scheme.

Error Detecting Codes

◊ Cyclic Redundancy Code (CRC code) – also known as the polynomial code.

◊ Polynomial codes are based upon treating bit strings as representations of polynomials with coefficients of 0 and 1 only.

◊ For example, 110001 represents a six-term polynomial: x⁵ + x⁴ + x⁰.

◊ When the polynomial code method is employed, the sender and receiver must agree upon a generator polynomial, G(x), in advance.

◊ To compute the checksum for some frame with m bits, corresponding to the polynomial M(x), the frame must be longer than the generator polynomial.

Error Detecting Codes

◊ The idea is to append a checksum to the end of the frame in such a way that the polynomial
represented by the check summed frame divisible by G(x).

◊ When the receiver gets the checksummed frame, it tries dividing it by G(x). If there is a
remainder, there has been a transmission error.

◊ The algorithm for computing the checksum is as follows:

Calculation of the polynomial code checksum Calculation of the polynomial code checksum

Calculation of the polynomial code checksum Calculation of the polynomial code checksum
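A minimal sketch of the checksum computation by mod-2 long division (the frame and the generator G(x) = x⁴ + x + 1 below are illustrative choices):

def crc_remainder(frame_bits, gen_bits):
    """Append len(gen)-1 zeros to the frame and return the remainder of
    mod-2 (XOR) long division by the generator polynomial bits."""
    n = len(gen_bits) - 1
    reg = list(frame_bits) + [0] * n           # frame * x^n
    for i in range(len(frame_bits)):
        if reg[i]:                             # leading 1: subtract (XOR) G
            for j, g in enumerate(gen_bits):
                reg[i + j] ^= g
    return reg[-n:]                            # checksum = remainder

frame = [1,1,0,1,0,1,1,0,1,1]
gen = [1,0,0,1,1]                              # G(x) = x^4 + x + 1
chk = crc_remainder(frame, gen)
print(chk)                                     # [1, 1, 1, 0]
# Receiver side: dividing the checksummed frame by G leaves no remainder
assert crc_remainder(frame + chk, gen) == [0] * 4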
Convolution Codes
• Encoding
• Decoding using state, tree and trellis diagrams
• Decoding using the Viterbi algorithm
• Comparison of error rates in coded and uncoded transmission

Introduction:

• Convolution codes map information to code bits sequentially by convolving a


sequence of information bits with “generator” sequences
• A convolution encoder encodes K information bits to N>K code bits at one time step
• Convolutional codes can be regarded as block codes for which the encoder has a
certain structure such that we can express the encoding operation as convolution

• Convolutional codes are applied in applications that require good performance with
low implementation cost. They operate on code streams (not in blocks)
• Convolution codes have memory that utilizes previous bits to encode or decode
following bits (block codes are memoryless)
• Convolutional codes achieve good performance by expanding their memory depth
• Convolutional codes are denoted by (n,k,L), where L is the code (or encoder) memory depth (number of register stages)
• Constraint length C = n(L+1) is defined as the number of encoded bits a single message bit can influence

• Convolutional encoder, k = 1, n = 2, L=2


– Convolutional encoder is a finite state machine (FSM) processing
information bits in a serial manner
– Thus the generated code is a function of input and the state of the FSM
– In this (n,k,L) = (2,1,2) encoder each message bit influences a span of C = n(L+1) = 6 successive output bits (the constraint length C)
– For generation of the n output bits, we require n modulo-2 adder branches in a k = 1 convolutional encoder

As a rate-1/3 illustration with three generator taps:

x′_j = m_{j−3} ⊕ m_{j−2} ⊕ m_j
x″_j = m_{j−3} ⊕ m_{j−1} ⊕ m_j
x‴_j = m_{j−2} ⊕ m_j

Here each message bit influences a span of C = n(L+1) = 3(1+1) = 6 successive output bits.

Convolution point of view in encoding and the generator matrix:

Example: using the generator sequences

g⁽¹⁾ = [1 0 1 1]
g⁽²⁾ = [1 1 1 1]
Representing convolutional codes: Code tree:

(n,k,L) = (2,1,2) encoder:

x′_j = m_{j−2} ⊕ m_{j−1} ⊕ m_j
x″_j = m_{j−2} ⊕ m_j

x_out = x′₁ x″₁ x′₂ x″₂ x′₃ x″₃ …


Representing convolutional codes compactly: code trellis and state diagram:

State diagram

Inspecting state diagram: Structural properties of convolutional codes:

• Each new block of k input bits causes a transition into a new state
• Hence there are 2^k branches leaving each state
• Assuming the encoder starts in the zero state, the encoded word for any input of k bits can thus be obtained. For instance, below, for u = (1 1 1 0 1) the encoded word v = (1 1, 1 0, 0 1, 0 1, 1 1, 1 0, 1 1, 1 1) is produced:

- encoder state diagram for the (n,k,L) = (2,1,2) code

- note that the number of states is 2^(L+1) = 8
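A minimal encoder sketch for the (2,1,2) code; the default generators g⁽¹⁾ = (1,1,1) and g⁽²⁾ = (1,0,1) are read off the code-tree equations above, and the zero-tail flush and output ordering are assumptions, so the bit pairs need not match the figure's exact labeling:

def conv_encode(msg, g1=(1, 1, 1), g2=(1, 0, 1)):
    """Rate-1/2 feedforward convolutional encoder (sketch).

    Defaults implement x'_j = m_j + m_{j-1} + m_{j-2} and
    x''_j = m_j + m_{j-2} (mod 2); a zero tail of L bits returns
    the encoder to the all-zero state.
    """
    L = len(g1) - 1
    reg, out = [0] * L, []
    for m in list(msg) + [0] * L:          # message followed by flush bits
        window = [m] + reg                 # m_j, m_{j-1}, ..., m_{j-L}
        out.append((sum(a * b for a, b in zip(g1, window)) % 2,
                    sum(a * b for a, b in zip(g2, window)) % 2))
        reg = [m] + reg[:-1]               # shift the new bit in
    return out

print(conv_encode([1, 1, 1, 0, 1]))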

Distance for some convolutional codes:

THE VITERBI ALGORITHM:

• The problem of optimum decoding is to find the minimum-distance path from the initial state back to the initial state (below, from S0 to S0). The minimum distance is the sum of all branch metrics along the path
• The log-likelihood

ln p(y, x^m) = Σ_j ln p(y_j | x^m_j)

is maximized by the correct path
• An exhaustive maximum-likelihood method must search all the paths in the trellis (2^k paths emerging from / entering each of the 2^(L+1) states for an (n,k,L) code)
• The Viterbi algorithm gets its efficiency by concentrating on the survivor paths of the trellis

THE SURVIVOR PATH:

• Assume for simplicity a convolutional code with k = 1; then up to 2^k = 2 branches can enter each state in the trellis diagram
• Assume the optimal path passes through S. The metric comparison is done by adding the metric of S into S1 and S2. On the survivor path the accumulated metric is naturally smaller (otherwise it could not be the optimum path)
• For this reason the non-surviving path can be discarded -> all path alternatives need not be considered
• Note that in principle the whole transmitted sequence must be received before a decision can be made. However, in practice, storing states for an input length of 5L is quite adequate
The maximum likelihood path:

The decoded ML code sequence is 11 10 10 11 00 00 00, whose Hamming distance to the received sequence is 4, and the respective decoded message sequence is 1 1 0 0 0 0 0 (why?). Note that this is the minimum-distance path.

(Black circles denote the deleted branches; dashed lines indicate that a '1' was applied.)

How to end up decoding?

• In the previous example it was assumed that the register was finally filled with zeros, thus finding the minimum-distance path
• In practice, with long code words, zeroing requires feeding a long sequence of zeros at the end of the message bits: this wastes channel capacity and introduces delay
• To avoid this, path memory truncation is applied:
– Trace all the surviving paths to the depth where they merge
– The figure shows a common point at a memory depth J
– J is a random variable whose applicable magnitude (5L, shown in the figure) has been experimentally tested for a negligible error-rate increase
– Note that this also introduces a delay of 5L:

J ≈ 5L stages of the trellis
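A compact hard-decision Viterbi sketch for the same (2,1,2) code, keeping one survivor per state and assuming zero-tail termination (for clarity it decodes the full sequence rather than applying the 5L path-memory truncation just described):

G1, G2 = (1, 1, 1), (1, 0, 1)              # the (2,1,2) generators used above

def _step(state, bit):
    """One trellis transition: (next_state, branch output pair)."""
    w = (bit,) + state                     # m_j, m_{j-1}, m_{j-2}
    out = (sum(a * b for a, b in zip(G1, w)) % 2,
           sum(a * b for a, b in zip(G2, w)) % 2)
    return (bit,) + state[:-1], out

def viterbi(received):
    """Hard-decision Viterbi: path metric = accumulated Hamming distance."""
    survivors = {(0, 0): (0, [])}          # state -> (metric, decoded bits)
    for r in received:
        nxt = {}
        for state, (metric, path) in survivors.items():
            for bit in (0, 1):
                s2, out = _step(state, bit)
                m2 = metric + (out[0] != r[0]) + (out[1] != r[1])
                if s2 not in nxt or m2 < nxt[s2][0]:   # keep the survivor
                    nxt[s2] = (m2, path + [bit])
        survivors = nxt
    metric, path = survivors[(0, 0)]       # terminate in the zero state
    return path[:-2], metric               # strip the two tail bits

# Round trip: encode, inject one channel error, decode.
msg, state, tx = [1, 0, 1, 1, 0], (0, 0), []
for b in msg + [0, 0]:
    state, out = _step(state, b)
    tx.append(out)
tx[2] = (1 - tx[2][0], tx[2][1])           # flip one coded bit
decoded, dist = viterbi(tx)
assert decoded == msg and dist == 1        # the single error is corrected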


Hamming Code Example:

• H(7,4)
• Generator matrix G: the first 4-by-4 block is the identity matrix

• Message information vector p

• Transmission vector x
• Received vector r
and error vector e

• Parity check matrix H


Error Correction:

• If there is no error, syndrome vector z=zeros

• If there is one error at location 2

• New syndrome vector z is


Example: soft-decision path metric using the generator matrix

g⁽¹⁾ = [1 0 1 1]
g⁽²⁾ = [1 1 1 1]

Received branch words: 11 00 01 11 01; compared path branch words: 11 10 01 …

correct: 1+1+2+2+2 = 8;  8 × (−0.11) = −0.88
false:   1+1+0+0+0 = 2;  2 × (−2.30) = −4.6
total path metric: −5.48

(branch metrics ln 0.9 ≈ −0.11 for a matching bit and ln 0.1 ≈ −2.30 for a mismatch)
Turbo Codes:

• Background
– Turbo codes were proposed by Berrou and Glavieux in the 1993 International
Conference in Communications.
– Performance within 0.5 dB of the channel capacity limit for BPSK was
demonstrated.
• Features of turbo codes
– Parallel concatenated coding
– Recursive convolutional encoders
– Pseudo-random interleaving
– Iterative decoding
Motivation: Performance of Turbo Codes

• Comparison:
– Rate 1/2 Codes.
– K=5 turbo code.
– K=14 convolutional code.
• Plot is from:
– L. Perez, “Turbo Codes”, chapter 8 of Trellis Coding by C. Schlegel. IEEE
Press, 1997

Pseudo-random Interleaving:

• The coding dilemma:


– Shannon showed that large block-length random codes achieve channel
capacity.
– However, codes must have structure that permits decoding with reasonable
complexity.
– Codes with structure don’t perform as well as random codes.
– “Almost all codes are good, except those that we can think of.”
• Solution:
– Make the code appear random, while maintaining enough structure to permit
decoding.
– This is the purpose of the pseudo-random interleaver.
– Turbo codes possess random-like properties.
– However, since the interleaving pattern is known, decoding is possible.
Why Interleaving and Recursive Encoding?

• In a coded systems:
– Performance is dominated by low weight code words.
• A “good” code:
– will produce low weight outputs with very low probability.
• An RSC code:
– Produces low weight outputs with fairly low probability.
– However, some inputs still cause low weight outputs.
• Because of the interleaver:
– The probability that both encoders have inputs that cause low
weight outputs is very low.
– Therefore the parallel concatenation of both encoders will produce
a “good” code.

Iterative Decoding:

• There is one decoder for each elementary encoder.


• Each decoder estimates the a posteriori probability (APP) of each data
bit.
• The APP’s are used as a priori information by the other decoder.
• Decoding continues for a set number of iterations.
– Performance generally improves from iteration to iteration, but
follows a law of diminishing returns

The Turbo-Principle:

Turbo codes get their name because the decoder uses feedback, like a turbo engine
Performance as a Function of Number of Iterations:

[Plot: BER versus Eb/No (dB), showing curves for 1, 2, 3, 6, 10 and 18 decoding iterations; BER falls from 10⁰ toward 10⁻⁷ as Eb/No increases from 0.5 to 2 dB, improving with each additional iteration.]

Turbo Code Summary:

• Turbo code advantages:


– Remarkable power efficiency in AWGN and flat-fading channels
for moderately low BER.
– Design tradeoffs suitable for delivery of multimedia services.
• Turbo code disadvantages:
– Long latency.
– Poor performance at very low BER.
– Because turbo codes operate at very low SNR, channel estimation
and tracking is a critical issue.
• The principle of iterative or “turbo” processing can be applied to other
problems.
– Turbo-multiuser detection can improve performance of coded
multiple-access systems.

UNIT 5 :
Spread Spectrum Modulation

• Use of spread spectrum
• Direct sequence spread spectrum (DSSS)
• Code division multiple access
• Ranging using DSSS
• Frequency hopping spread spectrum
• PN sequences: generation and characteristics
• Synchronization in spread spectrum systems
• Advancements in digital communication

SPREAD SPECTRUM MODULATION

• Spread data over wide bandwidth


• Makes jamming and interception harder
• Frequency hopping
– Signal broadcast over seemingly random series of frequencies
• Direct Sequence
– Each bit is represented by multiple bits in transmitted signal
– Chipping code

Spread Spectrum Concept:

• Input fed into channel encoder


– Produces narrow bandwidth analog signal around central frequency
• Signal modulated using sequence of digits
– Spreading code/sequence
– Typically generated by pseudonoise/pseudorandom number generator
• Increases bandwidth significantly
– Spreads spectrum
• Receiver uses same sequence to demodulate signal
• Demodulated signal fed into channel decoder
General Model of Spread Spectrum System:

Gains:

• Immunity from various noise and multipath distortion


– Including jamming
• Can hide/encrypt signals
– Only receiver who knows spreading code can retrieve signal
• Several users can share same higher bandwidth with little interference
– Cellular telephones
– Code division multiplexing (CDM)
– Code division multiple access (CDMA)
Pseudorandom Numbers:

• Generated by algorithm using initial seed


• Deterministic algorithm
– Not actually random
– If algorithm good, results pass reasonable tests of randomness
• Need to know algorithm and seed to predict sequence

Frequency Hopping Spread Spectrum (FHSS):

• Signal broadcast over seemingly random series of frequencies


• Receiver hops between frequencies in sync with transmitter
• Eavesdroppers hear unintelligible blips
• Jamming on one frequency affects only a few bits

Basic Operation:

• Typically 2^k carrier frequencies forming 2^k channels


• Channel spacing corresponds with bandwidth of input
• Each channel used for fixed interval
– 300 ms in IEEE 802.11
– Some number of bits transmitted using some encoding scheme
• May be fractions of bit (see later)
– Sequence dictated by spreading code
Frequency Hopping Example:

Frequency Hopping Spread Spectrum System (Transmitter):

Frequency Hopping Spread Spectrum System (Receiver):


Slow and Fast FHSS:

• Frequency shifted every Tc seconds


• Duration of signal element is Ts seconds
• Slow FHSS has Tc ≥ Ts
• Fast FHSS has Tc < Ts
• Generally fast FHSS gives improved performance in noise (or jamming)
Slow Frequency Hop Spread Spectrum Using MFSK (M=4, k=2)
Fast Frequency Hop Spread Spectrum Using MFSK (M=4, k=2)

FHSS Performance Considerations:

• Typically large number of frequencies used


– Improved resistance to jamming
Direct Sequence Spread Spectrum (DSSS):

• Each bit represented by multiple bits using spreading code


• Spreading code spreads signal across wider frequency band
– In proportion to number of bits used
– 10 bit spreading code spreads signal across 10 times bandwidth of 1 bit code
• One method:
– Combine input with spreading code using XOR
– Input bit 1 inverts spreading code bit
– Input zero bit doesn’t alter spreading code bit
– Data rate equal to original spreading code
• Performance similar to FHSS

Direct Sequence Spread Spectrum Example:


Direct Sequence Spread Spectrum Transmitter:

Direct Sequence Spread Spectrum Receiver:


Direct Sequence Spread Spectrum Using BPSK Example:

Code Division Multiple Access (CDMA):

• Multiplexing technique used with spread spectrum
• Start with data signal rate D
– Called the bit data rate
• Break each bit into k chips according to a fixed pattern specific to each user
– The user's code
• New channel has a chip data rate of kD chips per second
• E.g. k=6, three users (A,B,C) communicating with base receiver R
• Code for A = <1,-1,-1,1,-1,1>
• Code for B = <1,1,-1,-1,1,1>
• Code for C = <1,1,-1,1,1,-1>
CDMA Example:
• Consider A communicating with the base
• The base knows A's code
• Assume communication is already synchronized
• A wants to send a 1
– Send chip pattern <1,-1,-1,1,-1,1> (A's code)
• A wants to send a 0
– Send chip pattern <-1,1,1,-1,1,-1> (complement of A's code)
• The decoder ignores other sources when using A's code to decode
– Orthogonal codes
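The chip-level arithmetic of this example is easy to reproduce; a sketch assuming perfect synchronization and no noise:

import numpy as np

# Orthogonal 6-chip user codes from the example above
CODES = {"A": np.array([1, -1, -1, 1, -1, 1]),
         "B": np.array([1, 1, -1, -1, 1, 1]),
         "C": np.array([1, 1, -1, 1, 1, -1])}

def spread(bit, code):
    return code if bit == 1 else -code       # 0 -> complement of the code

def despread(chips, code):
    corr = chips @ code                      # correlate with the user's code
    return 1 if corr > 0 else 0

# A sends 1 and B sends 0 simultaneously; the receiver decodes each user
# with the matching code, and the orthogonality cancels the other user.
rx = spread(1, CODES["A"]) + spread(0, CODES["B"])
print(despread(rx, CODES["A"]), despread(rx, CODES["B"]))   # 1 0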
CDMA for DSSS:

• n users, each using a different orthogonal PN sequence
• Modulate each user's data stream
– Using BPSK
• Multiply by the spreading code of the user
CDMA in a DSSS Environment:

15. Additional Topics:

Voice coders
Regenerative repeater
Feed back communications
Advancements in the digital communication
Signal space representation
Turbo codes

Voice coders
A vocoder ( short for voice encoder) is an analysis/synthesis system, used to reproduce
human speech. In the encoder, the input is passed through a multiband filter, each band is
passed through an envelope follower, and the control signals from the envelope followers are
communicated to the decoder. The decoder applies these (amplitude) control signals to
corresponding filters in the (re)synthesizer.
It was originally developed as a speech coder for telecommunications applications in the
1930s, the idea being to code speech for transmission. Its primary use in this fashion is for
secure radio communication, where voice has to be encrypted and then transmitted. The
advantage of this method of "encryption" is that no 'signal' is sent, but rather envelopes of the
bandpass filters. The receiving unit needs to be set up in the same channel configuration to
resynthesize a version of the original signal spectrum. The vocoder as
both hardware and software has also been used extensively as an electronic musical
instrument.
Whereas the vocoder analyzes speech, transforms it into electronically transmitted
information, and recreates it, The Voder (from Voice Operating Demonstrator) generates
synthesized speech by means of a console with fifteen touch-sensitive keys and a pedal,
basically consisting of the "second half" of the vocoder, but with manual filter controls,
needing a highly trained operator.
The human voice consists of sounds generated by the opening and closing of the glottis by
the vocal cords, which produces a periodic waveform with many harmonics. This basic sound
is then filtered by the nose and throat (a complicated resonant piping system) to produce
differences in harmonic content (formants) in a controlled way, creating the wide variety of
sounds used in speech. There is another set of sounds, known as
the unvoiced and plosive sounds, which are created or modified by the mouth in different
fashions.
The vocoder examines speech by measuring how its spectral characteristics change over time.
This results in a series of numbers representing these modified frequencies at any particular
time as the user speaks. In simple terms, the signal is split into a number of frequency bands
(the larger this number, the more accurate the analysis) and the level of signal present at each
frequency band gives the instantaneous representation of the spectral energy content. Thus,
the vocoder dramatically reduces the amount of information needed to store speech, from a
complete recording to a series of numbers. To recreate speech, the vocoder simply reverses
the process, processing a broadband noise source by passing it through a stage that filters the
frequency content based on the originally recorded series of numbers. Information about the
instantaneous frequency (as distinct from spectral characteristic) of the original voice signal
is discarded; it wasn't important to preserve this for the purposes of the vocoder's original use
as an encryption aid, and it is this "dehumanizing" quality of the vocoding process that has
made it useful in creating special voice effects in popular music and audio entertainment.
Since the vocoder process sends only the parameters of the vocal model over the
communication link, instead of a point by point recreation of the waveform, it allows a
significant reduction in the bandwidth required to transmit speech.

Modern vocoder implementations


Even with the need to record several frequencies, and the additional unvoiced sounds, the
compression of the vocoder system is impressive. Standard speech-recording systems capture
frequencies from about 500 Hz to 3400 Hz, where most of the frequencies used in speech lie,
typically using a sampling rate of 8 kHz (slightly greater than the Nyquist rate). The sampling
resolution is typically at least 12 or more bits per sample resolution (16 is standard), for a
final data rate in the range of 96-128 kbit/s. However, a good vocoder can provide a reasonably good simulation of voice with as little as 2.4 kbit/s of data.
'Toll Quality' voice coders, such as ITU G.729, are used in many telephone networks. G.729
in particular has a final data rate of 8 kbit/s with superb voice quality. G.723 achieves slightly
worse quality at data rates of 5.3 kbit/s and 6.4 kbit/s. Many voice systems use even lower
data rates, but below 5 kbit/s voice quality begins to drop rapidly.
Several vocoder systems are used in NSA encryption systems:

• LPC-10, FIPS Pub 137, 2400 bit/s, which uses linear predictive coding
• Code-excited linear prediction (CELP), 2400 and 4800 bit/s, Federal Standard 1016, used in STU-III
• Continuously variable slope delta modulation (CVSD), 16 kbit/s, used in wide band encryptors such as the KY-57
• Mixed-excitation linear prediction (MELP), MIL STD 3005, 2400 bit/s, used in the Future Narrowband Digital Terminal FNBDT, NSA's 21st century secure telephone
• Adaptive Differential Pulse Code Modulation (ADPCM), former ITU-T G.721, 32 kbit/s, used in the STE secure telephone
(ADPCM is not a proper vocoder but rather a waveform codec. ITU has gathered G.721
along with some other ADPCM codecs into G.726.)
Vocoders are also currently used in developing psychophysics, linguistics, computational
neuroscience and cochlear implant research.
Modern vocoders that are used in communication equipment and in voice storage devices
today are based on the following algorithms:

• Algebraic code-excited linear prediction (ACELP, 4.7 kbit/s – 24 kbit/s)[5]
• Mixed-excitation linear prediction (MELPe, 2400, 1200 and 600 bit/s)[6]
• Multi-band excitation (AMBE, 2000 bit/s – 9600 bit/s)[7]
• Sinusoidal-pulsed representation (SPR, 300 bit/s – 4800 bit/s)[8]
• Tri-wave excited linear prediction (TWELP, 2400 – 3600 bit/s)[9]
Linear prediction-based vocoders
Main article: Linear predictive coding

Since the late 1970s, most non-musical vocoders have been implemented using linear
prediction, whereby the target signal's spectral envelope (formant) is estimated by an all-
pole IIR filter. In linear prediction coding, the all-pole filter replaces the bandpass filter bank
of its predecessor and is used at the encoder to whiten the signal (i.e., flatten the spectrum)
and again at the decoder to re-apply the spectral shape of the target speech signal.
One advantage of this type of filtering is that the location of the linear predictor's spectral
peaks is entirely determined by the target signal, and can be as precise as allowed by the time
period to be filtered. This is in contrast with vocoders realized using fixed-width filter banks,
where spectral peaks can generally only be determined to be within the scope of a given
frequency band. LP filtering also has disadvantages in that signals with a large number of
constituent frequencies may exceed the number of frequencies that can be represented by the
linear prediction filter. This restriction is the primary reason that LP coding is almost always
used in tandem with other methods in high-compression voice coders.
RALCWI vocoder
Robust Advanced Low Complexity Waveform Interpolation (RALCWI) technology uses
proprietary signal decomposition and parameter encoding methods to provide high voice
quality at high compression ratios. The voice quality of RALCWI-class vocoders, as
estimated by independent listeners, is similar to that provided by standard vocoders running
at bit rates above 4000 bit/s. The Mean Opinion Score (MOS) of voice quality for this
Vocoder is about 3.5-3.6. This value was determined by a paired comparison method,
performing listening tests of developed and standard voice Vocoders
The RALCWI vocoder operates on a “frame-by-frame” basis. The 20ms source voice frame
consists of 160 samples of linear 16-bit PCM sampled at 8 kHz. The Voice Encoder performs
voice analysis at the high time resolution (8 times per frame) and forms a set of estimated
parameters for each voice segment. All of the estimated parameters are quantized to produce
41-, 48- or 55-bit frames, using vector quantization (VQ) of different types. All of the vector
quantizers were trained on a mixed multi-language voice base, which contains voice samples
in both Eastern and Western languages.
Waveform-Interpolative (WI) vocoder was developed in AT&T Bell Laboratories around
1995 by W.B. Kleijn, and subsequently a low- complexity version was developed by AT&T
for the DoD secure vocoder competition. Notable enhancements to the WI coder were made
at the University of California, Santa Barbara. AT&T holds the core patents related to WI,
and other institutes hold additional patents. Using these patents as a part of WI coder
implementation requires licensing from all IPR holders.
The product is the result of a co-operation between CML Microcircuits and SPIRIT DSP. The
co-operation combines CML’s 39-year history of developing mixed-signal semiconductors
for professional and leisure communication applications, with SPIRIT’s experience
in embedded voice products.

Regenerative repeater
Introduction of on-board regeneration alleviates the conflict between enhanced traffic
capacity and moderate system cost by reducing the requirements of the radio front-ends, by
simplifying the ground station digital equipment and the satellite communication payload in
TDMA and Satellite-Switched-TDMA systems. Regenerative satellite repeaters can be
introduced in an existing system with only minor changes at the ground stations. In cases
where one repeater can be dedicated to each station a more favorable arrangement of the
information data than in SS-TDMA can be conceived, which eliminates burst transmission
while retaining full interconnectivity among spot-beam areas.

ADVANCEMENTS IN DIGITAL COMMUNICATIONS

Novel Robust, Narrow-band PSK Modes for HF Digital Communications


The well-known Shannon-Hartley law tells us that there is an absolute limit on the error-free
bit rate that can be transmitted within a certain bandwidth at a given signal to noise ratio
(SNR). Although it is not obvious, this law can be restated (given here without proof) by
saying that for a given bit rate, one can trade off bandwidth and power. On this basis then, a
certain digital communications system could be either bandwidth limited or power limited,
depending on its design criteria.

Practice also tells us that digital communication systems designed for HF are necessarily
designed with two objectives in mind; slow and robust to allow communications with weak
signals embedded in noise and adjacent channel interference, or fast and somewhat subject to
failing under adverse conditions, however being able to best utilize the HF medium with
good prevailing conditions.

Given that the average amateur radio transceiver has limited power output (typically 20 - 100 Watts continuous duty), poor or restricted antenna systems, fierce competition for a free spot on the digital portion of the bands, adjacent channel QRM, QRN, and the marginal condition of the HF bands, it is evident that for amateur radio there is a greater need for a weak-signal, spectrally-efficient, robust digital communications mode, rather than another high-speed, wide-band communications method.

Recent Developments using PSK on HF

It is difficult to understand that true coherent demodulation of PSK could ever be achieved in
any non-cabled system since random phase changes would introduce uncontrolled phase
ambiguities. Presently, we have the technology to match and track carrier frequencies
exactly, however tracking carrier phase is another matter. As a matter of practicality thus, we
must revert to differentially coherent phase demodulation (DPSK).

Another practical matter concerns the symbol, or baud, rate; conventional RTTY runs at 45.45 baud (a symbol time of about 22 ms). This relatively long symbol time has been favored as being resistant to HF multipath effects, and the mode's robustness is commonly attributed to it. Symbol rate also plays an important part in determining spectral occupancy. In the case of a 45.45 baud RTTY waveform, the expected spectral occupancy is some 91 Hz for the major lobe, or ±45.45 Hz on each side of each of the two data tones. For a two-tone FSK signaling system using continuous-phase frequency-shift keying (CPFSK) with the tones spaced at 170 Hz, the signal would occupy approximately 261 Hz.

Signal space representation

• Bandpass signal

• Real-valued signal: S(f) ↔ S*(−f)

• Finite bandwidth B ↔ infinite time span

• f_c denotes the center frequency

• Negative frequencies contain no additional information

Characteristics of the complex baseband equivalent:
• Complex-valued signal

• No information loss; truly equivalent

Let us consider DN = {(xi, yi) : i = 1, ..., N}, i.i.d. realizations of the joint observation-class
phenomenon (X(u), Y(u)) with true probability measure PX,Y defined on (X × Y, σ(FX × FY)).
In addition, let us consider a family D of measurable representation functions, where any
f(·) ∈ D is defined on X and takes values in Xf. Let us assume that each representation
function f(·) induces an empirical distribution P^Xf,Y on (Xf × Y, σ(Ff × FY)), based on the
training data and an implicit learning approach, where the empirical Bayes classification rule
is given by: g^f(x) = arg max over y ∈ Y of P^Xf,Y(x, y).
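A minimal discrete sketch of this plug-in rule (the counting estimator and all names below are illustrative assumptions, not part of the source text): the joint measure is estimated by relative frequencies over the training pairs, and the classifier picks the label that maximizes the estimated joint mass:

from collections import Counter

def fit_joint(samples):
    # Empirical joint distribution P^(x, y) from i.i.d. (x, y) pairs
    n = len(samples)
    return {xy: c / n for xy, c in Counter(samples).items()}

def g_hat(p_hat, x, labels):
    # Empirical Bayes rule: g^_f(x) = argmax_y P^(x, y)
    return max(labels, key=lambda y: p_hat.get((x, y), 0.0))

train = [(0, 'a'), (0, 'a'), (0, 'b'), (1, 'b'), (1, 'b'), (1, 'a')]
p_hat = fit_joint(train)
print(g_hat(p_hat, 0, {'a', 'b'}))   # prints 'a'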

Turbo codes
In information theory, turbo codes (originally in French Turbocodes) are a class of high-
performance forward error correction (FEC) codes developed in 1993, which were the first
practical codes to closely approach the channel capacity, a theoretical maximum for the code
rate at which reliable communication is still possible given a specific noise level. Turbo
codes are finding use in 3G mobile communications and (deep
space) satellite communications as well as other applications where designers seek to achieve
reliable information transfer over bandwidth- or latency-constrained communication links in
the presence of data-corrupting noise. Turbo codes are nowadays competing with LDPC
codes, which provide similar performance.
Prior to turbo codes, the best constructions were serial concatenated codes based on an
outer Reed-Solomon error correction code combined with an inner Viterbi-decoded short
constraint length convolutional code, also known as RSV codes.
In 1993, turbo codes were introduced by Berrou, Glavieux, and Thitimajshima (from
Télécom Bretagne, former ENST Bretagne, France) in their paper: "Near Shannon Limit
Error-correcting Coding and Decoding: Turbo-codes" published in the Proceedings of IEEE
International Communications Conference.[1] In a later paper, Berrou gave credit to the
"intuition" of "G. Battail, J. Hagenauer and P. Hoeher, who, in the late 80s, highlighted the
interest of probabilistic processing.". He adds "R. Gallager and M. Tanner had already
imagined coding and decoding techniques whose general principles are closely related,"
although the necessary calculations were impractical at that time.[2]
The first class of turbo code was the parallel concatenated convolutional code (PCCC). Since
the introduction of the original parallel turbo codes in 1993, many other classes of turbo code
have been discovered, including serial versions and repeat-accumulate codes. Iterative Turbo
decoding methods have also been applied to more conventional FEC systems, including
Reed-Solomon corrected convolutional codes.
There are many different instantiations of turbo codes, using different component encoders,
input/output ratios, interleavers, and puncturing patterns. This example encoder
implementation describes a 'classic' turbo encoder, and demonstrates the general design of
parallel turbo codes.
This encoder implementation sends three sub-blocks of bits. The first sub-block is the m-bit
block of payload data. The second sub-block is n/2 parity bits for the payload data, computed
using a recursive systematic convolutional code (RSC code). The third sub-block is n/2 parity
bits for a known permutation of the payload data, again computed using an RSC
convolutional code. Thus, two redundant but different sub-blocks of parity bits are sent with
the payload. The complete block has m+n bits of data with a code rate of m/(m+n).
The permutation of the payload data is carried out by a device called an interleaver.
Hardware-wise, this turbo-code encoder consists of two identical RSC coders, C1 and C2, as
depicted in the figure, which are connected to each other using a concatenation scheme,
called parallel concatenation:
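A minimal sketch of this parallel concatenation in Python (assuming, for illustration only, a memory-2 RSC component with feedback polynomial 1 + D + D² and feedforward polynomial 1 + D², a common textbook choice, and a fixed toy interleaver; real turbo codes use long, carefully designed interleavers):

def rsc_parity(bits):
    # Parity stream of a memory-2 recursive systematic convolutional code
    # (feedback 1 + D + D^2, feedforward 1 + D^2).
    s1 = s2 = 0
    out = []
    for b in bits:
        a = b ^ s1 ^ s2       # recursive (feedback) input
        out.append(a ^ s2)    # feedforward combination
        s1, s2 = a, s1        # shift the register
    return out

def turbo_encode(bits, interleaver):
    # Rate-1/3 parallel concatenation: systematic bits plus two parity streams.
    p1 = rsc_parity(bits)                            # parity on the natural order
    p2 = rsc_parity([bits[i] for i in interleaver])  # parity on the permuted order
    return bits, p1, p2

msg = [1, 0, 1, 1, 0, 0, 1, 0]
perm = [3, 7, 0, 5, 2, 6, 1, 4]   # toy interleaver permutation
x, y1, y2 = turbo_encode(msg, perm)
print(x, y1, y2)

With m message bits this sends m systematic bits and 2m parity bits, i.e. the m + n structure described above with n = 2m and code rate m/(m + n) = 1/3 before any puncturing.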

In the figure, M is a memory register. The delay line and interleaver force the input bits dk to
appear in different sequences. At the first iteration, the input sequence dk appears at both
outputs of the encoder, xk and y1k or y2k, due to the encoder's systematic nature. If the
encoders C1 and C2 are used respectively in n1 and n2 iterations, their rates are respectively
equal to

R1 = (n1 + n2)/(2n1 + n2),  R2 = (n1 + n2)/(n1 + 2n2).
The decoder
The decoder is built in a similar way to the above encoder: two elementary decoders, DEC1
and DEC2, are interconnected, but in a serial way, not in parallel. DEC1 operates at the lower
rate R1; it is intended for the C1 encoder, and DEC2 for C2 correspondingly. DEC1 yields a
soft decision, which causes a delay L1; the same delay is caused by the delay line in the
encoder. DEC2's operation causes a delay L2.
An interleaver installed between the two decoders is used here to scatter error bursts
coming from the DEC1 output. The DI block is a demultiplexing and insertion module. It
works as a switch, redirecting input bits to DEC1 at one moment and to DEC2 at
another. In the OFF state, it feeds both y1k and y2k inputs with padding bits (zeros).
Consider a memoryless AWGN channel, and assume that at the k-th iteration the
decoder receives a pair of random variables

xk = (2dk - 1) + ak,    yk = (2Yk - 1) + bk,

where ak and bk are independent noise components having the same variance σ², and Yk is
the k-th bit from the encoder output.
Redundant information is demultiplexed and sent
through DI to DEC1 (when yk = y1k) and to DEC2 (when yk = y2k).
DEC1 yields a soft decision, i.e. the log-likelihood ratio

Λ(dk) = ln [ p(dk = 1) / p(dk = 0) ],

and delivers it to DEC2. Λ(dk) is called the logarithm of the likelihood
ratio (LLR); p(dk = i), i ∈ {0, 1}, is the a posteriori probability (APP) of the data bit dk,
which shows the probability of interpreting a received bit dk as i. Taking the LLR into
account, DEC2 yields a hard decision, i.e. a decoded bit.
It is known that the Viterbi algorithm is unable to calculate the APP, thus it cannot be
used in DEC1. Instead of that, a modified BCJR algorithm is used. For DEC2, the Viterbi
algorithm is an appropriate one.
However, the depicted structure is not an optimal one, because DEC1 uses
only a proper fraction of the available redundant information. In order to improve the
structure, a feedback loop is used (see the dotted line on the figure).
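As a small self-contained illustration of the soft information such decoders exchange (a sketch assuming antipodal ±1 signaling over AWGN, for which the channel LLR is the standard result 2x/σ²; the received values below are made up):

def channel_llr(x, sigma2):
    # LLR of a bit sent as -1/+1 over AWGN with noise variance sigma2:
    # ln p(x | d = 1) - ln p(x | d = 0) = 2 * x / sigma2
    return 2.0 * x / sigma2

sigma2 = 0.5
for x in (0.9, -0.3, 1.4, -1.1):
    llr = channel_llr(x, sigma2)
    print("x = %+.1f  LLR = %+.2f  hard decision = %d" % (x, llr, llr > 0))

A positive LLR favors a transmitted 1, a negative one favors 0, and its magnitude is the reliability that the iterative feedback loop refines from pass to pass.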
16. Question papers:

B. Tech III Year II Semester Examinations, April/May - 2012


DIGITAL COMMUNICATIONS
(ELECTRONICS AND COMMUNICATION ENGINEERING)
Time: 3 hours Max. Marks: 75
Answer any five questions
All questions carry equal marks
---
1. a) Discuss the advantages and disadvantages of digital communication system.
b) State and prove sampling theorem in time domain. [15]
2. a) With a relevant diagram, describe the operation of DPCM system.
b) A TV signal with a bandwidth of 4.2 MHz is transmitted using binary PCM. The number
of representation level is 512. Calculate:
i) Code word length ii) Final bit rate iii) Transmission bandwidth. [15]
3. a) What are different digital modulation techniques available? Compare them with regard
to the probability error.
b) Draw the block diagram of DPSK modulator and explain how synchronization problem is
avoided for its detection. [15]
4. a) Draw the block diagram of a baseband signal receiver and explain.
b) What is an Eye pattern? Explain.
c) What is matched filter? Derive the expression for its output SNR. [15]
5. a) State and prove the condition for entropy to be maximum.
b) Prove that H(Y/X) ≤ H(Y) with equality if and only if X and Y are independent.
[15]
6. a) Explain the advantages and disadvantages of cyclic codes.
b) Construct the (7, 4) linear code word for the generator polynomial G(D) = 1 + D² + D³ for
the message bits 1001 and find the checksum for the same.
[15]
7. a) Briefly describe the Viterbi algorithm for maximum-likelihood decoding of
convolutional codes.
b) For the convolutional encoder shown in figure7, draw the state diagram and the trellis
diagram.
8. a) Explain how PN sequences are generated. What are maximal-length sequences? What
are their properties and why are they preferred?
b) With the help of a neat block diagram, explain the working of a DS spread spectrum
based CDMA system. [15]
B. Tech III Year II Semester Examinations, April/May - 2012
DIGITAL COMMUNICATIONS
(ELECTRONICS AND COMMUNICATION ENGINEERING)
Time: 3 hours Max. Marks: 75
Answer any five questions
All questions carry equal marks
---
1. a) What is natural sampling? Explain it with sketches.
b) Specify the Nyquist rate and Nyquist intervals for each of the following signals
i) x(t) = sinc(200t) ii) x(t) = sinc²(200t) iii) x(t) = sinc(200t) + sinc²(200t).
[15]
2. a) Derive an expression for signal to quantization noise ratio of a PCM encoder using
uniform quantizer when the input signal is uniformly distributed.
b) A PCM system uses a uniform quantizer followed by a 7 bit binary encoder. The bit rate of
the system is equal to 50 × 10⁶ bits/sec.
i) What is the maximum message bandwidth?
ii) Determine the signal to quantization noise ratio when fm = 1 MHz is applied.
[15]
3. a) Draw the correlation receiver structure for QPSK signal and explain its working
principle.
b) Write the power spectral density of BPSK and QPSK and draw the power spectrum of
each. [15]
4. a) Draw the block diagram of baseband communication receiver and explain the
importance of each block.
b) What is matched filter?
c) Represent BFSK system using signal space diagram. What are the conclusions one can
make with that of BPSK system? [15]
5. a) Define and explain the following.
i) Information
ii) Efficiency of coding
iii) Redundancy of coding.
b) Prove that H(X,Y) = H(X) + H(Y/X) = H(Y) + H(X/Y). [15]
6. a) Explain the principle and operation of encoder for Hamming code.
b) An error control code has the following parity check matrix:
H =
1 0 1 1 0 0
1 1 0 0 1 0
0 1 1 0 0 1
i) Determine the generator matrix ‘G’
ii) Decode the received code word 110110. Comment on error detection capability of this code. [15]
7. a) Explain how you would draw the trellis diagram of a convolutional encoder given its
state diagrams.
b) For the convolutional encoder shown in figure7, draw the state diagram and the trellis
diagram. [15]
Figure: 7
8. a) What are the advantages of spread spectrum technique.
b) Compare direct sequence spread spectrum and frequency hopped spread spectrum
techniques and draw the important features of each. [15]

B. Tech III Year II Semester Examinations, April/May - 2012


DIGITAL COMMUNICATIONS
(ELECTRONICS AND COMMUNICATION ENGINEERING)
Time: 3 hours Max. Marks: 75
Answer any five questions
All questions carry equal marks
---
1. a) State and discuss the Hartley-Shannon law.
b) The terminal of a computer used to enter alphanumeric data is connected to the computer
through a voice grade telephone line having a usable bandwidth of 3 KHz and an output SNR
of 10 dB. Determine:
i) The capacity of the channel
ii) The maximum rate at which data can be transmitted from the terminal to the computer
without error.
Assume that the terminal has 128 characters and that the data sent from the terminal consists
of independent sequences of characters with equal probability.
[15]
2. a) What is hunting in delta modulation? Explain.
b) Differentiate between granular and slope overload noise.
c) A signal band limited within 3.6 KHz is to be transmitted via binary PCM on a channel
whose maximum pulse rate is 40,000 pulses/sec. Design a PCM system and draw a block
diagram showing all parameters. [15]
3. a) Derive an expression for the spectrum of BFSK and sketch the same.
b) Explain operation of differentially encoded PSK system. [15]
4. a) What is an inter symbol interference in base band binary PAM system? Explain.
b) Give the basic components of a filter in baseband data transmission and explain.
[15]
5. a) State the significance of H(Y/X) and H(X/Y).
b) Given six messages x1, x2, x3, x4, x5, x6 with probabilities P(x1) = 1/3, P(x2) = 1/4,
P(x3) = 1/8, P(x4) = 1/8, P(x5) = 1/12, P(x6) = 1/12, find the Shannon-Fano code.
Evaluate the coding efficiency. [15]
6. a) State and explain the properties of cyclic codes.
b) The generator polynomial of a (7, 4) cyclic code is x³ + x + 1. Construct the generator
matrix for a systematic cyclic code and find the code word for the message (1101)
using the generator matrix. [15]
7. a) What is a convolutional code? How is it different from a block code?
b) Find the generator matrix G(D) for the (2, 1, 2) convolutional encoder of figure shown 7.
[15]
Figure: 7
8. a) What are the PN sequences? Discuss their characteristics.
b) What are the two basic types of spread-spectrums systems? Explain the basic principle of
each of them. [15]
B. Tech III Year II Semester Examinations, April/May - 2012
DIGITAL COMMUNICATIONS
(ELECTRONICS AND COMMUNICATION ENGINEERING)
Time: 3 hours Max. Marks: 75
Answer any five questions
All questions carry equal marks
---
1. a) What is Nyquist rate and Nyquist interval?
b) What is aliasing and how it is reduced?
c) A band limited signal x(t) is sampled by a train of rectangular pulses of width τ and period
T.
i) Find an expression for the sampled signal.
ii) Determine the spectrum of the sampled signal and sketch it. [15]
2. a) What is Companding? Explain how Companding improves the SNR of a PCM system?
b) The input to a PCM system and a Delta Modulation (DM) system is a sine wave of 4 KHz.
PCM and DM are both designed to yield an output SNR of 30 dB, with PCM sampling at 5
times the Nyquist rate. Compare the bandwidth required for each system. [15]
3. a) Draw the block diagram of QPSK system and explain its working.
b) Derive an expression for the probability of error for PSK. [15]
4. a) What is an optimum receiver? Explain it with suitable derivation.
b) Describe briefly baseband M-ary PAM system. [15]
5. a) Explain the importance of source coding.
b) Apply Huffman’s encoding procedure to the following message ensemble and determine
the average length of the encoded message:
X = {x1, x2, x3, x4, x5, x6, x7, x8, x9, x10}
P(X) = {0.18, 0.17, 0.16, 0.15, 0.10, 0.08, 0.05, 0.05, 0.04, 0.02}
The encoding alphabet is {D} = {0, 1, 2, 3}. [15]
6. a) What is a systematic block code?
b) What is a syndrome vector? How is it useful?
c) A linear (n, k) block code has a generator matrix
G =
1 0 1 1
0 1 1 0
i) Find all its code words
ii) Find its H matrix
iii) What is the minimum distance of the code and its error correcting capacity?
[15]
7. a) What is a convolutional code?
b) What is meant by the constraint length of a convolutional encoder?
c) A convolutional encoder has a single shift register with two stages (i.e., constraint length
K = 3), three mod-2 adders and an output multiplexer. The generator sequences of the
encoder are as follows:
g(1) = (1, 0, 1), g(2) = (1, 1, 0), g(3) = (1, 1, 1).
Draw the block diagram of the encoder. [15]
8. a) What are the advantages of spread – spectrum communication.
b) What are PN sequences? Discuss their characteristics.
c) Explain the principle of direct sequence spread spectrum. [15]
17. Question Bank
1. (a) State and prove the sampling theorem for band pass signals.
(b) A signal m(t) = cos(200πt) + 2cos(320πt) is ideally sampled at fs = 300 Hz. If the
sampled signal is passed through a low pass filter with a cutoff frequency of 250 Hz, what
frequency components will appear in the output? [6+10]

2. (a) Explain with a neat block diagram the operation of a continuously variable slope delta
modulator (CVSD).
(b) Compare Delta modulation with Pulse code modulation technique. [8+8]

3. (a) Assume that 4800bits/sec. random data are sent over a band pass channel by BFSK
signaling scheme. Find the transmission bandwidth BT such that the spectral envelope is
down at least 35dB outside this band.
(b) Write the comparisons among ASK, PSK, FSK and DPSK. [8+8]

4. (a) What is meant by ISI? Explain how it differs from cross talk in the PAM.
(b) What is the ideal solution to obtain zero ISI and what is the disadvantage of this solution.
[6+10]

5. A code is composed of dots and dashes. Assume that a dash is 3 times as long as a
dot and has one-third the probability of occurrence.
Calculate
(a) the information in a dot and that in a dash.
(b) the average information in the dot-dash code.
(c) Assuming that a dot lasts for 10 ms and that this same time interval is allowed between
symbols, calculate the average rate of information. [16]

6. Explain Shannon-Fano algorithm with an example. [16]

7. Explain about block codes in which each block of k message bits encoded into block of
n>k bits with an example. [16]

8. Sketch the Tree diagram of convolutional encoder shown in figure 8 with Rate= 1/2,
constraint length L = 2. [16]

9. (a) State and prove the sampling theorem for band pass signals.
(b) A signal m(t) = cos(200πt) + 2cos(320πt) is ideally sampled at fs = 300 Hz. If the
sampled signal is passed through a low pass filter with a cutoff frequency of 250 Hz, what
frequency components will appear in the output? [6+10]

10. (a) Derive an expression for channel noise and quantization noise in DM system.
(b) Compare DM and PCM systems. [10+6]

11. (a) Draw the signal space representation of MSK.
(b) Show that in an MSK signaling scheme, the carrier frequency is an integral multiple of
fb/4, where fb is the bit rate.
(c) Bring out the comparisons between MSK and QPSK.
12. (a) Derive an expression for error probability of non-coherent ASK scheme.
(b) Binary data is transmitted over an RF band pass channel with a usable bandwidth of
10 MHz at a rate of 4.8 × 10⁶ bits/sec using an ASK signaling method. The carrier amplitude
at the receiver antenna is 1 mV and the noise power spectral density at the receiver input is
10⁻¹⁵ W/Hz.
i. Find the error probability of a coherent receiver.
ii. Find the error probability of a non-coherent receiver. [8+8]

13. Figure 5 illustrates a binary erasure channel with the transition probabilities
P(0|0) = P(1|1) = 1 - p and P(e|0) = P(e|1) = p. The probabilities for the input
symbols are P(X=0) = a and P(X=1) = 1 - a.
Determine the average mutual information I(X; Y) in bits. [16]

14. Show that H(X, Y) = H(X) + H(Y |X) = H(Y ) + H(X|Y ). [16]

15. Explain about block codes in which each block of k message bits encoded into block of
n>k bits with an example. [16]

16. Explain various methods for describing Conventional Codes. [16]

17. The probability density function of the sampled values of an analog signal is shown in
figure 1.
(a) Design a 4 - level uniform quantizer.
(b) Calculate the signal power to quantization noise power ratio.
(c) Design a 4 - level minimum mean squared error non - uniform quantizer.
[6+4+6]

18. A DM system is tested with a 10kHz sinusoidal signal, 1V peak to peak at the input. The
signal is sampled at 10times the Nyquist rate.
(a) What is the step size required to prevent slope overload and to minimize the granular
noise.
(b) What is the power spectral density of the granular noise?
(c) If the receiver input is band limited to 200kHz, what is the average (S/NQ).
[6+5+5]

19. (a) Write down the modulation waveform for transmitting binary information over base
band channels, for the following modulation schemes: ASK, PSK, FSK and DPSK.
(b) What are the advantages and disadvantages of digital modulation schemes?
(c) Discuss base band transmission of M-ary data. [4+6+6]

20. (a) Draw the block diagram of band pass binary data transmission system and explain
each block.
b) A band pass data transmitter uses a PSK signaling scheme with
s1(t) = -A cos(ωct), 0 ≤ t ≤ Tb
s2(t) = +A cos(ωct), 0 ≤ t ≤ Tb
where Tb = 0.2 msec and ωc = 10π/Tb.
The carrier amplitude at the receiver input is 1 mV and the power spectral density of the
additive white Gaussian noise at the input is 10⁻¹¹ W/Hz. Assume that an ideal correlation
receiver is used. Calculate the average bit error rate of the receiver. [8+8]
21. A Discrete Memoryless Source (DMS) has an alphabet of eight letters xi, i = 1, 2, ..., 8,
each occurring with probabilities 0.15, 0.30, 0.25, 0.15, 0.10, 0.08, 0.05, 0.05.
(a) Determine the entropy of the source and compare it with N.
(b) Determine the average number N of binary digits per source letter. [16]

22. (a) Discuss the bandwidth limits of the Shannon-Hartley theorem.


(b) What is an Ideal system? What kind of method is proposed by Shannon for an Ideal
system? [16]

23. Explain about block codes in which each block of k message bits encoded into block of
n>k bits with an example. [16]

24. Sketch the Tree diagram of convolutional encoder shown in figure 8 with Rate= 1/2,
constraint length L = 2. [16]

25. (a) State sampling theorem for low pass signals and band pass signals.
(b) What is aliasing effect? How it can be eliminated? Explain with neat diagram.
[4+4+8]

26. (a) Derive an expression for channel noise and quantization noise in DM system.
(b) Compare DM and PCM systems. [10+6]

27. Explain the design and analysis of M-ary signaling schemes. List the waveforms in
quaternary schemes. [16]

28. (a) Derive an expression for error probability of coherent PSK scheme.
(b) In a binary PSK scheme using a correlator receiver, the local carrier waveform is
A cos(ωct + θ) instead of A cos(ωct) due to poor carrier synchronization. Derive an expression
for the error probability and compute the increase in error probability when θ = 15° and
[A²Tb/η] = 10. [8+8]

29. Consider transmitting messages Q1, Q2, Q3 and Q4 by the symbols 0, 10, 110, 111.
(a) Is the code uniquely decipherable? That is, for every possible sequence, is there only one
way of interpreting the message?
(b) Calculate the average number of code bits per message. How does it compare with
H = 1.8 bits per message? [16]

30. Show that if X and Y are independent, then H(X, Y) = H(X) + H(Y) and H(X/Y) = H(X). [16]

31. Explain about block codes in which each block of k message bits encoded into block of
n>k bits with an example. [16]

32. Explain various methods for describing Conventional Codes. [16]


18. Assignment topics

Unit 1:

1. Certain issues of digital transmission,


2. advantages of digital communication systems,
3. Bandwidth- S/N trade off, and Sampling theorem
4. PCM generation and reconstruction
5. Quantization noise, Differential PCM systems (DPCM)
6. Delta modulation,

Unit 2:

1. Coherent ASK detector and non-Coherent ASK detector


2. Coherent FSK detector BPSK
3. Coherent PSK detection

1. A Base band signal receiver,


2. Different pulses and power spectrum densities
3. Probability of error

Unit 3:

1. Conditional entropy and redundancy,


2. Shannon Fano coding
3. Mutual information.
4. Matrix description of linear block codes
5. Error detection and error correction capabilities of linear block codes

Unit 4:

1. Encoding,
2. decoding using state Tree and trellis diagrams
3. Decoding using Viterbi algorithm

Unit 5:

1. Use of spread spectrum


2. direct sequence spread spectrum(DSSS),
3. Code division multiple access
4. Ranging using DSSS Frequency Hopping spread spectrum
19. Unit wise Bits

CHOOSE THE CORRECT ANSWER

1. A source is transmitting six messages with probabilities 1/2, 1/4, 1/8, 1/16, 1/32 and 1/32. Then

(a) Source coding improves the error performance of the communication system.

(b) Channel coding will reduce the average source code word length.

(c) Two different source codeword sets can be obtained using Huffman coding.

(d) Two different source codeword sets can be obtained using Shannon-Fano coding

2. A memoryless source emits 2000 binary symbols/sec and each symbol has a probability of
0.25 to be equal to 1 and 0.75 to be equal to 0. The minimum number of bits/sec required for
error free transmission of this source is

(a) 1500

(b) 1734

(c) 1885

(d) 1622

3. A system has a bandwidth of 3 KHz and an S/N ratio of 29 dB at the input of the receiver.
If the bandwidth of the channel gets doubled, then

(a) its capacity gets doubled

(b) its capacity gets halved

(c) the corresponding S/N ratio gets doubled

(d) the corresponding S/N ratio gets halved

5.The capacity of a channel with infinite bandwidth is

(a) finite because of increase in noise power

(b) finite because of finite message word length

(c) infinite because of infinite noise power

(d) infinite because of infinite bandwidth


6. Which of the following is correct?

(a) Channel coding is an efficient way of representing the output of a source

(b) ARQ scheme of error control is applied after the receiver makes a decision about the received bit

(c) ARQ scheme of error control is applied when the receiver is unable to make a decision
about the received bit.

(d) Source coding introduces redundancy

7. Which of the following is correct?

(a) Source encoding reduces the probability of transmission errors

(b) In an (n,k) systematic cyclic code, the sum of two code words is another codeword of the
code.

(c) In a convolutional encoder, the constraint length of the encoder is equal to the tail of the
message sequence+ 1.

(d) In an (n, k) block code, each code word is the cyclic shift of another code word of the code.

8. Automatic Repeat Request is a

(a) error correction scheme

(b) Source coding scheme

(c) error control scheme

(d) data conversion scheme

9. The fundamental limit on the average number of bits/source symbol is

(a) Channel capacity

(b) Entropy of the source

(c) Mutual Information

(d) Information content of the message

10. The memory length of a convolutional encoder is 5. If a 6 bit message sequence is
applied as the input to the encoder, then for the last message bit to come out of the encoder,
the number of extra zeros to be applied to the encoder is

(a) 6

(b) 4

(c) 3
(d) 5

Answers

1.C

2.D

3.D

4.A

5.A

6.C

7.C

8.C

9.B

10.D

Unit 2
CHOOSE THE CORRECT ANSWER

1. The cascade of two Binary Symmetric Channels is a

(a) symmetric Binary channel

(b) asymmetric Binary channel

(c) asymmetric quaternary channel

(d) symmetric quaternary channel

2. Which of the following is correct?

(a) Source coding introduces redundancy

(b) ARQ scheme of error control is applied after the receiver makes a decision about the
received bit

(c) Channel coding is an efficient way of representing the output of a source

(d) ARQ scheme of error control is applied when the receiver is unable to make a decision
about the received bit.

3. A linear block code with Hamming distance 5 is

(a) Triple error correcting code


(b) Single error correcting and double error detecting code

(c) double error detecting code

(d) Double error correcting code

4. In a Linear Block code

(a) the encoder satisfies superposition principle

(b) the communication channel is a linear system

(c) parity bits of the code word are the linear combination of the message bits

(d) the received power varies linearly with that of the transmitted power

5. The fundamental limit on the average number of bits/source symbol is

(a) Channel capacity

(b) Information content of the message

(c) Mutual Information

(d) Entropy of the source

6. Which of the following involves the effect of the communication channel?

(a) Entropy of the source

(b) Information content of a message

(c) Mutual information

(d) information rate of the source

7. Which of the following provides the facility to recognize the error at the receiver?

(a) Shannon-Fano Encoding

(b) differential encoding

(c) Parity Check codes

(d) Huffman encoding

8. A system has a bandwidth of 3 KHz and an S/N ratio of 29 dB at the input of the receiver.
If the bandwidth of the channel gets doubled, then

(a) the corresponding S/N ratio gets doubled


(b) its capacity gets doubled

(c) its capacity gets halved

(d) the corresponding S/N ratio gets halved

9. Information rate of a source can be used to

(a) design the matched filter for the receiver

(b) differentiate between two sources

(c) correct the errors at the receiving side

(d) to find the entropy in bits/message of a source

10. In a communication system, the average amounts of uncertainty associated with the
source, the sink, and the source and sink jointly are 1.0613, 1.5 and 2.432 bits/message
respectively. Then the information transferred by the channel connecting the source and
sink in bits is

(a) 1.945

(b) 4.9933

(c) 2.8707

(d) 0.1293

11. A BSC has a transition probability of P. The cascade of two such channels is

(a) a symmetric channel with transition probability 2P(1-P)

(b) an asymmetric channel with transition probability 2P

(c) an asymmetric channel with transition probability (1-P)

(d) a symmetric channel with transition probability P²

Answers

1.A

2.D

3.D

4.C

5.D

6. C

7. C
8.D

9.D

10.D

11.A

Unit 2

CHOOSE THE CORRECT ANSWER

1. Information rate of a source is

(a) maximum when the source is continuous

(b) the entropy of the source measured in bits/message

(c) a measure of the uncertainty of the communication system

(d) the entropy of the source measured in bits/sec.

2. A Field is

(a) a group with 0 as the multiplicative identity for its members

(b) a group with 0 as the additive inverse for its members

(c) a group with 1 as the additive identity for its members

(d) an Abelian group under addition

3. Under error free reception, the syndrome vector computed for the received cyclic code
word consists of

(a) all ones

(b) alternate 0‘s and 1‘s starting with a 0

(c) all zeros

(d) alternate 1‘s and 0‘s starting with a 1

4. The memory length of a convolutional encoder is 3. If a 5 bit message sequence is applied
as the input to the encoder, then for the last message bit to come out of the encoder,
the number of extra zeros to be applied to the encoder is

(a) 5

(b) 4

(c) 3
(d) 6

5. The cascade of two binary symmetric channels Is a

(A) Symmetric binary channel

(B) Symmetric quaternary channel

(C) Asymmetric quaternary channel

(D) Asymmetric binary channel

6. There are four binary words given as 0000, 0001, 0011, 0111. Which of these cannot be a
member of the parity check matrix of a (15,11) linear Block code?

(a) 0011

(b) 0000,0001

(c) 0000

(d) 0111

7. The encoder of a (7,4) systematic cyclic encoder with generating polynomial
g(x) = 1 + x² + x³ is basically a

(a) 3 stage shift register

(b) 22 stage shift register

(c) 11 stage shift register

(d) 4 stage shift register

8. A source X with entropy 2 bits/message is connected to the receiver Y through a noise
free channel. The conditional entropy of the source given the receiver is H(X/Y) and the
joint entropy of the source and the receiver is H(X,Y). Then

(a) H(X/Y)=2 bits/message

(b) H(X,Y)=2bits/message

(c) H(X/Y)=1bit/message

(d) H(X,Y)=0bits/message

9. A system has a bandwidth of 4KHz and an S/N ratio of 28 at the input to the Receiver. If
the bandwidth of the channel is doubled ,then

(a) S/N ratio at the input of the received gets halved


(b) Capacity of the channel gets doubled

(c) Capacity of the channel gets squared

(d) S/N ratio at the input of the received gets doubled

10. The Parity Check Matrix of a(6,3) Systematic Linear Block code is

101100

011010

110001

If the Syndrome vector computed for the received code word is [010], then for error
correction, which of the bits of the received code word is to be complemented?

(a) 3
(b) 4
(c) 5
(d) 2

Answers

1.D

2.D

3.C

4.C

5.A

6.C

7.A

8.B

9.A

10.C

Unit 3
CHOOSE THE CORRECT ANSWER

1. The minimum number of bits per message required to encode the output of a source
transmitting four different messages with probabilities 0.5, 0.25, 0.125 and 0.125 is

(a) 1.5

(b) 1.75

(c) 2

(d) 1

2. A Binary Erasure channel has P(0/0) = P(1/1) = p; P(k/0) = P(k/1) = q. Its capacity in
bits/symbol is

(a) p/q

(b) pq

(c) p

(d) q

3. The syndrome S(x) of a cyclic code is given by the remainder of the division
[{V(x) + E(x)}/g(x)], where V(x) is the transmitted code polynomial, E(x) is the error
polynomial and g(x) is the generator polynomial. S(x) is also equal to

(a) Remainder of [V(x).E(x)]/g(x)

(b) Remainder of V(x)/g(x)

(c) Remainder of E(x)/g(x)

(d) Remainder of g(x)/V(x)

5. The output of a continuous source is a uniform random variable in the range 0 ≤ x ≤ 4.
The entropy of the source in bits/sample is

(a) 2

(b) 8

(c) 4

(d) 1

6. In a (6,3) systematic Linear Block code, the number of 6-bit code words that are not
useful is

(a) 45

(b) 64

(c) 8
(d) 56

7. The output of a source is band limited to 6 KHz. It is sampled at a rate 2 KHz above the
Nyquist rate. If the entropy of the source is 2 bits/sample, then the entropy of the source
in bits/sec is

(a) 12Kbps

(b) 32Kbps

(c) 28Kbps

(d) 24Kbps

8. The channel capacity of a BSC with transition probability ½ is

(a) 2bits

(b) 0bits

(c) 1bit

(d) infinity

9. A communication channel is fed with an input signal x(t) and the noise in the channel is
additive. The power received at the receiver input is

(a) Signal power-Noise power

(b) Signal power +Noise Power

(c) Signal power x Noise Power

(d) Signal power /Noise power

10. White noise of PSD η/2 is applied to an ideal LPF with a one-sided bandwidth of B Hz.
The filter provides a gain of 2. If the output power of the filter is 8η, then the value of B
in Hz is

(a) 8

(b) 2

(c) 6

(d) 4

Answers

1.B

2.C

3.C

5.A

6.D

7.C

8.B

9.B

10.B

Unit 4

CHOOSE THE CORRECT ANSWER

1. Which of the following is correct?

(a) The syndrome of a received Block coded word depends on the received code word

(b) The syndrome for a received Block coded word under error free reception consists of all 1‘s.

(c) The syndrome of a received Block coded word depends on the transmitted code word.

(d) The syndrome of a received Block coded word depends on the error pattern

2. A Field is

(a) a group with 0 as the multiplicative identity for its members

(b) a group with 0 as the additive inverse for its members

(c) a group with 1 as the additive identity for its members

(d) an Abelian group under addition

3. Variable length source coding provides better coding efficiency, if all the messages of the
source are

(a) Equiprobable

(b) continuously transmitted


(c) discretely transmitted

(d) with different transmission probability

4. Which of the following is correct?

(a) FEC and ARQ are not used for error correction

(b) ARQ is used for error control after the receiver makes a decision about the received bit

(c) FEC is used for error control when the receiver is unable to make a decision about the received bit

(d) FEC is used for error control after the receiver makes a decision about the received bit

5. The source coding efficiency can be increased by

(a) using source extension

(b) decreasing the entropy of the source

(c) using binary coding

(d) increasing the entropy of the source

6. A discrete source X is transmitting m messages and is connected to the receiver Y through
a symmetric channel. The capacity of the channel is given as

(a) log m bits/symbol

(b) H(X)+H(Y)-H(X,Y) bits/symbol

(c) log m-H(X/Y) bits/symbol

(d) log m-H(Y/X) bits/symbol

7. The time domain behavior of a convolutional encoder of code rate 1/3 is defined in terms
of a set of

(a) 3 ramp responses

(b) 3 step responses

(c) 3 sinusoidal responses

(d) 3 impulse responses

8. A source X with entropy 2 bits/message is connected to the receiver Y through a noise
free channel. The conditional entropy of the source given the receiver is H(X/Y) and the
joint entropy of the source and the receiver is H(X,Y). Then

(a) H(X,Y)=2bits/message
(b) H(X/Y)=1bit/message

(c) H(X,Y)=0bits/message

(d) H(X/Y)=2bits/message

9. The fundamental limit on the average number of bits/source symbol is

(a) Mutual Information

(b) Entropy of the source

(c) Information content of the message

(d) Channel capacity

10. The memory length of a convolutional encoder is 5. If a 6 bit message sequence is
applied as the input to the encoder, then for the last message bit to come out of the encoder,
the number of extra zeros to be applied to the encoder is

(a) 4

(b) 6

(c) 3

(d) 5

Answers

1.D

2.D

3.D

4.D

5.A

6.D

7.D

8.A

9.B

10.D
Unit 5

CHOOSE THE CORRECT ANSWER

1. If ‘a‘ is an element of a Field ‘F‘, then its additive inverse is

(a) -a

(b) 0

(c) a

(d) 1

2. Relative to Hard decision decoding, soft decision decoding results in

(a) better coding gain

(b) lesser coding gain

(c) less circuit complexity

(d) better bit error probability

3. Under error free reception, the syndrome vector computed for the received cyclic
codeword consists of

(a) alternate 0‘s and 1‘s starting with a 0

(b) all zeros

(c) all ones

(d) alternate 1‘s and 0‘s starting with a 1

4. Error free communication may be possible by

(a) increasing transmission power to the required level

(b) providing redundancy during transmission

(c) increasing the channel bandwidth

(d) reducing redundancy during transmission

5. A discrete source X is transmitting m messages and is connected to the receiver Y through
a symmetric channel. The capacity of the channel is given as

(a) H(X)+H(Y)-H(X,Y)bits/symbol

(b) log m-H(X/Y)bits/symbol

(c) log m-H(Y/X)bits/symbol


(d) log m bits/symbol

6. The encoder of a (7,4) systematic cyclic encoder with generating polynomial
g(x) = 1 + x² + x³ is basically a

(a) 11 stage shift register

(b) 4 stage shift register

(c) 3 stage shift register

(d) 22 stage shift register

7. A channel with independent input and output acts as

(a) Gaussian channel

(b) channel with maximum capacity

(c) lossless network

(d) resistive network

8. A system has a bandwidth of 4 KHz and an S/N ratio of 28 at the input to the receiver.
If the bandwidth of the channel is doubled, then

(a) S/N ratio at the input of the received gets halved

(b) Capacity of the channel gets doubled

(c) S/N ratio at the input of the received gets doubled

(d) Capacity of the channel gets squared

9. A source is transmitting four messages with equal probability. Then, for optimum source
coding efficiency,

(a) necessarily, variable length coding schemes should be used

(b) Variable length coding schemes need not necessarily be used

(c) Convolutional codes should be used

(d) Fixed length coding schemes should not be used

10. The maximum average amount of information content measured in bits/sec associated
with the output of a discrete Information source transmitting 8 messages and 2000
messages/sec is

(a) 16Kbps

(b) 4Kbps
(c) 3Kbps

(d) 6Kbps

Answers

1.A

2.A

3.B

4.B

5.C

6.C

7.D

8.A

9.B

10. D

Unit 5

CHOOSE THE CORRECT ANSWER

1. Two binary random variables X and Y are distributed according to the joint distribution
given as P(X=Y=0) = P(X=0, Y=1) = P(X=Y=1) = 1/3. Then,

(a) H(X)+H(Y)=1.

(b) H(Y)=2.H(X)

(c) H(X)=H(Y)

(d) H(X)=2.H(Y)

2. Relative to Hard decision decoding, soft decision decoding results in

(a) less circuit complexity

(b) better bit error probability

(c) better coding gain

(d) lesser coding gain


3. Under error free reception, the syndrome vector computed for the received cyclic code
word consists of

(a) alternate 1‘s and 0‘s starting with a 1

(b) all ones

(c) all zeros

(d) alternate 0‘s and 1‘s starting with a 0

4. Source 1 is transmitting two messages with probabilities 0.2 and 0.8 and Source 2 is
transmitting two messages with probabilities 0.5 and 0.5. Then

(a) Maximum uncertainty is associated with Source 1

(b) Both the sources 1 and 2 have the maximum amount of uncertainty associated

(c) There is no uncertainty associated with either of the two sources.

(d) Maximum uncertainty is associated with Source 2

5. The Hamming Weight of the (6,3) Linear Block coded word 101011 is

(a) 5

(b) 4

(c) 2

(d) 3

6. If X is the transmitter and Y is the receiver and if the channel is the noise free, then the
mutual information I(X,Y) is equal to

(a) Conditional Entropy of the receiver, given the source

(b) Conditional Entropy of the source, given the receiver

(c) Entropy of the source

(d) Joint entropy of the source and receiver

7. In a Linear Block code

(a) the received power varies linearly with that of the transmitted power

(b) parity bits of the code word are the linear combination of the message bits

(c) the communication channel is a linear system

(d) the encoder satisfies superposition principle


8. The fundamental limit on the average number of bits/source symbol is

(a) Mutual Information

(b) Channel capacity

(c) Information content of the message

(d) Entropy of the source

9. A source is transmitting four messages with equal probability. Then for optimum Source
coding efficiency,

(a) necessarily, variable length coding schemes should be used

(b) Variable length coding schemes need not necessarily be used

(c) Convolutional codes should be used

(d) Fixed length coding schemes should not be used

10. If a memoryless source of information rate R is connected to a channel with a channel
capacity C, then on which of the following statements, the channel coding for the output of
the source is based?

(a) Minimum number of bits required to encode the output of the source is its entropy

(b) R must be less than or equal to C

(c) R must be greater than or equal to C

(d) R must be exactly equal to C

Answers

1.C

2.C

3.C

4.D

5.B

6.C

7.B

8.D

9.B
10.B

Code No: 56026 Set No. 1


JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY HYDERABAD
III B.Tech. II Sem., I Mid-Term Examinations, February – 2012
DIGITAL COMMUNICATIONS
Objective Exam
Name: ______________________________ Hall Ticket No.
Answer All Questions.
All Questions Carry Equal Marks. Time: 20 Min. Marks: 10.
I Choose the correct alternative:
1. The minimum band width required to multiplex 12 different message signals each of band
width 10KHz is [ ]
A) 60KHz B) 120KHz C) 180KHz D) 160KHz
2. In an 8-PSK system, adjacent phasors differ by an angle given by (in radians) [ ]
A) π/4 B) π/8 C) π/6 D) π/2
3. Band Width efficiency of a Digital Modulation Method is [ ]
A) (Minimum Band width)/ (Transmission Bit Rate)
B) (Power required)/( Minimum Band width)
C) (Transmission Bit rate)/ (Minimum Band width)
D) (Power Saved during transmission)/(Minimum Band width)
4. The Auto-correlation function of White Noise is [ ]
A) Impulse function B) Constant C) Sampling function D) Step function
5. The minimum band width required for a BPSK signal is equal to [ ]
A) one fourth of bit rate B) twice the bit rate C) half of the bit rate D) bit rate
6. Companding results in [ ]
A)More S/N ratio at higher amplitudes of the base band signal
B) More S/N ratio at lower amplitudes of the base band signal
C) Uniform S/N ratio throughout the base band signal
D) Better S/N ratio at lower frequencies
7. A uniform quantizer is having a step size of .05 volts. This quantizer suffers from a
maximum quantization error of [ ]
A) 0.1V B) 0.025 V C) 0.8 V D) 0.05 V
8. In Non-Coherent demodulation, the receiver [ ]
A) relies on carrier phase B) relies on the carrier amplitude
C) makes an error with less probability D) uses a carrier recovery circuit
9. The advantage of Manchester encoding is [ ]
A) less band width requirement B) less bit energy required for transmission
C) less probability of error D) less bit duration
10. Granular Noise in Delta Modulation system can be reduced by
A) using a square law device B) increasing the step size
C) decreasing the step size D) adjusting the rate of rise of the base band signal
Code No: 56026 :2: Set No. 1
II Fill in the blanks
11. Non-coherent detection of FSK signal results in ____________________
12. _____________ is used as a Predictor in a DPCM transmitter.
13. The Nyquist's rate of sampling of an analog signal S(t) for alias free reconstruction is
5000 samples/sec. For a signal x(t) = [S(t)]², the corresponding sampling rate in
samples/sec is __________________
14. A Matched filter is used to __________________________
15. A signal extending over -4v to +4v is quantized into 8 levels. The maximum possible
quantization error obtainable is _____________V.
16. The advantage of DPCM over Delta Modulation is _________________________
17. The phases in a QPSK system can be expressed as ______________________
18. The Synchronization is defined as _______________________
19. The sampling rate in Delta Modulation is _______________than PCM.
20. The bit error Probability of BPSK system is __________________that of QPSK.
Code No: 56026 Set No. 2
JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY HYDERABAD
III B.Tech. II Sem., I Mid-Term Examinations, February – 2012
DIGITAL COMMUNICATIONS
Objective Exam
Name: ______________________________ Hall Ticket No.
Answer All Questions. All Questions Carry Equal Marks.Time: 20 Min. Marks: 10.
I Choose the correct alternative:
1. The Auto-correlation function of White Noise is [ ]
A) Impulse function B) Constant C) Sampling function D) Step function
2. The minimum band width required for a BPSK signal is equal to [ ]
A) one fourth of bit rate B) twice the bit rate C) half of the bit rate D) bit rate
3. Companding results in [ ]
A)More S/N ratio at higher amplitudes of the base band signal
B) More S/N ratio at lower amplitudes of the base band signal
C) Uniform S/N ratio throughout the base band signal
D) Better S/N ratio at lower frequencies
4. A uniform quantizer is having a step size of .05 volts. This quantizer suffers from a
maximum quantization error of [ ]
A) 0.1V B) 0.025 V C) 0.8 V D) 0.05 V
5. In Non-Coherent demodulation, the receiver [ ]
A) relies on carrier phase B) relies on the carrier amplitude
C) makes an error with less probability D) uses a carrier recovery circuit
6. The advantage of Manchester encoding is [ ]
A) less band width requirement B) less bit energy required for transmission
C) less probability of error D) less bit duration
7. Granular Noise in Delta Modulation system can be reduced by
A) using a square law device B) increasing the step size
C) decreasing the step size D) adjusting the rate of rise of the base band signal
8. The minimum band width required to multiplex 12 different message signals each of band
width 10KHz is [ ]
A) 60KHz B) 120KHz C) 180KHz D) 160KHz
9. In an 8-PSK system, adjacent phasors differ by an angle given by (in radians) [ ]
A) π/4 B) π/8 C) π/6 D) π/2
10. Band Width efficiency of a Digital Modulation Method is [ ]
A) (Minimum Band width)/ (Transmission Bit Rate)
B) (Power required)/( Minimum Band width)
C) (Transmission Bit rate)/ (Minimum Band width)
D) (Power Saved during transmission)/(Minimum Band width)
Code No: 56026 :2: Set No. 2
II Fill in the blanks
11. A Matched filter is used to __________________________
12. A signal extending over -4v to +4v is quantized into 8 levels. The maximum possible
quantization error obtainable is _____________V.
13. The advantage of DPCM over Delta Modulation is _________________________
14. The phases in a QPSK system can be expressed as ______________________
15. The Synchronization is defined as _______________________
16. The sampling rate in Delta Modulation is _______________than PCM.
17. The bit error Probability of BPSK system is __________________that of QPSK.
18. Non-coherent detection of FSK signal results in ____________________
19. _____________ is used as a Predictor in a DPCM transmitter.
20. The Nyquist's rate of sampling of an analog signal S(t) for alias free reconstruction is
5000 samples/sec. For a signal x(t) = [S(t)]², the corresponding sampling rate in
samples/sec is __________________
-oOo-

Code No: 56026 Set No. 3


JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY HYDERABAD
III B.Tech. II Sem., I Mid-Term Examinations, February – 2012
DIGITAL COMMUNICATIONS
Objective Exam
Name: ______________________________ Hall Ticket No.
Answer All Questions. All Questions Carry Equal Marks.Time: 20 Min. Marks: 10.
I Choose the correct alternative:
1. Companding results in [ ]
A)More S/N ratio at higher amplitudes of the base band signal
B) More S/N ratio at lower amplitudes of the base band signal
C) Uniform S/N ratio throughout the base band signal
D) Better S/N ratio at lower frequencies
2. A uniform quantizer is having a step size of .05 volts. This quantizer suffers from a
maximum quantization error of [ ]
A) 0.1V B) 0.025 V C) 0.8 V D) 0.05 V
3. In Non-Coherent demodulation, the receiver [ ]
A) relies on carrier phase B) relies on the carrier amplitude
C) makes an error with less probability D) uses a carrier recovery circuit
4. The advantage of Manchester encoding is [ ]
A) less band width requirement B) less bit energy required for transmission
C) less probability of error D) less bit duration
5. Granular Noise in Delta Modulation system can be reduced by
A) using a square law device B) increasing the step size
C) decreasing the step size D) adjusting the rate of rise of the base band signal
6. The minimum band width required to multiplex 12 different message signals each of band
width 10KHz is [ ]
A) 60KHz B) 120KHz C) 180KHz D) 160KHz
7. In an 8-PSK system, adjacent phasors differ by an angle given by (in radians) [ ]
A) π/4 B) π/8 C) π/6 D) π/2
8. Band Width efficiency of a Digital Modulation Method is [ ]
A) (Minimum Band width)/ (Transmission Bit Rate)
B) (Power required)/( Minimum Band width)
C) (Transmission Bit rate)/ (Minimum Band width)
D) (Power Saved during transmission)/(Minimum Band width)
9. The Auto-correlation function of White Noise is [ ]
A) Impulse function B) Constant C) Sampling function D) Step function
10. The minimum band width required for a BPSK signal is equal to [ ]
A) one fourth of bit rate B) twice the bit rate C) half of the bit rate D) bit rate
Code No: 56026 :2: Set No. 3
II Fill in the blanks
11. The advantage of DPCM over Delta Modulation is _________________________
12. The phases in a QPSK system can be expressed as ______________________
13. The Synchronization is defined as _______________________
14. The sampling rate in Delta Modulation is _______________than PCM.
15. The bit error Probability of BPSK system is __________________that of QPSK.
16. Non-coherent detection of FSK signal results in ____________________
17. _____________ is used as a Predictor in a DPCM transmitter.
18. The Nyquist's rate of sampling of an analog signal S(t) for alias free reconstruction is
5000 samples/sec. For a signal x(t) = [S(t)]², the corresponding sampling rate in
samples/sec is __________________
19. A Matched filter is used to __________________________
20. A signal extending over -4v to +4v is quantized into 8 levels. The maximum possible
quantization error obtainable is _____________V.
-oOo-

Code No: 56026 Set No. 4


JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY HYDERABAD
III B.Tech. II Sem., I Mid-Term Examinations, February – 2012
DIGITAL COMMUNICATIONS
Objective Exam
Name: ______________________________ Hall Ticket No.
Answer All Questions. All Questions Carry Equal Marks.Time: 20 Min. Marks: 10.
I Choose the correct alternative:
1. In Non-Coherent demodulation, the receiver [ ]
A) relies on carrier phase B) relies on the carrier amplitude
C) makes an error with less probability D) uses a carrier recovery circuit
2. The advantage of Manchester encoding is [ ]
A) less band width requirement B) less bit energy required for transmission
C) less probability of error D) less bit duration
3. Granular Noise in Delta Modulation system can be reduced by
A) using a square law device B) increasing the step size
C) decreasing the step size D) adjusting the rate of rise of the base band signal
4. The minimum band width required to multiplex 12 different message signals each of band
width 10KHz is [ ]
A) 60KHz B) 120KHz C) 180KHz D) 160KHz
5. In an 8-PSK system, adjacent phasors differ by an angle given by (in radians) [ ]
A) π/4 B) π/8 C) π/6 D) π/2
6. Band Width efficiency of a Digital Modulation Method is [ ]
A) (Minimum Band width)/ (Transmission Bit Rate)
B) (Power required)/( Minimum Band width)
C) (Transmission Bit rate)/ (Minimum Band width)
D) (Power Saved during transmission)/(Minimum Band width)
7. The Auto-correlation function of White Noise is [ ]
A) Impulse function B) Constant C) Sampling function D) Step function
8. The minimum band width required for a BPSK signal is equal to [ ]
A) one fourth of bit rate B) twice the bit rate C) half of the bit rate D) bit rate
9. Companding results in [ ]
A)More S/N ratio at higher amplitudes of the base band signal
B) More S/N ratio at lower amplitudes of the base band signal
C) Uniform S/N ratio throughout the base band signal
D) Better S/N ratio at lower frequencies
10. A uniform quantizer is having a step size of .05 volts. This quantizer suffers from a
maximum quantization error of [ ]
A) 0.1V B) 0.025 V C) 0.8 V D) 0.05 V

Code No: 56026 :2: Set No. 4


II Fill in the blanks
11. The Synchronization is defined as _______________________
12. The sampling rate in Delta Modulation is _______________than PCM.
13. The bit error Probability of BPSK system is __________________that of QPSK.
14. Non-coherent detection of FSK signal results in ____________________
15. _____________ is used as a Predictor in a DPCM transmitter.
16. The Nyquist's rate of sampling of an analog signal S(t) for alias free reconstruction is
5000 samples/sec. For a signal x(t) = [S(t)]², the corresponding sampling rate in
samples/sec is __________________
17. A Matched filter is used to __________________________
18. A signal extending over -4v to +4v is quantized into 8 levels. The maximum possible
quantization error obtainable is _____________V.
19. The advantage of DPCM over Delta Modulation is _________________________
20. The phases in a QPSK system can be expressed as ______________________
-oOo-

Code No: 56026 Set No. 1


JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY HYDERABAD
III B.Tech. II Sem., II Mid-Term Examinations, April – 2012
DIGITAL COMMUNICATIONS
Objective Exam
Name: ______________________________ Hall Ticket No.
Answer All Questions. All Questions Carry Equal Marks.Time: 20 Min. Marks: 10.
I Choose the correct alternative:
1. Information rate of a source is [ ]
A) maximum when the source is continuous B) the entropy of the source measured in
bits/message
C) a measure of the uncertainty of the communication system
D) the entropy of the source measured in bits/sec.
2. The Hamming Weight of the (6,3) Linear Block coded word 101011 [ ]
A) 5 B) 4 C) 2 D) 3
3. Which of the following can be the generating polynomial for a (7,4) systematic Cyclic
code? [ ]
A) x³+x+1 B) x⁵+x²+1 C) x⁴+x³+1 D) x⁷+x⁴+x³+1
4. In a Linear Block code [ ]
A) the received power varies linearly with that of the transmitted power
B) parity bits of the code word are the linear combination of the message bits
C) the communication channel is a linear system
D) the encoder satisfies super position principle
5. The fundamental limit on the average number of bits/source symbol is [ ]
A) Mutual Information B) Channel capacity
C) Information content of the message D) Entropy of the source
6. A system has a band width of 3KHz and an S/N ratio of 29dB at the input of the receiver.
If the band width of the channel gets doubled, then [ ]
A) its capacity gets halved B) the corresponding S/N ratio gets doubled
C) the corresponding S/N ratio gets halved D) its capacity gets doubled
7. The Channel Matrix of a Noiseless channel [ ]
A) consists of a single nonzero number in each column
B) consists of a single nonzero number in each row
C) is a square Matrix
D) is an Identity Matrix
8. A source emits messages A and B with probability 0.8 and 0.2 respectively. The
redundancy provided by the optimum source-coding scheme for the above source is [ ]
A) 27% B) 72% C) 55% D) 45%
9. A source X and the receiver Y are connected by a noise free channel. Its capacity is [ ]
A) Max H(Y/X) B) Max H(X) C) Max H(X/Y) D) Max H(X,Y)
10. Exchange between Band width and Signal noise ratio can be justified based on [ ]
A) Hartley - Shannon‘s Law B) Shannon‘s source coding Theorem
C) Shannon‘s limit D) Shannon‘s channel coding Theorem

Code No: 56026 Set No. 1


DIGITAL COMMUNICATIONS
KEYS
I Choose the correct alternative:
1. D
2. B
3. A
4. B
5. D
6. C
7. D
8. A
9. B
10. A
II Fill in the blanks
11. 3
12. to transmit the information signal using orthogonal codes
13. symmetric Binary channel
14. Source extension
15. It has soft capacity limit
16. Variable length coding scheme
17. 18
18. Better bit error probability
19. ZERO
20. It has soft capacity limit
-oOo-

Code No: 56026 :2: Set No. 1


II Fill in the blanks
11. The Parity check matrix of a linear block code is
101100
011010
110001
Its Hamming distance is ___________________
12. The significance of PN sequence in CDMA is ________________
13. The cascade of two Binary Symmetric Channels is a __________________________
14. The source coding efficiency can be increased by using _______________________
15. The advantage of Spread Spectrum Modulation schemes over other modulations is
_________________
16. Entropy coding is a _____________________
17. A convolutional encoder of code rate 1/2 is a 3 stage shift register with a message word
length of 6.The code word length obtained from the encoder ( in bits) is
_____________
18. Relative to Hard decision decoding, soft decision decoding results in _____________
19. If T is the code vector and H is the Parity check Matrix of a Linear Block code, then the
code is defined by the set of all code vectors for which H·Tᵀ = ______________
20. The advantage of CDMA over Frequency hopping is ____________
-oOo-
Code No: 56026 Set No. 2
JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY HYDERABAD
III B.Tech. II Sem., II Mid-Term Examinations, April – 2012
DIGITAL COMMUNICATIONS
Objective Exam
Name: ______________________________ Hall Ticket No.
Answer All Questions. All Questions Carry Equal Marks.Time: 20 Min. Marks: 10.
I Choose the correct alternative:
1. In a Linear Block code [ ]
A) the received power varies linearly with that of the transmitted power
B) parity bits of the code word are the linear combination of the message bits
C) the communication channel is a linear system
D) the encoder satisfies super position principle
2. The fundamental limit on the average number of bits/source symbol is [ ]
A) Mutual Information B) Channel capacity
C) Information content of the message D) Entropy of the source
3. A system has a band width of 3KHz and an S/N ratio of 29dB at the input of the receiver.
If the band width of the channel gets doubled, then [ ]
A) its capacity gets halved B) the corresponding S/N ratio gets doubled
C) the corresponding S/N ratio gets halved D) its capacity gets doubled
4. The Channel Matrix of a Noiseless channel [ ]
A) consists of a single nonzero number in each column
B) consists of a single nonzero number in each row
C) is a square Matrix
D) is an Identity Matrix
5. A source emits messages A and B with probabilities 0.8 and 0.2 respectively. The redundancy provided by the optimum source-coding scheme for the above source is [ ]
A) 27% B) 72% C) 55% D) 45%
6. A source X and the receiver Y are connected by a noise free channel. Its capacity is [ ]
A) Max H(Y/X) B) Max H(X) C) Max H(X/Y) D) Max H(X,Y)
7. Exchange between Band width and Signal noise ratio can be justified based on [ ]
A) Hartley-Shannon's Law B) Shannon's source coding Theorem
C) Shannon's limit D) Shannon's channel coding Theorem
8. Information rate of a source is [ ]
A) maximum when the source is continuous B) the entropy of the source measured in
bits/message
C) a measure of the uncertainty of the communication system
D) the entropy of the source measured in bits/sec.
9. The Hamming Weight of the (6,3) Linear Block coded word 101011 is [ ]
A) 5 B) 4 C) 2 D) 3
10. Which of the following can be the generating polynomial for a (7,4) systematic Cyclic code? [ ]
A) x^3+x+1 B) x^5+x^2+1 C) x^4+x^3+1 D) x^7+x^4+x^3+1

Code No: 56026 :2: Set No. 2

II Fill in the blanks
11. The source coding efficiency can be increased by using _______________________
12. The advantage of Spread Spectrum Modulation schemes over other modulations is
_________________
13. Entropy coding is a _____________________
14. A convolutional encoder of code rate 1/2 is a 3 stage shift register with a message word
length of 6.The code word length obtained from the encoder ( in bits) is
_____________
15. Relative to Hard decision decoding, soft decision decoding results in _____________
16. If T is the code vector and H is the Parity check Matrix of a Linear Block code, then the code is defined by the set of all code vectors for which H.T^T = ______________
17. The advantage of CDMA over Frequency hopping is ____________
18. The Parity check matrix of a linear block code is
101100
011010
110001
Its Hamming distance is ___________________
19. The significance of PN sequence in CDMA is ________________
20. The cascade of two Binary Symmetric Channels is a __________________________
-oOo-

Code No: 56026 Set No. 3

JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY HYDERABAD
III B.Tech. II Sem., II Mid-Term Examinations, April – 2012
DIGITAL COMMUNICATIONS
Objective Exam
Name: ______________________________ Hall Ticket No.
Answer All Questions. All Questions Carry Equal Marks. Time: 20 Min. Marks: 10.
I Choose the correct alternative:
1. A system has a band width of 3KHz and an S/N ratio of 29dB at the input of the receiver.
If the band width of the channel gets doubled, then [ ]
A) its capacity gets halved B) the corresponding S/N ratio gets doubled
C) the corresponding S/N ratio gets halved D) its capacity gets doubled
2. The Channel Matrix of a Noiseless channel [ ]
A) consists of a single nonzero number in each column
B) consists of a single nonzero number in each row
C) is a square Matrix
D) is an Identity Matrix
3. A source emits messages A and B with probabilities 0.8 and 0.2 respectively. The redundancy provided by the optimum source-coding scheme for the above source is [ ]
A) 27% B) 72% C) 55% D) 45%
4. A source X and the receiver Y are connected by a noise free channel. Its capacity is [ ]
A) Max H(Y/X) B) Max H(X) C) Max H(X/Y) D) Max H(X,Y)
5. Exchange between Band width and Signal noise ratio can be justified based on [ ]
A) Hartley-Shannon's Law B) Shannon's source coding Theorem
C) Shannon's limit D) Shannon's channel coding Theorem
6. Information rate of a source is [ ]
A) maximum when the source is continuous B) the entropy of the source measured in
bits/message
C) a measure of the uncertainty of the communication system
D) the entropy of the source measured in bits/sec.
7. The Hamming Weight of the (6,3) Linear Block coded word 101011 is [ ]
A) 5 B) 4 C) 2 D) 3
8. Which of the following can be the generating polynomial for a (7,4) systematic Cyclic code? [ ]
A) x^3+x+1 B) x^5+x^2+1 C) x^4+x^3+1 D) x^7+x^4+x^3+1
9. In a Linear Block code [ ]
A) the received power varies linearly with that of the transmitted power
B) parity bits of the code word are the linear combination of the message bits
C) the communication channel is a linear system
D) the encoder satisfies super position principle
10. The fundamental limit on the average number of bits/source symbol is [ ]
A) Mutual Information B) Channel capacity
C) Information content of the message D) Entropy of the source

Code No: 56026 :2: Set No. 3

II Fill in the blanks
11. Entropy coding is a _____________________
12. A convolutional encoder of code rate 1/2 is a 3 stage shift register with a message word
length of 6.The code word length obtained from the encoder ( in bits) is
_____________
13. Relative to Hard decision decoding, soft decision decoding results in _____________
14. If T is the code vector and H is the Parity check Matrix of a Linear Block code, then the code is defined by the set of all code vectors for which H.T^T = ______________
15. The advantage of CDMA over Frequency hopping is ____________
16. The Parity check matrix of a linear block code is
101100
011010
110001
Its Hamming distance is ___________________
17. The significance of PN sequence in CDMA is ________________
18. The cascade of two Binary Symmetric Channels is a __________________________
19. The source coding efficiency can be increased by using _______________________
20. The advantage of Spread Spectrum Modulation schemes over other modulations is
_________________
Code No: 56026 Set No. 4
JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY HYDERABAD
III B.Tech. II Sem., II Mid-Term Examinations, April – 2012
DIGITAL COMMUNICATIONS
Objective Exam
Name: ______________________________ Hall Ticket No.
Answer All Questions. All Questions Carry Equal Marks. Time: 20 Min. Marks: 10.
I Choose the correct alternative:
1. A source emits messages A and B with probabilities 0.8 and 0.2 respectively. The redundancy provided by the optimum source-coding scheme for the above source is [ ]
A) 27% B) 72% C) 55% D) 45%
2. A source X and the receiver Y are connected by a noise free channel. Its capacity is [ ]
A) Max H(Y/X) B) Max H(X) C) Max H(X/Y) D) Max H(X,Y)
3. Exchange between Band width and Signal noise ratio can be justified based on [ ]
A) Hartley-Shannon's Law B) Shannon's source coding Theorem
C) Shannon's limit D) Shannon's channel coding Theorem
4. Information rate of a source is [ ]
A) maximum when the source is continuous B) the entropy of the source measured in
bits/message
C) a measure of the uncertainty of the communication system
D) the entropy of the source measured in bits/sec.
5. The Hamming Weight of the (6,3) Linear Block coded word 101011 is [ ]
A) 5 B) 4 C) 2 D) 3
6. Which of the following can be the generating polynomial for a (7,4) systematic Cyclic code? [ ]
A) x^3+x+1 B) x^5+x^2+1 C) x^4+x^3+1 D) x^7+x^4+x^3+1
7. In a Linear Block code [ ]
A) the received power varies linearly with that of the transmitted power
B) parity bits of the code word are the linear combination of the message bits
C) the communication channel is a linear system
D) the encoder satisfies super position principle
8. The fundamental limit on the average number of bits/source symbol is [ ]
A) Mutual Information B) Channel capacity
C) Information content of the message D) Entropy of the source
9. A system has a band width of 3KHz and an S/N ratio of 29dB at the input of the receiver.
If the band width of the channel gets doubled, then [ ]
A) its capacity gets halved B) the corresponding S/N ratio gets doubled
C) the corresponding S/N ratio gets halved D) its capacity gets doubled
10. The Channel Matrix of a Noiseless channel [ ]
A) consists of a single nonzero number in each column
B) consists of a single nonzero number in each row
C) is a square Matrix
D) is an Identity Matrix
Code No: 56026 :2: Set No. 4
II Fill in the blanks
11. Relative to Hard decision decoding, soft decision decoding results in _____________
12. If T is the code vector and H is the Parity check Matrix of a Linear Block code, then the code is defined by the set of all code vectors for which H.T^T = ______________
13. The advantage of CDMA over Frequency hopping is ____________
14. The Parity check matrix of a linear block code is
101100
011010
110001
Its Hamming distance is ___________________
15. The significance of PN sequence in CDMA is ________________
16. The cascade of two Binary Symmetric Channels is a __________________________
17. The source coding efficiency can be increased by using _______________________
18. The advantage of Spread Spectrum Modulation schemes over other modulations is
_________________
19. Entropy coding is a _____________________
20. A convolutional encoder of code rate 1/2 is a 3 stage shift register with a message word
length of 6.The code word length obtained from the encoder ( in bits) is
_____________
-oOo-

Code No: 07A5EC09 Set No. 1

JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY HYDERABAD
III B.Tech. I Sem., I Mid-Term Examinations, September – 2010
DIGITAL COMMUNICATIONS
Objective Exam
Name: ______________________________ Hall Ticket No.
Answer All Questions. All Questions Carry Equal Marks. Time: 20 Min. Marks: 20.
I Choose the correct alternative:
1) Word length in Delta modulation is [ ]
A) 3 bits B) 2 bits C) 1 bit D) 2^n bits
2) Which of the following gives minimum probability of error [ ]
A)FSK B)ASK C)DPSK D)PSK
3) QPSK is an example of M-ary data transmission with M= [ ]
A)2 B)8 C)6 D)4
4) The quantization error in PCM when Δ is the step size [ ]
A) Δ^2/12 B) Δ^2/2 C) Δ^2/4 D) Δ^2/3
5) Quantization noise occurs in [ ]
A) TDM B)FDM C) PCM D)PWM
6) Non Uniform quantization is used to make [ ]
A) (S/N)q uniform B) (S/N)q non-uniform
C) (S/N)q high D) (S/N)q low
7) Slope Overload distortion in DM can be reduced by [ ]
A) Increasing step size B)Decreasing step size
C) Uniform step size D)Zero step size
8) Which of the following requires more band width [ ]
A) ASK B)PSK C)FSK D)DPSK
9) Companding results in [ ]
A) More S/N ratio at higher amplitudes B) More S/N ratio at lower amplitudes
C) Uniform S/N ratio throughout the signal D) Better S/N ratio at lower frequencies
10) Mean square quantization noise in the PCM system with step size of 2V is [ ]
A)1/3 B)1/12 C)3/2 D)2

Code No: 07A5EC09 :2: Set No.1

II Fill in the blanks:
11) The minimum symbol rate of a PCM system transmitting an analog signal band limited to 2 KHz with 64 Q-levels is ------------------
12) In DM, granular noise occurs when the step size is -------------
13) The combination of compressor and expander is called---------------------
14) Data word length in DM is ---------------
15) Band width of PCM signals is -----------
16) A signal extending over -4V to +4V is quantized into 8 levels. The maximum possible quantization error obtainable is-------------
17) Probability of error of PSK scheme is-----------------------
18) PSK and FSK have a constant--------------
19) Granular noise occurs when step size is--------------
20) Converting a discrete-time continuous-amplitude signal into a discrete-amplitude discrete-time signal is called-----------------.
-oOo-
Code No: 07A5EC09 Set No. 2
JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY HYDERABAD
III B.Tech. I Sem., I Mid-Term Examinations, September – 2010
DIGITAL COMMUNICATIONS
Objective Exam
Name: ______________________________ Hall Ticket No.
Answer All Questions. All Questions Carry Equal Marks. Time: 20 Min. Marks: 20.
I Choose the correct alternative:
1) The quantization error in PCM when Δ is the step size [ ]
A) Δ^2/12 B) Δ^2/2 C) Δ^2/4 D) Δ^2/3
2) Quantization noise occurs in [ ]
A) TDM B)FDM C) PCM D)PWM
3) Non Uniform quantization is used to make [ ]
A) (S/N)q uniform B) (S/N)q non-uniform
C) (S/N)q high D) (S/N)q low
4) Slope Overload distortion in DM can be reduced by [ ]
A) Increasing step size B)Decreasing step size
C) Uniform step size D)Zero step size
5) Which of the following requires more band width [ ]
A) ASK B)PSK C)FSK D)DPSK
6) Companding results in [ ]
A) More S/N ratio at higher amplitudes B) More S/N ratio at lower amplitudes
C) Uniform S/N ratio throughout the signal D) Better S/N ratio at lower frequencies
7) Mean square quantization noise in the PCM system with step size of 2V is [ ]
A)1/3 B)1/12 C)3/2 D)2
8) Word length in Delta modulation is [ ]
A) 3 bits B) 2 bits C) 1 bit D) 2^n bits
9) Which of the following gives minimum probability of error [ ]
A)FSK B)ASK C)DPSK D)PSK
10) QPSK is an example of M-ary data transmission with M= [ ]
A)2 B)8 C)6 D)4

Code No: 07A5EC09 :2: Set No.2

II Fill in the blanks:
11) Data word length in DM is ---------------
12) Band width of PCM signals is -----------
13) A signal extending over -4V to +4V is quantized into 8 levels. The maximum possible quantization error obtainable is-------------
14) Probability of error of PSK scheme is-----------------------
15) PSK and FSK have a constant--------------
16) Granular noise occurs when step size is--------------
17) Converting a discrete-time continuous-amplitude signal into a discrete-amplitude discrete-time signal is called-----------------.
18) The minimum symbol rate of a PCM system transmitting an analog signal band limited to 2 KHz with 64 Q-levels is ------------------
19) In DM, granular noise occurs when the step size is -------------
20) The combination of compressor and expander is called---------------------
-oOo-

Code No: 07A5EC09 Set No. 3

JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY HYDERABAD
III B.Tech. I Sem., I Mid-Term Examinations, September – 2010
DIGITAL COMMUNICATIONS
Objective Exam
Name: ______________________________ Hall Ticket No.
Answer All Questions. All Questions Carry Equal Marks. Time: 20 Min. Marks: 20.
I Choose the correct alternative:
1) Non Uniform quantization is used to make [ ]
A) (S/N)q uniform B) (S/N)q non-uniform
C) (S/N)q high D) (S/N)q low
2) Slope Overload distortion in DM can be reduced by [ ]
A) Increasing step size B)Decreasing step size
C) Uniform step size D)Zero step size
3) Which of the following requires more band width [ ]
A) ASK B)PSK C)FSK D)DPSK
4) Companding results in [ ]
A) More S/N ratio at higher amplitudes B) More S/N ratio at lower amplitudes
C) Uniform S/N ratio throughout the signal D) Better S/N ratio at lower frequencies
5) Mean square quantization noise in the PCM system with step size of 2V is [ ]
A)1/3 B)1/12 C)3/2 D)2
6) Word length in Delta modulation is [ ]
A) 3 bits B) 2 bits C) 1 bit D) 2^n bits
7) Which of the following gives minimum probability of error [ ]
A)FSK B)ASK C)DPSK D)PSK
8) QPSK is an example of M-ary data transmission with M= [ ]
A)2 B)8 C)6 D)4
9) The quantization error in PCM when Δ is the step size [ ]
A) Δ^2/12 B) Δ^2/2 C) Δ^2/4 D) Δ^2/3
10) Quantization noise occurs in [ ]
A) TDM B)FDM C) PCM D)PWM
Code No: 07A5EC09 :2: Set No.3
II Fill in the blanks:
11) A signal extending over -4V to +4V is quantized into 8 levels. The maximum possible quantization error obtainable is-------------
12) Probability of error of PSK scheme is-----------------------
13) PSK and FSK have a constant--------------
14) Granular noise occurs when step size is--------------
15) Converting a discrete-time continuous-amplitude signal into a discrete-amplitude discrete-time signal is called-----------------.
16) The minimum symbol rate of a PCM system transmitting an analog signal band limited to 2 KHz with 64 Q-levels is ------------------
17) In DM, granular noise occurs when the step size is -------------
18) The combination of compressor and expander is called---------------------
19) Data word length in DM is ---------------
20) Band width of PCM signals is -----------
-oOo-

Code No: 07A5EC09 Set No. 4

JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY HYDERABAD
III B.Tech. I Sem., I Mid-Term Examinations, September – 2010
DIGITAL COMMUNICATIONS
Objective Exam
Name: ______________________________ Hall Ticket No.
Answer All Questions. All Questions Carry Equal Marks. Time: 20 Min. Marks: 20.
I Choose the correct alternative:
1) Which of the following requires more band width [ ]
A) ASK B)PSK C)FSK D)DPSK
2) Companding results in [ ]
A) More S/N ratio at higher amplitudes B) More S/N ratio at lower amplitudes
C) Uniform S/N ratio throughout the signal D) Better S/N ratio at lower frequencies
3) Mean square quantization noise in the PCM system with step size of 2V is [ ]
A)1/3 B)1/12 C)3/2 D)2
4) Word length in Delta modulation is [ ]
A) 3 bits B) 2 bits C) 1 bit D) 2^n bits
5) Which of the following gives minimum probability of error [ ]
A)FSK B)ASK C)DPSK D)PSK
6) QPSK is an example of M-ary data transmission with M= [ ]
A)2 B)8 C)6 D)4
7) The quantization error in PCM when Δ is the step size [ ]
A) Δ^2/12 B) Δ^2/2 C) Δ^2/4 D) Δ^2/3
8) Quantization noise occurs in [ ]
A) TDM B)FDM C) PCM D)PWM
9) Non Uniform quantization is used to make [ ]
A) (S/N)q uniform B) (S/N)q non-uniform
C) (S/N)q high D) (S/N)q low
10) Slope Overload distortion in DM can be reduced by [ ]
A) Increasing step size B)Decreasing step size
C) Uniform step size D)Zero step size
Code No: 07A5EC09 :2: Set No.4
II Fill in the blanks:
11) PSK and FSK have a constant--------------
12) Granular noise occurs when step size is--------------
13) Converting a discrete-time continuous-amplitude signal into a discrete-amplitude discrete-time signal is called-----------------.
14) The minimum symbol rate of a PCM system transmitting an analog signal band limited to 2 KHz with 64 Q-levels is ------------------
15) In DM, granular noise occurs when the step size is -------------
16) The combination of compressor and expander is called---------------------
17) Data word length in DM is ---------------
18) Band width of PCM signals is -----------
19) A signal extending over -4V to +4V is quantized into 8 levels. The maximum possible quantization error obtainable is-------------
20) Probability of error of PSK scheme is-----------------------
-oOo-

Code No: 07A5EC09 Set No. 1

JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY HYDERABAD
III B.Tech. I Sem., II Mid-Term Examinations, November – 2010
DIGITAL COMMUNICATIONS
Objective Exam
Name: ______________________________ Hall Ticket No.
Answer All Questions. All Questions Carry Equal Marks. Time: 20 Min. Marks: 20.
I Choose the correct alternative:
1) The channel matrix of a noiseless channel [ ]
a)consists of a single nonzero number in each column.
b) consists of a single nonzero number in each row.
c) is an Identity Matrix. d) is a square matrix.
2) Information content of a message [ ]
a) increases with its certainty of occurrence. b) is independent of the certainty of occurrence.
c) increases with its uncertainty of occurrence. d) is the logarithm of its uncertainty of occurrence.
3) The channel capacity of a BSC with the transition probability ½ is [ ]
a) 0 bits b) 1 bit c) 2 bits d) infinity
4) For the data word 1110 in a (7, 4) non-systematic cyclic code with the generator polynomial 1+x^2+x^3, the code polynomial is [ ]
a) 1+x+x^3+x^5 b) 1+x^2+x^3+x^5 c) 1+x^2+x^3+x^4 d) 1+x+x^5
5) A source transmitting 'm' number of messages is connected to a noise free channel. The capacity of the channel is [ ]
a) m bits/symbol b) m^2 bits/symbol c) log m bits/symbol d) 2m bits/symbol
6) Which of the following is a p(Y/X) matrix for a binary symmetric channel [ ]
a) b) c) d) None
7) Exchange between channel bandwidth and (S/N) ratio can be adjusted based on [ ]
a)Shannon’s limit b)Shannon’s source coding
c) Shannon’s channel coding d) Shannon Hartley theorem
8) For the data word 1110 in a (7, 4) non-systematic cyclic code with the generator polynomial 1+x+x^3, the code polynomial is [ ]
a) 1+x+x^3+x^5 b) 1+x^2+x^3+x^5 c) 1+x^2+x^3+x^4 d) 1+x^4+x^5
9) A source X with entropy 2 bits/message is connected to the receiver Y through a noise free channel. The conditional entropy of the source is H(X/Y) and the joint entropy of the source and the receiver is H(X, Y). Then [ ]
a) H(X,Y)= 2 bits/message b) H(X/Y)= 2 bits/message
c) H(X, Y)= 0 bits/message d) H(X/Y)= 1 bit/message

Code No: 07A5EC09 :2: Set No.1

10) Which of the following is a p(Y/X) matrix for a binary Erasure channel [ ]
a) [1-p, p; q, 1-q] b) c) d) None
II Fill in the blanks:
11) The information rate of a source is also referred to as entropy measured in
______________
12) H(X,Y)=______________ or __________________
13) Capacity of a noise free channel is _________________
14) The Shannon’s limit is ______________
15) The channel capacity with infinite bandwidth is not infinite because ____________
16) Assuming 26 characters are equally likely , the average of the information content of
English language in bits/character is________________
17) The distance between two vectors c1 and c2, defined as the number of components in which they differ, is called ____________________
18) The minimum distance of a linear block code is equal to____________________of any
non-zero code word in the code.
19) A linear block code with a minimum distance dmin can detect upto ___________________
20) For a Linear Block code Code rate =_________________
-oOo-
Code No: 07A5EC09 Set No. 2
JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY HYDERABAD
III B.Tech. I Sem., II Mid-Term Examinations, November – 2010
DIGITAL COMMUNICATIONS
Objective Exam
Name: ______________________________ Hall Ticket No.
Answer All Questions. All Questions Carry Equal Marks. Time: 20 Min. Marks: 20.
I Choose the correct alternative:
1) For the data word 1110 in a (7, 4) non-systematic cyclic code with the generator polynomial 1+x^2+x^3, the code polynomial is [ ]
a) 1+x+x^3+x^5 b) 1+x^2+x^3+x^5 c) 1+x^2+x^3+x^4 d) 1+x+x^5
2) A source transmitting 'm' number of messages is connected to a noise free channel. The capacity of the channel is [ ]
a) m bits/symbol b) m^2 bits/symbol c) log m bits/symbol d) 2m bits/symbol
3) Which of the following is a p(Y/X) matrix for a binary symmetric channel [ ]
a) b) c) d) None
4) Exchange between channel bandwidth and (S/N) ratio can be adjusted based on [ ]
a)Shannon’s limit b)Shannon’s source coding
c) Shannon’s channel coding d) Shannon Hartley theorem
5) For the data word 1110 in a (7, 4) non-systematic cyclic code with the generator polynomial 1+x+x^3, the code polynomial is [ ]
a) 1+x+x^3+x^5 b) 1+x^2+x^3+x^5 c) 1+x^2+x^3+x^4 d) 1+x^4+x^5
6) A source X with entropy 2 bits/message is connected to the receiver Y through a noise free channel. The conditional entropy of the source is H(X/Y) and the joint entropy of the source and the receiver is H(X, Y). Then [ ]
a) H(X,Y)= 2 bits/message b) H(X/Y)= 2 bits/message
c) H(X, Y)= 0 bits/message d) H(X/Y)= 1 bit/message
7) Which of the following is a p(Y/X) matrix for a binary Erasure channel [ ]
a) [1-p, p; q, 1-q] b) c) d) None
8) The channel matrix of a noiseless channel [ ]
a)consists of a single nonzero number in each column.
b) consists of a single nonzero number in each row.
c) is an Identity Matrix. d) is a square matrix.
9) Information content of a message [ ]
a) increases with its certainty of occurrence. b) is independent of the certainty of occurrence.
c) increases with its uncertainty of occurrence. d) is the logarithm of its uncertainty of occurrence.
Code No: 07A5EC09 :2: Set No.2
10) The channel capacity of a BSC with the transition probability ½ is [ ]
a) 0 bits b) 1 bit c) 2 bits d) infinity
II Fill in the blanks:
11) The Shannon’s limit is ______________
12) The channel capacity with infinite bandwidth is not infinite because ____________
13) Assuming 26 characters are equally likely , the average of the information content of
English language in bits/character is________________
14) The distance between two vectors c1 and c2, defined as the number of components in which they differ, is called ____________________
15) The minimum distance of a linear block code is equal to____________________of any
non-zero code word in the code.
16) A linear block code with a minimum distance dmin can detect upto ___________________
17) For a Linear Block code Code rate =_________________
18) The information rate of a source is also referred to as entropy measured in
______________
19) H(X,Y)=______________ or __________________
20) Capacity of a noise free channel is _________________
-oOo-

Code No: 07A5EC09 Set No. 3

JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY HYDERABAD
III B.Tech. I Sem., II Mid-Term Examinations, November – 2010
DIGITAL COMMUNICATIONS
Objective Exam
Name: ______________________________ Hall Ticket No.
Answer All Questions. All Questions Carry Equal Marks. Time: 20 Min. Marks: 20.
I Choose the correct alternative:
1) Which of the following is a p(Y/X) matrix for a binary symmetric channel [ ]
a) b) c) d) None
2) Exchange between channel bandwidth and (S/N) ratio can be adjusted based on [ ]
a)Shannon’s limit b)Shannon’s source coding
c) Shannon’s channel coding d) Shannon Hartley theorem
3) For the data word 1110 in a (7, 4) non-systematic cyclic code with the generator polynomial 1+x+x^3, the code polynomial is [ ]
a) 1+x+x^3+x^5 b) 1+x^2+x^3+x^5 c) 1+x^2+x^3+x^4 d) 1+x^4+x^5
4) A source X with entropy 2 bits/message is connected to the receiver Y through a noise free channel. The conditional entropy of the source is H(X/Y) and the joint entropy of the source and the receiver is H(X, Y). Then [ ]
a) H(X,Y)= 2 bits/message b) H(X/Y)= 2 bits/message
c) H(X, Y)= 0 bits/message d) H(X/Y)= 1 bit/message
5) Which of the following is a p(Y/X) matrix for a binary Erasure channel [ ]
a) [1-p, p; q, 1-q] b) c) d) None
6) The channel matrix of a noiseless channel [ ]
a)consists of a single nonzero number in each column.
b) consists of a single nonzero number in each row.
c) is an Identity Matrix. d) is a square matrix.
7) Information content of a message [ ]
a) increases with its certainty of occurrence. b) is independent of the certainty of occurrence.
c) increases with its uncertainty of occurrence. d) is the logarithm of its uncertainty of occurrence.
8) The channel capacity of a BSC with the transition probability ½ is [ ]
a) 0 bits b) 1 bit c) 2 bits d) infinity
9) For the data word 1110 in a (7, 4) non-systematic cyclic code with the generator polynomial 1+x^2+x^3, the code polynomial is [ ]
a) 1+x+x^3+x^5 b) 1+x^2+x^3+x^5 c) 1+x^2+x^3+x^4 d) 1+x+x^5

Code No: 07A5EC09 :2: Set No.3

10) A source transmitting 'm' number of messages is connected to a noise free channel. The capacity of the channel is [ ]
a) m bits/symbol b) m^2 bits/symbol c) log m bits/symbol d) 2m bits/symbol
II Fill in the blanks:
11) Assuming 26 characters are equally likely , the average of the information content of
English language in bits/character is________________
12) The distance between two vectors c1 and c2, defined as the number of components in which they differ, is called ____________________
13) The minimum distance of a linear block code is equal to____________________of any
non-zero code word in the code.
14) A linear block code with a minimum distance dmin can detect upto ___________________
15) For a Linear Block code Code rate =_________________
16) The information rate of a source is also referred to as entropy measured in
______________
17) H(X,Y)=______________ or __________________
18) Capacity of a noise free channel is _________________
19) The Shannon's limit is ______________
20) The channel capacity with infinite bandwidth is not infinite because ____________
-oOo-
Code No: 07A5EC09 Set No. 4
JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY HYDERABAD
III B.Tech. I Sem., II Mid-Term Examinations, November – 2010
DIGITAL COMMUNICATIONS
Objective Exam
Name: ______________________________ Hall Ticket No.
Answer All Questions. All Questions Carry Equal Marks. Time: 20 Min. Marks: 20.
I Choose the correct alternative:
1) For the data word 1110 in a (7, 4) non-systematic cyclic code with the generator polynomial 1+x+x^3, the code polynomial is [ ]
a) 1+x+x^3+x^5 b) 1+x^2+x^3+x^5 c) 1+x^2+x^3+x^4 d) 1+x^4+x^5
2) A source X with entropy 2 bits/message is connected to the receiver Y through a noise free channel. The conditional entropy of the source is H(X/Y) and the joint entropy of the source and the receiver is H(X, Y). Then [ ]
a) H(X,Y)= 2 bits/message b) H(X/Y)= 2 bits/message
c) H(X, Y)= 0 bits/message d) H(X/Y)= 1 bit/message
3) Which of the following is a p(Y/X) matrix for a binary Erasure channel [ ]
a) [1-p, p; q, 1-q] b) c) d) None
4) The channel matrix of a noiseless channel [ ]
a)consists of a single nonzero number in each column.
b) consists of a single nonzero number in each row.
c) is an Identity Matrix. d) is a square matrix.
5) Information content of a message [ ]
a) increases with its certainty of occurrence. b) is independent of the certainty of occurrence.
c) increases with its uncertainty of occurrence. d) is the logarithm of its uncertainty of occurrence.
6) The channel capacity of a BSC with the transition probability ½ is [ ]
a) 0 bits b) 1 bit c) 2 bits d) infinity
7) For the data word 1110 in a (7, 4) non-systematic cyclic code with the generator polynomial 1+x^2+x^3, the code polynomial is [ ]
a) 1+x+x^3+x^5 b) 1+x^2+x^3+x^5 c) 1+x^2+x^3+x^4 d) 1+x+x^5
8) A source transmitting 'm' number of messages is connected to a noise free channel. The capacity of the channel is [ ]
a) m bits/symbol b) m^2 bits/symbol c) log m bits/symbol d) 2m bits/symbol
9) Which of the following is a p(Y/X) matrix for a binary symmetric channel [ ]
a) b) c) d) None
Code No: 07A5EC09 :2: Set No.4
10) Exchange between channel bandwidth and (S/N) ratio can be adjusted based on [ ]
a)Shannon’s limit b)Shannon’s source coding
c) Shannon’s channel coding d) Shannon Hartley theorem
II Fill in the blanks:
11) The minimum distance of a linear block code is equal to____________________of any
non-zero code word in the code.
12) A linear block code with a minimum distance dmin can detect upto ___________________
13) For a Linear Block code Code rate =_________________
14) The information rate of a source is also referred to as entropy measured in
______________
15) H(X,Y)=______________ or __________________
16) Capacity of a noise free channel is _________________
17) The Shannon’s limit is ______________
18) The channel capacity with infinite bandwidth is not infinite because ____________
19) Assuming 26 characters are equally likely , the average of the information content of
English language in bits/character is________________
20) The distance between two vectors c1 and c2, defined as the number of components in which they differ, is called ____________________
-oOo-
20. Tutorial Questions

1. (a) Explain the basic principles of sampling, and distinguish between ideal sampling and practical sampling.

(b) A band pass signal has a centre frequency fo and extends from fo - 5 KHz to fo + 5 KHz. It is sampled at a rate of fs = 25 KHz. As fo varies from 5 KHz to 50 KHz, find the ranges of fo for which the sampling rate is adequate.

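A worked sketch for 1(b), assuming the standard uniform band pass sampling condition (written here in LaTeX notation):

$$\frac{2 f_H}{k} \le f_s \le \frac{2 f_L}{k - 1}, \qquad k = 1, 2, \ldots$$

With f_L = fo - 5 KHz, f_H = fo + 5 KHz and fs = 25 KHz this gives 12.5(k-1) + 5 <= fo <= 12.5k - 5, i.e. the ranges 5-7.5, 17.5-20, 30-32.5 and 42.5-45 KHz for k = 1, 2, 3, 4.
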
2. (a) Describe the synchronization procedure for PAM, PWM and PPM signals. Also discuss the spectra of PWM and PDM signals.

(b) State and prove the sampling theorem.

3. (a) Explain the method of generation and detection of PPM signals with neat sketches.

(b) Compare the characteristics of PWM and PPM signals.

(c) Which analog pulse modulation can be termed as analogous to linear CW modulation, and why?

4. List out the applications, merits and demerits of PAM, PPM and PWM signals.

5. What are the advantages and disadvantages of a digital communication system?

6. Draw and explain the elements of a digital communication system.

7. Explain the bandwidth and signal to noise ratio trade-off.

8. Explain Hartley-Shannon's law.

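A quick numerical illustration of Hartley-Shannon's law in question 8 (the numbers are illustrative only): for a 3 KHz channel at 30 dB signal to noise ratio,

$$C = B \log_2\left(1 + \frac{S}{N}\right) = 3000 \times \log_2(1 + 1000) \approx 29.9\ \text{kbps}.$$
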
9. Explain certain issues generally encountered in digital transmission.

10. (a) Sketch and explain the typical waveforms of PWM signals, for leading edge, trailing edge and symmetrical cases.

(b) Compare the analog pulse modulation schemes with CW modulation systems.

11. (a) Explain how the PPM signals can be generated and reconstructed through PWM signals.

(b) Compare the merits and demerits of PAM, PDM and PPM signals. List out their applications.

12. (a) Define the Sampling theorem and establish the same for band pass signals, using neat schematics.

(b) For the modulating signal m(t) = 2 Cos (100 t) + 18 Cos (2000 πt), determine the allowable sampling rates and sampling intervals.

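A sketch of the arithmetic for 12(b): the two tones have frequencies

$$f_1 = \frac{100}{2\pi} \approx 15.9\ \text{Hz}, \qquad f_2 = \frac{2000\pi}{2\pi} = 1000\ \text{Hz},$$

so the highest frequency is 1000 Hz, the sampling theorem requires fs >= 2000 Hz, and the sampling interval must satisfy Ts <= 0.5 ms.
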
13. Draw the block diagram of a PCM generator and explain each block.

14. Determine the transmission bandwidth in PCM.

15. What is the function of the predictor in a DPCM system?

16. What are the applications of PCM? Give in detail any two applications.

17. Explain the need for non-uniform quantization in a PCM system.

18. Derive the expression for the output signal to noise ratio of a PCM system.

19. Explain µ-law companding for speech signals.

20. Explain the working of a DPCM system with a neat block diagram.

21. Prove that the mean square value of the quantization error is inversely proportional to the square of the number of quantization levels.

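A sketch of the reasoning behind question 21, assuming a uniform quantizer with L levels over a signal range of 2 V_max, so that Δ = 2 V_max / L and the error e is uniformly distributed over (-Δ/2, Δ/2):

$$N_q = E[e^2] = \frac{1}{\Delta} \int_{-\Delta/2}^{\Delta/2} e^2\, de = \frac{\Delta^2}{12} = \frac{V_{max}^2}{3 L^2},$$

which is inversely proportional to the square of the number of levels.
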
22. Explain why quantization noise affects small amplitude signals in a PCM system more than large signals. With the aid of sketches, show how tapered quantizing levels could be used to counteract this effect.

23. Explain the working of a Delta modulation system with a neat block diagram.

24. Clearly bring out the difference between granular noise and slope overload error.

25. Consider a speech signal with a maximum frequency of 3.4 KHz and a maximum amplitude of 1 V. This speech signal is applied to a DM whose bit rate is set at 20 kbps. Discuss the choice of an appropriate step size for the modulator.

26. Derive the expression for the signal to noise ratio of a DM system.

27. Explain, with a neat block diagram, the Adaptive Delta Modulation transmitter and receiver.

28. Why is it necessary to use a greater sampling rate for DM than for PCM?

29. Explain the advantages of ADM over DM and how they are achieved.

30. A delta modulator system is designed to operate at five times the Nyquist rate for a signal with 3 KHz bandwidth. Determine the maximum amplitude of a 2 KHz input sinusoid for which the delta modulator does not have slope overload. The quantization step size is 250 mV. Derive the formula used.

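A sketch of the arithmetic for question 30, assuming the usual no-slope-overload condition δ fs >= 2πfA for a sinusoidal input A sin(2πft) with step size δ:

$$f_s = 5 \times 2 \times 3\ \text{KHz} = 30\ \text{KHz}, \qquad A_{max} = \frac{\delta f_s}{2 \pi f} = \frac{0.25 \times 30000}{2 \pi \times 2000} \approx 0.6\ \text{V}.$$
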
31. Compare Delta modulation and PCM techniques in terms of bandwidth and signal to noise ratio.

32. A signal m(t) is to be encoded using either Delta modulation or PCM technique. The signal to quantization noise ratio (So/No) ≥ 30 dB. Find the ratio of the bandwidth required for PCM to that for Delta modulation.

33. What are the advantages and disadvantages of digital modulation schemes?

34. Discuss base band transmission of M-ary data.

35. Explain how the residual effects of the channel are responsible for ISI.

36. What is the practical solution to obtain zero ISI? Explain.

37. What is the ideal solution to obtain zero ISI, and what is the disadvantage of this solution?

38. Explain the signal space representation of QPSK. Compare QPSK with all other digital signaling schemes.

39. Write down the modulation waveform for transmitting binary information over baseband channels for the following modulation schemes: ASK, PSK, FSK and DPSK.

40. Explain in detail the power spectra and bandwidth efficiency of M-ary signals.

41. Explain coherent and non-coherent detection of binary FSK waves.

42. Compare and discuss a binary scheme with an M-ary signaling scheme.

43. Derive an expression for the error probability of a coherent ASK scheme.

44. Derive an expression for the error probability of a non-coherent ASK scheme.

45. Find the transfer function of the optimum receiver and calculate the error probability.

46. Derive an expression for the probability of bit error of a binary coherent FSK receiver.

47. Derive an expression for the probability of bit error in a PSK system.

48. Show that the impulse response of a matched filter is a time reversed and delayed version of the input signal, and briefly explain the properties of the matched filter.

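A compact statement of the result asked for in question 48 (a sketch for the standard white-noise case): for a pulse s(t) observed over 0 <= t <= T in additive white noise, the filter that maximizes the output SNR at t = T has impulse response

$$h(t) = k\, s(T - t),$$

a scaled, time reversed and delayed copy of the input pulse, and the maximum output SNR is 2E/N0, where E is the pulse energy.
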
49. Binary data has to be transmitted over a telephone link that has a usable bandwidth of 3000 Hz and a maximum achievable SNR of 6 dB at its output.

i) Determine the maximum signaling rate and error probability if a coherent ASK scheme is used for transmitting binary data through this channel.

ii) If the data rate is maintained at 300 bits/sec, calculate the error probability.

50. Binary data is transmitted over an RF band pass channel with a usable bandwidth of 10 MHz at a rate of 4.8×10^6 bits/sec using an ASK signaling method. The carrier amplitude at the receiver antenna is 1 mV and the noise power spectral density at the receiver input is 10^-15 W/Hz.

i) Find the error probability of a coherent receiver.

ii) Find the error probability of a non-coherent receiver.

51. One of four possible messages Q1, Q2, Q3, Q4 having probabilities 1/8, 3/8, 3/8, and 1/8 respectively is transmitted. Calculate the average information per message.

52. An ideal low pass channel of bandwidth B Hz with additive Gaussian white noise is used for transmitting digital information.
a. Plot C/B versus S/N in dB for an ideal system using this channel.
b. A practical signaling scheme on this channel uses one of two waveforms of duration Tb sec to transmit binary information. The signaling scheme transmits data at the rate of 2B bits/sec; the probability of error is given by P(error/1 sent) = Pe.
c. Plot graphs of
i. C/B
ii. Dt/B, where Dt is the rate of information transmission over the channel.

53. Define and explain the following in terms of the joint pdf p(x, y) and the marginal pdfs p(x) and p(y).
a. Mutual Information
b. Average Mutual Information
c. Entropy

54. Let X be a discrete random variable with equally probable outcomes X1 = A and X2 = -A, and let the conditional pdfs p(y/xi), i = 1, 2 be Gaussian with mean xi and variance σ^2. Calculate the average mutual information I(X,Y).

55. Write short notes on the following:
a. Mutual Information
b. Self Information
c. Logarithmic measure for information

56. Write short notes on the following:
a. Entropy
b. Conditional entropy
c. Mutual Information
d. Information

57. A DMS has an alphabet of eight letters, Xi, i = 1, 2, ...., 8, with probabilities 0.36, 0.14, 0.13, 0.12, 0.1, 0.09, 0.04, 0.02.
i. Use the Huffman encoding procedure to determine a binary code for the source output.
ii. Determine the entropy of the source and find the efficiency of the code.

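A minimal sketch (in Python; the function and variable names are ours, not a prescribed implementation) of the Huffman procedure asked for in question 57:

import heapq, math

def huffman(probs):
    # Each heap entry: (probability, tie-break id, {symbol: codeword so far})
    heap = [(p, i, {i: ""}) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    uid = len(probs)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)   # two least probable subtrees
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}   # prepend a bit
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (p1 + p2, uid, merged))
        uid += 1
    return heap[0][2]

probs = [0.36, 0.14, 0.13, 0.12, 0.10, 0.09, 0.04, 0.02]
codes = huffman(probs)
L = sum(p * len(codes[i]) for i, p in enumerate(probs))   # average code length
H = -sum(p * math.log2(p) for p in probs)                 # source entropy
print(codes, L, H, H / L)                                 # efficiency = H/L

The printed ratio H/L is the code efficiency asked for in part ii.
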
58. A DMS has an alphabet of seven letters, Xi, i = 1, 2, ...., 7, with probabilities {0.05, 0.1, 0.1, 0.15, 0.05, 0.25, 0.3}.
i. Use the Shannon-Fano coding procedure to determine a binary code for the source output.
ii. Determine the entropy of the source and find the efficiency of the code.

59. An analog signal band limited to 10 KHz is quantized into 8 levels of a PCM system with probabilities 1/4, 1/5, 1/5, 1/10, 1/10, 1/20, 1/20 and 1/20 respectively. Find the entropy and the rate of information.

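A sketch of the computation for question 59, assuming the probabilities above and Nyquist-rate sampling:

$$H = \frac{1}{4}\log_2 4 + 2 \cdot \frac{1}{5}\log_2 5 + 2 \cdot \frac{1}{10}\log_2 10 + 3 \cdot \frac{1}{20}\log_2 20 \approx 2.74\ \text{bits/message},$$

and with r = 2 × 10 KHz = 20000 messages/sec the information rate is R = r H ≈ 54.8 kbits/sec.
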
60. Explain various methods for describing convolutional codes.

61. Explain block codes, in which each block of k message bits is encoded into a block of n > k bits, with an example.

62. Consider a (6,3) generator matrix

G = [1 0 0 0 1 1
     0 1 0 1 0 1
     0 0 1 1 1 0]

Find
a) All the code vectors of this code.
b) The parity check matrix for this code.
c) The minimum weight of the code.

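A minimal sketch (Python with NumPy; illustrative only) of how the code vectors and parity check matrix of question 62 can be obtained:

import numpy as np
from itertools import product

# Generator matrix of the (6,3) code in question 62
G = np.array([[1, 0, 0, 0, 1, 1],
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 1, 1, 0]])

# Enumerate all 2^3 messages; each codeword is c = m.G (mod 2)
for m in product([0, 1], repeat=3):
    c = np.mod(np.array(m) @ G, 2)
    print(m, c)

# Since G = [I | P], the parity check matrix is H = [P^T | I]
P = G[:, 3:]
H = np.hstack([P.T, np.eye(3, dtype=int)])
print(H)
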
63. What are the hardware components required to implement a cyclic code encoder?

64. Explain the syndrome calculation, error correction and error detection in (n, k) cyclic codes.

65. Briefly discuss the linear block code error control technique.

66. Show that if g(x) is a polynomial of degree (n-k) and is a factor of x^n+1, then g(x) generates an (n,k) cyclic code in which the code polynomial for a data vector D is generated by v(x) = D(x)g(x).

67. Briefly discuss the parity check bit error control technique.

68. Discuss interlaced codes with a suitable example.

69. Draw and explain a decoder diagram for a (7,4) majority logic code whose generator polynomial is g(x) = 1+x+x^3.

70. Discuss Hamming codes with suitable examples.

71. The generator polynomial of a (7,4) cyclic code is g(x) = 1+x+x^3. Find the 16 code words of this code in the following ways:

a) By forming the code polynomials using V(x) = D(x)g(x), where D(x) is the message polynomial.

b) By using the systematic form.

72. Design an encoder for the (7,4) binary cyclic code generated by g(x) = 1+x+x^3 and verify its operation using the message vector (0101).

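A minimal sketch (Python; names ours) of the non-systematic encoding V(x) = D(x)g(x) used in questions 71 and 72, with polynomial coefficients stored lowest degree first:

def poly_mul_gf2(a, b):
    # GF(2) polynomial product; a and b are coefficient lists, lowest degree first
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                out[i + j] ^= bj
    return out

g = [1, 1, 0, 1]            # g(x) = 1 + x + x^3
d = [0, 1, 0, 1]            # message 0101 read as D(x) = x + x^3
print(poly_mul_gf2(d, g))   # the 7 coefficients of the codeword polynomial

Looping d over all sixteen 4-bit messages enumerates the code of question 71(a).
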
73. A (7, 4) linear block code is generated according to the H matrix

H = [1 1 1 0 1 0 0
     1 1 0 1 0 1 0
     1 0 1 1 0 0 1]

The code word received is 1000011 for a transmitted codeword C. Find the corresponding data word transmitted.

74. Consider a (6,3) generator matrix

G = [1 0 0 0 1 1
     0 1 0 1 0 1
     0 0 1 1 1 0]

Find
a) All the code vectors of this code.
b) The parity check matrix for this code.
c) The error syndrome of the code.

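A sketch (Python with NumPy; ours) of the single-error syndrome decoding implied by question 73: the syndrome s = H.r (mod 2) equals the column of H at the error position:

import numpy as np

H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0, 1, 0],
              [1, 0, 1, 1, 0, 0, 1]])
r = np.array([1, 0, 0, 0, 0, 1, 1])   # received word 1000011

s = np.mod(H @ r, 2)                  # syndrome
if s.any():
    # single-error assumption: flip the bit whose H column equals s
    err = next(j for j in range(7) if np.array_equal(H[:, j], s))
    r[err] ^= 1
print(r[:4])                          # with H = [P^T | I], the data word is the first 4 bits
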
75. What are the advantages and disadvantages of convolutional codes?

76. Explain the Viterbi decoding method with an example.

77. (a) What is meant by random errors and burst errors? Explain a coding technique which can be used to correct both burst and random errors simultaneously.

(b) Discuss the various decoders for convolutional codes.

78. Draw the state diagram and tree diagram for the K=3, rate 1/3 code generated by

79. (a) Design an encoder for the (7,4) binary cyclic code generated by g(x) = 1+x+x^3 and verify its operation using the message vector (0101).

(b) What are the differences between block codes and convolutional codes?

80. Explain various methods for describing convolutional codes.

81. A convolutional encoder has two shift registers, two modulo-2 adders and an output multiplexer. The generator sequences of the encoder are as follows: g(1) = (1,0,1); g(2) = (1,1,1). Assuming a 5-bit message sequence is transmitted, and using the state diagram, find the message sequence when the received sequence is

(11,01,00,10,01,10,11,00,00,......)

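A minimal sketch (Python; ours) of the rate-1/2 convolutional encoder of question 81, with generators g(1) = (1,0,1) and g(2) = (1,1,1):

def conv_encode(bits, g1=(1, 0, 1), g2=(1, 1, 1)):
    state = [0, 0]                     # two-stage shift register, initially zero
    out = []
    for b in bits + [0, 0]:            # two flushing zeros empty the register
        window = [b] + state           # current input plus register contents
        v1 = sum(c * x for c, x in zip(g1, window)) % 2
        v2 = sum(c * x for c, x in zip(g2, window)) % 2
        out.append((v1, v2))
        state = [b, state[0]]          # shift
    return out

print(conv_encode([1, 0, 0, 1, 1]))    # coded sequence for message 10011
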
82. (a) What is meant by random errors and burst errors? Explain a coding technique which can be used to correct both burst and random errors simultaneously.

(b) Discuss the various decoders for convolutional codes.

83. Find the output codeword of the following convolutional encoder for the message sequence 10011 (as shown in the figure).

84. Construct the state diagram for the following encoder. Starting with the all-zero state, trace the path that corresponds to the message sequence 1011101. The given convolutional encoder has a single shift register with two stages (K=3), three modulo-2 adders and an output multiplexer. The generator sequences of the encoder are as follows: g(1) = (1, 0, 1); g(2) = (1, 1, 0); g(3) = (1, 1, 1).

85. Draw and explain the tree diagram of the convolutional encoder shown below, with rate = 1/3, L = 3.

86. For the convolutional encoder shown below, draw the trellis diagram for the message sequence 110. Let the first six received bits be 11 01 11; then, using Viterbi decoding, find the decoded sequence.

87. Explain the Direct Sequence spread spectrum technique with a neat diagram.

88. Explain the Frequency Hopping spread spectrum technique in detail.

89. Explain the properties of PN sequences.

90. How is a pseudo noise sequence generated? Explain it with an example.

91. How does DS-SS work? Explain it with a block diagram.
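
A minimal baseband DS-SS sketch (idealised assumptions: perfect chip synchronisation and a noiseless channel; the 7-chip pattern is arbitrary):

    import numpy as np

    # Each data bit (as +/-1) is multiplied by a +/-1 PN chip sequence; the
    # receiver multiplies by the same (synchronised) sequence and integrates
    # over one bit period to recover the data.
    pn = np.array([1, -1, 1, 1, -1, -1, 1])      # illustrative chip pattern
    data = np.array([1, -1, 1])                  # data bits in +/-1 form

    tx = np.concatenate([b * pn for b in data])  # spreading: bit x chips
    rx = tx                                      # ideal channel for brevity
    despread = rx.reshape(len(data), len(pn)) * pn
    decisions = np.sign(despread.sum(axis=1))    # integrate-and-dump per bit
    print(decisions)                             # -> [ 1. -1.  1.]

Multiplying again by the same PN sequence collapses the wideband signal back to the data bandwidth, which is the essence of processing gain.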

92. Explain the operation of the slow and fast frequency hopping techniques.

93. Explain about source coding of Speech for wireless communication

94. Explain the types of Multiple Access techniques.

95. Explain TDMA system with frame structure, frame efficiency and features.

96. Explain CDMA system with its features and list out various problems in CDMA systems.

21. Known gaps


Subject: DIGITAL COMMUNICATION

Known gaps:

1. The DC syllabus as prescribed in the curriculum does not reflect current real-time applications.
2. The subject does not cover the coding techniques presently in use.

Action to be taken: the following additional topics are taken up to fill the known gaps:

1. Real-time applications

2. Drawbacks of each coding technique
22. Discussion Topics:

Data transmission, digital transmission, or digital communications is the physical transfer


of data (a digital bit stream or a digitized analog signal[1]) over a point-to-point or point-to-
multipoint communication channel. Examples of such channels are copper wires, optical
fibres, wireless communication channels, storage media and computer buses. The data are
represented as an electromagnetic signal, such as an electrical voltage, radiowave,
microwave, or infrared signal.

While analog transmission is the transfer of a continuously varying analog signal over an
analog channel, digital communications is the transfer of discrete messages over a digital or
an analog channel. The messages are either represented by a sequence of pulses by means of
a line code (baseband transmission), or by a limited set of continuously varying wave forms
(passband transmission), using a digital modulation method. The passband modulation and
corresponding demodulation (also known as detection) is carried out by modem equipment.
According to the most common definition of digital signal, both baseband and passband
signals representing bit-streams are considered as digital transmission, while an alternative
definition only considers the baseband signal as digital, and passband transmission of digital
data as a form of digital-to-analog conversion.

Data transmitted may be digital messages originating from a data source, for example a
computer or a keyboard. It may also be an analog signal such as a phone call or a video
signal, digitized into a bit-stream for example using pulse-code modulation (PCM) or more
advanced source coding (analog-to-digital conversion and data compression) schemes. This
source coding and decoding is carried out by codec equipment.

Digital transmission or data transmission traditionally belongs to telecommunications


and electrical engineering. Basic principles of data transmission may also be covered within
the computer science/computer engineering topic of data communications, which also
includes computer networking or computer communication applications and networking
protocols, for example routing, switching and inter-process communication. Although the
Transmission control protocol (TCP) involves the term "transmission", TCP and other
transport layer protocols are typically not discussed in a textbook or course about data
transmission, but in computer networking.

The term tele transmission involves the analog as well as digital communication. In most
textbooks, the term analog transmission only refers to the transmission of an analog message
signal (without digitization) by means of an analog signal, either as a non-modulated
baseband signal, or as a passband signal using an analog modulation method such as AM or
FM. It may also include analog-over-analog pulse-modulated baseband signals such as
pulse-width modulation. In a few books within the computer networking tradition, "analog
transmission" also refers to passband transmission of bit-streams using digital modulation
methods such as FSK, PSK and ASK. Note that these methods are covered in textbooks
named digital transmission or data transmission, for example.[1]

The theoretical aspects of data transmission are covered by information theory and coding
theory.
Protocol layers and sub-topics


Courses and textbooks in the field of data transmission typically deal with the following OSI
model protocol layers and topics:

 Layer 1, the physical layer:


o Channel coding including
 Digital modulation schemes
 Line coding schemes
 Forward error correction (FEC) codes
o Bit synchronization
o Multiplexing
o Equalization
o Channel models
 Layer 2, the data link layer:
o Channel access schemes, media access control (MAC)
o Packet mode communication and Frame synchronization
o Error detection and automatic repeat request (ARQ)
o Flow control
 Layer 6, the presentation layer:
o Source coding (digitization and data compression), and information theory.
o Cryptography (may occur at any layer)
Applications and history

Data (mainly but not exclusively informational) has been sent via non-electronic (e.g. optical,
acoustic, mechanical) means since the advent of communication. Analog signal data has been
sent electronically since the advent of the telephone. However, the first data electromagnetic
transmission applications in modern time were telegraphy (1809) and teletypewriters (1906),
which are both digital signals. The fundamental theoretical work in data transmission and
information theory by Harry Nyquist, Ralph Hartley, Claude Shannon and others during the
early 20th century, was done with these applications in mind.

Data transmission is utilized in computers in computer buses and for communication with
peripheral equipment via parallel ports and serial ports such as RS-232 (1969), Firewire
(1995) and USB (1996). The principles of data transmission are also utilized in storage media
for Error detection and correction since 1951.

Data transmission is utilized in computer networking equipment such as modems (1940),


local area networks (LAN) adapters (1964), repeaters, hubs, microwave links, wireless
network access points (1997), etc.

In telephone networks, digital communication is utilized for transferring many phone calls
over the same copper cable or fiber cable by means of Pulse code modulation (PCM), i.e.
sampling and digitization, in combination with Time division multiplexing (TDM) (1962).
Telephone exchanges have become digital and software controlled, facilitating many value
added services. For example the first AXE telephone exchange was presented in 1976. Since
the late 1980s, digital communication to the end user has been possible using Integrated
Services Digital Network (ISDN) services. Since the end of the 1990s, broadband access
techniques such as ADSL, Cable modems, fiber-to-the-building (FTTB) and fiber-to-the-
home (FTTH) have become widespread to small offices and homes. The current tendency is
to replace traditional telecommunication services by packet mode communication such as IP
telephony and IPTV.

Transmitting analog signals digitally allows for greater signal processing capability. The
ability to process a communications signal means that errors caused by random processes can
be detected and corrected. Digital signals can also be sampled instead of continuously
monitored. The multiplexing of multiple digital signals is much simpler than the multiplexing of
analog signals.

Because of all these advantages, and because recent advances in wideband communication
channels and solid-state electronics have allowed scientists to fully realize these advantages,
digital communications has grown quickly. Digital communications is quickly edging out
analog communication because of the vast demand to transmit computer data and the ability
of digital communications to do so.

The digital revolution has also resulted in many digital telecommunication applications where
the principles of data transmission are applied. Examples are second-generation (1991) and
later cellular telephony, video conferencing, digital TV (1998), digital radio (1999),
telemetry, etc.
Baseband or passband transmission

The physically transmitted signal may be one of the following:

1. A baseband signal ("digital-over-digital" transmission): A sequence of electrical pulses or light pulses produced by means of a line coding scheme such as Manchester coding (a minimal coding sketch follows after this list). This is typically used in serial cables, wired local area networks such as Ethernet, and in optical fiber communication. It results in a pulse amplitude modulated (PAM) signal, also known as a pulse train.
2. A passband signal ("digital-over-analog" transmission): A modulated sine wave signal
representing a digital bit-stream. Note that this is in some textbooks considered as analog
transmission, but in most books as digital transmission. The signal is produced by means of a
digital modulation method such as PSK, QAM or FSK. The modulation and demodulation is
carried out by modem equipment. This is used in wireless communication, and over
telephone network local-loop and cable-TV networks.
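
A minimal Manchester-coding sketch of the line coding mentioned in item 1 above (assuming the IEEE 802.3 convention: 0 maps to high-to-low, 1 maps to low-to-high, so every bit period carries a mid-bit transition the receiver can recover the clock from):

    def manchester(bits):
        half = {0: (1, -1), 1: (-1, 1)}   # (first half, second half) levels
        return [level for b in bits for level in half[b]]

    print(manchester([1, 0, 1, 1]))   # -> [-1, 1, 1, -1, -1, 1, -1, 1]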

Serial and parallel transmission

In telecommunications, serial transmission is the sequential transmission of signal elements


of a group representing a character or other entity of data. Digital serial transmissions are bits sent over a single wire, frequency or optical path sequentially. Because it requires less signal processing and presents fewer chances for error than parallel transmission, the transfer rate of each individual path may be faster. Serial transmission can also be used over longer distances, since a check digit or parity bit can be sent along it easily.

In telecommunications, parallel transmission is the simultaneous transmission of the signal


elements of a character or other entity of data. In digital communications, parallel
transmission is the simultaneous transmission of related signal elements over two or more
separate paths. Multiple electrical wires are used which can transmit multiple bits
simultaneously, which allows for higher data transfer rates than can be achieved with serial
transmission. This method is used internally within the computer, for example the internal
buses, and sometimes externally for such things as printers. The major issue with this is "skewing": because the wires in parallel data transmission have slightly different properties (not intentionally), some bits may arrive before others, which may corrupt the message. A parity bit can help to reduce this. Electrical-wire parallel data transmission is therefore less reliable for long distances, because corrupt transmissions are far more likely.

Types of communication channels


Main article: communication channel

 Data transmission circuit


 Simplex
 Half-duplex
 Full-duplex
 Point-to-point
 Multi-drop:
o Bus network
o Ring network
o Star network
o Mesh network
o Wireless network

Asynchronous and synchronous data transmission


Main article: comparison of synchronous and asynchronous signalling


Asynchronous transmission uses start and stop bits to signify the beginning and end of a transmission. This means that an 8-bit ASCII character would actually be transmitted using 10 bits. For example, "0100 0001" would become "1 0100 0001 0". The extra one (or zero, depending on the parity bit) at the start and end of the transmission tells the receiver first that a character is coming and secondly that the character has ended. This method of transmission is used when data are sent intermittently, as opposed to in a solid stream. In the previous example the start and stop bits are in bold. The start and stop bits must be of opposite polarity. This allows the receiver to recognize when the second packet of information is being sent.
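
A minimal framing sketch of the example above (assumptions follow the text: start bit '1', stop bit '0', no parity):

    def frame(data_bits):
        return "1" + data_bits + "0"      # start bit + 8 data bits + stop bit

    print(frame("01000001"))              # -> '1010000010' (10 bits on the line)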

Synchronous transmission uses no start and stop bits; instead, it synchronizes the transmission speeds at both the receiving and sending ends using clock signals built into each component. A continual stream of data is then sent between the two nodes. Because there are no start and stop bits, the data transfer rate is quicker, although more errors will occur: the clocks will eventually get out of sync, and the receiving device would then interpret bit positions differently from what was agreed in the protocol for sending/receiving data, so some bytes could become corrupted (by losing bits). Ways to get around this problem include re-synchronization of the clocks and the use of check digits to ensure that each byte is correctly interpreted and received.
23. References, Journals, websites and E-links:

TEXT BOOKS

1. Principles of Communication Systems - Herbert Taub, Donald L. Schilling, Goutam Saha, 3rd edition, McGraw-Hill, 2008.
2. Digital and Analog Communication Systems - Sam Shanmugam, John Wiley, 2005.
3. Digital Communications - John G. Proakis, Masoud Salehi, 5th edition, McGraw-Hill, 2008.
4. Digital Communications - Simon Haykin, John Wiley, 2005.

Websites:-

1. http://en.wikipedia.org/wiki/digital_communications

2. http://www.tmworld.com/archive/2011/20110801.php

3. www.pemuk.com

4. www.site.uottawa.com

5. www.tews.elektronik.com

Journals:-

1. Communications Journal

2. Omega online technical reference

3. Review of Scientific techniques

REFERENCES:
1. Digital Communications - John G. Proakis, Masoud Salehi, 5th edition, McGraw-Hill, 2008.
2. Digital Communications - Simon Haykin, John Wiley, 2005.
3. Digital Communications - Ian A. Glover, Peter M. Grant, 2nd edition, Pearson Education, 2008.
4. Communication Systems - B. P. Lathi, BS Publications, 2006.

24. Quality Control Sheets


25. STUDENTS LIST

B.TECH III YEAR II SEMESTER:

SECTION-D:
S.No Roll number Student Name

1 13R11A04F5 A RAMA THEJA

2 13R11A04F7 ANUGU PRASHANTH

3 13R11A04F8 ARACHANA DASH

4 13R11A04F9 CHAVALI NAGARJUNA

5 13R11A04G0 CHIGURUPATI MEENAKSHI

6 13R11A04G1 D SRI RAMYA

7 13R11A04G2 DEEKONDA RAJSHREE

8 13R11A04G3 G MANIDEEP

9 13R11A04G4 GATADI VADDE PREM SAGAR

10 13R11A04G5 GOGU JEEVITHA SPANDANA REDDY

11 13R11A04G6 GOLLURI SINDHUJA

12 13R11A04G7 GOPANABOENA SAI KRANTHI KUMAR

13 13R11A04G8 GUNTIMADUGU SAI RESHMA

14 13R11A04G9 K DARSHAN

15 13R11A04H0 K. ANIRUDH

16 13R11A04H1 KOMIRISHETTY AKHILA

17 13R11A04H2 KOPPU MOUNIKA

18 13R11A04H3 KANNE RAVI KUMAR

19 13R11A04H4 KARRA VINEELA

20 13R11A04H5 KANUKALA SIDDHARTH

21 13R11A04H6 KATHI SHIVARAM REDDY


22 13R11A04H7 KOMANDLA SRIKANTH REDDY

23 13R11A04H8 KONDAM PADMA

24 13R11A04H9 KRISHNA ASHOK MORE

25 13R11A04J0 LINGAMPALLY RAJASRI

26 13R11A04J1 M ROHITH SAI SHASHANK

27 13R11A04J2 M TANVIKA

28 13R11A04J3 MALIHA AZAM

29 13R11A04J4 MANSHA NEYAZ

30 13R11A04J5 MATTA SRI SATYA SAI GAYATHRI

31 13R11A04J6 MD RAHMAN SHAREEF

32 13R11A04J7 MEESALA SAI SHRUTHI

33 13R11A04J8 P G CHANDANA

34 13R11A04J9 PALLE AKILA

35 13R11A04K0 PERNAPATI YAMINI

36 13R11A04K1 POLISETTY VEDA SRI

37 13R11A04K2 REGU PRAVALIKA

38 13R11A04K3 R RITHWIK REDDY

39 13R11A04K4 RAMYA S

40 13R11A04K5 SURI BHASKER SRI HARSHA

41 13R11A04K6 TANGUTOORI SIRI CHANDANA

42 13R11A04K7 THIPPARAPU AKHIL

43 13R11A04K8 UDDALA DEVAMMA

44 13R11A04K9 VALASA SHIVANI

45 13R11A04L0 VEPURI NAGA TARUN SAI

46 13R11A04L1 VISWAJITH GOVINDA RAJAN

47 13R11A04L2 YENDURI YUGANDHAR


48 13R11A04L3 M SAI KUMAR

49 14R18A0401 MODUMUDI HARSHITHA

26. Group-Wise students list for discussion topics


Section -D

Group 1

1 13R11A04F5 A RAMA THEJA


2 13R11A04F7 ANUGU PRASHANTH
3 13R11A04F8 ARACHANA DASH
4 13R11A04F9 CHAVALI NAGARJUNA
5 13R11A04G0 CHIGURUPATI MEENAKSHI

Group 2

6 13R11A04G1 D SRI RAMYA


7 13R11A04G2 DEEKONDA RAJSHREE
8 13R11A04G3 G MANIDEEP
9 13R11A04G4 GATADI VADDE PREM SAGAR
10 13R11A04G5 GOGU JEEVITHA SPANDANA REDDY

Group 3:

11 13R11A04G6 GOLLURI SINDHUJA

12 13R11A04G7 GOPANABOENA SAI KRANTHI KUMAR

13 13R11A04G8 GUNTIMADUGU SAI RESHMA

14 13R11A04G9 K DARSHAN

15 13R11A04H0 K. ANIRUDH

Group 4:
16 13R11A04H1 KOMIRISHETTY AKHILA

17 13R11A04H2 KOPPU MOUNIKA

18 13R11A04H3 KANNE RAVI KUMAR

19 13R11A04H4 KARRA VINEELA

20 13R11A04H5 KANUKALA SIDDHARTH

Group 5:

21 13R11A04H6 KATHI SHIVARAM REDDY

22 13R11A04H7 KOMANDLA SRIKANTH REDDY

23 13R11A04H8 KONDAM PADMA

24 13R11A04H9 KRISHNA ASHOK MORE

25 13R11A04J0 LINGAMPALLY RAJASRI

Group 6:

26 13R11A04J1 M ROHITH SAI SHASHANK

27 13R11A04J2 M TANVIKA

28 13R11A04J3 MALIHA AZAM

29 13R11A04J4 MANSHA NEYAZ

30 13R11A04J5 MATTA SRI SATYA SAI GAYATHRI

Group 7:

31 13R11A04J6 MD RAHMAN SHAREEF

32 13R11A04J7 MEESALA SAI SHRUTHI

33 13R11A04J8 P G CHANDANA

34 13R11A04J9 PALLE AKILA

35 13R11A04K0 PERNAPATI YAMINI


Group 8:

36 13R11A04K1 POLISETTY VEDA SRI

37 13R11A04K2 REGU PRAVALIKA

38 13R11A04K3 R RITHWIK REDDY

39 13R11A04K4 RAMYA S

40 13R11A04K5 SURI BHASKER SRI HARSHA

Group 9:

41 13R11A04K6 TANGUTOORI SIRI CHANDANA

42 13R11A04K7 THIPPARAPU AKHIL

43 13R11A04K8 UDDALA DEVAMMA

44 13R11A04K9 VALASA SHIVANI

45 13R11A04L0 VEPURI NAGA TARUN SAI

Group 10:

46 13R11A04L1 VISWAJITH GOVINDA RAJAN

47 13R11A04L2 YENDURI YUGANDHAR

48 13R11A04L3 M SAI KUMAR

49 14R18A0401 MODUMUDI HARSHITHA


10. Tutorial class sheets

UNIT-1

SAMPLING:

Sampling Theorem for strictly band-limited signals:

1. A signal that is band-limited to \( -W \le f \le W \) can be completely described by its samples \( g\left(\tfrac{n}{2W}\right) \).
2. The signal can be completely recovered from its samples \( g\left(\tfrac{n}{2W}\right) \).

Nyquist rate = \( 2W \); Nyquist interval = \( \tfrac{1}{2W} \).

When the signal is not band-limited (under-sampling), aliasing occurs. To avoid aliasing, we may limit the signal bandwidth or use a higher sampling rate.

The ideally sampled signal is

\[ g_\delta(t) = \sum_{n=-\infty}^{\infty} g(nT_s)\,\delta(t - nT_s) \tag{3.1} \]

From Table A6.3 we have

\[ \sum_{n=-\infty}^{\infty} \delta(t - nT_s) \rightleftharpoons \frac{1}{T_s} \sum_{m=-\infty}^{\infty} \delta\!\left(f - \frac{m}{T_s}\right) \]

so that

\[ G_\delta(f) = f_s \sum_{m=-\infty}^{\infty} G(f - m f_s) \tag{3.2} \]

Alternatively, we may apply the Fourier transform to (3.1) to obtain

\[ G_\delta(f) = \sum_{n=-\infty}^{\infty} g(nT_s)\exp(-j 2\pi n f T_s) \tag{3.3} \]

or

\[ G_\delta(f) = f_s G(f) + f_s \sum_{\substack{m=-\infty \\ m \neq 0}}^{\infty} G(f - m f_s) \tag{3.5} \]
Unit 2

Pulse-Amplitude Modulation :

Let s(t) denote the sequence of flat-top pulses:

\[ s(t) = \sum_{n=-\infty}^{\infty} m(nT_s)\, h(t - nT_s) \tag{3.10} \]

\[ h(t) = \begin{cases} 1, & 0 < t < T \\ \tfrac{1}{2}, & t = 0,\ t = T \\ 0, & \text{otherwise} \end{cases} \tag{3.11} \]

The instantaneously sampled version of m(t) is

\[ m_\delta(t) = \sum_{n=-\infty}^{\infty} m(nT_s)\, \delta(t - nT_s) \tag{3.12} \]

Convolving with h(t),

\[ m_\delta(t) \star h(t) = \int_{-\infty}^{\infty} m_\delta(\tau)\, h(t-\tau)\, d\tau = \sum_{n=-\infty}^{\infty} m(nT_s) \int_{-\infty}^{\infty} \delta(\tau - nT_s)\, h(t - \tau)\, d\tau \tag{3.13} \]

Using the sifting property, we have

\[ m_\delta(t) \star h(t) = \sum_{n=-\infty}^{\infty} m(nT_s)\, h(t - nT_s) \tag{3.14} \]

The PAM signal s(t) is therefore

\[ s(t) = m_\delta(t) \star h(t) \tag{3.15} \]
\[ \Rightarrow\ S(f) = M_\delta(f)\, H(f) \tag{3.16} \]

Pulse Amplitude Modulation - Natural and Flat-Top Sampling: recalling (3.2),

\[ M_\delta(f) = f_s \sum_{k=-\infty}^{\infty} M(f - k f_s) \tag{3.17} \]

\[ S(f) = f_s \sum_{k=-\infty}^{\infty} M(f - k f_s)\, H(f) \tag{3.18} \]
 The most common technique for sampling voice in PCM systems is to use a sample-and-hold circuit.

 The instantaneous amplitude of the analog (voice) signal is held as a constant charge on a capacitor for the duration of the sampling period Ts.

 This technique is useful for holding the sample constant while other processing is taking place, but it alters the frequency spectrum and introduces an error, called aperture error, resulting in an inability to recover exactly the original analog signal.

 The amount of error depends on how much the analog signal changes during the holding time, called the aperture time.

 To estimate the maximum voltage error possible, determine the maximum slope of the analog signal and multiply it by the aperture time ΔT (a minimal worked estimate follows below).
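
A minimal worked estimate of the aperture error described above (the voice-band figures are illustrative assumptions):

    import numpy as np

    # For a full-scale sinusoid m(t) = A sin(2*pi*f*t), the maximum slope is
    # 2*pi*f*A, so the worst-case sample-and-hold error over an aperture time
    # dT is about 2*pi*f*A*dT.
    A, f, dT = 1.0, 3400.0, 1e-6          # illustrative voice-band numbers
    max_error = 2*np.pi*f*A*dT
    print(f"max aperture error ~ {max_error:.4f} V")   # ~0.0214 V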
Unit 3

Differential Pulse-Code Modulation

(DPCM):

Usually PCM has a sampling rate higher than the Nyquist rate, so the encoded signal contains redundant information. DPCM can efficiently remove this redundancy.

\[ e[n] = m[n] - \hat{m}[n] \tag{3.74} \]

where \( \hat{m}[n] \) is a prediction value.

The quantizer output is

\[ e_q[n] = e[n] + q[n] \tag{3.75} \]

where \( q[n] \) is the quantization error. The prediction filter input is

\[ m_q[n] = \hat{m}[n] + e_q[n] = \hat{m}[n] + e[n] + q[n] \tag{3.77} \]
\[ \Rightarrow\ m_q[n] = m[n] + q[n] \tag{3.78} \]
Unit 4

CORRELATIVE LEVEL CODING:

 Correlative-level coding (partial response signaling)


– adding ISI to the transmitted signal in a controlled manner
 Since ISI introduced into the transmitted signal is known, its effect can be interpreted at
the receiver
 A practical method of achieving the theoretical maximum signaling rate of 2W symbols per second in a bandwidth of W hertz
 Using realizable and perturbation-tolerant filters

Duo-binary Signaling :

Duo: doubling of the transmission capacity of a straight binary system (a minimal precoded-duobinary sketch follows below).
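
A minimal duobinary sketch (assuming the standard precoded form b[n] = d[n] XOR b[n-1]; the ideal channel and names are illustrative):

    def duobinary(d):
        b_prev, a_prev = 0, -1           # reference bit b[-1] = 0 -> a[-1] = -1
        c = []
        for bit in d:
            b = bit ^ b_prev             # precoder removes error propagation
            a = 2*b - 1                  # 0/1 -> -1/+1
            c.append(a + a_prev)         # correlative (partial-response) level
            b_prev, a_prev = b, a
        return c

    tx = duobinary([0, 1, 1, 0, 1])
    rx = [0 if abs(level) == 2 else 1 for level in tx]
    print(tx, rx)                        # rx reproduces the data bits

The precoder is the standard fix for duobinary's error propagation: each decision depends only on the current three-level sample |c[n]|, not on past decisions.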

Reference Text Books and websites

TEXT BOOKS

5. Principles of Communication Systems - Herbert Taub, Donald L. Schilling, Goutam Saha, 3rd edition, McGraw-Hill, 2008.
6. Digital and Analog Communication Systems - Sam Shanmugam, John Wiley, 2005.

REFERENCES:
5. Digital Communications - John G. Proakis, Masoud Salehi, 5th edition, McGraw-Hill, 2008.
6. Digital Communications - Simon Haykin, John Wiley, 2005.
MISSING TOPICS
UNIT 1

Hartley's law
During that same year, Hartley formulated a way to quantify information and its line rate R (also known as the data signalling rate, or gross bit rate inclusive of error-correcting code) across a communications channel.[1] This method, later known as Hartley's law, became an important precursor for Shannon's more sophisticated notion of channel capacity.
Hartley argued that the maximum number of distinct pulses that can be transmitted and
received reliably over a communications channel is limited by the dynamic range of the
signal amplitude and the precision with which the receiver can distinguish amplitude levels.
Specifically, if the amplitude of the transmitted signal is restricted to the range of [-A ... +A] volts, and the precision of the receiver is ±ΔV volts, then the maximum number of distinct pulses M is given by

\[ M = 1 + \frac{A}{\Delta V} \]

By taking information per pulse in bit/pulse to be the base-2-logarithm of the number of


distinct messages M that could be sent, Hartley[2] constructed a measure of the line
rate R as:

\[ R = f_p \log_2 M \]

where fp is the pulse rate, also known as the symbol rate, in symbols/second or baud.
Hartley then combined the above quantification with Nyquist's observation that the number
of independent pulses that could be put through a channel of bandwidth B hertz was
2B pulses per second, to arrive at his quantitative measure for achievable line rate.
Hartley's law is sometimes quoted as just a proportionality between the analog bandwidth, B,
in Hertz and what today is called the digital bandwidth, R, in bit/s.[3] Other times it is quoted
in this more quantitative form, as an achievable line rate of R bits per second:[4]

\[ R \le 2B \log_2 M \]

Hartley did not work out exactly how the number M should depend on the noise statistics of
the channel, or how the communication could be made reliable even when individual symbol
pulses could not be reliably distinguished to M levels; with Gaussian noise statistics, system
designers had to choose a very conservative value of M to achieve a low error rate.
The concept of an error-free capacity awaited Claude Shannon, who built on Hartley's
observations about a logarithmic measure of information and Nyquist's observations about
the effect of bandwidth limitations.
Hartley's rate result can be viewed as the capacity of an errorless M-ary channel of
2B symbols per second. Some authors refer to it as a capacity. But such an errorless channel
is an idealization, and the result is necessarily less than the Shannon capacity of the noisy
channel of bandwidth B, which is the Hartley–Shannon result that followed later.
Noisy channel coding theorem and capacity
Main article: noisy-channel coding theorem
Claude Shannon's development of information theory during World War II provided the next
big step in understanding how much information could be reliably communicated through
noisy channels. Building on Hartley's foundation, Shannon's noisy channel coding
theorem (1948) describes the maximum possible efficiency of error-correcting
methods versus levels of noise interference and data corruption.[5][6] The proof of the theorem
shows that a randomly constructed error correcting code is essentially as good as the best
possible code; the theorem is proved through the statistics of such random codes.
Shannon's theorem shows how to compute a channel capacity from a statistical description of
a channel, and establishes that given a noisy channel with capacity C and information
transmitted at a line rate R, then if

\[ R < C \]

there exists a coding technique which allows the probability of error at the receiver to be
made arbitrarily small. This means that theoretically, it is possible to transmit information
nearly without error up to nearly a limit of C bits per second.
The converse is also important. If

\[ R > C \]

the probability of error at the receiver increases without bound as the rate is increased. So no
useful information can be transmitted beyond the channel capacity. The theorem does not
address the rare situation in which rate and capacity are equal.
Shannon–Hartley theorem
The Shannon–Hartley theorem establishes what that channel capacity is for a finite-
bandwidth continuous-time channel subject to Gaussian noise. It connects Hartley's result
with Shannon's channel capacity theorem in a form that is equivalent to specifying the M in
Hartley's line rate formula in terms of a signal-to-noise ratio, but achieving reliability through
error-correction coding rather than through reliably distinguishable pulse levels.
If there were such a thing as an infinite-bandwidth, noise-free analog channel, one could
transmit unlimited amounts of error-free data over it per unit of time. Real channels,
however, are subject to limitations imposed by both finite bandwidth and nonzero noise.
So how do bandwidth and noise affect the rate at which information can be transmitted over
an analog channel?
Surprisingly, bandwidth limitations alone do not impose a cap on maximum information rate.
This is because it is still possible for the signal to take on an indefinitely large number of
different voltage levels on each symbol pulse, with each slightly different level being
assigned a different meaning or bit sequence. If we combine both noise and bandwidth
limitations, however, we do find there is a limit to the amount of information that can be
transferred by a signal of a bounded power, even when clever multi-level encoding
techniques are used.
In the channel considered by the Shannon-Hartley theorem, noise and signal are combined by
addition. That is, the receiver measures a signal that is equal to the sum of the signal
encoding the desired information and a continuous random variable that represents the noise.
This addition creates uncertainty as to the original signal's value. If the receiver has some
information about the random process that generates the noise, one can in principle recover
the information in the original signal by considering all possible states of the noise process. In
the case of the Shannon-Hartley theorem, the noise is assumed to be generated by a Gaussian
process with a known variance. Since the variance of a Gaussian process is equivalent to its
power, it is conventional to call this variance the noise power.
Such a channel is called the Additive White Gaussian Noise channel, because Gaussian noise
is added to the signal; "white" means equal amounts of noise at all frequencies within the
channel bandwidth. Such noise can arise both from random sources of energy and also from
coding and measurement error at the sender and receiver respectively. Since sums of
independent Gaussian random variables are themselves Gaussian random variables, this
conveniently simplifies analysis, if one assumes that such error sources are also Gaussian and
independent.

Comparison of Shannon's capacity to Hartley's law


Comparing the channel capacity to the information rate from Hartley's law, we can find the
effective number of distinguishable levels M:[7]

\[ 2B \log_2 M = B \log_2\!\left(1 + \frac{S}{N}\right) \quad\Rightarrow\quad M = \sqrt{1 + \frac{S}{N}} \]

The square root effectively converts the power ratio back to a voltage ratio, so the number of
levels is approximately proportional to the ratio of rms signal amplitude to noise standard
deviation.
This similarity in form between Shannon's capacity and Hartley's law should not be
interpreted to mean that M pulse levels can be literally sent without any confusion; more
levels are needed, to allow for redundant coding and error correction, but the net data rate that
can be approached with coding is equivalent to using that M in Hartley's law.
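
A minimal numeric check of the comparison above (the 3 kHz / 20 dB figures are illustrative assumptions):

    import numpy as np

    B, snr_db = 3000.0, 20.0
    snr = 10**(snr_db/10)
    C = B * np.log2(1 + snr)          # Shannon capacity
    M = np.sqrt(1 + snr)              # Hartley-equivalent number of levels
    print(f"C ~ {C:.0f} bit/s, equivalent M ~ {M:.1f} levels")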

Frequency-dependent (colored noise) case


In the simple version above, the signal and noise are fully uncorrelated, in which
case S + N is the total power of the received signal and noise together. A generalization of the
above equation for the case where the additive noise is not white (or that the S/N is not
constant with frequency over the bandwidth) is obtained by treating the channel as many
narrow, independent Gaussian channels in parallel:

\[ C = \int_{0}^{B} \log_2\!\left(1 + \frac{S(f)}{N(f)}\right) df \]

where

C is the channel capacity in bits per second;


B is the bandwidth of the channel in Hz;
S(f) is the signal power spectrum
N(f) is the noise power spectrum
f is frequency in Hz.
Note: the theorem only applies to Gaussian stationary process noise. This formula's way of
introducing frequency-dependent noise cannot describe all continuous-time noise processes.
For example, consider a noise process consisting of adding a random wave whose amplitude
is 1 or -1 at any point in time, and a channel that adds such a wave to the source signal. Such
a wave's frequency components are highly dependent. Though such a noise may have a high
power, it is fairly easy to transmit a continuous signal with much less power than one would
need if the underlying noise was a sum of independent noises in each frequency band.
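
A minimal numeric sketch of the parallel-sub-channel formula above (the spectra used here are arbitrary illustrations, not data from any real channel):

    import numpy as np

    # Treat the band as many narrow sub-channels and sum df * log2(1 + S/N).
    B, n_sub = 4000.0, 1000
    f = np.linspace(0, B, n_sub, endpoint=False) + B/n_sub/2
    S = np.full_like(f, 1e-6)                 # flat signal spectrum (W/Hz)
    N = 1e-8 * (1 + f/B)                      # noise density rising with f
    C = np.sum((B/n_sub) * np.log2(1 + S/N))
    print(f"C ~ {C:.0f} bit/s")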
Approximations
For large or small and constant signal-to-noise ratios, the capacity formula can be approximated:

 If S/N >> 1, then

\[ C \approx 0.332 \cdot B \cdot \text{SNR (in dB)}, \qquad \text{SNR (in dB)} = 10 \log_{10}\frac{S}{N} \]

 Similarly, if S/N << 1, then

\[ C \approx 1.44 \cdot B \cdot \frac{S}{N} \]

In this low-SNR approximation, capacity is independent of bandwidth if the noise is white, of spectral density N_0 watts per hertz, in which case the total noise power is N = B N_0 and

\[ C \approx 1.44 \cdot \frac{S}{N_0} \]

ADDITIONAL TOPICS

Unit 4

Signal space representation

• Bandpass signal

• Real-valued signal ↔ S(f) = S*(-f)

• Finite bandwidth B ↔ infinite time span

• f_c denotes the center frequency

• Negative frequencies contain no additional information

Characteristics:

• Complex-valued signal

• No information loss; truly equivalent

Let us consider D_N = {(x_i, y_i) : i = 1, ..., N} iid realizations of the joint observation-class phenomenon (X(u), Y(u)) with true probability measure P_{X,Y} defined on (X × Y, σ(F_X × F_Y)). In addition, let us consider a family of measurable representation functions D, where any f(·) ∈ D is defined on X and takes values in X_f. Let us assume that any representation function f(·) induces an empirical distribution \hat{P}_{X_f,Y} on (X_f × Y, σ(F_f × F_Y)), based on the training data and an implicit learning approach, where the empirical Bayes classification rule is given by: \hat{g}_f(x) = arg max_{y ∈ Y} \hat{P}_{X_f,Y}(x, y).

UNIT 6

Turbo codes

In information theory, turbo codes (originally in French Turbocodes) are a class of high-
performance forward error correction (FEC) codes developed in 1993, which were the first
practical codes to closely approach the channel capacity, a theoretical maximum for the code
rate at which reliable communication is still possible given a specific noise level. Turbo
codes are finding use in 3G mobile communications and (deep
space) satellite communications as well as other applications where designers seek to achieve
reliable information transfer over bandwidth- or latency-constrained communication links in
the presence of data-corrupting noise. Turbo codes are nowadays competing with LDPC
codes, which provide similar performance.
Prior to turbo codes, the best constructions were serial concatenated codes based on an
outer Reed-Solomon error correction code combined with an inner Viterbi-decoded short
constraint length convolutional code, also known as RSV codes.
In 1993, turbo codes were introduced by Berrou, Glavieux, and Thitimajshima (from
Télécom Bretagne, former ENST Bretagne, France) in their paper: "Near Shannon Limit
Error-correcting Coding and Decoding: Turbo-codes" published in the Proceedings of IEEE
International Communications Conference. In a later paper, Berrou gave credit to the
"intuition" of "G. Battail, J. Hagenauer and P. Hoeher, who, in the late 80s, highlighted the
interest of probabilistic processing.". He adds "R. Gallager and M. Tanner had already
imagined coding and decoding techniques whose general principles are closely related,"
although the necessary calculations were impractical at that time.
The first class of turbo code was the parallel concatenated convolutional code (PCCC). Since
the introduction of the original parallel turbo codes in 1993, many other classes of turbo code
have been discovered, including serial versions and repeat-accumulate codes. Iterative Turbo
decoding methods have also been applied to more conventional FEC systems, including
Reed-Solomon corrected convolutional codes
There are many different instantiations of turbo codes, using different component encoders,
input/output ratios, interleavers, and puncturing patterns. This example encoder
implementation describes a 'classic' turbo encoder, and demonstrates the general design of
parallel turbo codes.
This encoder implementation sends three sub-blocks of bits. The first sub-block is the m-bit
block of payload data. The second sub-block is n/2 parity bits for the payload data, computed
using a recursive systematic convolutional code (RSC code). The third sub-block is n/2 parity
bits for a known permutation of the payload data, again computed using an RSC
convolutional code. Thus, two redundant but different sub-blocks of parity bits are sent with
the payload. The complete block has m+n bits of data with a code rate of m/(m+n).
The permutation of the payload data is carried out by a device called an interleaver.
Hardware-wise, this turbo-code encoder consists of two identical RSC coders, C1 and C2, as
depicted in the figure, which are connected to each other using a concatenation scheme,
called parallel concatenation:

In the figure, M is a memory register. The delay line and interleaver force input bits dk to
appear in different sequences. At first iteration, the input sequence dk appears at both outputs
of the encoder, xk and y1k or y2k due to the encoder's systematic nature. If the
encoders C1 and C2 are used respectively in n1 and n2 iterations, their rates are respectively equal to

\[ R_1 = \frac{n_1 + n_2}{2 n_1 + n_2}, \qquad R_2 = \frac{n_1 + n_2}{n_1 + 2 n_2}. \]
The decoder
The decoder is built in a similar way to the above encoder: two elementary decoders (DEC1 and DEC2) are interconnected to each other, but in a serial way, not in parallel. DEC1 operates at the lower rate (i.e. R1); thus, it is intended for the C1 encoder, and DEC2 is for C2 correspondingly. DEC1 yields a soft decision, which causes a delay; the same delay is caused by the delay line in the encoder. DEC2's operation causes a further delay.

An interleaver installed between the two decoders is used here to scatter error bursts coming from the DEC1 output. The DI block is a demultiplexing and insertion module. It works as a switch, redirecting input bits to DEC1 at one moment and to DEC2 at another. In the OFF state, it feeds both DEC1 and DEC2 inputs with padding bits (zeros).

Consider a memoryless AWGN channel, and assume that at the k-th iteration the decoder receives a pair of random variables

\[ x_k = (2 d_k - 1) + a_k, \qquad y_k = (2 Y_k - 1) + b_k \]

where a_k and b_k are independent noise components having the same variance σ², and Y_k is the k-th bit from the encoder output.

Redundant information is demultiplexed and sent through DI to DEC1 (when y_k = y_{1k}) and to DEC2 (when y_k = y_{2k}). DEC1 yields a soft decision, i.e.

\[ \Lambda(d_k) = \log \frac{p(d_k = 1 \mid \text{observation})}{p(d_k = 0 \mid \text{observation})} \]

and delivers it to DEC2. Λ(d_k) is called the logarithm of the likelihood ratio (LLR). p(d_k = i | observation), i = 0, 1, is the a posteriori probability (APP) of the data bit d_k, which shows the probability of interpreting a received bit as i. Taking the LLR into account, DEC2 yields a hard decision, i.e. a decoded bit.

It is known that the Viterbi algorithm is unable to calculate the APP, so it cannot be used in DEC1; instead, a modified BCJR algorithm is used. For DEC2, the Viterbi algorithm is an appropriate one.

However, the depicted structure is not an optimal one, because DEC1 uses only a proper fraction of the available redundant information. In order to improve the structure, a feedback loop is used (see the dotted line on the figure).
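
A minimal numeric illustration of the soft-decision idea above (assumptions: BPSK (±1) signaling over AWGN with known noise variance σ², equiprobable bits, for which the LLR of a received sample x is 2x/σ²; this is a generic illustration, not the turbo decoder itself):

    sigma2 = 0.5
    for x in (+0.9, -0.1, +0.05):
        llr = 2*x/sigma2                 # log( p(d=1|x) / p(d=0|x) )
        print(f"x={x:+.2f}  LLR={llr:+.2f}  hard decision: {int(llr > 0)}")

The sign of the LLR gives the hard decision, while its magnitude expresses the decoder's confidence, which is what DEC1 passes on to DEC2.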
ASSIGNMENT TOPICS

Unit I:

1. Certain issues of digital transmission,


2. advantages of digital communication systems,
3. Bandwidth- S/N trade off, and Sampling theorem
4. PCM generation and reconstruction
5. Quantization noise, Differential PCM systems (DPCM)
6. Delta modulation,

Unit II:

1. Coherent ASK detector and non-Coherent ASK detector


2. Coherent FSK detector BPSK
3. Coherent PSK detection

Unit III:

1. A Base band signal receiver,


2. Different pulses and power spectrum densities
3. Probability of error
4. Conditional entropy and redundancy,
5. Shannon Fano coding
6. Mutual information,

Unit IV:

1. Matrix description of linear block codes

2. Error detection and error correction capabilities of linear block codes
3. Encoding
4. Decoding using state, tree and trellis diagrams
5. Decoding using Viterbi algorithm

Unit V:

1. Use of spread spectrum


2. Direct sequence spread spectrum (DSSS)
3. Code division multiple access
4. Ranging using DSSS
5. Frequency hopping spread spectrum
Subject Contents

1.7.1. Synopsis page for each period (62 pages)

1.7.2. Detailed Lecture notes containing:

1. PPTs

2. OHP slides

3. Subjective type questions (approximately 5 to 8 in number)

4. Objective type questions (approximately 20 to 30 in number)

5. Any simulations

1.8. Course Review (By the concerned Faculty):

(I)Aims

(II) Sample check

(III) End of the course report by the concerned faculty

GUIDELINES:

Distribution of periods:

No. of classes required to cover JNTU syllabus : 40

No. of classes required to cover Additional topics : 4

No. of classes required to cover Assignment tests (for every 2 units 1 test) : 4

No. of classes required to cover tutorials : 8

No. of classes required to cover Mid tests : 2

No. of classes required to solve University Question papers : 4

----------------

Total periods : 62
CLOSURE REPORT
Here the closure report is enclosed for the Digital Communication:

1. No. of hours planned to complete the course: 62 hrs

No. of hours taken –

2. Internal marks evaluation sheet is attached.

3. How many students appeared for the external examination-
