
EE-744 Advanced Digital Communications
Dr. Farrukh Aziz Bhatti
Electrical Engineering Department
Institute of Space Technology
Islamabad

Objective
This course covers some of the fundamental
concepts and the key advanced topics related to
the transmitter, channel and receiver in digital
communication. It also introduces some of the
advanced research areas in the field.

Dr. Farrukh Bhatti

Course Outline

Introduction
Signal and Spectra
Formatting and Baseband Modulation
Baseband Demodulation/Detection
Channel Coding
Bandpass Modulation and Demodulation/Detection
Spread Spectrum Techniques
Multiplexing and Multiple Access
Synchronization
Source Coding
Fading Channels

Prerequisites
Required:
Signals and Systems
Recommended:
Probability and Stochastic Processes
MATLAB


Books
Textbook: None
Suggested reading:
Sklar, Digital Communications: Fundamentals and Applications, 2nd
Edition, Prentice Hall, 2001
Haykin, Digital Communications, John Wiley & Sons, 2006
J. G. Proakis, Digital Communications, 4th Edition, McGraw-Hill, 2001
Tri T. Ha, Theory and Design of Digital Communications Systems,
Cambridge, 2011

Tentative Assessment Weightage

Assignments: 10%
Presentation & Research Paper: 20%
Quizzes: 10%
Mid-Term Exam: 25%
Final Exam: 35%


Communication
It is the transmission of information from a source
to one or more recipients via a channel or a
medium.


History: a source of motivation

Origin of the binary code: Francis Bacon, 17th century, used five-letter
combinations of only two distinct letters to represent a 24-letter
alphabet. John Wilkins expanded it to 3- to 5-letter alphabets.
Telegraph, 1837, by Samuel Morse: transmitted "What hath God wrought"
between Washington and Baltimore, using dots and dashes.
Telephone, 1874, Alexander Graham Bell. In 1897 Strowger devised the
automatic switch; in 1913, transcontinental telephony. Invention of the
transistor in 1948; first commercial telephone with digital switching in
1960, in Illinois.

History
Integrated digital network (IDN) in 1974, which eventually evolved into
ISDN (Integrated Services Digital Network), carrying all types of data
in digital form.
Radio: in 1864, James Clerk Maxwell proposed electromagnetic theory; in
1887 Heinrich Hertz demonstrated that radio waves exist. On Dec 12,
1901, Guglielmo Marconi received a radio signal in Newfoundland that was
transmitted from Cornwall, England (1700 miles away). Digital modulation
for microwave came in 1930; digital radio kicked off in 1970.

History
Satellite Comm: in 1945, Arthur C. Clarke proposed the idea of
earth-orbiting communication satellites. In 1957 the Soviet Union
launched Sputnik I, which transmitted telemetry signals; in 1958 the USA
launched Explorer I, which transmitted telemetry signals for 5 months.
Telstar I (by Bell Labs) was launched in 1962 and could relay TV
programs. In 1964 INTELSAT, a multinational organization, was formed;
its aim: to design, develop, construct and establish the global
commercial communication satellite system.

History
Optical Comm: an old concept (the use of smoke and fire signals). In
1966, Kao and Hockham proposed clad glass fiber as a dielectric
waveguide. Impurities in glass were a major issue; in 1970 Kapron, Keck
and Maurer of Corning Glass Works fabricated a silica-doped clad fiber
(attenuation 20 dB/km), a major breakthrough.
Computer Comm: started in the early 1950s, at speeds of 300-1200 b/s.
ARPANET, developed by the US DoD, used packet switching for the first
time.
Information theory: in 1928 Harry Nyquist published his vital theory of
the sampling requirement for telegraph signals. In 1943 North devised
the matched filter for optimum detection. In 1948 the foundations of
digital communication were laid by Claude Shannon in his paper "A
Mathematical Theory of Communication".

Types of communication
Analog communication: The information bearing signal is
continuously varying both in time and amplitude, and it is
used directly to modify some characteristics of a sinusoidal
carrier wave, such as amplitude, phase or frequency.
Digital communication: The information bearing signal is
discrete in time and amplitude.


Why digital?
Less distortion and interference compared to analog.
Regeneration of a digital signal is easy; it is impossible for an analog
signal, since amplification boosts the accumulated noise along with the
signal.


Why digital?
High fidelity is possible through error detection and correction.
Digital circuits are more reliable and cost-effective; VLSI technology
is on the rise (microprocessors, storage components, RAMs, SD cards,
etc.), whereas analog circuits are bulky.
Combining digital signals (TDM) is simpler than analog combining (FDM).
Different types of digital signals (data, image, video, voice, etc.) are
treated alike: a bit is a bit.

Why digital?
Digital messages are handled in autonomous groups called packets, so a
dedicated link is not required.
Digital signal processing readily protects against jamming and
interference and increases security and privacy.
Since most communication is from computer to computer, or between
digital nodes, digital communication is the natural choice.


Costs of going digital

It is more signal-processing intensive than analog.
Synchronization is a major step in digital comms, unlike analog.
Non-graceful degradation of the signal occurs in digital comm (e.g. in
digital TVs, DVDs, cell phones), whereas analog comm degrades gracefully
(e.g. AM/FM radio).


Performance criteria
Analog Communication Systems
Metrics are fidelity, mean-square error, or percent distortion
SNR is an important performance metric

Digital Communication Systems
Metrics are the data rate (R b/s) and the probability of bit error,
the set of possible symbols being known in advance at the receiver


Digital Communication
Nomenclature
Information Source
Discrete output values e.g. Keyboard
Analog signal source e.g. output of a microphone

Character
Member of an alphanumeric/symbol set (A to Z, 0 to 9)
Characters can be mapped into a sequence of binary
digits using one of the standardized codes such as
ASCII: American Standard Code for Information Interchange
EBCDIC: Extended Binary Coded Decimal Interchange Code


Digital Communication
Nomenclature
Digital Message
Messages constructed from a finite number of symbols; e.g.,
printed language consists of 26 letters, 10 numbers, space and
several punctuation marks. Hence a text is a digital message
constructed from about 50 symbols
A Morse-coded telegraph message is a digital message constructed from
two symbols, Mark and Space

M-ary
A digital message constructed with M symbols

Digital Waveform
Current or voltage waveform that represents a digital symbol


Digital Communication
Nomenclature
Baud Rate
Refers to the rate at which the signaling elements are
transmitted, i.e. number of signaling elements per
second.

Bit Rate
Actual rate at which information is transmitted per
second
Bit Error Rate
The probability that one of the bits is in error or simply
the probability of error
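As a quick numeric illustration (the numbers here are assumed, not from the slides), the bit rate follows from the baud rate and the constellation size M, since each signaling element carries log2(M) bits:

```python
import math

# Illustrative sketch: with M-ary signaling each element carries
# log2(M) bits, so bit rate = baud rate * log2(M).
baud = 2400                          # signaling elements per second (assumed)
M = 16                               # 16-ary signaling -> 4 bits per element
bits_per_symbol = int(math.log2(M))
bit_rate = baud * bits_per_symbol
print(bit_rate)                      # 9600 b/s
```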

Classification Of Signals
1) Deterministic and Random Signals
A signal is deterministic if there is no uncertainty with respect to its
value at any time.
Deterministic waveforms are modeled by explicit mathematical
expressions, for example:

x(t) = 5 \cos(10 t)

A signal is random if there is some degree of uncertainty before the
signal actually occurs.

Random waveforms (random processes), when examined over a long period,
may exhibit certain regularities that can be described in terms of
probabilities and statistical averages.

2) Periodic and Nonperiodic Signals

A signal x(t) is called periodic in time if there exists a constant
T_0 > 0 such that

x(t) = x(t + T_0) \quad \text{for} \quad -\infty < t < \infty

where t denotes time and T_0 is the period of x(t).

A signal for which there is no such T_0 is called nonperiodic.


3) Analog and Discrete Signals


An analog signal x(t) is a continuous function of
time; that is, x(t) is uniquely defined for all t
A discrete signal x(kT) is one that exists only at
discrete times; it is characterized by a sequence of
numbers defined for each time kT, where k is an integer and T is a
fixed time interval.


4) Energy and Power Signals

The performance of a communication system depends on the received signal
energy; higher-energy signals are detected more reliably (with fewer
errors) than lower-energy signals.
x(t) is classified as an energy signal if, and only if, it has nonzero
but finite energy (0 < E_x < \infty) for all time, where:

E_x = \lim_{T \to \infty} \int_{-T/2}^{T/2} x^2(t)\,dt
    = \int_{-\infty}^{\infty} x^2(t)\,dt

An energy signal has finite energy but zero average power.
Signals that are both deterministic and nonperiodic are classified as
energy signals.

4) Energy and Power Signals

Power is the rate at which energy is delivered.
A signal is defined as a power signal if, and only if, it has finite but
nonzero power (0 < P_x < \infty) for all time, where:

P_x = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} x^2(t)\,dt

A power signal has finite average power but infinite energy.
As a general rule, periodic signals and random signals are classified as
power signals, while signals that are both deterministic and nonperiodic
are classified as energy signals.
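A minimal numerical sketch of the two classes (the rectangular pulse, the sinusoid from the earlier deterministic example, and the discretization step are all assumptions chosen for illustration):

```python
import numpy as np

dt = 1e-4
t = np.arange(-5, 5, dt)

# Energy signal: a rectangular pulse of amplitude 2 and width 1 s
pulse = np.where(np.abs(t) < 0.5, 2.0, 0.0)
E_x = np.sum(pulse**2) * dt           # ≈ 2^2 * 1 = 4 joules

# Power signal: x(t) = 5 cos(10 t); average power of A*cos is A^2/2
T0 = 2 * np.pi / 10                   # one period
tp = np.arange(0, T0, dt)
x = 5 * np.cos(10 * tp)
P_x = np.sum(x**2) * dt / T0          # ≈ 25/2 = 12.5 watts

print(E_x, P_x)
```

The pulse has finite energy and zero average power over infinite time, while the sinusoid has finite average power and infinite total energy, matching the classification rule above.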


5) The Unit Impulse Function

The Dirac delta function \delta(t), or impulse function, is an
abstraction: an infinitely large amplitude pulse with zero pulse width
and unity weight (area under the pulse), concentrated at the point where
its argument is zero.

\int_{-\infty}^{\infty} \delta(t)\,dt = 1

\delta(t) = 0 \quad \text{for } t \neq 0

\delta(t) is unbounded at t = 0

Sifting or Sampling Property:

\int_{-\infty}^{\infty} x(t)\,\delta(t - t_0)\,dt = x(t_0)


Spectral Density
The spectral density of a signal characterizes the distribution of the
signal's energy or power in the frequency domain.
This concept is particularly important when considering filtering in
communication systems while evaluating the signal and noise at the
filter output.
The energy spectral density (ESD) or the power spectral density (PSD)
is used in the evaluation.


Energy Spectral Density (ESD)

Energy spectral density describes the signal energy per unit bandwidth,
measured in joules/hertz.
It is represented as \psi_x(f), the squared magnitude spectrum:

\psi_x(f) = |X(f)|^2

According to Parseval's theorem, the energy of x(t) is:

E_x = \int_{-\infty}^{\infty} x^2(t)\,dt
    = \int_{-\infty}^{\infty} |X(f)|^2\,df

Therefore:

E_x = \int_{-\infty}^{\infty} \psi_x(f)\,df

(Homework: review the Fourier transform.)

The energy spectral density is symmetrical in frequency about the
origin, so the total energy of the signal x(t) can be expressed as:

E_x = 2 \int_{0}^{\infty} \psi_x(f)\,df
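Parseval's relation above can be checked numerically with the DFT; the Gaussian test signal and the discretization below are assumptions for illustration, not from the slides:

```python
import numpy as np

# Sketch: verify E = ∫ x^2 dt = ∫ |X(f)|^2 df with a discrete
# approximation of the Fourier transform.
dt = 1e-3
t = np.arange(-2, 2, dt)
x = np.exp(-t**2)                     # a smooth energy signal (assumed)

E_time = np.sum(x**2) * dt            # time-domain energy
X = np.fft.fft(x) * dt                # approximate continuous-time FT
df = 1 / (t.size * dt)
E_freq = np.sum(np.abs(X)**2) * df    # frequency-domain energy

print(E_time, E_freq)                 # the two agree closely
```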


Power Spectral Density (PSD)

The power spectral density (PSD) function G_x(f) of the periodic signal
x(t) is a real, even, and nonnegative function of frequency that gives
the distribution of the power of x(t) in the frequency domain.
The PSD is represented as:

G_x(f) = \sum_{n=-\infty}^{\infty} |C_n|^2 \, \delta(f - n f_0)

whereas the average power of a periodic signal x(t) is represented as:

P_x = \frac{1}{T_0} \int_{-T_0/2}^{T_0/2} x^2(t)\,dt
    = \sum_{n=-\infty}^{\infty} |C_n|^2

Using the PSD, the average normalized power of a real-valued signal is
represented as:

P_x = \int_{-\infty}^{\infty} G_x(f)\,df = 2 \int_{0}^{\infty} G_x(f)\,df
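The relation P_x = Σ|C_n|² can be spot-checked numerically for the earlier example x(t) = 5 cos(10t); the FFT-based estimate of the Fourier-series coefficients below is an assumed discretization, not from the slides:

```python
import numpy as np

N = 4096
T0 = 2 * np.pi / 10                 # period of x(t) = 5 cos(10 t)
t = np.arange(N) * T0 / N           # one full period, N samples (assumed)
x = 5 * np.cos(10 * t)

C = np.fft.fft(x) / N               # Fourier-series coefficients C_n
P_coeff = np.sum(np.abs(C)**2)      # sum of |C_n|^2
P_time = np.mean(x**2)              # (1/T0) * integral of x^2 over T0
print(P_coeff, P_time)              # both ≈ 25/2 = 12.5
```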


Autocorrelation
1. Autocorrelation of an Energy Signal
Correlation between two phenomena refers to how closely they correspond
in behavior or appearance. Correlation is a matching process;
autocorrelation refers to the matching of a signal with a delayed
version of itself.
The autocorrelation function of a real-valued energy signal x(t) is
defined as:

R_x(\tau) = \int_{-\infty}^{\infty} x(t)\,x(t + \tau)\,dt
\quad \text{for } -\infty < \tau < \infty

The autocorrelation function R_x(\tau) provides a measure of how closely
the signal matches a copy of itself as the copy is shifted \tau units in
time.
R_x(\tau) is not a function of time; it is only a function of the time
difference \tau between the waveform and its shifted copy.
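The defining integral can be approximated discretely; the rectangular pulse and step size below are assumed examples:

```python
import numpy as np

# Sketch: R_x(tau) ≈ sum over samples of x(t) x(t+tau) * dt
dt = 1e-3
t = np.arange(-1, 1, dt)
x = np.where(np.abs(t) < 0.25, 1.0, 0.0)   # unit pulse of width 0.5 s

R = np.correlate(x, x, mode='full') * dt    # lags -(N-1) .. (N-1)
i0 = x.size - 1                             # index of lag tau = 0

print(R[i0])                                # ≈ pulse energy = 0.5
print(bool(np.allclose(R, R[::-1])))        # even symmetry in tau
print(int(np.argmax(R)) == i0)              # peak at the origin
```

The triangular shape of R for a rectangular pulse, its even symmetry, and the peak at τ = 0 preview the properties listed on the next slide.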


1. Autocorrelation of an Energy Signal

The autocorrelation function of a real-valued energy signal has the
following properties:

R_x(\tau) = R_x(-\tau)                symmetrical in \tau about zero

|R_x(\tau)| \le R_x(0) for all \tau   maximum value occurs at the origin

R_x(\tau) \leftrightarrow \psi_x(f)   autocorrelation and ESD form a
                                      Fourier transform pair, as
                                      designated by the double-headed
                                      arrow

R_x(0) = \int_{-\infty}^{\infty} x^2(t)\,dt
                                      value at the origin is equal to
                                      the energy of the signal


2. Autocorrelation of a Power Signal

The autocorrelation function of a real-valued power signal x(t) is
defined as:

R_x(\tau) = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2}
x(t)\,x(t + \tau)\,dt \quad \text{for } -\infty < \tau < \infty

When the power signal x(t) is periodic with period T_0, the
autocorrelation function can be expressed as:

R_x(\tau) = \frac{1}{T_0} \int_{-T_0/2}^{T_0/2} x(t)\,x(t + \tau)\,dt
\quad \text{for } -\infty < \tau < \infty


2. Autocorrelation of a Power Signal

The autocorrelation function of a real-valued periodic signal has
properties similar to those of an energy signal:

R_x(\tau) = R_x(-\tau)                symmetrical in \tau about zero

|R_x(\tau)| \le R_x(0) for all \tau   maximum value occurs at the origin

R_x(\tau) \leftrightarrow G_x(f)      autocorrelation and PSD form a
                                      Fourier transform pair

R_x(0) = \frac{1}{T_0} \int_{-T_0/2}^{T_0/2} x^2(t)\,dt
                                      value at the origin is equal to
                                      the average power of the signal


Random Signals
1. Random Variables
All useful message signals appear random; that is, the receiver does not
know, a priori, which of the possible waveforms has been sent.
Let a random variable X(A) represent the functional relationship between
a random event A and a real number.
The (cumulative) distribution function F_X(x) of the random variable X
is given by:

F_X(x) = P(X \le x)

Another useful function relating to the random variable X is the
probability density function (pdf):

p_X(x) = \frac{dF_X(x)}{dx}
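A small sampled sketch of these two functions, assuming a standard normal X for illustration:

```python
import numpy as np

# Assumed example: X ~ standard normal, estimated from samples.
rng = np.random.default_rng(0)
samples = rng.standard_normal(200_000)

x = 0.0
F_hat = np.mean(samples <= x)       # empirical cdf F_X(0) ≈ 0.5

# pdf as the numerical derivative of the cdf: p_X(0) ≈ 0.399
h = 0.05
p_hat = (np.mean(samples <= x + h) - np.mean(samples <= x - h)) / (2 * h)
print(F_hat, p_hat)
```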

Ensemble Averages

The first moment of the probability distribution of a random variable X
is called the mean value m_X, or expected value, of X:

m_X = E\{X\} = \int_{-\infty}^{\infty} x\,p_X(x)\,dx

The second moment of a probability distribution is the mean-square value
of X:

E\{X^2\} = \int_{-\infty}^{\infty} x^2\,p_X(x)\,dx

Central moments are the moments of the difference between X and m_X; the
second central moment is the variance of X:

\mathrm{var}(X) = E\{(X - m_X)^2\}
                = \int_{-\infty}^{\infty} (x - m_X)^2\,p_X(x)\,dx

The variance is equal to the difference between the mean-square value
and the square of the mean:

\mathrm{var}(X) = E\{X^2\} - E\{X\}^2
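These moment identities can be spot-checked on samples; the Uniform(0, 2) model below is an assumption chosen because its moments have simple closed forms (m_X = 1, E{X²} = 4/3, var = 1/3):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(0, 2, 1_000_000)    # assumed X ~ Uniform(0, 2)

m = X.mean()                        # first moment ≈ 1
ms = (X**2).mean()                  # mean-square value ≈ 4/3
var = ((X - m)**2).mean()           # second central moment ≈ 1/3

print(m, ms, var, ms - m**2)        # var(X) = E{X^2} - E{X}^2
```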


Random Processes
A random process X(A, t) can be viewed as a function of two
variables: an event A and time.


Statistical Averages of a Random Process

A random process whose distribution functions are continuous can be
described statistically with a probability density function (pdf).
A partial description consisting of the mean and the autocorrelation
function is often adequate for the needs of communication systems.

Mean of the random process X(t):

E\{X(t_k)\} = \int_{-\infty}^{\infty} x\,p_{X_k}(x)\,dx = m_X(t_k)

Autocorrelation function of the random process X(t):

R_X(t_1, t_2) = E\{X(t_1)\,X(t_2)\}

Stationarity
A random process X(t) is said to be stationary in the strict sense if
none of its statistics are affected by a shift in the time origin.
A random process is said to be wide-sense stationary (WSS) if two of its
statistics, its mean and its autocorrelation function, do not vary with
a shift in the time origin:

E\{X(t)\} = m_X = \text{a constant}

R_X(t_1, t_2) = R_X(t_1 - t_2)
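A classic WSS example (assumed here for illustration, not from the slides) is a sinusoid with uniformly random phase: its ensemble mean is zero at every t, and its autocorrelation depends only on the lag:

```python
import numpy as np

# Assumed model: X(t) = cos(2*pi*t + Theta), Theta ~ Uniform(0, 2*pi).
rng = np.random.default_rng(2)
theta = rng.uniform(0, 2 * np.pi, 500_000)

def X(t):
    return np.cos(2 * np.pi * t + theta)   # one value per realization

m1, m2 = X(0.3).mean(), X(1.7).mean()      # both ≈ 0, regardless of t
R_a = (X(0.0) * X(0.1)).mean()             # lag 0.1 starting at t = 0
R_b = (X(1.0) * X(1.1)).mean()             # same lag, shifted origin
print(m1, m2, R_a, R_b)                    # R ≈ 0.5*cos(0.2*pi) ≈ 0.405
```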


Autocorrelation of a Wide-Sense Stationary Random Process

For a wide-sense stationary process, the autocorrelation function is
only a function of the time difference \tau = t_1 - t_2:

R_X(\tau) = E\{X(t)\,X(t + \tau)\}
\quad \text{for } -\infty < \tau < \infty

Properties of the autocorrelation function of a real-valued wide-sense
stationary process:

1. R_X(\tau) = R_X(-\tau)             symmetrical in \tau about zero
2. |R_X(\tau)| \le R_X(0) for all \tau  maximum value occurs at the origin
3. R_X(\tau) \leftrightarrow G_X(f)   autocorrelation and power spectral
                                      density form a Fourier transform
                                      pair
4. R_X(0) = E\{X^2(t)\}               value at the origin is equal to
                                      the average power of the signal


Time Averaging and Ergodicity

When a random process belongs to a special class, known as an ergodic
process, its time averages equal its ensemble averages.
The statistical properties of such processes can be determined by time
averaging over a single sample function of the process.
A random process is ergodic in the mean if:

m_X = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} X(t)\,dt

It is ergodic in the autocorrelation function if:

R_X(\tau) = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2}
X(t)\,X(t + \tau)\,dt
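A sketch of ergodicity in the mean, using one long realization of an assumed stationary Gaussian process (mean 2, unit variance) so the time averages have known ensemble counterparts:

```python
import numpy as np

# Assumed process: white Gaussian samples with mean 2, variance 1;
# one long sample function stands in for the time average.
rng = np.random.default_rng(3)
x = 2.0 + rng.standard_normal(1_000_000)

m_time = x.mean()            # time-average mean ≈ ensemble mean = 2
R0_time = (x**2).mean()      # time-average R(0) ≈ E{X^2} = 1 + 2^2 = 5
print(m_time, R0_time)
```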


Time Averaging and Ergodicity

Fundamental electrical engineering parameters, such as the dc value, the
rms value and the average power, can be related to the moments of an
ergodic random process [Sklar, 2001].


Power Spectral Density and Autocorrelation

A random process X(t) can generally be classified as a power signal
having a power spectral density (PSD) G_X(f).
Principal features of PSD functions:

1. G_X(f) \ge 0                       always real-valued and nonnegative
2. G_X(f) = G_X(-f)                   even in frequency, for X(t)
                                      real-valued
3. G_X(f) \leftrightarrow R_X(\tau)   PSD and autocorrelation form a
                                      Fourier transform pair
4. P_X = \int_{-\infty}^{\infty} G_X(f)\,df
                                      relationship between average
                                      normalized power and PSD

Noise in Communication Systems

The term noise refers to unwanted electrical signals that are always
present in electrical systems, e.g. spark-plug ignition noise, switching
transients, and other radiating electromagnetic signals.
Thermal noise can be described as a zero-mean Gaussian random process.
A Gaussian process n(t) is a random function whose amplitude at any
arbitrary time t is statistically characterized by the Gaussian
probability density function:

p(n) = \frac{1}{\sigma\sqrt{2\pi}}
       \exp\!\left[-\frac{1}{2}\left(\frac{n}{\sigma}\right)^2\right]
\qquad (1.40)


Noise in Communication Systems

The normalized or standardized Gaussian density function of a zero-mean
process is obtained by assuming unit variance.

Reading assignment: the probability density function.


White Noise
The primary spectral characteristic of thermal noise is that its power
spectral density is the same for all frequencies of interest in most
communication systems.
Its power spectral density G_n(f) is:

G_n(f) = \frac{N_0}{2} \quad \text{watts/hertz}

The autocorrelation function of white noise is:

R_n(\tau) = \mathcal{F}^{-1}\{G_n(f)\} = \frac{N_0}{2}\,\delta(\tau)

The average power P_n of white noise is infinite:

P_n = \int_{-\infty}^{\infty} \frac{N_0}{2}\,df = \infty
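In discrete time the delta-like autocorrelation is easy to observe; the unit-variance Gaussian samples below stand in for white noise with N0/2 mapped to the sample variance (a modeling assumption, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(4)
n = rng.standard_normal(1_000_000)   # assumed discrete white noise

R0 = np.mean(n * n)                  # lag 0: ≈ variance = 1
R1 = np.mean(n[:-1] * n[1:])         # lag 1: ≈ 0
R5 = np.mean(n[:-5] * n[5:])         # lag 5: ≈ 0
print(R0, R1, R5)
```

The autocorrelation is (approximately) nonzero only at zero lag, the discrete analogue of (N0/2) δ(τ).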


White Noise
The effect on the detection process of a channel with additive white
Gaussian noise (AWGN) is that the noise affects each transmitted symbol
independently. Such a channel is called a memoryless channel.
The term "additive" means that the noise is simply superimposed on, or
added to, the signal.


Signal Transmission through Linear Systems

A system can be characterized equally well in the time domain or the
frequency domain; techniques will be developed in both domains.
The system is assumed to be linear and time-invariant.
It is also assumed that there is no stored energy in the system at the
time the input is applied.


Impulse Response
A linear time-invariant system or network is characterized in the time
domain by its impulse response h(t), the response to a unit impulse
input \delta(t):

y(t) = h(t) \quad \text{when } x(t) = \delta(t)

The response of the network to an arbitrary input signal x(t) is found
by the convolution of x(t) with h(t):

y(t) = x(t) * h(t) = \int_{-\infty}^{\infty} x(\tau)\,h(t - \tau)\,d\tau

The system is assumed to be causal, which means that there can be no
output prior to the time t = 0 when the input is applied. The
convolution integral can then be expressed as:

y(t) = \int_{0}^{\infty} x(\tau)\,h(t - \tau)\,d\tau
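The convolution integral can be evaluated numerically; the causal impulse response h(t) = e^(-t)u(t) and the unit-step input below are assumed examples with a known closed-form output y(t) = 1 - e^(-t):

```python
import numpy as np

dt = 1e-3
t = np.arange(0, 5, dt)
h = np.exp(-t)                        # assumed causal impulse response
x = np.ones_like(t)                   # unit step applied at t = 0

y = np.convolve(x, h)[:t.size] * dt   # discrete approximation of x * h

k = int(1 / dt)                       # sample at t = 1
print(y[k], 1 - np.exp(-1))           # numeric vs analytic: both ≈ 0.632
```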

Frequency Transfer Function

The frequency-domain output signal Y(f) is obtained by taking the
Fourier transform:

Y(f) = X(f)\,H(f)

The frequency transfer function, or frequency response, is defined as:

H(f) = \frac{Y(f)}{X(f)}

H(f) = |H(f)|\,e^{j\theta(f)}

The phase response is defined as:

\theta(f) = \tan^{-1}\frac{\mathrm{Im}\{H(f)\}}{\mathrm{Re}\{H(f)\}}
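A short sketch of computing |H(f)| and θ(f) at a single frequency, for an assumed one-pole lowpass response H(f) = 1/(1 + jf/f_c) with f_c = 100 Hz:

```python
import numpy as np

fc = 100.0                          # assumed cutoff frequency
f = 250.0                           # evaluation frequency
H = 1 / (1 + 1j * f / fc)

magnitude = np.abs(H)                       # |H(f)| ≈ 0.371
theta = np.arctan2(H.imag, H.real)          # phase response (radians)
print(magnitude, np.degrees(theta))         # phase ≈ -68.2 degrees
```

Using `arctan2(Im, Re)` rather than a plain arctangent keeps the phase in the correct quadrant.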


Random Processes and Linear Systems

If a random process forms the input to a time-invariant linear system,
the output will also be a random process.
The input power spectral density G_X(f) and the output power spectral
density G_Y(f) are related as:

G_Y(f) = G_X(f)\,|H(f)|^2
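Integrating both sides of this relation over frequency gives the output power; for unit-variance white input and an FIR filter h, the integral of G_X(f)|H(f)|² reduces to the sum of h², which the sketch below checks (a discrete-time model assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.standard_normal(1_000_000)   # white input, G_X(f) ≈ 1
h = np.ones(4) / 4                   # assumed 4-tap moving-average filter
y = np.convolve(x, h, mode='same')   # filtered random process

# Output power = ∫ G_X(f)|H(f)|^2 df = sum(h^2) for unit-variance input
print(y.var(), np.sum(h**2))         # both ≈ 0.25
```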


Distortionless Transmission
What is the required behavior of an ideal transmission line?

The output signal from an ideal transmission line may have some time
delay and a different amplitude than the input, but it must have no
distortion: it must have the same shape as the input.
For ideal distortionless transmission:

Output signal in the time domain:
y(t) = K\,x(t - t_0)                                    (1.54)

Output signal in the frequency domain:
Y(f) = K\,X(f)\,e^{-j 2\pi f t_0}                       (1.55)

System transfer function:
H(f) = K\,e^{-j 2\pi f t_0}                             (1.56)
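These equations can be exercised numerically: multiplying by H(f) = K e^(-j2πft₀) in the frequency domain should only scale and delay the waveform (K, t₀ and the test pulse below are assumptions for illustration):

```python
import numpy as np

dt = 1e-3
t = np.arange(0, 4, dt)
x = np.exp(-((t - 1.0)**2) / 0.01)      # assumed pulse centred at t = 1

K, t0 = 2.0, 0.5                        # assumed gain and delay
f = np.fft.fftfreq(t.size, dt)
Y = K * np.fft.fft(x) * np.exp(-1j * 2 * np.pi * f * t0)
y = np.fft.ifft(Y).real

peak_t = t[np.argmax(y)]
print(peak_t, y.max())                  # peak moves to 1.5, amplitude 2.0
```

The output pulse keeps its shape exactly: it is only scaled by K and delayed by t₀, as equations (1.54)-(1.56) require.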
