
國立台灣海洋大學 National Taiwan Ocean University
Department of Communications, Navigation and Control Engineering (通訊與導航工程學系)

Fundamental Communications Theory (通訊原理), Fall 2018
Assistant Professor 吳家琪

Lecture 11. Random Signals and Noise

Outline
8.1 Probability and Random Variables
8.2 Expectation
8.3 Transformation of Random Variables
8.4 Gaussian Random Variables
8.5 The Central Limit Theorem
8.6 Random Processes
8.7 Correlation of Random Processes
8.8 Spectra of Random Signals
8.9 Gaussian Processes
8.10 White Noise
8.11 Narrowband Noise
8.12 Summary and Discussion


- Random signals in one form or another are encountered in every practical communication system.
- Noise may be defined as any unwanted signal interfering with or distorting the signal being communicated.
- Noise is another example of a random signal.

8.1 Probability and Random Variables
- Probability theory is rooted in situations that involve performing an experiment with an outcome that is subject to chance.
- If the experiment is repeated, the outcome may differ due to the influence of an underlying random phenomenon.
- Such an experiment is referred to as a random experiment.

Relative-Frequency Approach
- If event A occurs nA times in n trials of a random experiment, then

  0 ≤ nA/n ≤ 1,   P[A] = lim_{n→∞} (nA/n)

The Axioms of Probability
- A probability system consists of the triple:
  1. A sample space S of elementary events (outcomes).
  2. A class ε of events that are subsets of S.
  3. A probability measure P[A] assigned to each event A in the class ε, which has the following properties:
     (i) P[S] = 1
     (ii) 0 ≤ P[A] ≤ 1
     (iii) If A∪B is the union of two mutually exclusive events in the class ε, then P[A∪B] = P[A] + P[B]
- By assigning a sample point to each of the possible outcomes, we obtain a sample space; for the experiment of throwing a die it consists of six sample points, as shown in Fig. 8.1.

Figure 8.1 Sample space of the experiment of throwing a die.

Random Variables
- If the outcome of the experiment is s, we denote the random variable as X(s) or just X.
- Note that X is a function, even though it is, for historical reasons, called a random variable.
- We denote a particular value taken by the random variable by x; that is, X(sk) = x. There may be more than one random variable associated with the same random experiment.
- The concept of a random variable is shown in Fig. 8.3.

Figure 8.2 Relationship between sample space, events, and probability.



- Random variables may be discrete, taking only a finite number of values, as in the coin-tossing experiment.
- Random variables may be continuous, taking a range of real values.
- For a discrete-valued random variable, the probability mass function describes the probability of each possible value of the random variable.

Figure 8.3 Relationship between sample space, random variables, and probability.


- For the coin-tossing experiment, if it is a fair coin, the probability mass function of the associated random variable may be written as

  P[X = x] = { 1/2,  x = 0
             { 1/2,  x = 1
             { 0,    otherwise

- The probability mass function is illustrated in Fig. 8.4.

Example 8.1 Bernoulli Random Variable

  P[X = x] = { 1 − p,  x = 0
             { p,      x = 1
             { 0,      otherwise

- Compared with the previous coin-tossing experiment, can you find the difference?

Figure 8.4 Illustration of the probability mass function for the coin-tossing experiment.
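As a quick check, the PMF of Example 8.1 can be estimated from simulated trials. Below is a minimal sketch in Python (NumPy assumed; the value p = 0.3 is an arbitrary illustration, not from the text):

    # Sketch: estimate the Bernoulli PMF of Example 8.1 from simulated
    # trials and compare with the exact values 1 - p and p.
    import numpy as np

    rng = np.random.default_rng(0)
    p = 0.3                                 # probability of X = 1 (assumed value)
    x = (rng.random(100_000) < p).astype(int)

    print("P[X=0] ~", np.mean(x == 0), " (exact:", 1 - p, ")")
    print("P[X=1] ~", np.mean(x == 1), " (exact:", p, ")")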



Distribution Functions
- Closely related to the probability mass function is the probability distribution function

  FX(x) = P[X ≤ x]

- The distribution function has two basic properties:
  1. The distribution function FX(x) is bounded between zero and one.
  2. The distribution function FX(x) is a monotone non-decreasing function of x.

- If X is a continuous-valued random variable and FX(x) is differentiable with respect to x, then a third commonly used function is the probability density function, denoted by fX(x), where

  fX(x) = ∂FX(x)/∂x

- A probability density function has three basic properties:
  1. Since the distribution function is monotone nondecreasing, it follows that the density function is nonnegative for all values of x.


  2. The distribution function may be recovered from the density function by integration, as shown by

     FX(x) = ∫_{−∞}^{x} fX(s) ds

  3. Property 2 implies that the total area under the curve of the density function is unity.

Example 8.3 Uniform Distribution

  fX(x) = { 1/(b − a),  a ≤ x ≤ b
          { 0,          otherwise

Figure 8.5 The uniform distribution. (a) The probability density function. (b) The distribution function.
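Property 2 can be verified numerically for the uniform density. The following sketch (NumPy assumed; the endpoints a = 1, b = 3 are illustrative) integrates fX to recover FX:

    # Sketch: recover the uniform CDF from its density by numerical
    # integration (Property 2).
    import numpy as np

    a, b = 1.0, 3.0
    xs = np.linspace(a - 1, b + 1, 1001)
    f = np.where((xs >= a) & (xs <= b), 1.0 / (b - a), 0.0)

    # cumulative trapezoidal integral approximates F_X(x) = integral of f up to x
    F = np.concatenate(([0.0], np.cumsum((f[1:] + f[:-1]) / 2 * np.diff(xs))))
    print("total area (should be ~1):", F[-1])
    print("F_X at (a+b)/2 (should be ~0.5):", F[len(xs) // 2])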

Several Random Variables
- Consider two random variables X and Y. We define the joint distribution function FX,Y(x, y) as the probability that the random variable X is less than or equal to a specified value x and that the random variable Y is less than or equal to a specified value y:

  FX,Y(x, y) = P[X ≤ x, Y ≤ y]

  fX,Y(x, y) = ∂²FX,Y(x, y)/∂x∂y

- We call the function fX,Y(x, y) the joint probability density function of the random variables X and Y.

  FX(x) = ∫_{−∞}^{x} ∫_{−∞}^{∞} fX,Y(ξ, η) dη dξ

  fX(x) = ∫_{−∞}^{∞} fX,Y(x, η) dη

- The probability density functions fX(x) and fY(y) are called marginal densities.
- Two random variables, X and Y, are statistically independent if the outcome of X does not affect the outcome of Y:

  P[X ∈ A, Y ∈ B] = P[X ∈ A] P[Y ∈ B]

  FX,Y(x, y) = FX(x) FY(y)


Example 8.4 Binomial Random Variable

  Y = Σ_{n=1}^{N} Xn

  P[Y = y] = (N choose y) p^y (1 − p)^{N−y}

  where (N choose y) = N! / (y!(N − y)!)

Example 8.5 Binomial Distribution Function

  FY(y) = P[Y ≤ y] = Σ_{k=0}^{y} P[Y = k] = Σ_{k=0}^{y} (N choose k) p^k (1 − p)^{N−k}

  FY(N) = Σ_{k=0}^{N} (N choose k) p^k (1 − p)^{N−k} = [p + (1 − p)]^N = 1

Figure 8.6 The binomial probability mass function for N = 20 and p = ½.

Figure 8.7 The binomial distribution function for N = 20 and p = ½.
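The closing identity FY(N) = 1 is easy to confirm numerically. A minimal sketch for N = 20 and p = 1/2, matching Figs. 8.6 and 8.7, using only the Python standard library:

    # Sketch: binomial PMF and CDF of Examples 8.4-8.5 for N = 20, p = 0.5,
    # checking that F_Y(N) = [p + (1 - p)]^N = 1.
    from math import comb

    N, p = 20, 0.5
    pmf = [comb(N, k) * p**k * (1 - p) ** (N - k) for k in range(N + 1)]

    print("F_Y(N) =", sum(pmf))      # ~1.0, as the text shows
    print("P[Y = 10] =", pmf[10])    # peak of Fig. 8.6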

Conditional Probability
- The probability P[Y|X] is called the conditional probability of Y given X. Assuming X has nonzero probability, the conditional probability is defined as

  P[Y|X] = P[X, Y] / P[X]

  where P[X, Y] is the joint probability of the two random variables, so that

  P[X, Y] = P[Y|X] P[X]
  P[X, Y] = P[X|Y] P[Y]

- Provided P[X] ≠ 0,

  P[Y|X] = P[X|Y] P[Y] / P[X]

  This relation is a special form of Bayes' rule.
- Suppose that the conditional probability P[Y|X] is simply equal to the probability of occurrence of Y:

  P[Y|X] = P[Y]

  Then the two random variables are statistically independent and

  P[X, Y] = P[X] P[Y]

Example 8.6 Binary Symmetric Channel

  P[Y = 0] = P[Y = 0 | X = 0] P[X = 0] + P[Y = 0 | X = 1] P[X = 1] = (1 − p) p0 + p p1

  P[Y = 1] = P[Y = 1 | X = 0] P[X = 0] + P[Y = 1 | X = 1] P[X = 1] = p p0 + (1 − p) p1

  P[X = 0 | Y = 0] = P[Y = 0 | X = 0] P[X = 0] / P[Y = 0] = (1 − p) p0 / ((1 − p) p0 + p p1)

  P[X = 1 | Y = 1] = P[Y = 1 | X = 1] P[X = 1] / P[Y = 1] = (1 − p) p1 / (p p0 + (1 − p) p1)

Figure 8.8 Transition probability diagram of binary symmetric channel.

8.2 Expectation

Mean
- Statistical averages or expectations are denoted by E[g(X)] for the expected value of a function g(·) of the random variable X.
- For a discrete random variable X, the mean, μX, is the weighted sum of the possible outcomes:

  μX = E[X] = Σx x P[X = x]

- For a continuous random variable,

  E[X] = ∫_{−∞}^{∞} x fX(x) dx

- The sample mean provides an estimate of the mean from N observations:

  μ̂X = (1/N) Σ_{n=1}^{N} xn
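The posterior probabilities of Example 8.6 follow directly from Bayes' rule. A small sketch (the priors p0 = p1 = 0.5 and crossover probability p = 0.1 are assumed illustrative values):

    # Sketch of Example 8.6: binary-symmetric-channel posteriors via Bayes' rule.
    p0, p1 = 0.5, 0.5   # priors P[X=0], P[X=1] (assumed values)
    p = 0.1             # crossover probability (assumed value)

    P_Y0 = (1 - p) * p0 + p * p1        # total probability of Y = 0
    P_Y1 = p * p0 + (1 - p) * p1        # total probability of Y = 1

    print("P[X=0 | Y=0] =", (1 - p) * p0 / P_Y0)
    print("P[X=1 | Y=1] =", (1 - p) * p1 / P_Y1)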

Variance
- The variance of a random variable measures the spread of the probability distribution about the mean.
- For discrete random variables, the variance, σX², is given by the expectation of the squared distance of each outcome from the mean value of the distribution:

  σX² = Var(X) = E[(X − μX)²] = Σx (x − μX)² P[X = x]

- For continuous random variables,

  σX² = ∫_{−∞}^{∞} (x − μX)² fX(x) dx

- The sample variance provides an estimate from N observations:

  σ̂X² = (1/(N − 1)) Σ_{n=1}^{N} (xn − μ̂X)²

- If the observations take discrete values i, with ni occurrences of value i (i = 1, 2, …, M), the estimate may be grouped as

  σ̂Z² = [(1 − μ̂Z)²·n1 + (2 − μ̂Z)²·n2 + ⋯] / (N − 1)
       = (N/(N − 1)) Σ_{i=1}^{M} (i − μ̂Z)² (ni/N)
       ≈ (N/(N − 1)) Σ_{i=1}^{M} (i − μ̂Z)² P[Z = i]
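The sample-mean and sample-variance estimators above can be exercised on simulated die throws, for which the exact values are μZ = 3.5 and σZ² = 35/12. A sketch assuming NumPy:

    # Sketch: unbiased sample mean/variance on simulated die throws Z in {1,...,6}.
    import numpy as np

    rng = np.random.default_rng(1)
    z = rng.integers(1, 7, size=10_000)

    mu_hat = z.mean()                                    # (1/N) sum x_n
    var_hat = ((z - mu_hat) ** 2).sum() / (len(z) - 1)   # 1/(N-1) sum (x_n - mu)^2

    print("mean estimate:", mu_hat, " (exact 3.5)")
    print("variance estimate:", var_hat, " (exact 35/12 ~ 2.917)")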

Covariance

- The covariance of two random variables, X and Y, is given by the expected value of the product of their deviations from the respective means:

  Cov(X, Y) = E[(X − μX)(Y − μY)]
  Cov(X, Y) = E[XY] − μX μY

- If the two random variables are continuous with joint density fX,Y(x, y), then

  E[XY] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} xy fX,Y(x, y) dx dy

- If the two random variables happen to be independent, then

  E[XY] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} xy fX(x) fY(y) dx dy
        = ∫_{−∞}^{∞} x fX(x) dx ∫_{−∞}^{∞} y fY(y) dy
        = E[X] E[Y]

8.3 Transformation of Random Variables
- Suppose that a random variable X with distribution FX(x) is transformed to Y = aX + b.
- Consider the probability that X belongs to the set A, where A is a subset of the real line. If X ∈ A, then it follows that Y ∈ B, where B is defined by B = aA + b:

  P[X ∈ A] = P[Y ∈ B]

- Consequently, for a > 0,

  FY(y) = P[Y ∈ (−∞, y]] = P[X ∈ (−∞, (y − b)/a]]

  FY(y) = FX((y − b)/a)
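The relation FY(y) = FX((y − b)/a) can be checked by simulation. A sketch assuming NumPy and SciPy, with a standard Gaussian X as an illustrative choice:

    # Sketch: verify F_Y(y) = F_X((y - b)/a) for Y = aX + b with a > 0.
    import numpy as np
    from scipy.stats import norm

    a, b = 2.0, 1.0
    rng = np.random.default_rng(2)
    x = rng.standard_normal(200_000)
    y = a * x + b

    y0 = 2.5
    print("empirical F_Y(y0):", np.mean(y <= y0))
    print("F_X((y0 - b)/a):  ", norm.cdf((y0 - b) / a))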

- If Y = g(X) is a one-to-one transformation of the random variable X to the random variable Y, then the distribution function of Y is given by

  FY(y) = FX(g⁻¹(y))

Example 8.8 The Cosine Transformation
- For Y = cos(X), a level y with |y| ≤ 1 is attained on two subsets A1 and A2 of the domain [0, 2π], so

  P[Y ∈ B] = P[X ∈ A1] + P[X ∈ A2]

  FY(y) = { P[∅] + P[∅] = 0,              y < −1
          { P[A1] + P[A2] = P[A],         |y| ≤ 1
          { P[[0, π]] + P[[π, 2π]] = 1,   y > 1

Figure 8.9 Illustration of the cosine transformation.


8.4 Gaussian Random Variables
- A Gaussian random variable is a continuous random variable with a density function given by

  fX(x) = (1/(√(2π) σX)) exp(−(x − μX)²/(2σX²))

  where the Gaussian random variable X has mean μX and variance σX².
- The normalized Gaussian random variable has zero mean and unit variance:

  fX(x) = (1/√(2π)) exp(−x²/2),  −∞ < x < ∞

- The distribution function of this normalized Gaussian random variable is given by the integral of this function:

  FX(x) = ∫_{−∞}^{x} fX(s) ds = (1/√(2π)) ∫_{−∞}^{x} exp(−s²/2) ds

- A related function, often used in the communications context, is the Q-function:

  Q(x) = (1/√(2π)) ∫_{x}^{∞} exp(−s²/2) ds = 1 − FX(x)

- The last line of Eq. (8.51) indicates that the Q-function is the complement of the normalized Gaussian distribution function.
- The Q-function is plotted in Fig. 8.11.
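In practice the Q-function is usually computed from the complementary error function via Q(x) = ½ erfc(x/√2), an identity equivalent to the integral above. A sketch using only the Python standard library:

    # Sketch: Q-function and normalized Gaussian CDF via erfc.
    from math import erfc, sqrt

    def Q(x: float) -> float:
        """Area under the standard Gaussian tail from x to infinity."""
        return 0.5 * erfc(x / sqrt(2.0))

    def Phi(x: float) -> float:
        """Normalized Gaussian distribution function F_X(x) = 1 - Q(x)."""
        return 1.0 - Q(x)

    print(Q(0.0))             # 0.5
    print(Q(3.0))             # ~1.35e-3, consistent with Fig. 8.11
    print(Phi(3.0) + Q(3.0))  # 1.0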

Figure 8.10 The normalized Gaussian distribution. (a) The probability density function. (b) The distribution function.

Figure 8.11 The Q-function.

Example 8.9 Probability of Bit Error with PAM

  Y = A + N

  P[Y < 0] = ∫_{−∞}^{0} (1/(√(2π) σ)) exp(−(y − A)²/(2σ²)) dy

  With the substitution s = −(y − A)/σ,

  P[Y < 0] = (1/√(2π)) ∫_{A/σ}^{∞} exp(−s²/2) ds = Q(A/σ)

Figure 8.12 Density function of noisy PAM signal Y.

8.5 The Central Limit Theorem
- An important result in probability theory that is closely related to the Gaussian distribution is the central limit theorem.

  FZ(z) → ∫_{−∞}^{z} (1/√(2π)) exp(−s²/2) ds

- This is a mathematical statement of the central limit theorem.
- In words, the normalized distribution of the sum of independent, identically distributed random variables approaches a Gaussian distribution as the number of random variables increases, regardless of the individual distributions.
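The closed form P[Y < 0] = Q(A/σ) of Example 8.9 can be confirmed by Monte Carlo simulation. A sketch assuming NumPy (A = 1 and σ = 0.5 are illustrative values):

    # Sketch of Example 8.9: estimate P[Y < 0] for Y = A + N by simulation.
    import numpy as np
    from math import erfc, sqrt

    A, sigma = 1.0, 0.5
    rng = np.random.default_rng(3)
    y = A + sigma * rng.standard_normal(1_000_000)

    Q = lambda x: 0.5 * erfc(x / sqrt(2.0))
    print("simulated P[Y<0]:", np.mean(y < 0))
    print("Q(A/sigma):      ", Q(A / sigma))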

Computer Experiment: Sums of Random Variables
- In the computer experiment, we compute 20,000 samples of Z for N = 5 and estimate the corresponding density function by forming a histogram of the results.
- In Fig. 8.13, we compare this histogram to the Gaussian density function having the same mean and variance.

Figure 8.13 Comparison of the empirical density of the sum of five uniform variables with a Gaussian density having the same mean and variance.
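A sketch of this computer experiment is given below (NumPy and Matplotlib assumed); the sum of N = 5 uniform(0, 1) variables has mean N/2 and variance N/12:

    # Sketch: 20,000 samples of the sum of N = 5 uniform variables,
    # compared with a Gaussian of the same mean and variance (cf. Fig. 8.13).
    import numpy as np
    import matplotlib.pyplot as plt

    N, trials = 5, 20_000
    rng = np.random.default_rng(4)
    z = rng.random((trials, N)).sum(axis=1)      # sums of 5 uniform(0,1) variables

    mu, var = N * 0.5, N / 12.0                  # mean and variance of the sum
    xs = np.linspace(z.min(), z.max(), 400)
    gauss = np.exp(-(xs - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

    plt.hist(z, bins=60, density=True, alpha=0.5, label="empirical density")
    plt.plot(xs, gauss, label="Gaussian, same mean/variance")
    plt.legend(); plt.show()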


8.6 Random Processes
- Random processes represent the formal mathematical model of random signals.
- Random processes have the following properties:
  1. Random processes are functions of time.
  2. Random processes are random in the sense that it is not possible to predict exactly what waveform will be observed in the future.
- Consider a random experiment whose outcomes s form a sample space S, together with the probabilities of the corresponding events. To each outcome we assign a function of time

  X(t, s),  −T < t < T

  where 2T is the total observation period.
- For a fixed sample point sj, the function X(t, sj) is called a realization or a sample function of the random process:

  xj(t) = X(t, sj)

- Figure 8.14 illustrates a set of sample functions {xj(t): j = 1, 2, …}.

Stationary Random Processes
- If a random process is divided into a number of time intervals, and the various sections of the process exhibit essentially the same statistical properties, the process is said to be stationary.

Figure 8.14 Illustration of the relationship between sample space and the ensemble of sample functions.


- Suppose the same random process is observed at time t1 + τ, and the corresponding distribution function is FX(t1+τ)(x). Then if

  FX(t1+τ)(x) = FX(t1)(x)

  for all t1 and all τ, we say the process is stationary to the first order.
- A first-order stationary random process has a distribution function that is independent of time, so its mean is constant:

  μX = ∫_{−∞}^{∞} s fX(t1)(s) ds = ∫_{−∞}^{∞} s fX(t1+τ)(s) ds

- If FX(t1+τ),X(t2+τ)(x1, x2) = FX(t1),X(t2)(x1, x2), we say the process is stationary to the second order.

8.7 Correlation of Random Processes
- The covariance of the two random variables X(t1) and X(t2) is given by

  Cov(X(t1), X(t2)) = E[X(t1)X(t2)] − μX(t1) μX(t2)

- We define the first term on the right-hand side of Eq. (8.64) as the autocorrelation of the random process and use the generic notation

  RX(t, s) = E[X(t)X*(s)]

  For a stationary process, the autocorrelation depends only on the time difference: RX(t, s) = RX(t − s).

- Stationarity in the second order also implies that the mean of the random process is constant. If this mean is zero, then the autocorrelation and covariance functions of the random process are equivalent.
- If a random process has the following two properties, then we say it is wide-sense stationary or weakly stationary:
  1. The mean of the random process is a constant independent of time: E[X(t)] = μX for all t.
  2. The autocorrelation of the random process depends only upon the time difference:

     E[X(t)X*(t − τ)] = RX(τ),  for all t and τ.


Properties of the Autocorrelation Function

Property 1 Power of a Wide-Sense Stationary Process
- The second moment or mean-square value of a real-valued random process is given by

  RX(0) = E[X(t)X(t)] = E[X²(t)]

Property 2 Symmetry
- The autocorrelation of a real-valued wide-sense stationary process has even symmetry:

  RX(−τ) = E[X(t)X(t + τ)] = E[X(t + τ)X(t)] = RX(τ)

Property 3 Maximum Value
- The autocorrelation function of a wide-sense stationary random process is a maximum at the origin:

  0 ≤ E[(X(t) ± X(t − τ))²]
    = E[X²(t)] + E[X²(t − τ)] ± 2E[X(t)X(t − τ)]
    = 2RX(0) ± 2RX(τ)

  which implies |RX(τ)| ≤ RX(0).

- The more rapidly the random process X(t) changes with time, the more rapidly the autocorrelation function RX(τ) decreases from its maximum RX(0) as τ increases, as illustrated in Fig. 8.15.

Figure 8.15 Illustration of autocorrelation functions of slowly and rapidly fluctuating random processes.

Ergodicity
- For a random process X(t) with N equally probable realizations {xj(t): j = 1, 2, …, N}, the expected value and second moment of the random process at time t = tk are respectively given by the ensemble averages

  E[X(tk)] = (1/N) Σ_{j=1}^{N} xj(tk)

  E[X²(tk)] = (1/N) Σ_{j=1}^{N} xj²(tk)


- The time average of a continuous sample function drawn from a real-valued process is given by

  ε[x] = lim_{T→∞} (1/2T) ∫_{−T}^{T} x(t) dt

  and the time-autocorrelation of the sample function is given by

  Rx(τ) = lim_{T→∞} (1/2T) ∫_{−T}^{T} x(t) x(t − τ) dt

- If we assume that the real-valued random process is ergodic, then we can express the autocorrelation function as

  RX(τ) = E[X(t)X(t − τ)] = lim_{T→∞} (1/2T) ∫_{−T}^{T} x(t) x(t − τ) dt

- An estimate of the autocorrelation of a real-valued process for lag τ = τ0 is

  R̂X(τ0) = (1/N) Σ_{n=1}^{N} x(tn) x(tn − τ0)

- In most physical applications, wide-sense stationary processes are assumed to be ergodic, in which case time averages and expectations can be used interchangeably.

Example 8.11 Discrete-Time Autocorrelation

  RX(kTs) = E[X(nTs) X((n − k)Ts)]

  R̂X(kTs) = (1/N) Σ_{n=1}^{N} x(nTs) x((n − k)Ts)

Figure 8.16 Illustration of (a) a random signal, and (b) its autocorrelation.
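The estimator R̂X(kTs) of Example 8.11 translates directly into code. A sketch assuming NumPy, applied to a white test signal for which RX(0) ≈ 1 and RX(k) ≈ 0 for k ≠ 0:

    # Sketch: discrete-time autocorrelation estimate
    # R_hat(k) = (1/N) * sum_n x[n] * x[n-k].
    import numpy as np

    rng = np.random.default_rng(5)
    x = rng.standard_normal(10_000)              # white test signal (assumed)

    def autocorr_estimate(x: np.ndarray, k: int) -> float:
        """(1/N) times the sum of x[n] * x[n-k] over the valid range."""
        N = len(x)
        return np.dot(x[k:], x[:N - k]) / N

    for k in range(4):
        print(f"R_hat({k}) = {autocorr_estimate(x, k):+.4f}")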

8.8 Spectra of Random Signals

- Figure 8.17 shows a plot of the waveform of xT(t) on the interval −T < t < T. We may define the Fourier transform of the sample function xT(t) as

  ξT(f) = ∫_{−∞}^{∞} xT(t) exp(−j2πft) dt

- The Fourier transform has converted a family of random variables X(t), indexed by parameter t, to a new family of random variables ξT(f), indexed by parameter f.
- The power spectral density of the random process is

  SX(f) = lim_{T→∞} (1/2T) E[|ξT(f)|²]

- If {xn: n = 0, 1, …, N − 1} are uniformly spaced samples of a function x(t) at t = nTs, then the discrete Fourier transform is defined as

  ξk = Σ_{n=0}^{N−1} xn W^{kn}

  where W = exp(−j2π/N) and {ξk} are samples of the frequency-domain response at f = k/(NTs).

Figure 8.17 Sample function of a random process.
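The definition SX(f) = lim (1/2T) E[|ξT(f)|²] suggests a practical estimator: average |DFT|² over many finite records. A sketch assuming NumPy, with a white input so the true spectrum is flat:

    # Sketch: PSD estimate by averaging squared DFT magnitudes over records.
    import numpy as np

    rng = np.random.default_rng(6)
    Ts, N, records = 1e-3, 256, 200
    acc = np.zeros(N)
    for _ in range(records):
        x = rng.standard_normal(N)               # one sample record (white input)
        acc += np.abs(np.fft.fft(x)) ** 2        # |xi_k|^2 for this record

    S_hat = acc / (records * N)                  # average, normalized by record length
    freqs = np.fft.fftfreq(N, d=Ts)              # f = k/(N*Ts)
    print("mean PSD estimate (~1 for unit-variance white input):", S_hat.mean())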



Properties of the Power Spectral Density
- The power spectral density and the autocorrelation function form a Fourier-transform pair:

  SX(f) = ∫_{−∞}^{∞} RX(τ) exp(−j2πfτ) dτ

  RX(τ) = ∫_{−∞}^{∞} SX(f) exp(j2πfτ) df

- We can use this pair of relations to derive some general properties of the power spectral density of a wide-sense stationary process.

Property 1 Mean-Square Value
- The mean-square value of a stationary process equals the total area under the graph of the power spectral density; that is,

  E[|X(t)|²] = ∫_{−∞}^{∞} SX(f) df

Property 2 Nonnegativity
- The power spectral density of a stationary random process is always nonnegative; that is,

  SX(f) ≥ 0, for all f


Property 3 Symmetry
- The power spectral density of a real random process is an even function of frequency; that is,

  SX(−f) = SX(f)

Property 4 Filtered Random Processes
- If a stationary random process X(t) with spectrum SX(f) is passed through a linear filter with frequency response H(f), the spectrum of the stationary output random process Y(t) is given by

  SY(f) = |H(f)|² SX(f)

Example 8.12 Filtering a Random Sinusoid

  H(f) = 1/(1 + j2πRCf)

  cos(2πfcτ) ⇌ (1/2)[δ(f − fc) + δ(f + fc)]
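Property 4 can be evaluated numerically. The sketch below (NumPy assumed) applies SY(f) = |H(f)|² SX(f) to the RC filter of Example 8.12 driven by white noise; RC and N0 are illustrative values:

    # Sketch: output spectrum of an RC low-pass filter driven by white noise.
    import numpy as np

    RC, N0 = 1e-3, 2.0
    f = np.linspace(-2000, 2000, 2001)
    H = 1.0 / (1.0 + 1j * 2 * np.pi * RC * f)   # H(f) = 1/(1 + j*2*pi*RC*f)

    S_X = np.full_like(f, N0 / 2)               # white input spectrum
    S_Y = np.abs(H) ** 2 * S_X                  # S_Y(f) = |H(f)|^2 * S_X(f)

    print("S_Y at f = 0:", S_Y[f == 0][0])      # N0/2
    f3dB = 1 / (2 * np.pi * RC)
    print("S_Y near 3 dB frequency:", S_Y[np.argmin(np.abs(f - f3dB))])  # ~N0/4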

8.9 Gaussian Processes
- The joint density of N samples of a Gaussian process is

  fX(x) = (1/((2π)^{N/2} |Λ|^{1/2})) exp{−(x − μ) Λ⁻¹ (x − μ)ᵀ / 2}

  which is called the multi-variate Gaussian distribution, where
  - X = (X1, X2, …, XN) represents an N-dimensional vector of Gaussian random variables;
  - x = (x1, x2, …, xN) is the corresponding vector of indeterminates;
  - μ = (E[X1], E[X2], …, E[XN]) is the N-dimensional vector of means;
  - Λ is the N-by-N covariance matrix with individual elements given by Λij = Cov(Xi, Xj).
- A Gaussian process has the following properties:
  1. If a Gaussian process is wide-sense stationary, then it is also stationary in the strict sense.
  2. If a Gaussian process is applied to a stable linear filter, then the random process Y(t) produced at the output of the filter is also Gaussian.
  3. If integration is defined in the mean-square sense, then we may interchange the order of the operations of integration and expectation with a Gaussian random process.


8.10 White Noise
- The power spectral density of white noise is independent of frequency:

  SW(f) = N0/2

Figure 8.18 Characteristics of white noise. (a) Power spectral density. (b) Autocorrelation function.

Example 8.13 Ideal Low-Pass Filtered White Noise

  SN(f) = { N0/2,  |f| < B
          { 0,     |f| > B

  RN(τ) = ∫_{−B}^{B} (N0/2) exp(j2πfτ) df = N0 B sinc(2Bτ)

Figure 8.19 Characteristics of low-pass filtered white noise. (a) Power spectral density.
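The sinc-shaped autocorrelation of Example 8.13 can be reproduced by numerically evaluating the inverse-transform integral over the band-limited spectrum. A sketch assuming NumPy (N0, B, and τ are illustrative values):

    # Sketch of Example 8.13: R_N(tau) = N0*B*sinc(2*B*tau) via numerical
    # integration of the band-limited spectrum.
    import numpy as np

    N0, B = 2.0, 1000.0
    tau = 0.25e-3
    f = np.linspace(-B, B, 20001)

    integrand = (N0 / 2) * np.cos(2 * np.pi * f * tau)  # imaginary part integrates to 0
    R = np.sum((integrand[1:] + integrand[:-1]) / 2) * (f[1] - f[0])

    print("numerical R_N(tau):", R)
    print("N0*B*sinc(2*B*tau):", N0 * B * np.sinc(2 * B * tau))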
Example 8.14 RC Low-Pass Filtered White Noise

  H(f) = 1/(1 + j2πfRC)

Figure 8.20 Characteristics of RC-filtered white noise. (a) Low-pass RC filter. (b) Power spectral density of output N(t). (c) Autocorrelation function of N(t).

8.11 Narrowband Noise
- The noise process appearing at the output of a narrowband filter is called narrowband noise.
- The narrowband noise has a spectrum centered at the mid-band frequencies ±fc, as illustrated in Fig. 8.21(a).
- Narrowband noise can be represented mathematically using in-phase and quadrature components.

Figure 8.21 (a) Power spectral density of narrowband noise. (b) Sample function of narrowband noise.


- For the narrowband noise process N(t) of bandwidth 2B and centered on frequency fc of Fig. 8.21,

  N(t) = NI(t) cos(2πfct) − NQ(t) sin(2πfct)

  where NI(t) is called the in-phase component of N(t) and NQ(t) is the quadrature component.
- Given the narrowband noise sample function n(t), the in-phase and quadrature components may be extracted using the scheme shown in Fig. 8.22(a).

Figure 8.22 (a) Extraction of in-phase and quadrature components of narrowband noise process. (b) Generation of narrowband noise process from its in-phase and quadrature components.
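The extraction scheme of Fig. 8.22(a) amounts to mixing with 2cos(2πfct) and −2sin(2πfct) followed by low-pass filtering. A rough sketch assuming NumPy; the test components and the moving-average stand-in for the low-pass filter are illustrative simplifications, so the recovery is only approximate:

    # Sketch: recover in-phase/quadrature components of a narrowband signal.
    import numpy as np

    fs, fc = 100_000.0, 10_000.0
    t = np.arange(0, 0.01, 1 / fs)
    nI = np.cos(2 * np.pi * 200 * t)             # slowly varying test components
    nQ = np.sin(2 * np.pi * 300 * t)             # (assumed, bandwidth << fc)
    n = nI * np.cos(2 * np.pi * fc * t) - nQ * np.sin(2 * np.pi * fc * t)

    def lowpass(x, taps=101):                    # crude moving-average LPF
        return np.convolve(x, np.ones(taps) / taps, mode="same")

    nI_hat = lowpass(2 * n * np.cos(2 * np.pi * fc * t))    # removes 2fc term
    nQ_hat = lowpass(-2 * n * np.sin(2 * np.pi * fc * t))   # removes 2fc term
    print("max in-phase error (away from edges):",
          np.max(np.abs(nI_hat[200:-200] - nI[200:-200])))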

- As an illustration of these properties, consider narrowband noise having the power spectral density shown in Fig. 8.23(a).

Figure 8.23 Characteristics of ideal band-pass filtered white noise. (a) Power spectral density. (b) Autocorrelation function. (c) Power spectral density of in-phase and quadrature components.

Noise-Equivalent Bandwidth
- Suppose a white noise source with spectrum SW(f) = N0/2 is connected to the input of an arbitrary filter of transfer function H(f).


Example 8.15 Ideal Band-Pass Filtered White Noise

  SN(f) = { N0/2,  |f − fc| < B
          { N0/2,  |f + fc| < B
          { 0,     otherwise

  RN(τ) = ∫_{−fc−B}^{−fc+B} (N0/2) exp(j2πfτ) df + ∫_{fc−B}^{fc+B} (N0/2) exp(j2πfτ) df
        = N0 B sinc(2Bτ)[exp(−j2πfcτ) + exp(j2πfcτ)]
        = 2 N0 B sinc(2Bτ) cos(2πfcτ)

  RNI(τ) = RNQ(τ) = 2 N0 B sinc(2Bτ)

- For a low-pass filter, the output noise power is

  PN = ∫_{−∞}^{∞} |H(f)|² SW(f) df = (N0/2) ∫_{−∞}^{∞} |H(f)|² df = N0 ∫_{0}^{∞} |H(f)|² df

  PN = N0 BN |H(0)|²

  BN = ∫_{0}^{∞} |H(f)|² df / |H(0)|²

- The bandwidth BN is called the noise-equivalent bandwidth for a low-pass filter.

- We may define the noise-equivalent bandwidth for a band-pass filter, as illustrated in Fig. 8.25:

  BN = ∫_{0}^{∞} |H(f)|² df / |H(fc)|²

  PN = N0 |H(fc)|² BN

Figure 8.24 Illustration of arbitrary low-pass filter H(f) and ideal low-pass filter of bandwidth BN.

Figure 8.25 Illustration of arbitrary bandpass filter H(f) and ideal bandpass filter of bandwidth BN.
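For the RC low-pass filter of Example 8.14, the noise-equivalent bandwidth has the known closed form BN = 1/(4RC), which the definition above reproduces numerically. A sketch assuming NumPy:

    # Sketch: B_N = int_0^inf |H(f)|^2 df / |H(0)|^2 for the RC low-pass
    # filter, H(f) = 1/(1 + j*2*pi*f*RC), so |H(0)|^2 = 1.
    import numpy as np

    RC = 1e-3
    f = np.linspace(0, 500 / RC, 2_000_001)          # wide grid approximating [0, inf)
    H2 = 1.0 / (1.0 + (2 * np.pi * RC * f) ** 2)     # |H(f)|^2

    BN = np.sum((H2[1:] + H2[:-1]) / 2) * (f[1] - f[0])  # trapezoidal integral
    print("numerical B_N:", BN)
    print("1/(4*RC):     ", 1 / (4 * RC))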
