Outline
• 1 Introduction
• 2 Geometric Representation of Signals
– Gram-Schmidt Orthogonalization Procedure
• 3 Conversion of the AWGN into a Vector Channel
Introduction – the Model
• We consider the following model of a generic
transmission system (digital source):
– A message source transmits one symbol every T sec
– Symbols belong to an alphabet of M symbols (m1, m2, …, mM)
• Binary – symbols are 0 and 1
• Quaternary PCM – symbols are 00, 01, 10, 11
Transmitter Side
• Symbol generation (message) is probabilistic, with a priori probabilities p1, p2, …, pM, or
• Symbols are equally likely
• So, the probability that symbol mi will be emitted is:

$$p_i = P(m_i) = \frac{1}{M}, \quad i = 1, 2, \ldots, M \qquad (1)$$
• The transmitter takes the symbol (data) mi (digital message source output) and encodes it into a distinct signal si(t).
• The signal si(t) occupies the whole slot T allotted to symbol mi.
• si(t) is a real-valued energy signal (a signal with finite energy):

$$E_i = \int_0^T s_i^2(t)\,dt, \quad i = 1, 2, \ldots, M \qquad (2)$$
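As a quick numeric illustration of Eq. (2), the energy integral over one symbol slot can be approximated by a Riemann sum. A minimal sketch, assuming an illustrative unit-energy sinusoid (not a signal from the lecture):

```python
import math

# Approximate the energy E = integral of s^2(t) dt over [0, T]
# by a Riemann sum. The test signal is an assumed example:
# s(t) = sqrt(2/T) * sin(2*pi*t/T), which has unit energy over [0, T].
T = 1.0       # symbol duration
n = 10_000    # number of samples
dt = T / n

def s(t):
    return math.sqrt(2.0 / T) * math.sin(2.0 * math.pi * t / T)

energy = sum(s(k * dt) ** 2 for k in range(n)) * dt  # Eq. (2), discretized
print(round(energy, 4))  # ≈ 1.0
```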
Channel Assumptions:
• Linear, with a bandwidth wide enough to accommodate the signal si(t) with no or negligible distortion
• The channel noise w(t) is a zero-mean white Gaussian noise process – AWGN
– additive noise
– the received signal may be expressed as:

$$x(t) = s_i(t) + w(t), \quad \begin{cases} 0 \le t \le T \\ i = 1, 2, \ldots, M \end{cases} \qquad (3)$$
Receiver Side
• Observes the received signal x(t) for a duration of T sec
• Makes an estimate of the transmitted signal si(t) (equivalently, of the symbol mi)
• The process is statistical
– presence of noise
– errors
• So, the receiver has to be designed to minimize the average probability of symbol error (Pe):

$$P_e = \sum_{i=1}^{M} p_i\, P(\hat{m} \ne m_i \mid m_i) \qquad (4)$$

where mi is the symbol sent and $P(\hat{m} \ne m_i \mid m_i)$ is the conditional error probability given that the ith symbol was sent.
2. Geometric Representation of Signals
• Objective: to represent any set of M energy signals {si(t)} as linear combinations of N orthonormal basis functions, where N ≤ M
• Real-valued energy signals s1(t), s2(t), …, sM(t), each of duration T sec:

$$s_i(t) = \sum_{j=1}^{N} s_{ij}\,\phi_j(t), \quad \begin{cases} 0 \le t \le T \\ i = 1, 2, \ldots, M \end{cases} \qquad (5)$$

where si(t) is the energy signal, the sij are the coefficients, and the φj(t) are the orthonormal basis functions.
• Coefficients:

$$s_{ij} = \int_0^T s_i(t)\,\phi_j(t)\,dt, \quad \begin{cases} i = 1, 2, \ldots, M \\ j = 1, 2, \ldots, N \end{cases} \qquad (6)$$

• Real-valued basis functions are orthonormal:

$$\int_0^T \phi_i(t)\,\phi_j(t)\,dt = \delta_{ij} = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \ne j \end{cases} \qquad (7)$$
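Eqs. (5)–(7) can be checked numerically. A minimal sketch, under assumed illustrative choices: two orthonormal basis functions on [0, T], a signal built from them, and recovery of its coefficients by the correlation integral of Eq. (6); integrals are approximated by Riemann sums.

```python
import math

# Assumed example basis: phi1, phi2 are orthonormal on [0, T].
T = 1.0
n = 10_000
dt = T / n

def phi1(t): return math.sqrt(2 / T) * math.sin(2 * math.pi * t / T)
def phi2(t): return math.sqrt(2 / T) * math.cos(2 * math.pi * t / T)

def integral(f):  # Riemann-sum approximation of the integral over [0, T]
    return sum(f(k * dt) for k in range(n)) * dt

# Orthonormality, Eq. (7)
ip11 = integral(lambda t: phi1(t) ** 2)       # ≈ 1
ip12 = integral(lambda t: phi1(t) * phi2(t))  # ≈ 0

# Assumed signal s(t) = 3*phi1(t) - 2*phi2(t); recover s_1, s_2 via Eq. (6)
def s(t): return 3 * phi1(t) - 2 * phi2(t)
c1 = integral(lambda t: s(t) * phi1(t))
c2 = integral(lambda t: s(t) * phi2(t))
print(round(ip11, 3), round(c1, 3), round(c2, 3))  # ≈ 1.0, 3.0, -2.0
```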
• The set of coefficients can be viewed as an N-dimensional vector, denoted by si
• si bears a one-to-one relationship with the transmitted signal si(t)
Figure 2
(a) Synthesizer for generating the signal si(t). (b) Analyzer for
generating the set of signal vectors si.
So,
• Each signal in the set {si(t)} is completely determined by the vector of its coefficients:

$$\mathbf{s}_i = \begin{bmatrix} s_{i1} \\ s_{i2} \\ \vdots \\ s_{iN} \end{bmatrix}, \quad i = 1, 2, \ldots, M \qquad (8)$$
Finally,
• The signal vector si concept can be extended to an N-dimensional Euclidean space (2D, 3D, etc.)
• Provides the mathematical basis for the geometric representation of energy signals that is used in noise analysis
• Allows definition of
– Length of vectors (norm, absolute value)
– Angles between vectors
– Squared length (the inner product of si with itself):

$$\|\mathbf{s}_i\|^2 = \mathbf{s}_i^T \mathbf{s}_i = \sum_{j=1}^{N} s_{ij}^2, \quad i = 1, 2, \ldots, M \qquad (9)$$

where the superscript T denotes matrix transposition.
Figure 3
Illustrating the geometric
representation of signals for
the case when N = 2 and M = 3.
(two dimensional space, three
signals)
Also,
What is the relation between the vector representation of a signal and its energy value?
• The energy of a signal is:

$$E_i = \int_0^T s_i^2(t)\,dt \qquad (10)$$

• where si(t) is as in (5): $s_i(t) = \sum_{j=1}^{N} s_{ij}\,\phi_j(t)$
• After substitution:

$$E_i = \int_0^T \left( \sum_{j=1}^{N} s_{ij}\,\phi_j(t) \right) \left( \sum_{k=1}^{N} s_{ik}\,\phi_k(t) \right) dt \qquad (11)$$

• The φj(t) are orthonormal, so finally we have:

$$E_i = \sum_{j=1}^{N} s_{ij}^2 = \|\mathbf{s}_i\|^2 \qquad (12)$$
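The identity of Eq. (12) can be verified numerically: the energy computed from the waveform integral, Eq. (10), matches the sum of squared coefficients. A minimal sketch with an assumed orthonormal basis and an assumed coefficient vector (3, −2):

```python
import math

# Assumed orthonormal basis on [0, T] (illustrative, not from the lecture)
T = 1.0
n = 10_000
dt = T / n

def phi1(t): return math.sqrt(2 / T) * math.sin(2 * math.pi * t / T)
def phi2(t): return math.sqrt(2 / T) * math.cos(2 * math.pi * t / T)

# Signal with assumed coefficient vector s_i = (3, -2)
coeffs = [3.0, -2.0]
def s(t): return coeffs[0] * phi1(t) + coeffs[1] * phi2(t)

energy_integral = sum(s(k * dt) ** 2 for k in range(n)) * dt  # Eq. (10)
energy_coeffs = sum(c * c for c in coeffs)                    # Eq. (12)
print(round(energy_integral, 3), energy_coeffs)  # both ≈ 13
```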
Formulas for two signals
• Assume we have a pair of signals, si(t) and sk(t), each represented by its vector of coefficients.
• Their inner product is:

$$\int_0^T s_i(t)\,s_k(t)\,dt = \mathbf{s}_i^T \mathbf{s}_k \qquad (13)$$

• The squared Euclidean distance between them is:

$$\|\mathbf{s}_i - \mathbf{s}_k\|^2 = \sum_{j=1}^{N} (s_{ij} - s_{kj})^2 = \int_0^T \big( s_i(t) - s_k(t) \big)^2\,dt \qquad (14)$$
Angle between two signals
• The cosine of the angle Θik between two signal vectors si and
sk is equal to the inner product of these two vectors, divided
by the product of their norms:
$$\cos\theta_{ik} = \frac{\mathbf{s}_i^T \mathbf{s}_k}{\|\mathbf{s}_i\|\,\|\mathbf{s}_k\|} \qquad (15)$$
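The norm, distance, and angle formulas above reduce to ordinary vector operations on the coefficient vectors. A minimal sketch with assumed illustrative vectors:

```python
import math

# Assumed example signal vectors (not from the lecture)
si = [3.0, -2.0]
sk = [1.0, 4.0]

def inner(a, b):  # inner product s_i^T s_k, Eq. (13)
    return sum(x * y for x, y in zip(a, b))

norm_si = math.sqrt(inner(si, si))                 # ||s_i||, Eq. (9)
dist2 = sum((x - y) ** 2 for x, y in zip(si, sk))  # ||s_i - s_k||^2, Eq. (14)
cos_theta = inner(si, sk) / (norm_si * math.sqrt(inner(sk, sk)))  # Eq. (15)

print(round(norm_si, 4))    # sqrt(13)
print(dist2)                # (3-1)^2 + (-2-4)^2 = 40.0
print(round(cos_theta, 4))  # -5 / sqrt(13 * 17)
```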
Schwarz Inequality
• Defined as:

$$\left( \int_{-\infty}^{\infty} s_1(t)\,s_2(t)\,dt \right)^2 \le \left( \int_{-\infty}^{\infty} s_1^2(t)\,dt \right) \left( \int_{-\infty}^{\infty} s_2^2(t)\,dt \right) \qquad (16)$$

with equality if and only if s2(t) = c·s1(t) for some constant c.
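For sampled signals, Eq. (16) becomes the discrete Cauchy-Schwarz inequality on the sample vectors. A minimal check with assumed illustrative vectors:

```python
# Discrete form of Eq. (16): (sum a_k b_k)^2 <= (sum a_k^2)(sum b_k^2).
# The sample vectors below are assumed examples.
a = [0.5, -1.0, 2.0, 0.0]
b = [1.5, 0.25, -0.75, 3.0]

lhs = sum(x * y for x, y in zip(a, b)) ** 2
rhs = sum(x * x for x in a) * sum(y * y for y in b)
print(lhs <= rhs)  # True
```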
Gram-Schmidt Orthogonalization Procedure
Assume a set of M energy signals denoted by s1(t), s2(t), …, sM(t).
1.–3. The first basis function is obtained by normalizing s1(t) to unit energy, $\phi_1(t) = s_1(t)/\sqrt{E_1}$, so that $s_1(t) = s_{11}\,\phi_1(t)$ with $s_{11} = \sqrt{E_1}$; the projection of s2(t) onto φ1(t) is $s_{21} = \int_0^T s_2(t)\,\phi_1(t)\,dt$.
4. If we introduce the intermediate function g2(t) as:

$$g_2(t) = s_2(t) - s_{21}\,\phi_1(t) \qquad (22)$$

which is orthogonal to φ1(t),

5. we can define the second basis function φ2(t) as:

$$\phi_2(t) = \frac{g_2(t)}{\sqrt{\int_0^T g_2^2(t)\,dt}} \qquad (23)$$

• Note that φ2(t) has unit energy,

$$\int_0^T \phi_2^2(t)\,dt = 1 \quad \text{(look at (23))}$$

and that φ1(t) and φ2(t) are orthogonal:

$$\int_0^T \phi_1(t)\,\phi_2(t)\,dt = 0$$
And so on for the N-dimensional space…
• In general, the intermediate function gi(t) is defined as:

$$g_i(t) = s_i(t) - \sum_{j=1}^{i-1} s_{ij}\,\phi_j(t) \qquad (25)$$

and the ith basis function φi(t) is obtained by normalizing gi(t) to unit energy, as in (23).
Special case:
• For the special case of i = 1, gi(t) reduces to si(t).
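The whole procedure of Eqs. (22)–(25) can be sketched on sampled signals, with integrals approximated by Riemann sums. The two input signals below are assumed illustrative examples:

```python
import math

T = 1.0
n = 1000
dt = T / n

def inner(f, g):  # approximate integral of f(t)*g(t) over [0, T]
    return sum(a * b for a, b in zip(f, g)) * dt

def gram_schmidt(signals):
    """Return orthonormal basis functions (as sample lists) for the signals."""
    basis = []
    for s in signals:
        # g_i(t) = s_i(t) - sum_j s_ij * phi_j(t), Eq. (25)
        g = list(s)
        for phi in basis:
            sij = inner(s, phi)
            g = [gv - sij * pv for gv, pv in zip(g, phi)]
        energy = inner(g, g)
        if energy > 1e-12:  # skip signals already in the span of the basis
            basis.append([gv / math.sqrt(energy) for gv in g])  # Eq. (23)
    return basis

# Two assumed example signals: a sine and a phase-shifted sine
ts = [k * dt for k in range(n)]
s1 = [math.sin(2 * math.pi * t / T) for t in ts]
s2 = [math.sin(2 * math.pi * t / T + 0.7) for t in ts]

phis = gram_schmidt([s1, s2])
print(len(phis))                          # 2 basis functions
print(round(inner(phis[0], phis[0]), 3))  # ≈ 1.0 (unit energy)
```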
Conversion of the Continuous AWGN Channel into a Vector Channel
• Suppose that the input x(t) is not any signal, but specifically the signal at the receiver side, defined in accordance with an AWGN channel:

$$x(t) = s_i(t) + w(t), \quad \begin{cases} 0 \le t \le T \\ i = 1, 2, \ldots, M \end{cases} \qquad (28)$$

• So the output of the jth correlator (Fig. 2b) is:

$$x_j = \int_0^T x(t)\,\phi_j(t)\,dt = s_{ij} + w_j \qquad (29)$$

where

$$s_{ij} = \int_0^T s_i(t)\,\phi_j(t)\,dt \qquad (30)$$

$$w_j = \int_0^T w(t)\,\phi_j(t)\,dt \qquad (31)$$
Now,
• Consider a random process X′(t), with sample function x′(t), which is related to the received signal x(t) as follows:

$$x'(t) = x(t) - \sum_{j=1}^{N} x_j\,\phi_j(t) \qquad (32)$$

• Using (28), (29) and (30):

$$x'(t) = x(t) - \sum_{j=1}^{N} (s_{ij} + w_j)\,\phi_j(t) = w'(t) \qquad (33)$$

which means that the sample function x′(t) depends only on the channel noise!
• The received signal can therefore be expressed as:

$$x(t) = \sum_{j=1}^{N} x_j\,\phi_j(t) + x'(t) = \sum_{j=1}^{N} x_j\,\phi_j(t) + w'(t) \qquad (34)$$
Statistical Characterization
• The received signal (output of the correlator of Fig. 2b) is a random signal. To describe it we need to use statistical methods – mean and variance.
• The assumptions are:
– X(t) denotes a random process, a sample function of which is represented by the received signal x(t).
– Xj denotes a random variable whose sample value is represented by the correlator output xj, j = 1, 2, …, N.
– We have assumed AWGN, so the noise is Gaussian, so X(t) is a Gaussian process and, being a Gaussian RV, Xj is described fully by its mean value and variance.
Mean Value
• Let Wj denote the random variable whose sample value wj is produced by the jth correlator in response to the Gaussian noise component w(t). It has zero mean, by definition of the AWGN model.
• Then the mean of Xj depends only on sij:

$$\mu_{X_j} = E[X_j] = E[s_{ij} + W_j] = s_{ij} + E[W_j] = s_{ij} \qquad (35)$$
Variance
• Starting from the definition, we substitute using (29) and (31):

$$\sigma_{X_j}^2 = \operatorname{var}[X_j] = E\!\left[ (X_j - s_{ij})^2 \right] = E\!\left[ W_j^2 \right] \qquad (36)$$

$$\sigma_{X_j}^2 = E\!\left[ \int_0^T W(t)\,\phi_j(t)\,dt \int_0^T W(u)\,\phi_j(u)\,du \right] = E\!\left[ \int_0^T\!\!\int_0^T \phi_j(t)\,\phi_j(u)\,W(t)\,W(u)\,dt\,du \right] \qquad (37)$$

$$\sigma_{X_j}^2 = \int_0^T\!\!\int_0^T \phi_j(t)\,\phi_j(u)\,E[W(t)W(u)]\,dt\,du = \int_0^T\!\!\int_0^T \phi_j(t)\,\phi_j(u)\,R_W(t,u)\,dt\,du \qquad (38)$$

where RW(t, u) is the autocorrelation function of the noise process.
• Because the noise is stationary with a constant power spectral density, the autocorrelation function can be expressed as:

$$R_W(t, u) = \frac{N_0}{2}\,\delta(t - u) \qquad (39)$$

• After substitution we finally have:

$$\sigma_{X_j}^2 = \frac{N_0}{2} \int_0^T\!\!\int_0^T \phi_j(t)\,\phi_j(u)\,\delta(t - u)\,dt\,du = \frac{N_0}{2} \int_0^T \phi_j^2(t)\,dt = \frac{N_0}{2} \qquad (40)$$

• The correlator outputs, denoted by Xj, all have variance equal to the power spectral density N0/2 of the noise process W(t).
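The results of Eqs. (35) and (40) can be checked by a Monte Carlo sketch (assumed illustrative parameters): simulate the jth correlator output Xj = sij + Wj for a discretized AWGN channel and estimate its mean and variance. In discrete time, white noise of PSD N0/2 is modeled by i.i.d. Gaussian samples of variance (N0/2)/dt.

```python
import math
import random

random.seed(1)
T, n = 1.0, 200
dt = T / n
N0 = 2.0     # so the predicted variance N0/2 = 1.0
sij = 0.75   # assumed deterministic signal coefficient (illustrative)

# Unit-energy basis function, sampled
phi = [math.sqrt(2 / T) * math.sin(2 * math.pi * k * dt / T) for k in range(n)]

def correlator_output():
    w = [random.gauss(0.0, math.sqrt((N0 / 2) / dt)) for _ in range(n)]
    wj = sum(wk * pk for wk, pk in zip(w, phi)) * dt  # Eq. (31), discretized
    return sij + wj                                   # Eq. (29)

trials = 5000
xs = [correlator_output() for _ in range(trials)]
mean = sum(xs) / trials
var = sum((x - mean) ** 2 for x in xs) / trials
print(round(mean, 2))  # ≈ s_ij = 0.75
print(round(var, 2))   # ≈ N0/2 = 1.0
```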
Properties (without proof)
• The Xj are mutually uncorrelated
• The Xj are statistically independent (follows from the above because the Xj are Gaussian)
• and for a memoryless channel the following equation is true:

$$f_{\mathbf{X}}(\mathbf{x} \mid m_i) = \prod_{j=1}^{N} f_{X_j}(x_j \mid m_i), \quad i = 1, 2, \ldots, M \qquad (44)$$
• Define (construct) a vector X of N random variables X1, X2, …, XN, whose elements are independent Gaussian RVs with mean values sij (output of the correlator, the deterministic part of the signal, defined by the signal transmitted) and variance equal to N0/2 (output of the correlator, the random part, the noise added by the channel).
• Then X1, X2, …, XN, the elements of X, are statistically independent.
• So, we can express the conditional probability density of X, given si(t) (correspondingly, symbol mi), as a product of the conditional density functions fXj of its individual elements.
NOTE: This is equal to finding an expression for the probability of a received symbol given that a specific symbol was sent, assuming a memoryless channel.
• …that is:

$$f_{\mathbf{X}}(\mathbf{x} \mid m_i) = \prod_{j=1}^{N} f_{X_j}(x_j \mid m_i), \quad i = 1, 2, \ldots, M \qquad (44)$$
• Since each Xj is Gaussian with mean sij and variance N0/2:

$$f_{X_j}(x_j \mid m_i) = \frac{1}{\sqrt{\pi N_0}} \exp\!\left( -\frac{1}{N_0}(x_j - s_{ij})^2 \right), \quad \begin{cases} j = 1, 2, \ldots, N \\ i = 1, 2, \ldots, M \end{cases} \qquad (45)$$

$$f_{\mathbf{X}}(\mathbf{x} \mid m_i) = (\pi N_0)^{-N/2} \exp\!\left( -\frac{1}{N_0} \sum_{j=1}^{N} (x_j - s_{ij})^2 \right), \quad i = 1, 2, \ldots, M \qquad (46)$$
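A minimal sketch of Eqs. (45)–(46), with assumed illustrative vectors: evaluate the per-component Gaussian densities and check that their product equals the joint expression of Eq. (46).

```python
import math

N0 = 2.0
si = [1.0, -1.0]  # assumed signal vector (illustrative)
x = [1.2, -0.7]   # assumed observation vector (illustrative)

def f_component(xj, sij):  # Eq. (45)
    return math.exp(-((xj - sij) ** 2) / N0) / math.sqrt(math.pi * N0)

def f_joint(x, si):        # Eq. (46)
    ssq = sum((xj - sij) ** 2 for xj, sij in zip(x, si))
    return (math.pi * N0) ** (-len(x) / 2) * math.exp(-ssq / N0)

prod = 1.0
for xj, sij in zip(x, si):
    prod *= f_component(xj, sij)

print(abs(prod - f_joint(x, si)) < 1e-12)  # True: the two forms agree
```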
• If we go back to the formulation of the received signal through an AWGN channel, (34):

$$x(t) = \sum_{j=1}^{N} x_j\,\phi_j(t) + x'(t) = \sum_{j=1}^{N} x_j\,\phi_j(t) + w'(t) \qquad (34)$$
Finally,
• The AWGN channel is equivalent to an N-dimensional vector channel, described by the observation vector:

$$\mathbf{x} = \mathbf{s}_i + \mathbf{w}, \quad i = 1, 2, \ldots, M \qquad (48)$$
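Eq. (48) can be simulated directly in vector form (assumed illustrative parameters): each channel use adds a noise vector w with i.i.d. zero-mean Gaussian components of variance N0/2 to the signal vector si, so averaging many observations recovers si.

```python
import math
import random

random.seed(3)
N0 = 1.0
si = [1.0, -2.0]  # assumed transmitted signal vector (illustrative)
trials = 5000

sums = [0.0, 0.0]
for _ in range(trials):
    x = [s + random.gauss(0.0, math.sqrt(N0 / 2)) for s in si]  # Eq. (48)
    for j in range(2):
        sums[j] += x[j]

means = [v / trials for v in sums]
print([round(m, 1) for m in means])  # ≈ [1.0, -2.0]
```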
Thank You