Figure 1.18. A complex number z can be represented in Cartesian coordinates (x, y) or polar coordinates (R, θ).

 1.3 Frequency Analysis


 1.3.1 A Review of Complex Numbers
A complex number is represented in the form

    z = x + jy,

where x and y are real numbers satisfying the usual rules of addition and multiplication, and the symbol j, called the imaginary unit, has the property

    j^2 = −1.

The numbers x and y are called the real and imaginary parts of z, respectively, and are denoted by

    x = Re(z),    y = Im(z).

We say that z is real if y = 0, while it is purely imaginary if x = 0.
Example 1.10. The complex number z = 3 + 2j has real part 3 and imaginary part 2,
while the real number 5 can be viewed as the complex number z = 5 + 0j whose real part
is 5 and imaginary part is 0.
Geometrically, complex numbers can be represented as vectors in the plane (Fig. 1.18).
We will call the xy-plane, when viewed in this manner, the complex plane, with the x-
axis designated as the real axis, and the y-axis as the imaginary axis. We designate the
complex number zero as the origin. Thus,
x + jy = 0 means x = y = 0.
In addition, since two points in the plane are the same if and only if both their x- and
y-coordinates agree, we can define equality of two complex numbers as follows:
x1 + jy1 = x2 + jy2 means x1 = x2 and y1 = y2 .
Thus, we see that a single statement about the equality of two complex quantities
actually contains two real equations.
Definition 1.1 (Complex Arithmetic). Let z1 = x1 + jy1 and z2 = x2 + jy2. Then we define:
(a) z1 ± z2 = (x1 ± x2) + j(y1 ± y2);
(b) z1 z2 = (x1 x2 − y1 y2) + j(x1 y2 + x2 y1);
(c) for z2 ≠ 0, w = z1/z2 is the complex number for which z1 = z2 w.
Note that, instead of the Cartesian coordinates x and y, we could use polar coordinates to represent points in the plane. The polar coordinates are the radial distance R and the angle θ, as illustrated in Fig. 1.18. The relationship between the two sets of coordinates is:

    x = R cos θ,
    y = R sin θ,
    R = √(x^2 + y^2) = |z|,
    θ = arctan(y/x).

Note that R is called the modulus, or the absolute value, of z, and is alternatively denoted |z|. Thus, the polar representation is:

    z = |z| cos θ + j|z| sin θ = |z|(cos θ + j sin θ).
Definition 1.2 (Complex Exponential Function). The complex exponential function, denoted by e^z, or exp(z), is defined by

    e^z = e^{x+jy} = e^x (cos y + j sin y).

In particular, if x = 0, we have Euler's equation:

    e^{jy} = cos y + j sin y.

Comparing this with the terms in the polar representation of a complex variable, we see that any complex variable can be written as:

    z = |z| e^{jθ}.
Properties of Complex Exponentials.

    cos θ = (e^{jθ} + e^{−jθ})/2,
    sin θ = (e^{jθ} − e^{−jθ})/(2j),
    |e^{jθ}| = 1,
    e^{z1} e^{z2} = e^{z1+z2},
    e^{−z} = 1/e^z,
    e^{z+2πjn} = e^x (cos(y + 2πn) + j sin(y + 2πn)) = e^x (cos y + j sin y) = e^z,    for any integer n.
Figure 1.19. Multiplication of two complex numbers z1 = |z1| e^{jθ1} and z2 = |z2| e^{jθ2} (z2 is not shown). The result is z = |z| e^{jθ} with |z| = |z1| · |z2| and θ = θ1 + θ2.

DT complex exponential functions whose frequencies differ by 2π are thus identical:

    e^{j(ω+2π)n} = e^{jωn+2πjn} = e^{jωn}.

We have seen examples of this phenomenon before, when we discussed DT sinusoids.
It follows from the multiplication rule that

    z1 z2 = |z1| e^{jθ1} · |z2| e^{jθ2} = |z1||z2| e^{j(θ1+θ2)}.
Therefore, in order to multiply two complex numbers,
• add the angles;
• multiply the absolute values.
Multiplication of two complex numbers is illustrated in Fig. 1.19.
Definition 1.3 (Complex Conjugate). If z = x + jy, then the complex conjugate of z is z* = x − jy (sometimes also denoted z̄).
This definition is illustrated in Fig. 1.20(a). Note that, if z = |z| e^{jθ}, then z* = |z| e^{−jθ}. Here are some other useful identities involving complex conjugates:
    Re(z) = (z + z*)/2,
    Im(z) = (z − z*)/(2j),
    |z| = √(z z*),
    (z*)* = z,
    z* = z ⇔ z is real,
    (z1 + z2)* = z1* + z2*,
    (z1 z2)* = z1* z2*.
Figure 1.20. (a) Complex number z and its complex conjugate z*. (b) Illustrations to Example 1.11.

Example 1.11. Let us compute the various quantities defined above for z = 1 + j.

1. z* = 1 − j.

2. |z| = √(1^2 + 1^2) = √2. Alternatively,
   |z| = √(z z*) = √((1 + j)(1 − j)) = √(1 + j − j − j^2) = √2.

3. Re(z) = Im(z) = 1.

4. Polar representation: z = √2 (cos(π/4) + j sin(π/4)) = √2 e^{jπ/4}.

5. To compute z^2, square the absolute value and double the angle:

       z^2 = 2(cos(π/2) + j sin(π/2)) = 2j = 2 e^{jπ/2}.

   The same answer is obtained from the Cartesian representation:

       (1 + j)(1 + j) = 1 + 2j + j^2 = 1 + 2j − 1 = 2j.

6. To compute 1/z, multiply both the numerator and the denominator by z*:

       1/z = z*/(z z*) = (1 − j)/((1 + j)(1 − j)) = (1 − j)/2 = 1/2 − j/2.

   Alternatively, use the polar representation:

       1/z = 1/(√2 e^{jπ/4}) = (1/√2) e^{−jπ/4}
           = (1/√2)(cos(−π/4) + j sin(−π/4)) = (1/√2)(√2/2 − j √2/2)
           = 1/2 − j/2.
We can check to make sure that (1/z) · z = 1:

    (1/2 − j/2)(1 + j) = 1/2 + j/2 − j/2 − j^2/2 = 1/2 + 1/2 = 1.

These computations are illustrated in Fig. 1.20(b).

 1.3.2 A Review of Basic Linear Algebra


Frequency analysis involves studying representations of the form

    s = Σ_k ak gk,        (1.6)

where the signal s being analyzed is written as a linear combination of a set of orthogonal sinusoidal signals gk. For example, Eq. (1.6) is called the DT Fourier series if the signals gk are DT complex exponential signals representing different frequency components of a periodic DT signal s. Similarly, if the gk are CT complex exponential signals and s is a CT periodic signal, representation (1.6) becomes the CT Fourier series. There are, moreover, other ways of decomposing a signal into basic components which are not necessarily frequency components. In fact, we have already seen such a decomposition when deriving the convolution formula (Section 1.2.3), where gk = δk were shifted unit impulse signals.
There are many other reasons (some of which will become clear later in the course)
for studying representations of the form (1.6) in general, rather than focusing solely on
Fourier series. We pose the following questions regarding Eq. (1.6):

1. Given an arbitrary signal s and a set of pairwise orthogonal signals gk, does representation (1.6) exist? In other words, can we find coefficients ak such that Eq. (1.6) is satisfied?

2. If so, what are the coefficients ak ?

3. If not, can we at least find coefficients ak such that Eq. (1.6) is approximately
satisfied?

Precise answers to these three questions are impossible unless we define what we mean
by “orthogonal” and “approximately”. In order to do this, we generalize the notions
of orthogonality and length found in planar geometry, to spaces of signals, via the
following procedure:

• define an appropriate space of signals;

• define what it means for two signals to be orthogonal, by appropriately generalizing the notion of orthogonality of two vectors in a plane;

• define the distance between any two signals, by appropriately generalizing the
notion of distance in a plane.
We address the first item on this agenda by initially restricting our attention to complex-valued DT signals defined on a fixed interval, say, [0, N − 1], where N is some fixed positive integer. In other words, we consider DT signals whose domain is the set
{0, 1, . . . , N − 1} and whose range is C. Each such signal s can be represented as an N-dimensional vector s, by recording its N samples in a column:

    s = (s(0), s(1), . . . , s(N − 1))^T.
The collection of all such signals is therefore the same as the set of all N-dimensional complex-valued vectors, which we call C^N.
Several remarks about notational conventions are in order here.
• In writing, vectors are usually denoted by underlining them: s. In printed texts,
however, it is customary to use boldface letters (s) to denote vectors.
• A transpose of a column vector is a row vector; that is why we may write the column vector above compactly as s = (s(0), s(1), . . . , s(N − 1))^T, the transpose of a row vector.
• We will occasionally be using vectors to represent signals defined for n = 1, 2, . . . , N rather than n = 0, 1, . . . , N − 1. In this case,

    s = (s(1), s(2), . . . , s(N))^T.

• Although we will mostly work with complex-valued signals, sometimes it is useful to consider only real-valued signals. The corresponding set of all N-dimensional real-valued vectors is called R^N. Since any real-valued vector can be viewed as a complex-valued vector with a zero imaginary part, R^N is a subset of C^N.
• Even though the domain of definition of our signals is a set consisting of N
points, it is sometimes helpful to pretend that the signals are actually defined
for all integer n. The two most commonly used ways of doing this are padding with
zeros and periodic extension. The former assumes that the signal values outside
of n = 0, 1, . . . , N − 1 are all zero:
s(n) = 0, n < 0 or n ≥ N.
The latter assumes that the signals under consideration are periodic with pe-
riod N :
s(n) = s(n mod N ), n < 0 or n ≥ N,
in other words, s(N ) = s(0), s(N + 1) = s(1), etc.
Figure 1.21. Two vectors in the real plane are orthogonal if and only if they form a 90° angle.

• We will often be using the symbol “∈”, which means “is an element of”. For example, s ∈ C^N means: “s is an element of C^N”.

Inner Products and Orthogonality

Definition 1.4. The inner product of two vectors s ∈ C^N and g ∈ C^N is denoted by ⟨s, g⟩ and is defined by

    ⟨s, g⟩ = Σ_{n=0}^{N−1} s(n)(g(n))*.

Two vectors s and g are defined to be orthogonal (denoted s ⊥ g) if their inner product is zero:

    s ⊥ g means ⟨s, g⟩ = 0.

Example 1.12. Let us see whether our definition of orthogonality makes sense for vectors in the real plane R^2. Two vectors in the plane are called orthogonal if the angle between them is 90°. Let s = (x1, y1)^T ∈ R^2 and g = (x2, y2)^T ∈ R^2 be the two vectors in the real plane shown in Fig. 1.21. Their inner product is then ⟨s, g⟩ = x1 x2 + y1 y2. If the inner product is equal to zero, then

    −x1/y1 = y2/x2,

which says that the right triangles △OBA and △CDO are similar. From the similarity of these triangles, we get ∠AOB = ∠OCD. Therefore,

    ∠COA = 180° − ∠AOB − ∠DOC
         = 180° − ∠OCD − ∠DOC
         = 90°.

Similar reasoning applies if the two vectors are oriented differently with respect to the
coordinate axes. Therefore, saying that the inner product of two vectors in the real
plane is zero is equivalent to saying that they form a 90° angle. This shows that our definition of orthogonality in C^N is an appropriate generalization of orthogonality in the real plane.
It is easily seen that the inner product of a vector with itself is always a real number:

    ⟨s, s⟩ = Σ_{n=0}^{N−1} s(n)(s(n))* = Σ_{n=0}^{N−1} |s(n)|^2.

This number is called the energy of the vector.

Definition 1.5. The inner product of a vector s with itself is called the energy of s. The square root of the energy is called the norm^4 of s, and is denoted ‖s‖:

    ‖s‖ = √⟨s, s⟩.
 
Example 1.13. The norm of the vector (x1, y1)^T in Fig. 1.21 is √(x1^2 + y1^2), which is simply the length of the segment OA. Our definition of a norm therefore generalizes the familiar concept of the length of a vector in the real plane.
Here are some properties of inner products and norms:

1. ⟨g, s⟩ = ⟨s, g⟩*.
2. ⟨a1 s1 + a2 s2, g⟩ = a1 ⟨s1, g⟩ + a2 ⟨s2, g⟩.
3. ⟨s, a1 g1 + a2 g2⟩ = a1* ⟨s, g1⟩ + a2* ⟨s, g2⟩.
4. ‖a s‖ = |a| · ‖s‖.
5. Pythagoras's theorem: the sum of the energies of two orthogonal vectors is equal to the energy of their sum, i.e.,

    if ⟨s, g⟩ = 0, then ‖s‖^2 + ‖g‖^2 = ‖s + g‖^2.

Fig. 1.21 and the two examples above illustrate the fact that our definitions of orthogonality and the norm in C^N generalize the corresponding concepts from planar geometry. Therefore, when considering vectors in C^N it is often helpful to draw planar pictures to guide our intuition. Note, however, that proofs of facts concerning inner products and norms in C^N (for example, the properties listed above) cannot be based solely on pictures: the pictures are there to guide our reasoning, but rigorous proofs must rely only on definitions and properties proved before. For example, in proving Property 1 above we can only rely on our definition of the inner product. Once Property 1 is proved, we can use both Property 1 (if we need to) and the definition of the inner product in proving Property 2, etc.
^4 We will soon see that the term norm is actually more general. The specific norm that is the square root of the energy is called the Euclidean norm, the ℓ2 norm, or the 2-norm.
Figure 1.22. (a) The orthogonal projection s_g of a vector s onto another vector g. (b) The orthogonal projection s_G of a vector s onto a space G can be obtained by projecting s onto g1 and g2 and adding the results, where {g1, g2} is an orthogonal basis for G.

Orthogonal Projections

In the real plane, the coordinates of a vector are given by the projections of the vector onto the coordinate axes. For example, the projections of the vector (x1, y1)^T in Fig. 1.21 onto the x-axis and y-axis are the vectors (x1, 0)^T and (0, y1)^T, respectively. We will soon see that the coordinates of a signal in a Fourier basis, that is, the Fourier series coefficients, can also be computed from the projections of the signal onto the individual Fourier basis functions.
We define the projection of a vector s ∈ C^N onto another vector g ∈ C^N by generalizing the notion of an orthogonal projection from planar geometry. Specifically, when we project s onto g, we get another vector s_g which is collinear with g (i.e., it is of the form ag) such that the difference s − s_g is orthogonal to g. An illustration of this, for the real plane, is given in Fig. 1.22(a).
Definition 1.6. The orthogonal projection of a vector s ∈ C^N onto a nonzero vector g ∈ C^N is the vector s_g ∈ C^N such that:

1. s_g = ag for some complex number a, and
2. s − s_g ⊥ g.

The orthogonal projection of a real-valued vector s ∈ R^N onto another real-valued vector g ∈ R^N is defined similarly.
Note again that, even though our definition applies to the general case of C^N, we can use the planar picture of Fig. 1.22(a) to guide our analysis. From this picture, we

Figure 1.23. Illustration to Example 1.14.

immediately see that the coefficient a in the expression s_g = ag is not arbitrary. If it is too small or too large, the angle between s − s_g and g will not be 90°. To find the correct coefficient a, note that the two conditions in the definition above imply that

    ⟨s − ag, g⟩ = 0.

But this equation can be rewritten as follows: ⟨s, g⟩ − a⟨g, g⟩ = 0. Therefore,

    a = ⟨s, g⟩ / ⟨g, g⟩,

and

    s_g = ag = (⟨s, g⟩ / ⟨g, g⟩) g.        (1.7)

Example 1.14. Does our result of Eq. (1.7) make sense for 2-dimensional real vectors? Suppose that we want to project the vector s = (1, 2)^T onto the vector g = (2, 0)^T. As shown in Fig. 1.23, the result should clearly be s_g = (1, 0)^T. Does our formula (1.7) give the same answer?

    s_g = (⟨(1, 2)^T, (2, 0)^T⟩ / ⟨(2, 0)^T, (2, 0)^T⟩) · (2, 0)^T
        = ((1 · 2 + 2 · 0) / (2 · 2 + 0 · 0)) · (2, 0)^T
        = (1/2) · (2, 0)^T
        = (1, 0)^T.
Our definition works as expected in the plane. Just as we did with the concepts of orthogonality and length before, we have generalized the planar concept of orthogonal projection to C^N.
We now generalize our orthogonal projection formula (1.7) to the case when we
project a vector s onto a space G. This is illustrated in Fig. 1.22(b). We show that, if
an orthogonal basis for space G is known, it is easy to compute the projection of any
vector s onto G. Of course we need to precisely define what we mean by an orthogonal
basis before we proceed.
Definition 1.7. A subset G of C^N is called a vector subspace of C^N if

1. ag ∈ G for any g ∈ G and any a ∈ C, and
2. g1 + g2 ∈ G for any g1, g2 ∈ G.

A subset G of R^N is called a vector subspace of R^N if

1. ag ∈ G for any g ∈ G and any a ∈ R, and
2. g1 + g2 ∈ G for any g1, g2 ∈ G.

In other words, a set G in C^N (or in R^N) is called a vector subspace if it is closed under multiplication by a scalar and under vector addition.
 
Example 1.15. The set of all vectors of the form (α, 0, 0)^T, where α ∈ C, is a vector subspace of C^3: if you multiply a vector of this form by a complex number, you get a vector of this form; if you add two vectors of this form, you again get a vector of this form. On the other hand, this set of vectors is not a vector subspace of R^3, simply because it is not a subset of R^3.
The set of all vectors of the form (α, 0, 1)^T, where α ∈ C, is not a vector subspace of C^3: if you multiply a vector like that by 2, you get (2α, 0, 2)^T, which is no longer in the set.
Definition 1.8. Vectors g1, g2, . . . , gm are called linearly independent if none of them can be expressed as a linear combination of the others:

    gi ≠ Σ_{k≠i} ak gk    for i = 1, 2, . . . , m.

Equivalently, g1, g2, . . . , gm are called linearly independent if

    a1 g1 + a2 g2 + · · · + am gm = 0 implies a1 = a2 = · · · = am = 0.
Definition 1.9. The space spanned by vectors g1, g2, . . . , gm is the set of all their linear combinations, i.e., the set of all vectors of the form

    a1 g1 + a2 g2 + · · · + am gm,

where a1, a2, . . . , am are numbers (complex numbers if we are in C^N, real numbers if we are in R^N).

Definition 1.10. If G = span{g1, . . . , gm} and if g1, . . . , gm are linearly independent, then {g1, . . . , gm} is said to be a basis for the space G. If, in addition, g1, . . . , gm are pairwise orthogonal, the basis is said to be an orthogonal basis.
Example 1.16. span{(1, 0)^T, (0, 1)^T} = C^2, since (x, y)^T = x(1, 0)^T + y(0, 1)^T for any (x, y)^T ∈ C^2. It is an easy exercise to show that (1, 0)^T and (0, 1)^T are linearly independent and, moreover, orthogonal. Therefore, these two vectors form an orthogonal basis for C^2.

We will need the following important result from linear algebra, which we state here without proof.

Theorem 1.2. Any N linearly independent vectors in C^N (R^N) form a basis for C^N (R^N). Any N pairwise orthogonal nonzero vectors in C^N (R^N) form an orthogonal basis for C^N (R^N).

We now define the orthogonal projection s_G of a vector s onto a subspace G of C^N. We use the 3-dimensional picture of Fig. 1.22(b) as a guide. First, s_G must lie in G. Second, the difference between s and s_G must be orthogonal to G.

Definition 1.11. The orthogonal projection of a vector s ∈ C^N onto a subspace G of C^N is the vector s_G ∈ C^N such that:

1. s_G ∈ G, and
2. s − s_G ⊥ G, that is, s − s_G is orthogonal to every vector in G.

The orthogonal projection of a real-valued vector s ∈ R^N onto a subspace G of R^N is defined similarly.

Let s_G be the orthogonal projection of s onto a subspace G of C^N. Suppose that {g1, . . . , gm} is an orthogonal basis for G. One consequence of this is that any vector in G can be represented as a linear combination of the vectors g1, . . . , gm. In particular, since s_G ∈ G, we have that

    s_G = Σ_{k=1}^{m} ak gk    for some complex numbers a1, . . . , am.        (1.8)
These coefficients a1, . . . , am cannot be an arbitrary set of numbers, however: they have to be such that s − s_G is orthogonal to every vector in G. In particular, s − s_G ⊥ g1, s − s_G ⊥ g2, . . ., s − s_G ⊥ gm:

    ⟨s − s_G, gp⟩ = 0,    for p = 1, . . . , m
    ⟨s, gp⟩ − ⟨s_G, gp⟩ = 0
    ⟨s, gp⟩ − ⟨Σ_{k=1}^{m} ak gk, gp⟩ = 0
    ⟨s, gp⟩ − Σ_{k=1}^{m} ak ⟨gk, gp⟩ = 0.

But notice that the orthogonality of the basis {g1, . . . , gm} implies that ⟨gk, gp⟩ = 0 unless p = k. Therefore, only one term in the summation can be nonzero, the term for k = p:

    ⟨s, gp⟩ − ap ⟨gp, gp⟩ = 0
    ap = ⟨s, gp⟩ / ⟨gp, gp⟩    for p = 1, . . . , m.

Note that, since {g1, . . . , gm} is a basis, gp ≠ 0 and therefore ⟨gp, gp⟩ ≠ 0. This means that dividing by ⟨gp, gp⟩ is allowed. Substituting the expression we obtained for the coefficients into Eq. (1.8), we obtain:

    s_G = Σ_{k=1}^{m} (⟨s, gk⟩ / ⟨gk, gk⟩) gk.        (1.9)

Comparing this result with our result (1.7) for projecting one vector onto another, we
see that projecting onto a space G which has an orthogonal basis {g1 , . . . , gm } amounts
to the following:

• project onto the individual basis vectors;

• sum the results.

This is illustrated in Fig. 1.22(b) for projecting a 3-dimensional vector onto a plane
spanned by two orthogonal vectors g1 and g2 .
Formula (1.9) is actually a lot more than a formula for projecting a vector onto a subspace. Note that, if the vector s belongs to G, then its projection onto G is equal to the vector itself:

    s_G = s    if s ∈ G.

In this case, Eq. (1.9) tells us how to represent s as a linear combination of orthogonal basis vectors.
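Eq. (1.9) says: project onto each basis vector and sum. A sketch of ours for a 3-dimensional real vector projected onto the plane spanned by two orthogonal vectors, as in Fig. 1.22(b):

    import numpy as np

    def inner(s, g):
        return np.sum(s * np.conj(g))

    def project_onto_span(s, basis):
        # basis vectors must be pairwise orthogonal and nonzero, Eq. (1.9)
        sG = np.zeros(len(s), dtype=complex)
        for g in basis:
            sG += (inner(s, g) / inner(g, g)) * g
        return sG

    g1 = np.array([1.0, 0.0, 1.0])
    g2 = np.array([1.0, 0.0, -1.0])              # orthogonal to g1
    s  = np.array([2.0, 3.0, 4.0])
    print(project_onto_span(s, [g1, g2]))        # [2.+0.j 0.+0.j 4.+0.j]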
Figure 1.24. Illustration to the proof of Theorem 1.4: the closest point in a space G (with orthogonal basis {g1, . . . , gm}) to a fixed vector s is the orthogonal projection s_G of s onto G.

Theorem 1.3 (Decomposition and Reconstruction in an Orthogonal Basis). Suppose that {g1, . . . , gm} is an orthogonal basis for a subspace G of C^N (in particular, G could be C^N itself), and suppose that s ∈ G. Then

    s = Σ_{k=1}^{m} ak gk,    (synthesis or reconstruction formula)        (1.10)

where

    ak = ⟨s, gk⟩ / ⟨gk, gk⟩    for k = 1, . . . , m.    (analysis or decomposition formula)        (1.11)

The coefficients (1.11) are unique, i.e., there is no other set of coefficients that satisfies Eq. (1.10).

As we will shortly see, the DT Fourier series formulas are a particular case of these equations. These equations, however, are very general: they work for non-Fourier bases of C^N. Slight modifications of these equations also apply to other spaces of DT and CT signals. For example, the CT Fourier series formulas are essentially a particular case of Eqs. (1.10)-(1.11).
Suppose now that s does not belong to the subspace G. Can we find coefficients a1, . . . , am such that Eq. (1.10) is satisfied approximately? Specifically, we would like to find the coefficients that minimize the energy (or, equivalently, the 2-norm) of the
error, i.e., the 2-norm of the difference between the two sides of (1.10):

    find a1, . . . , am to minimize ‖ s − Σ_{k=1}^{m} ak gk ‖.

It turns out that the answer is still Eq. (1.11), i.e., the orthogonal projection s_G of s onto G is the closest vector to s among all vectors in G. To show this, consider Fig. 1.24. Using the definition of orthogonal projection, we see that the vector s − s_G must be orthogonal to G, i.e.,

    s − s_G ⊥ v    for any v ∈ G.

For an arbitrary vector f ∈ G, we have s_G − f ∈ G. Hence (s − s_G) ⊥ (s_G − f). We can therefore apply Pythagoras's theorem to the triangle formed by the vectors s − f, s − s_G, and s_G − f:

    ‖s − f‖^2 = ‖(s − s_G) + (s_G − f)‖^2
              = ‖s − s_G‖^2 + ‖s_G − f‖^2
              ≥ ‖s − s_G‖^2.

Therefore, ‖s − s_G‖ ≤ ‖s − f‖ for any f ∈ G, or alternatively,

    ‖s − s_G‖ = min_{f ∈ G} ‖s − f‖.

So s_G is the closest vector in G to s. It is easily seen, moreover, that equality is achieved only if f = s_G, which means that s_G is the unique closest vector to s.

Theorem 1.4 (Approximation by an Orthogonal Set of Vectors). Suppose that {g1, . . . , gm} is an orthogonal basis for a subspace G of C^N (in particular, G could be C^N itself), and suppose that s ∈ C^N is a vector which may or may not belong to G. We seek to approximate s by a vector in G. If we look for an approximation ŝ ∈ G which minimizes the 2-norm ‖ŝ − s‖ of the error, then

    ŝ = Σ_{k=1}^{m} ak gk,        (1.12)

where

    ak = ⟨s, gk⟩ / ⟨gk, gk⟩    for k = 1, . . . , m.        (1.13)

The coefficients (1.13) are unique, i.e., there is no other set of coefficients that results in the minimum 2-norm of the error.
1.3.3 Discrete-Time Fourier Series and DFT

Example 1.17. Let g1 and g2 be the following two discrete-time complex exponential functions, defined for n = 1, 2, 3, 4:

    g1(n) = exp(j2π(n − 1)/4),      n = 1, 2, 3, 4;
    g2(n) = exp(j2π·2·(n − 1)/4),   n = 1, 2, 3, 4.

(a) Suppose that

    s(1) = 2,    s(2) = −1 + j,    s(3) = 0,    s(4) = −1 − j.

Can the signal s be represented as a linear combination of g1 and g2? If so, find the coefficients a1, a2 in this representation:

    s(n) = a1 g1(n) + a2 g2(n),    n = 1, 2, 3, 4.

(b) Suppose that

    s′(1) = 0,    s′(2) = 0,    s′(3) = 1,    s′(4) = 0.

Can the signal s′ be represented as a linear combination of g1 and g2? If so, find the coefficients a′1, a′2 in this representation:

    s′(n) = a′1 g1(n) + a′2 g2(n),    n = 1, 2, 3, 4.

Solution.
(a) Write all three signals as vectors, i.e.,

    g1 = (g1(1), g1(2), g1(3), g1(4))^T,
    g2 = (g2(1), g2(2), g2(3), g2(4))^T,
    s = (s(1), s(2), s(3), s(4))^T.

What are the entries of these vectors?

    g1(1) = exp(j2π·0/4) = exp(j0),
    g1(2) = exp(j2π·1/4) = exp(jπ/2),
Figure 1.25. Illustration to Example 1.17: the four entries of vector g1 as points in the complex plane C (g1(1) = 1, g1(2) = j, g1(3) = −1, g1(4) = −j).
    g1(3) = exp(j2π·2/4) = exp(jπ),
    g1(4) = exp(j2π·3/4) = exp(j3π/2).

Fig. 1.25 shows a plot of these in the complex plane (recall that exp(jθ) has absolute value 1 and angle θ). The calculations for g2 are similar. We obtain:

    g1 = (1, j, −1, −j)^T,    g2 = (1, −1, 1, −1)^T.

One method of solving this problem is to simply notice that

    s = (2, −1 + j, 0, −1 − j)^T = g1 + g2    (by inspection).

We have thus represented s as a linear combination of g1 and g2, with coefficients a1 = a2 = 1:

    s(n) = g1(n) + g2(n),    n = 1, 2, 3, 4.

Another method is to notice that g1 and g2 are orthogonal:

    ⟨g1, g2⟩ = 1 · 1* + j · (−1)* + (−1) · 1* + (−j) · (−1)* = 1 − j − 1 + j = 0.


Figure 1.26. Illustration to Example 1.17: vector s lies in the space spanned by g1 and g2 and can therefore be represented as their linear combination. Vector s′ is not in the space spanned by g1 and g2 and cannot be represented as a linear combination of g1 and g2. The closest linear combination of g1 and g2 to s′ is the orthogonal projection of s′ onto the span of g1 and g2.

We can therefore use Theorem 1.3: if s is representable as a linear combination of g1 and g2, then the coefficients are found from:

    a1 = ⟨s, g1⟩ / ⟨g1, g1⟩
       = (2 · 1* + (−1 + j) · j* + 0 · (−1)* + (−1 − j) · (−j)*) / (1 · 1* + j · j* + (−1) · (−1)* + (−j) · (−j)*)
       = (2 + (j + 1) + 0 + (−j + 1)) / (1 + 1 + 1 + 1) = 1,

    a2 = ⟨s, g2⟩ / ⟨g2, g2⟩
       = (2 · 1* + (−1 + j) · (−1)* + 0 · 1* + (−1 − j) · (−1)*) / (1 · 1* + (−1) · (−1)* + 1 · 1* + (−1) · (−1)*)
       = (2 + (1 − j) + 0 + (1 + j)) / (1 + 1 + 1 + 1) = 1.

Since g1 + g2 = s, the answer is that s can indeed be represented as a linear combination of g1 and g2, with coefficients a1 = a2 = 1. Geometrically, this means that s lies in the space spanned by g1 and g2, as illustrated in Fig. 1.26.
(b) We take the same approach as in Part (a): if s′ is representable as a linear combination of g1 and g2, then the coefficients are found from:

    a′1 = ⟨s′, g1⟩ / ⟨g1, g1⟩ = (0 · 1* + 0 · j* + 1 · (−1)* + 0 · (−j)*)/4 = −1/4,
    a′2 = ⟨s′, g2⟩ / ⟨g2, g2⟩ = (0 · 1* + 0 · (−1)* + 1 · 1* + 0 · (−1)*)/4 = 1/4.

However,

    −(1/4) g1 + (1/4) g2 = (0, (−1 − j)/4, 1/2, (−1 + j)/4)^T ≠ s′.

Therefore, s′ cannot be represented as a linear combination of g1 and g2. Geometrically, this means that s′ lies outside of the space spanned by g1 and g2, as illustrated in Fig. 1.26. The coefficients a′1 = −1/4 and a′2 = 1/4 we computed are actually the coefficients of the orthogonal projection of s′ onto this space. Theorem 1.4 states that in this case −(1/4)g1 + (1/4)g2 is the best approximation of s′ as a linear combination of g1 and g2, in the sense that it minimizes the 2-norm of the error.

Example 1.18. In addition to the signals g1(n) and g2(n) defined in Example 1.17, define the signals g0(n) and g3(n) as follows:

    g0(n) = exp(j2π·0·(n − 1)/4),    n = 1, 2, 3, 4;
    g3(n) = exp(j2π·3·(n − 1)/4),    n = 1, 2, 3, 4.

In other words, we now have four signals, gk(n), k = 0, 1, 2, 3, defined for n = 1, 2, 3, 4 by:

    gk(n) = exp(j2πk(n − 1)/4).

Similarly to Example 1.17, it is easy to check that these four signals are pairwise orthogonal. Since they are all nonzero, Theorem 1.2 implies that they form an orthogonal basis for C^4. Theorem 1.3 is therefore applicable to any signal in C^4, in particular both to s and s′ defined in Example 1.17.
(a) Using Theorem 1.3, find the coefficients a0, a1, a2, a3 in the following Fourier series expansion:

    s(n) = a0 g0(n) + a1 g1(n) + a2 g2(n) + a3 g3(n),    n = 1, 2, 3, 4,

for the signal s(n) defined in Example 1.17.


(b) Using Theorem 1.3, find the coefficients a′0, a′1, a′2, a′3 in the following Fourier series expansion:

    s′(n) = a′0 g0(n) + a′1 g1(n) + a′2 g2(n) + a′3 g3(n),    n = 1, 2, 3, 4,

for the signal s′(n) defined in Example 1.17.
Solution.
(a) We already know from Example 1.17 that s = g1 + g2, i.e., the additional basis signals g0 and g3 are not needed to represent s. The answer is a0 = a3 = 0 and a1 = a2 = 1. If we did not have the results of Example 1.17 available to us, we would proceed as in Example 1.17. First, we write all signals as vectors:

    g0 = (1, 1, 1, 1)^T,  g1 = (1, j, −1, −j)^T,  g2 = (1, −1, 1, −1)^T,  g3 = (1, −j, −1, j)^T,  s = (2, −1 + j, 0, −1 − j)^T.

Then we calculate the inner products used in Eq. (1.11), and compute the coefficients. These calculations were done in Example 1.17 for g1 and g2. The calculations for g0 and g3 are similar: ⟨g0, g0⟩ = ⟨g3, g3⟩ = 4, and

    ⟨s, g0⟩ = 2 · 1* + (−1 + j) · 1* + 0 · 1* + (−1 − j) · 1* = 0,
    a0 = ⟨s, g0⟩ / ⟨g0, g0⟩ = 0/4 = 0,

    ⟨s, g3⟩ = 2 · 1* + (−1 + j) · (−j)* + 0 · (−1)* + (−1 − j) · j*
            = 2 + (−1 + j) · j + (−1 − j) · (−j)
            = 2 − j + j^2 + j + j^2 = 0,
    a3 = ⟨s, g3⟩ / ⟨g3, g3⟩ = 0/4 = 0.
(b) We can use ⟨gk, gk⟩ = 4, computed in Example 1.17 and Part (a) above. Recall that

    s′ = (0, 0, 1, 0)^T.

This makes its inner products with the basis vectors very simple:

    a′k = ⟨s′, gk⟩ / ⟨gk, gk⟩
        = (0 · (gk(1))* + 0 · (gk(2))* + 1 · (gk(3))* + 0 · (gk(4))*)/4
        = (gk(3))*/4
        = { 1/4,     k = 0, 2,
          { −1/4,    k = 1, 3.
Therefore,

    s′(n) = (1/4) g0(n) − (1/4) g1(n) + (1/4) g2(n) − (1/4) g3(n).

This is consistent with what we saw in Example 1.17: s′ cannot be represented as a linear combination of only g1 and g2. Note, however, that if in the expansion

    s′ = (1/4) g0 − (1/4) g1 + (1/4) g2 − (1/4) g3

we drop the terms which do not contain g1 and g2, we get the vector

    −(1/4) g1 + (1/4) g2,

which is the answer we obtained in Example 1.17. This is the closest approximation of s′ by a linear combination of g1 and g2.
Now let us generalize Examples 1.17 and 1.18 from four dimensions to N.

Example 1.19 (Discrete Fourier Transform). Consider the following DT complex exponential functions:

    gk(n) = (1/N) exp(j2πkn/N),    n = 0, . . . , N − 1;  k = 0, . . . , N − 1.        (1.14)

In other words, there are N functions, g0(n), g1(n), . . . , gN−1(n), and each of them is defined for n = 0, 1, . . . , N − 1.
(a) Prove that these N signals are pairwise orthogonal, and find their energies.
(b) Find a formula for the Fourier series coefficients X(0), X(1), . . . , X(N − 1) of an N-point complex-valued signal x(n):

    x(n) = Σ_{k=0}^{N−1} X(k) gk(n)
         = Σ_{k=0}^{N−1} X(k) (1/N) exp(j2πkn/N),    n = 0, . . . , N − 1.        (1.15)

Solution.
(a) To show orthogonality and compute the energies, we need to calculate all inner products ⟨gk, gi⟩, for all k = 0, . . . , N − 1 and i = 0, . . . , N − 1. If we can show that these inner products are zero for k ≠ i, we will have shown that the signals are pairwise orthogonal. Moreover, the inner products for k = i will give us the energies.

    ⟨gk, gi⟩ = Σ_{n=0}^{N−1} gk(n)(gi(n))*
             = Σ_{n=0}^{N−1} (1/N) exp(j2πkn/N) · (1/N) exp(−j2πin/N)
             = (1/N^2) Σ_{n=0}^{N−1} exp(j2π(k − i)n/N).

When k = i, each term of the sum is equal to 1, and therefore the sum is N. The energy of each gk is therefore N/N^2 = 1/N. When k ≠ i, the sum is zero (why?).
(b) Since g0, . . . , gN−1 are nonzero and pairwise orthogonal, we can apply Theorem 1.2 to infer that {g0, . . . , gN−1} is an orthogonal basis for C^N. Any signal x ∈ C^N can therefore be uniquely represented as their linear combination, according to Theorem 1.3. The coefficients in the representation are given by Eq. (1.11). The denominator of that formula is the energy of gk, which we found in Part (a) to be 1/N. Therefore,

    X(k) = ⟨x, gk⟩ / ⟨gk, gk⟩
         = ⟨x, gk⟩ / (1/N)
         = N Σ_{n=0}^{N−1} x(n)(gk(n))*
         = N Σ_{n=0}^{N−1} x(n) (1/N) exp(−j2πkn/N)
         = Σ_{n=0}^{N−1} x(n) exp(−j2πkn/N),    k = 0, . . . , N − 1.        (1.16)

The representation (1.15) of an N-point complex-valued signal x as a linear combination of the complex exponentials (1.14), of frequencies 0, 2π/N, . . . , 2π(N − 1)/N, is called the DT Fourier series for the signal x:

    x(n) = (1/N) Σ_{k=0}^{N−1} X(k) exp(j2πkn/N),    n = 0, . . . , N − 1.        (1.17)

The signal X comprised of the N Fourier series coefficients is called the discrete Fourier transform (DFT) of the signal x. The DFT X(k) is obtained from x(n) as follows:

    X(k) = Σ_{n=0}^{N−1} x(n) exp(−j2πkn/N),    k = 0, . . . , N − 1.        (1.18)
Since Eq. (1.17) is the recipe for obtaining the signal samples x(n) from the DFT, it is sometimes called the inverse DFT formula.
Eqs. (1.17) and (1.18) are particular cases of Eqs. (1.10) and (1.11), respectively, and were easily obtained in Example 1.19 by applying Theorem 1.3 to a complex exponential basis (also called a Fourier basis) for C^N. Eq. (1.17) tells us how to represent any signal in C^N as a linear combination of Fourier basis functions. The k-th term in the representation is the orthogonal projection of the signal onto the k-th basis signal. The projection coefficients are calculated using Eq. (1.18).
Note that Example 1.18 is a special case of Example 1.19: by setting N = 4 in Eqs. (1.17) and (1.18) and appropriately normalizing the basis functions, we can get the answers to Example 1.18.
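Eq. (1.18) uses the same normalization as NumPy's FFT, which makes it easy to check numerically. This sketch is ours, assuming NumPy:

    import numpy as np

    def dft(x):
        # X(k) = sum_n x(n) exp(-j 2 pi k n / N), Eq. (1.18)
        N = len(x)
        n = np.arange(N)
        k = n.reshape(-1, 1)                      # column of frequencies
        return np.sum(x * np.exp(-2j * np.pi * k * n / N), axis=1)

    x = np.array([2, -1 + 1j, 0, -1 - 1j])
    print(np.allclose(dft(x), np.fft.fft(x)))     # True: same normalization
    print(np.allclose(np.fft.ifft(dft(x)), x))    # True: Eq. (1.17) inverts Eq. (1.18)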

1.3.4 Complex Exponentials as Eigenfunctions of LTI Systems; Discrete-Time Fourier Transform (DTFT)
One of the major reasons for the importance of Fourier series representations (and also Fourier transforms) is that the Fourier basis functions are eigenfunctions of LTI systems. In other words, the response of an LTI system to any one of the Fourier basis functions gk of Eq. (1.14) is that same function times a complex number, αk gk, where the complex number αk depends on the system and on the frequency of the Fourier basis function. To see this, let us consider an LTI system with impulse response h, and use the DT convolution formula to calculate its response to the following signal:

    x(n) = e^{jωn}    for all integer n,

where the frequency ω is an arbitrary fixed real number. The response y is:

    y(n) = Σ_{m=−∞}^{∞} h(m) x(n − m)
         = Σ_{m=−∞}^{∞} h(m) e^{jω(n−m)}
         = e^{jωn} Σ_{m=−∞}^{∞} h(m) e^{−jωm}
         = e^{jωn} H(e^{jω}),

    where H(e^{jω}) = Σ_{m=−∞}^{∞} h(m) e^{−jωm}.

The function H, viewed as a function of ω, is called the frequency response of the system, and is the discrete-time Fourier transform (DTFT, not to be confused with the DFT) of the impulse response h.
We will have much more to say about DTFTs below. For now, it is important to note that we have obtained the following property.
Theorem 1.5. Consider an LTI system whose impulse response is h. If the complex exponential signal e^{jω0 n}, defined for all integer n, is its input, then the output is H(e^{jω0}) e^{jω0 n}, for all integer n, where H(e^{jω0}) is the frequency response of the system evaluated at the frequency ω0 of the input signal.

Example 1.20. (a) Find a difference equation that describes the system in Fig. 1.27, i.e., one that relates the output of the overall system to its input. Here, A, B, and C are fixed constants.

Figure 1.27. The system diagram for Example 1.20.

Solution. From the system diagram of Fig. 1.27, we have:

    y(n) = Ax(n) + Bx(n − 1) + Cx(n − 2).

(b) Find the frequency response of this system by calculating the response to a complex exponential.
Solution. To find H(e^{jω}), we calculate the response y(n) of the system to the input signal e^{jωn}:

    x(n) = e^{jωn}
    ⇒ y(n) = A e^{jωn} + B e^{jω(n−1)} + C e^{jω(n−2)}
           = (A + B e^{−jω} + C e^{−2jω}) e^{jωn},

so H(e^{jω}) = A + B e^{−jω} + C e^{−2jω}.

(c) Suppose that A = B = C = 1 and x(n) = 5, for −∞ < n < ∞. Calculate y(n) using the frequency response.
Solution.

    x(n) = 5 e^{j0·n}
    ⇒ y(n) = H(e^{j0}) · (5 e^{j0·n}) = (1 + 1 + 1) · 5 = 15,    for all integer n.
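The eigenfunction property is easy to verify numerically for this system. A sketch of ours (the frequency ω = 0.3 and the window of 50 samples are arbitrary choices):

    import numpy as np

    A, B, C = 1.0, 1.0, 1.0
    w = 0.3
    n = np.arange(50)
    x = lambda m: np.exp(1j * w * m)                   # everlasting complex exponential
    y = A * x(n) + B * x(n - 1) + C * x(n - 2)
    H = A + B * np.exp(-1j * w) + C * np.exp(-2j * w)  # frequency response from part (b)
    print(np.allclose(y, H * x(n)))                    # True: output = H(e^{jw}) x(n)
    print(A + B + C)                                   # 3.0 = H(e^{j0}), so x(n) = 5 gives y(n) = 15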
Example 1.21. Consider a DT LTI system with a known frequency response H(e^{jω}). The response of such a system to an everlasting complex exponential input signal g(n) = e^{jωn}, −∞ < n < ∞, is:

    y(n) = g(n) H(e^{jω}),    for −∞ < n < ∞.        (1.19)

In other words, e^{jωn} is an eigenfunction of the system with eigenvalue H(e^{jω}).
(a) Does Eq. (1.19) hold for sinusoidal inputs? In other words, suppose that the input to a DT LTI system is cos(ωn + ϕ), for −∞ < n < ∞. Is it always the case that the output is cos(ωn + ϕ) H(e^{jω}), for −∞ < n < ∞? Does this hold for DT LTI systems with even frequency responses, i.e., for systems with H(e^{jω}) = H(e^{−jω})?
Solution. We decompose cos(ωn + ϕ) into complex exponentials:

    cos(ωn + ϕ) = (1/2){e^{j(ωn+ϕ)} + e^{−j(ωn+ϕ)}}
                = (1/2) e^{jϕ} e^{jωn} + (1/2) e^{−jϕ} e^{−jωn}
    ⇒ y(n) = (1/2) e^{jϕ} H(e^{jω}) e^{jωn} + (1/2) e^{−jϕ} H(e^{−jω}) e^{−jωn}.

Unless H(e^{jω}) is even, i.e., H(e^{jω}) = H(e^{−jω}), y(n) is not equal to cos(ωn + ϕ) H(e^{jω}). Note: in general, cos(ωn + ϕ) and e^{jωn} u(n) are not eigenfunctions of LTI systems.
(b) Use the eigenfunction property (1.19) to derive a simple expression for the response of the system to the input signal 2 cos(8n + π/4).
Solution. Using the previous part, with ω = 8 and ϕ = π/4, we get:

    y(n) = 2 [(1/2) e^{jπ/4} H(e^{j8}) e^{j8n} + (1/2) e^{−jπ/4} H(e^{−j8}) e^{−j8n}]
         = H(e^{j8}) e^{j(8n+π/4)} + H(e^{−j8}) e^{−j(8n+π/4)}.

1.3.5 Further Remarks on the Importance of DFT


Suppose that S is an LTI system with a known impulse response h of duration N:

    h(n) ≠ 0    for n = 0, . . . , N − 1,
    h(n) = 0    otherwise.

Suppose further that x is an input signal which is periodic with period N. How many arithmetic operations will it take to calculate the response y using the convolution formula? First, notice that the response is also periodic with period N:

    y(n + N) = Σ_{m=−∞}^{∞} h(m) x(n + N − m) = Σ_{m=−∞}^{∞} h(m) x(n − m) = y(n),    for any integer n.
We therefore only need to calculate N samples of y(n), for example, for n = 0, . . . , N − 1. Since h has duration N and x has infinite duration, the convolution sum will always have N terms:

    y(n) = Σ_{m=−∞}^{∞} h(m) x(n − m) = Σ_{m=0}^{N−1} h(m) x(n − m)
         = h(0)x(n) + h(1)x(n − 1) + . . . + h(N − 1)x(n − N + 1),    for n = 0, . . . , N − 1.

Therefore, the computation of one particular sample y(n) requires N multiplications and N − 1 additions.^5 There are altogether N samples of y to be computed, and therefore the overall number of multiplications is N^2 and the overall number of additions is N(N − 1) = N^2 − N. The overall order of the number of operations required is N^2. In other words, the process of calculating the response has computational complexity O(N^2). This is a very high computational cost for something as basic and as frequently needed as calculating a discrete-time convolution. Specifically, suppose that on our computer this operation takes 1 second for N = N1. Then for N = 100 N1 it will be approximately 100^2 = 10,000 times slower, i.e., it will take approximately 2.8 hours.

^5 These are complex multiplications and complex additions, since we assume in general that our signals are complex-valued.
We need an algorithm for calculating convolutions more quickly. Let us use the fact that, since x is periodic with period N, it can be represented as the following linear combination:

    x = Σ_{k=0}^{N−1} X(k) gk,

where the gk are the Fourier basis functions given by Eq. (1.14). Using the linearity of our system, we obtain the following representation of the output:

    y = S[x] = Σ_{k=0}^{N−1} X(k) S[gk].

But since gk is a complex exponential of frequency 2πk/N, we can use Theorem 1.5 to write the response to gk as S[gk] = H(e^{j2πk/N}) gk. Substituting this into the formula for y, we get:

    y = S[x] = Σ_{k=0}^{N−1} X(k) H(e^{j2πk/N}) gk.

But this is a representation of the signal y as a linear combination of the Fourier basis signals gk, with coefficients X(k) H(e^{j2πk/N}). In other words, the Fourier series coefficients of y are:

    Y(k) = X(k) H(e^{j2πk/N}),    for k = 0, . . . , N − 1.
Assuming that the values of H(e^{j2πk/N}) for k = 0, . . . , N − 1 are known, we have discovered the following procedure for calculating y:

Step 1. Calculate the N DFT coefficients X(k) of x using the DFT formula.
Step 2. Calculate the N DFT coefficients Y(k) = X(k) H(e^{j2πk/N}) of y.
Step 3. Calculate y from its DFT coefficients using the inverse DFT formula.
It turns out that Steps 1 and 3, the DFT and the inverse DFT, can be implemented using fast Fourier transform (FFT)^6 algorithms whose computational complexity is O(N log N). Moreover, if the N values H(e^{j2πk/N}) of the frequency response were not known but had to be calculated from the impulse response h, that, too, could be done using the FFT with computational complexity O(N log N). Step 2 consists of N multiplications. The overall computational complexity is therefore only O(N log N), a marked improvement over O(N^2), especially for large N.
To summarize, we have just seen two aspects of why it is important to study representations of signals as weighted sums of complex exponentials in the context of our study of LTI systems.

• Conceptual importance: LTI systems process each frequency component separately, and in a very simple way (i.e., by multiplying it by a frequency-dependent complex number).

• Computational importance: To obtain the response of an LTI system with an N-point impulse response to an N-periodic signal, we need O(N^2) computational effort if we use the convolution formula directly. The computational complexity is, however, reduced to O(N log N) if we use the frequency-domain representations instead.

1.3.6 Continuous-Time Fourier Series


As indicated above, the notions of orthogonal bases and projections can be extended to spaces of CT signals. Determining whether a series representation converges (and if so, what it converges to) is much more complicated than for finite-duration DT signals. We will therefore only consider one very important example: the CT Fourier series.
Consider the set of signals L^2(T0) defined as follows: it is the set of all periodic complex-valued CT signals s(t) with period T0 for which

    ∫_τ^{τ+T0} |s(t)|^2 dt < ∞,

where τ is an arbitrary real number, i.e., the integral is taken over any period. It turns out that L^2(T0) is a vector space (each vector in this case being a continuous-time signal). If we define the inner product of two functions as follows:

    ⟨s, g⟩ = ∫_τ^{τ+T0} s(t)(g(t))* dt,

^6 We will discuss the mechanics of the FFT below. What is important for now is that it is a fast algorithm for calculating the DFT of a signal.
then the infinite collection of T0-periodic complex exponentials

    gk(t) = exp(j2πkt/T0),    k = 0, ±1, ±2, . . .

forms an orthogonal basis for L^2(T0). We can represent any T0-periodic CT signal s as a linear combination of these complex exponentials:

    s(t) = Σ_{k=−∞}^{∞} ak gk(t) = Σ_{k=−∞}^{∞} ak exp(j2πkt/T0).        (1.20)

The first “=” sign in Eq. (1.20) needs careful interpretation: unlike the finite-duration DT case, the equality here is not pointwise. Instead, the equality is understood in the following sense:

    ‖ s − Σ_{k=N}^{M} ak gk ‖ → 0    as N → −∞ and M → ∞.

Nevertheless, formula (1.11) is still valid:

    ak = ⟨s, gk⟩ / ⟨gk, gk⟩.        (1.21)

The inner product of gk and gi is easily computed:

    ⟨gk, gi⟩ = ∫_τ^{τ+T0} exp(j2πkt/T0) exp(−j2πit/T0) dt
             = ∫_τ^{τ+T0} exp(j2π(k − i)t/T0) dt
             = { T0,    if k = i,
               { 0,     if k ≠ i.

The fact that ⟨gk, gi⟩ = 0 for k ≠ i shows that our vectors are indeed pairwise orthogonal.^7 Substituting ⟨gk, gk⟩ = T0 back into Eq. (1.21), we get:

    ak = ⟨s, gk⟩ / T0 = (1/T0) ∫_τ^{τ+T0} s(t) exp(−j2πkt/T0) dt.        (1.22)

^7 Note, however, that this is not enough to prove that they form an orthogonal basis for L^2(T0), since L^2(T0) is infinite-dimensional and Theorem 1.2 no longer holds: it is not true that any infinite set of nonzero orthogonal vectors forms an orthogonal basis for L^2(T0). Proving that the signals gk do form an orthogonal basis for L^2(T0) is beyond the scope of this course.
Figure 1.28. The signal s(t) of Example 1.22: a periodic train of rectangular pulses of height A and width t0, with period T0.

Example 1.22. Let T0, t0, and A be three positive real numbers such that T0 > t0 > 0. Consider the following periodic signal:

    s(t) = { A,    |t| ≤ t0/2,
           { 0,    t0/2 < |t| ≤ T0/2,

periodically extended with period T0, as shown in Fig. 1.28. Using Eq. (1.22) with τ = −T0/2, its Fourier series coefficients are:

    a0 = (1/T0) ∫_{−t0/2}^{t0/2} A dt = A t0 / T0,

    ak = (1/T0) ∫_{−t0/2}^{t0/2} A exp(−j2πkt/T0) dt
       = (A/T0) · (T0/(−j2πk)) · [exp(−j2πkt/T0)]_{−t0/2}^{t0/2}
       = (A/(πk)) · (1/(2j)) · (exp(jπkt0/T0) − exp(−jπkt0/T0))
       = (A/(πk)) sin(πkt0/T0).

(Note that this last formula is also valid for k = 0 if we define (sin θ)/θ = 1 at θ = 0.)
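The convergence of the series can be watched numerically. A sketch of ours, with T0 = 2, t0 = 1, A = 1; the symmetric partial sum from −M to M collapses to a cosine series because ak = a−k here:

    import numpy as np

    T0, t0, A = 2.0, 1.0, 1.0
    t = np.array([0.0, 0.9])                # one point inside the pulse, one outside

    def partial_sum(t, M):
        s = np.full_like(t, A * t0 / T0)                           # k = 0 term
        for k in range(1, M + 1):
            ak = A / (np.pi * k) * np.sin(np.pi * k * t0 / T0)     # a_k = a_{-k}
            s += 2 * ak * np.cos(2 * np.pi * k * t / T0)           # k and -k combined
        return s

    print(partial_sum(t, 200))              # approximately [1, 0] = [s(0), s(0.9)]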
Another common way of decomposing CT periodic signals as linear combinations of sinusoidal signals is by using sines and cosines as basis functions, instead of complex exponentials. The following infinite collection of functions is also an orthogonal basis for L^2(T0), and is also called a Fourier basis:

    c0(t) = 1,
    ck(t) = cos(2πkt/T0),    k = 1, 2, . . .
    sk(t) = sin(2πkt/T0),    k = 1, 2, . . .
As we did previously, let us first prove that these functions are pairwise orthogonal, and find their energies over one period. We need to consider all pairwise inner products, which will now be integrals of products of trigonometric functions. We will therefore need the following formulas:

    sin α sin β = (1/2)(cos(α − β) − cos(α + β)),        (1.23)
    sin α cos β = (1/2)(sin(α − β) + sin(α + β)),        (1.24)
    cos α cos β = (1/2)(cos(α − β) + cos(α + β)).        (1.25)
Now we compute the inner products, keeping in mind that sk(t) is defined for k ≥ 1 while ck(t) is defined for k ≥ 0:

    ⟨sk, si⟩ = ∫_τ^{τ+T0} sin(2πkt/T0) sin(2πit/T0) dt
             = (1/2) ∫_τ^{τ+T0} [cos(2π(k − i)t/T0) − cos(2π(k + i)t/T0)] dt    (by Eq. (1.23))
             = { T0/2,    k = i,
               { 0,       k ≠ i.

    ⟨sk, ci⟩ = ∫_τ^{τ+T0} sin(2πkt/T0) cos(2πit/T0) dt
             = (1/2) ∫_τ^{τ+T0} [sin(2π(k − i)t/T0) + sin(2π(k + i)t/T0)] dt = 0.    (by Eq. (1.24))

    ⟨ck, ci⟩ = ∫_τ^{τ+T0} cos(2πkt/T0) cos(2πit/T0) dt
             = (1/2) ∫_τ^{τ+T0} [cos(2π(k − i)t/T0) + cos(2π(k + i)t/T0)] dt    (by Eq. (1.25))
             = { T0,      k = i = 0,
               { T0/2,    k = i ≠ 0,
               { 0,       k ≠ i.
We are now ready to derive formulas for the coefficients a1, a2, . . . and b0, b1, b2, . . . of the expansion of a CT T0-periodic signal s(t):

    s(t) = b0 + Σ_{k=1}^{∞} ak sk(t) + Σ_{k=1}^{∞} bk ck(t).        (1.26)

    b0 = ⟨s, c0⟩ / ⟨c0, c0⟩
       = (1/T0) ∫_τ^{τ+T0} s(t) dt,
    bk = ⟨s, ck⟩ / ⟨ck, ck⟩ = (2/T0) ∫_τ^{τ+T0} s(t) cos(2πkt/T0) dt,    k = 1, 2, . . .
    ak = ⟨s, sk⟩ / ⟨sk, sk⟩ = (2/T0) ∫_τ^{τ+T0} s(t) sin(2πkt/T0) dt,    k = 1, 2, . . .

Figure 1.29. The signal x(t) of Example 1.23.

Example 1.23. Suppose that the period is T0 = 2, and let the signal x be defined by:

    x(t) = { 1,    −1 ≤ t < 0,
           { 0,    0 ≤ t < 1,

as illustrated in Fig. 1.29. Let us compute the Fourier series coefficients with respect to the Fourier basis of sines and cosines. From the formulas above, with τ = −1,

    b0 = (1/2) ∫_{−1}^{0} 1 dt = 1/2.

For k ≥ 1,

    bk = (2/2) ∫_{−1}^{0} cos(2πkt/2) dt = ∫_{−1}^{0} cos(πkt) dt
       = (1/(πk)) [sin(πkt)]_{t=−1}^{t=0} = 0,

    ak = (2/2) ∫_{−1}^{0} sin(2πkt/2) dt = ∫_{−1}^{0} sin(πkt) dt
       = −(1/(πk)) [cos(πkt)]_{t=−1}^{t=0}
       = { −2/(πk),    if k is odd,
         { 0,          if k is even.

A different method of finding the coefficients is to notice that:

(a) The signal x is related to the signal s of Example 1.22: if we set t0 = 1, A = 1, and T0 = 2 in Example 1.22 and shift s by t0/2 to the left, we obtain x, i.e., x(t) = s(t + t0/2).

(b) The complex exponential basis signals are related to the sine and cosine basis signals.

Using the Fourier series coefficients obtained in Example 1.22, we have the following representation of x(t) in the complex exponential Fourier basis:

    x(t) = s(t + t0/2)
         = Σ_{k=−∞}^{∞} (A/(πk)) sin(πkt0/T0) gk(t + t0/2)
         = Σ_{k=−∞}^{∞} (A/(πk)) sin(πkt0/T0) exp(j2πk(t + t0/2)/T0)
         = Σ_{k=−∞}^{∞} (A/(πk)) sin(πkt0/T0) exp(jπkt0/T0) exp(j2πkt/T0)
         = Σ_{k=−∞}^{∞} (A/(πk)) sin(πkt0/T0) exp(jπkt0/T0) gk(t),

which means that the coefficients of x in the complex exponential Fourier basis are:

    αk = (A/(πk)) sin(πkt0/T0) exp(jπkt0/T0)
       = (1/(πk)) sin(πk/2) exp(jπk/2).        (1.27)

But notice that the complex exponential basis functions are related to the sine and cosine basis functions as follows:

    g0(t) = 1 = c0(t),

and, for k ≥ 1,

    gk(t) = cos(2πkt/T0) + j sin(2πkt/T0) = ck(t) + j sk(t),
    g−k(t) = cos(−2πkt/T0) + j sin(−2πkt/T0) = ck(t) − j sk(t).
Therefore,

    x(t) = Σ_{k=−∞}^{∞} αk gk(t)
         = α0 g0(t) + Σ_{k=1}^{∞} αk gk(t) + Σ_{k=−∞}^{−1} αk gk(t)
         = α0 g0(t) + Σ_{k=1}^{∞} [αk gk(t) + α−k g−k(t)]
         = α0 c0(t) + Σ_{k=1}^{∞} [αk (ck(t) + j sk(t)) + α−k (ck(t) − j sk(t))]
         = α0 c0(t) + Σ_{k=1}^{∞} (αk + α−k) ck(t) + Σ_{k=1}^{∞} j(αk − α−k) sk(t).

Matching these coefficients with the coefficients in Eq. (1.26), we get the following relationship between the exponential Fourier series coefficients and the sine-cosine Fourier series coefficients of any T0-periodic signal:

    b0 = α0,
    bk = αk + α−k    for k = 1, 2, . . . ,
    ak = j(αk − α−k)    for k = 1, 2, . . . .
Using expression (1.27) for the coefficients αk of the specific signal x we are considering in this example, we get:

    b0 = α0 = (1/2) [sin(πk/2)/(πk/2)]_{k=0} · [exp(jπk/2)]_{k=0} = 1/2;

    bk = αk + α−k = (1/(πk)) sin(πk/2) exp(jπk/2) + (1/(π(−k))) sin(−πk/2) exp(−jπk/2)
       = (1/(πk)) sin(πk/2) [exp(jπk/2) + exp(−jπk/2)]
       = (2/(πk)) sin(πk/2) cos(πk/2)
       = (1/(πk)) (sin 0 + sin πk) = 0    (by Eq. (1.24));

    ak = j(αk − α−k) = j [(1/(πk)) sin(πk/2) exp(jπk/2) − (1/(π(−k))) sin(−πk/2) exp(−jπk/2)]
       = (j/(πk)) sin(πk/2) [exp(jπk/2) − exp(−jπk/2)]
       = −(2/(πk)) sin^2(πk/2)
       = { −2/(πk),    if k is odd,
         { 0,          if k is even.

Notice that these are the same results we got before by evaluating the inner products with sines and cosines directly.

1.3.7 Discrete-Time Fourier Transform (DTFT), Revisited


The definition of the discrete-time Fourier transform of a discrete-time signal x(n) is:

    X(e^{jω}) = Σ_{n=−∞}^{∞} x(n) e^{−jωn}.

Notice that the x(n) are the continuous-time Fourier series coefficients of X(e^{jω}). To see this, we can relate the DTFT to the CTFS formulas above by making the following identifications:

    T0 = 2π,    ω = 2πt/T0 = t,    n = −k,    x(n) = a−k.

The DTFT formula then becomes the inverse CTFS formula (1.20), and therefore the inverse DTFT formula is obtained by relabeling variables in the CTFS formula (1.22):

    x(n) = (1/2π) ∫_{ω0}^{ω0+2π} X(e^{jω}) e^{jωn} dω.

Thus, the DTFS/DFT, CTFS, and DTFT are all particular cases of our general framework of orthogonal representations.
Properties of the DTFT.

1. Linearity: the DTFT of a1 x1(n) + a2 x2(n) is

    a1 X1(e^{jω}) + a2 X2(e^{jω}).

2. Delay: the DTFT of x(n − n0) is

    Σ_{n=−∞}^{∞} x(n − n0) e^{−jω(n−n0)} e^{−jωn0} = Σ_{m=−∞}^{∞} x(m) e^{−jωm} e^{−jωn0}    (with m = n − n0)
                                                   = X(e^{jω}) e^{−jωn0}.
3. Convolution (see the numerical check after this list):

    Y(e^{jω}) = Σ_n y(n) e^{−jωn}
              = Σ_n [Σ_k h(n − k) x(k)] e^{−jωn}
              = Σ_k [Σ_n h(n − k) e^{−jωn}] x(k)
              = Σ_k [H(e^{jω}) e^{−jωk}] x(k)    (by Property 2)
              = H(e^{jω}) Σ_k x(k) e^{−jωk}
              = H(e^{jω}) X(e^{jω}).

So, if y(n) = (h ∗ x)(n), then Y(e^{jω}) = H(e^{jω}) X(e^{jω}).

4. Parseval's theorem:

    Σ_{n=−∞}^{∞} |x(n)|^2 = (1/2π) ∫_{−π}^{π} |X(e^{jω})|^2 dω,
    Σ_{n=−∞}^{∞} x(n) y*(n) = (1/2π) ∫_{−π}^{π} X(e^{jω}) Y*(e^{jω}) dω.

5. X(e^{j0}) = Σ_{n=−∞}^{∞} x(n).

6. x(0) = (1/2π) ∫_{−π}^{π} X(e^{jω}) dω.

7. Modulation: the DTFT of x(n) e^{jω0 n} is

    Σ_n x(n) e^{j(ω0−ω)n} = X(e^{j(ω−ω0)}).

Example 1.24. Consider the system

    y(n) = (1/2)(x(n) + x(n − 1)).

Find the frequency response H(e^{jω}). Is this system a low-pass, band-pass, or high-pass filter? Plot |H(e^{jω})| and ∠H(e^{jω}).
Solution.
Method 1. Use the eigenfunction property: x(n) = e^{jωn} ⇒ y(n) = H(e^{jω}) e^{jωn}.

    y(n) = (1/2)(e^{jωn} + e^{jω(n−1)})
         = (1/2)(1 + e^{−jω}) e^{jωn},

so

    H(e^{jω}) = (1/2)(1 + e^{−jω}) = (1/2) e^{−jω/2} (e^{jω/2} + e^{−jω/2}) = e^{−jω/2} cos(ω/2).
Method 2.

    Y(e^{jω}) = DTFT{(1/2)(x(n) + x(n − 1))}
              = (1/2) DTFT{x(n)} + (1/2) DTFT{x(n − 1)}
              = (1/2) X(e^{jω}) + (1/2) X(e^{jω}) e^{−jω}
              = (1/2) X(e^{jω}) (1 + e^{−jω}),

    H(e^{jω}) = Y(e^{jω}) / X(e^{jω}) = (1/2)(1 + e^{−jω}).
Method 3. Impulse response: h(n) = (1/2)(δ(n) + δ(n − 1)), so H(e^{jω}) = DTFT{h(n)}:

    DTFT{δ(n)} = Σ_{n=−∞}^{∞} δ(n) e^{−jωn} = 1,
    DTFT{δ(n − 1)} = 1 · e^{−jω·1} = e^{−jω},
    H(e^{jω}) = (1/2)(1 + e^{−jω}).
To plot the magnitude response and the phase response, we note:

    |H(e^{jω})| = cos(ω/2),
    ∠H(e^{jω}) = ∠e^{−jω/2} + ∠cos(ω/2) = −ω/2 + 0 = −ω/2,

because

    cos(ω/2) ≥ 0    for −π/2 ≤ ω/2 ≤ π/2, i.e., −π ≤ ω ≤ π.

Note: H(e^{jω}) is periodic with period 2π, since it is the DTFT of a DT signal. Since the values of |H(e^{jω})| at low frequencies (near ω = 0) are close to one, and its values at high frequencies (near ω = ±π) are close to zero, this is a low-pass filter.
Figure 1.30. The plot of |H(e^{jω})|, for Ex. 1.24.

Figure 1.31. The plot of ∠H(e^{jω}), for Ex. 1.24.

Example 1.25. Repeat Ex. 1.24 for the following system:

    y(n) = (1/2)(x(n) − x(n − 2)).
Solution.

    h(n) = (1/2)(δ(n) − δ(n − 2)),
    H(e^{jω}) = (1/2)(1 − e^{−2jω})
              = j e^{−jω} · (1/(2j))(e^{jω} − e^{−jω}) = j e^{−jω} sin ω,

so

    |H(e^{jω})| = |sin ω|,
    ∠H(e^{jω}) = ∠j + ∠e^{−jω} + ∠sin ω = π/2 − ω + ∠sin ω
               = { π/2 − ω,     0 ≤ ω ≤ π,
                 { −π/2 − ω,    −π ≤ ω ≤ 0,

because

    sin ω ≥ 0 for 0 ≤ ω ≤ π,    sin ω ≤ 0 for −π ≤ ω ≤ 0,

and therefore

    ∠sin ω = { 0,     0 ≤ ω ≤ π,
             { −π,    −π ≤ ω ≤ 0.
This is a band-pass filter. Note: it is a convention to keep angles in the range [−π, π].
Figure 1.32. The plot of |H(e^{jω})|, for Ex. 1.25.

Figure 1.33. The plot of ∠H(e^{jω}), for Ex. 1.25.

1.3.8 Discrete Fourier Transform (DFT), Revisited


Our motivation for studying frequency analysis of DT signals is summarized in Fig. 1.34.

Figure 1.34. A system diagram motivating the use of frequency analysis: the input x(n) = Σ_{k=0}^{N−1} X(k) gk(n) passes through an LTI system with frequency response H(e^{jω}) and produces the output y(n) = Σ_{k=0}^{N−1} X(k) H(e^{j2πk/N}) gk(n).

In this figure, the input x(n) is N-periodic and the basis function gk(n) is given by

    gk(n) = (1/N) e^{j2πkn/N}    for k = 0, · · · , N − 1 and for all n.        (1.28)

• Conceptual importance: LTI systems process each harmonic separately in a simple way (i.e., each harmonic is multiplied by the frequency-dependent complex number H(e^{j2πk/N})).

• Computational importance:
  – to obtain X(k) H(e^{j2πk/N}) from X(k) for k = 0, 1, · · · , N − 1, we need only N operations;
  – to obtain X(k) from x(n), and y(n) from X(k) H(e^{j2πk/N}), we need only O(N log N) operations. This can be done through the FFT.

If we put x(n) and gk(n) in vector form, we get

    x = (x(0), x(1), . . . , x(N − 1))^T,
    gk = (1/N) (e^{j2πk·0/N}, e^{j2πk·1/N}, . . . , e^{j2πk·(N−1)/N})^T    for k = 0, 1, · · · , N − 1.        (1.29)

We can then write

    x = Σ_{k=0}^{N−1} X(k) gk.        (1.30)

Since the gk's are pairwise orthogonal, we can use the projection formula to calculate the coefficients and obtain the inversion formula (1.18).

• Remark 1. DFT ≠ DTFT (although they are related).
  DFT: discrete “frequency” k.
  DTFT: continuous frequency ω.

• Remark 2. e^{j2πkn/N} is periodic as a function of n, with period N. Therefore, the IDFT defines a periodic signal for −∞ < n < ∞. We will often think of x(n) as N-periodic.

If we let

    X = (X(0), . . . , X(N − 1))^T,

then we can rewrite Eq. (1.30) as

    x = Σ_{k=0}^{N−1} X(k) gk
      = X(0) g0 + X(1) g1 + · · · + X(N − 1) gN−1
      = ( g0  g1  · · ·  gN−1 ) X
      = BX,

where B is an N × N matrix whose columns are the gk's and whose entry Bnk in the n-th row and k-th column is given by

    Bnk = (1/N) e^{j2π(k−1)(n−1)/N}    for n = 1, 2, · · · , N;  k = 1, 2, · · · , N.
To get the formula for the DFT, we premultiply x = BX by the matrix A = N B^H, where y^H = (y*)^T denotes the conjugate transpose of y. Since the rows of B^H are the conjugate transposes of the gk's, the k-th row of A is N gk^H, where

    gk^H = ( (1/N) e^{−j2πk·0/N}  (1/N) e^{−j2πk·1/N}  · · ·  (1/N) e^{−j2πk·(N−1)/N} ).

The entry in the k-th row and n-th column of A is therefore:

    Akn = N (Bnk)* = e^{−j2π(k−1)(n−1)/N}    for k = 1, 2, · · · , N;  n = 1, 2, · · · , N.

Premultiplying x = BX by A, we get

    Ax = ABX.
Figure 1.35. An illustration of the DFT as matrix multiplication: X = Ax, where X and x are N × 1 vectors and A is N × N.

We now calculate AB. The rows of A are N g0^H, . . . , N gN−1^H, and the columns of B are g0, . . . , gN−1, so the entry of AB in the k-th row and p-th column is N gk^H gp. Note that

    gk^H gp = ⟨gp, gk⟩ = Σ_{n=0}^{N−1} gp(n) gk*(n),

which, by the orthogonality and energy calculations of Example 1.19, equals 1/N when p = k and 0 otherwise. AB therefore simplifies to

    AB = N · diag(1/N, 1/N, . . . , 1/N) = I,

where I is the identity matrix.
Sec. 1.3. Frequency Analysis 73

Hence, A and B are inverses of each other. The DFT of x can then be calculated by

X = Ax.

From Fig. 1.35, we see that we need N 2 complex multiplications for a brute force
implementation of DFT. However, the fact that the matrix A is highly structured can
be exploited to produce a much faster algorithm for multiplying a vector by this matrix
A.
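The matrices A and B can be built directly from their entry formulas (using 0-based indices, which turns (k − 1)(n − 1) into kn). A sketch of ours, assuming NumPy:

    import numpy as np

    N = 8
    n = np.arange(N).reshape(-1, 1)               # row index
    k = np.arange(N).reshape(1, -1)               # column index
    B = np.exp(2j * np.pi * n * k / N) / N        # B[n, k] = (1/N) e^{j 2 pi n k / N}
    A = N * B.conj().T                            # A = N B^H

    print(np.allclose(A @ B, np.eye(N)))          # True: A and B are inverses
    x = np.random.default_rng(1).standard_normal(N)
    print(np.allclose(A @ x, np.fft.fft(x)))      # True: X = A x is the DFT of x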
