Figure 1.18. A complex number z can be represented in Cartesian coordinates (x, y) or polar coordi-
nates (R, θ).
Figure 1.19. Multiplication of two complex numbers z1 = |z1 |ejθ1 and z2 = |z2 |ejθ2 (z2 is not shown).
The result is z = |z|ejθ with |z| = |z1 | · |z2 | and θ = θ1 + θ2 .
Figure 1.20. (a) Complex number z and its complex conjugate z ∗ . (b) Illustrations to Example 1.11.
Example 1.11. Let us compute the various quantities defined above for z = 1 + j.
1. z* = 1 − j.
2. |z| = √(1² + 1²) = √2. Alternatively,
   |z| = √(zz*) = √((1 + j)(1 − j)) = √(1 + j − j − j²) = √2.
3. Re(z) = Im(z) = 1.
4. Polar representation: z = √2 (cos(π/4) + j sin(π/4)) = √2 e^{jπ/4}.
5. To compute z², square the absolute value and double the angle:

   z² = 2(cos(π/2) + j sin(π/2)) = 2j = 2e^{jπ/2}.
The same answer is obtained from the Cartesian representation:

   (1 + j)(1 + j) = 1 + 2j + j² = 1 + 2j − 1 = 2j.
6. To compute 1/z, multiply both the numerator and the denominator by z*:

   1/z = z*/(zz*) = (1 − j)/((1 + j)(1 − j)) = (1 − j)/2 = 1/2 − j/2.
Alternatively, use the polar representation:
   1/z = 1/(√2 e^{jπ/4}) = (1/√2) e^{−jπ/4}
       = (1/√2)(cos(−π/4) + j sin(−π/4))
       = (1/√2)(√2/2 − j·√2/2)
       = 1/2 − j/2.
36 CHAPTER 1. ANALYSIS OF DISCRETE-TIME LINEAR TIME-INVARIANT SYSTEMS
We can check to make sure that (1/z) · z = 1:

   (1/2 − j/2)(1 + j) = 1/2 + j/2 − j/2 − j²/2 = 1/2 + 1/2 = 1.
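These computations are easy to check numerically. A quick sketch in Python, using only the standard cmath module (the variable names are ours):

```python
import cmath

z = 1 + 1j

z_conj = z.conjugate()                  # 1 - j
magnitude = abs(z)                      # sqrt(2)

r, theta = cmath.polar(z)               # polar form: r = sqrt(2), theta = pi/4

z_squared = z * z                       # 2j: magnitude squared, angle doubled

z_inv = z_conj / (z * z_conj)           # 1/z = z*/(z z*) = 1/2 - j/2

assert abs(z_inv * z - 1) < 1e-12       # sanity check: (1/z) * z = 1
print(z_conj, magnitude, theta, z_squared, z_inv)
```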
3. If not, can we at least find coefficients ak such that Eq. (1.6) is approximately
satisfied?
Precise answers to these three questions are impossible unless we define what we mean
by “orthogonal” and “approximately”. In order to do this, we generalize the notions
of orthogonality and length from planar geometry to spaces of signals, via the
following procedure:
• define the distance between any two signals, by appropriately generalizing the
notion of distance in a plane.
Sec. 1.3. Frequency Analysis 37
We address the first item on this agenda by initially restricting our attention to complex-
valued DT signals defined on a fixed interval, say, [0, N − 1] where N is some fixed
nonnegative integer. In other words, we consider DT signals whose domain is the set
{0, 1, . . . , N − 1} and whose range is C. Each such signal s can be represented as an
N -dimensional vector s, by recording its N samples in a column:
        ⎡ s(0)     ⎤
    s = ⎢ s(1)     ⎥ .
        ⎢   ⋮      ⎥
        ⎣ s(N − 1) ⎦
The collection of all such signals is therefore the same as the set of all N -dimensional
complex-valued vectors which we call CN .
Several remarks about notational conventions are in order here.
• In writing, vectors are usually denoted by underlining them: s. In printed texts,
however, it is customary to use boldface letters (s) to denote vectors.
• A transpose of a column vector is a row vector – that is, an equivalent expression
for s is: s = (s(0), s(1), . . . , s(N − 1))ᵀ.
• We will occasionally be using vectors to represent signals defined for n = 1, 2, . . . , N
rather than n = 0, 1, . . . , N − 1. In this case,
        ⎡ s(1) ⎤
    s = ⎢ s(2) ⎥ .
        ⎢  ⋮   ⎥
        ⎣ s(N) ⎦
Figure 1.21. Two vectors in the real plane are orthogonal if and only if they form a 90◦ angle.
• We will often be using symbol “∈” which means “is an element of”. For example,
s ∈ CN means: “s is an element of CN ”.
Example 1.12. Let us see whether our definition of orthogonality makes sense for
vectors in the real plane R². Two vectors in the plane are called orthogonal if the angle
between them is 90◦. Let s = (x₁, y₁)ᵀ ∈ R² and g = (x₂, y₂)ᵀ ∈ R² be the two vectors in the
real plane shown in Fig. 1.21. Their inner product is then ⟨s, g⟩ = x₁x₂ + y₁y₂. If the
inner product is equal to zero, then

    x₁/y₁ = −y₂/x₂,
which says that the right triangles △OBA and △CDO are similar. From the similarity
of these triangles, we get ∠AOB = ∠OCD, and hence the angle between s and g is 90◦.
Similar reasoning applies if the two vectors are oriented differently with respect to the
coordinate axes. Therefore, saying that the inner product of two vectors in the real
plane is zero is equivalent to saying that they form a 90◦ angle. This shows that our
definition of orthogonality in CN is an appropriate generalization of orthogonality in
the real plane.
It is easily seen that the inner product of a vector with itself is always a real number:
N
X −1 N
X −1
∗
hs, si = s(n)(s(n)) = |s(n)|2 .
n=0 n=0
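The inner product and the induced norm can be sketched in a few lines of Python (the function names inner and norm are ours, introduced for illustration):

```python
def inner(u, v):
    # <u, v> = sum_n u(n) * conj(v(n))
    return sum(a * b.conjugate() for a, b in zip(u, v))

def norm(u):
    # 2-norm: square root of the (real, nonnegative) energy <u, u>
    return abs(inner(u, u)) ** 0.5

# The R^2 vectors (1, 2) and (-2, 1) form a 90-degree angle: inner product 0.
assert inner([1, 2], [-2, 1]) == 0

# <s, s> equals the energy sum |s(n)|^2.
s = [1 + 1j, 2, -1j]
energy = sum(abs(v) ** 2 for v in s)
assert abs(inner(s, s).real - energy) < 1e-12
print(inner(s, s).real, energy)         # both approximately 7
```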
Fig. 1.21 and the two examples above illustrate the fact that our definitions of
orthogonality and the norm in CN generalize the corresponding concepts from planar
geometry. Therefore, when considering vectors in CN it is often helpful to draw planar
pictures to guide our intuition. Note, however, that the proof of any facts concerning
inner products and norms in CN (for example, the properties listed above) cannot be
based solely on pictures: the pictures are there to guide our reasoning, but rigorous
proofs must rely only on definitions and properties proved before. For example, in
proving Property 1 above we can only rely on our definition of the inner product. Once
Property 1 is proved, we can use both Property 1 (if we need to) and the definition of
the inner product in proving Property 2, etc.
⁴We will soon see that the term norm is actually more general. The specific norm that is the square
root of the energy is called the Euclidean norm or the ℓ² norm or the 2-norm.
Figure 1.22. (a) The orthogonal projection sg of a vector s onto another vector g. (b) The orthogonal
projection sG of a vector s onto a space G can be obtained by projecting s onto g1 and g2 and adding
the results, where {g1 , g2 } is an orthogonal basis for G.
Orthogonal Projections
In the real plane, the coordinates of a vector are given by the projections of the vector
onto the coordinate axes. For example, the projections of the vector (x₁, y₁)ᵀ in Fig. 1.21
onto the x-axis and y-axis are the vectors (x₁, 0)ᵀ and (0, y₁)ᵀ, respectively. We will
soon see that the coordinates of a signal in a Fourier basis–that is, the Fourier series
coefficients–can also be computed from the projections of the signal onto the individual
Fourier basis functions.
We define the projection of a vector s ∈ CN onto another vector g ∈ CN by gener-
alizing the notion of an orthogonal projection from planar geometry. Specifically, when
we project s onto g, we get another vector sg which is collinear with g (i.e. it is of the
form ag) such that the difference s − sg is orthogonal to g. An illustration of this, for
the real plane, is given in Fig. 1.22(a).
Definition 1.6. The orthogonal projection of a vector s ∈ CN onto a nonzero vector
g ∈ CN is such a vector sg ∈ CN that:
1. sg = ag for some complex number a, and
2. s − sg ⊥ g.
The orthogonal projection of a real-valued vector s ∈ RN onto another real-valued vector
g ∈ RN is defined similarly.
Note again that, even though our definition applies to the general case of CN , we
can use the planar picture of Fig. 1.22(a) to guide our analysis. From this picture, we
see that the coefficient a must be chosen so that the residual s − ag is orthogonal to g, i.e.,

    ⟨s − ag, g⟩ = 0.
Our definition works as expected in the plane. Just as we did with the concepts of
orthogonality and length before, we have generalized the planar concept of orthogonal
projection to CN .
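The defining property, that a is chosen so that the residual s − ag is orthogonal to g, can be checked directly in code (a small sketch; function names are ours):

```python
def inner(u, v):
    return sum(a * b.conjugate() for a, b in zip(u, v))

def project(s, g):
    # s_g = a g with a = <s, g> / <g, g>, so that s - s_g is orthogonal to g
    a = inner(s, g) / inner(g, g)
    return [a * x for x in g]

s = [3 + 0j, 4 + 0j]
g = [1 + 0j, 0j]                        # the x-axis direction

sg = project(s, g)                      # (3, 0)
residual = [u - v for u, v in zip(s, sg)]

assert inner(residual, g) == 0          # Definition 1.6, condition 2
print(sg)
```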
We now generalize our orthogonal projection formula (1.7) to the case when we
project a vector s onto a space G. This is illustrated in Fig. 1.22(b). We show that, if
an orthogonal basis for space G is known, it is easy to compute the projection of any
vector s onto G. Of course we need to precisely define what we mean by an orthogonal
basis before we proceed.
Definition 1.7. A subset G of CN is called a vector subspace of CN if
1. ag ∈ G for any g ∈ G and any a ∈ C,
2. and g1 + g2 ∈ G for any g1 , g2 ∈ G.
A subset G of RN is called a vector subspace of RN if
1. ag ∈ G for any g ∈ G and any a ∈ R,
2. and g1 + g2 ∈ G for any g1 , g2 ∈ G.
In other words, a set G in CN (or in RN ) is called a vector subspace if it is closed
under multiplication by a scalar and under vector addition.
Example 1.15. The set of all vectors of the form (α, 0, 0)ᵀ, where α ∈ C, is a vector
subspace of C³: if you multiply a vector of this form by a complex number, you get
a vector of this form; if you add two vectors of this form, you again get a vector of
this form. On the other hand, this set of vectors is not a vector subspace of R³ simply
because it is not a subset of R³.
The set of all vectors of the form (α, 0, 1)ᵀ, where α ∈ C, is not a vector subspace of
C³: if you multiply a vector like that by 2, you get (2α, 0, 2)ᵀ, which is no longer in the
set.
Definition 1.8. Vectors g₁, g₂, . . . , gₘ are called linearly independent if none of them
can be expressed as a linear combination of the others:

    gᵢ ≠ Σ_{k≠i} aₖgₖ for i = 1, 2, . . . , m.
Definition 1.9. The space spanned by vectors g₁, g₂, . . . , gₘ is the set of all their linear
combinations, i.e. the set of all vectors of the form

    a₁g₁ + a₂g₂ + . . . + aₘgₘ,

where a₁, . . . , aₘ are arbitrary scalars.
We will need the following important result from linear algebra which we state here
without proof.
Theorem 1.2. Any N linearly independent vectors in CN (RN ) form a basis for CN
(RN ). Any N pairwise orthogonal nonzero vectors in CN (RN ) form an orthogonal
basis for CN (RN ).
1. sG ∈ G, and
2. s − sG ⊥ G.

Writing sG = Σ_{k=1}^{m} aₖgₖ, the second condition implies, for each basis vector gₚ:

    ⟨s − sG, gₚ⟩ = 0, for p = 1, . . . , m
    ⟨s, gₚ⟩ − ⟨sG, gₚ⟩ = 0
    ⟨s, gₚ⟩ − ⟨Σ_{k=1}^{m} aₖgₖ, gₚ⟩ = 0
    ⟨s, gₚ⟩ − Σ_{k=1}^{m} aₖ⟨gₖ, gₚ⟩ = 0.
But notice that the orthogonality of the basis {g₁, . . . , gₘ} implies that ⟨gₖ, gₚ⟩ = 0
unless p = k. Therefore, only one term in the summation can be nonzero – the term for
k = p:

    ⟨s, gₚ⟩ − aₚ⟨gₚ, gₚ⟩ = 0,

    aₚ = ⟨s, gₚ⟩ / ⟨gₚ, gₚ⟩, for p = 1, . . . , m.
Comparing this result with our result (1.7) for projecting one vector onto another, we
see that projecting onto a space G which has an orthogonal basis {g₁, . . . , gₘ} amounts
to the following: project s onto each basis vector gₖ separately, and add up the resulting
projections. This is illustrated in Fig. 1.22(b) for projecting a 3-dimensional vector onto a plane
spanned by two orthogonal vectors g₁ and g₂.
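The procedure, projecting onto each basis vector and adding the results, can also be sketched numerically (function names and the example vectors are ours):

```python
def inner(u, v):
    return sum(a * b.conjugate() for a, b in zip(u, v))

def project_onto_space(s, basis):
    # a_p = <s, g_p> / <g_p, g_p>; s_G = sum_p a_p g_p (orthogonal basis assumed)
    coeffs = [inner(s, g) / inner(g, g) for g in basis]
    sG = [sum(a * g[n] for a, g in zip(coeffs, basis)) for n in range(len(s))]
    return coeffs, sG

# A 3-D vector projected onto the plane spanned by two orthogonal vectors,
# as in Fig. 1.22(b).
g1 = [1 + 0j, 1 + 0j, 0j]
g2 = [1 + 0j, -1 + 0j, 0j]
s = [2 + 0j, 0j, 5 + 0j]

coeffs, sG = project_onto_space(s, [g1, g2])
residual = [u - v for u, v in zip(s, sG)]

# The residual is orthogonal to every basis vector, hence to all of G.
assert inner(residual, g1) == 0 and inner(residual, g2) == 0
print(coeffs, sG)
```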
Formula (1.9) is actually a lot more than a formula for projecting a vector onto a
subspace. Note that, if the vector s belongs to G then its projection onto G is equal to
the vector itself:
sG = s if s ∈ G.
In this case, Eq. (1.9) tells us how to represent s as a linear combination of orthogonal
basis vectors.
Figure 1.24. Illustration to the proof of Theorem 1.4: The closest point in space G to a fixed vector
s is the orthogonal projection sG of s onto G.
The coefficients (1.11) are unique, i.e. there is no other set of coefficients that satisfy
Eq. (1.10).
As we will shortly see, a particular case of these equations are the DT Fourier series
formulas. These equations, however, are very general: they work for non-Fourier bases
of CN . Slight modifications of these equations also apply to other spaces of DT and CT
signals. For example, the CT Fourier series formulas are essentially a particular case of
Eqs. (1.10,1.11).
Suppose now that s does not belong to the subspace G. Can we find coefficients
a1 , . . . , am such that Eq. (1.10) is satisfied approximately? Specifically, we would like
to find the coefficients that minimize the energy (or, equivalently, the 2-norm) of the
error – i.e. the 2-norm of the difference between the two sides of (1.10):

    find a₁, . . . , aₘ to minimize ‖s − Σ_{k=1}^{m} aₖgₖ‖.
It turns out that the answer is still Eq. (1.11)–i.e. that the orthogonal projection sG
of s onto G is the closest vector to s among all vectors in G. To show this, consider
Fig. 1.24. Using the definition of orthogonal projection, we see that the vector s − sG
must be orthogonal to G, i.e.
s − sG ⊥ v for any v ∈ G.
    ‖s − sG‖ = min_{f∈G} ‖s − f‖.
The coefficients (1.13) are unique, i.e. there is no other set of coefficients that results
in the minimum 2-norm of the error.
Solution.
(a) Write all three signals as vectors, i.e.,

    g₁ = (g₁(1), g₁(2), g₁(3), g₁(4))ᵀ, g₂ = (g₂(1), g₂(2), g₂(3), g₂(4))ᵀ, s = (s(1), s(2), s(3), s(4))ᵀ.
Figure 1.25. Illustration to Example 1.17: The four entries of vector g1 as points in the complex
plane C.
    g₁(3) = exp(j2π·2/4) = exp(jπ),
    g₁(4) = exp(j2π·3/4) = exp(j3π/2).
Fig. 1.25 shows a plot of these in the complex plane (recall that exp(jθ) has absolute
value 1 and angle θ).
Calculations for g2 are similar. We obtain:
    g₁ = (1, j, −1, −j)ᵀ,  g₂ = (1, −1, 1, −1)ᵀ.
Figure 1.26. Illustration to Example 1.17: Vector s lies in the space spanned by g₁ and g₂, and
therefore can be represented as their linear combination. Vector s′ is not in the space spanned by g₁
and g₂ and cannot be represented as a linear combination of g₁ and g₂. The closest linear combination
of g₁ and g₂ to s′ is the orthogonal projection of s′ onto the span of g₁ and g₂.
However,

    −¼g₁ + ¼g₂ = (0, (−1 − j)/4, 1/2, (j − 1)/4)ᵀ ≠ s′.
Therefore, s′ cannot be represented as a linear combination of g₁ and g₂. Geometrically,
this means that s′ lies outside of the space spanned by g₁ and g₂, as illustrated in
Fig. 1.26. The coefficients a′₁ = −1/4 and a′₂ = 1/4 we computed are actually the
coefficients of the orthogonal projection of s′ onto this space. Theorem 1.4 states that
in this case −¼g₁ + ¼g₂ is the best approximation of s′ as a linear combination of g₁
and g₂, in the sense that it minimizes the 2-norm of the error.
Example 1.18. In addition to the signals g1 (n) and g2 (n) defined in Example 1.17,
define the signals g0 (n) and g3 (n) as follows:
    g₀(n) = exp(j2π·0·(n − 1)/4), n = 1, 2, 3, 4;
    and g₃(n) = exp(j2π·3·(n − 1)/4), n = 1, 2, 3, 4.
(b) Using Theorem 1.3, find coefficients a′₀, a′₁, a′₂, a′₃ in the following Fourier series
expansion:

    s′(n) = a′₀g₀(n) + a′₁g₁(n) + a′₂g₂(n) + a′₃g₃(n), n = 1, 2, 3, 4,

for signal s′(n) defined in Example 1.17.
Solution.
(a) We already know from Example 1.17 that s = g1 +g2 –i.e., the additional basis signals
g0 and g3 are not needed to represent s. The answer is a0 = a3 = 0 and a1 = a2 = 1. If
we did not have the results of Example 1.17 available to us, we would proceed similarly
to Example 1.17. First, we write all signals as vectors:

    g₀ = (1, 1, 1, 1)ᵀ, g₁ = (1, j, −1, −j)ᵀ, g₂ = (1, −1, 1, −1)ᵀ, g₃ = (1, −j, −1, j)ᵀ, s = (2, −1 + j, 0, −1 − j)ᵀ.
Then we calculate the inner products used in Eq. (1.11), and compute the coefficients.
These calculations were done in Example 1.17 for g₁ and g₂. The calculations for g₀
and g₃ are similar: ⟨g₀, g₀⟩ = ⟨g₃, g₃⟩ = 4, and

    ⟨s, g₀⟩ = 2·1* + (−1 + j)·1* + 0·1* + (−1 − j)·1* = 0,
    a₀ = ⟨s, g₀⟩/⟨g₀, g₀⟩ = 0/4 = 0,
    ⟨s, g₃⟩ = 2·1* + (−1 + j)·(−j)* + 0·(−1)* + (−1 − j)·j* = 2 + (−1 − j) + 0 + (−1 + j) = 0,
    a₃ = ⟨s, g₃⟩/⟨g₃, g₃⟩ = 0/4 = 0.
Similar calculations for s′ give a′₀ = 1/4, a′₁ = −1/4, a′₂ = 1/4, a′₃ = −1/4. Therefore,

    s′(n) = ¼g₀(n) − ¼g₁(n) + ¼g₂(n) − ¼g₃(n).
This is consistent with what we saw in Example 1.17: s′ cannot be represented as a
linear combination of only g₁ and g₂. Note, however, that if in the expansion

    s′ = ¼g₀ − ¼g₁ + ¼g₂ − ¼g₃
we drop the terms which do not contain g1 and g2 , we will get the following vector:
    −¼g₁ + ¼g₂,
which is the answer we obtained in Example 1.17. This is the closest approximation of
s′ by a linear combination of g₁ and g₂.
Now let us generalize Examples 1.17 and 1.18 from four dimensions to N .
In other words, there are N functions, g0 (n), g1 (n), . . . , gN −1 (n), and each of them is
defined for n = 0, 1, . . . , N − 1.
(a) Prove that these N signals are pairwise orthogonal, and find their energies.
(b) Find a formula for the Fourier series coefficients X(0), X(1), . . . , X(N − 1) of an
N -point complex-valued signal x(n),
    x(n) = Σ_{k=0}^{N−1} X(k)gₖ(n), n = 0, . . . , N − 1
         = Σ_{k=0}^{N−1} X(k) · (1/N) exp(j2πkn/N), n = 0, . . . , N − 1. (1.15)
Solution.
(a) To show orthogonality and compute the energies, we need to calculate all inner
products ⟨gₖ, gᵢ⟩, for all k = 0, . . . , N − 1 and i = 0, . . . , N − 1. If we can show that
these inner products for k ≠ i are zero, we will show that the signals are pairwise
orthogonal. Moreover, the inner products for k = i will give us the energies.
    ⟨gₖ, gᵢ⟩ = Σ_{n=0}^{N−1} gₖ(n)(gᵢ(n))*
             = Σ_{n=0}^{N−1} (1/N) exp(j2πkn/N) · (1/N) exp(−j2πin/N)
             = (1/N²) Σ_{n=0}^{N−1} exp(j2π(k − i)n/N)
             = (1/N²) Σ_{n=0}^{N−1} [exp(j2π(k − i)/N)]ⁿ.
When k = i, each term of the summation is equal to 1, and therefore the sum is N.
The energy of each gₖ is therefore N/N² = 1/N. When k ≠ i, the sum is zero (why?).
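This orthogonality argument is easy to check numerically, for instance for N = 8 (a sketch; the tolerance and names are ours):

```python
import cmath

N = 8

# g_k(n) = (1/N) exp(j 2 pi k n / N), n = 0, ..., N - 1
g = [[cmath.exp(2j * cmath.pi * k * n / N) / N for n in range(N)]
     for k in range(N)]

def inner(u, v):
    return sum(a * b.conjugate() for a, b in zip(u, v))

for k in range(N):
    for i in range(N):
        ip = inner(g[k], g[i])
        if k == i:
            assert abs(ip - 1 / N) < 1e-12      # energy <g_k, g_k> = 1/N
        else:
            assert abs(ip) < 1e-12              # pairwise orthogonal
print("orthogonality and energies verified for N =", N)
```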
(b) Since g0 , . . . , gN −1 are nonzero and pairwise orthogonal, we can apply Theorem
1.2 to infer that {g0 , . . . , gN −1 } is an orthogonal basis for CN . Any signal s ∈ CN can
therefore be uniquely represented as their linear combination, according to Theorem
1.3. The coefficients in the representation are given by Eq. (1.11). The denominator of
that formula is the energy of gk , which we found in Part (a) to be 1/N . Therefore,
    X(k) = ⟨x, gₖ⟩ / ⟨gₖ, gₖ⟩
         = ⟨x, gₖ⟩ / (1/N)
         = N Σ_{n=0}^{N−1} x(n)(gₖ(n))*
         = N Σ_{n=0}^{N−1} x(n) · (1/N) exp(−j2πkn/N)
         = Σ_{n=0}^{N−1} x(n) exp(−j2πkn/N), k = 0, . . . , N − 1. (1.16)
The signal X comprised of the N Fourier series coefficients is called the discrete Fourier
transform (DFT) of the signal x. The DFT X(k) is obtained from x(n) as follows:
    X(k) = Σ_{n=0}^{N−1} x(n) exp(−j2πkn/N), k = 0, . . . , N − 1. (1.18)
Since Eq. (1.17) is the recipe for obtaining signal samples x(n) from the DFT, it is
sometimes called the inverse DFT formula.
Eqs. (1.17) and (1.18) are particular cases of Eqs. (1.10) and (1.11), respectively, and
were easily obtained in Example 1.19 by applying Theorem 1.3 to a complex exponential
basis (also called Fourier basis) for CN . Eq. (1.17) tells us how to represent any signal
in CN as the linear combination of Fourier basis functions. The k-th term in the
representation is the orthogonal projection of the signal onto the k-th basis signal. The
projection coefficients are calculated using Eq. (1.18).
Note that Example 1.18 is a special case of Example 1.19: by setting N = 4 in
Eqs. (1.17) and (1.18) and appropriately normalizing the basis functions, we can get
the answers to Example 1.18.
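Eqs. (1.17) and (1.18) translate directly into code. A minimal O(N²) sketch (function names are ours; production code would compute the same quantities with an FFT), applied to the signal s of Example 1.18:

```python
import cmath

def dft(x):
    # Eq. (1.18): X(k) = sum_n x(n) exp(-j 2 pi k n / N)
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    # Eq. (1.17): x(n) = (1/N) sum_k X(k) exp(j 2 pi k n / N)
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

x = [2, -1 + 1j, 0, -1 - 1j]            # the signal s of Example 1.18
X = dft(x)                               # [0, 4, 4, 0] up to rounding
x_back = idft(X)

assert all(abs(a - b) < 1e-9 for a, b in zip(X, [0, 4, 4, 0]))
assert all(abs(a - b) < 1e-9 for a, b in zip(x_back, x))
# X(k)/N recovers a_k = 0, 1, 1, 0 of Example 1.18 (whose basis carries a 1/N factor).
print([round(c.real) for c in X])
```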
Theorem 1.5. Consider an LTI system whose impulse response is h. If the com-
plex exponential signal ejω0 n , defined for all integer n, is its input, then the output is
H(ejω0 )ejω0 n , for all integer n, where H(ejω0 ) is the frequency response of the system
evaluated at the frequency ω0 of the input signal.
Example 1.20. (a) Find a difference equation that describes the system in Fig. 1.27,
i.e., relates the output of the overall system to its input. Here, A, B, and C are
fixed constants.
[Fig. 1.27: the input x(n) and its two successively delayed versions x(n − 1) and x(n − 2) are scaled by A, B, and C, respectively, and summed to produce y(n).]
(b) Find the frequency response of this system by calculating the response to a complex
exponential.
Solution. To find H(ejω ), we calculate the response y(n) of the system to the
input signal ejωn ,
    x(n) = e^{jωn}
    ⇒ y(n) = Ae^{jωn} + Be^{jω(n−1)} + Ce^{jω(n−2)}
           = (A + Be^{−jω} + Ce^{−2jω}) e^{jωn},

so that H(e^{jω}) = A + Be^{−jω} + Ce^{−2jω}.
(c) Suppose that A = B = C = 1 and x(n) = 5, for −∞ < n < ∞. Calculate y(n)
using the frequency response.
Solution.
    x(n) = 5e^{j0·n}
    ⇒ y(n) = H(e^{j0}) · (5e^{j0·n}) = (1 + 1 + 1) · 5 = 15, for all integer n.
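This eigenfunction calculation is easy to confirm by simulating the system directly (a sketch; the function names are ours, and we assume the difference equation y(n) = Ax(n) + Bx(n − 1) + Cx(n − 2) from part (a)):

```python
import cmath

A = B = C = 1

def response(x, n):
    # assumed difference equation: y(n) = A x(n) + B x(n-1) + C x(n-2)
    return A * x(n) + B * x(n - 1) + C * x(n - 2)

x = lambda n: 5                        # constant input: 5 e^{j 0 n}

H0 = A + B * cmath.exp(-1j * 0) + C * cmath.exp(-2j * 0)   # H(e^{j0}) = 3

assert response(x, 0) == 15            # direct simulation
assert abs(H0 * 5 - 15) < 1e-12        # frequency-response prediction
print("y(n) = 15 for all n, matching H(e^{j0}) * 5")
```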
Example 1.21. Consider a DT LTI system with a known frequency response H(e^{jω}).
The response of such a system to an everlasting complex exponential input signal g(n) =
e^{jωn}, −∞ < n < ∞, is H(e^{jω})e^{jωn}, by Theorem 1.5. Suppose that the impulse
response h of the system has finite duration:

    h(n) ≠ 0 for n = 0, . . . , N − 1,
    h(n) = 0 otherwise.
Suppose further that x is an input signal which is periodic with period N . How many
arithmetic operations will it take to calculate the response y using the convolution
formula? First, notice that the response is also periodic with period N :
    y(n + N) = Σ_{m=−∞}^{∞} h(m)x(n + N − m) = Σ_{m=−∞}^{∞} h(m)x(n − m) = y(n), for any integer n.
Therefore, the computation of one particular sample y(n) requires N multiplications and
N − 1 additions.⁵ There are altogether N samples of y to be computed, and therefore
the overall number of multiplications is N² and the overall number of additions is
N(N − 1) = N² − N. The overall order of the number of operations required is N².
In other words, the process of calculating the response has computational complexity
O(N²). This is a very high computational cost for something as basic and as frequently
needed as calculating a discrete-time convolution. Specifically, suppose that on our
computer, this operation takes 1 second for N = N₁. Then for N = 100N₁ it will be
approximately 100² = 10000 times slower, i.e., it will take approximately 2.8 hours.
We need an algorithm for calculating convolutions more quickly. Let us use the fact
that, since x is periodic with period N , it can be represented as the following linear
combination:
    x = Σ_{k=0}^{N−1} X(k)gₖ,
where gk are the Fourier basis functions given by Eq. (1.14). Using the linearity of our
system, we obtain the following representation for the output:
    y = S[x] = Σ_{k=0}^{N−1} X(k)S[gₖ].
But since gk is a complex exponential of frequency 2πk/N , we can use Theorem 1.5 to
write the response to gk as S[gk ] = H(ej2πk/N )gk . Substituting this into the formula
for y, we get:
    y = S[x] = Σ_{k=0}^{N−1} X(k)H(e^{j2πk/N})gₖ.
But this is a representation of the signal y as a linear combination of Fourier basis signals
gk , with coefficients X(k)H(ej2πk/N ). In other words, the Fourier series coefficients of
y are:
Y (k) = X(k)H(ej2πk/N ), for k = 0, . . . , N − 1.
Assuming that the values of H(ej2πk/N ) for k = 0, . . . , N − 1 are known, we have
discovered the following procedure for calculating y:
⁵These are complex multiplications and complex additions since we assume in general that our
signals are complex-valued.
Step 1. Calculate the N DFT coefficients X(k) of x using the DFT formula.
Step 2. Calculate the N DFT coefficients Y (k) = X(k)H(ej2πk/N ) of y.
Step 3. Calculate y from its DFT coefficients using the inverse DFT formula.
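The three steps can be sketched with a naive O(N²) DFT standing in for the FFT (function names and the example signals are ours); for an N-point h and an N-periodic x, one period of y matches the direct circular convolution:

```python
import cmath

def dft(v):
    N = len(v)
    return [sum(v[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(V):
    N = len(V)
    return [sum(V[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

h = [1, 2, 0, -1]                        # N-point impulse response (ours)
x = [3, 0, 1, 2]                         # one period of the N-periodic input (ours)
N = len(x)

X = dft(x)                                           # Step 1
Hk = dft(h)            # H(e^{j 2 pi k / N}) equals the DFT of h for an N-point h
Y = [X[k] * Hk[k] for k in range(N)]                 # Step 2
y = idft(Y)                                          # Step 3

# Compare with one period of the convolution computed directly.
y_direct = [sum(h[m] * x[(n - m) % N] for m in range(N)) for n in range(N)]
assert all(abs(a - b) < 1e-9 for a, b in zip(y, y_direct))
print(y_direct)                          # [7, 5, -1, 1]
```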
It turns out that Steps 1 and 3, the DFT and the inverse DFT, can be implemented
using fast Fourier transform (FFT)⁶ algorithms whose computational complexity is
O(N log N). Moreover, if the N values H(e^{j2πk/N}) of the frequency response were
not known but had to be calculated from the impulse response h, that, too, could
be done using FFT with computational complexity O(N log N ). Step 2 consists of N
multiplications. The overall computational complexity is therefore only O(N log N ), a
marked improvement over O(N 2 ), especially for large N .
To summarize, we have just seen two aspects of why it is important to study rep-
resentations of signals as weighted sums of complex exponentials in the context of our
study of LTI systems.
• Conceptual importance: LTI systems process each frequency component sep-
arately, and in a very simple way (i.e. by multiplying it by a frequency-dependent
complex number).
• Computational importance: To obtain the response of an LTI system with an
N -point impulse response to an N -periodic signal, we need O(N 2 ) computational
effort if we use the convolution formula directly. The computational complexity is,
however, reduced to O(N log N ) if we use the frequency-domain representations
instead.
forms an orthogonal basis for L2 (T0 ). We can represent any T0 -periodic CT signal s as
a linear combination of these complex exponentials:
    s(t) = Σ_{k=−∞}^{∞} aₖgₖ(t) = Σ_{k=−∞}^{∞} aₖ exp(j2πkt/T₀). (1.20)
The first “=” sign in Eq. (1.20) needs careful interpretation: unlike the finite-duration
DT case, the equality here is not pointwise. Instead, the equality is understood in the
following sense:
    ‖s − Σ_{k=N}^{M} aₖgₖ‖ → 0 as N → −∞ and M → ∞.
    aₖ = ⟨s, gₖ⟩ / ⟨gₖ, gₖ⟩. (1.21)
The fact that ⟨gₖ, gᵢ⟩ = 0 for k ≠ i shows that our vectors are indeed pairwise orthogonal.⁷
Substituting ⟨gₖ, gₖ⟩ = T₀ back into Eq. (1.21), we get:

    aₖ = ⟨s, gₖ⟩/T₀ = (1/T₀) ∫_τ^{τ+T₀} s(t) exp(−j2πkt/T₀) dt. (1.22)
⁷Note, however, that this is not enough to prove that they form an orthogonal basis for L²(T₀) since
L²(T₀) is infinite-dimensional, and Theorem 1.2 no longer holds: it is not true that any infinite set of
nonzero orthogonal vectors forms an orthogonal basis for L²(T₀). Proving the fact that signals gₖ do
form an orthogonal basis for L²(T₀) is beyond the scope of this course.
[Fig. 1.28: the T₀-periodic rectangular pulse train s(t), of amplitude A and pulse width t₀.]
Example 1.22. Let T0 , t0 , and A be three positive real numbers such that T0 > t0 > 0.
Consider the following periodic signal:
    s(t) = A if |t| ≤ t₀/2, and s(t) = 0 if t₀/2 < |t| ≤ T₀/2,
periodically extended with period T0 , as shown in Fig. 1.28. Using Eq. (1.22) with
τ = −T0 /2, its Fourier series coefficients are:
    a₀ = (1/T₀) ∫_{−t₀/2}^{t₀/2} A dt = At₀/T₀,

    aₖ = (1/T₀) ∫_{−t₀/2}^{t₀/2} A exp(−j2πkt/T₀) dt
       = (A/T₀) · (T₀/(−j2πk)) · [exp(−j2πkt/T₀)]_{−t₀/2}^{t₀/2}
       = (A/(πk)) · (1/(2j)) · (exp(jπkt₀/T₀) − exp(−jπkt₀/T₀))
       = (A/(πk)) sin(πkt₀/T₀).
(Note that this last formula is also valid for k = 0 if we define (sin θ)/θ |_{θ=0} = 1.)
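The closed-form coefficients can be checked by approximating the integral in Eq. (1.22) with a Riemann sum (a sketch; the parameter values T₀ = 2, t₀ = 1, A = 1 are ours):

```python
import cmath
import math

T0, t0, A = 2.0, 1.0, 1.0
M = 20000                               # number of integration points

def s(t):                               # one period of the pulse, tau = -T0/2
    return A if abs(t) <= t0 / 2 else 0.0

def a_numeric(k):
    # midpoint-rule approximation of Eq. (1.22)
    dt = T0 / M
    total = 0j
    for m in range(M):
        t = -T0 / 2 + (m + 0.5) * dt
        total += s(t) * cmath.exp(-2j * cmath.pi * k * t / T0)
    return total * dt / T0

def a_closed(k):
    return A * t0 / T0 if k == 0 else A / (math.pi * k) * math.sin(math.pi * k * t0 / T0)

for k in range(4):
    assert abs(a_numeric(k) - a_closed(k)) < 1e-3
print("Riemann sums match the closed-form coefficients")
```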
Another common way of decomposing CT periodic signals as linear combinations
of sinusoidal signals is by using sines and cosines as basis functions, instead of complex
exponentials. The following infinite collection of functions is also an orthogonal basis
for L2 (T0 ), and is also called a Fourier basis:
    c₀(t) = 1,
    cₖ(t) = cos(2πkt/T₀), k = 1, 2, . . .
    sₖ(t) = sin(2πkt/T₀), k = 1, 2, . . .
As we did previously, let us first prove that these functions are pairwise orthogonal, and
find their energies over one period. We need to consider all pairwise inner products–
which will now be integrals of products of trigonometric functions. We will therefore
need the following formulas:
1
sin α sin β = (cos(α − β) − cos(α + β)) (1.23)
2
1
sin α cos β = (sin(α − β) + sin(α + β)) (1.24)
2
1
cos α cos β = (cos(α − β) + cos(α + β)) (1.25)
2
Now we compute the inner products, keeping in mind that sk (t) is defined for k ≥ 1
while ck (t) is defined for k ≥ 0:
    ⟨sₖ, sᵢ⟩ = ∫_τ^{τ+T₀} sin(2πkt/T₀) sin(2πit/T₀) dt
             = ½ ∫_τ^{τ+T₀} [cos(2π(k − i)t/T₀) − cos(2π(k + i)t/T₀)] dt   (by Eq. (1.23))
             = T₀/2 if k = i, and 0 if k ≠ i.

    ⟨sₖ, cᵢ⟩ = ∫_τ^{τ+T₀} sin(2πkt/T₀) cos(2πit/T₀) dt
             = ½ ∫_τ^{τ+T₀} [sin(2π(k − i)t/T₀) + sin(2π(k + i)t/T₀)] dt = 0.   (by Eq. (1.24))

    ⟨cₖ, cᵢ⟩ = ∫_τ^{τ+T₀} cos(2πkt/T₀) cos(2πit/T₀) dt
             = ½ ∫_τ^{τ+T₀} [cos(2π(k − i)t/T₀) + cos(2π(k + i)t/T₀)] dt   (by Eq. (1.25))
             = T₀ if k = i = 0, T₀/2 if k = i ≠ 0, and 0 if k ≠ i.
We are now ready to derive formulas for the coefficients a1 , a2 . . . and b0 , b1 , b2 . . . of the
expansion of a CT T0 -periodic signal s(t):
    s(t) = b₀ + Σ_{k=1}^{∞} aₖsₖ(t) + Σ_{k=1}^{∞} bₖcₖ(t). (1.26)
[Fig. 1.29: the 2-periodic square wave x(t) of Example 1.23, equal to 1 for −1 ≤ t < 0 and 0 for 0 ≤ t < 1.]

    b₀ = ⟨s, c₀⟩ / ⟨c₀, c₀⟩ = (1/T₀) ∫_τ^{τ+T₀} s(t) dt,
    bₖ = ⟨s, cₖ⟩ / ⟨cₖ, cₖ⟩ = (2/T₀) ∫_τ^{τ+T₀} s(t) cos(2πkt/T₀) dt, k = 1, 2, . . .

    aₖ = ⟨s, sₖ⟩ / ⟨sₖ, sₖ⟩ = (2/T₀) ∫_τ^{τ+T₀} s(t) sin(2πkt/T₀) dt, k = 1, 2, . . .
Example 1.23. Suppose that the period is T0 = 2, and let signal x be defined by:
    x(t) = 1 for −1 ≤ t < 0, and x(t) = 0 for 0 ≤ t < 1,
as illustrated in Fig. 1.29. Let us compute the Fourier series coefficients with respect to
the Fourier basis of sines and cosines. From the formulas above, with τ = −1,
    b₀ = ½ ∫_{−1}^{0} 1 dt = ½.

For k ≥ 1,

    bₖ = (2/2) ∫_{−1}^{0} cos(2πkt/2) dt = ∫_{−1}^{0} cos(πkt) dt = [sin(πkt)/(πk)]_{t=−1}^{t=0} = 0,

    aₖ = (2/2) ∫_{−1}^{0} sin(2πkt/2) dt = ∫_{−1}^{0} sin(πkt) dt = [−cos(πkt)/(πk)]_{t=−1}^{t=0}
       = −2/(πk) if k is odd, and 0 if k is even.
(b) Complex exponential basis signals are related to the sine and cosine basis signals.
Using the Fourier series coefficients obtained in Example 1.22, we have the following
representation of x(t) in the complex exponential Fourier basis:
    x(t) = s(t + t₀/2)
         = Σ_{k=−∞}^{∞} (A/(πk)) sin(πkt₀/T₀) gₖ(t + t₀/2)
         = Σ_{k=−∞}^{∞} (A/(πk)) sin(πkt₀/T₀) exp(j2πk(t + t₀/2)/T₀)
         = Σ_{k=−∞}^{∞} (A/(πk)) sin(πkt₀/T₀) exp(j2πk(t₀/2)/T₀) exp(j2πkt/T₀)
         = Σ_{k=−∞}^{∞} (A/(πk)) sin(πkt₀/T₀) exp(jπkt₀/T₀) gₖ(t),
which means that the coefficients of x in the complex exponential Fourier basis are:
    αₖ = (A/(πk)) sin(πkt₀/T₀) exp(jπkt₀/T₀)
       = (1/(πk)) sin(πk/2) exp(jπk/2). (1.27)
But notice that the complex exponential basis functions are related to the sine and cosine
basis functions as follows:
    g₀(t) = 1 = c₀(t),
    For k ≥ 1, gₖ(t) = cos(2πkt/T₀) + j sin(2πkt/T₀) = cₖ(t) + jsₖ(t),
    g₋ₖ(t) = cos(−2πkt/T₀) + j sin(−2πkt/T₀) = cₖ(t) − jsₖ(t).
Therefore,
    x(t) = Σ_{k=−∞}^{∞} αₖgₖ(t)
         = α₀g₀(t) + Σ_{k=1}^{∞} αₖgₖ(t) + Σ_{k=−∞}^{−1} αₖgₖ(t)
         = α₀g₀(t) + Σ_{k=1}^{∞} [αₖgₖ(t) + α₋ₖg₋ₖ(t)]
         = α₀c₀(t) + Σ_{k=1}^{∞} [αₖ(cₖ(t) + jsₖ(t)) + α₋ₖ(cₖ(t) − jsₖ(t))]
         = α₀c₀(t) + Σ_{k=1}^{∞} (αₖ + α₋ₖ)cₖ(t) + Σ_{k=1}^{∞} j(αₖ − α₋ₖ)sₖ(t).
Matching these coefficients with the coefficients in Eq. (1.26), we get the following rela-
tionship between the exponential Fourier series coefficients and the sine-cosine Fourier
series coefficients of any T0 -periodic signal:
b0 = α0 ,
bk = αk + α−k for k = 1, 2, . . . ,
ak = j(αk − α−k ) for k = 1, 2, . . . .
Using expression (1.27) we found for the coefficients αₖ of the specific signal x we are
considering in this example, we get:

    b₀ = α₀ = [½ · sin(πk/2)/((πk)/2) · exp(jπk/2)] |_{k=0} = ½;

    bₖ = αₖ + α₋ₖ = (1/(πk)) sin(πk/2) exp(jπk/2) + (1/(π(−k))) sin(−πk/2) exp(−jπk/2)
       = (1/(πk)) sin(πk/2) [exp(jπk/2) + exp(−jπk/2)]
       = (2/(πk)) sin(πk/2) cos(πk/2)
       = (1/(πk)) (sin 0 + sin πk) = 0;   (by Eq. (1.24))

    aₖ = j(αₖ − α₋ₖ) = j [(1/(πk)) sin(πk/2) exp(jπk/2) − (1/(π(−k))) sin(−πk/2) exp(−jπk/2)]
       = (j/(πk)) sin(πk/2) [exp(jπk/2) − exp(−jπk/2)]
       = −(2/(πk)) sin²(πk/2)
       = −2/(πk) if k is odd, and 0 if k is even.
Notice that these are the same results we got before by evaluating the inner products
with sines and cosines directly.
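The conversion formulas can also be verified numerically from Eq. (1.27) (a small sketch):

```python
import cmath
import math

def alpha(k):
    # Eq. (1.27); the k = 0 value is 1/2 (using sin(theta)/theta -> 1)
    if k == 0:
        return 0.5
    return (1 / (math.pi * k)) * math.sin(math.pi * k / 2) * cmath.exp(1j * math.pi * k / 2)

for k in range(1, 8):
    b_k = alpha(k) + alpha(-k)
    a_k = 1j * (alpha(k) - alpha(-k))
    expected_a = -2 / (math.pi * k) if k % 2 == 1 else 0.0
    assert abs(b_k) < 1e-12            # b_k = 0 for all k >= 1
    assert abs(a_k - expected_a) < 1e-12
print("b_k and a_k agree with the direct sine-cosine computation")
```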
Notice that the x(n) are the continuous-time Fourier series coefficients of X(e^{jω}). To see
this better, we can relate the DTFT to the CTFS formulas above by making the following
identifications:

    T₀ = 2π,
    ω = 2πt/T₀ = t,
    n = −k,
    x(n) = a₋ₖ.
The DTFT formula then becomes the inverse CTFS formula (1.20), and therefore the
inverse DTFT formula is obtained by relabeling variables in the CTFS formula (1.22):
    x(n) = (1/2π) ∫_{ω₀}^{ω₀+2π} X(e^{jω}) e^{jωn} dω.
Thus, DTFS/DFT, CTFS, and DTFT are all particular cases of our general framework
of orthogonal representations.
Properties of the DTFT.
3. Convolution:
    Y(e^{jω}) = Σₙ y(n)e^{−jωn}
              = Σₙ [Σₖ h(n − k)x(k)] e^{−jωn}
              = Σₖ [Σₙ h(n − k)e^{−jωn}] x(k)
              = Σₖ [H(e^{jω}) e^{−jωk}] x(k)   (by property 2)
              = H(e^{jω}) Σₖ e^{−jωk} x(k)
              = H(e^{jω}) X(e^{jω}).

So, if y(n) = (h ∗ x)(n), then Y(e^{jω}) = H(e^{jω}) X(e^{jω}).
4. Parseval’s theorem:
    Σ_{n=−∞}^{∞} |x(n)|² = (1/2π) ∫_{−π}^{π} |X(e^{jω})|² dω,

    Σ_{n=−∞}^{∞} x(n)y*(n) = (1/2π) ∫_{−π}^{π} X(e^{jω}) Y*(e^{jω}) dω.
5.
    X(e^{j0}) = Σ_{n=−∞}^{∞} x(n)

6.
    x(0) = (1/2π) ∫_{−π}^{π} X(e^{jω}) dω
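Parseval's theorem (and property 5) can be spot-checked for a short signal, approximating the integral by a Riemann sum over one period (a sketch; the example signal is ours):

```python
import cmath
import math

x = [1, 2 + 1j, -1, 0.5j]               # finite-length signal; x(n) = 0 elsewhere

def X(w):
    # DTFT: X(e^{jw}) = sum_n x(n) e^{-jwn}
    return sum(x[n] * cmath.exp(-1j * w * n) for n in range(len(x)))

energy = sum(abs(v) ** 2 for v in x)    # sum |x(n)|^2 = 7.25

M = 4096                                 # Riemann sum over one period [-pi, pi)
dw = 2 * math.pi / M
integral = sum(abs(X(-math.pi + m * dw)) ** 2 for m in range(M)) * dw

assert abs(integral / (2 * math.pi) - energy) < 1e-9      # Parseval
assert abs(X(0) - sum(x)) < 1e-12                         # property 5
print(energy, integral / (2 * math.pi))
```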
Example 1.24.
    y(n) = ½ (x(n) + x(n − 1)).

Find the frequency response H(e^{jω}). Is this system a low-pass, band-pass, or high-pass
filter? Plot |H(e^{jω})| and ∠H(e^{jω}).
Solution.
Method 1. Use the eigenfunction property:

    x(n) = e^{jωn} ⇒ y(n) = H(e^{jω}) e^{jωn}

    y(n) = ½ (e^{jωn} + e^{jω(n−1)})
         = ½ (1 + e^{−jω}) e^{jωn},

so that H(e^{jω}) = ½ (1 + e^{−jω}). Factoring out e^{−jω/2},

    H(e^{jω}) = ½ e^{−jω/2} (e^{jω/2} + e^{−jω/2}) = e^{−jω/2} cos(ω/2).
Method 2.

    Y(e^{jω}) = DTFT{½ (x(n) + x(n − 1))}
              = ½ DTFT{x(n)} + ½ DTFT{x(n − 1)}
              = ½ X(e^{jω}) + ½ X(e^{jω}) e^{−jω}
              = ½ X(e^{jω}) (1 + e^{−jω}),

    H(e^{jω}) = Y(e^{jω}) / X(e^{jω}) = ½ (1 + e^{−jω}).
Method 3. Impulse response: h(n) = ½ (δ(n) + δ(n − 1)),

    H(e^{jω}) = DTFT{h(n)}.

    DTFT{δ(n)} = Σ_{n=−∞}^{∞} δ(n)e^{−jωn} = 1,
    DTFT{δ(n − 1)} = 1 · e^{−jω·1} = e^{−jω},

    H(e^{jω}) = ½ (1 + e^{−jω}).
To plot the magnitude response and the phase response, we note:

    |H(e^{jω})| = cos(ω/2),

    ∠H(e^{jω}) = ∠e^{−jω/2} + ∠cos(ω/2) = −ω/2 + 0 = −ω/2,

because

    cos(ω/2) ≥ 0 for −π/2 ≤ ω/2 ≤ π/2, i.e., −π ≤ ω ≤ π.
Note: H(e^{jω}) is periodic with period 2π, since it is a DTFT of a DT signal. Since the
values of |H(e^{jω})| at low frequencies (near ω = 0) are close to one, and its values at
high frequencies (near ω = ±π) are close to zero, this is a low-pass filter.
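Both conclusions, H(e^{jω}) = e^{−jω/2} cos(ω/2) and the low-pass behavior, can be verified by passing e^{jωn} through the difference equation (a sketch):

```python
import cmath
import math

def H(w):
    # H(e^{jw}) = e^{-jw/2} cos(w/2), as derived above
    return cmath.exp(-1j * w / 2) * math.cos(w / 2)

# Eigenfunction check: y(n) = (x(n) + x(n-1))/2 equals H(e^{jw}) x(n)
# when x(n) = e^{jwn}.
for w in [0.0, 0.5, math.pi / 2, 2.5]:
    for n in range(5):
        x = lambda m: cmath.exp(1j * w * m)
        y_n = (x(n) + x(n - 1)) / 2
        assert abs(y_n - H(w) * x(n)) < 1e-12

# Low-pass behavior: |H| is 1 at w = 0 and (numerically) 0 at w = pi.
assert abs(H(0.0)) == 1.0
assert abs(H(math.pi)) < 1e-12
print("frequency response and low-pass behavior confirmed")
```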
Figure 1.30. The plot of |H(e^{jω})|, for Ex. 1.24.
Figure 1.31. The plot of ∠H(e^{jω}), for Ex. 1.24.
because

    sin ω ≥ 0 for 0 ≤ ω ≤ π, and sin ω ≤ 0 for −π ≤ ω ≤ 0,

and therefore,

    ∠ sin ω = 0 for 0 ≤ ω ≤ π, and ∠ sin ω = −π for −π ≤ ω ≤ 0.

This is a band-pass filter. Note: it is a convention to keep angles in the range [−π, π].
Figure 1.32. The plot of |H(e^{jω})|, for Ex. 1.25.
Figure 1.33. The plot of ∠H(e^{jω}), for Ex. 1.25.
[Diagram: an LTI system with frequency response H(e^{jω}) maps the input x(n) = Σ_{k=0}^{N−1} X(k)gₖ(n) to the output y(n) = Σ_{k=0}^{N−1} X(k)H(e^{j2πk/N})gₖ(n).]
In this figure, the input x(n) is N -periodic and the basis function gk (n) is given by
    gₖ(n) = (1/N) e^{j2πkn/N} for k = 0, · · · , N − 1 and for all n. (1.28)
• Conceptual importance: LTI systems process each harmonic separately in a simple
way (i.e., multiply each harmonic by a frequency-dependent complex number
H(e^{j2πk/N})).
• Computational importance:
– to obtain X(k)H(e^{j2πk/N}) from X(k) for k = 0, 1, · · · , N − 1, we need only
N operations.
– to obtain X(k) from x(n), and y(n) from X(k)H(e^{j2πk/N}), we need only
O(N log N) operations. This can be done through FFT.
    x = (x(0), x(1), . . . , x(N − 1))ᵀ,
    gₖ = ((1/N)e^{j(2πk/N)·0}, (1/N)e^{j(2πk/N)·1}, . . . , (1/N)e^{j(2πk/N)·(N−1)})ᵀ, for k = 0, 1, · · · , N − 1. (1.29)
Since the gk ’s are pairwise orthogonal, we can use the projection formula to calculate
the coefficients and obtain the inversion formula (1.18).
If we let X = (X(0), . . . , X(N − 1))ᵀ,
then we can rewrite Eq. (1.30) as
    x = Σ_{k=0}^{N−1} X(k)gₖ
      = X(0)g₀ + X(1)g₁ + · · · + X(N − 1)g_{N−1}
      = (g₀ g₁ · · · g_{N−1}) (X(0), . . . , X(N − 1))ᵀ
      = (g₀ g₁ · · · g_{N−1}) X
      = BX,
where B is an N × N matrix whose columns are the gₖ's and whose entry B_{nk} in the n-th
row and k-th column is given by

    B_{nk} = (1/N) e^{j2π(k−1)(n−1)/N} for n = 1, 2, · · · , N; k = 1, 2, · · · , N.
To get the formula for the DFT, we premultiply x = BX by the matrix A = N·Bᴴ, where
yᴴ = (y*)ᵀ means the “conjugate transpose of y”. Since the rows of Bᴴ are conjugate
transposes of the gₖ's, we have

    A = N · (g₀ᴴ; g₁ᴴ; · · · ; g_{N−1}ᴴ)   (rows stacked top to bottom),

where

    gₖᴴ = ((1/N)e^{−j(2πk/N)·0}  (1/N)e^{−j(2πk/N)·1}  · · ·  (1/N)e^{−j(2πk/N)·(N−1)}).
The entry in the k-th row and n-th column of A is:
    A_{kn} = N·B*_{nk} = e^{−j2π(k−1)(n−1)/N} for k = 1, 2, · · · , N; n = 1, 2, · · · , N.
Premultiplying x = BX by A, we get
Ax = ABX.
[Fig. 1.35: the DFT as a matrix-vector product X = Ax, with X of size N × 1, A of size N × N, and x of size N × 1.]
Since the entries of the product AB involve the products

    gₖᴴgₚ = ⟨gₚ, gₖ⟩ = Σ_{n=0}^{N−1} gₚ(n)gₖ*(n),
AB simplifies to

    AB = N · diag(1/N, 1/N, . . . , 1/N) = diag(1, 1, . . . , 1) = I,

where I is the identity matrix.
Hence, A and B are inverses of each other. The DFT of x can then be calculated by
X = Ax.
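The matrices B and A and the identity AB = I can be checked numerically for a small N (a sketch using 0-based indices n, k = 0, . . . , N − 1, equivalent to the 1-based (n − 1)(k − 1) indexing above):

```python
import cmath

N = 4

# B[n][k] = (1/N) e^{j 2 pi k n / N}  (0-based n, k)
B = [[cmath.exp(2j * cmath.pi * k * n / N) / N for k in range(N)]
     for n in range(N)]

# A = N * B^H, i.e. A[k][n] = e^{-j 2 pi k n / N}
A = [[(N * B[n][k]).conjugate() for n in range(N)] for k in range(N)]

AB = [[sum(A[k][m] * B[m][p] for m in range(N)) for p in range(N)]
      for k in range(N)]

for k in range(N):
    for p in range(N):
        target = 1.0 if k == p else 0.0
        assert abs(AB[k][p] - target) < 1e-12
print("A B = I for N =", N)
```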
From Fig. 1.35, we see that we need N 2 complex multiplications for a brute force
implementation of DFT. However, the fact that the matrix A is highly structured can
be exploited to produce a much faster algorithm for multiplying a vector by this matrix
A.