
ESE500 F19: Homework 3

Due: In Class, Wednesday October 9

For a given time-varying system ẋ = A(t)x + B(t)u, we define φ_A(t, τ) to be the unique function from R × R to R^{n×n} such that, for any initial time t_0 ∈ R, the function t ↦ φ_A(t, t_0) is the solution of the following differential equation with initial condition.
    d/dt φ_A(t, t_0) = A(t) φ_A(t, t_0),        φ_A(t_0, t_0) = I
It can be shown that φ_A(t, τ) has the following properties.
i) ∀ t_0, t_1, t_2 ∈ R : φ_A(t_2, t_1) φ_A(t_1, t_0) = φ_A(t_2, t_0)
ii) ∀ t, τ ∈ R : φ_A^{-1}(t, τ) = φ_A(τ, t)

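As an illustration of this definition (not part of the assignment), the sketch below computes φ_A(t, t_0) by numerically integrating the matrix initial value problem above and checks properties i) and ii). The particular A(t), the time points, and the tolerances are arbitrary choices made for the example.

```python
# Sketch: compute phi_A(t, t0) by integrating d/dt Phi = A(t) Phi, Phi(t0, t0) = I,
# with the matrix flattened into a vector, then check properties i) and ii).
# The example A(t), the time points, and the tolerances are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

n = 2

def A(t):
    # an arbitrary time-varying example, not taken from the homework
    return np.array([[0.0, 1.0],
                     [-2.0 - np.sin(t), -1.0]])

def phi(t, t0):
    rhs = lambda s, y: (A(s) @ y.reshape(n, n)).ravel()
    sol = solve_ivp(rhs, (t0, t), np.eye(n).ravel(), rtol=1e-10, atol=1e-12)
    return sol.y[:, -1].reshape(n, n)

t0, t1, t2 = 0.0, 0.7, 1.5
print(np.allclose(phi(t2, t1) @ phi(t1, t0), phi(t2, t0), atol=1e-6))   # property i)
print(np.allclose(np.linalg.inv(phi(t1, t0)), phi(t0, t1), atol=1e-6))  # property ii)
```

Note that property ii) is checked by integrating the same ODE backward in time, which solve_ivp supports by passing a decreasing time span.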
Problem 1. Using the definition and properties of the transition matrix given above, prove that

    d/dτ φ_A(t, τ) = −φ_A(t, τ) A(τ)

Solution:
First note that for a given invertible square matrix C(t), we have

    d/dt C^{-1} = −C^{-1} (dC/dt) C^{-1}

because

    C C^{-1} = I
    ⇒ d/dt (C C^{-1}) = d/dt I = 0
    ⇒ (dC/dt) C^{-1} + C (dC^{-1}/dt) = 0
    ⇒ C (dC^{-1}/dt) = −(dC/dt) C^{-1}
    ⇒ dC^{-1}/dt = −C^{-1} (dC/dt) C^{-1}

Now, we have

    d/dτ φ_A(t, τ) = d/dτ φ_A^{-1}(τ, t)
                   = −φ_A^{-1}(τ, t) [ d/dτ φ_A(τ, t) ] φ_A^{-1}(τ, t)
                   = −φ_A^{-1}(τ, t) [ A(τ) φ_A(τ, t) ] φ_A^{-1}(τ, t)
                   = −φ_A^{-1}(τ, t) A(τ)
                   = −φ_A(t, τ) A(τ)

where the first equality uses property ii), the second uses the matrix-inverse derivative rule above, the third uses the defining differential equation, and the last uses property ii) again.

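A quick numerical sanity check of this identity (again only an illustration; the A(t), time points, and step size are arbitrary choices): approximate ∂φ_A(t, τ)/∂τ by a central difference and compare it with −φ_A(t, τ)A(τ).

```python
# Sketch: finite-difference check of d/dtau phi_A(t, tau) = -phi_A(t, tau) A(tau)
# for an illustrative A(t); all numerical choices here are assumptions.
import numpy as np
from scipy.integrate import solve_ivp

n = 2

def A(t):
    return np.array([[0.0, 1.0],
                     [-2.0 - np.sin(t), -1.0]])

def phi(t, t0):
    rhs = lambda s, y: (A(s) @ y.reshape(n, n)).ravel()
    sol = solve_ivp(rhs, (t0, t), np.eye(n).ravel(), rtol=1e-10, atol=1e-12)
    return sol.y[:, -1].reshape(n, n)

t, tau, h = 2.0, 0.5, 1e-3
dphi_dtau = (phi(t, tau + h) - phi(t, tau - h)) / (2 * h)    # central difference in tau
print(np.allclose(dphi_dtau, -phi(t, tau) @ A(tau), atol=1e-4))
```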
Problem 2. In this problem, using the definition and properties of the transition matrix given above, you will follow the steps to prove that

    det(φ_A(t, t_0)) = e^{∫_{t_0}^{t} tr(A(s)) ds}

a) For any square matrix H and α ∈ R, prove that

    det(I + αH) = 1 + α tr(H) + f(α)

where lim_{α→0} f(α)/α = 0. Note that here f(α) is a scalar.

b) Prove that

    φ_A(t + α, t_0) = φ_A(t, t_0) + α A(t) φ_A(t, t_0) + F(α) φ_A(t, t_0)

where lim_{α→0} F(α)/α = 0. Note that here F(α) is a matrix.

c) Prove that

    d/dt det(φ_A(t, t_0)) = tr(A(t)) det(φ_A(t, t_0))

d) Conclude that

    det(φ_A(t, t_0)) = e^{∫_{t_0}^{t} tr(A(s)) ds}

Solution:

a) It can easily be shown that the determinant of any matrix is the product of its eigenvalues and the trace of any matrix is the sum of its eigenvalues. Also, it is known that λ ∈ eig(A) if and only if f(λ) ∈ eig(f(A)). This is an application of the spectral mapping theorem and can be found in the linear algebra lemmas document under the linear algebra notes folder on Canvas. These facts are accepted and can be used in this problem without proof.
Let H be an n × n matrix and let λ_1, . . . , λ_n be the eigenvalues of H, counted with multiplicity. Then 1 + αλ_1, . . . , 1 + αλ_n are the eigenvalues of I + αH. Therefore,
    det(I + αH) = ∏_{i=1}^{n} (1 + αλ_i) = 1 + α ( Σ_{i=1}^{n} λ_i ) + f(α) = 1 + α tr(H) + f(α)

where f(α) contains the terms with higher powers of α. More precisely,

    f(α) = Σ_{k=2}^{n} α^k ( Σ_{1 ≤ i_1 < i_2 < ··· < i_k ≤ n} λ_{i_1} ··· λ_{i_k} )

Now, trivially, lim_{α→0} f(α)/α = 0.
b) Using Taylor expansion, we can write

    φ_A(t + α, t_0) = φ_A(t, t_0) + α d/dt φ_A(t, t_0) + Σ_{k=2}^{∞} (α^k / k!) d^k/dt^k φ_A(t, t_0)        (1)

Now, we claim that for every k ≥ 1 there exists a matrix F_k(t) such that

    d^k/dt^k φ_A(t, t_0) = F_k(t) φ_A(t, t_0)

We prove this claim using induction. For k = 1, we have

    d/dt φ_A(t, t_0) = A(t) φ_A(t, t_0)

so the claim holds with F_1(t) = A(t).

Assume the statement is true for k; we will show that it is true for k + 1:

    d^{k+1}/dt^{k+1} φ_A(t, t_0) = d/dt [ F_k(t) φ_A(t, t_0) ]
        = Ḟ_k(t) φ_A(t, t_0) + F_k(t) d/dt φ_A(t, t_0)
        = Ḟ_k(t) φ_A(t, t_0) + F_k(t) A(t) φ_A(t, t_0)
        = ( Ḟ_k(t) + F_k(t) A(t) ) φ_A(t, t_0)
        = F_{k+1}(t) φ_A(t, t_0)

where we set F_{k+1}(t) := Ḟ_k(t) + F_k(t) A(t).

Therefore, substituting into (1), we have

    φ_A(t + α, t_0) = φ_A(t, t_0) + α A(t) φ_A(t, t_0) + Σ_{k=2}^{∞} (α^k / k!) F_k(t) φ_A(t, t_0)
                    = φ_A(t, t_0) + α A(t) φ_A(t, t_0) + F(α) φ_A(t, t_0)

where

    F(α) = Σ_{k=2}^{∞} (α^k / k!) F_k(t)

Then, trivially, lim_{α→0} F(α)/α = 0.

c) Using the definition of the derivative, we have

    d/dt det(φ_A(t, t_0))
      = lim_{α→0} (1/α) [ det(φ_A(t + α, t_0)) − det(φ_A(t, t_0)) ]
      = lim_{α→0} (1/α) [ det( φ_A(t, t_0) + α A(t) φ_A(t, t_0) + F(α) φ_A(t, t_0) ) − det(φ_A(t, t_0)) ]
      = lim_{α→0} (1/α) [ det( ( I + α A(t) + F(α) ) φ_A(t, t_0) ) − det(φ_A(t, t_0)) ]
      = lim_{α→0} (1/α) [ det( I + α A(t) + F(α) ) det(φ_A(t, t_0)) − det(φ_A(t, t_0)) ]
      = det(φ_A(t, t_0)) lim_{α→0} (1/α) [ det( I + α ( A(t) + (1/α) F(α) ) ) − 1 ]
      = det(φ_A(t, t_0)) lim_{α→0} (1/α) [ 1 + α tr( A(t) + (1/α) F(α) ) + f(α) − 1 ]
      = det(φ_A(t, t_0)) lim_{α→0} (1/α) [ α tr(A(t)) + tr(F(α)) + f(α) ]
      = det(φ_A(t, t_0)) lim_{α→0} [ tr(A(t)) + (1/α) tr(F(α)) + (1/α) f(α) ]
      = det(φ_A(t, t_0)) tr(A(t))

where the second equality uses part b), the sixth uses part a), and the last limit is evaluated using lim_{α→0} F(α)/α = 0 and lim_{α→0} f(α)/α = 0.

d) Fix t_0 ∈ R and define y(t) = det(φ_A(t, t_0)). By part c), ẏ(t) = tr(A(t)) y(t). Note that y(t) ≠ 0 for all t because φ_A(t, t_0) is invertible (property ii); since y(t_0) = det(I) = 1 and y is continuous, in fact y(t) > 0, so we may divide by y(t) and take logarithms. Therefore,

    ẏ(t) = tr(A(t)) y(t)  ⇒  ẏ(t)/y(t) = tr(A(t))
      ⇒  ∫_{t_0}^{t} ẏ(s)/y(s) ds = ∫_{t_0}^{t} tr(A(s)) ds
      ⇒  ln y(t) − ln y(t_0) = ∫_{t_0}^{t} tr(A(s)) ds
      ⇒  ln det(φ_A(t, t_0)) − ln det(φ_A(t_0, t_0)) = ∫_{t_0}^{t} tr(A(s)) ds
      ⇒  ln det(φ_A(t, t_0)) = ∫_{t_0}^{t} tr(A(s)) ds        (note that φ_A(t_0, t_0) = I)
      ⇒  det(φ_A(t, t_0)) = e^{∫_{t_0}^{t} tr(A(s)) ds}

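As an independent numerical illustration of this formula (not part of the required proof; the A(t), the interval, and the tolerances are arbitrary choices), one can integrate the transition matrix for a sample A(t) and compare det(φ_A(t, t_0)) with exp(∫_{t_0}^{t} tr(A(s)) ds).

```python
# Sketch: check det(phi_A(t, t0)) = exp( integral_{t0}^{t} tr(A(s)) ds )
# for an illustrative time-varying A(t); all numerical choices are assumptions.
import numpy as np
from scipy.integrate import solve_ivp, quad

n = 2

def A(t):
    return np.array([[0.0, 1.0],
                     [-2.0 - np.sin(t), -1.0 + 0.5 * np.cos(t)]])

def phi(t, t0):
    rhs = lambda s, y: (A(s) @ y.reshape(n, n)).ravel()
    sol = solve_ivp(rhs, (t0, t), np.eye(n).ravel(), rtol=1e-10, atol=1e-12)
    return sol.y[:, -1].reshape(n, n)

t0, t = 0.0, 2.0
lhs = np.linalg.det(phi(t, t0))
rhs = np.exp(quad(lambda s: np.trace(A(s)), t0, t)[0])
print(lhs, rhs, np.isclose(lhs, rhs, rtol=1e-6))
```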
Problem 3. Consider a continuous LTI system ẋ = Ax + Bu and call it system (1). Assume that the states
of the discrete LTI system xk+1 = Ãxk + B̃uk (system (2)) are obtained by sampling (with zero-order hold)
from the states of system (1) with sampling time T , i.e., xk = x(kT ) for all k. Suppose T = 1. Prove that
system (1) is asymptotically stable if and only if system (2) is asymptotically stable.
Solution:
For any given T > 0 we can argue as follows. It is known that Ã = e^{AT}. By the spectral mapping theorem, λ ∈ eig(A) ⇔ e^{λT} ∈ eig(e^{AT}). Also note that

    |e^{λT}| = |e^{T Re(λ) + jT Im(λ)}| = e^{T Re(λ)} |e^{jT Im(λ)}| = e^{T Re(λ)}

Now, we can write

    System (1) is asymptotically stable ⇔ ∀λ ∈ eig(A) : Re(λ) < 0
        ⇔ ∀λ ∈ eig(A) : e^{T Re(λ)} < 1
        ⇔ ∀λ ∈ eig(A) : |e^{λT}| < 1
        ⇔ ∀λ' ∈ eig(e^{AT}) : |λ'| < 1
        ⇔ System (2) is asymptotically stable

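As a numerical illustration of this argument (the continuous-time matrix below is an arbitrary example; T = 1 as in the problem), the sketch checks that the eigenvalue moduli of Ã = e^{AT} equal e^{T Re(λ)} and that the continuous-time and discrete-time stability tests agree.

```python
# Sketch: for an example continuous-time A and sampling time T = 1, check that
# |eig(expm(A*T))| = exp(T * Re(eig(A))) and that the two stability tests agree.
# The matrix A is an illustrative assumption.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])
T = 1.0

lam = np.linalg.eigvals(A)            # eigenvalues of A
mu = np.linalg.eigvals(expm(A * T))   # eigenvalues of A_tilde = e^{AT}

print(np.allclose(np.sort(np.abs(mu)), np.sort(np.exp(T * lam.real))))
print("system (1) asymptotically stable:", bool(np.all(lam.real < 0)))
print("system (2) asymptotically stable:", bool(np.all(np.abs(mu) < 1)))
```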
Problem 4. Prove that all eigenvalues of A ∈ R^{n×n} have real parts less than −µ < 0 if and only if, for any given symmetric positive definite matrix Q, the equation

    A^T P + P A + 2µP = −Q

has a unique symmetric positive definite solution P .


Solution:

    ∀λ ∈ eig(A) : Re(λ) < −µ  ⇔  ∀λ ∈ eig(A) : Re(λ + µ) < 0
        ⇔  ∀λ' ∈ eig(A + µI) : Re(λ') < 0
        ⇔  ẋ = (A + µI)x is asymptotically stable
        ⇔  ∀Q > 0 ∃! P > 0 : (A + µI)^T P + P(A + µI) = −Q        (Lyapunov theorem for LTI systems)
        ⇔  ∀Q > 0 ∃! P > 0 : A^T P + P A + 2µP = −Q

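The equivalence can also be illustrated numerically with scipy's Lyapunov solver (a sketch only; the matrices A, Q and the margin µ below are arbitrary choices). Since solve_continuous_lyapunov(a, q) solves aX + Xa^H = q, passing a = (A + µI)^T and q = −Q yields exactly the shifted equation above.

```python
# Sketch: illustrate Problem 4 numerically. solve_continuous_lyapunov(a, q) solves
# a X + X a^H = q, so a = (A + mu*I)^T and q = -Q give
# (A + mu I)^T P + P (A + mu I) = -Q, i.e. A^T P + P A + 2*mu*P = -Q.
# The matrices A, Q and the margin mu are illustrative assumptions.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-3.0, 1.0],
              [0.0, -2.0]])            # eigenvalues -3 and -2, both less than -mu
mu = 0.5
Q = np.eye(2)                          # any symmetric positive definite Q

P = solve_continuous_lyapunov((A + mu * np.eye(2)).T, -Q)

print(np.allclose(A.T @ P + P @ A + 2 * mu * P, -Q))     # the equation is satisfied
print(bool(np.all(np.linalg.eigvalsh(P) > 0)))            # P is positive definite
print(bool(np.all(np.linalg.eigvals(A).real < -mu)))      # eigenvalue condition holds
```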
Problem 5. In each part, with the given choice of A, determine whether the continuous time system ẋ = Ax
and the discrete time system xk+1 = Axk are asymptotically stable, marginally stable or unstable. For each
part, justify your answer. Note that each part has two answers, one for the continuous time and one for the
discrete time system.
a) A = [ −1/4   0 ;  1   −2 ]

b) A = [ 0   −5 ;  1   2 ]

c) A = [ −1   −1/6 ;  3   1/2 ]

d) A = [ 0   0 ;  0   0 ]

e) A = [ 1   0 ;  0   1 ]
Solution:
In each case, we first compute the eigenvalues of A and then check stability of both the continuous-time and the discrete-time system. A numerical sketch of these eigenvalue tests follows part e).

a) λ1 = −2, λ2 = −1/4.
Re(λ1 ) < 0, Re(λ2 ) < 0 ⇒ ẋ = Ax is asymptotically stable.
|λ1 | > 1, |λ2 | < 1 ⇒ xk+1 = Axk is unstable.
b) λ1 = 1 + j2, λ2 = 1 − j2.
Re(λ1 ) > 0, Re(λ2 ) > 0 ⇒ ẋ = Ax is unstable.
|λ1 | > 1, |λ2 | > 1 ⇒ xk+1 = Axk is unstable.
c) λ1 = 0, λ2 = −1/2.
Re(λ1 ) = 0, Re(λ2 ) < 0. dim(N (A − λ1 I)) = dim(N (A)) = 1
⇒ algebraic multiplicity (λ1 ) = 1 = dim(N (A − λ1 I))
⇒ ẋ = Ax is marginally stable.
|λ1 | < 1, |λ2 | < 1 ⇒ xk+1 = Axk is asymptotically stable.

d) λ1 = λ2 = 0.
Re(λ1 ) = 0. dim(N (A − λ1 I)) = dim(N (A)) = 2
⇒ algebraic multiplicity (λ1 ) = 2 = dim(N (A − λ1 I))
⇒ ẋ = Ax is marginally stable.
|λ1 | < 1, |λ2 | < 1 ⇒ xk+1 = Axk is asymptotically stable.

e) λ1 = λ2 = 1.
Re(λ1 ) > 0 ⇒ ẋ = Ax is unstable.
|λ1 | = 1. dim(N (A − λ1 I)) = 2
⇒ algebraic multiplicity (λ1 ) = 2 = dim(N (A − λ1 I))
⇒ xk+1 = Axk is marginally stable.

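The eigenvalue tests used in parts a) through e) can be collected into a small helper, sketched below. This is our own illustrative code, not something supplied with the course; the function name, tolerances, and the numerical way multiplicities are compared are all assumptions.

```python
# Sketch: classify stability of x_dot = A x (continuous time) and x_{k+1} = A x_k
# (discrete time) from the eigenvalues of A, following the criteria used above.
# Tolerances and the multiplicity comparison are illustrative assumptions.
import numpy as np

def classify(A, tol=1e-9):
    lam = np.linalg.eigvals(A)
    n = A.shape[0]

    def geo_mult(l):
        # geometric multiplicity = dim N(A - l I)
        return n - np.linalg.matrix_rank(A - l * np.eye(n))

    def alg_mult(l):
        return int(np.sum(np.isclose(lam, l, atol=1e-6)))

    # continuous time: asymptotic if Re(lambda) < 0 for all eigenvalues; marginal if
    # critical eigenvalues (Re = 0) have equal algebraic and geometric multiplicity
    if np.all(lam.real < -tol):
        ct = "asymptotically stable"
    elif np.all(lam.real < tol) and all(alg_mult(l) == geo_mult(l)
                                        for l in lam if abs(l.real) <= tol):
        ct = "marginally stable"
    else:
        ct = "unstable"

    # discrete time: asymptotic if |lambda| < 1 for all eigenvalues; marginal if
    # critical eigenvalues (|lambda| = 1) have equal algebraic and geometric multiplicity
    if np.all(np.abs(lam) < 1 - tol):
        dt = "asymptotically stable"
    elif np.all(np.abs(lam) < 1 + tol) and all(alg_mult(l) == geo_mult(l)
                                               for l in lam if abs(abs(l) - 1) <= tol):
        dt = "marginally stable"
    else:
        dt = "unstable"
    return ct, dt

for A in [np.array([[-0.25, 0.0], [1.0, -2.0]]),     # part a)
          np.array([[0.0, -5.0], [1.0, 2.0]]),       # part b)
          np.array([[-1.0, -1/6], [3.0, 0.5]]),      # part c)
          np.zeros((2, 2)),                          # part d)
          np.eye(2)]:                                # part e)
    print(classify(A))
```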
Problem 6. Consider the system

ẋ(t) = A(t)x(t)

where A(t) is a periodic matrix with period T . That means A(t + T ) = A(t) for all t ∈ R.

a) First, consider the state transition matrix Φ(t1 , t0 ) for the system. Define the matrix Ψ(t, 0) = Φ(t+T, 0).
Show that Ψ satisfies:
Ψ̇(t, 0) = A(t)Ψ(t, 0)
Ψ(0, 0) = Φ(T, 0)

b) Show that Φ(t + T, 0) = Φ(t, 0)Φ(T, 0).


c) Since we know that Φ(T, 0) is invertible, there exists some complex n × n matrix R such that Φ(T, 0) = e^{TR}.

Define
    P(t)^{-1} = Φ(t, 0) e^{−tR}
and show that P(t)^{-1} is periodic with period T. This implies that P(t) is periodic with period T. Also show that P(T) = I.

d) Show that
    Φ(t, t_0) = P(t)^{-1} e^{(t−t_0)R} P(t_0)
Hint: Note that Φ(t, t_0) = Φ(t, 0)Φ(0, t_0).

e) Express the system using the coordinate frame z(t) = P (t)x(t). What do you notice about this new
system?
Solution:
a) Ψ̇(t, 0) = Φ̇(t + T, 0) = A(t + T )Φ(t + T, 0) = A(t)Ψ(t, 0)
Ψ(0, 0) = Φ(0 + T, 0) = Φ(T, 0)
b) Note that by definition Φ(t + T, 0) = Ψ(t, 0). From part a), Ψ(t, 0) satisfies the differential equation
    ḟ(t) = A(t) f(t),    f(0) = Φ(T, 0)
From the properties of the state transition matrix, we also know that:
    ∂/∂t [ Φ(t, 0)Φ(T, 0) ] = A(t) Φ(t, 0)Φ(T, 0)
Also, note that Φ(t, 0)Φ(T, 0)|_{t=0} = Φ(T, 0). So Φ(t, 0)Φ(T, 0) also satisfies the differential equation
    ḟ(t) = A(t) f(t),    f(0) = Φ(T, 0)
So by the existence and uniqueness of solutions of differential equations, Φ(t + T, 0) = Φ(t, 0)Φ(T, 0).
c) To show that P(t)^{-1} is periodic, one must show that P(t + T)^{-1} = P(t)^{-1} for all t. Using Φ(T, 0) = e^{TR},

    P(t + T)^{-1} = Φ(t + T, 0) e^{−(t+T)R}
                  = Φ(t, 0) Φ(T, 0) e^{−tR} e^{−TR}
                  = Φ(t, 0) e^{TR} e^{−tR} e^{−TR}

From Homework 2, Problem 1d, we know that e^{TR} e^{−tR} = e^{(T−t)R} = e^{−tR} e^{TR}, since TR and −tR commute. So,

                  = Φ(t, 0) e^{−tR} e^{TR} e^{−TR}
                  = Φ(t, 0) e^{−tR}
                  = P(t)^{-1}

Next, we show that P(T) = I. It suffices to show that P(T)^{-1} = I. Using the periodicity just established,

    P(T)^{-1} = P(0)^{-1}
              = Φ(0, 0) e^{0}
              = I

d) From the definition of P(t)^{-1}, note that Φ(t, 0) = P(t)^{-1} e^{tR}. Then

    Φ(t, t_0) = Φ(t, 0) Φ(0, t_0)
              = Φ(t, 0) ( Φ(t_0, 0) )^{-1}
              = P(t)^{-1} e^{tR} ( P(t_0)^{-1} e^{t_0 R} )^{-1}
              = P(t)^{-1} e^{tR} e^{−t_0 R} P(t_0)
              = P(t)^{-1} e^{(t−t_0)R} P(t_0)

e) By the properties of the transition matrix, we know that x(t) = Φ(t, t_0) x(t_0). Now plug in x(t) = P(t)^{-1} z(t). So

    P(t)^{-1} z(t) = Φ(t, t_0) P(t_0)^{-1} z(t_0)
      ⇒ z(t) = P(t) Φ(t, t_0) P(t_0)^{-1} z(t_0)
      ⇒ z(t) = P(t) P(t)^{-1} e^{(t−t_0)R} P(t_0) P(t_0)^{-1} z(t_0)
      ⇒ z(t) = e^{(t−t_0)R} z(t_0)

But this is exactly the solution of the differential equation

    ż(t) = R z(t)

with initial condition z(t_0) at time t_0.

So z(t) = P(t)x(t) is the solution of an LTI system. We have therefore shown that a continuous-time periodic LTV system can be converted into a continuous-time LTI system by a time-varying change of basis. This result is known as Floquet theory.
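To make the construction concrete, the sketch below computes the monodromy matrix Φ(T, 0) for an example T-periodic A(t), takes R = logm(Φ(T, 0))/T as one valid choice of matrix logarithm, and checks numerically that P(t)^{-1} is T-periodic, that P(T) = I, and that Φ(t, 0) = P(t)^{-1} e^{tR}. This is only an illustration; the specific A(t), period, test time, and tolerances are assumptions.

```python
# Sketch: numerical Floquet factorisation Phi(t, 0) = P(t)^{-1} e^{tR} for an
# illustrative T-periodic A(t). scipy.linalg.logm gives one valid choice of R;
# all numerical values below are assumptions.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm, logm

n, T = 2, 2 * np.pi

def A(t):
    # example T-periodic matrix (a damped Hill-type equation)
    return np.array([[0.0, 1.0],
                     [-1.0 - 0.3 * np.cos(t), -0.2]])

def phi(t, t0=0.0):
    rhs = lambda s, y: (A(s) @ y.reshape(n, n)).ravel()
    sol = solve_ivp(rhs, (t0, t), np.eye(n).ravel(), rtol=1e-10, atol=1e-12)
    return sol.y[:, -1].reshape(n, n)

M = phi(T)                   # monodromy matrix Phi(T, 0)
R = logm(M) / T              # then expm(T * R) = Phi(T, 0)

def P_inv(t):
    return phi(t) @ expm(-t * R)     # P(t)^{-1} = Phi(t, 0) e^{-tR}

t = 1.3                              # arbitrary test time
print(np.allclose(P_inv(t + T), P_inv(t), atol=1e-6))          # P(t)^{-1} is T-periodic
print(np.allclose(P_inv(T), np.eye(n), atol=1e-6))             # P(T) = I
print(np.allclose(phi(t), P_inv(t) @ expm(t * R), atol=1e-6))  # Phi(t,0) = P(t)^{-1} e^{tR}
```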