
Linear System Theory and Design

SA01010048

LING QING

2.1 Consider the memoryless system with characteristics shown in Fig 2.19, in which u denotes
the input and y the output. Which of them is a linear system? Is it possible to introduce a new
output so that the system in Fig 2.19(b) is linear?

Figure 2.19

Answer: The input-output relation in Fig 2.19(a) can be described as:

y = a *u
Here a is a constant. It is a memoryless system, and it is easy to verify that it is linear.
The input-output relation in Fig 2.1(b) can be described as:

y = a *u + b
Here a and b are both constants. Test whether it has the additivity property. Let:

y1 = a * u1 + b
y2 = a * u2 + b
then:

y1 + y2 = a(u1 + u2) + 2b ≠ a(u1 + u2) + b
So it does not have the additivity property and therefore is not a linear system.
But we can introduce a new output so that it is linear. Let:

z = y - b = a·u
where z is the newly introduced output. It is easy to verify that the system from u to z is linear.
The input-output relation in Fig 2.19(c) can be described as:

y = a (u ) * u
Here a(u) is a function of the input u. Choose two different inputs and obtain the outputs:

y1 = a1 * u1
y2 = a2 * u2
Assume a1 ≠ a2. Then:
y1 + y2 = a1·u1 + a2·u2
which in general differs from the output a(u1+u2)·(u1+u2) excited by u1 + u2. So it does not have the additivity property and therefore is not a linear system.
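The additivity tests above can be checked numerically. The sketch below (with illustrative constants a = 2, b = 1, which are not taken from Fig 2.19) confirms that y = a·u + b fails additivity while the shifted output z = y - b is both additive and homogeneous:

```python
import numpy as np

a, b = 2.0, 1.0           # illustrative constants, not from Fig 2.19

def f(u):
    return a * u + b      # characteristic of Fig 2.19(b)

def g(u):
    return f(u) - b       # new output z = y - b

u1, u2 = 1.5, -0.7
# f violates additivity: f(u1 + u2) differs from f(u1) + f(u2) by b
assert not np.isclose(f(u1 + u2), f(u1) + f(u2))
# the shifted output z = a*u is additive and homogeneous
assert np.isclose(g(u1 + u2), g(u1) + g(u2))
assert np.isclose(g(3.0 * u1), 3.0 * g(u1))
```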
2.2 The impulse response of an ideal lowpass filter is given by

g(t) = sin ω(t - t0) / [π(t - t0)]

for all t, where ω and t0 are constants. Is the ideal lowpass filter causal? Is it possible to build the filter in the real world?

Answer: Consider two different times ts and tr with ts < tr. The value g(ts - tr) is the output at time ts excited by an impulse applied at the later time tr, and it is not zero. That is, the system output at time ts depends on a future input at time tr; in other words, the system is not causal. Every physical system must be causal, so it is impossible to build the filter in the real world.
2.3 Consider a system whose input u and output y are related by

y(t) = (Pa u)(t) := u(t) for t ≤ a, and 0 for t > a
where a is a fixed constant. The system is called a truncation operator, which chops off the
input after time a. Is the system linear? Is it time-invariant? Is it causal?

Answer: Consider the input-output relation at any time t ≤ a:

y = u

It is easy to verify that this relation is linear.

Consider the input-output relation at any time t > a:

y = 0

This relation is also linear, so the system is linear at every time.
Now consider whether it is time-invariant. Let the input start at time t0, so the input is u(t), t ≥ t0, with t0 < a. The output y(t), t ≥ t0, is:
y(t) = u(t) for t0 ≤ t ≤ a, and 0 for other t
Shift the initial time to t0 + T with t0 + T > a. The input becomes u(t - T), t ≥ t0 + T, and the system output is:

y'(t) = 0

If u(t) is not identically zero, then y'(t + T) is not equal to y(t). According to the definition, this system is not time-invariant.
For any time t, system output y(t) is decided by current input u(t) exclusively. So it is a
causal system.
2.4 The input and output of an initially relaxed system can be denoted by y=Hu, where H is some
mathematical operator. Show that if the system is causal, then

Pa y = Pa Hu = Pa HPa u
where Pa is the truncation operator defined in Problem 2.3. Is it true PaHu=HPau?
Answer: Since y = Hu:

Pa y = Pa Hu

Take the initial time to be 0. Since the input begins at time 0 and the system is causal, the output y also begins at time 0.
If a ≤ 0: both u and y = Hu are zero for t ≤ a, so Pa u = 0 and Pa Hu = 0. Because the system is initially relaxed, H(Pa u) = H0 = 0, and therefore:

Pa Hu = Pa H Pa u
If a > 0, we can divide u into two parts:

p(t) = u(t) for 0 ≤ t ≤ a, and 0 for other t
q(t) = u(t) for t > a, and 0 for other t
so that u(t) = p(t) + q(t). Because the system is causal, the output excited by q(t) cannot affect the output before time a; that is to say, the system output from 0 to a is decided only by p(t). Since Pa Hu chops off Hu after time a, it is easy to conclude that Pa Hu = Pa Hp(t). Noticing that p(t) = Pa u, we again have:

Pa Hu = Pa H Pa u
It means under any condition, the following equation is correct:

Pa y = Pa Hu = Pa HPa u
PaHu = HPau is false in general. Consider a delay operator H with Hu(t) = u(t - 2), let a = 1, and let u(t) be a unit step beginning at time 0. Then Hu is a step beginning at time 2, so PaHu is identically zero, while Pa u is a pulse on [0, 1] and HPa u is a pulse on [2, 3].
2.5 Consider a system with input u and output y. Three experiments are performed on the system
using the inputs u1(t), u2(t) and u3(t) for t>=0. In each case, the initial state x(0) at time t=0 is
the same. The corresponding outputs are denoted by y1,y2 and y3. Which of the following
statements are correct if x(0) ≠ 0?
1. If u3=u1+u2, then y3=y1+y2.
2. If u3=0.5(u1+u2), then y3=0.5(y1+y2).
3. If u3=u1-u2, then y3=y1-y2.
Answer: A linear system has the superposition property: the initial state and input

α1 x1(t0) + α2 x2(t0),   α1 u1(t) + α2 u2(t), t ≥ t0

excite the output

α1 y1(t) + α2 y2(t), t ≥ t0

In case 1: α1 = 1, α2 = 1, so

α1 x1(t0) + α2 x2(t0) = 2x(0) ≠ x(0)

So y3 ≠ y1 + y2.
In case 2: α1 = 0.5, α2 = 0.5, so

α1 x1(t0) + α2 x2(t0) = x(0)

So y3 = 0.5(y1 + y2).
In case 3: α1 = 1, α2 = -1, so

α1 x1(t0) + α2 x2(t0) = 0 ≠ x(0)

So y3 ≠ y1 - y2.
2.6 Consider a system whose input and output are related by

y(t) = u²(t)/u(t - 1) if u(t - 1) ≠ 0, and y(t) = 0 if u(t - 1) = 0

for all t.
Show that the system satisfies the homogeneity property but not the additivity property.
Answer: Suppose the system is initially relaxed and let the input be

p(t) = a·u(t)

where a is any nonzero real constant. The system output q(t) is then:

q(t) = p²(t)/p(t - 1) = a²u²(t)/[a·u(t - 1)] = a·u²(t)/u(t - 1) if p(t - 1) ≠ 0, and q(t) = 0 if p(t - 1) = 0

that is, q(t) = a·y(t) (for a = 0 the output is identically zero, so the scaling also holds). So the system satisfies the homogeneity property.


If the system satisfied the additivity property, consider inputs m(t) and n(t) with m(0) = 1, m(1) = 2, n(0) = -1, n(1) = 3. The outputs at time 1 are:

r(1) = m²(1)/m(0) = 4
s(1) = n²(1)/n(0) = -9

but since m(0) + n(0) = 0, the output excited by m + n is

y(1) = 0 ≠ r(1) + s(1) = -5

So the system does not satisfy the additivity property.
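A quick numerical check of the two properties, using the sequences m and n from the argument above:

```python
# y(t) = u(t)^2 / u(t-1) is homogeneous but not additive.
def out(u, t):
    return u(t) ** 2 / u(t - 1) if u(t - 1) != 0 else 0.0

m = {0: 1.0, 1: 2.0}.get
n = {0: -1.0, 1: 3.0}.get

# homogeneity: scaling the input m by any a scales the output by a
a = 5.0
am = lambda t: a * m(t)
assert out(am, 1) == a * out(m, 1)

# additivity fails: m(0) + n(0) = 0 forces y(1) = 0, but r(1) + s(1) = -5
mn = lambda t: m(t) + n(t)
assert out(m, 1) + out(n, 1) == -5.0
assert out(mn, 1) == 0.0
```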
2.7 Show that if the additivity property holds, then the homogeneity property holds for all rational
numbers a. Thus if a system has the continuity property, then additivity implies homogeneity.

Answer: Any rational number a can be written as

a = m/n

where m and n are integers, n ≠ 0. First, if the system maps x → y, then by applying additivity m - 1 times it maps mx → my.
Second, we prove that if the system maps x → y, then it maps x/n → y/n. Suppose

x/n → u

Applying additivity n - 1 times to x/n:

n·(x/n) = x → nu

So y = nu, that is, u = y/n. It is to say that:

x/n → y/n

Combining the two steps:

x·m/n → y·m/n, i.e. ax → ay

which is the property of homogeneity.
2.8 Let g(t,T)=g(t+a,T+a) for all t,T and a. Show that g(t,T) depends only on t-T.
Answer: Define:

y = t - T,   x = t + T

so that:

t = (x + y)/2,   T = (x - y)/2

Then, using g(t, T) = g(t + a, T + a) with the particular choice a = -(x - y)/2:

g(t, T) = g((x + y)/2, (x - y)/2) = g((x + y)/2 + a, (x - y)/2 + a) = g(y, 0)

So:

∂g(t, T)/∂x = ∂g(y, 0)/∂x = 0

which proves that g(t, T) depends only on y = t - T.

2.9 Consider a system with impulse response as shown in Fig2.20(a). What is the zero-state
response excited by the input u(t) shown in Fig2.20(b)?

Fig2.20

Answer: Write out the functions g(t) and u(t):

g(t) = t for 0 ≤ t ≤ 1,  2 - t for 1 ≤ t ≤ 2,  and 0 otherwise
u(t) = 1 for 0 ≤ t ≤ 1,  -1 for 1 < t ≤ 2,  and 0 otherwise
Then y(t) equals the convolution integral:

y(t) = ∫[0,t] g(r)u(t - r)dr

If 0 ≤ t ≤ 1, then 0 ≤ r ≤ t and 0 ≤ t - r ≤ 1, so u(t - r) = 1 and:

y(t) = ∫[0,t] r dr = t²/2

If 1 ≤ t ≤ 2, split the integral into three parts:

y(t) = ∫[0,t-1] g(r)u(t - r)dr + ∫[t-1,1] g(r)u(t - r)dr + ∫[1,t] g(r)u(t - r)dr = y1(t) + y2(t) + y3(t)

Calculate each integral separately. For y1: 0 ≤ r ≤ t - 1 implies 1 ≤ t - r ≤ 2, so g(r) = r and u(t - r) = -1:

y1(t) = -∫[0,t-1] r dr = -(t - 1)²/2

For y2: t - 1 ≤ r ≤ 1 implies 0 ≤ t - r ≤ 1, so g(r) = r and u(t - r) = 1:

y2(t) = ∫[t-1,1] r dr = 1/2 - (t - 1)²/2

For y3: 1 ≤ r ≤ t implies 0 ≤ t - r ≤ 1, so g(r) = 2 - r and u(t - r) = 1:

y3(t) = ∫[1,t] (2 - r)dr = 2(t - 1) - (t² - 1)/2

Adding the three parts:

y(t) = y1(t) + y2(t) + y3(t) = -(3/2)t² + 4t - 2
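The piecewise result can be cross-checked by discretizing the convolution integral (a numerical sketch; the grid step 1e-3 is arbitrary):

```python
import numpy as np

dt = 1e-3
t = np.arange(0.0, 2.0, dt)
g = np.where(t <= 1.0, t, 2.0 - t)        # triangular impulse response
u = np.where(t <= 1.0, 1.0, -1.0)         # step up, then step down

y = np.convolve(g, u)[: t.size] * dt      # y(t) = integral of g(r) u(t-r) dr

y_exact = np.where(t <= 1.0, t**2 / 2.0, -1.5 * t**2 + 4.0 * t - 2.0)
assert np.max(np.abs(y - y_exact)) < 1e-2
```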
2.10 Consider a system described by

y'' + 2y' - 3y = u' - u

What are the transfer function and the impulse response of the system?
Answer: Apply the Laplace transform to the input-output equation, supposing that the system is initially relaxed:

s²Y(s) + 2sY(s) - 3Y(s) = sU(s) - U(s)

The system transfer function:

G(s) = Y(s)/U(s) = (s - 1)/(s² + 2s - 3) = 1/(s + 3)

Impulse response:

g(t) = L⁻¹[G(s)] = L⁻¹[1/(s + 3)] = e^(-3t)
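The pole-zero cancellation used above can be confirmed numerically:

```python
import numpy as np

# s^2 + 2s - 3 has roots at -3 and 1, so the zero at s = 1 cancels
# and (s - 1)/(s^2 + 2s - 3) reduces to 1/(s + 3).
poles = np.sort(np.roots([1.0, 2.0, -3.0]))
assert np.allclose(poles, [-3.0, 1.0])
# equivalently, (s + 3)(s - 1) expands back to s^2 + 2s - 3
assert np.allclose(np.polymul([1.0, 3.0], [1.0, -1.0]), [1.0, 2.0, -3.0])
```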

2.11 Let y(t) be the unit-step response of a linear time-invariant system. Show that the impulse
response of the system equals dy(t)/dt.
Answer: Let m(t) be the impulse response and G(s) the system transfer function. Since the unit-step input has Laplace transform 1/s:

Y(s) = G(s)·(1/s)

The impulse response satisfies M(s) = G(s), so:

M(s) = sY(s)

and since y(0) = 0:

m(t) = dy(t)/dt
2.12 Consider a two-input and two-output system described by

D11 ( p) y1 (t ) + D12 ( p) y 2 (t ) = N 11 ( p)u1 (t ) + N 12 ( p )u 2 (t )


D21 ( p) y1 (t ) + D22 ( p) y 2 (t ) = N 21 ( p )u1 (t ) + N 22 ( p )u 2 (t )
where Nij and Dij are polynomials of p:=d/dt. What is the transfer matrix of the system?

Answer: For any polynomial N(p) of p := d/dt, the Laplace transform of N(p)w(t) under zero initial conditions is N(s)W(s). Applying the Laplace transform to the input-output equations:

D11 ( s )Y1 ( s ) + D12 ( s )Y2 ( s ) = N 11 ( s )U 1 ( s ) + N 12 ( s )U 2 ( s )


D21 ( s )Y1 ( s ) + D22 ( s )Y2 ( s ) = N 21 ( s )U 1 ( s ) + N 22 ( s )U 2 ( s )
Written in matrix form:

[ D11(s) D12(s) ] [ Y1(s) ]   [ N11(s) N12(s) ] [ U1(s) ]
[ D21(s) D22(s) ] [ Y2(s) ] = [ N21(s) N22(s) ] [ U2(s) ]

so

[ Y1(s) ]   [ D11(s) D12(s) ]⁻¹ [ N11(s) N12(s) ] [ U1(s) ]
[ Y2(s) ] = [ D21(s) D22(s) ]   [ N21(s) N22(s) ] [ U2(s) ]

Therefore the transfer matrix is:

G(s) = [ D11(s) D12(s) ]⁻¹ [ N11(s) N12(s) ]
       [ D21(s) D22(s) ]   [ N21(s) N22(s) ]

under the premise that the inverse of

[ D11(s) D12(s) ]
[ D21(s) D22(s) ]

exists.
2.13 Consider the feedback systems shown in Fig 2.5. Show that the unit-step responses of the positive-feedback system are as shown in Fig 2.21(a) for a = 1 and in Fig 2.21(b) for a = 0.5. Show also that the unit-step responses of the negative-feedback system are as shown in Figs 2.21(c) and 2.21(d), respectively, for a = 1 and a = 0.5.

Fig 2.21
Answer: First consider the positive-feedback system. Its impulse response is:

g(t) = Σ[i=1,∞] aⁱ δ(t - i)

Using the convolution integral, the response to an input r(t) is:

y(t) = Σ[i=1,∞] aⁱ r(t - i)

When the input is the unit step:

y(n) = Σ[i=1,n] aⁱ,   y(t) = y(n) for n ≤ t < n + 1

It is easy to draw the response curves for a = 1 and a = 0.5, respectively, as shown in Fig 2.21(a) and Fig 2.21(b).
Second, consider the negative-feedback system. Its impulse response is:

g(t) = Σ[i=1,∞] (-a)ⁱ δ(t - i)

Using the convolution integral:

y(t) = Σ[i=1,∞] (-a)ⁱ r(t - i)

When the input is the unit step:

y(n) = Σ[i=1,n] (-a)ⁱ,   y(t) = y(n) for n ≤ t < n + 1

It is easy to draw the response curves for a = 1 and a = 0.5, respectively, as shown in Fig 2.21(c) and Fig 2.21(d).
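The staircase samples y(n) derived above are easy to tabulate; this short sketch evaluates them for both feedback signs:

```python
# y(n) = sum_{i=1}^{n} (sign*a)^i, the unit-step response samples above.
def step_response(a, n_max, sign=+1):
    return [sum((sign * a) ** i for i in range(1, n + 1)) for n in range(n_max + 1)]

# positive feedback, a = 1: the staircase grows without bound, y(n) = n
assert step_response(1.0, 5) == [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
# positive feedback, a = 0.5: y(n) = 1 - 0.5^n, approaching 1
y = step_response(0.5, 5)
assert abs(y[5] - (1 - 0.5 ** 5)) < 1e-12
# negative feedback, a = 1: the response oscillates between -1 and 0
assert step_response(1.0, 4, sign=-1) == [0.0, -1.0, 0.0, -1.0, 0.0]
```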
2.14 Draw an op-amp circuit diagram for

x' = [ 2 4 ] x + [ 2 ] u
     [ 0 5 ]     [ 4 ]

y = [3 10] x - 2u
2.15 Find state equations to describe the pendulum systems in Fig 2.22. The systems are useful to model one- or two-link robotic manipulators. If θ, θ1 and θ2 are very small, can you consider the two systems as linear?

Answer: For Fig 2.22(a), the application of Newton's law to the linear movements yields:

f cos θ - mg = m·d²(l cos θ)/dt² = ml(-θ'' sin θ - (θ')² cos θ)
u - f sin θ = m·d²(l sin θ)/dt² = ml(θ'' cos θ - (θ')² sin θ)

Assuming θ and θ' to be small, we can use the approximations sin θ = θ, cos θ = 1. By retaining only the linear terms in θ and θ', we obtain f = mg and:

θ'' = -(g/l)θ + (1/(ml))u

Select the state variables as x1 = θ, x2 = θ' and the output y = θ:

x' = [ 0    1 ] x + [ 0      ] u
     [ -g/l 0 ]     [ 1/(ml) ]

y = [1 0] x
For Fig 2.22(b), the application of Newton's law to the linear movements yields:

f1 cos θ1 - f2 cos θ2 - m1 g = m1·d²(l1 cos θ1)/dt² = m1 l1(-θ1'' sin θ1 - (θ1')² cos θ1)
f2 sin θ2 - f1 sin θ1 = m1·d²(l1 sin θ1)/dt² = m1 l1(θ1'' cos θ1 - (θ1')² sin θ1)
f2 cos θ2 - m2 g = m2·d²(l1 cos θ1 + l2 cos θ2)/dt²
                 = m2 l1(-θ1'' sin θ1 - (θ1')² cos θ1) + m2 l2(-θ2'' sin θ2 - (θ2')² cos θ2)
u - f2 sin θ2 = m2·d²(l1 sin θ1 + l2 sin θ2)/dt²
              = m2 l1(θ1'' cos θ1 - (θ1')² sin θ1) + m2 l2(θ2'' cos θ2 - (θ2')² sin θ2)

Assuming θ1, θ2 and θ1', θ2' to be small, we can use the approximations sin θ1 = θ1, sin θ2 = θ2, cos θ1 = 1, cos θ2 = 1. By retaining only the linear terms in θ1, θ2 and θ1', θ2', we obtain f2 = m2 g, f1 = (m1 + m2)g and:

θ1'' = -[(m1 + m2)g/(m1 l1)]θ1 + [m2 g/(m1 l1)]θ2
θ2'' = [(m1 + m2)g/(m1 l2)]θ1 - [(m1 + m2)g/(m1 l2)]θ2 + [1/(m2 l2)]u

Select the state variables as x1 = θ1, x2 = θ1', x3 = θ2, x4 = θ2' and the outputs y1 = θ1, y2 = θ2:

x' = [ 0                  1  0                   0 ]     [ 0         ]
     [ -(m1+m2)g/(m1 l1)  0  m2 g/(m1 l1)        0 ] x + [ 0         ] u
     [ 0                  0  0                   1 ]     [ 0         ]
     [ (m1+m2)g/(m1 l2)   0  -(m1+m2)g/(m1 l2)   0 ]     [ 1/(m2 l2) ]

y = [ 1 0 0 0 ] x
    [ 0 0 1 0 ]

Both linearized models are linear time-invariant, so when θ, θ1 and θ2 are very small the two systems can be considered linear.

2.17 The soft landing phase of a lunar module descending on the moon can be modeled as shown in Fig 2.24. The thrust generated is assumed to be proportional to m', where m is the mass of the module. Then the system can be described by

m y'' = -k m' - mg

where g is the gravity constant on the lunar surface. Define the state variables of the system as:

x1 = y, x2 = y', x3 = m, u = m'

Find a state-space equation to describe the system.

Answer: The system is not linear, so we linearize it. Take the free-fall trajectory as the nominal solution and suppose:

y = -gt²/2 + δy,  y' = -gt + δy',  y'' = -g + δy'',  m = m0 + δm,  m' = δm'

Substituting into the equation:

(m0 + δm)(-g + δy'') = -k δm' - (m0 + δm)g
-m0 g - g δm + m0 δy'' = -k δm' - m0 g - g δm   (retaining only linear terms)
m0 δy'' = -k δm'

Define the state variables as x1 = δy, x2 = δy', x3 = δm and the input u = δm'. Then:

x' = [ 0 1 0 ] x + [ 0     ] u
     [ 0 0 0 ]     [ -k/m0 ]
     [ 0 0 0 ]     [ 1     ]

y = [1 0 0] x

2.19 Find a state equation to describe the network shown in Fig 2.26. Find also its transfer function.
Answer: Select the state variables as:

x1: voltage across the left capacitor
x2: voltage across the right capacitor
x3: current through the inductor

Applying Kirchhoff's laws:

R x3 = x2 - L x3'
u = C x1' + x1/R
u = C x2' + x3
y = x2

From these equations, using the unit element values of Fig 2.26 (R = L = C = 1), we get:

x1' = -x1/(RC) + u/C = -x1 + u
x2' = -x3/C + u/C = -x3 + u
x3' = x2/L - R x3/L = x2 - x3
y = x2

They can be combined in matrix form as:

x' = [ -1  0  0 ] x + [ 1 ] u
     [  0  0 -1 ]     [ 1 ]
     [  0  1 -1 ]     [ 0 ]

y = [0 1 0] x
Use MATLAB to compute the transfer function. We type:

A=[-1,0,0;0,0,-1;0,1,-1];
B=[1;1;0];
C=[0,1,0];
D=[0];
[N1,D1]=ss2tf(A,B,C,D,1)

which yields:

N1 = [0  1.0000  2.0000  1.0000]
D1 = [1.0000  2.0000  2.0000  1.0000]
So the transfer function is:

G(s) = (s² + 2s + 1)/(s³ + 2s² + 2s + 1) = (s + 1)/(s² + s + 1)
2.20 Find a state equation to describe the network shown in Fig2.2. Compute also its transfer
matrix.
Answer: Select the state variables as shown in Fig 2.2. Applying Kirchhoff's laws:

u1 = R1 C1 x1' + x1 + x2 + L1 x3' + R2 x3
C1 x1' + u2 = C2 x2'
x3 = C2 x2'
⇒ u1 = R1(x3 - u2) + x1 + x2 + L1 x3' + R2 x3
y = L1 x3' + R2 x3

From these equations we get:

x1' = x3/C1 - u2/C1
x2' = x3/C2
x3' = -x1/L1 - x2/L1 - [(R1 + R2)/L1] x3 + u1/L1 + (R1/L1) u2
y = -x1 - x2 - R1 x3 + u1 + R1 u2

They can be combined in matrix form as:

x' = [ 0      0      1/C1         ]     [ 0      -1/C1 ] [ u1 ]
     [ 0      0      1/C2         ] x + [ 0       0    ] [ u2 ]
     [ -1/L1  -1/L1  -(R1+R2)/L1  ]     [ 1/L1  R1/L1  ]

y = [-1 -1 -R1] x + [1 R1] [ u1 ]
                           [ u2 ]

Applying the Laplace transform (zero initial state), and writing Δ(s) = L1 s + R1 + R2 + 1/(C1 s) + 1/(C2 s):

ŷ(s) = [(L1 s + R2)/Δ(s)] û1(s) + [(L1 s + R2)(R1 + 1/(C1 s))/Δ(s)] û2(s)

So the transfer matrix is:

G(s) = [ (L1 s + R2)/Δ(s)   (L1 s + R2)(R1 + 1/(C1 s))/Δ(s) ]


2.18 Find the transfer functions from u to y1 and from y1 to y of the hydraulic tank system
shown in Fig2.25. Does the transfer function from u to y equal the product of the two
transfer functions? Is this also true for the system shown in Fig2.14?
Answer: Write out the equations relating u, y1 and y (x1 and x2 are the liquid levels):

y1 = x1/R1
y = x2/R2
A1 x1' = u - y1
A2 x2' = y1 - y

Applying the Laplace transform:

ŷ1/û = 1/(1 + A1 R1 s)
ŷ/ŷ1 = 1/(1 + A2 R2 s)
ŷ/û = 1/[(1 + A1 R1 s)(1 + A2 R2 s)]

So:

ŷ/û = (ŷ1/û)·(ŷ/ŷ1)

and the transfer function from u to y equals the product of the two transfer functions. But this is not true for Fig 2.14, because of the loading between the two tanks.


3.1 Consider Fig 3.1. What is the representation of the vector x with respect to the basis {q1, i2}? What is the representation of q1 with respect to {i2, q2}?

Answer: From the figure, x = [1 3]', q1 = [3 1]', q2 = [2 2]' and i2 = [0 1]'. If we draw from x two lines in parallel with i2 and q1, they intersect at (1/3)q1 and (8/3)i2; thus the representation of x with respect to the basis {q1, i2} is [1/3 8/3]'. This can be verified from

x = [ 1 ] = [ 3 0 ] [ 1/3 ] = (1/3)q1 + (8/3)i2
    [ 3 ]   [ 1 1 ] [ 8/3 ]

To find the representation of q1 with respect to {i2, q2}, we draw from q1 two lines in parallel with q2 and i2; they intersect at -2i2 and (3/2)q2. Thus the representation of q1 with respect to {i2, q2} is [-2 3/2]'. This can be verified from

q1 = [ 3 ] = [ 0 2 ] [ -2  ] = -2i2 + (3/2)q2
     [ 1 ]   [ 1 2 ] [ 3/2 ]

3.2 What are the 1-norm, 2-norm, and infinite-norm of the vectors x1 = [2 -3 1]' and x2 = [1 1 1]'?

Answer:

||x1||_1 = Σ|x1i| = 2 + 3 + 1 = 6          ||x2||_1 = 1 + 1 + 1 = 3
||x1||_2 = (2² + (-3)² + 1²)^(1/2) = √14    ||x2||_2 = (1² + 1² + 1²)^(1/2) = √3
||x1||_∞ = max|x1i| = 3                     ||x2||_∞ = max|x2i| = 1
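The six norms can be checked directly with NumPy:

```python
import numpy as np

x1 = np.array([2.0, -3.0, 1.0])
x2 = np.array([1.0, 1.0, 1.0])

assert np.linalg.norm(x1, 1) == 6.0
assert np.isclose(np.linalg.norm(x1, 2), np.sqrt(14.0))
assert np.linalg.norm(x1, np.inf) == 3.0
assert np.linalg.norm(x2, 1) == 3.0
assert np.isclose(np.linalg.norm(x2, 2), np.sqrt(3.0))
assert np.linalg.norm(x2, np.inf) == 1.0
```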

3.3 Find two orthonormal vectors that span the same space as the two vectors in Problem 3.2.

Answer: The Schmidt orthonormalization procedure gives:

u1 = x1 = [2 -3 1]',   q1 = u1/||u1|| = (1/√14)[2 -3 1]'
u2 = x2 - (q1'x2)q1 = x2 = [1 1 1]',   q2 = u2/||u2|| = (1/√3)[1 1 1]'

The two orthonormal vectors are q1 = (1/√14)[2 -3 1]' and q2 = (1/√3)[1 1 1]'. In fact, the vectors x1 and x2 are already orthogonal because x1'x2 = x2'x1 = 0, so we only need to normalize the two vectors:

q1 = x1/||x1|| = (1/√14)[2 -3 1]',   q2 = x2/||x2|| = (1/√3)[1 1 1]'

3.4 Consider an n × m matrix A with n ≥ m. If all columns of A are orthonormal, then A'A = Im. What can you say about AA'?

Answer: Let A = [a1 a2 ... am]. If all columns of A are orthonormal, that is

ai'aj = 0 if i ≠ j, and 1 if i = j

then

A'A = [ai'aj] = Im

On the other hand,

AA' = Σ[i=1,m] ai ai'

is an n × n matrix, and in general AA' ≠ In. If A is square (n = m), then A'A = Im implies A' = A⁻¹, and in that case AA' = In as well.

3.5 Find the ranks and nullities of the following matrices:

A1 = [ 0 1 0 ]    A2 = [ 4 1 -1 ]    A3 = [ 1  2  3 4 ]
     [ 0 0 0 ]         [ 3 2  0 ]         [ 0 -1 -2 2 ]
     [ 0 0 1 ]         [ 1 1  0 ]         [ 0  0  0 1 ]

Answer:

rank(A1) = 2,  rank(A2) = 3,  rank(A3) = 3
Nullity(A1) = 3 - 2 = 1,  Nullity(A2) = 3 - 3 = 0,  Nullity(A3) = 4 - 3 = 1
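The ranks and nullities can be verified with NumPy (using the matrices as reconstructed above):

```python
import numpy as np

A1 = np.array([[0, 1, 0], [0, 0, 0], [0, 0, 1]])
A2 = np.array([[4, 1, -1], [3, 2, 0], [1, 1, 0]])
A3 = np.array([[1, 2, 3, 4], [0, -1, -2, 2], [0, 0, 0, 1]])

ranks = [np.linalg.matrix_rank(M) for M in (A1, A2, A3)]
assert ranks == [2, 3, 3]
# nullity = number of columns minus rank
nullities = [M.shape[1] - r for M, r in zip((A1, A2, A3), ranks)]
assert nullities == [1, 0, 1]
```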
3.6 Find bases of the range spaces of the matrices in Problem 3.5.

Answer: The last two columns of A1 are linearly independent, so the set {[1 0 0]', [0 0 1]'} can be used as a basis of the range space of A1. All columns of A2 are linearly independent, so the set {[4 3 1]', [1 2 1]', [-1 0 0]'} can be used as a basis of the range space of A2.
Let A3 = [a1 a2 a3 a4], where ai denotes the i-th column of A3. The columns a1, a2 and a4 are linearly independent, and the third column can be expressed as a3 = -a1 + 2a2, so the set {a1, a2, a4} can be used as a basis of the range space of A3.

3.7 Consider the linear algebraic equation

[  2 -1 ]       [ 1 ]
[ -3  3 ] x  =  [ 0 ]  =  y
[ -1  2 ]       [ 1 ]

It has three equations and two unknowns. Does a solution x exist in the equation? Is the solution unique? Does a solution exist if y = [1 1 1]'?

Answer: Let A = [a1 a2] denote the coefficient matrix. Clearly a1 and a2 are linearly independent, so rank(A) = 2. y is the sum of a1 and a2, so rank([A y]) = 2 = rank(A), and a solution x exists in Ax = y.

Nullity(A) = 2 - rank(A) = 0

so the solution x = [1 1]' is unique. If y = [1 1 1]', then rank([A y]) = 3 ≠ rank(A); that is to say, no solution exists in Ax = y.
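The same rank tests can be run numerically (with the coefficient matrix and y as read off the problem above):

```python
import numpy as np

A = np.array([[2.0, -1.0],
              [-3.0, 3.0],
              [-1.0, 2.0]])
y = np.array([1.0, 0.0, 1.0])

assert np.linalg.matrix_rank(A) == 2
assert np.linalg.matrix_rank(np.column_stack([A, y])) == 2   # solution exists
x = np.linalg.lstsq(A, y, rcond=None)[0]
assert np.allclose(x, [1.0, 1.0])                            # and is unique

y2 = np.array([1.0, 1.0, 1.0])
assert np.linalg.matrix_rank(np.column_stack([A, y2])) == 3  # no solution
```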

3.8 Find the general solution of

[ 1  2  3 4 ]       [ 3 ]
[ 0 -1 -2 2 ] x  =  [ 2 ]
[ 0  0  0 1 ]       [ 1 ]

How many parameters do you have?

Answer: Let A denote the coefficient matrix and y = [3 2 1]'. We readily obtain rank(A) = rank([A y]) = 3, so this y lies in the range space of A, and xp = [-1 0 0 1]' is a particular solution. Nullity(A) = 4 - 3 = 1, which means the dimension of the null space of A is 1; the number of parameters in the general solution is therefore 1. A basis of the null space of A is n = [-1 2 -1 0]'. Thus the general solution of Ax = y can be expressed as

x = xp + αn = [ -1 ]     [ -1 ]
              [  0 ] + α [  2 ]
              [  0 ]     [ -1 ]
              [  1 ]     [  0 ]

for any real α, which is the only parameter.

3.9 Find the solution in Example 3.3 that has the smallest Euclidean norm.

Answer: The general solution in Example 3.3 is x = xp + α1 n1 + α2 n2 for any real α1 and α2. Its squared Euclidean norm works out to

||x||² = 3α1² + 5α2² + 4α1α2 - 8α1 - 16α2 + 16

Setting the partial derivatives to zero:

∂||x||²/∂α1 = 0 ⇒ 6α1 + 4α2 - 8 = 0 ⇒ 3α1 + 2α2 - 4 = 0
∂||x||²/∂α2 = 0 ⇒ 10α2 + 4α1 - 16 = 0 ⇒ 5α2 + 2α1 - 8 = 0

Solving the two equations gives α1 = 4/11 and α2 = 16/11; the corresponding x has the smallest Euclidean norm.
3.10 Find the solution in Problem 3.8 that has the smallest Euclidean norm.

Answer: The general solution found in Problem 3.8 is

x = [-1 0 0 1]' + α[-1 2 -1 0]'  for any real α

Its squared Euclidean norm is

||x||² = (-1 - α)² + (2α)² + (-α)² + 1 = 6α² + 2α + 2

Setting d||x||²/dα = 12α + 2 = 0 gives α = -1/6, so

x = [-5/6  -1/3  1/6  1]'

has the smallest Euclidean norm.
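For a consistent system, the Moore-Penrose pseudoinverse returns exactly this minimum-norm solution, which gives a quick numerical check (using the A and y of Problem 3.8 as reconstructed above):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0, 4.0],
              [0.0, -1.0, -2.0, 2.0],
              [0.0, 0.0, 0.0, 1.0]])
y = np.array([3.0, 2.0, 1.0])

x_min = np.linalg.pinv(A) @ y       # minimum-norm solution of Ax = y
assert np.allclose(A @ x_min, y)
assert np.allclose(x_min, [-5/6, -1/3, 1/6, 1.0])
```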


3.11 Consider the equation

x[n] = A^n x[0] + A^(n-1) b u[0] + A^(n-2) b u[1] + ... + A b u[n-2] + b u[n-1]

where A is an n × n matrix and b is an n × 1 column vector. Under what conditions on A and b will there exist u[0], u[1], ..., u[n-1] to meet the equation for any x[n] and x[0]?

Answer: Write the equation in the form

x[n] - A^n x[0] = [b, Ab, ..., A^(n-1) b] [ u[n-1] ]
                                          [ u[n-2] ]
                                          [  ...   ]
                                          [  u[0]  ]

where [b, Ab, ..., A^(n-1) b] is an n × n matrix and x[n] - A^n x[0] is an n × 1 column vector. From this we can see that u[0], u[1], ..., u[n-1] exist to meet the equation for any x[n] and x[0] if and only if

rank [b, Ab, ..., A^(n-1) b] = n
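The rank condition above is easy to test numerically. The 3 × 3 system below is a hypothetical example (not from the problem): a chain of integrators driven through the last state satisfies the condition, while driving only the first state does not:

```python
import numpy as np

# controllability-style matrix [b, Ab, ..., A^{n-1} b]
def ctrb(A, b):
    n = A.shape[0]
    return np.column_stack([np.linalg.matrix_power(A, k) @ b for k in range(n)])

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
b = np.array([0.0, 0.0, 1.0])       # input drives the last state
assert np.linalg.matrix_rank(ctrb(A, b)) == 3      # condition met

b_bad = np.array([1.0, 0.0, 0.0])   # input reaches only the first state
assert np.linalg.matrix_rank(ctrb(A, b_bad)) == 1  # condition fails
```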
3.12 Given

A = [ 2 1 0 0 ]      b = [ 0 ]      b̄ = [ 1 ]
    [ 0 2 1 0 ]          [ 0 ]          [ 2 ]
    [ 0 0 2 0 ]          [ 1 ]          [ 3 ]
    [ 0 0 0 1 ]          [ 1 ]          [ 1 ]

what are the representations of A with respect to the basis {b, Ab, A²b, A³b} and the basis {b̄, Ab̄, A²b̄, A³b̄}, respectively?

Answer: For b:

Ab = [0 1 2 1]',  A²b = A(Ab) = [1 4 4 1]',  A³b = A(A²b) = [6 12 8 1]',  A⁴b = [24 32 16 1]'

and one can check that

A⁴b = -8b + 20Ab - 18A²b + 7A³b

Since A[b Ab A²b A³b] = [Ab A²b A³b A⁴b], the representation of A with respect to the basis {b, Ab, A²b, A³b} is the companion matrix

Ā = [ 0 0 0  -8 ]
    [ 1 0 0  20 ]
    [ 0 1 0 -18 ]
    [ 0 0 1   7 ]

For b̄:

Ab̄ = [4 7 6 1]',  A²b̄ = [15 20 12 1]',  A³b̄ = [50 52 24 1]',  A⁴b̄ = [152 128 48 1]'

and again

A⁴b̄ = -8b̄ + 20Ab̄ - 18A²b̄ + 7A³b̄

so the representation of A with respect to the basis {b̄, Ab̄, A²b̄, A³b̄} is the same companion matrix Ā. (This is expected: by the Cayley-Hamilton theorem the coefficients are those of the characteristic polynomial Δ(λ) = (λ - 2)³(λ - 1) = λ⁴ - 7λ³ + 18λ² - 20λ + 8.)
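The change of basis can be verified numerically: with the A and b of this problem, Q⁻¹AQ for Q = [b Ab A²b A³b] should reproduce the companion matrix built from A⁴b = -8b + 20Ab - 18A²b + 7A³b:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0, 0.0],
              [0.0, 2.0, 1.0, 0.0],
              [0.0, 0.0, 2.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
b = np.array([0.0, 0.0, 1.0, 1.0])

Q = np.column_stack([np.linalg.matrix_power(A, k) @ b for k in range(4)])
A_bar = np.linalg.solve(Q, A @ Q)          # Q^{-1} (A Q)

companion = np.array([[0.0, 0.0, 0.0, -8.0],
                      [1.0, 0.0, 0.0, 20.0],
                      [0.0, 1.0, 0.0, -18.0],
                      [0.0, 0.0, 1.0, 7.0]])
assert np.allclose(A_bar, companion)
```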

3.13 Find Jordan-form representations of the following matrices:

A1 = [ 1 4 10 ]   A2 = [  0  1  0 ]   A3 = [ 1 0 1 ]   A4 = [ 0   4   3 ]
     [ 0 2  0 ]        [  0  0  1 ]        [ 0 1 0 ]        [ 0  20  16 ]
     [ 0 0  3 ]        [ -2 -4 -3 ]        [ 0 0 2 ]        [ 0 -25 -20 ]

Answer: The characteristic polynomial of A1 is Δ1(λ) = det(λI - A1) = (λ - 1)(λ - 2)(λ - 3); thus the eigenvalues of A1 are 1, 2, 3. They are all distinct, so the Jordan-form representation of A1 is diagonal. The eigenvectors associated with λ = 1, 2, 3, respectively, can be any nonzero solutions of

A1 q1 = q1 ⇒ q1 = [1 0 0]'
A1 q2 = 2q2 ⇒ q2 = [4 1 0]'
A1 q3 = 3q3 ⇒ q3 = [5 0 1]'

Thus the Jordan-form representation of A1 with respect to {q1, q2, q3} is Â1 = diag(1, 2, 3).
The characteristic polynomial of A2 is

Δ2(λ) = det(λI - A2) = λ³ + 3λ² + 4λ + 2 = (λ + 1)(λ + 1 + j)(λ + 1 - j)

so A2 has eigenvalues -1, -1 + j and -1 - j. The eigenvectors associated with -1, -1 + j and -1 - j are, respectively, [1 -1 1]', [1 -1+j -2j]' and [1 -1-j 2j]'. Then with

Q = [  1    1      1   ]
    [ -1  -1+j   -1-j  ]
    [  1  -2j     2j   ]

we have Â2 = diag(-1, -1 + j, -1 - j) = Q⁻¹A2Q.
The characteristic polynomial of A3 is Δ3(λ) = det(λI - A3) = (λ - 1)²(λ - 2); thus the eigenvalues of A3 are 1, 1 and 2. The eigenvalue 1 has multiplicity 2, and nullity(A3 - I) = 3 - rank(A3 - I) = 3 - 1 = 2, so A3 has two linearly independent eigenvectors associated with 1:

(A3 - I)q = 0 ⇒ q1 = [1 0 0]', q2 = [0 1 0]'
(A3 - 2I)q3 = 0 ⇒ q3 = [1 0 1]'

Thus with Q = [q1 q2 q3] we have Â3 = diag(1, 1, 2) = Q⁻¹A3Q.
The characteristic polynomial of A4 is Δ4(λ) = det(λI - A4) = λ³; clearly A4 has only one distinct eigenvalue 0 with multiplicity 3. Nullity(A4 - 0·I) = 3 - 2 = 1, so A4 has only one independent eigenvector associated with 0. We can compute the generalized eigenvectors of A4 from the equations below:

A4 v1 = 0 ⇒ v1 = [1 0 0]'
A4 v2 = v1 ⇒ v2 = [0 4 -5]'
A4 v3 = v2 ⇒ v3 = [0 -3 4]'

Then the representation of A4 with respect to the basis {v1, v2, v3} is

Â4 = [ 0 1 0 ] = Q⁻¹A4Q,  where Q = [ 1  0  0 ]
     [ 0 0 1 ]                      [ 0  4 -3 ]
     [ 0 0 0 ]                      [ 0 -5  4 ]
3.14 Consider the companion-form matrix

A = [ -α1 -α2 -α3 -α4 ]
    [  1   0   0   0  ]
    [  0   1   0   0  ]
    [  0   0   1   0  ]

Show that its characteristic polynomial is given by

Δ(λ) = λ⁴ + α1λ³ + α2λ² + α3λ + α4

Show also that if λi is an eigenvalue of A, or a solution of Δ(λ) = 0, then [λi³ λi² λi 1]' is an eigenvector of A associated with λi.

Proof: Expanding det(λI - A) along the first row:

Δ(λ) = det [ λ+α1  α2  α3  α4 ]
           [  -1   λ   0   0  ]
           [   0  -1   λ   0  ]
           [   0   0  -1   λ  ]
     = (λ + α1)·λ³ + α2·λ² + α3·λ + α4
     = λ⁴ + α1λ³ + α2λ² + α3λ + α4

If λi is an eigenvalue of A, that is to say Δ(λi) = 0, then λi⁴ = -α1λi³ - α2λi² - α3λi - α4 and

A [ λi³ ]   [ -α1λi³ - α2λi² - α3λi - α4 ]   [ λi⁴ ]        [ λi³ ]
  [ λi² ] = [            λi³             ] = [ λi³ ] = λi · [ λi² ]
  [ λi  ]   [            λi²             ]   [ λi² ]        [ λi  ]
  [ 1   ]   [            λi              ]   [ λi  ]        [ 1   ]

So [λi³ λi² λi 1]' is an eigenvector of A associated with λi.

3.15 Show that the Vandermonde determinant

V4 = det [ λ1³ λ2³ λ3³ λ4³ ]
         [ λ1² λ2² λ3² λ4² ]
         [ λ1  λ2  λ3  λ4  ]
         [ 1   1   1   1   ]

equals ∏[1≤i<j≤4] (λj - λi). Thus we conclude that the matrix is nonsingular, or equivalently that the eigenvectors are linearly independent, if all eigenvalues are distinct.

Proof: Perform the row operations row1 → row1 - λ1·row2, row2 → row2 - λ1·row3, row3 → row3 - λ1·row4, which do not change the determinant. The first column becomes [0 0 0 1]' and column j (j ≥ 2) becomes [λj²(λj - λ1), λj(λj - λ1), λj - λ1, 1]'. Expanding along the first column (the entry 1 sits in position (4,1), so its cofactor sign is (-1)^(4+1) = -1) and factoring λj - λ1 out of each remaining column:

V4 = -(λ2 - λ1)(λ3 - λ1)(λ4 - λ1) · Ṽ3,   Ṽ3 = det [ λ2² λ3² λ4² ]
                                                    [ λ2  λ3  λ4  ]
                                                    [ 1   1   1   ]

Repeating the same reduction on Ṽ3 (the corner entry now carries cofactor sign (-1)^(3+1) = +1):

Ṽ3 = (λ3 - λ2)(λ4 - λ2) · det [ λ3 λ4 ] = -(λ3 - λ2)(λ4 - λ2)(λ4 - λ3)
                              [ 1  1  ]

Hence

V4 = (λ2 - λ1)(λ3 - λ1)(λ4 - λ1)(λ3 - λ2)(λ4 - λ2)(λ4 - λ3) = ∏[1≤i<j≤4] (λj - λi)

Now let λ1, λ2, λ3, λ4 be distinct eigenvalues of a matrix A, with eigenvectors qi = [λi³ λi² λi 1]' as found in Problem 3.14. Since all λi are distinct, every factor λj - λi with i < j is nonzero, so V4 ≠ 0 and the matrix [q1 q2 q3 q4] is nonsingular; equivalently, the eigenvectors q1, q2, q3, q4 are linearly independent.
3.16 Show that the companion-form matrix in Problem 3.14 is nonsingular if and only if α4 ≠ 0. Under this assumption, show that its inverse equals

A⁻¹ = [   0       1       0       0    ]
      [   0       0       1       0    ]
      [   0       0       0       1    ]
      [ -1/α4  -α1/α4  -α2/α4  -α3/α4  ]

Proof: As we know, the characteristic polynomial is

Δ(λ) = det(λI - A) = λ⁴ + α1λ³ + α2λ² + α3λ + α4

Setting λ = 0 gives det(-A) = (-1)⁴ det(A) = det(A) = α4, so A is nonsingular if and only if α4 ≠ 0. Direct multiplication verifies the inverse:

A·A⁻¹ = [ -α1 -α2 -α3 -α4 ] [   0       1       0       0    ]
        [  1   0   0   0  ] [   0       0       1       0    ] = I4
        [  0   1   0   0  ] [   0       0       0       1    ]
        [  0   0   1   0  ] [ -1/α4  -α1/α4  -α2/α4  -α3/α4  ]

(the first row gives [1, -α1 + α1, -α2 + α2, -α3 + α3] = [1 0 0 0], and rows 2 to 4 of A simply pick out rows 1 to 3 of A⁻¹).

3.17 Consider

A = [ λ  λT  λT²/2 ]
    [ 0  λ   λT    ]
    [ 0  0   λ     ]

with λ ≠ 0 and T > 0. Show that [0 0 1]' is a generalized eigenvector of grade 3 and that the three columns of

Q = [ λ²T²  λT²/2  0 ]
    [ 0     λT     0 ]
    [ 0     0      1 ]

constitute a chain of generalized eigenvectors of length 3. Verify

Q⁻¹AQ = [ λ 1 0 ]
        [ 0 λ 1 ]
        [ 0 0 λ ]

Proof: Clearly A has only one distinct eigenvalue λ with multiplicity 3. With v3 = [0 0 1]':

(A - λI) = [ 0 λT λT²/2 ]
           [ 0 0  λT    ]
           [ 0 0  0     ]

(A - λI)v3 = [λT²/2  λT  0]' ≠ 0
(A - λI)²v3 = [λ²T²  0  0]' ≠ 0
(A - λI)³v3 = 0

These equations imply that [0 0 1]' is a generalized eigenvector of grade 3, and the three columns of Q, namely

v1 = (A - λI)²v3 = [λ²T² 0 0]',   v2 = (A - λI)v3 = [λT²/2 λT 0]',   v3 = [0 0 1]'

constitute a chain of generalized eigenvectors of length 3: (A - λI)v1 = 0, (A - λI)v2 = v1, (A - λI)v3 = v2. Consequently Av1 = λv1, Av2 = v1 + λv2, Av3 = v2 + λv3, that is

AQ = Q [ λ 1 0 ],  i.e.  Q⁻¹AQ = [ λ 1 0 ]
       [ 0 λ 1 ]                 [ 0 λ 1 ]
       [ 0 0 λ ]                 [ 0 0 λ ]

3.18 Find the characteristic polynomials and the minimal polynomials of the following matrices:

(a) [ λ1 1  0  0  ]   (b) [ λ1 1  0  0  ]   (c) [ λ1 1  0  0  ]   (d) [ λ1 0  0  0  ]
    [ 0  λ1 1  0  ]       [ 0  λ1 1  0  ]       [ 0  λ1 0  0  ]       [ 0  λ1 0  0  ]
    [ 0  0  λ1 0  ]       [ 0  0  λ1 0  ]       [ 0  0  λ1 1  ]       [ 0  0  λ1 0  ]
    [ 0  0  0  λ2 ]       [ 0  0  0  λ1 ]       [ 0  0  0  λ1 ]       [ 0  0  0  λ1 ]

Answer:

(a) Δ1(λ) = (λ - λ1)³(λ - λ2),  ψ1(λ) = (λ - λ1)³(λ - λ2)
(b) Δ2(λ) = (λ - λ1)⁴,  ψ2(λ) = (λ - λ1)³
(c) Δ3(λ) = (λ - λ1)⁴,  ψ3(λ) = (λ - λ1)²
(d) Δ4(λ) = (λ - λ1)⁴,  ψ4(λ) = (λ - λ1)

3.19 Show that if λ is an eigenvalue of A with eigenvector x, then f(λ) is an eigenvalue of f(A) with the same eigenvector x.

Proof: Let A be an n × n matrix. Using Theorem 3.5, for any function f we can define a polynomial

h(λ) = β0 + β1λ + ... + β(n-1)λ^(n-1)

which equals f(λ) on the spectrum of A, so that f(A) = h(A) and, since λ is on the spectrum, f(λ) = h(λ). From Ax = λx we have A²x = λ²x, A³x = λ³x, ..., A^k x = λ^k x, so

f(A)x = h(A)x = (β0 I + β1 A + ... + β(n-1)A^(n-1))x
      = β0 x + β1 λx + ... + β(n-1)λ^(n-1) x
      = h(λ)x = f(λ)x

which implies that f(λ) is an eigenvalue of f(A) with the same eigenvector x.

3.20 Show that an n × n matrix A has the property A^k = 0 for k ≥ m if and only if A has eigenvalue 0 with multiplicity n and index m or less. Such a matrix is called a nilpotent matrix.

Proof: Suppose A has only the eigenvalue 0, with multiplicity n and index m or less. Then the Jordan-form representation of A is

Â = diag(J1, J2, ..., Jl)

where each Ji is an ni × ni Jordan block with eigenvalue 0, Σ ni = n, and ni ≤ m. Each block satisfies Ji^k = 0 for k ≥ ni, so for k ≥ m ≥ max ni we have Ji^k = 0 for all i = 1, 2, ..., l, and hence A^k = Q·diag(J1^k, ..., Jl^k)·Q⁻¹ = 0.
Conversely, suppose A^k = 0 for k ≥ m, and consider a Jordan block Ji of order ni with eigenvalue λi. The diagonal entries of Ji^k are λi^k and the (1, ni) entry is

C(k, ni - 1)·λi^(k - ni + 1)

so Ji^k = 0 forces λi = 0. A block of order ni with eigenvalue 0 satisfies Ji^k = 0 only for k ≥ ni, so ni ≤ m for every i. That is, A has only one distinct eigenvalue 0, with multiplicity n and index m or less.

3.21 Given

  A = [1 1 0]
      [0 0 1]
      [0 0 1]

find A^10, A^103, and e^At.

Answer: The characteristic polynomial of A is Δ(λ) = det(λI − A) = λ(λ − 1)^2. Let h(λ) = β0 + β1 λ + β2 λ^2. For f(λ) = λ^10, on the spectrum of A:

  f(0) = h(0):   0 = β0
  f(1) = h(1):   1 = β0 + β1 + β2
  f'(1) = h'(1): 10 = β1 + 2 β2

so β0 = 0, β1 = −8, β2 = 9, and, with A^2 = [1 1 1; 0 0 1; 0 0 1],

  A^10 = −8 A + 9 A^2 = [1 1 9]
                        [0 0 1]
                        [0 0 1]

For f(λ) = λ^103: β0 = 0, β1 + β2 = 1, β1 + 2β2 = 103, so β1 = −101, β2 = 102, and

  A^103 = −101 A + 102 A^2 = [1 1 102]
                             [0 0 1  ]
                             [0 0 1  ]

For f(λ) = e^(λt):

  f(0) = h(0):   1 = β0
  f(1) = h(1):   e^t = β0 + β1 + β2
  f'(1) = h'(1): t e^t = β1 + 2 β2

so β0 = 1, β1 = 2e^t − t e^t − 2, β2 = t e^t − e^t + 1, and

  e^At = β0 I + β1 A + β2 A^2 = [e^t  e^t − 1  t e^t − e^t + 1]
                                [0    1        e^t − 1        ]
                                [0    0        e^t            ]
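The powers obtained from the polynomial h(λ) can be confirmed by direct matrix multiplication. A quick check (not part of the original solution):

```python
import numpy as np

# Check Problem 3.21: A^10 and A^103 from h(lam) = b1*lam + b2*lam^2
# on the spectrum {0, 1, 1} of A must agree with direct matrix powers.
A = np.array([[1, 1, 0],
              [0, 0, 1],
              [0, 0, 1]])

A10 = -8 * A + 9 * (A @ A)            # b1 = -8, b2 = 9
A103 = -101 * A + 102 * (A @ A)       # b1 = -101, b2 = 102

assert (A10 == np.linalg.matrix_power(A, 10)).all()
assert (A103 == np.linalg.matrix_power(A, 103)).all()
print(A10)   # [[1 1 9], [0 0 1], [0 0 1]]
```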

3.22 Use two different methods to compute e^(A1 t) and e^(A4 t) for A1 and A4 in Problem 3.13.

Answer:

Method 1 (Jordan form). The Jordan-form representation of

  A1 = [1 4 10; 0 2 0; 0 0 3]

with respect to the basis {[1 0 0]', [4 1 0]', [5 0 1]'} is Â1 = diag{1, 2, 3}. So

  e^(A1 t) = Q e^(Â1 t) Q^-1,  Q = [1 4 5; 0 1 0; 0 0 1],  Q^-1 = [1 −4 −5; 0 1 0; 0 0 1]

  e^(A1 t) = [1 4 5; 0 1 0; 0 0 1] diag{e^t, e^2t, e^3t} [1 −4 −5; 0 1 0; 0 0 1]

           = [e^t  4(e^2t − e^t)  5(e^3t − e^t)]
             [0    e^2t           0            ]
             [0    0              e^3t         ]

For A4 = [0 4 3; 0 20 16; 0 −25 −20], the Jordan form is the nilpotent block N = [0 1 0; 0 0 1; 0 0 0] with

  Q = [1 0 0; 0 4 −3; 0 −5 4],  Q^-1 = [1 0 0; 0 4 3; 0 5 4]

  e^(A4 t) = Q e^(Nt) Q^-1 = Q [1 t t²/2; 0 1 t; 0 0 1] Q^-1

           = [1  4t + 5t²/2  3t + 2t²]
             [0  1 + 20t     16t     ]
             [0  −25t        1 − 20t ]

Method 2 (polynomial on the spectrum). The characteristic polynomial of A1 is Δ(λ) = (λ − 1)(λ − 2)(λ − 3). Let h(λ) = β0 + β1 λ + β2 λ². On the spectrum of A1:

  f(1) = h(1): e^t = β0 + β1 + β2
  f(2) = h(2): e^2t = β0 + 2β1 + 4β2
  f(3) = h(3): e^3t = β0 + 3β1 + 9β2

which give

  β0 = 3e^t − 3e^2t + e^3t
  β1 = −(5/2)e^t + 4e^2t − (3/2)e^3t
  β2 = (1/2)(e^t − 2e^2t + e^3t)

Then, with A1² = [1 12 40; 0 4 0; 0 0 9],

  e^(A1 t) = β0 I + β1 A1 + β2 A1²

which works out to the same matrix as in Method 1.

The characteristic polynomial of A4 is Δ(λ) = λ³. With h(λ) = β0 + β1 λ + β2 λ², on the spectrum:

  f(0) = h(0):    1 = β0
  f'(0) = h'(0):  t = β1
  f''(0) = h''(0): t² = 2 β2

Thus e^(A4 t) = I + t A4 + (t²/2) A4², and since A4² = [0 5 4; 0 0 0; 0 0 0],

  e^(A4 t) = [1  4t + 5t²/2  3t + 2t²]
             [0  1 + 20t     16t     ]
             [0  −25t        1 − 20t ]
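Both methods can be cross-checked against a numerical matrix exponential. A quick check at one sample time (not part of the original solution):

```python
import numpy as np
from scipy.linalg import expm

# Check Problem 3.22: the closed-form e^{A1 t} must agree with scipy's
# matrix exponential at a sample time t.
A1 = np.array([[1.0, 4.0, 10.0],
               [0.0, 2.0,  0.0],
               [0.0, 0.0,  3.0]])
t = 0.7
et, e2t, e3t = np.exp(t), np.exp(2 * t), np.exp(3 * t)
closed = np.array([[et, 4 * (e2t - et), 5 * (e3t - et)],
                   [0.0, e2t, 0.0],
                   [0.0, 0.0, e3t]])
assert np.allclose(closed, expm(A1 * t))
print("closed form matches expm")
```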

3.23 Show that functions of the same matrix commute; that is, f(A) g(A) = g(A) f(A). Consequently we have A e^At = e^At A.

Proof: Let n be the order of A. Any function of A can be written as a polynomial in A of degree at most n − 1:

  f(A) = α0 I + α1 A + … + α(n−1) A^(n−1)
  g(A) = β0 I + β1 A + … + β(n−1) A^(n−1)

Because powers of A commute with one another (A^i A^j = A^(i+j) = A^j A^i), multiplying the two polynomials in either order gives the same double sum:

  f(A) g(A) = Σ_{i=0}^{n−1} Σ_{j=0}^{n−1} αi βj A^(i+j) = g(A) f(A)

Letting f(A) = A and g(A) = e^At yields A e^At = e^At A.

3.24 Let

  C = [λ1 0 0; 0 λ2 0; 0 0 λ3]

Find a B such that e^B = C. Show that if λi = 0 for some i, then B does not exist. Is it true that for any nonsingular C there exists a matrix B such that e^B = C?

Answer: Let f(λ) = ln λ, so that f(e^B) = B. Since C is diagonal,

  B = ln C = [ln λ1  0      0    ]
             [0      ln λ2  0    ]
             [0      0      ln λ3]

provided λi > 0 for i = 1, 2, 3. If λi = 0 for some i, then ln λi does not exist, so B does not exist; indeed e^B is always nonsingular (det e^B = e^(tr B) ≠ 0), so e^B = C is impossible for a singular C. If λi < 0 for some i, then ln λi does not exist as a real number, so no real B is obtained from this construction either. We conclude that it is not true that for every nonsingular C there exists a real B such that e^B = C.
3.25 Let

  (sI − A)^-1 = (1/Δ(s)) Adj(sI − A)

and let m(s) be the monic greatest common divisor of all entries of Adj(sI − A). Verify for the matrix A3 in Problem 3.13 that the minimal polynomial of A3 equals ψ(s) = Δ(s)/m(s).

Verification: For

  A3 = [1 0 1; 0 1 0; 0 0 2]

we have Δ(s) = (s − 1)²(s − 2) and

  sI − A3 = [s−1  0    −1 ]
            [0    s−1  0  ]
            [0    0    s−2]

  Adj(sI − A3) = [(s−1)(s−2)  0           (s−1) ]
                 [0           (s−1)(s−2)  0     ]
                 [0           0           (s−1)²]

so we can easily obtain m(s) = s − 1 and Δ(s)/m(s) = (s − 1)(s − 2). Direct computation gives (A3 − I)(A3 − 2I) = 0, while neither (s − 1), (s − 2) nor (s − 1)(s − 1) annihilates A3, so the minimal polynomial is indeed ψ(s) = (s − 1)(s − 2) = Δ(s)/m(s).

3.26 Define

  (sI − A)^-1 = (1/Δ(s)) [R0 s^(n−1) + R1 s^(n−2) + … + R(n−2) s + R(n−1)]

where Δ(s) = det(sI − A) := s^n + α1 s^(n−1) + α2 s^(n−2) + … + αn and the Ri are constant matrices. This definition is valid because the degree in s of the adjugate of (sI − A) is at most n − 1. Verify

  α1 = −tr(A R0)/1        R0 = I
  α2 = −tr(A R1)/2        R1 = A R0 + α1 I = A + α1 I
  α3 = −tr(A R2)/3        R2 = A R1 + α2 I = A² + α1 A + α2 I
   ⋮                       ⋮
  αn = −tr(A R(n−1))/n    R(n−1) = A R(n−2) + α(n−1) I = A^(n−1) + α1 A^(n−2) + … + α(n−1) I
                          0 = A R(n−1) + αn I

where tr stands for the trace of a matrix and is defined as the sum of all its diagonal entries. This process of computing the αi and Ri is called the Leverrier algorithm.

Verification: With the Ri defined by the recursion above,

  (sI − A)[R0 s^(n−1) + R1 s^(n−2) + … + R(n−2) s + R(n−1)]
  = R0 s^n + (R1 − A R0) s^(n−1) + (R2 − A R1) s^(n−2) + … + (R(n−1) − A R(n−2)) s − A R(n−1)
  = I (s^n + α1 s^(n−1) + α2 s^(n−2) + … + α(n−1) s + αn)
  = Δ(s) I

which implies

  (sI − A)^-1 = (1/Δ(s)) [R0 s^(n−1) + R1 s^(n−2) + … + R(n−2) s + R(n−1)]
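The recursion is easy to implement. A minimal sketch (not part of the original solution; the test matrix is an arbitrary companion matrix whose characteristic polynomial is known):

```python
import numpy as np

# Sketch of the Leverrier (Faddeev-LeVerrier) algorithm from Problem 3.26,
# checked against numpy's characteristic-polynomial routine.
def leverrier(A):
    n = A.shape[0]
    R = np.eye(n)                  # R0 = I
    alphas = []
    for i in range(1, n + 1):
        a = -np.trace(A @ R) / i   # alpha_i = -tr(A R_{i-1}) / i
        alphas.append(a)
        R = A @ R + a * np.eye(n)  # R_i = A R_{i-1} + alpha_i I
    return alphas                  # the final R is the zero matrix (Cayley-Hamilton)

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-6.0, -11.0, -6.0]])   # char. poly s^3 + 6 s^2 + 11 s + 6
print(leverrier(A))                    # [6.0, 11.0, 6.0]
assert np.allclose(leverrier(A), np.poly(A)[1:])
```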

3.27 Use Problem 3.26 to prove the Cayley–Hamilton theorem.

Proof: From the Leverrier recursion,

  R0 = I
  R1 − A R0 = α1 I
  R2 − A R1 = α2 I
   ⋮
  R(n−1) − A R(n−2) = α(n−1) I
  −A R(n−1) = αn I

Multiplying the i-th equation by A^(n−i+1) (i = 1, 2, …, n) yields

  A^n R0 = A^n
  A^(n−1) R1 − A^n R0 = α1 A^(n−1)
  A^(n−2) R2 − A^(n−1) R1 = α2 A^(n−2)
   ⋮
  A R(n−1) − A² R(n−2) = α(n−1) A
  −A R(n−1) = αn I

Summing all of these equations, the left-hand sides telescope to zero, so

  A^n + α1 A^(n−1) + α2 A^(n−2) + … + α(n−1) A + αn I = 0

that is, Δ(A) = 0.

3.28 Use Problem 3.26 to show

  (sI − A)^-1 = (1/Δ(s)) [A^(n−1) + (s + α1) A^(n−2) + (s² + α1 s + α2) A^(n−3) + … + (s^(n−1) + α1 s^(n−2) + … + α(n−1)) I]

Proof: Substituting R0 = I, R1 = A + α1 I, R2 = A² + α1 A + α2 I, …, R(n−1) = A^(n−1) + α1 A^(n−2) + … + α(n−1) I into the expansion of Problem 3.26:

  (sI − A)^-1 = (1/Δ(s)) [R0 s^(n−1) + R1 s^(n−2) + … + R(n−2) s + R(n−1)]
  = (1/Δ(s)) [I s^(n−1) + (A + α1 I) s^(n−2) + (A² + α1 A + α2 I) s^(n−3) + …
              + (A^(n−2) + α1 A^(n−3) + … + α(n−2) I) s + A^(n−1) + α1 A^(n−2) + … + α(n−1) I]

Collecting the terms that multiply each power of A instead of each power of s gives

  = (1/Δ(s)) [A^(n−1) + (s + α1) A^(n−2) + (s² + α1 s + α2) A^(n−3) + … + (s^(n−1) + α1 s^(n−2) + … + α(n−1)) I]

3.29 Let all eigenvalues of A be distinct and let qi be a right eigenvector of A associated with λi, that is, A qi = λi qi. Define Q = [q1 q2 … qn] and define

  P := Q^-1 = [p1; p2; …; pn]

where pi is the i-th row of P. Show that pi is a left eigenvector of A associated with λi, that is, pi A = λi pi.

Proof: Because all eigenvalues of A are distinct, Q = [q1 q2 … qn] is nonsingular and

  Q^-1 A Q = Λ := diag{λ1, λ2, …, λn}

so that P A = Λ P. Written out row by row,

  [p1 A; p2 A; …; pn A] = [λ1 p1; λ2 p2; …; λn pn]

Hence pi A = λi pi; that is, pi is a left eigenvector of A associated with λi.

3.30 Show that if all eigenvalues of A are distinct, then (sI − A)^-1 can be expressed as

  (sI − A)^-1 = Σ_i (1/(s − λi)) qi pi

where qi and pi are right and left eigenvectors of A associated with λi.

Proof: If all eigenvalues of A are distinct, then Q = [q1 q2 … qn] is nonsingular and P = Q^-1 = [p1; p2; …; pn], where pi is a left eigenvector of A associated with λi (Problem 3.29). Note that Σ_i qi pi = Q P = I and A qi = λi qi. Then

  (sI − A) Σ_i (1/(s − λi)) qi pi = Σ_i (1/(s − λi)) (s qi pi − A qi pi)
                                  = Σ_i (1/(s − λi)) (s − λi) qi pi
                                  = Σ_i qi pi = I

That is, (sI − A)^-1 = Σ_i (1/(s − λi)) qi pi.

3.31 Find the M to meet the Lyapunov equation in (3.59) with

  A = [0 1; −2 −2],  B = 3,  C = [3; 3]

What are the eigenvalues of the Lyapunov equation? Is the Lyapunov equation singular? Is the solution unique?

Answer: With scalar B = 3, the equation AM + MB = C reduces to (A + 3I)M = C, so

  M = (A + 3I)^-1 C = [3 1; −2 1]^-1 [3; 3] = (1/5)[1 −1; 2 3][3; 3] = [0; 3]

Since Δ_A(λ) = det(λI − A) = (λ + 1 − j)(λ + 1 + j) and the eigenvalue of B is 3, the eigenvalues of the Lyapunov equation are

  η1 = (−1 + j) + 3 = 2 + j,  η2 = (−1 − j) + 3 = 2 − j

No eigenvalue is zero, so the Lyapunov equation is nonsingular and the solution M is unique.
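Equations of the form AM + MB = C are Sylvester equations, and the result can be checked numerically. A quick check (not part of the original solution):

```python
import numpy as np
from scipy.linalg import solve_sylvester

# Check Problem 3.31: solve AM + MB = C for the 2x1 matrix M.
A = np.array([[0.0, 1.0],
              [-2.0, -2.0]])
B = np.array([[3.0]])          # scalar B written as a 1x1 matrix
C = np.array([[3.0], [3.0]])
M = solve_sylvester(A, B, C)
print(M)                       # [[0.], [3.]]
assert np.allclose(M, [[0.0], [3.0]])
```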

3.32 Repeat Problem 3.31 for

  A = [0 1; −1 −2],  B = 1,  C1 = [3; 3],  C2 = [3; −3]

with the two different C.

Answer: The equation AM + MB = C becomes (A + I)M = C with the singular matrix

  A + I = [1 1; −1 −1]

For C1 = [3; 3]: (A + I)M = [m1 + m2; −m1 − m2] can never equal [3; 3], so no solution exists.
For C2 = [3; −3]: any M = [α; 3 − α] is a solution, for arbitrary α.

Since Δ_A(λ) = (λ + 1)² and the eigenvalue of B is 1, the eigenvalues of the Lyapunov equation are

  η1 = η2 = −1 + 1 = 0

The Lyapunov equation is singular because it has zero eigenvalues. If C lies in the range space of the Lyapunov equation, then solutions exist but are not unique; otherwise no solution exists.
3.33 Check to see if the following matrices are positive definite or semidefinite:

  [2 3 2]      [ 0  0 −1]      [a1a1  a1a2  a1a3]
  [3 1 0]      [ 0  0  0]      [a2a1  a2a2  a2a3]
  [2 0 2]      [−1  0  2]      [a3a1  a3a2  a3a3]

Answer:
1. The principal minor det [2 3; 3 1] = 2 − 9 = −7 < 0, so the first matrix is not positive definite; since positive semidefiniteness requires all principal minors to be nonnegative, it is not positive semidefinite either.

2. det(λI − M) = λ(λ² − 2λ − 1) = λ(λ − 1 − √2)(λ − 1 + √2). The second matrix has a negative eigenvalue 1 − √2, so it is not positive definite, nor positive semidefinite.

3. The third matrix is the outer product a a' of a = [a1; a2; a3] with itself. Its 1 × 1 principal minors a1a1, a2a2, a3a3 are all ≥ 0, and every 2 × 2 and 3 × 3 principal minor vanishes, e.g.

  det [a1a1 a1a2; a2a1 a2a2] = a1²a2² − a1²a2² = 0,  det(a a') = 0

Since all principal minors of the third matrix are zero or positive, the matrix is positive semidefinite (indeed x'(a a')x = (a'x)² ≥ 0 for every x).
3.34 Compute the singular values of the following matrices:

  [−1  0  1]      [−1  2]
  [ 2 −1  0]      [ 2  4]

Answer: For the first matrix A1,

  A1 A1' = [−1 0 1; 2 −1 0] [−1 2; 0 −1; 1 0] = [2 −2; −2 5]

  Δ(λ) = (λ − 2)(λ − 5) − 4 = λ² − 7λ + 6 = (λ − 1)(λ − 6)

The eigenvalues of A1 A1' are 6 and 1; thus the singular values of A1 are √6 and 1.

For the second matrix A2,

  A2' A2 = [−1 2; 2 4]' [−1 2; 2 4] = [5 6; 6 20]

  Δ(λ) = (λ − 5)(λ − 20) − 36 = λ² − 25λ + 64 = (λ − 25/2 − (3/2)√41)(λ − 25/2 + (3/2)√41)

The eigenvalues of A2' A2 are 25/2 ± (3/2)√41; thus the singular values of A2 are

  (25/2 + (3/2)√41)^(1/2) = 4.70  and  (25/2 − (3/2)√41)^(1/2) = 1.70
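These values can be confirmed with an SVD routine. In the check below (not part of the original solution), the minus signs in the two matrices are an assumption — the scan dropped them — chosen so that A1 A1' = [2 −2; −2 5] and A2'A2 = [5 6; 6 20] as in the worked computation:

```python
import numpy as np

# Check Problem 3.34: singular values of the two example matrices.
A1 = np.array([[-1.0, 0.0, 1.0],
               [ 2.0, -1.0, 0.0]])
A2 = np.array([[-1.0, 2.0],
               [ 2.0, 4.0]])

s1 = np.linalg.svd(A1, compute_uv=False)   # singular values, descending
s2 = np.linalg.svd(A2, compute_uv=False)
print(s1)   # sqrt(6) and 1
print(s2)   # approximately 4.70 and 1.70
assert np.allclose(s1, [np.sqrt(6.0), 1.0])
assert np.allclose(np.round(s2, 2), [4.70, 1.70])
```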

3.35 If A is symmetric, what is the relationship between its eigenvalues and singular values?

Answer: If A is symmetric, then A'A = AA = A². Let qi be an eigenvector of A associated with eigenvalue λi, that is, A qi = λi qi (i = 1, 2, …, n). Then

  A² qi = A λi qi = λi A qi = λi² qi  (i = 1, 2, …, n)

which implies λi² is an eigenvalue of A'A = A². So the singular values of A are |λi| for i = 1, 2, …, n, where the λi are the eigenvalues of A.

3.36 Show

  det( I_n + [a1; a2; …; an][b1 b2 … bn] ) = 1 + Σ_{m=1}^{n} am bm

Answer: Let A = [a1; a2; …; an] and B = [b1 b2 … bn]; A is n × 1 and B is 1 × n. Using (3.64), det(I_n + AB) = det(I_1 + BA), we can readily obtain

  det( I_n + [a1; …; an][b1 … bn] ) = det( I_1 + [b1 … bn][a1; …; an] ) = 1 + Σ_{m=1}^{n} am bm

3.37 Show (3.65), that is, s^n det(sI_m − AB) = s^m det(sI_n − BA), where A is m × n and B is n × m.

Proof: Define

  N = [I_m  −A; 0  sI_n],  Q = [I_m  0; −B  sI_n],  P = [sI_m  A; B  I_n]

Then

  NP = [sI_m − AB  0; sB  sI_n],  QP = [sI_m  A; 0  sI_n − BA]

Because det N = det Q = s^n, we have

  det(NP) = det N · det P = s^n det P = det Q · det P = det(QP)

And

  det(NP) = det(sI_m − AB) det(sI_n) = s^n det(sI_m − AB)
  det(QP) = det(sI_m) det(sI_n − BA) = s^m det(sI_n − BA)

Hence s^n det(sI_m − AB) = s^m det(sI_n − BA).

3.38 Consider Ax = y, where A is m × n and has rank m. Is (A'A)^-1 A' y a solution? If not, under what condition will it be a solution? Is A'(AA')^-1 y a solution?

Answer: A is m × n with rank m, so m ≤ n and A'A is an n × n matrix with rank(A'A) = rank A = m.

If m < n, then rank(A'A) < n, so (A'A)^-1 does not exist and (A'A)^-1 A' y is not even defined, hence not a solution. If m = n, then A is square and nonsingular, and

  (A'A)^-1 A' y = A^-1 (A')^-1 A' y = A^-1 y

which is a solution. So (A'A)^-1 A' y is a solution exactly when m = n.

Since rank A = m, the m × m matrix AA' is nonsingular and (AA')^-1 exists, so

  A [A'(AA')^-1 y] = (AA')(AA')^-1 y = y

that is, A'(AA')^-1 y is always a solution of Ax = y.
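A quick numerical illustration of both claims for a wide matrix (not part of the original solution; the particular A and y are arbitrary):

```python
import numpy as np

# Check Problem 3.38: for A (m x n, rank m, m < n), x = A'(AA')^{-1} y always
# solves A x = y, while (A'A)^{-1} does not exist.
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0]])      # m = 2, n = 3, rank 2
y = np.array([3.0, 4.0])

x = A.T @ np.linalg.inv(A @ A.T) @ y
assert np.allclose(A @ x, y)                 # x solves Ax = y
assert np.linalg.matrix_rank(A.T @ A) == 2   # A'A is 3x3 but rank 2: singular
print(x)
```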

PROBLEMS OF CHAPTER 4

4.1 An oscillation can be generated by

  ẋ = [0 1; −1 0] x

Show that its solution is

  x(t) = [cos t  sin t; −sin t  cos t] x(0)

Proof: x(t) = e^At x(0) with A = [0 1; −1 0], whose eigenvalues are j and −j. Let h(λ) = β0 + β1 λ. If f(λ) = e^(λt), then on the spectrum of A:

  h(j) = β0 + β1 j = e^(jt) = cos t + j sin t
  h(−j) = β0 − β1 j = e^(−jt) = cos t − j sin t

so β0 = cos t and β1 = sin t. Then

  e^At = h(A) = β0 I + β1 A = cos t [1 0; 0 1] + sin t [0 1; −1 0] = [cos t  sin t; −sin t  cos t]

and therefore

  x(t) = e^At x(0) = [cos t  sin t; −sin t  cos t] x(0)

4.2 Use two different methods to find the unit-step response of

  ẋ = [0 1; −2 −2] x + [1; 1] u
  y = [2 3] x

Answer: assuming the initial state is zero.

Method 1: using (3.20),

  (sI − A)^-1 = [s −1; 2 s+2]^-1 = (1/(s² + 2s + 2)) [s+2 1; −2 s]

then

  e^At = L^-1[(sI − A)^-1] = e^-t [cos t + sin t  sin t; −2 sin t  cos t − sin t]

and

  Ŷ(s) = C(sI − A)^-1 B Û(s) = (5s/(s² + 2s + 2)) · (1/s) = 5/((s + 1)² + 1)

so y(t) = 5 e^-t sin t for t ≥ 0.

Method 2:

  y(t) = C ∫_0^t e^{A(t−τ)} B u(τ) dτ = C ∫_0^t e^{A(t−τ)} dτ B = C A^-1 (e^At − I) B

With A^-1 = [−1 −0.5; 1 0], carrying out the multiplication again gives y(t) = 5 e^-t sin t for t ≥ 0.
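The closed-form answer can be confirmed by simulating the step response. A quick check (not part of the original solution):

```python
import numpy as np
from scipy.signal import lti

# Check Problem 4.2: the simulated unit-step response should match
# the closed form y(t) = 5 e^{-t} sin t.
A = [[0.0, 1.0], [-2.0, -2.0]]
B = [[1.0], [1.0]]
C = [[2.0, 3.0]]
D = [[0.0]]
t = np.linspace(0.0, 6.0, 601)
tout, y = lti(A, B, C, D).step(T=t)
assert np.allclose(y, 5.0 * np.exp(-tout) * np.sin(tout), atol=1e-4)
print("step response matches 5 e^{-t} sin t")
```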
4.3 Discretize the state equation in Problem 4.2 for T = 1 and T = π.

Answer: The discretized equation is

  x[k+1] = e^{AT} x[k] + ( ∫_0^T e^{Aτ} dτ ) B u[k]
  y[k] = C x[k] + D u[k]

For T = 1, using matlab [ad,bd] = c2d(a,b,1):

  x[k+1] = [0.5083  0.3096; −0.6191  −0.1108] x[k] + [1.0471; −0.1821] u[k]
  y[k] = [2 3] x[k]

For T = π, using matlab [ad,bd] = c2d(a,b,3.1415926):

  x[k+1] = [−0.0432  0.0000; −0.0000  −0.0432] x[k] + [1.5648; −1.0432] u[k]
  y[k] = [2 3] x[k]
4.4 Find the companion-form and modal-form equivalent equations of

  ẋ = [−2 0 0; 1 0 1; 0 −2 −2] x + [1; 0; 1] u
  y = [1 −1 0] x

Answer: Using [ab,bb,cb,db,p] = canon(a,b,c,d,'companion') we get the companion form

  ẋ = [0 0 −4; 1 0 −6; 0 1 −4] x + [1; 0; 0] u
  y = [1 −4 8] x

with transformation

  p = [1.0000  1.0000  0; 0.5000  0.5000  −0.5000; 0.2500  0  −0.2500]

Using [ab,bb,cb,db,p] = canon(a,b,c,d) we get the modal form

  ẋ = [−1 1 0; −1 −1 0; 0 0 −2] x + [−3.4641; 0; 1.4142] u
  y = [0  −0.5774  0.7071] x

with

  p = [−1.7321  −1.7321  −1.7321; 0  1.7321  0; 1.4142  0  3.4641]
4.5 Find an equivalent state equation of the equation in Problem 4.4 so that all state variables have their largest magnitudes roughly equal to the largest magnitude of the output. If all signals are required to lie inside ±10 volts and if the input is a step function with magnitude a, what is the permissible largest a?

Answer: First we use matlab to find its unit-step response. We type

  a=[-2 0 0;1 0 1;0 -2 -2]; b=[1 0 1]'; c=[1 -1 0]; d=0;
  [y,x,t]=step(a,b,c,d)
  plot(t,y,'.',t,x(:,1),'*',t,x(:,2),'-.',t,x(:,3),'--')

(step-response plot of y, x1, x2, x3 omitted)

From the figure we know max|y| = 0.55, max|x1| = 0.5, max|x2| = 1.05, and max|x3| = 0.52 for a unit-step input. Define x̄1 = x1, x̄2 = 0.5 x2, x̄3 = x3; then

  x̄̇ = [−2 0 0; 0.5 0 0.5; 0 −4 −2] x̄ + [1; 0; 1] u
  y = [1 −2 0] x̄

The largest permissible a is 10/0.55 = 18.2.

4.6 Consider

  ẋ = [λ 0; 0 λ̄] x + [b1; b̄1] u
  y = [c1 c̄1] x

where the overbar denotes complex conjugate. Verify that the equation can be transformed into

  x̄̇ = Ā x̄ + B̄ u,  y = C̄1 x̄

with

  Ā = [0 1; −λλ̄  λ + λ̄],  B̄ = [0; 1],  C̄1 = [−2 Re(λ̄ b1 c1)  2 Re(b1 c1)]

by using the transformation x = Q x̄ with

  Q = [−λ̄ b1  b1; −λ b̄1  b̄1]

Answer: Let x = Q x̄. Then

  Q x̄̇ = [λ 0; 0 λ̄] Q x̄ + [b1; b̄1] u,  y = [c1 c̄1] Q x̄

With det Q = (λ − λ̄) b1 b̄1,

  Q^-1 = (1/((λ − λ̄) b1 b̄1)) [b̄1  −b1; λ b̄1  −λ̄ b1]

so

  x̄̇ = Q^-1 [λ 0; 0 λ̄] Q x̄ + Q^-1 [b1; b̄1] u

Direct multiplication gives

  Q^-1 [λ 0; 0 λ̄] Q = [0 1; −λλ̄  λ + λ̄] = Ā

  Q^-1 [b1; b̄1] = (1/((λ − λ̄) b1 b̄1)) [b̄1 b1 − b1 b̄1; λ b̄1 b1 − λ̄ b1 b̄1] = [0; 1] = B̄

and

  C̄1 = [c1 c̄1] Q = [−c1 λ̄ b1 − c̄1 λ b̄1,  c1 b1 + c̄1 b̄1] = [−2 Re(λ̄ b1 c1)  2 Re(b1 c1)]

Note that Ā, B̄, and C̄1 are all real, since λλ̄ = |λ|² and λ + λ̄ = 2 Re λ.

4.7 Verify that the Jordan-form equation

  ẋ = [λ 1 0 0 0 0; 0 λ 1 0 0 0; 0 0 λ 0 0 0; 0 0 0 λ̄ 1 0; 0 0 0 0 λ̄ 1; 0 0 0 0 0 λ̄] x + [b1; b2; b3; b̄1; b̄2; b̄3] u
  y = [c1 c2 c3 c̄1 c̄2 c̄3] x

can be transformed into

  x̄̇ = [Ā I2 0; 0 Ā I2; 0 0 Ā] x̄ + [B̄1; B̄2; B̄3] u
  y = [C̄1 C̄2 C̄3] x̄

where Ā, B̄i, and C̄i are defined as in Problem 4.6 and I2 is the unit matrix of order 2.

Proof: Change the order of the state variables from [x1 x2 x3 x4 x5 x6] to [x1 x4 x2 x5 x3 x6]. In the new ordering, with A = diag{λ, λ̄}, the equation reads

  d/dt [x1; x4; x2; x5; x3; x6] = [A I2 0; 0 A I2; 0 0 A] [x1; x4; x2; x5; x3; x6] + [b1; b̄1; b2; b̄2; b3; b̄3] u
  y = [c1 c̄1  c2 c̄2  c3 c̄3] [x1; x4; x2; x5; x3; x6]

Now apply the transformation of Problem 4.6 blockwise, using the same 2 × 2 matrix Q on each pair of states. Each diagonal block A becomes Ā, each pair [bi; b̄i] becomes B̄i, and each pair [ci c̄i] becomes C̄i, while the superdiagonal blocks are unchanged because Q^-1 I2 Q = I2. This gives the stated form.

4.8 Are the two sets of state equations

  ẋ = [2 1 2; 0 2 2; 0 0 1] x + [1; 1; 0] u,  y = [1 −1 0] x

and

  ẋ = [2 1 1; 0 2 1; 0 0 −1] x + [1; 1; 0] u,  y = [1 −1 0] x

equivalent? Are they zero-state equivalent?

Answer: Both A matrices are upper triangular, so (sI − A)^-1 is upper triangular and only the (1,1), (1,2), and (2,2) entries matter in c(sI − A)^-1 b. For the first equation,

  ĝ1(s) = [1 −1 0](sI − A)^-1 [1; 1; 0] = 1/(s−2) + (1/(s−2)² − 1/(s−2)) = 1/(s − 2)²

The same computation for the second equation also gives

  ĝ2(s) = 1/(s − 2)²

Obviously ĝ1(s) = ĝ2(s), so they are zero-state equivalent. They are not equivalent: equivalent equations must have the same eigenvalues, but the first has eigenvalues {2, 2, 1} and the second {2, 2, −1}.


4.9 Verify that the transfer matrix in (4.33) has the following realization:

  ẋ = [−α1 Iq  Iq  0 … 0 ]     [N1    ]
      [−α2 Iq  0   Iq … 0 ]     [N2    ]
      [  ⋮                ] x + [ ⋮    ] u
      [−α(r−1) Iq  0 … Iq ]     [N(r−1)]
      [−αr Iq  0   0 … 0  ]     [Nr    ]

  y = [Iq 0 0 … 0] x

This is called the observable canonical form realization and has dimension rq. It is dual to (4.34).

Answer: Define C(sI − A)^-1 = [Z1 Z2 … Zr], where each Zi is q × q, so that

  C = [Z1 Z2 … Zr](sI − A)

Equating block columns on both sides: columns 2 through r give

  −Z(i−1) + s Zi = 0, i.e. Zi = Z(i−1)/s,  i = 2, …, r

and the first block column gives

  s Z1 + Σ_{i=1}^{r} αi Zi = Iq

Substituting Zi = Z1/s^(i−1) yields

  Z1 = (s^(r−1)/d(s)) Iq,  Z2 = (s^(r−2)/d(s)) Iq, …, Zr = (1/d(s)) Iq

where d(s) = s^r + α1 s^(r−1) + … + αr. Then

  C(sI − A)^-1 B = Σ_{i=1}^{r} Zi Ni = (1/d(s)) (s^(r−1) N1 + s^(r−2) N2 + … + Nr)

and this satisfies (4.33).
4.10 Consider the 1 × 2 proper rational matrix

  Ĝ(s) = [d1 d2] + (1/(s⁴ + α1 s³ + α2 s² + α3 s + α4)) [β11 s³ + β21 s² + β31 s + β41   β12 s³ + β22 s² + β32 s + β42]

Show that its observable canonical form realization can be reduced from Problem 4.9 as

  ẋ = [−α1 1 0 0; −α2 0 1 0; −α3 0 0 1; −α4 0 0 0] x + [β11 β12; β21 β22; β31 β32; β41 β42] u
  y = [1 0 0 0] x + [d1 d2] u

Answer: In this case r = 4 and q = 1, so Iq = 1 and

  N1 = [β11 β12],  N2 = [β21 β22],  N3 = [β31 β32],  N4 = [β41 β42]

Substituting these into the realization of Problem 4.9 gives exactly the state equation above.
4.11 Find a realization for the proper rational matrix

  Ĝ(s) = [2/(s+1)         (2s − 3)/((s+1)(s+2))]
         [(s − 2)/(s+1)   s/(s+2)              ]

Answer: Decompose Ĝ(s) into its direct-transmission part and its strictly proper part:

  Ĝ(∞) = [0 0; 1 1]

  Ĝsp(s) = [2/(s+1)    (2s−3)/((s+1)(s+2))]
           [−3/(s+1)   −2/(s+2)           ]
         = (1/(s² + 3s + 2)) ( s [2 2; −3 −2] + [4 −3; −6 −2] )

so α1 = 3, α2 = 2, N1 = [2 2; −3 −2], N2 = [4 −3; −6 −2], and the controllable-form realization (4.34) is

  ẋ = [−3 0 −2 0; 0 −3 0 −2; 1 0 0 0; 0 1 0 0] x + [1 0; 0 1; 0 0; 0 0] u
  y = [2 2 4 −3; −3 −2 −6 −2] x + [0 0; 1 1] u
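A realization can always be checked by evaluating its transfer matrix at a sample point. The sketch below (not part of the original solution) checks a 4-dimensional controllable-form realization of G(s) = [2/(s+1), (2s−3)/((s+1)(s+2)); (s−2)/(s+1), s/(s+2)] at s = 1.5:

```python
import numpy as np

# Check a realization of Problem 4.11 entrywise at a sample point s = 1.5.
A = np.array([[-3.0, 0.0, -2.0, 0.0],
              [0.0, -3.0, 0.0, -2.0],
              [1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0]])
B = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0], [0.0, 0.0]])
C = np.array([[2.0, 2.0, 4.0, -3.0],
              [-3.0, -2.0, -6.0, -2.0]])
D = np.array([[0.0, 0.0], [1.0, 1.0]])

s = 1.5
G_real = C @ np.linalg.inv(s * np.eye(4) - A) @ B + D
G_target = np.array([[2/(s+1), (2*s-3)/((s+1)*(s+2))],
                     [(s-2)/(s+1), s/(s+2)]])
assert np.allclose(G_real, G_target)
print("realization matches G(s) at s = 1.5")
```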

4.12 Find a realization for each column of Ĝ(s) in Problem 4.11 and then connect them, as shown in Fig. 4.4(a), to obtain a realization of Ĝ(s). What is the dimension of this realization? Compare this dimension with the one in Problem 4.11.

Answer: The first column is

  Ĝ1(s) = [2/(s+1); (s−2)/(s+1)] = [0; 1] + (1/(s+1)) [2; −3]

with one-dimensional realization

  ẋ1 = −x1 + u1,  yc1 = [2; −3] x1 + [0; 1] u1

The second column is

  Ĝ2(s) = [(2s−3)/((s+1)(s+2)); s/(s+2)] = [0; 1] + (1/((s+1)(s+2))) ( s [2; −2] + [−3; −2] )

with two-dimensional controllable-form realization

  ẋ2 = [−3 −2; 1 0] x2 + [1; 0] u2,  yc2 = [2 −3; −2 −2] x2 + [0; 1] u2

These two realizations can be combined as

  ẋ = [−1 0 0; 0 −3 −2; 0 1 0] x + [1 0; 0 1; 0 0] u
  y = [2 2 −3; −3 −2 −2] x + [0 0; 1 1] u

The dimension of the realization of Problem 4.12 is 3, while that of Problem 4.11 is 4.
4.13 Find a realization for each row of Ĝ(s) in Problem 4.11 and then connect them, as shown in Fig. 4.4(b), to obtain a realization of Ĝ(s). What is the dimension of this realization? Compare this dimension with the ones in Problems 4.11 and 4.12.

Answer: The first row is

  Ĝ1(s) = (1/((s+1)(s+2))) [2s+4  2s−3] = (1/(s²+3s+2)) ( s [2 2] + [4 −3] )

with two-dimensional realization

  ẋ1 = [−3 1; −2 0] x1 + [2 2; 4 −3] u1,  yr1 = [1 0] x1 + [0 0] u1

The second row is

  Ĝ2(s) = (1/((s+1)(s+2))) [−3(s+2)  −2(s+1)] + [1 1] = (1/(s²+3s+2)) ( s [−3 −2] + [−6 −2] ) + [1 1]

with two-dimensional realization

  ẋ2 = [−3 1; −2 0] x2 + [−3 −2; −6 −2] u2,  yr2 = [1 0] x2 + [1 1] u2

These two realizations can be combined as

  ẋ = [−3 1 0 0; −2 0 0 0; 0 0 −3 1; 0 0 −2 0] x + [2 2; 4 −3; −3 −2; −6 −2] u
  y = [1 0 0 0; 0 0 1 0] x + [0 0; 1 1] u

The dimension of this realization is 4, equal to that of Problem 4.11; the smallest dimension found so far is the 3 of Problem 4.12.

4.14 Find a realization for

  Ĝ(s) = [−(12s + 6)/(3s + 34)   (22s + 23)/(3s + 34)]

Answer:

  Ĝ(s) = [−4  22/3] + (1/(s + 34/3)) [130/3  −679/9]

so a one-dimensional realization is

  ẋ = −(34/3) x + [130/3  −679/9] u
  y = x + [−4  22/3] u
4.16 Find fundamental matrices and state transition matrices for

  ẋ = [0 1; 0 t] x  and  ẋ = [−1 e^2t; 0 −1] x

Answer: For the first case, ẋ2 = t x2 gives x2(t) = x2(0) e^(t²/2), and ẋ1 = x2 gives x1(t) = x1(0) + ∫_0^t x2(τ) dτ. Then

  x(0) = [1; 0]  ⇒  x(t) = [1; 0]
  x(0) = [0; 1]  ⇒  x(t) = [ ∫_0^t e^(τ²/2) dτ; e^(t²/2) ]

The two initial states are linearly independent; thus a fundamental matrix is

  X(t) = [1  ∫_0^t e^(τ²/2) dτ; 0  e^(t²/2)]

and

  Φ(t, t0) = X(t) X^-1(t0) = [1  ∫_{t0}^t e^(τ²/2) dτ; 0  e^((t² − t0²)/2)]

For the second case, ẋ2 = −x2 gives x2(t) = x2(0) e^-t, and ẋ1 = −x1 + e^2t x2 = −x1 + e^t x2(0) gives

  x1(t) = e^-t x1(0) + 0.5 (e^t − e^-t) x2(0)

  x(0) = [1; 0]  ⇒  x(t) = [e^-t; 0]
  x(0) = [0; 1]  ⇒  x(t) = [0.5(e^t − e^-t); e^-t]

The two initial states are linearly independent; thus

  X(t) = [e^-t  0.5(e^t − e^-t); 0  e^-t]

  Φ(t, t0) = X(t) X^-1(t0) = [e^-(t−t0)  0.5(e^(t+t0) − e^(3t0−t)); 0  e^-(t−t0)]

4.17 Show

  ∂Φ(t0, t)/∂t = −Φ(t0, t) A(t)

Proof: We know ∂Φ(t, t0)/∂t = A(t) Φ(t, t0) and Φ(t, t0) Φ(t0, t) = I. Differentiating the identity with respect to t:

  0 = ∂/∂t [Φ(t, t0) Φ(t0, t)] = [∂Φ(t, t0)/∂t] Φ(t0, t) + Φ(t, t0) [∂Φ(t0, t)/∂t]
    = A(t) Φ(t, t0) Φ(t0, t) + Φ(t, t0) [∂Φ(t0, t)/∂t]
    = A(t) + Φ(t, t0) [∂Φ(t0, t)/∂t]

Premultiplying by Φ^-1(t, t0) = Φ(t0, t) gives

  ∂Φ(t0, t)/∂t = −Φ(t0, t) A(t)

4.18 Given

  A(t) = [a11(t) a12(t); a21(t) a22(t)]

show that

  det Φ(t, t0) = exp( ∫_{t0}^t (a11(τ) + a22(τ)) dτ )

Proof: Write Φ = [Φ11 Φ12; Φ21 Φ22]. From ∂Φ/∂t = A(t)Φ,

  Φ̇11 = a11 Φ11 + a12 Φ21,  Φ̇12 = a11 Φ12 + a12 Φ22
  Φ̇21 = a21 Φ11 + a22 Φ21,  Φ̇22 = a21 Φ12 + a22 Φ22

Then

  d/dt (det Φ) = d/dt (Φ11 Φ22 − Φ21 Φ12) = Φ̇11 Φ22 + Φ11 Φ̇22 − Φ̇21 Φ12 − Φ21 Φ̇12
  = (a11 Φ11 + a12 Φ21)Φ22 + Φ11(a21 Φ12 + a22 Φ22) − (a21 Φ11 + a22 Φ21)Φ12 − Φ21(a11 Φ12 + a12 Φ22)
  = (a11 + a22)(Φ11 Φ22 − Φ21 Φ12) = (a11 + a22) det Φ

so

  det Φ(t, t0) = c · exp( ∫_{t0}^t (a11(τ) + a22(τ)) dτ )

With Φ(t0, t0) = I we get c = 1; hence det Φ(t, t0) = exp( ∫_{t0}^t (a11(τ) + a22(τ)) dτ ).

4.20 Find the state transition matrix of

  ẋ = [sin t  0; 0  cos t] x

Answer:

  ẋ1 = (sin t) x1  ⇒  x1(t) = x1(0) e^(1 − cos t)
  ẋ2 = (cos t) x2  ⇒  x2(t) = x2(0) e^(sin t)

Taking the linearly independent initial states [1; 0] and [0; 1], a fundamental matrix is

  X(t) = [e^(1 − cos t)  0; 0  e^(sin t)]

and

  Φ(t, t0) = X(t) X^-1(t0) = [e^(cos t0 − cos t)  0; 0  e^(sin t − sin t0)]

4.21 Verify that X(t) = e^At C e^Bt is the solution of

  Ẋ = AX + XB,  X(0) = C

Verification: First note (d/dt) e^At = A e^At = e^At A. Then

  Ẋ = (d/dt)(e^At C e^Bt) = (A e^At) C e^Bt + e^At C (e^Bt B) = AX + XB
  X(0) = e^0 C e^0 = C

Q.E.D.
4.23 Find an equivalent time-invariant state equation of the equation in Problem 4.20.

Answer: Let

  P(t) = X^-1(t) = [e^(cos t − 1)  0; 0  e^(−sin t)]

and x̄(t) = P(t) x(t). Since Ṗ = −X^-1 Ẋ X^-1 = −P A(t), we get

  Ā = (Ṗ + P A) P^-1 = 0

so the equivalent time-invariant equation is x̄̇ = 0 · x̄.

4.24 Transform a time-invariant (A, B, C) into (0, B̄(t), C̄(t)) by a time-varying equivalence transformation.

Answer: Since (A, B, C) is time-invariant, X(t) = e^At is a fundamental matrix, with Φ(t, t0) = X(t) X^-1(t0) = e^(A(t − t0)). Choose P(t) = X^-1(t) = e^(−At). Then Ā = (Ṗ + P A) P^-1 = 0 and

  B̄(t) = X^-1(t) B = e^(−At) B
  C̄(t) = C X(t) = C e^At
4.25 Find a time-varying realization and a time-invariant realization of the impulse response g(t) = t² e^(λt).

Answer:

  g(t − τ) = (t − τ)² e^(λ(t−τ)) = (t² − 2tτ + τ²) e^(λt) e^(−λτ)
           = [t² e^(λt)  −2t e^(λt)  e^(λt)] [e^(−λτ); τ e^(−λτ); τ² e^(−λτ)]

so a three-dimensional time-varying realization is

  ẋ(t) = 0 · x(t) + [e^(−λt); t e^(−λt); t² e^(−λt)] u(t)
  y(t) = [t² e^(λt)  −2t e^(λt)  e^(λt)] x(t)

Using ĝ(s) = L[t² e^(λt)] = 2/(s − λ)³ = 2/(s³ − 3λs² + 3λ²s − λ³) and (4.41), we get the time-invariant realization

  ẋ = [3λ  −3λ²  λ³; 1 0 0; 0 1 0] x + [1; 0; 0] u
  y = [0 0 2] x

4.26 Find a realization of g(t, τ) = sin t (e^(−(t−τ))) cos τ. Is it possible to find a time-invariant state equation realization?

Answer: Clearly

  g(t, τ) = sin t · e^(−(t−τ)) · cos τ = (sin t e^-t)(e^τ cos τ) = M(t) N(τ)

so, using Theorem 4.5, a one-dimensional time-varying realization is

  ẋ(t) = 0 · x(t) + e^t cos t · u(t)
  y(t) = sin t e^-t x(t)

But g(t, τ) depends on t and τ separately, not only on their difference, i.e. g(t, τ) ≠ g(t − τ); therefore it is impossible to find a time-invariant state equation realization.

5.1 Is the network shown in Fig. 5.2 BIBO stable? If not, find a bounded input that will excite an unbounded output.

Answer: From Fig. 5.2, assuming zero initial conditions and applying the Laplace transform, we obtain

  ĝ(s) = ŷ(s)/û(s) = s/(s² + 1)

so the impulse response is

  g(t) = L^-1[s/(s² + 1)] = cos t

Because

  ∫_0^∞ |g(t)| dt = ∫_0^∞ |cos t| dt = ∞

the impulse response is not absolutely integrable, and the network shown in Fig. 5.2 is not BIBO stable. If u(t) = sin t, then

  y(t) = ∫_0^t cos(t − τ) sin τ dτ = (1/2) ∫_0^t [sin t + sin(2τ − t)] dτ = (t/2) sin t

We can see u(t) is bounded, but the output excited by this input is not bounded.
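The resonant growth can be seen numerically. A rough check via a discretized convolution (not part of the original solution):

```python
import numpy as np

# Check Problem 5.1: with g(t) = cos t and u(t) = sin t, the output
# y(t) = (t/2) sin t grows without bound.
t = np.linspace(0.0, 20.0, 4001)
dt = t[1] - t[0]
u = np.sin(t)
# Discrete approximation of the convolution integral (g * u)(t)
y = np.convolve(np.cos(t), u)[:len(t)] * dt
assert np.allclose(y, 0.5 * t * np.sin(t), atol=0.05)
assert np.abs(y).max() > 5.0     # the envelope keeps growing
print("y(t) tracks (t/2) sin t")
```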

5.2 Consider a system with an irrational transfer function ĝ(s). Show that a necessary condition for the system to be BIBO stable is that |ĝ(s)| is finite for all Re s ≥ 0.

Proof: If the system is BIBO stable, then

  ∫_0^∞ |g(t)| dt ≤ M < ∞

for some constant M. For any s with Re s ≥ 0 we have |e^(−st)| = e^(−(Re s) t) ≤ 1 for all t ≥ 0, so

  |ĝ(s)| = | ∫_0^∞ g(t) e^(−st) dt | ≤ ∫_0^∞ |g(t)| e^(−(Re s) t) dt ≤ ∫_0^∞ |g(t)| dt ≤ M < ∞

That is, |ĝ(s)| is finite for all Re s ≥ 0.

5.3 Is a system with impulse response g(t) = 1/(t + 1) BIBO stable? How about g(t) = t e^-t for t ≥ 0?

Answer: Because

  ∫_0^∞ |1/(t + 1)| dt = ln(t + 1) |_0^∞ = ∞

the system with impulse response g(t) = 1/(t + 1) is not BIBO stable. Because

  ∫_0^∞ |t e^-t| dt = ∫_0^∞ t e^-t dt = [−t e^-t − e^-t]_0^∞ = 1 < ∞

the system with impulse response g(t) = t e^-t is BIBO stable.

5.4 Is a system with transfer function ĝ(s) = e^(−2s)/(s + 1) BIBO stable?

Answer: The Laplace transform has the shifting property: if ĝ(s) = e^(−as) f̂(s), then g(t) = f(t − a). We know L^-1[1/(s + 1)] = e^-t for t ≥ 0, so the impulse response of the system is

  g(t) = e^(−(t−2)) for t ≥ 2,  g(t) = 0 for t < 2

Thus

  ∫_0^∞ |g(t)| dt = ∫_2^∞ e^(−(t−2)) dt = 1 < ∞

and the system is BIBO stable.

5.5 Show that the negative-feedback system shown in Fig. 2.5(b) is BIBO stable if and only if the gain a has a magnitude less than 1. For a = 1, find a bounded input r(t) that will excite an unbounded output.

Proof: If r(t) = δ(t), then the output is the impulse response of the system,

  g(t) = a δ(t − 1) − a² δ(t − 2) + a³ δ(t − 3) − … = Σ_{i=1}^∞ (−1)^(i+1) a^i δ(t − i)

The impulse is defined as the limit of the pulse in Fig. 3.2 and can be considered positive; thus

  ∫_0^∞ |g(t)| dt = Σ_{i=1}^∞ |a|^i = |a|/(1 − |a|) < ∞ if |a| < 1,  and = ∞ if |a| ≥ 1

which implies that the negative-feedback system shown in Fig. 2.5(b) is BIBO stable if and only if the gain a has magnitude less than 1.

For a = 1, choose r(t) = sin(πt), which is clearly bounded. Since sin(π(t − i)) = (−1)^i sin(πt), the output excited by this input is, for k < t < k + 1,

  y(t) = ∫_0^t g(τ) r(t − τ) dτ = Σ_{i=1}^{k} (−1)^(i+1) sin(π(t − i)) = Σ_{i=1}^{k} (−1)^(i+1) (−1)^i sin(πt) = −k sin(πt)

whose amplitude grows with k, so y(t) is not bounded.

5.6 Consider a system with transfer function ĝ(s) = (s − 2)/(s + 1). What are the steady-state responses excited by u(t) = 3 for t ≥ 0, and by u(t) = sin 2t for t ≥ 0?

Answer: The impulse response of the system is

  g(t) = L^-1[(s − 2)/(s + 1)] = L^-1[1 − 3/(s + 1)] = δ(t) − 3 e^-t,  t ≥ 0

and

  ∫_0^∞ |g(t)| dt ≤ ∫_0^∞ δ(t) dt + ∫_0^∞ 3 e^-t dt = 1 + 3 = 4 < ∞

so the system is BIBO stable. Using Theorem 5.2, we can readily obtain the steady-state responses:

If u(t) = 3 for t ≥ 0, then as t → ∞, y(t) approaches ĝ(0) · 3 = (−2) · 3 = −6.

If u(t) = sin 2t for t ≥ 0, then as t → ∞, y(t) approaches

  |ĝ(j2)| sin(2t + ∠ĝ(j2)) = (2√10/5) sin(2t + arctan 3) = 1.26 sin(2t + 1.25)

5.7 Consider

  ẋ = [−1 10; 0 1] x + [−2; 0] u
  y = [−2 3] x − 2u

Is it BIBO stable?

Answer: The transfer function of the system is

  ĝ(s) = C(sI − A)^-1 B + D = [−2 3] (1/((s + 1)(s − 1))) [s−1 10; 0 s+1] [−2; 0] − 2
       = 4(s − 1)/((s + 1)(s − 1)) − 2 = 4/(s + 1) − 2 = (−2s + 2)/(s + 1)

The only pole of ĝ(s) is −1, which lies inside the open left-half s-plane, so the system is BIBO stable. (The unstable eigenvalue 1 of A cancels in ĝ(s).)

5.8 Consider a discrete-time system with impulse response g[k] = k (0.8)^k for k ≥ 0. Is the system BIBO stable?

Answer: Using Σ_{k=0}^∞ k r^k = r/(1 − r)² for |r| < 1,

  Σ_{k=0}^∞ |g[k]| = Σ_{k=0}^∞ k (0.8)^k = 0.8/(1 − 0.8)² = 20 < ∞

g[k] is absolutely summable on [0, ∞), so the discrete-time system is BIBO stable.
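The series value is easy to confirm numerically (not part of the original solution; 2000 terms is far more than needed for convergence):

```python
# Check Problem 5.8: sum_{k>=0} k (0.8)^k = 0.8 / (1 - 0.8)^2 = 20,
# so the impulse response is absolutely summable (BIBO stable).
total = sum(k * 0.8**k for k in range(2000))
print(total)            # 20.0 (to machine precision)
assert abs(total - 20.0) < 1e-9
```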

5.9 Is the state equation in Problem 5.7 marginally stable? Asymptotically stable?

Answer: The characteristic polynomial is

  Δ(λ) = det(λI − A) = det [λ+1  −10; 0  λ−1] = (λ + 1)(λ − 1)

The eigenvalue 1 has positive real part; thus the equation is not marginally stable, nor asymptotically stable.

5.10 Is the homogeneous state equation

  ẋ = [−1 0 1; 0 0 0; 0 0 0] x

marginally stable? Asymptotically stable?

Answer: The characteristic polynomial is

  Δ(λ) = det(λI − A) = λ²(λ + 1)

and the minimal polynomial is ψ(λ) = λ(λ + 1). The matrix has eigenvalues 0, 0, and −1. The eigenvalue 0 is a simple root of the minimal polynomial; thus the equation is marginally stable. The equation is not asymptotically stable because the matrix has zero eigenvalues.

5.11 Is the homogeneous state equation

  ẋ = [−1 0 1; 0 0 1; 0 0 0] x

marginally stable? Asymptotically stable?

Answer: The characteristic polynomial is Δ(λ) = λ²(λ + 1) and the minimal polynomial is ψ(λ) = λ²(λ + 1). The matrix has eigenvalues 0, 0, and −1. The eigenvalue 0 is a repeated root of the minimal polynomial ψ(λ), so the equation is not marginally stable, nor asymptotically stable.
5.12 Is the discrete-time homogeneous state equation

  x[k+1] = [0.9 0 1; 0 1 0; 0 0 1] x[k]

marginally stable? Asymptotically stable?

Answer: The characteristic polynomial of the system matrix is Δ(λ) = (λ − 1)²(λ − 0.9), and the minimal polynomial is ψ(λ) = (λ − 1)(λ − 0.9).

The equation is not asymptotically stable because the matrix has eigenvalues 1, whose magnitudes equal 1.

The equation is marginally stable because the matrix has all its eigenvalues with magnitudes less than or equal to 1 and the eigenvalue 1 is a simple root of the minimal polynomial ψ(λ).

5.13

Is the discrete-time homogeneous state equation

x[k+1] = [ 0.9 0 1 ; 0 1 1 ; 0 0 1 ] x[k]

marginally stable? Asymptotically stable?

Its characteristic polynomial is Δ(λ) = (λ-1)²(λ-0.9) and its minimal polynomial is ψ(λ) = (λ-1)²(λ-0.9). The matrix has eigenvalues 1, 1, and 0.9. The eigenvalue 1, with magnitude equal to 1, is a repeated root of the minimal polynomial ψ(λ); thus the equation is not marginally stable and not asymptotically stable.

5.14

Use Theorem 5.5 to show that all eigenvalues of

A = [ 0  1 ; -0.5  -1 ]

have negative real parts.

Proof: For any given positive definite symmetric matrix

N = [ a b ; b c ],   with a > 0, c > 0, and ac - b² > 0,

the Lyapunov equation A'M + MA = -N can be written as

[ 0 -0.5 ; 1 -1 ] [ m11 m12 ; m21 m22 ] + [ m11 m12 ; m21 m22 ] [ 0 1 ; -0.5 -1 ] = -[ a b ; b c ]

that is,

[ -0.5(m12 + m21)   m11 - m12 - 0.5m22 ; m11 - m21 - 0.5m22   m12 + m21 - 2m22 ] = -[ a b ; b c ]

Thus

m11 = 1.5a + 0.25c - b,   m12 = m21 = a,   m22 = a + 0.5c

and M = [ m11 m12 ; m21 m22 ] is the unique solution of the Lyapunov equation, and M is symmetric. From

(1.5a + 0.25c)² - ac = (1/16)[(6a + c)² - 16ac] = (1/16)(36a² + c² - 4ac) = (1/16)[(2a - c)² + 32a²] ≥ 0

and ac > b², we get 1.5a + 0.25c ≥ √(ac) > |b|, so

m11 = 1.5a + 0.25c - b > 0

Moreover,

det M = m11 m22 - m12 m21 = (1.5a + 0.25c - b)(a + 0.5c) - a²
      = (1/8)(4a² + 8ac + c² - 8ab - 4bc)
      = (1/8)[(2a - 2b)² + (c - 2b)² + 8(ac - b²)] > 0

We see M is positive definite. Because for any given positive definite symmetric matrix N the Lyapunov equation A'M + MA = -N has a unique symmetric solution M and M is positive definite, all eigenvalues of A have negative real parts.
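The closed-form solution can be cross-checked with SciPy's Lyapunov solver; the values a = 2, b = 0.5, c = 1 below are an arbitrary admissible choice (a > 0, ac - b² > 0), and the snippet is illustrative rather than part of the original solution:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [-0.5, -1.0]])
a, b, c = 2.0, 0.5, 1.0                       # arbitrary N > 0 with a*c - b**2 > 0
N = np.array([[a, b], [b, c]])

# solve_continuous_lyapunov(X, Q) solves X M + M X^H = Q;
# with X = A' and Q = -N this is exactly A'M + MA = -N.
M = solve_continuous_lyapunov(A.T, -N)
M_closed = np.array([[1.5*a + 0.25*c - b, a],
                     [a, a + 0.5*c]])
print(M)
print(np.linalg.eigvalsh(M))   # both positive: M > 0
print(np.linalg.eigvals(A))    # both with negative real parts
```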

5.15

Use Theorem 5.D5 to show that all eigenvalues of the A in Problem 5.14 have magnitudes less than 1.

Proof: For any given positive definite symmetric matrix

N = [ a b ; b c ],   with a > 0, c > 0, and ac - b² > 0,

we try to find the solution of the discrete Lyapunov equation M - A'MA = N. Writing M = [ m11 m12 ; m21 m22 ], the equation gives, entry by entry,

m11 - 0.25 m22 = a,   1.5 m12 - 0.5 m22 = b,   2 m12 - m11 = c   (and m21 = m12)

whose unique symmetric solution is

m11 = (8a + 3c - 4b)/5,   m12 = m21 = (4a + 4c - 2b)/5,   m22 = (12a + 12c - 16b)/5

And we have

(8a + 3c)² = 64a² + 48ac + 9c² > 16ac > 16b²

so 8a + 3c > 4|b| and

m11 = (8a + 3c - 4b)/5 > 0

Moreover,

det M = m11 m22 - m12 m21 = (1/25)[(8a + 3c - 4b)(12a + 12c - 16b) - (4a + 4c - 2b)²]
      = (4/5)(4a² + c² + 3b² + 5ac - 8ab - 4bc)
      = (4/5)[(2a - 2b)² + (c - 2b)² + 5(ac - b²)] > 0

We see M is positive definite. Because for any given positive definite symmetric matrix N the equation M - A'MA = N has a unique symmetric solution M and M is positive definite, we can use Theorem 5.D5 to conclude that all eigenvalues of A have magnitudes less than 1.
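The discrete case can be cross-checked the same way (illustrative sketch with the same arbitrary a, b, c as before):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

A = np.array([[0.0, 1.0], [-0.5, -1.0]])
a, b, c = 2.0, 0.5, 1.0                       # arbitrary N > 0 with a*c - b**2 > 0
N = np.array([[a, b], [b, c]])

# solve_discrete_lyapunov(X, Q) solves X M X^H - M + Q = 0;
# with X = A' and Q = N this is exactly M - A'MA = N.
M = solve_discrete_lyapunov(A.T, N)
M_closed = np.array([[(8*a + 3*c - 4*b)/5, (4*a + 4*c - 2*b)/5],
                     [(4*a + 4*c - 2*b)/5, (12*a + 12*c - 16*b)/5]])
print(M)
print(np.abs(np.linalg.eigvals(A)))           # both magnitudes < 1
```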
5.16

For any distinct negative real λi and any nonzero real ai, show that the matrix

M = [ -a1²/(2λ1)        -a1a2/(λ1 + λ2)   -a1a3/(λ1 + λ3)
      -a2a1/(λ2 + λ1)   -a2²/(2λ2)        -a2a3/(λ2 + λ3)
      -a3a1/(λ3 + λ1)   -a3a2/(λ3 + λ2)   -a3²/(2λ3) ]

is positive definite.

Proof: λi, i = 1, 2, 3, are distinct negative real numbers and the ai are nonzero real numbers. We show that the three leading principal minors of M are positive.

(1)  -a1²/(2λ1) > 0, because a1 ≠ 0 and λ1 < 0.

(2)  det [ -a1²/(2λ1)  -a1a2/(λ1+λ2) ; -a2a1/(λ2+λ1)  -a2²/(2λ2) ] = a1²a2² [ 1/(4λ1λ2) - 1/(λ1+λ2)² ]

Since (λ1 + λ2)² - 4λ1λ2 = (λ1 - λ2)² > 0 and both 4λ1λ2 and (λ1 + λ2)² are positive, 1/(4λ1λ2) > 1/(λ1+λ2)², so this minor is positive.

(3)  Let μi = -λi > 0; then Mij = ai aj/(μi + μj), so M is a Cauchy-type matrix and the Cauchy determinant formula gives

det M = a1²a2²a3² det[ 1/(μi + μj) ] = a1²a2²a3² Π_{i<j} (μi - μj)² / Π_{i,j} (μi + μj) > 0

because the μi are distinct and positive.

From (1), (2), and (3), all leading principal minors of M are positive, so M is positive definite.
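Positive definiteness is easy to spot-check numerically; the λi and ai below are arbitrary admissible choices (illustrative only):

```python
import numpy as np

lam = np.array([-1.0, -2.0, -3.0])   # distinct negative reals (arbitrary choice)
a = np.array([1.0, 2.0, -1.5])       # nonzero reals (arbitrary choice)

M = -np.outer(a, a) / (lam[:, None] + lam[None, :])   # M_ij = -a_i a_j/(lam_i + lam_j)
print(np.linalg.eigvalsh(M))         # all positive, so M is positive definite
```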

5.17

A real matrix M (not necessarily symmetric) is defined to be positive definite if x'Mx > 0 for any nonzero x. Is it true that the matrix M is positive definite if all eigenvalues of M are real and positive, or if all its leading principal minors are positive? If not, how do you check its positive definiteness?

If all eigenvalues of M are real and positive, or if all its leading principal minors are positive, the matrix M may still fail to be positive definite. The following example shows this. Let

M = [ 1 8 ; 0 1 ]

Its eigenvalues 1, 1 are real and positive, and all its leading principal minors are positive; but for the nonzero x = [1 -2]' we have x'Mx = -11 < 0. So M is not positive definite.

Because x'Mx is a real number, we have x'Mx = (x'Mx)' = x'M'x, and

x'Mx = (1/2)x'Mx + (1/2)x'M'x = x'[(1/2)(M + M')]x

Thus x'Mx > 0 for all nonzero x if and only if x'[(1/2)(M + M')]x > 0 for all nonzero x, that is, M is positive definite if and only if the symmetric matrix (1/2)(M + M') is positive definite, and we can use Theorem 3.7 to check its positive definiteness.
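The counterexample and the symmetric-part test can be checked directly (illustrative sketch):

```python
import numpy as np

M = np.array([[1.0, 8.0], [0.0, 1.0]])
x = np.array([1.0, -2.0])

val = x @ M @ x                  # x'Mx = -11 < 0 although both eigenvalues of M are 1
sym_eigs = np.linalg.eigvalsh(0.5 * (M + M.T))   # eigenvalues of the symmetric part
print(val, sym_eigs)             # -11.0 and [-3, 5]: the symmetric part is indefinite
```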
5.18

Show that all eigenvalues of A have real parts less than -α < 0 if and only if, for any given positive definite symmetric matrix N, the equation

A'M + MA + 2αM = -N

has a unique symmetric solution M and M is positive definite.

Proof: The equation can be written as (A + αI)'M + M(A + αI) = -N. Let B = A + αI; then the equation becomes B'M + MB = -N. By Theorem 5.5, all eigenvalues of B have negative real parts if and only if, for any given positive definite symmetric matrix N, the equation B'M + MB = -N has a unique symmetric solution M and M is positive definite. Since B = A + αI, det(λI - B) = det(λI - A - αI) = det((λ - α)I - A); that is, the eigenvalues of B are the eigenvalues of A shifted by α. So all eigenvalues of A have real parts less than -α < 0 if and only if all eigenvalues of B have negative real parts, equivalently, if and only if for any given positive definite symmetric matrix N the equation A'M + MA + 2αM = -N has a unique symmetric solution M and M is positive definite.
5.19

Show that all eigenvalues of A have magnitudes less than ρ if and only if, for any given positive definite symmetric matrix N, the equation

ρ²M - A'MA = ρ²N

has a unique symmetric solution M and M is positive definite.

Proof: The equation can be written as M - (ρ^{-1}A)'M(ρ^{-1}A) = N. Let B = ρ^{-1}A; then det(λI - B) = det(λI - ρ^{-1}A) = ρ^{-n} det(ρλI - A); that is, the eigenvalues of B are the eigenvalues of A divided by ρ. So all eigenvalues of A have magnitudes less than ρ if and only if all eigenvalues of B have magnitudes less than 1, equivalently (Theorem 5.D5), if and only if for any given positive definite symmetric matrix N the equation ρ²M - A'MA = ρ²N has a unique symmetric solution M and M is positive definite.
5.20

Is a system with impulse response g(t, τ) = e^{-2|t| - |τ|}, for t ≥ τ, BIBO stable? How about g(t, τ) = sin(e^{-(t-τ)}) cos τ?

For the first impulse response,

∫_{t0}^{t} |e^{-2|t| - |τ|}| dτ = e^{-2|t|} ∫_{t0}^{t} e^{-|τ|} dτ ≤ ∫_{-∞}^{∞} e^{-|τ|} dτ = 2 < +∞

so the system is BIBO stable. For the second, using |sin x| ≤ |x|,

∫_{t0}^{t} |sin(e^{-(t-τ)}) cos τ| dτ ≤ ∫_{t0}^{t} e^{-(t-τ)} dτ = 1 - e^{-(t-t0)} ≤ 1 < +∞

Both systems are BIBO stable.

5.21

Consider the time-varying equation

ẋ = 2tx + u,   y = e^{-t²} x

Is the equation BIBO stable? Marginally stable? Asymptotically stable?

The homogeneous equation ẋ = 2tx has solution x(t) = x(t0)e^{t² - t0²}, so X(t) = e^{t²} is a fundamental matrix and the state transition matrix is Φ(t, t0) = e^{t² - t0²}. The impulse response is

g(t, τ) = C(t)Φ(t, τ)B(τ) = e^{-t²} e^{t² - τ²} · 1 = e^{-τ²}

and

∫_{t0}^{t} |g(t, τ)| dτ = ∫_{t0}^{t} e^{-τ²} dτ ≤ ∫_{-∞}^{∞} e^{-τ²} dτ = √π < +∞

so the equation is BIBO stable. However, |Φ(t, t0)| = e^{t² - t0²} is not bounded in t ≥ t0; there does not exist a finite constant M such that |Φ(t, t0)| ≤ M < +∞. So the equation is not marginally stable and is not asymptotically stable.
5.22

Show that the equation in Problem 5.21 can be transformed, by using x̄ = p(t)x with p(t) = e^{-t²}, into

x̄' = 0 · x̄ + e^{-t²} u,   y = x̄

Is the transformed equation BIBO stable? Marginally stable? Asymptotically stable? Is the transformation a Lyapunov transformation?

Proof: We have found a fundamental matrix of the equation in Problem 5.21: X(t) = e^{t²}. Following Theorem 4.3, let P(t) = X^{-1}(t) = e^{-t²} and x̄ = P(t)x. Then

Ā(t) = 0,   B̄(t) = P(t)B(t) = e^{-t²},   C̄(t) = C(t)P^{-1}(t) = e^{-t²} e^{t²} = 1,   D̄(t) = D(t) = 0

and the equation is transformed into x̄' = 0 · x̄ + e^{-t²}u, y = x̄. The impulse response is invariant under equivalence transformation, and so is BIBO stability: the equation is BIBO stable. From x̄' = 0 we get x̄(t) = x̄(0); X̄(t) = 1 is a fundamental matrix and the state transition matrix is Φ̄(t, t0) = 1. Since |Φ̄(t, t0)| = 1 ≤ 1 < +∞, the transformed equation is marginally stable; since Φ̄(t, t0) does not approach 0 as t → ∞, it is not asymptotically stable. The transformed equation is marginally stable while the original equation is not, so from Theorem 5.7 the transformation is not a Lyapunov transformation (indeed P^{-1}(t) = e^{t²} is unbounded).
5.23

Is the homogeneous equation

ẋ = [ -1  0 ; e^{-3t}  0 ] x,   for t0 ≥ 0

marginally stable? Asymptotically stable?

From ẋ1 = -x1 we get x1(t) = x1(0)e^{-t}, and then ẋ2 = e^{-3t}x1 = x1(0)e^{-4t} gives

x2(t) = -(1/4)x1(0)e^{-4t} + x2(0) + (1/4)x1(0)

Thus

X(t) = [ e^{-t}  0 ; -(1/4)e^{-4t}  1 ]

is a fundamental matrix of the equation, and the state transition matrix is

Φ(t, t0) = X(t)X^{-1}(t0) = [ e^{-(t-t0)}  0 ; (1/4)e^{-3t0} - (1/4)e^{-4t+t0}  1 ]

For all t ≥ t0 ≥ 0 we have 0 < e^{-(t-t0)} ≤ 1, 0 < (1/4)e^{-3t0} ≤ 1/4, and 0 < (1/4)e^{-4t+t0} ≤ 1/4, so every entry of Φ(t, t0) is bounded and ||Φ(t, t0)|| ≤ 5/4 < +∞: the equation is marginally stable. Because the (2,2) entry of Φ(t, t0) equals 1 and does not approach zero as t → ∞, the equation is not asymptotically stable.

6.1

Is the state equation

ẋ = [ 0 1 0 ; 0 0 1 ; -1 -3 -3 ] x + [ 0 ; 0 ; 1 ] u,   y = [1 2 1] x

controllable? Observable?

ρ([B AB A²B]) = ρ( [ 0 0 1 ; 0 1 -3 ; 1 -3 6 ] ) = 3

thus the state equation is controllable. Since cA = [1 2 1]A = [-1 -2 -1] = -c, every block row of the observability matrix is a multiple of c, so

ρ( [C ; CA ; CA²] ) = 1 < 3

but it is not observable.
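The two rank computations can be reproduced with NumPy (illustrative sketch, not part of the original solution):

```python
import numpy as np

A = np.array([[0, 1, 0], [0, 0, 1], [-1, -3, -3]])
b = np.array([[0], [0], [1]])
c = np.array([[1, 2, 1]])

ctrb = np.hstack([b, A @ b, A @ A @ b])
obsv = np.vstack([c, c @ A, c @ A @ A])
print(np.linalg.matrix_rank(ctrb), np.linalg.matrix_rank(obsv))  # 3 1
```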

6.2

Is the state equation

ẋ = [ 0 1 0 ; 0 0 1 ; 0 2 -1 ] x + [ 0 1 ; 1 0 ; 0 0 ] u,   y = [1 0 1] x

controllable? Observable?

ρ(B) = 2 and n - ρ(B) = 3 - 2 = 1, so using the corollary it suffices to check [B AB]:

ρ([B AB]) = ρ( [ 0 1 1 0 ; 1 0 0 0 ; 0 0 2 0 ] ) = 3

Thus the state equation is controllable.

ρ( [C ; CA ; CA²] ) = ρ( [ 1 0 1 ; 0 3 -1 ; 0 -2 4 ] ) = 3

Thus the state equation is observable.

6.3

Is it true that the rank of [B AB ... A^{n-1}B] equals the rank of [AB A²B ... A^n B]? If not, under what condition will it be true?

It is not true in general. For example, A = 0 and B ≠ 0 give ρ([B AB ... A^{n-1}B]) = ρ(B) > 0 while ρ([AB A²B ... A^n B]) = 0. If A is nonsingular, then

ρ([AB A²B ... A^n B]) = ρ(A[B AB ... A^{n-1}B]) = ρ([B AB ... A^{n-1}B])

(see Equation (3.62)), so then it is true.

6.4

Show that the state equation

ẋ = [ A11 A12 ; A21 A22 ] x + [ B1 ; 0 ] u

is controllable if and only if the pair (A22, A21) is controllable. (Here A11 is n1 × n1 and A22 is n2 × n2; the proof sketch below assumes B1 is n1 × n1 and nonsingular.)

Proof (sketch, via the PBH rank test): The equation is controllable if and only if

ρ [ A11 - λI  A12  B1 ; A21  A22 - λI  0 ] = n1 + n2   for every λ

Since B1 is nonsingular, column operations that add multiples of the B1-columns to the state columns can zero out the blocks A11 - λI and A12 without affecting the bottom block row, leaving

[ 0  0  B1 ; A21  A22 - λI  0 ]

whose rank is n1 + ρ([ A21  A22 - λI ]). Thus the equation is controllable if and only if ρ([ A21  A22 - λI ]) = n2 for every λ, which is exactly the PBH condition for controllability of the pair (A22, A21).

6.5

Find a state equation to describe the network shown in Fig. 6.1 and then check its controllability and observability.

The state variables x1 and x2 are chosen as shown; then we have

ẋ1 + x1 = u,   ẋ2 + x2 = 0,   y = x2 + 2u

thus a state equation describing the network can be expressed as

ẋ = [ -1 0 ; 0 -1 ] x + [ 1 ; 0 ] u,   y = [0 1] x + 2u

Checking controllability and observability:

ρ([B AB]) = ρ( [ 1 -1 ; 0 0 ] ) = 1 < 2
ρ([C ; CA]) = ρ( [ 0 1 ; 0 -1 ] ) = 1 < 2

The equation is neither controllable nor observable.

6.6

Find the controllability index and observability index of the state equations in Problems 6.1 and 6.2.

The state equation in Problem 6.1 is controllable with n = 3 and p = ρ(B) = 1, and for a controllable single-input equation the controllability index is μ = n = 3. The equation is not observable; since cA = -c, the rank of the observability matrix stops increasing after the first block row, ρ([C ; CA]) = ρ(C) = 1, so ν = 1.

The state equation in Problem 6.2 is controllable and observable with n = 3, p = ρ(B) = 2, and q = ρ(C) = 1. The bounds n/p ≤ μ ≤ n - p + 1 give 3/2 ≤ μ ≤ 2, and since ρ([B AB]) = 3 the controllability index is μ = 2. The bounds n/q ≤ ν ≤ n - q + 1 give 3 ≤ ν ≤ 3, so the observability index is ν = 3.

The controllability index and observability index are 3 and 1 for the equation in Problem 6.1, and 2 and 3 for the equation in Problem 6.2, respectively.

6.7

What is the controllability index of the state equation ẋ = Ax + Iu, where I is the unit matrix?

Solution: C = [B AB A²B ... A^{n-1}B] = [I A A² ... A^{n-1}] is an n × n² matrix, so ρ(C) ≤ n; and clearly the first n columns of C are linearly independent, so ρ(C) = n and μi = 1 for i = 1, 2, ..., n. The controllability index of ẋ = Ax + Iu is μ = max(μ1, μ2, ..., μn) = 1.

6.8

Reduce the state equation

ẋ = [ 1 4 ; 4 1 ] x + [ 1 ; -1 ] u,   y = [1 -1] x

to a controllable one. Is the reduced equation observable?

Solution: Because

ρ(C) = ρ([B AB]) = ρ( [ 1 -3 ; -1 3 ] ) = 1 < 2

the state equation is not controllable. Choosing

Q = P^{-1} = [ 1 1 ; -1 0 ]

and letting x̄ = Px, we have

Ā = PAP^{-1} = [ 0 -1 ; 1 1 ] [ 1 4 ; 4 1 ] [ 1 1 ; -1 0 ] = [ -3 -4 ; 0 5 ]
B̄ = PB = [ 0 -1 ; 1 1 ] [ 1 ; -1 ] = [ 1 ; 0 ]
C̄ = CP^{-1} = [1 -1] [ 1 1 ; -1 0 ] = [2 1]

thus the state equation can be reduced to the controllable subequation

x̄c' = -3 x̄c + u,   y = 2 x̄c

and this reduced equation is observable (its output matrix, 2, is nonzero).
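The reduction can be sanity-checked by comparing transfer functions; the helper g below is my own shorthand, and the snippet is illustrative rather than part of the original solution:

```python
import numpy as np

A = np.array([[1.0, 4.0], [4.0, 1.0]])
b = np.array([[1.0], [-1.0]])
c = np.array([[1.0, -1.0]])

def g(s, A, b, c):
    """Evaluate the transfer function c (sI - A)^{-1} b at the point s."""
    return (c @ np.linalg.solve(s * np.eye(A.shape[0]) - A, b))[0, 0]

print(np.linalg.matrix_rank(np.hstack([b, A @ b])))   # 1: not controllable
print(g(1.0, A, b, c), 2.0 / (1.0 + 3.0))             # both 0.5: matches 2/(s+3)
```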

6.9

Reduce the state equation in Problem 6.5 to a controllable and observable equation.

Solution: The state equation in Problem 6.5 is

ẋ = [ -1 0 ; 0 -1 ] x + [ 1 ; 0 ] u,   y = [0 1] x + 2u

It is neither controllable nor observable. From the form of the state equation we can readily see that x2 is not affected by the input, so the reduced controllable state equation is

x̄c' = -x̄c + u,   y = 0 · x̄c + 2u

The output is independent of x̄c, so the equation can be further reduced to y = 2u.

6.10

Reduce the state equation

ẋ = [ λ1 1  0  0  0
      0  λ1 1  0  0
      0  0  λ1 0  0
      0  0  0  λ2 1
      0  0  0  0  λ2 ] x + [ 0 ; 1 ; 0 ; 0 ; 1 ] u,   y = [0 1 1 0 1] x

to a controllable and observable equation.

Solution: The state equation is in Jordan form, and the state variables associated with λ1 are independent of the state variables associated with λ2; thus we can decompose the state equation into two independent subequations, reduce each to a controllable and observable equation, and then combine the two results into one equation. The decomposed subequations are

ẋ1 = [ λ1 1 0 ; 0 λ1 1 ; 0 0 λ1 ] x1 + [ 0 ; 1 ; 0 ] u,   y1 = [0 1 1] x1
ẋ2 = [ λ2 1 ; 0 λ2 ] x2 + [ 0 ; 1 ] u,   y2 = [0 1] x2,   with y = y1 + y2

For the first subequation, the last entry of b1 is zero, so ρ([b1 A1b1 A1²b1]) = 2 < 3: it is not controllable and can be reduced to a two-dimensional controllable equation. Its transfer function is

ĝ1(s) = c1(sI - A1)^{-1}b1 = 1/(s - λ1)

of degree 1, so the two-dimensional controllable equation is not observable and reduces further to the controllable and observable equation

x̄1co' = λ1 x̄1co + u,   y1 = x̄1co

For the second subequation, ρ([b2 A2b2]) = 2, so it is controllable; but

ρ([c2 ; c2A2]) = ρ( [ 0 1 ; 0 λ2 ] ) = 1 < 2

so it is not observable. Its transfer function is ĝ2(s) = 1/(s - λ2), and it reduces to

x̄2co' = λ2 x̄2co + u,   y2 = x̄2co

Combining the two controllable and observable equations, we obtain the reduced controllable and observable state equation of the original equation:

x̄co' = [ λ1 0 ; 0 λ2 ] x̄co + [ 1 ; 1 ] u,   y = [1 1] x̄co

6.11

Consider the n-dimensional state equation

ẋ = Ax + Bu,   y = Cx + Du

The rank of its controllability matrix is assumed to be n1 < n. Let Q1 be an n × n1 matrix whose columns are any n1 linearly independent columns of the controllability matrix. Let P1 be an n1 × n matrix such that P1Q1 = I_{n1}, where I_{n1} is the unit matrix of order n1. Show that the following n1-dimensional state equation

x̄1' = P1AQ1 x̄1 + P1B u,   y = CQ1 x̄1 + Du

is controllable and has the same transfer matrix as the original state equation.

Proof: Extend Q1 to a nonsingular matrix Q = [Q1 Q2] and let

P = Q^{-1} = [ P1 ; P2 ]

where P1 is an n1 × n matrix and P2 is an (n - n1) × n one. We have QP = Q1P1 + Q2P2 = I_n and

PQ = [ P1Q1  P1Q2 ; P2Q1  P2Q2 ] = [ I_{n1}  0 ; 0  I_{n-n1} ] = I_n

that is, P1Q1 = I_{n1} and P2Q2 = I_{n-n1}. The equivalence transformation x̄ = Px transforms the original state equation into

[ x̄c' ; x̄c̄' ] = [ P1AQ1  P1AQ2 ; P2AQ1  P2AQ2 ] [ x̄c ; x̄c̄ ] + [ P1B ; P2B ] u
             = [ Āc  Ā12 ; 0  Āc̄ ] [ x̄c ; x̄c̄ ] + [ B̄c ; 0 ] u
y = [ CQ1  CQ2 ] [ x̄c ; x̄c̄ ] + Du = [ C̄c  C̄c̄ ] [ x̄c ; x̄c̄ ] + Du

where Āc = P1AQ1 is n1 × n1, B̄c = P1B, and C̄c = CQ1. The zero blocks appear because the columns of Q1 span the range of the controllability matrix, an A-invariant subspace that contains the range of B. This is the canonical decomposition into a controllable and an uncontrollable part, so the reduced n1-dimensional state equation

x̄1' = P1AQ1 x̄1 + P1B u,   y = CQ1 x̄1 + Du

is controllable and has the same transfer matrix as the original state equation.

6.12

In Problem 6.11 the reduction procedure reduces to solving for P1 in P1Q1 = I. How do you solve P1?

Solution: From the proof of Problem 6.11 we can solve P1 in this way. With

ρ([B AB ... A^{n-1}B]) = n1 < n

form the n × n matrix Q = [q1 ... q_{n1} ... q_n], where the first n1 columns are any n1 linearly independent columns of the controllability matrix and the remaining columns are chosen arbitrarily so that Q is nonsingular. Let

P = Q^{-1} = [ P1 ; P2 ]

where P1 is the matrix formed by the first n1 rows of Q^{-1}; then P1Q1 = I_{n1}.

6.13

Develop a similar statement as in Problem 6.11 for an unobservable state equation.

Solution: Consider the n-dimensional state equation

ẋ = Ax + Bu,   y = Cx + Du

The rank of its observability matrix is assumed to be n1 < n. Let P1 be an n1 × n matrix whose rows are any n1 linearly independent rows of the observability matrix. Let Q1 be an n × n1 matrix such that P1Q1 = I_{n1}, where I_{n1} is the unit matrix of order n1. Then the following n1-dimensional state equation

x̄1' = P1AQ1 x̄1 + P1B u,   y = CQ1 x̄1 + Du

is observable and has the same transfer matrix as the original state equation.

6.14

Is the given Jordan-form state equation controllable and observable?

Solution: For a Jordan-form state equation, controllability is determined by the rows of B corresponding to the last row of each Jordan block, and observability by the columns of C corresponding to the first column of each Jordan block, collected separately for each distinct eigenvalue (Theorem 6.8). For the three Jordan blocks associated with the eigenvalue 2,

ρ( [ 2 1 1 ; 1 1 1 ; 3 2 1 ] ) = 3

and for the two blocks associated with the eigenvalue 1,

ρ( [ 1 0 1 ; 1 0 0 ] ) = 2

so in both cases the relevant rows of B are linearly independent and the state equation is controllable. However, the columns of C corresponding to the first columns of the three Jordan blocks associated with the eigenvalue 2 give

ρ( [ 2 1 3 ; 1 1 2 ; 0 1 1 ] ) = 2 < 3

so the state equation is not observable.
6.15

Is it possible to find a set of bij and a set of cij such that the state equation

ẋ = [ λ1 1  0  0  0
      0  λ1 0  0  0
      0  0  λ1 1  0
      0  0  0  λ1 0
      0  0  0  0  λ1 ] x + [ b11 b12 ; b21 b22 ; b31 b32 ; b41 b42 ; b51 b52 ] u

y = [ c11 c12 c13 c14 c15 ; c21 c22 c23 c24 c25 ; c31 c32 c33 c34 c35 ] x

is controllable? Observable?

Solution: It is impossible to find a set of bij such that the state equation is controllable, because the matrix has three Jordan blocks associated with the same eigenvalue λ1 and the rows of B corresponding to the last rows of these blocks satisfy

ρ( [ b21 b22 ; b41 b42 ; b51 b52 ] ) ≤ 2 < 3

It is possible to find a set of cij such that the state equation is observable: the equation is observable if and only if

ρ( [ c11 c13 c15 ; c21 c23 c25 ; c31 c33 c35 ] ) = 3

and this can be achieved, for example, by choosing these nine entries to form the unit matrix I3.

6.16

Consider the state equation

ẋ = [ λ1 0   0   0   0
      0  α1  β1  0   0
      0  -β1 α1  0   0
      0  0   0   α2  β2
      0  0   0   -β2 α2 ] x + [ b1 ; b11 ; b12 ; b21 ; b22 ] u

y = [ c1 c11 c12 c21 c22 ] x

It is the modal form discussed in (4.28); it has one real eigenvalue and two pairs of complex conjugate eigenvalues, and it is assumed that they are distinct. Show that the state equation is controllable if and only if b1 ≠ 0 and, for i = 1, 2, bi1 ≠ 0 or bi2 ≠ 0; and that it is observable if and only if c1 ≠ 0 and, for i = 1, 2, ci1 ≠ 0 or ci2 ≠ 0.

Proof: Controllability and observability are invariant under any equivalence transformation, so we introduce a nonsingular matrix P that transforms the equation into a diagonal (Jordan) form with the eigenvalues αi ± jβi:

P = [ 1 0    0     0    0
      0 0.5  -0.5j 0    0
      0 0.5  0.5j  0    0
      0 0    0     0.5  -0.5j
      0 0    0     0.5  0.5j ]

With this P,

Ā = PAP^{-1} = diag( λ1, α1 + jβ1, α1 - jβ1, α2 + jβ2, α2 - jβ2 )
B̄ = PB = [ b1 ; 0.5(b11 - jb12) ; 0.5(b11 + jb12) ; 0.5(b21 - jb22) ; 0.5(b21 + jb22) ]
C̄ = CP^{-1} = [ c1   c11 - jc12   c11 + jc12   c21 - jc22   c21 + jc22 ]

Using Theorem 6.8, since the eigenvalues are distinct, the state equation is controllable if and only if every entry of B̄ is nonzero:

b1 ≠ 0   and   0.5(bi1 ∓ jbi2) ≠ 0 for i = 1, 2

and 0.5(bi1 ∓ jbi2) ≠ 0 holds if and only if bi1 ≠ 0 or bi2 ≠ 0. Similarly, the state equation is observable if and only if every entry of C̄ is nonzero, equivalently if and only if c1 ≠ 0 and ci1 ≠ 0 or ci2 ≠ 0 for i = 1, 2.

6.17

Find two- and three-dimensional state equations to describe the network shown in Fig. 6.12, and discuss their controllability and observability.

Solution: With the state variables chosen as shown, the network can be described by

ẋ1 = -(2/11)x1 + (2/11)u
ẋ2 = -(3/22)x1 + (3/22)u
ẋ3 = ẋ1 - ẋ2 = -(1/22)x1 + (1/22)u
y = x3 = x1 - x2

So a two-dimensional state equation describing the network is

ẋ = [ -2/11 0 ; -3/22 0 ] x + [ 2/11 ; 3/22 ] u,   y = [1 -1] x

It is not controllable, because Ab = -(2/11)b implies

ρ([b Ab]) = 1 < 2

However, it is observable, because

ρ([C ; CA]) = ρ( [ 1 -1 ; -1/22 0 ] ) = 2

We can also find a three-dimensional state equation describing the network:

ẋ = [ -2/11 0 0 ; -3/22 0 0 ; -1/22 0 0 ] x + [ 2/11 ; 3/22 ; 1/22 ] u,   y = [0 0 1] x

and it is neither controllable nor observable, because Ab = -(2/11)b gives

ρ([B AB A²B]) = 1 < 3

and

ρ([C ; CA ; CA²]) = 2 < 3
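The rank claims for the three-dimensional description can be verified numerically (the coefficients are the ones derived above; illustrative sketch only):

```python
import numpy as np

A = np.array([[-2/11, 0, 0], [-3/22, 0, 0], [-1/22, 0, 0]])
b = np.array([[2/11], [3/22], [1/22]])
c = np.array([[0.0, 0.0, 1.0]])

ctrb = np.hstack([b, A @ b, A @ A @ b])
obsv = np.vstack([c, c @ A, c @ A @ A])
print(np.linalg.matrix_rank(ctrb), np.linalg.matrix_rank(obsv))  # 1 2
```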

6.18

Check controllability and observability of the state equation obtained in Problem 2.19. Can you give a physical interpretation directly from the network?

Solution: The state equation obtained in Problem 2.19 is

ẋ = [ -1 0 0 ; 0 0 1 ; 0 -1 -1 ] x + [ 1 ; 1 ; 0 ] u,   y = [0 1 0] x

It is controllable but not observable, because

ρ([B AB A²B]) = ρ( [ 1 -1 1 ; 1 0 -1 ; 0 -1 1 ] ) = 3
ρ([C ; CA ; CA²]) = ρ( [ 0 1 0 ; 0 0 1 ; 0 -1 -1 ] ) = 2 < 3

A physical interpretation from the network: if u = 0, x1(0) = a ≠ 0, and x2(0) = x3(0) = 0, then y ≡ 0. That is, any x(0) = [a 0 0]' with u(t) ≡ 0 yields the same output y(t) ≡ 0; thus there is no way to determine the initial state uniquely, and the state equation describing the network is not observable. The input has an effect on all three state variables, so the state equation describing the network is controllable.
6.19

Consider the continuous-time state equation in Problem 4.2 and its discretized equations in Problem 4.3 with sampling period T = 1 and T = π. Discuss controllability and observability of the discretized equations.

Solution: The continuous-time state equation in Problem 4.2 is

ẋ = [ 0 1 ; -2 -2 ] x + [ 1 ; 1 ] u,   y = [2 3] x

The discretized equation is

x[k+1] = e^{-T} [ cos T + sin T   sin T ; -2 sin T   cos T - sin T ] x[k] + Bd u[k],   y[k] = [2 3] x[k]

For T = 1:  Ad = [ 0.5083 0.3096 ; -0.6191 -0.1108 ],  Bd = [ 1.0471 ; -0.1821 ]
For T = π:  Ad = [ -0.0432 0 ; 0 -0.0432 ],  Bd = [ 1.5648 ; -1.0432 ]

The continuous-time state equation is controllable and observable, and the system it describes is single-input single-output, so the discretized equation is controllable and observable if and only if

|Im(λi - λj)| ≠ 2πm/T   for m = 1, 2, ...   whenever Re(λi - λj) = 0

The eigenvalues of A are -1 + j and -1 - j, so |Im(λi - λj)| = 2. For T = 1, 2πm ≠ 2 for every m, so the discretized equation is controllable and observable. For T = π, 2πm/T = 2m equals 2 at m = 1, so the discretized equation is neither controllable nor observable (indeed Ad = -e^{-π}I, so [Bd  AdBd] has rank 1).
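The discretization and the loss of controllability at T = π can be reproduced with SciPy (illustrative sketch; the variable names are mine):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -2.0]])   # eigenvalues -1 + j and -1 - j
b = np.array([[1.0], [1.0]])

ranks = {}
for T in (1.0, np.pi):
    Ad = expm(A * T)
    Bd = np.linalg.solve(A, (Ad - np.eye(2)) @ b)   # Bd = A^{-1}(Ad - I)b
    ranks[T] = np.linalg.matrix_rank(np.hstack([Bd, Ad @ Bd]), tol=1e-8)

print(ranks)                               # rank 2 at T = 1, rank 1 at T = pi
print(expm(A * np.pi))                     # equals -e^{-pi} I
```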

6.20

Check controllability and observability of

ẋ = [ 0 1 ; 0 -t ] x + [ 0 ; 1 ] u,   y = [0 1] x

Solution: For controllability, with M0 = B = [ 0 ; 1 ],

M1 = -A(t)M0 + d/dt M0 = [ -1 ; t ]

The determinant of [M0 M1] = [ 0 -1 ; 1 t ] is 1, which is nonzero for all t; thus the state equation is controllable at every t.

For observability, the state transition matrix is

Φ(t, t0) = [ 1   ∫_{t0}^{t} e^{0.5(t0² - τ²)} dτ ; 0   e^{-0.5(t² - t0²)} ]

and

C(t)Φ(t, t0) = [ 0   e^{-0.5(t² - t0²)} ]

so

W0(t0, t1) = ∫_{t0}^{t1} [ 0 ; e^{-0.5(τ² - t0²)} ] [ 0   e^{-0.5(τ² - t0²)} ] dτ = [ 0 0 ; 0 ∫_{t0}^{t1} e^{-(τ² - t0²)} dτ ]

We see W0(t0, t1) is singular for all t0 and t1; thus the state equation is not observable at any t0.

6.21

Check controllability and observability of

ẋ = [ 0 0 ; 0 -1 ] x + [ 1 ; e^{-t} ] u,   y = [0 e^{-t}] x

Solution: x1(t) = x1(0) and x2(t) = x2(0)e^{-t}, so the state transition matrix is

Φ(t, t0) = X(t)X^{-1}(t0) = [ 1 0 ; 0 e^{-t} ] [ 1 0 ; 0 e^{t0} ] = [ 1 0 ; 0 e^{-(t - t0)} ]

and

Φ(t1, τ)B(τ) = [ 1 0 ; 0 e^{-(t1 - τ)} ] [ 1 ; e^{-τ} ] = [ 1 ; e^{-t1} ]

which is constant in τ. We compute

Wc(t0, t1) = ∫_{t0}^{t1} [ 1 ; e^{-t1} ] [ 1   e^{-t1} ] dτ = (t1 - t0) [ 1  e^{-t1} ; e^{-t1}  e^{-2t1} ]

Its determinant is identically zero for all t0 and t1, so the state equation is not controllable at any t0. Similarly,

C(τ)Φ(τ, t0) = [ 0   e^{-τ} e^{-(τ - t0)} ] = [ 0   e^{-2τ + t0} ]

W0(t0, t1) = ∫_{t0}^{t1} [ 0 0 ; 0 e^{-4τ + 2t0} ] dτ

Its determinant is identically zero for all t0 and t1, so the state equation is not observable at any t0.


6.22

Show that (A(t), B(t)) is controllable at t0 if and only if (-A'(t), B'(t)) is observable at t0.

Proof: Let Φ(t, t0) be the state transition matrix of ẋ = A(t)x. From Φ(t, t0)Φ(t0, t) = I,

0 = d/dt [Φ(t, t0)Φ(t0, t)] = A(t)Φ(t, t0)Φ(t0, t) + Φ(t, t0) d/dt Φ(t0, t)

so d/dt Φ(t0, t) = -Φ(t0, t)A(t), and taking transposes,

d/dt Φ'(t0, t) = -A'(t)Φ'(t0, t)

Thus Ψ(t, t0) = Φ'(t0, t) is the state transition matrix of ż = -A'(t)z. Now (A(t), B(t)) is controllable at t0 if and only if there exists a finite t1 > t0 such that

Wc(t0, t1) = ∫_{t0}^{t1} Φ(t1, τ)B(τ)B'(τ)Φ'(t1, τ) dτ

is nonsingular; since Φ(t1, τ) = Φ(t1, t0)Φ(t0, τ) with Φ(t1, t0) nonsingular, this holds if and only if

∫_{t0}^{t1} Φ(t0, τ)B(τ)B'(τ)Φ'(t0, τ) dτ

is nonsingular. On the other hand, (-A'(t), B'(t)) is observable at t0 if and only if there exists a finite t1 > t0 such that

W0(t0, t1) = ∫_{t0}^{t1} Ψ'(τ, t0)B(τ)B'(τ)Ψ(τ, t0) dτ = ∫_{t0}^{t1} Φ(t0, τ)B(τ)B'(τ)Φ'(t0, τ) dτ

is nonsingular. The two matrices are identical, so the two conditions are equivalent.

6.23

For time-invariant systems, show that (A, B) is controllable if and only if (-A, B) is controllable. Is this true for time-varying systems?

Proof: For a time-invariant system, (A, B) is controllable if and only if

ρ(C1) = ρ([B  AB  ...  A^{n-1}B]) = n

(assuming A is n × n), and (-A, B) is controllable if and only if

ρ(C2) = ρ([B  -AB  A²B  ...  (-1)^{n-1}A^{n-1}B]) = n

The block columns of C2 are those of C1 multiplied by ±1, and multiplying columns of a matrix by nonzero constants does not change its rank, so ρ(C1) = ρ(C2). The two conditions are identically the same; thus we conclude that (A, B) is controllable if and only if (-A, B) is controllable. For time-varying systems, this is not true in general.

7.1

Given ĝ(s) = (s - 1)/((s² - 1)(s + 2)), find a three-dimensional controllable realization and check its observability.

Solution:

ĝ(s) = (s - 1)/((s² - 1)(s + 2)) = (s - 1)/(s³ + 2s² - s - 2)

Using (7.9) we can find a three-dimensional controllable realization:

ẋ = [ -2 1 2 ; 1 0 0 ; 0 1 0 ] x + [ 1 ; 0 ; 0 ] u,   y = [0 1 -1] x

This realization is not observable because (s - 1) and (s² - 1)(s + 2) are not coprime.
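A quick numerical check of the realization (illustrative sketch; the helper g is my own shorthand):

```python
import numpy as np

# controllable canonical form of g(s) = (s - 1)/(s^3 + 2s^2 - s - 2)
A = np.array([[-2.0, 1.0, 2.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
b = np.array([[1.0], [0.0], [0.0]])
c = np.array([[0.0, 1.0, -1.0]])

def g(s):
    return (c @ np.linalg.solve(s * np.eye(3) - A, b))[0, 0]

print(g(2.0), (2.0 - 1) / ((2.0**2 - 1) * (2.0 + 2)))   # both 1/12
obsv = np.vstack([c, c @ A, c @ A @ A])
print(np.linalg.matrix_rank(obsv))                       # 2 < 3: not observable
```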


7.2

Find a three-dimensional observable realization for the transfer function in Problem 7.1 and check its controllability.

Solution: Using (7.14) we can find a three-dimensional observable realization for the transfer function:

ẋ = [ -2 1 0 ; 1 0 1 ; 2 0 0 ] x + [ 0 ; 1 ; -1 ] u,   y = [1 0 0] x

This realization is not controllable because (s - 1) and (s² - 1)(s + 2) are not coprime.


7.3

Find an uncontrollable and unobservable realization for the transfer function in Problem 7.1. Find also a minimal realization.

Solution:

ĝ(s) = (s - 1)/((s² - 1)(s + 2)) = 1/((s + 1)(s + 2)) = 1/(s² + 3s + 2)

A minimal realization is

ẋ = [ -3 -2 ; 1 0 ] x + [ 1 ; 0 ] u,   y = [0 1] x

An uncontrollable and unobservable realization is obtained by appending a state variable connected to neither the input nor the output:

ẋ = [ -3 -2 0 ; 1 0 0 ; 0 0 1 ] x + [ 1 ; 0 ; 0 ] u,   y = [0 1 0] x

7.4

Use the Sylvester resultant to find the degree of the transfer function in Problem 7.1.

Solution: With D(s) = s³ + 2s² - s - 2 and N(s) = s - 1, the Sylvester resultant is

S = [ -2 -1  0  0  0  0
      -1  1 -2 -1  0  0
       2  0 -1  1 -2 -1
       1  0  2  0 -1  1
       0  0  1  0  2  0
       0  0  0  0  1  0 ]

ρ(S) = 5. Because all three D-columns of S are linearly independent, we conclude that S has only two linearly independent N-columns. Thus deg ĝ(s) = 2.

7.5

Use the Sylvester resultant to reduce ĝ(s) = (2s - 1)/(4s² - 1) to a coprime fraction.

Solution: With D(s) = 4s² - 1 and N(s) = 2s - 1, the Sylvester resultant is

S = [ -1 -1  0  0
       0  2 -1 -1
       4  0  0  2
       0  0  4  0 ]

ρ(S) = 3, so the null space of S is one-dimensional and ĝ(s) has degree 1. Solving Sz = 0 and normalizing the denominator to be monic yields the coprime pair

N̄(s) = 1/2,   D̄(s) = s + 1/2

thus

ĝ(s) = (2s - 1)/(4s² - 1) = (1/2)/(s + 1/2) = 1/(2s + 1)

7.6

Form the Sylvester resultant of ĝ(s) = (s + 2)/(s² + 2s) by arranging the coefficients of N(s) and D(s) in descending powers of s, and then search linearly independent columns in order from left to right. Is it true that all D-columns are linearly independent of their LHS columns? Is it true that the degree of ĝ(s) equals the number of linearly independent N-columns?

Solution: ĝ(s) = (s + 2)/(s² + 2s) = N(s)/D(s); since s² + 2s = s(s + 2), we have deg ĝ(s) = 1. Arranging the coefficients in descending powers of s gives

S = [ 1 0 0 0
      2 1 1 0
      0 2 2 1
      0 0 0 2 ]

with ρ(S) = 3. Searching from left to right: the first D-column and the first N-column are linearly independent, but the second D-column equals the first N-column and is therefore linearly dependent on its LHS columns. So it is not true that all D-columns are linearly independent of their LHS columns; and the number of linearly independent N-columns is 2, which does not equal deg ĝ(s) = 1.

7.7

Consider ĝ(s) = (β1 s + β2)/(s² + α1 s + α2) = N(s)/D(s) and its realization

ẋ = [ -α1 -α2 ; 1 0 ] x + [ 1 ; 0 ] u,   y = [β1 β2] x

Show that the state equation is observable if and only if the Sylvester resultant of D(s) and N(s) is nonsingular.

Proof: The state equation is a controllable canonical-form realization of ĝ(s). From Theorem 7.1, the state equation is observable if and only if D(s) and N(s) are coprime. From the formation of the Sylvester resultant, D(s) and N(s) are coprime if and only if the Sylvester resultant is nonsingular. Thus the state equation is observable if and only if the Sylvester resultant of D(s) and N(s) is nonsingular.

7.8

Repeat Problem 7.7 for a transfer function of degree 3 and its controllable-form realization.

Solution: A transfer function of degree 3 can be expressed as

ĝ(s) = (β0 s³ + β1 s² + β2 s + β3)/(s³ + α1 s² + α2 s + α3) = N(s)/D(s)

where D(s) and N(s) are coprime. Writing ĝ(s) as

ĝ(s) = β0 + [(β1 - β0α1)s² + (β2 - β0α2)s + (β3 - β0α3)]/(s³ + α1 s² + α2 s + α3)

we can obtain a controllable canonical-form realization of ĝ(s):

ẋ = [ -α1 -α2 -α3 ; 1 0 0 ; 0 1 0 ] x + [ 1 ; 0 ; 0 ] u
y = [ β1 - β0α1   β2 - β0α2   β3 - β0α3 ] x + β0 u

The Sylvester resultant of D(s) and N(s) is nonsingular because D(s) and N(s) are coprime; thus, as in Problem 7.7, the state equation is observable.

7.9 Verify Theorem 7.7 for g(s) = 1/(s+1)^2.

Verification: g(s) = 1/(s+1)^2 is a strictly proper rational function of degree 2, and it can be expanded into the infinite power series

g(s) = 0·s^(-1) + s^(-2) − 2s^(-3) + 3s^(-4) − 4s^(-5) + 5s^(-6) − ···

The Hankel matrices formed from these Markov parameters have ranks

rank T(1,1) = rank [0] = 0,    rank T(1,2) = rank T(2,1) = 1,
rank T(2,2) = rank T(2+k, 2+l) = 2 for every k, l = 1, 2, ...

so rank T(n, n) = 2 = deg g(s), which verifies Theorem 7.7.
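The Markov parameters and Hankel ranks above can be reproduced with a small Python check (helper names are mine; exact rational arithmetic avoids round-off):

```python
from fractions import Fraction

# Markov parameters of g(s) = 1/(s^2 + 2s + 1): h(1) = 0, h(2) = 1,
# and h(k) = -2*h(k-1) - h(k-2) for k >= 3 (from the denominator recursion).
def markov(n):
    h = [0, 1]
    while len(h) < n:
        h.append(-2 * h[-1] - h[-2])
    return h[:n]

def hankel(h, i, j):
    # T(i, j) with (r, c) entry h(r + c + 1), using 0-based r and c.
    return [[h[r + c] for c in range(j)] for r in range(i)]

def rank(M):
    # Gaussian elimination over the rationals.
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0]) if M else 0):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, len(M)):
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

h = markov(12)
print(h[:6])                  # [0, 1, -2, 3, -4, 5]
print(rank(hankel(h, 1, 1)))  # 0
print(rank(hankel(h, 2, 2)))  # 2
print(rank(hankel(h, 4, 4)))  # 2  (stays at deg g = 2)
```

The rank freezes at 2 for all larger Hankel matrices, exactly as Theorem 7.7 predicts.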
7.10 Use the Markov parameters of g(s) = 1/(s+1)^2 to find an irreducible companion-form realization.

Solution: g(s) = 0·s^(-1) + s^(-2) − 2s^(-3) + 3s^(-4) − 4s^(-5) + 5s^(-6) − ···

T(2,2) = [0 1; 1 −2],    T~(2,2) = [1 −2; −2 3],    deg g(s) = rank T(2,2) = 2

A = T~(2,2) T^(-1)(2,2) = [1 −2; −2 3][2 1; 1 0] = [0 1; −1 −2]
b = [0; 1],    c = [1 0]

The triplet (A, b, c) is an irreducible companion-form realization.
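As a check, the triple above should reproduce the Markov parameters 0, 1, −2, 3, −4, 5, ... via c A^(k−1) b; a minimal Python sketch:

```python
# Companion-form triple from Problem 7.10:
A = [[0, 1], [-1, -2]]
b = [0, 1]
c = [1, 0]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

params, v = [], b
for _ in range(6):
    params.append(sum(ci * vi for ci, vi in zip(c, v)))  # c A^(k-1) b
    v = matvec(A, v)
print(params)  # [0, 1, -2, 3, -4, 5]
```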

7.11 Use the Markov parameters of g(s) = 1/(s+1)^2 to find an irreducible balanced-form realization of g(s).

Solution: T(2,2) = [0 1; 1 −2], T~(2,2) = [1 −2; −2 3]. Using Matlab I type

t = [0 1; 1 -2]; tt = [1 -2; -2 3];
[k, s, l] = svd(t); sl = sqrt(s);
o = k*sl; c = sl*l';
A = inv(o)*tt*inv(c)
b = c(:,1), cc = o(1,:)

This yields the following balanced realization:

x' = [−1.7071 0.7071; −0.7071 −0.2929]x + [−0.5946; 0.5946]u
y = [0.5946 0.5946]x
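Assuming the sign pattern produced by Matlab's svd (a joint sign change of b and c leaves the transfer function unchanged), the quoted triple can be checked against the first Markov parameters of 1/(s+1)^2:

```python
# Balanced realization quoted above (signs as assumed from the svd convention):
A = [[-1.7071, 0.7071], [-0.7071, -0.2929]]
b = [-0.5946, 0.5946]
c = [0.5946, 0.5946]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

# First three Markov parameters of 1/(s+1)^2 are 0, 1, -2.
v, h = b, []
for _ in range(3):
    h.append(dot(c, v))
    v = matvec(A, v)
print(h)  # approximately [0.0, 1.0, -2.0]
```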
7.12 Show that the two state equations

x' = [2 1; 0 1]x + [1; 0]u,    y = [2 2]x

and

x' = [2 0; 1 1]x + [1; 2]u,    y = [2 0]x

are realizations of (2s+2)/(s^2 − s − 2). Are they minimal realizations? Are they algebraically equivalent?

Proof: First,

(2s+2)/(s^2 − s − 2) = 2(s+1)/((s+1)(s−2)) = 2/(s−2)

For the first equation,

[2 2](sI − [2 1; 0 1])^(-1)[1; 0] = [2 2] (1/((s−1)(s−2))) [s−1 1; 0 s−2][1; 0] = 2(s−1)/((s−1)(s−2)) = 2/(s−2)

and for the second,

[2 0](sI − [2 0; 1 1])^(-1)[1; 2] = [2 0] (1/((s−1)(s−2))) [s−1 0; 1 s−2][1; 2] = 2(s−1)/((s−1)(s−2)) = 2/(s−2)

Thus the two state equations are both realizations of (2s+2)/(s^2 − s − 2). The degree of the transfer function is 1, while the two state equations are both two-dimensional, so they are not minimal realizations.

They are not algebraically equivalent, because there does not exist a nonsingular matrix P satisfying simultaneously

P[2 1; 0 1]P^(-1) = [2 0; 1 1],    P[1; 0] = [1; 2],    [2 2]P^(-1) = [2 0]

Indeed, P[1; 0] = [1; 2] fixes the first column of P as [1; 2], and the (2,1) entry of the condition P[2 1; 0 1] = [2 0; 1 1]P then requires 4 = 3, a contradiction.
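Both realizations can be checked by evaluating c(sI − A)^(-1)b at a few rational points away from the poles (a sketch with exact arithmetic; the helper is illustrative):

```python
from fractions import Fraction as F

def transfer(A, b, c, s):
    # c (sI - A)^(-1) b for a 2x2 A, via the explicit 2x2 inverse.
    m = [[s - A[0][0], -A[0][1]], [-A[1][0], s - A[1][1]]]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    inv = [[m[1][1] / det, -m[0][1] / det],
           [-m[1][0] / det, m[0][0] / det]]
    v = [inv[0][0] * b[0] + inv[0][1] * b[1],
         inv[1][0] * b[0] + inv[1][1] * b[1]]
    return c[0] * v[0] + c[1] * v[1]

sys1 = ([[2, 1], [0, 1]], [1, 0], [2, 2])
sys2 = ([[2, 0], [1, 1]], [1, 2], [2, 0])
for s in (F(3), F(5), F(1, 2)):
    g = F(2) / (s - 2)  # the common transfer function 2/(s-2)
    print(transfer(*sys1, s) == g, transfer(*sys2, s) == g)  # True True
```

Agreement at more points than the degree bound would require confirms that both triples realize 2/(s−2).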

7.13 Find the characteristic polynomials and degrees of the following proper rational matrices:

G1(s) = [ 1/s        (s+3)/(s+1) ]
        [ 1/(s+3)    s/(s+1)     ]

G2(s) = [ 1/(s+1)^2          1/((s+1)(s+2))       ]
        [ 1/((s+1)(s+2))     (s+2)/((s+1)(s+2))   ]

G3(s) = [ 2/(s+1)^2    (s+2)/(s+3)^2    (s+1)/(s+4) ]
        [ 1/(s+2)      1/(s+5)          1/s         ]

Note that each entry of G3(s) has poles different from those of all the other entries.

Solution: The matrix G1(s) has 1/s, (s+3)/(s+1), 1/(s+3), s/(s+1) and det G1(s) = 0 as its minors; thus the characteristic polynomial of G1(s) is s(s+1)(s+3) and δG1(s) = 3.

The matrix G2(s) has 1/(s+1)^2, 1/((s+1)(s+2)), 1/(s+1) (the (2,2) entry after reduction) and

det G2(s) = (s^2 + 3s + 3)/((s+1)^3 (s+2)^2)

as its minors; thus the characteristic polynomial of G2(s) is (s+1)^3 (s+2)^2 and δG2(s) = 5.

Every entry of G3(s) has poles that differ from those of all other entries, so the characteristic polynomial of G3(s) is s(s+1)^2 (s+2)(s+3)^2 (s+4)(s+5) and δG3(s) = 8.

7.14 Use the left fraction

G(s) = [s −1; s s]^(-1) [1; 1]

to form a generalized resultant as in (7.83), and then search its linearly independent columns in order from left to right. What is the number of linearly independent N-columns? What is the degree of G(s)? Find a right coprime fraction of G(s). Is the given left fraction coprime?

Solution: G(s) = [s −1; s s]^(-1)[1; 1] =: D̄^(-1)(s) N̄(s). Thus we have

D̄(s) = [0 −1; 0 0] + [1 0; 1 1]s,    N̄(s) = [1; 1]

and the generalized resultant (columns grouped, left to right, as N0, D0, N1, D1)

S = [ 0 −1  1 | 0  0  0
      0  0  1 | 0  0  0
      1  0  0 | 0 −1  1
      1  1  0 | 0  0  1
      0  0  0 | 1  0  0
      0  0  0 | 1  1  0 ]

rank S = 5, so searching from left to right only one N-column is linearly independent; that is, μ = 1, and

null(S) = [−1 0 0 0 0 1]'    (entries ordered as N0, D0, N1, D1)

So we have

D(s) = 0 + 1·s = s,    N(s) = [1; 0]

and a right coprime fraction is

G(s) = N(s)D^(-1)(s) = [1; 0]s^(-1) = [1/s; 0]

deg G(s) = μ = 1. Since deg det D̄(s) = 2 > 1 = deg G(s), the given left fraction is not coprime.
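The claimed right fraction can be verified by evaluating D̄(s)^(-1)N̄(s) at a few non-pole points and comparing with [1/s; 0] (exact arithmetic; helper is illustrative):

```python
from fractions import Fraction as F

# Left fraction: Dbar(s) = [[s, -1], [s, s]], Nbar = [1, 1].
# Claimed right coprime fraction: N(s) = [1, 0], D(s) = s, i.e. G(s) = [1/s, 0]'.
def Gbar(s):
    det = s * s + s                    # det [[s, -1], [s, s]] = s^2 + s
    inv = [[s / det, F(1) / det],
           [-s / det, s / det]]
    # multiply by Nbar = [1, 1]:
    return [inv[0][0] + inv[0][1], inv[1][0] + inv[1][1]]

for s in (F(2), F(3), F(-3)):
    print(Gbar(s) == [F(1) / s, F(0)])  # True
```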
7.15 Are all D-columns in the generalized resultant in Problem 7.14 linearly independent of their LHS columns? Now, in forming the generalized resultant, the coefficient matrices of D̄(s) and N̄(s) are arranged in descending powers of s, instead of ascending powers of s as in Problem 7.14. Is it true that all D-columns are linearly independent of their LHS columns? Does the degree of G(s) equal the number of linearly independent N-columns? Does Theorem 7.M4 hold?

Solution: Because D̄1 is nonsingular, all D-columns in the generalized resultant in Problem 7.14 are linearly independent of their LHS columns.

Now form the generalized resultant by arranging the coefficient matrices of D̄(s) and N̄(s) in descending powers of s:

S = [ D̄1 N̄1 | 0   0          [ 1  0  0 | 0  0  0
      D̄0 N̄0 | D̄1 N̄1    =      1  1  0 | 0  0  0
      0   0 | D̄0 N̄0 ]          0 −1  1 | 1  0  0
                                0  0  1 | 1  1  0
                                0  0  0 | 0 −1  1
                                0  0  0 | 0  0  1 ]

rank S = 5

We see that the first D-column in the second column block equals the N-column immediately to its left; that is, it is linearly dependent on its LHS columns. So it is not true that all D-columns are linearly independent of their LHS columns. The number of linearly independent N-columns is now 2, while the degree of G(s) is 1 as found in Problem 7.14; so the degree of G(s) does not equal the number of linearly independent N-columns, and Theorem 7.M4 does not hold for this arrangement.
7.16 Use the right coprime fraction of G(s) obtained in Problem 7.14 to form a generalized resultant as in (7.89), search its linearly independent rows in order from top to bottom, and then find a left coprime fraction of G(s).

Solution: G(s) = [1/s; 0], that is, D(s) = s and N(s) = [1; 0]. The generalized resultant is

T = [ D0 D1 0        [ 0 1 0
      N0 N1 0          1 0 0
      0  D0 D1   =     0 0 0
      0  N0 N1 ]       0 0 1
                       0 1 0
                       0 0 0 ]

rank T = 3,    ν1 = 1, ν2 = 0

Searching the rows from top to bottom, the primary dependent N-rows are the third and the fifth rows, and

[0 0 1][0 1 0; 1 0 0; 0 0 0] = 0
[−1 0 0 1][0 1 0; 1 0 0; 0 0 1; 0 1 0] = 0

From these null vectors we read off the coefficient matrices [N̄0 D̄0 N̄1 D̄1] and obtain

D̄(s) = [0 0; 0 1] + [1 0; 0 0]s = [s 0; 0 1],    N̄(s) = [1; 0]

Thus a left coprime fraction of G(s) = [1/s; 0] is

G(s) = [s 0; 0 1]^(-1) [1; 0]
3
7.17 Find a right coprime fraction of

G(s) = [ (s^2+1)/s^3    (2s+1)/s^2
         (s+2)/s^2      2/s        ]

and then a minimal realization of G(s).

Solution: G(s) can be written as the left fraction G(s) = D̄^(-1)(s) N̄(s), where

D̄(s) = [s^3 0; 0 s^2] = [0 0; 0 0] + [0 0; 0 0]s + [0 0; 0 1]s^2 + [1 0; 0 0]s^3
N̄(s) = [s^2+1  2s^2+s; s+2  2s] = [1 0; 2 0] + [0 1; 1 2]s + [1 2; 0 0]s^2

Forming the generalized resultant S from these coefficient matrices and searching its linearly independent columns in order from left to right gives

rank S = 9,    μ1 = 2, μ2 = 1

The monic null vectors of the submatrices that consist of the columns up to the primary dependent N1- and N2-columns are, respectively (entries ordered as N0, D0, N1, D1, N2, D2),

Z1 = [0.5  2.5  0  −0.5  1  1  0.5  0  0  0  1  0]'
Z2 = [2.5  2.5  0  −0.5  0  0  0.5  1]'

so that

D(s) = [0 0; −0.5 −0.5] + [0.5 0.5; 0 1]s + [1 0; 0 0]s^2 = [s^2+0.5s  0.5s; −0.5  s−0.5]
N(s) = [0.5 2.5; 2.5 2.5] + [1 0; 1 0]s = [s+0.5  2.5; s+2.5  2.5]

Thus a right coprime fraction of G(s) is

G(s) = [s+0.5  2.5; s+2.5  2.5] [s^2+0.5s  0.5s; −0.5  s−0.5]^(-1)

(note that det D(s) = s^3, matching the characteristic polynomial of G(s)).

We define H(s) = [s^2 0; 0 s] and L(s) = [s 0; 1 0; 0 1]. Then we have

D(s) = Dhc H(s) + Dlc L(s),    Dhc = [1 0.5; 0 1],    Dlc = [0.5 0 0; 0 −0.5 −0.5]
N(s) = Nlc L(s),               Nlc = [1 0.5 2.5; 1 2.5 2.5]

Dhc^(-1) = [1 −0.5; 0 1],    Dhc^(-1) Dlc = [0.5 0.25 0.25; 0 −0.5 −0.5]

Thus a minimal (controller-form) realization of G(s) is

x' = [−0.5 −0.25 −0.25; 1 0 0; 0 0.5 0.5]x + [1 −0.5; 0 0; 0 1]u
y = [1 0.5 2.5; 1 2.5 2.5]x
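The coprime-fraction identity D̄(s)N(s) = N̄(s)D(s), which is equivalent to D̄^(-1)N̄ = N D^(-1), can be checked at sample points; since both sides are polynomial matrices of degree at most 4, agreement at six points establishes the identity (exact arithmetic, helper names are mine):

```python
from fractions import Fraction as F

half = F(1, 2)

def Dbar(s): return [[s**3, 0], [0, s**2]]
def Nbar(s): return [[s**2 + 1, 2 * s**2 + s], [s + 2, 2 * s]]
def Dr(s):   return [[s**2 + half * s, half * s], [-half, s - half]]
def Nr(s):   return [[s + half, F(5, 2)], [s + F(5, 2), F(5, 2)]]

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Dbar(s) Nr(s) = Nbar(s) Dr(s) at six points => identity of degree-4 matrices.
ok = all(mul(Dbar(s), Nr(s)) == mul(Nbar(s), Dr(s))
         for s in map(F, (0, 1, -1, 2, 3, 4)))
print(ok)  # True

# det Dr(s) = s^3: the fraction has degree 3, matching the 3-state realization.
s = F(7)
print(Dr(s)[0][0] * Dr(s)[1][1] - Dr(s)[0][1] * Dr(s)[1][0] == s**3)  # True
```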

8.1 Given

x' = [2 1; −1 1]x + [1; 2]u,    y = [1 1]x

find the state feedback gain k so that the feedback system has −1 and −2 as its eigenvalues. Compute k directly without using any equivalence transformation.

Solution: Introducing state feedback u = r − [k1 k2]x, we obtain

x' = ([2 1; −1 1] − [1; 2][k1 k2])x + [1; 2]r

The new A-matrix has characteristic polynomial

f(s) = (s − 2 + k1)(s − 1 + 2k2) − (1 − k2)(−1 − 2k1)
     = s^2 + (k1 + 2k2 − 3)s + (k1 − 5k2 + 3)

The desired characteristic polynomial is (s+1)(s+2) = s^2 + 3s + 2. Equating

k1 + 2k2 − 3 = 3    and    k1 − 5k2 + 3 = 2

yields k1 = 4 and k2 = 1. So the desired state feedback gain is k = [4 1].
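A quick arithmetic check of the design (for a 2×2 matrix the characteristic polynomial is s^2 − (trace)s + det):

```python
# Closed-loop matrix A - b k for A = [[2,1],[-1,1]], b = [1,2], k = [4,1]:
A = [[2, 1], [-1, 1]]
b = [1, 2]
k = [4, 1]
Acl = [[A[i][j] - b[i] * k[j] for j in range(2)] for i in range(2)]
print(Acl)  # [[-2, 0], [-9, -1]]

tr = Acl[0][0] + Acl[1][1]
det = Acl[0][0] * Acl[1][1] - Acl[0][1] * Acl[1][0]
print(tr, det)  # -3 2  ->  s^2 + 3s + 2 = (s+1)(s+2)
```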

8.2 Repeat Problem 8.1 by using (8.13).

Solution:

Δ(s) = det(sI − A) = (s − 2)(s − 1) + 1 = s^2 − 3s + 3
Δf(s) = (s + 1)(s + 2) = s^2 + 3s + 2
k̄ = [3 − (−3)   2 − 3] = [6 −1]

C = [b  Ab] = [1 4; 2 1]

C is nonsingular, so (A, b) is controllable, and

C^(-1) = [−1/7 4/7; 2/7 −1/7],    C̄ = [1 3; 0 1]

Using (8.13),

k = k̄ C̄ C^(-1) = [6 −1][1 3; 0 1][−1/7 4/7; 2/7 −1/7] = [4 1]
8.3 Repeat Problem 8.1 by solving a Lyapunov equation.

Solution: (A, b) is controllable. Selecting

F = [−1 0; 0 −2]    and    k̄ = [1 1]

we solve AT − TF = b k̄, which gives

T = [0 1/13; 1 9/13],    T^(-1) = [−9 1; 13 0]

k = k̄ T^(-1) = [1 1][−9 1; 13 0] = [4 1]
1
8.4 Find the state feedback gain for the state equation

x' = [1 1 −2; 0 1 1; 0 0 1]x + [1; 0; 1]u

so that the resulting system has eigenvalues −2 and −1 ± j1. Use the method you think is the simplest by hand to carry out the design.

Solution: (A, b) is controllable.

Δ(s) = (s − 1)^3 = s^3 − 3s^2 + 3s − 1
Δf(s) = (s + 2)(s + 1 + j1)(s + 1 − j1) = s^3 + 4s^2 + 6s + 4
k̄ = [4 − (−3)   6 − 3   4 − (−1)] = [7 3 5]

C = [b  Ab  A^2 b] = [1 −1 −2; 0 1 2; 1 1 1],    C^(-1) = [1 1 0; −2 −3 2; 1 2 −1]
C̄ = [1 3 6; 0 1 3; 0 0 1]

Using (8.13),

k = k̄ C̄ C^(-1) = [7 3 5][1 3 6; 0 1 3; 0 0 1][1 1 0; −2 −3 2; 1 2 −1] = [15 47 −8]
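The placement can be verified by computing the characteristic polynomial of A − bk, for example with the Faddeev–LeVerrier recursion (a sketch in exact arithmetic):

```python
from fractions import Fraction as F

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def charpoly(A):
    # Faddeev-LeVerrier: returns [c1, ..., cn] with
    # det(sI - A) = s^n + c1 s^(n-1) + ... + cn.
    n = len(A)
    M = [[F(A[i][j]) for j in range(n)] for i in range(n)]
    coeffs = []
    for m in range(1, n + 1):
        c = -sum(M[i][i] for i in range(n)) / m
        coeffs.append(c)
        if m < n:
            Mc = [[M[i][j] + (c if i == j else 0) for j in range(n)]
                  for i in range(n)]
            M = matmul(A, Mc)
    return coeffs

A = [[1, 1, -2], [0, 1, 1], [0, 0, 1]]
b = [1, 0, 1]
k = [15, 47, -8]
Acl = [[A[i][j] - b[i] * k[j] for j in range(3)] for i in range(3)]
print(charpoly(Acl))  # [Fraction(4, 1), Fraction(6, 1), Fraction(4, 1)]
```

The coefficients 4, 6, 4 reproduce s^3 + 4s^2 + 6s + 4 = (s+2)(s^2+2s+2), confirming the eigenvalues −2 and −1 ± j1.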

8.5 Consider a system with transfer function

g(s) = (s − 1)(s + 2)/((s + 1)(s − 2)(s + 3))

Is it possible to change the transfer function to

gf(s) = (s − 1)/((s + 2)(s + 3))

by state feedback? Is the resulting system BIBO stable? Asymptotically stable?

Solution: We can write

gf(s) = (s − 1)/((s + 2)(s + 3)) = (s − 1)(s + 2)/((s + 2)^2 (s + 3))

State feedback can shift the poles to −2, −2, −3 while leaving the zeros unchanged, so it is possible to change g(s) to gf(s) by state feedback. The resulting system is asymptotically stable and BIBO stable.

8.6 Consider a system with transfer function g(s) = (s − 1)(s + 2)/((s + 1)(s − 2)(s + 3)). Is it possible to change the transfer function to gf(s) = 1/(s + 3) by state feedback? Is the resulting system BIBO stable? Asymptotically stable?

Solution: We can write

gf(s) = 1/(s + 3) = (s − 1)(s + 2)/((s − 1)(s + 2)(s + 3))

State feedback can shift the poles to 1, −2, −3, which cancels the zeros (s − 1)(s + 2); so it is possible to change g(s) to gf(s) by state feedback. The resulting system is BIBO stable; however, it is not asymptotically stable, because the canceled pole at s = 1 remains in the state equation as an unstable hidden mode.
8.7 Consider the continuous-time state equation

x' = [1 1 −2; 0 1 1; 0 0 1]x + [1; 0; 1]u,    y = [2 0 0]x

Let u = p r − k x. Find the feedforward gain p and state feedback gain k so that the resulting system has eigenvalues −2 and −1 ± j1 and will track asymptotically any step reference input.

Solution:

g(s) = c(sI − A)^(-1) b = 2(s − 2)^2/(s − 1)^3 = (2s^2 − 8s + 8)/(s^3 − 3s^2 + 3s − 1)

From Problem 8.4, k = [15 47 −8] places the eigenvalues at −2 and −1 ± j1. State feedback does not affect the numerator, so

gf(s) = p (2s^2 − 8s + 8)/(s^3 + 4s^2 + 6s + 4)

For asymptotic tracking of any step reference input we require gf(0) = 1:

p = Δf(0)/N(0) = 4/8 = 0.5

8.8 Consider the discrete-time state equation

x[k+1] = [1 1 −2; 0 1 1; 0 0 1]x[k] + [1; 0; 1]u[k],    y[k] = [2 0 0]x[k]

Find the state feedback gain so that the resulting system has all eigenvalues at z = 0. Show that for any initial state the zero-input response of the feedback system becomes identically zero for k ≥ 3.

Solution: (A, b) is controllable.

Δ(z) = z^3 − 3z^2 + 3z − 1,    Δf(z) = z^3,    k̄ = [3 −3 1]

k = k̄ C̄ C^(-1) = [3 −3 1][1 3 6; 0 1 3; 0 0 1][1 1 0; −2 −3 2; 1 2 −1] = [1 5 2]

The state feedback equation becomes

x[k+1] = (A − bk)x[k] + b r[k] = [0 −4 −4; 0 1 1; −1 −5 −1]x[k] + [1; 0; 1]r[k]
y[k] = [2 0 0]x[k]

Denoting the new A-matrix as Ā, the zero-input response of the feedback system is

y_zi[k] = c Ā^k x[0]

Since all eigenvalues of Ā are zero, Ā = Q[0 1 0; 0 0 1; 0 0 0]Q^(-1) for some nonsingular Q, and by the nilpotent property

[0 1 0; 0 0 1; 0 0 0]^k = 0    for k ≥ 3

so Ā^k = 0 and we readily obtain y_zi[k] = c·0·x[0] = 0 for k ≥ 3.
8.9 Consider the discrete-time state equation in Problem 8.8. Let u[k] = p r[k] − k x[k], where p is a feedforward gain and k is the gain in Problem 8.8. Find a gain p so that the output will track any step reference input. Show also that y[k] = r[k] for k ≥ 3; thus exact tracking is achieved in a finite number of sampling periods instead of asymptotically. This is possible when all poles of the resulting system are placed at z = 0 and is called the dead-beat design.

Solution:

g(z) = (2z^2 − 8z + 8)/(z^3 − 3z^2 + 3z − 1),    gf(z) = p (2z^2 − 8z + 8)/z^3

Since (A, b) is controllable, all poles of gf(z) can be assigned to lie inside the region in Fig. 8.3(b). Under this condition, if the reference input is a step function with magnitude a, then the output y[k] approaches the constant gf(1)·a as k → ∞. Thus in order for y[k] to track any step reference input we need gf(1) = 1:

gf(1) = 2p = 1    →    p = 0.5

The resulting system can be described as

x[k+1] = [0 −4 −4; 0 1 1; −1 −5 −1]x[k] + [0.5; 0; 0.5]r[k] = Ā x[k] + b̄ r[k]
y[k] = [2 0 0]x[k]

The response excited by r[k] is

y[k] = c Ā^k x[0] + Σ_{m=0}^{k−1} c Ā^{k−1−m} b̄ r[m]

As we know, Ā^k = 0 for k ≥ 3, so for k ≥ 3

y[k] = c b̄ r[k−1] + c Ā b̄ r[k−2] + c Ā^2 b̄ r[k−3] = r[k−1] − 4r[k−2] + 4r[k−3]

For any step reference input r[k] = a, the response is y[k] = (1 − 4 + 4)a = a = r[k] for k ≥ 3.

8.10 Consider the uncontrollable state equation

x' = [2 1 0 0; 0 2 0 0; 0 0 −1 0; 0 0 0 −1]x + [0; 1; 1; 1]u

Is it possible to find a gain k so that the equation with state feedback u = r − k x has eigenvalues −2, −2, −1, −1? Is it possible to have eigenvalues −2, −2, −2, −1? How about −2, −2, −2, −2? Is the equation stabilizable?

Solution: The uncontrollable state equation can be transformed into the form

[xc'; xc̄'] = [Āc Ā12; 0 Āc̄][xc; xc̄] + [b̄c; 0]u

where the three-dimensional pair (Āc, b̄c) is controllable with characteristic polynomial (s − 2)^2 (s + 1) = s^3 − 3s^2 + 4, and Āc̄ = [−1]. The uncontrollable eigenvalue −1 is not affected by state feedback, while the three eigenvalues of the controllable sub-equation can be arbitrarily assigned (complex ones in conjugate pairs).

So by using state feedback it is possible for the resulting system to have eigenvalues −2, −2, −1, −1, and also eigenvalues −2, −2, −2, −1. It is impossible to have −2, −2, −2, −2 as its eigenvalues, because state feedback cannot affect the uncontrollable eigenvalue −1. Since the uncontrollable eigenvalue −1 is stable, the equation is stabilizable.
8.11 Design a full-dimensional and a reduced-dimensional state estimator for the state equation in Problem 8.1. Select the eigenvalues of the estimators from {−3, −2 ± j2}.

Solution: The A-, b-, and c-matrices in Problem 8.1 are

A = [2 1; −1 1],    b = [1; 2],    c = [1 1]

The eigenvalues of the full-dimensional state estimator are selected as −2 ± j2. The estimator can be written as

x̂' = (A − lc)x̂ + bu + ly

Let l = [l1; l2]; then the characteristic polynomial of A − lc is

Δ(s) = (s − 2 + l1)(s − 1 + l2) − (1 − l1)(−1 − l2) = s^2 + (l1 + l2 − 3)s + (3 − 2l1 − l2)

Matching this with (s + 2)^2 + 4 = s^2 + 4s + 8 yields

l1 = −12,    l2 = 19

Thus a full-dimensional state estimator with eigenvalues −2 ± j2 has been designed:

x̂' = [14 13; −20 −18]x̂ + [1; 2]u + [−12; 19]y

Designing a reduced-dimensional state estimator with eigenvalue −3:

(1) Select the 1×1 stable matrix F = −3.
(2) Select L = 1; (F, L) is controllable.
(3) Solve TA − FT = Lc for T = [t1 t2]:

[t1 t2][2 1; −1 1] + 3[t1 t2] = [1 1]    →    5t1 − t2 = 1, t1 + 4t2 = 1    →    T = [5/21 4/21]

(4) The one-dimensional state estimator with eigenvalue −3 is

z' = −3z + (13/21)u + y
x̂ = [c; T]^(-1)[y; z] = [1 1; 5/21 4/21]^(-1)[y; z] = [−4 21; 5 −21][y; z]
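Both designs can be checked arithmetically: the full estimator must have characteristic polynomial s^2 + 4s + 8, and the reduced design must satisfy TA − FT = Lc:

```python
from fractions import Fraction as F

A = [[2, 1], [-1, 1]]
c = [1, 1]

# Full-dimensional estimator gain l = [-12, 19]:
l = [-12, 19]
Ae = [[A[i][j] - l[i] * c[j] for j in range(2)] for i in range(2)]
tr = Ae[0][0] + Ae[1][1]
det = Ae[0][0] * Ae[1][1] - Ae[0][1] * Ae[1][0]
print(Ae, tr, det)  # [[14, 13], [-20, -18]] -4 8  ->  s^2 + 4s + 8

# Reduced-dimensional design: T A - F T = L c with F = -3, L = 1:
T = [F(5, 21), F(4, 21)]
TA = [T[0] * A[0][0] + T[1] * A[1][0], T[0] * A[0][1] + T[1] * A[1][1]]
print([TA[j] + 3 * T[j] for j in range(2)])  # [Fraction(1, 1), Fraction(1, 1)]
```

The first check recovers trace −4 and determinant 8, i.e. eigenvalues −2 ± j2; the second reproduces Lc = [1 1] exactly.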

8.12 Consider the state equation in Problem 8.1. Compute the transfer function from r to y of the state feedback system. Compute the transfer function from r to y if the feedback gain is applied to the estimated state of the full-dimensional estimator designed in Problem 8.11. Compute the transfer function from r to y if the feedback gain is applied to the estimated state of the reduced-dimensional state estimator also designed in Problem 8.11. Are the three overall transfer functions the same?

Solution: From Problems 8.1 and 8.11,

A = [2 1; −1 1],    b = [1; 2],    c = [1 1],    k = [4 1],    l = [−12; 19]

(1) Direct state feedback, u = r − kx:

x' = (A − bk)x + br = [−2 0; −9 −1]x + [1; 2]r,    y = [1 1]x

g_ry(s) = c(sI − A + bk)^(-1) b = [1 1] (1/((s+1)(s+2))) [s+1 0; −9 s+2][1; 2] = (3s − 4)/((s+1)(s+2))

(2) Feedback from the full-dimensional estimator, u = r − k x̂:

x' = Ax − bk x̂ + br
x̂' = lcx + (A − lc − bk)x̂ + br = [−12 −12; 19 19]x + [10 12; −28 −20]x̂ + [1; 2]r

so the composite system is

[x; x̂]' = [2 1 −4 −1; −1 1 −8 −2; −12 −12 10 12; 19 19 −28 −20][x; x̂] + [1; 2; 1; 2]r
y = [1 1 0 0][x; x̂]

g_ry(s) = (3s^3 + 8s^2 + 8s − 32)/(s^4 + 7s^3 + 22s^2 + 32s + 16)
        = (3s − 4)(s^2 + 4s + 8)/((s+1)(s+2)(s^2 + 4s + 8)) = (3s − 4)/((s+1)(s+2))

(3) Feedback from the reduced-dimensional estimator: with x̂ = [−4 21; 5 −21][y; z] we get u = r − k x̂ = r + 11y − 63z, so

x' = Ax + b(r + 11cx − 63z) = [13 12; 21 23]x − [63; 126]z + [1; 2]r
z' = −3z + (13/21)u + y = (164/21)[1 1]x − 42z + (13/21)r

and the composite system is

[x; z]' = [13 12 −63; 21 23 −126; 164/21 164/21 −42][x; z] + [1; 2; 13/21]r
y = [1 1 0][x; z]

g_ry(s) = (3s^2 + 5s − 12)/(s^3 + 6s^2 + 11s + 6) = (3s − 4)(s + 3)/((s+1)(s+2)(s+3)) = (3s − 4)/((s+1)(s+2))

These three overall transfer functions are the same, which verifies the result discussed in Section 8.5.
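The pole–zero cancellations used in (2) and (3) can be confirmed by multiplying out the factors (plain integer polynomial arithmetic):

```python
def pmul(p, q):
    # Multiply polynomials given as coefficient lists [c0, c1, ...].
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

num_full = pmul([-4, 3], [8, 4, 1])                 # (3s-4)(s^2+4s+8)
den_full = pmul(pmul([1, 1], [2, 1]), [8, 4, 1])    # (s+1)(s+2)(s^2+4s+8)
print(num_full)  # [-32, 8, 8, 3]
print(den_full)  # [16, 32, 22, 7, 1]

num_red = pmul([-4, 3], [3, 1])                     # (3s-4)(s+3)
den_red = pmul(pmul([1, 1], [2, 1]), [3, 1])        # (s+1)(s+2)(s+3)
print(num_red)   # [-12, 5, 3]
print(den_red)   # [6, 11, 6, 1]
```

The products reproduce the quartic and cubic numerators/denominators exactly, so both estimator-based loops reduce to (3s − 4)/((s+1)(s+2)).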

8.13 Let A and B be the given 4×4 and 4×2 constant matrices. Find two different constant matrices K such that (A − BK) has eigenvalues −4 ± 3j and −5 ± 4j.

Solution: (1) Select the 4×4 matrix

F = [−4 3 0 0; −3 −4 0 0; 0 0 −5 4; 0 0 −4 −5]

The eigenvalues of F are −4 ± 3j and −5 ± 4j.

(2) Select K̄1 = [1 0 0 0; 0 0 1 0]; (F, K̄1) is observable. Solving A T1 − T1 F = B K̄1 numerically (e.g., with Matlab's lyap) gives

T1 = [0.0013 0.0059 0.0005 0.0042
      0.0126 0.0273 0.0193 0.0191
      0.1322 0.0714 0.1731 0.0185
      0.0006 0.0043 0.2451 0.1982]

K1 = K̄1 T1^(-1) = [606.2000 168.0000 14.2000 2.0000
                    371.0670 119.1758 14.8747 2.2253]

(3) Select K̄2 = [1 1 1 1; 0 0 1 0]; (F, K̄2) is observable. Solving A T2 − T2 F = B K̄2 gives

T2 = [0.0046 0.0071 0.0024 0.0081
      0.0399 0.0147 0.0443 0.0306
      0.2036 0.0607 0.3440 0.0245
      0.2007 0.0048 0.0037 0.2473]

K2 = K̄2 T2^(-1) = [252.9824 55.2145 0.0893 3.2509
                    185.5527 59.8815 7.4593 2.5853]
