OpenSIUC
Articles and Preprints
Department of Mathematics
6-19-2010
This Article is brought to you for free and open access by the Department of Mathematics at OpenSIUC. It has been accepted for inclusion in Articles
and Preprints by an authorized administrator of OpenSIUC. For more information, please contact opensiuc@lib.siu.edu.
Abstract

In this paper we address the problems of state (resp. feedback) linearization of nonlinear single-input control systems using state (resp. feedback) coordinate transformations. Although necessary and sufficient geometric conditions were provided in the early eighties, finding the state (resp. feedback) linearizing coordinates remains subject to solving systems of partial differential equations. We provide here a solution to both problems by defining algorithms that compute explicitly the linearizing state (resp. feedback) coordinates for any nonlinear control system that is indeed linearizable (resp. feedback linearizable). Each algorithm is performed using a maximum of n − 1 steps (n being the dimension of the system), and both are made possible by an explicit solution of the flow-box (straightening) theorem. We illustrate with several examples borrowed from the literature.
1. Introduction and Preliminaries
In the late seventies and early eighties the problem of transforming a nonlinear control system into a linear one, via a change of coordinates and feedback, was introduced; it is known today as feedback linearization. Feedback classification was first applied to linear systems, for which a complete picture has been obtained: the controllability, observability, reachability, and realization of linear systems have been expressed in very simple algebraic terms. A crucial property of linear controllable systems is that they can be stabilized by linear feedback controllers. Because of the simplicity of their analysis and design, because several physical systems can be modeled using linear dynamics, and because some nonlinear phenomena are just hidden linear systems, it is not surprising that the linearization problems were (and still are) of paramount importance and have attracted much attention. Uncovering the hidden linear properties of nonlinear control systems turns out to be useful in analyzing those systems, though some global properties might be lost in the process. This paper proposes a way of finding the linearizing coordinates. To give a brief description of the linearization problems we start by recalling some basic facts about linear systems.
1.1. Linear Systems
We consider linear systems of the form

$$\Lambda : \quad \dot x = Fx + Gu = Fx + \sum_{i=1}^{m} G_i u_i, \qquad y = Hx,$$

where $x \in \mathbb{R}^n$, $Fx$ and $G_1, \ldots, G_m$ are, respectively, linear and constant vector fields on $\mathbb{R}^n$, and $Hx$ is a linear map defining the output.
Preprint submitted to Systems & Control Letters
April 28, 2010

The problem of feedback classification for linear systems is to find linear state coordinates $w = Tx$ and a linear feedback $u = Kx + Lv$ that map $\Lambda$ into a simpler linear system

$$\tilde\Lambda : \quad \dot{\tilde x} = \tilde F \tilde x + \tilde G v, \qquad \tilde y = \tilde H \tilde x,$$

with $\tilde F = T(F+GK)T^{-1}$, $\tilde G = TGL$, and $\tilde H = HT^{-1}$. It is shown in the literature [2], [14] that the dimension of $\mathcal{C}_n$ and the rank of $\mathcal{O}_n$ (hence the controllability and observability) are two invariants of the feedback classification of linear systems. It is a classical result of linear control theory (see, e.g., [2], [14]) that any linear controllable system is feedback equivalent to the following Brunovský canonical form (single-input case):

$$\Lambda_{Br} : \quad \dot w = Aw + bv, \qquad w \in \mathbb{R}^n, \ v \in \mathbb{R},$$

where

$$A = \begin{pmatrix} 0 & 1 & 0 & \cdots & 0\\ 0 & 0 & 1 & \cdots & 0\\ \vdots & & \ddots & \ddots & \vdots\\ 0 & 0 & 0 & \cdots & 1\\ 0 & 0 & 0 & \cdots & 0 \end{pmatrix}, \qquad b = \begin{pmatrix} 0\\ \vdots\\ 0\\ 1 \end{pmatrix}.$$
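As a quick sanity check, the Brunovský pair $(A, b)$ can be built and its controllability verified symbolically. A minimal sketch with sympy (the dimension n = 3 is an arbitrary illustrative choice, not tied to a system from the paper):

```python
# Sketch: build the single-input Brunovsky pair (A, b) and verify
# controllability symbolically with sympy.
import sympy as sp

n = 3
A = sp.zeros(n, n)
for i in range(n - 1):
    A[i, i + 1] = 1          # ones on the superdiagonal
b = sp.zeros(n, 1)
b[n - 1] = 1                 # last unit vector

# Controllability matrix C_n = [b, Ab, ..., A^(n-1) b]
C = sp.Matrix.hstack(*[A**k * b for k in range(n)])
print(C.rank())              # prints 3: the pair (A, b) is controllable
```

The same check applies to any pair $(F, G)$: full rank of the controllability matrix is the invariant mentioned above.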
In the multi-input case, any controllable system $\dot w = Fw + Gu = Fw + \sum_{i=1}^{m} G_i u_i$ is feedback equivalent to a Brunovský canonical form whose structure (given by the controllability, Brunovský, or Kronecker indices $\rho_1, \ldots, \rho_m$) makes $\Lambda_{Br}$ a cascade of single-input linear systems $\Lambda^1_{Br}, \ldots, \Lambda^m_{Br}$:

$$\Lambda^i_{Br} : \quad \dot w_i = A_i w_i + b_i v_i, \qquad w_i \in \mathbb{R}^{\rho_i}, \ v_i \in \mathbb{R},$$

with $A = \operatorname{diag}\{A_1, \ldots, A_m\}$ and $b = \operatorname{diag}\{b_1, \ldots, b_m\}$. For a complete description and geometric interpretation of the Brunovský controllability indices we refer to the literature [2], [11], [12], [13], [14], [25] and references therein.

1.2. Nonlinear Systems and Linearization Problems.

Consider a smooth (resp. analytic) control-affine system

$$\tilde\Sigma : \quad \dot{\tilde x} = f(\tilde x) + g(\tilde x)v = f(\tilde x) + \sum_{i=1}^{m} g_i(\tilde x)v_i, \qquad \tilde x \in \mathbb{R}^n.$$

The two linearization problems are the following. Problem 1 (state linearization): do there exist coordinates in which $\tilde\Sigma$ becomes a linear system $\dot w = Fw + Gu = Fw + \sum_{i=1}^{m} G_i u_i$, $w \in \mathbb{R}^n$, $u \in \mathbb{R}^m$? Problem 2 (feedback linearization): do there exist coordinates and an invertible feedback transformation bringing $\tilde\Sigma$ to the Brunovský form $\dot w = Aw + \sum_{i=1}^{m} b_i v_i$, $w \in \mathbb{R}^n$, $v \in \mathbb{R}^m$?
We will solve Problem 1 and Problem 2 without solving the corresponding partial differential equations, and we will provide an algorithm giving explicit solutions in each case. Recall that we have previously obtained explicit solutions for a few subclasses of control-affine systems, namely strict feedforward forms, nice strict feedforward forms, and feedforward forms, for which linearizing coordinates were found without solving the corresponding PDEs (see [28], [29], [31]). Indeed, for those subclasses we exhibited algorithms that can be performed using a maximum of n(n+1)/2 steps, each involving composition and integration of functions only (but not solving PDEs), followed by a sequence of n + 1 derivations. What played the main role in finding those algorithms was the strict feedforward structure, that is, the fact that each component of the system depends only on higher variables. In this paper we consider general control-affine systems, for which we provide a state and a feedback linearizing algorithm, each implementable using a maximum of n steps. Those algorithms are based, in part, on the explicit solution of the flow-box theorem [32] and differ completely from those outlined in [28], [31] (see also [18], [19]). Another approach was proposed in [24] based on successive integrations of differential 1-forms. It relies on successive rectification of vector fields via the method of characteristics, using quotient manifolds to reduce, at each step, the dimension of the system by one. The difference between our approach and the latter is twofold: (a) explicit formulas are given in terms of convergent series without solving any PDE or ODE; (b) the algorithm provides a sequence of control-affine systems without restriction to any submanifold or quotient along some direction. We address here the single-input case; the generalization to multiple-input control systems is under consideration and expected to appear elsewhere. Let us mention that linearization techniques have been very useful and are still of interest nowadays.
If Problem 1 or Problem 2 is solvable with a controllable pair $(A, b)$, then the equilibrium of $\Sigma$ can be stabilized by the feedback law

$$u = \beta(x)^{-1}\Bigl(-\alpha(x) + \sum_{j=1}^{n} k_j\,\phi_j(x)\Bigr),$$

where the gains $k_j$ are chosen so that the characteristic polynomial $\lambda^n + \sum_{j=1}^{n} k_j\,\lambda^{j-1}$ is Hurwitz. Linearization is also instrumental in optimal control problems under actuator constraints. Recall that Mayer's problem consists of determining $u(t)$ and $x(t)$, $t \in [t_0, t_f]$, that minimize a functional cost $J = \varphi(x(t_f), t_f)$ subject to the dynamics $\dot x = f(x) + g(x)u$ and inequality constraints $s(x, u) \le 0$, $c(x) \le 0$, when the initial states are given and the terminal states satisfy $\psi(x(t_0), x(t_f)) = 0$. In all these problems, however, either the dynamics are assumed to be already linear or linearizing coordinates are known through the natural outputs. Let us mention that, due to the difficulty of solving the partial differential equations on the one hand, and the fact that many systems are not feedback linearizable on the other, exact feedback linearization has been extended in various ways. The notions of partial linearization, approximate linearization, pseudo-linearization, extended linearization, etc., have been introduced in the literature [3], [5], [6], [8], [9], [15], [17], [21], [36] to offset the difficulties associated with exact linearization. Partial linearization is sought when the system fails to satisfy the integrability conditions and relies on the idea of finding the largest subsystem that can be linearized. Approximate linearization was first developed in [17] and later generalized in [15] using Taylor series expansions up to some degree. The changes of coordinates and feedback obtained in this case are polynomials that linearize the system up to some degree, and their construction relies on a step-by-step algorithm or on outputs of the system defining a relative degree. In many of the proposed methods the integrability conditions are either weakened or applied to a specific class of systems (we refer the interested reader to [5] for a survey and the references therein). The paper is organized as follows. In Section 1.3 we give some definitions and notation used throughout the paper. The first main result, on state linearization, is given in Section 2, where an algorithm is presented; the feedback case is considered in Section 4. Illustrative examples follow each section and are given in Section 3 and Section 5. A constructive solution of the flow-box theorem, as well as the convergence of the series, is presented in Section 6, followed by a conclusion.
$$\Sigma : \quad \dot x = f(x) + g(x)u, \qquad x \in \mathbb{R}^n, \ u \in \mathbb{R}.$$

The case of multi-input systems is more involved and will be addressed elsewhere. Under state coordinate changes alone (no feedback), the target is a linear form

$$\bar\Lambda : \quad \dot w = \bar A w + bu: \qquad \dot w_1 = \alpha_1 w_1 + w_2, \quad \dot w_2 = \alpha_2 w_1 + w_3, \quad \ldots, \quad \dot w_{n-1} = \alpha_{n-1} w_1 + w_n, \quad \dot w_n = \alpha_n w_1 + u,$$

for some constants $\alpha_1, \ldots, \alpha_n$. Let $0 \le k \le n-1$ be an integer.

Definition 1.2. We say that $\Sigma$ is Brunovský $k$-linear if

$$g(x) = b, \quad \operatorname{ad}_f g(x) = Ab, \quad \ldots, \quad \operatorname{ad}_f^{\,n-k-1} g(x) = A^{n-k-1}b,$$

where $(A, b)$ is the Brunovský canonical pair. We will denote hereafter the coordinates in which the system is Brunovský $k$-linear by the bolded variables $\mathbf{x}^k = (x^k_1, \ldots, x^k_n)^T$.
A Brunovský $k$-linear system takes the form

$$\Sigma^{Br}_k : \quad \dot{\mathbf{x}}^k = F_k(x^k_1, \ldots, x^k_{k+1}) + A\bar{\mathbf{x}}^k + bu, \qquad \mathbf{x}^k \in \mathbb{R}^n,$$

where $\bar{\mathbf{x}}^k = (0, \ldots, 0, x^k_{k+2}, x^k_{k+3}, \ldots, x^k_n)^T$ is a vector whose first $k+1$ components are zero. The condition (2.1) remains the main criterion for the linearizing algorithm; it is a simplified version of Theorem 1.1 (S2). It simply means that the nonlinear vector field $F_k(x^k_1, \ldots, x^k_{k+1})$ should be affine with respect to the variable $x^k_{k+1}$. At each step we check whether that condition is satisfied, proceed if it is, and stop otherwise. The proof of this theorem relies mainly on the flow-box theorem, for which we recently gave an explicit solution [32] (see below), and on Theorem 1.1 (S2). The Brunovský $k$-linear forms will play a crucial role in the state linearization algorithm. For the feedback linearization algorithm in Section 4, the Brunovský $k$-linear forms are replaced by the feedback $k$-forms defined as follows.

Definition 1.3. A control-affine system $\Sigma : \dot x = f(x) + g(x)u$ is said to be in $(FB)_k$-form, denoted $\Sigma^{FB}_k$, if in some coordinates $\mathbf{x}^k = (x^k_1, \ldots, x^k_n)^T$ it takes the form

$$\Sigma^{FB}_k : \quad \dot{\mathbf{x}}^k = F_k(\mathbf{x}^k) + bu, \qquad \mathbf{x}^k \in \mathbb{R}^n.$$
The first result states that any S-linearizable system can be transformed into a linear form via a sequence of explicit coordinate changes, each giving rise to a Brunovský $k$-linear system. The coordinate changes are built from the series

$$\phi_j(x) = x_j + \sum_{s \ge 1} \frac{(-1)^s x_k^s}{s!}\, L^{s-1}_{\nu}(\nu_j)(x), \quad j \ne k, \qquad \phi_k(x) = \sum_{s \ge 1} \frac{(-1)^{s+1} x_k^s}{s!}\, L^{s-1}_{\nu}\!\Bigl(\frac{1}{\mu_k}\Bigr)(x), \tag{2.2}$$

where $\mu = (\mu_1, \ldots, \mu_n)^T$ is the vector field being rectified and $\nu = \mu/\mu_k$ denotes the field normalized by its $k$th component (so that $\nu_k = 1$). The map

$$\psi_j(z) = z_j + \sum_{s \ge 1} \frac{z_k^s}{s!} \left( \sum_{i=0}^{s-1} (-1)^i C_s^i\, \partial^i_{z_k} L^{s-i-1}_{\nu}(\nu_j)(z) \right), \tag{2.3}$$

for any $1 \le j \le n$, $j \ne k$, is the inverse of $z = \phi(x)$, that is, such that

$$\frac{\partial \psi(z)}{\partial z_k} = \nu(\psi(z)).$$

Above, $\phi_*(\nu)(\cdot) = \frac{\partial \phi}{\partial x}(\phi^{-1}(\cdot))\,\nu(\phi^{-1}(\cdot))$ is the tangent map induced by the diffeomorphism $z = \phi(x)$, and we have adopted the following notation:

$$\partial_{z_k} h = \frac{\partial h}{\partial z_k}, \qquad \partial^i_{z_k} h = \frac{\partial^i h}{\partial z_k^i}, \quad i \ge 2.$$
A. (S)-Algorithm. Consider a linearly controllable system

$$\Sigma : \quad \dot x = f(x) + g(x)u, \qquad x \in \mathbb{R}^n, \ u \in \mathbb{R},$$

with the control vector field normalized, $g = b = (0, \ldots, 0, 1)^T$. For $\Sigma$ to be S-linearizable, Theorem 1.1 (S2) should be satisfied, which is equivalent to

$$[\operatorname{ad}^q_f g,\ \operatorname{ad}^r_f g] = 0, \qquad 0 \le q, r \le n-1.$$

In particular the condition

$$(S_n): \quad \frac{\partial^2 f}{\partial x_n^2} = 0 \tag{2.5}$$

must hold; that is, $f$ should be affine with respect to the variable $x_n$. If this condition fails then the system is not S-linearizable and the algorithm stops. Otherwise the vector field $f$ decomposes uniquely and an explicit coordinate change yields the Brunovský $(n-1)$-linear form

$$\Sigma^{Br}_{n-1} : \quad \dot{\mathbf{x}}^{n-1} = F_{n-1}(\mathbf{x}^{n-1}) + A\bar{\mathbf{x}}^{n-1} + bu, \qquad \mathbf{x}^{n-1} \in \mathbb{R}^n,$$

where $\bar{\mathbf{x}}^{n-1} \equiv 0$ and $F_{n-1}(\mathbf{x}^{n-1}) = \tilde f(\mathbf{x}^{n-1})$.

Step 1. Reset the variable $x \mapsto \mathbf{x}^{n-1}$ and $\Sigma \mapsto \Sigma^{Br}_{n-1} : \dot x = f(x) + g(x)u$ with $g(x) = b$ and $f(x) = F_{n-1}(x_1, \ldots, x_n)$.

General step ($n-k$). If the condition

$$(S_{k+1}): \quad \frac{\partial^2 F_k}{\partial x_{k+1}^2} = \frac{\partial^2 f}{\partial x_{k+1}^2} = 0 \tag{2.4}$$

is satisfied, a coordinate change of the form

$$\phi_j(x) = x_j + \bar\phi_j(x_1, \ldots, x_k), \qquad 1 \le j \le n,$$

takes the system into the Brunovský $k$-linear form

$$\Sigma^{Br}_k : \quad \dot{\mathbf{x}}^k = F_k(x^k_1, \ldots, x^k_{k+1}) + A\bar{\mathbf{x}}^k + bu, \qquad \mathbf{x}^k \in \mathbb{R}^n,$$

where $\bar{\mathbf{x}}^k = (0, \ldots, 0, x^k_{k+2}, x^k_{k+3}, \ldots, x^k_n)^T$ and the last $n-k$ components of the vector field $F_k(\mathbf{x}^k)$ are zero. Once again reset the variable $x \mapsto \mathbf{x}^k$ and denote $\Sigma^{Br}_k$ simply by $\Sigma : \dot x = f(x) + g(x)u$ with $g(x) = b$ and

$$f(x) = F_k(x_1, \ldots, x_{k+1}) + A\bar{\mathbf{x}}^k, \qquad \bar{\mathbf{x}}^k = (0, \ldots, 0, x_{k+2}, x_{k+3}, \ldots, x_n)^T.$$

This ends the general step and shows that a sequence of explicit coordinate changes $\phi^n(\mathbf{x}^n), \ldots, \phi^1(\mathbf{x}^1)$ can be constructed whose composition $z = \phi^1 \circ \cdots \circ \phi^n(\mathbf{x}^n)$ takes the original system into the linear form $\bar\Lambda$.
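The affinity test that drives each step can be run symbolically. A minimal sketch (the drift below is an illustrative three-dimensional choice, matching the rectified drift of the example in Section 3; the helper name `affine_in` is ours, not the paper's):

```python
# Sketch of the affinity test used at step n-k of the (S)-algorithm:
# every component of f must be affine in x_{k+1}, i.e. its second
# partial derivative with respect to x_{k+1} must vanish.
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
f = sp.Matrix([x2 - 2*x2*x3, x3, 0])  # drift after normalizing g = (0,0,1)^T

def affine_in(f, var):
    """True when every component of f is affine in `var`."""
    return all(sp.simplify(sp.diff(comp, var, 2)) == 0 for comp in f)

# Condition (S_n): f must be affine in x_n = x3.
print(affine_in(f, x3))  # prints True: proceed to the coordinate change
```

When the test returns False at any step, the algorithm stops and the system is declared not S-linearizable.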
B. Summary of Algorithm. Start with a system

$$\Sigma : \quad \dot x = f(x) + g(x)u, \qquad x \in \mathbb{R}^n, \ u \in \mathbb{R}.$$

Step 0. Normalize the vector field $g \mapsto \tilde g = (0, \ldots, 0, 1)^T$ and apply a linear change of coordinates to transform the linearization so that $\frac{\partial f}{\partial x}(0) = \bar A$.

Step $n-k$. If the condition

$$(S_{k+1}): \quad \frac{\partial^2 f}{\partial x_{k+1}^2} = 0$$

fails, the system is not S-linearizable and the algorithm stops; otherwise construct the explicit coordinate change (2.2) and reset the variables as in the general step above.

Example. Consider the system

$$\Sigma : \quad \dot x_1 = x_2 - 2x_2x_3 + x_3 + 4x_2x_3\,u, \qquad \dot x_2 = x_3 - 2x_3\,u, \qquad \dot x_3 = u.$$

Step 0 rectifies the control vector field and gives

$$\tilde\Sigma : \quad \dot z_1 = z_2 - 2z_2z_3, \qquad \dot z_2 = z_3, \qquad \dot z_3 = u,$$

that is, $\dot z = \tilde f(z) + \tilde g(z)u$ with $\tilde g(z) = (0, 0, 1)^T$ and $\tilde f(z) = (z_2 - 2z_2z_3, z_3, 0)^T$. The next step is to rectify $\mu(z) = (-2z_2, 1, 0)^T$. Theorem 2.2 with $k = 2$ and $\mu_2(z) = 1$ yields

$$w_1 = z_1 - z_2(-2z_2) + \tfrac{1}{2!}z_2^2(-2) = z_1 + z_2^2, \qquad w_2 = z_2, \qquad w_3 = z_3.$$

The system is then transformed, under this change of coordinates, into the linear Brunovský form $\Lambda_{Br}$. The linearizing coordinates for the original system are obtained as the composition of the two coordinate changes:

$$w_1 = x_1 - 4x_2x_3^2 - 4x_3^4 + 2x_2x_3^2 + 4x_3^4 - x_3^4 = x_1 - 2x_2x_3^2 - x_3^4, \qquad w_2 = x_2 + x_3^2, \qquad w_3 = x_3.$$
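The second step above can be verified symbolically: pushing the rectified dynamics through $w = (z_1 + z_2^2,\ z_2,\ z_3)$ must produce the Brunovský chain. A minimal check:

```python
# Verify that w = (z1 + z2^2, z2, z3) linearizes the rectified system
# z' = (z2 - 2*z2*z3, z3, 0) + (0, 0, 1)*u from the example.
import sympy as sp

z1, z2, z3, u = sp.symbols('z1 z2 z3 u')
f = sp.Matrix([z2 - 2*z2*z3, z3, 0])
g = sp.Matrix([0, 0, 1])
w = sp.Matrix([z1 + z2**2, z2, z3])

J = w.jacobian([z1, z2, z3])
wdot = sp.simplify(J * (f + g*u))   # chain rule: w' = Dw(z) * z'
print(wdot)                         # prints Matrix([[z2], [z3], [u]])
```

The output is exactly $\dot w_1 = w_2$, $\dot w_2 = w_3$, $\dot w_3 = u$, i.e. the Brunovský form.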
Example. Consider the strict feedforward system

$$\Sigma : \quad \dot x_1 = x_2 + \bigl(\tfrac12 x_2 - \tfrac1{12}x_3x_4\bigr)u, \qquad \dot x_2 = x_3 + \tfrac12 x_3\,u, \qquad \dot x_3 = x_4 + x_4\,u, \qquad \dot x_4 = u.$$

Because of the strict feedforward structure, we showed in [28] (using a 4-step algorithm) that the change of coordinates

$$z = \phi(x): \quad z_1 = x_1 - \tfrac12 x_2x_4 + \tfrac1{3!}x_3x_4^2 - \tfrac1{4!}x_4^4, \quad z_2 = x_2 - \tfrac12 x_3x_4 + \tfrac1{3!}x_4^3, \quad z_3 = x_3 - \tfrac12 x_4^2, \quad z_4 = x_4 \tag{3.1}$$

linearizes the system. We can recover such coordinates directly by applying the algorithm given in the proof. Denote $f(x) = (x_2, x_3, x_4, 0)^T$ and $\mu(x) = g(x) = \bigl(\tfrac12 x_2 - \tfrac1{12}x_3x_4,\ \tfrac12 x_3,\ x_4,\ 1\bigr)^T$. One computes

$$L_\mu(\mu_1) = \tfrac16 x_3 - \tfrac1{12}x_4^2, \qquad L_\mu(\mu_2) = \tfrac12 x_4, \qquad L^2_\mu(\mu_2) = \tfrac12, \qquad L^s_\mu(\mu_2) = 0, \ s \ge 3.$$

The series (2.2), taken with $k = 4$, then give

$$\phi_1(x) = x_1 - x_4\,\mu_1(x) + \tfrac1{2!}x_4^2\,L_\mu(\mu_1)(x) - \cdots, \qquad \phi_2(x) = x_2 - x_4\,\mu_2(x) + \tfrac1{2!}x_4^2\,L_\mu(\mu_2)(x) - \tfrac1{3!}x_4^3\,L^2_\mu(\mu_2)(x),$$

$$\phi_3(x) = x_3 - x_4\,\mu_3(x) + \tfrac1{2!}x_4^2\,L_\mu(\mu_3)(x) = x_3 - x_4^2 + \tfrac12 x_4^2 = x_3 - \tfrac12 x_4^2.$$

Because $\mu_4(x) = 1$, we get $\phi_4(x) = x_4$, and the change of coordinates (3.1) rectifies the control vector field $g$ and linearizes the system. Notice that the algorithm described in [28] allowed (3.1) to be found only by computing one component at a time, starting from $\phi_3$, then $\phi_2$, and finally $\phi_1$, updating the system after each component; here we compute those components independently of one another.

Remark. The series (2.2) must be evaluated as written and not replaced by a Taylor series at $x_k = 0$. Consider $\mu(x) = \alpha(x_3)\,\partial_{x_1} + \partial_{x_3}$ on $\mathbb{R}^3$, where $\alpha$ is a flat function, that is, $\alpha$ and all its derivatives are zero at $x_3 = 0$. A well-known example is the function defined by $\alpha(0) = 0$ and $\alpha(x_3) = \exp(-1/x_3^2)$ if $x_3 \ne 0$. It is straightforward to check that $L^{s-1}_\mu(\mu_1)(x) = \alpha^{(s-1)}(x_3)$ for all $s \ge 1$, where $\alpha^{(k)}(x_3)$ is the $k$th derivative of $\alpha$. Should (2.2) have been a series around $0$, that is, evaluated at $x_k = 0$, the straightening diffeomorphism would have been the identity,

$$\phi_1(x) = x_1 + \sum_{s \ge 1} \frac{(-1)^s x_3^s}{s!}\,L^{s-1}_\mu(\mu_1)(0) = x_1, \qquad \phi_3(x) = x_3,$$

which is impossible. However, we can verify easily that $\phi_1(x) = x_1 - \int_0^{x_3} \alpha(u)\,du$, which coincides with

$$\phi_1(x) = x_1 + \sum_{s \ge 1} \frac{(-1)^s x_3^s}{s!}\,\alpha^{(s-1)}(x_3)$$

(the inverse being $\psi_1(z) = z_1 + \int_0^{z_3} \alpha(s)\,ds$). Indeed,

$$-\int_0^{x_3} \alpha(u)\,du = \sum_{s \ge 1} \frac{(-1)^s x_3^s}{s!}\,\alpha^{(s-1)}(x_3):$$

the two functions coincide when $x_3 = 0$, and it is enough to verify that their derivatives are also equal. The derivative of the right-hand side gives, after simplification (the series telescopes),

$$\sum_{s \ge 1} \frac{(-1)^s x_3^{s-1}}{(s-1)!}\,\alpha^{(s-1)}(x_3) + \sum_{s \ge 1} \frac{(-1)^s x_3^s}{s!}\,\alpha^{(s)}(x_3) = -\alpha(x_3).$$

Example 3.3. Consider $\mu(x) = x_3\,\partial_{x_1} + (x_2 + x_3)\,\partial_{x_2} + \partial_{x_3}$ in $\mathbb{R}^3$. Here $L_\mu(\mu_1) = 1$ and $L^{s-1}_\mu(\mu_1) = 0$ for $s \ge 3$, and $L^{s-1}_\mu(\mu_2) = x_2 + x_3 + 1$ for all $s \ge 2$. It follows that

$$\phi_1(x) = x_1 + \sum_{s \ge 1} \frac{(-1)^s x_3^s}{s!}\,L^{s-1}_\mu(\mu_1)(x) = x_1 - x_3\,\mu_1(x) + \tfrac1{2!}x_3^2\,L_\mu(\mu_1)(x) = x_1 - \tfrac12 x_3^2.$$

Likewise, $L_\mu(\mu_2) = \mu_2 + 1$, yielding

$$\phi_2(x) = x_2 - x_3\,\mu_2(x) + \sum_{s \ge 2} \frac{(-1)^s x_3^s}{s!}\,(x_2 + x_3 + 1) = (x_2 + x_3 + 1)e^{-x_3} - 1,$$

and $\phi_3(x) = x_3$. To find the inverse, first notice that $\sum_{i=0}^{s-1} (-1)^i C_s^i\,\partial^i_{z_3}L^{s-i-1}(\mu_1)(z) = 0$ for $s \ge 3$, so that $\psi_1(z) = z_1 + \tfrac12 z_3^2$. From $\partial^i_{z_3}L^{s-i-1}(\mu_2)(z) = 0$ for all $i \ge 2$, we deduce

$$\sum_{i=0}^{s-1} (-1)^i C_s^i\,\partial^i_{z_3}L^{s-i-1}(\mu_2)(z) = L^{s-1}(\mu_2)(z) - s\,\partial_{z_3}L^{s-2}(\mu_2)(z) = z_2 + z_3 + 1 - s,$$

yielding

$$\psi_2(z) = z_2 + \sum_{s \ge 1} \frac{z_3^s}{s!}\,(z_2 + z_3 + 1) - \sum_{s \ge 1} \frac{z_3^s}{s!}\,s = (z_2 + 1)e^{z_3} - z_3 - 1.$$

The inverse $x = \psi(z)$ therefore reads

$$x_1 = \psi_1(z) = z_1 + \tfrac12 z_3^2, \qquad x_2 = \psi_2(z) = (z_2 + 1)e^{z_3} - z_3 - 1, \qquad x_3 = z_3.$$
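The straightening computed in Example 3.3 can be verified symbolically: applying the Jacobian of $\phi$ to $\mu$ must give the constant vector field $\partial/\partial z_3$. A minimal check:

```python
# Verify Example 3.3: phi straightens mu iff L_mu(phi_1) = L_mu(phi_2) = 0
# and L_mu(phi_3) = 1, i.e. (Dphi) * mu = (0, 0, 1)^T.
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
mu = sp.Matrix([x3, x2 + x3, 1])              # vector field of Example 3.3
phi = sp.Matrix([x1 - x3**2/2,
                 (x2 + x3 + 1)*sp.exp(-x3) - 1,
                 x3])

L = sp.simplify(phi.jacobian([x1, x2, x3]) * mu)
print(L)  # prints Matrix([[0], [0], [1]]): mu is rectified to d/dz3
```

The same one-line check applies to any candidate straightening diffeomorphism produced by the series (2.2).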
Assume $\Sigma$ is F-linearizable (let $\Sigma \mapsto \Sigma^{FB}_n$ and $x \mapsto \mathbf{x}^n$). There exists a sequence of explicit coordinate changes $\varphi^n(\mathbf{x}^n), \varphi^{n-1}(\mathbf{x}^{n-1}), \ldots, \varphi^2(\mathbf{x}^2)$ that gives rise to a sequence of $(FB)_k$-forms $\Sigma^{FB}_{n-1}, \Sigma^{FB}_{n-2}, \ldots, \Sigma^{FB}_1$ such that for any $2 \le k \le n$ we get $\Sigma^{FB}_{k-1} = (\varphi^k)_*\,\Sigma^{FB}_k$. Moreover, in the coordinates $z \mapsto \varphi^2(\mathbf{x}^2)$ the system (actually $\Sigma^{FB}_1$) takes the feedback form

$$(FB): \quad \dot z_1 = f_1(z_1, z_2), \qquad \dot z_2 = f_2(z_1, z_2, z_3), \qquad \ldots$$

The feedback form $(FB)$ is taken into the Brunovský form by

$$w = \tilde\varphi(z), \qquad u = \tilde\alpha(z) + \tilde\beta(z)v,$$

where

$$\tilde\varphi_1(z) = h(z), \qquad \tilde\varphi_2(z) = L_{\tilde f}(h), \qquad \ldots, \qquad \tilde\varphi_n(z) = L^{n-1}_{\tilde f}(h),$$

and

$$\tilde\alpha(z) = -\frac{L^n_{\tilde f}(h)}{L_{\tilde g}L^{n-1}_{\tilde f}(h)}, \qquad \tilde\beta(z) = \frac{1}{L_{\tilde g}L^{n-1}_{\tilde f}(h)}.$$

Here $z = \varphi(x)$ is the diffeomorphism taking $\Sigma$ into the feedback form $(FB)$, and $\Gamma = (\tilde\varphi, \tilde\alpha, \tilde\beta)$ is the transformation taking $(FB)$ into the Brunovský form $\Lambda_{Br}$.
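The transformation $\Gamma = (\tilde\varphi, \tilde\alpha, \tilde\beta)$ can be computed mechanically from a function $h$ by iterated Lie derivatives. A minimal sketch on an illustrative system (f, g, and h below are arbitrary choices of ours, not an example from the paper):

```python
# Sketch of the final step (FB) -> Brunovsky form: w = (h, L_f h, L_f^2 h),
# u = alpha(z) + beta(z)*v with alpha = -L_f^3 h / (L_g L_f^2 h) and
# beta = 1 / (L_g L_f^2 h).
import sympy as sp

z1, z2, z3 = sp.symbols('z1 z2 z3')
zv = [z1, z2, z3]
f = sp.Matrix([z2, z3 + z2**2, 0])   # illustrative drift
g = sp.Matrix([0, 0, 1])             # illustrative control field
h = z1                               # illustrative output

# Lie derivative of a scalar function along a vector field
Lf = lambda F, fun: (sp.Matrix([fun]).jacobian(zv) * F)[0]

w1, w2, w3 = h, Lf(f, h), Lf(f, Lf(f, h))
LgLf2h = Lf(g, w3)
alpha = -sp.simplify(Lf(f, w3)) / LgLf2h
beta = 1 / LgLf2h
print(w1, w2, w3, alpha, beta)
```

For this sketch one gets $w = (z_1, z_2, z_3 + z_2^2)$ with $\tilde\beta = 1$, and the feedback $u = \tilde\alpha + v$ yields the chain $\dot w_1 = w_2$, $\dot w_2 = w_3$, $\dot w_3 = v$.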
We deduce from (4.2) that the first $k-1$ components depend only on the variables $z_1, \ldots, z_k$ and the $k$th component depends on $z_1, \ldots, z_{k+1}$. On the other hand, (4.1) shows that the $j$th component ($j = k+1, \ldots, n$) depends on the variables $z_1, \ldots, z_{j+1}$. We thus conclude that

$$f_j(z) = \begin{cases} f_j(z_1, \ldots, z_k), & 1 \le j \le k-1,\\ f_j(z_1, \ldots, z_{j+1}), & k \le j \le n, \end{cases}$$

that is, the system is in $(FB)_{k-1}$-form. This completes the induction and the algorithm; consequently, we can construct a sequence of explicit coordinate changes $\varphi^n(\mathbf{x}^n), \varphi^{n-1}(\mathbf{x}^{n-1}), \ldots, \varphi^2(\mathbf{x}^2)$ whose composition $z = \varphi^2 \circ \cdots \circ \varphi^n(\mathbf{x}^n)$ takes the original system into the $(FB)$ form.

B. Summary of Algorithm. Start with a system $\Sigma : \dot x = f(x) + g(x)u$, $x \in \mathbb{R}^n$, $u \in \mathbb{R}$.

Step 0. Normalize the vector field $g \mapsto \tilde g = (0, \ldots, 0, 1)^T$ and apply a linear feedback to put the linearization in Brunovský form (not necessary, but strongly recommended).

Step $n-k$. If the condition

$$(F_{k+1}): \quad \frac{\partial^2 f_j}{\partial x_{k+1}^2} = \gamma_{n-k}(x)\,\frac{\partial f_j}{\partial x_{k+1}}, \qquad 1 \le j \le k,$$

fails (that is, no single function $\gamma_{n-k}(x)$ works for the first $k$ components), then the system is not feedback linearizable and the algorithm stops. If $(F_{k+1})$ is satisfied, then decompose the first $k$ components $f_1, \ldots, f_k$ as in (6.2), and apply Theorem II.2 ([33]) to construct a change of coordinates $z = \phi(x) \in \mathbb{R}^n$ rectifying the nonsingular vector field so obtained, so that

$$\sum_{j=k+1}^{n} f_j(x_1, \ldots, x_{j+1})\,\partial_{x_j} = \sum_{j=k+1}^{n} f_j(\psi(z))\,\partial_{z_j} \tag{4.1}$$

and

$$\sum_{j=1}^{k} f_j(x)\,\partial_{x_j} = \sum_{j=1}^{k} F_j(z_1, \ldots, z_k)\,\partial_{z_j} + \eta(\psi(z))\,\partial_{z_k}. \tag{4.2}$$

Example. Consider the system

$$\Sigma : \quad \dot x_1 = x_2(1 + x_3), \qquad \dot x_2 = x_3(1 + x_1) - x_2\,u, \qquad \dot x_3 = x_1 + (1 + x_3)u.$$
It follows that

$$z_2 = \phi_2(x) = x_2 + \sum_{s \ge 1} \frac{(-1)^s x_3^s}{s!}\,L^{s-1}(\mu_2)(x) = x_2(1 + x_3),$$

and

$$z_3 = \phi_3(x) = \sum_{s \ge 1} \frac{(-1)^{s+1} x_3^s}{s} = \int_0^{x_3} \frac{1}{1 + \sigma}\,d\sigma = \ln(1 + x_3).$$

Together with $z_1 = x_1$, the control vector field is rectified, and in the coordinates $z_1 = x_1$, $z_2 = x_2(1 + x_3)$, $z_3 = \ln(1 + x_3)$ the system becomes

$$\dot z_1 = z_2, \qquad \dot z_2 = (1 + z_1)e^{z_3}(e^{z_3} - 1) + z_1z_2e^{-z_3}, \qquad \dot z_3 = z_1e^{-z_3} + u.$$

The system is in $(FB)$-form and can be put into the linear Brunovský form $\Lambda_{Br}: \dot w_1 = w_2$, $\dot w_2 = w_3$, $\dot w_3 = v$ via

$$w_1 = h(z) = z_1, \qquad w_2 = L_{\tilde f}h(z) = z_2, \qquad w_3 = L^2_{\tilde f}h(z) = (1 + z_1)e^{z_3}(e^{z_3} - 1) + z_1z_2e^{-z_3}, \qquad v = L^3_{\tilde f}h(z) + L_{\tilde g}L^2_{\tilde f}h(z)\,u.$$

In the original coordinates the linearizing transformation reads

$$w_1 = x_1, \qquad w_2 = x_2(1 + x_3), \qquad w_3 = x_3(1 + x_1)(1 + x_3) + x_1x_2,$$

with

$$\dot w_3 = x_2(1 + x_3)(x_2 + x_3 + x_3^2) + x_1(1 + x_1)(1 + 3x_3) + \bigl[(1 + x_1)(1 + x_3)(1 + 2x_3) - x_1x_2\bigr]u.$$

Example. Consider the system

$$\Sigma : \quad \dot x_1 = x_2 - x_4^2, \qquad \dot x_2 = x_4 + 2x_1^2x_4 + 2x_4\,u, \qquad \dot x_3 = x_1^2, \qquad \dot x_4 = x_1 + x_4^2 + u.$$

Rectifying the control vector field ($z_1 = x_1$, $z_2 = x_2 - x_4^2$, $z_3 = x_3$, $z_4 = x_4$) gives $\dot z = \tilde f(z) + \tilde g(z)u$ with $\tilde g = (0, 0, 0, 1)^T$ and

$$\tilde f(z) = (z_2,\ z_4 - 2z_1z_4 + 2z_1^2z_4 - 2z_4^3,\ z_1^2,\ z_1 + z_4^2)^T.$$

Clearly, the condition $\frac{\partial^2 \tilde f_j}{\partial z_4^2} = \gamma_1(z)\,\frac{\partial \tilde f_j}{\partial z_4}$, $1 \le j \le 3$, fails; the system is therefore not feedback linearizable.
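The claims of the first example can be checked symbolically: the $u$-terms cancel in $\dot w_1$ and $\dot w_2$, and the coefficient of $u$ in $\dot w_3$ is the bracketed factor. A minimal check:

```python
# Check: with w1 = x1, w2 = x2*(1+x3), w3 = x3*(1+x1)*(1+x3) + x1*x2,
# the u-terms cancel in w1' and w2', and the u-coefficient in w3' equals
# (1+x1)*(1+x3)*(1+2*x3) - x1*x2.
import sympy as sp

x1, x2, x3, u = sp.symbols('x1 x2 x3 u')
xv = [x1, x2, x3]
f = sp.Matrix([x2*(1 + x3), x3*(1 + x1), x1])   # drift
g = sp.Matrix([0, -x2, 1 + x3])                 # control vector field
w = sp.Matrix([x1,
               x2*(1 + x3),
               x3*(1 + x1)*(1 + x3) + x1*x2])

wdot = (w.jacobian(xv) * (f + g*u)).expand()    # chain rule along the dynamics
print(sp.simplify(wdot[0] - w[1]))              # prints 0: w1' = w2
print(sp.simplify(wdot[1] - w[2]))              # prints 0: w2' = w3
print(sp.factor(sp.diff(wdot[2], u)))           # u-coefficient in w3'
```

The last line reproduces the factor $(1+x_1)(1+x_3)(1+2x_3) - x_1x_2$ multiplying $u$ above.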
Example. Consider the system

$$\Sigma : \quad \dot x_1 = e^{x_2}u, \qquad \dot x_2 = x_1 + x_2^2 + e^{x_2}u, \qquad \dot x_3 = x_1x_2.$$

The algorithm yields

$$z_1 = \phi_1(x) = x_1 + \sum_{s \ge 1} \frac{(-1)^s x_2^s}{s!}\,L^{s-1}(\mu_1)(x) = x_1 - x_2\,\mu_1(x) = x_1 - x_2.$$

To compute $\phi_2$, notice that $L^{s-1}(\mu_2)(x) = (-1)^{s-1}e^{-x_2}$ for all $s \ge 2$. It thus follows that

$$z_2 = \phi_2(x) = \sum_{s \ge 1} \frac{(-1)^{s+1} x_2^s}{s!}\,L^{s-1}(\mu_2)(x) = \sum_{s \ge 1} \frac{x_2^s}{s!}\,e^{-x_2} = 1 - e^{-x_2},$$

with $z_3 = x_3$. The first linearizing functions are then computed from the output $h = x_3$: $w_1 = x_3$ and $w_2 = \dot w_1 = x_1x_2$.

Recall, finally, the condition used at each step of the feedback algorithm:

$$(F_{k+1}): \quad \frac{\partial^2 f_j}{\partial x_{k+1}^2} = \gamma_{n-k}(x)\,\frac{\partial f_j}{\partial x_{k+1}}, \qquad 1 \le j \le k, \tag{6.1}$$

where the functions $\gamma_{n-j}(x)$ are built from the products $\prod_{i=1}^{j} \frac{\partial f_{n-i}}{\partial x_{n-i+1}}$ of partial derivatives of the components of the drift.
Below we first give a brief proof of the constructive approach for rectifying nonsingular vector fields (Theorem 2.2), and we later address the convergence of the series.

Proof of Theorem 2.2 (i). Notice that for any diffeomorphism $z = \phi(x)$ the two following conditions are equivalent:

(a) $\phi_*(\mu)(z) = \partial_{z_n}$;

(b) $L_\mu(\phi_j)(x) = 0$ for $1 \le j \le n-1$ and $L_\mu(\phi_n)(x) = 1$.

For that reason we will show that condition (b) holds. For $1 \le j \le n-1$,

$$L_\mu(\phi_j)(x) = L_\mu(x_j) + \sum_{s \ge 1} \frac{(-1)^s x_n^s}{s!}\,L_\mu L^{s-1}(\mu_j)(x) + \sum_{s \ge 1} \frac{(-1)^s x_n^{s-1}}{(s-1)!}\,\mu_n(x)\,L^{s-1}(\mu_j)(x),$$

where the last sum comes from differentiating the coefficients along $\mu$, using $L_\mu(x_n^s) = s\,x_n^{s-1}\mu_n(x)$. Since $L_\mu(x_j) = \mu_j(x)$, consecutive terms of the two series cancel pairwise and $L_\mu(\phi_j)(x) = 0$. A similar telescoping computation, using the function $\nu_n(x)$ defined by $\nu_n(x)\,\mu_n(x) = 1$, gives

$$L_\mu(\phi_n)(x) = \mu_n(x)\,\nu_n(x) = 1.$$

For a general index $k$ the formulas follow from the case $k = n$ through the transposition of coordinates

$$\tilde x_j = x_j \ (j \ne k, n), \qquad \tilde x_k = x_n, \qquad \tilde x_n = x_k.$$

Proof of the decomposition used at Step $n-k$ of the feedback algorithm. Notice that $\gamma_{n-k} = \gamma_{n-k}(x_1, \ldots, x_{k+1})$ depends exclusively on the variables $x_1, \ldots, x_{k+1}$, since the components $f_j$ depend only on such variables. Integrating (6.1) twice with respect to $x_{k+1}$ shows that there exist functions $F_j(x)$ and $\eta_j(x)$, $1 \le j \le k$, such that

$$f_j(x_1, \ldots, x_{k+1}) = F_j(x_1, \ldots, x_k) + \eta_j(x_1, \ldots, x_k) \int_0^{x_{k+1}} \exp\left( \int_0^{t} \gamma_{n-k}(x_1, \ldots, x_k, s)\,ds \right) dt. \tag{6.2}$$

The necessity part of the theorem is obtained by computing the brackets $\bigl[\operatorname{ad}^{n-k}_f g,\ \operatorname{ad}^{n-k-1}_f g\bigr]$ from this decomposition.
Convergence. Let us first introduce some useful notation. For any $x \in \mathbb{R}^n$ we put $x = (x_1, \ldots, x_n)$. For the subset $\mathbb{N}^n \subset \mathbb{R}^n$ of $n$-tuples of integers we use a bolded variable to denote its elements. Given two $n$-tuples $\mathbf{m} = (m_1, \ldots, m_n)$ and $\boldsymbol{\ell} = (\ell_1, \ldots, \ell_n)$, we say that $\mathbf{m} \le \boldsymbol{\ell}$ if and only if $m_i \le \ell_i$ for all $1 \le i \le n$, and we denote $\mathbf{m}! = m_1! \cdots m_n!$ and $\boldsymbol{\ell}^{\mathbf{m}} = \ell_1^{m_1} \cdots \ell_n^{m_n}$. By extension, for $x = (x_1, \ldots, x_n) \in \mathbb{R}^n$ we put $x^{\mathbf{m}} = x_1^{m_1} \cdots x_n^{m_n}$ and $|\mathbf{m}| = m_1 + \cdots + m_n$. Let $f$ be an analytic function with Taylor series expansion $f(x) = \sum_{\mathbf{m}} f_{\mathbf{m}}\,x^{\mathbf{m}}$, where $f_{\mathbf{m}} = \frac{1}{\mathbf{m}!}\frac{\partial^{|\mathbf{m}|} f}{\partial x^{\mathbf{m}}}(0)$ are constant coefficients, and let $\mu = \mu_1\,\partial_{x_1} + \cdots + \mu_n\,\partial_{x_n}$ be an analytic vector field. For any $c > 0$ we define the norm $\|\cdot\|_c$ by

$$\|f\|_c = \sum_{\mathbf{m}} |f_{\mathbf{m}}|\,c^{|\mathbf{m}|},$$

and we extend the norm to vector fields componentwise. With these notations, Lemma 6.1 below yields, for $0 < \bar c < c$, the estimate

$$\|L^{s-1}_\nu(f)\|_{\bar c} \le (s-1)!\,\bigl(\ln(c/\bar c)\bigr)^{-s+1}\,\|f\|_c\,\|\nu\|_c^{\,s-1}, \tag{6.3}$$

which controls each term of the series (2.2).
Evaluating the transformed quantities on the slice $w = 0$ produces the inner binomial sums

$$\sum_{i=0}^{s-1} (-1)^i C_s^i\,\partial^i_{z_n} L^{s-i-1}(\nu_j)(z)$$

of the inverse formula (2.3): this is how derivatives with respect to the straightened variable $w$ are converted into derivatives with respect to $z_n$. The norm of the series $\phi_j(x)$ can then be estimated by

$$\|\phi_j\|_{\bar c} \le \bar c + \sum_{s \ge 1} \frac{\bar c^{\,s}}{s!}\,\bigl\|L^{s-1}(\nu_j)\bigr\|_{\bar c},$$

a geometric series by (6.3); it converges whenever $\|\nu\|_c / \ln(c/\bar c) < 1$, that is, if we choose $\bar c < c\,e^{-\|\nu\|_c}$.

(ii) To prove the convergence of the series

$$\psi_j(z) = z_j + \sum_{s \ge 1} \frac{z_n^s}{s!} \left( \sum_{i=0}^{s-1} (-1)^i C_s^i\,\partial^i_{z_n} L^{s-i-1}(\nu_j)(z) \right),$$

we bound

$$\|\psi_j\|_{\bar c} \le \bar c + \sum_{s \ge 1} \frac{\bar c^{\,s}}{s!} \sum_{i=0}^{s-1} C_s^i\,\bigl\|\partial^i_{z_n} L^{s-i-1}(\nu_j)\bigr\|_{\bar c};$$

by part (iii) of Lemma 6.1, the inner sum is bounded by $s!\,(\ln(c/\bar c))^{-s}\,(1 + \|\nu\|_c)^s$, so the series converges when $\bar c\,(1 + \|\nu\|_c)/\ln(c/\bar c) < 1$.

Lemma 6.1. Let $f$ be an analytic function and $\mu$ an analytic vector field.

(i) The iterated Lie derivative expands as

$$L^s_\mu(f) = \sum_{J} \sum \partial^{\boldsymbol{\ell}_0}(f)\,\partial^{\boldsymbol{\ell}_1}(\mu_{j_1}) \cdots \partial^{\boldsymbol{\ell}_s}(\mu_{j_s}), \tag{6.4}$$

where $J = \{j_1, \ldots, j_s,\ 1 \le j_i \le n\}$ and the second summation is taken over $n$-tuples $\boldsymbol{\ell}_i$, $i = 0, 1, \ldots, s$, with $|\boldsymbol{\ell}_0| \ge 1$ and $|\boldsymbol{\ell}_0| + |\boldsymbol{\ell}_1| + \cdots + |\boldsymbol{\ell}_s| = s$. Here, for an $n$-tuple $\boldsymbol{\ell} = (\ell_1, \ldots, \ell_n)$,

$$\partial^{\boldsymbol{\ell}}(f) = \frac{\partial^{\ell_1 + \cdots + \ell_n} f}{\partial x_1^{\ell_1} \cdots \partial x_n^{\ell_n}}.$$

(ii) Let the Taylor expansions of the analytic functions $f, \mu_{j_1}, \ldots, \mu_{j_s}$ be represented by $f(x) = \sum_{\mathbf{m}_0} f_{\mathbf{m}_0}\,x^{\mathbf{m}_0}$ and $\mu_{j_i}(x) = \sum_{\mathbf{m}_i} (\mu_{j_i})_{\mathbf{m}_i}\,x^{\mathbf{m}_i}$. Bounding each factor of (6.4) by its weighted norm and estimating the number of terms through

$$\sup_{t \ge 0}\ t^s\,e^{-t\ln(c/\bar c)} = \left(\frac{s}{e\,\ln(c/\bar c)}\right)^{s} \le s!\,\bigl(\ln(c/\bar c)\bigr)^{-s}$$

yields, for $0 < \bar c < c$,

$$\|L^s_\mu(f)\|_{\bar c} \le s!\,\bigl(\ln(c/\bar c)\bigr)^{-s}\,\|f\|_c\,\|\mu\|_c^{\,s}.$$
homotopy operator approach, Proceedings of the American Control Conference, Baltimore, Maryland (1994), pp. 1690-1694.
[4] R. W. Brockett, Feedback invariants for nonlinear systems, in Proc. IFAC Congress, Helsinki, 1978.
[5] G. O. Guardabassi, S. M. Savaresi, Approximate linearization via feedback: an overview, Automatica, vol. 37 (2001), pp. 1-15.
[6] K. Guemghar, B. Srinivasan, D. Bonvin, Approximate input-output linearization of nonlinear systems using the observability normal form, European Control Conference (2003).
[7] Q. Gong, W. Kang and I. M. Ross, A pseudospectral method for the optimal control of constrained feedback linearizable systems, IEEE Trans. Autom. Contr., 51 (2006), pp. 1115-1129.
[8] L. Guzzella, A. Isidori, On approximate linearization of nonlinear control systems, International Journal of Robust and Nonlinear Control, vol. 3(3), pp. 261-276.
[9] J. Hauser, S. Sastry, P. Kokotovic, Nonlinear control via approximate input-output linearization: The ball and beam example, IEEE Trans. Autom. Contr., 37:3 (1992), pp. 392-398.
[10] M.-T. Ho, Y.-W. Tu, and H.-S. Lin, Controlling a Ball and Wheel System using Full-State-Feedback Linearization: A Testbed for Nonlinear Control Design, IEEE Control Systems Magazine, October 2009, vol. 29(5), pp. 93-101.
[11] L. R. Hunt and R. Su, Linear equivalents of nonlinear time varying systems, in Proceedings of Mathematical Theory of Networks & Systems, Santa Monica, CA, USA (1981), pp. 119-123.
[12] A. Isidori, Nonlinear Control Systems, 3rd ed., Springer, London, 1995.
[13] B. Jakubczyk and W. Respondek, On linearization of control systems, Bull. Acad. Polon. Sci. Ser. Math., 28 (1980), pp. 517-522.
[14] T. Kailath, Linear Systems, Prentice Hall Information and System Sciences Series, USA, 1980.
[15] W. Kang, Approximate linearization of nonlinear control systems, Systems & Control Letters, vol. 23:1 (1994), pp. 43-52.
[16] A. J. Krener, On the equivalence of control systems and the linearization of nonlinear systems, SIAM Journal on Control, 11 (1973), pp. 670-676.
[17] A. J. Krener, Approximate linearization by state feedback and coordinate change, Systems & Control Letters, vol. 5(3) (1984), pp. 181-185.
[18] M. Krstic, Feedback linearizability and explicit integrator forwarding controllers for classes of feedforward systems, IEEE Trans. Automat. Control, 49 (2004), pp. 1668-1682.
[19] M. Krstic, Explicit Forwarding Controllers Beyond Linearizable Class, in Proceedings of the 2005 American Control Conference, Portland, Oregon, USA (2005), pp. 3556-3561.
$$\|L^s_\mu(f)\|_{\bar c} \le s!\,\bigl(\ln(c/\bar c)\bigr)^{-s}\,\|f\|_c\,\|\mu\|_c^{\,s}.$$

Formula (6.3) follows directly if we replace $s$ by $s-1$ and apply the bound to the component being rectified.

(iii) Consider (6.4) with $s$ replaced by $t-i$, that is,

$$L^{t-i}_\mu(f) = \sum\sum \partial^{\boldsymbol{\ell}_0}(f)\,\partial^{\boldsymbol{\ell}_1}(\mu_{j_1}) \cdots \partial^{\boldsymbol{\ell}_{t-i}}(\mu_{j_{t-i}}),$$

with $|\boldsymbol{\ell}_0| \ge 1$ and $|\boldsymbol{\ell}_0| + |\boldsymbol{\ell}_1| + \cdots + |\boldsymbol{\ell}_{t-i}| = t-i$. Differentiating $i$ times with respect to $x_n$ we get

$$\partial^i_{x_n} L^{t-i}_\mu(f) = \sum\sum \partial^{\bar{\boldsymbol{\ell}}_0}(f)\,\partial^{\bar{\boldsymbol{\ell}}_1}(\mu_{j_1}) \cdots \partial^{\bar{\boldsymbol{\ell}}_{t-i}}(\mu_{j_{t-i}}), \tag{6.5}$$

with $|\bar{\boldsymbol{\ell}}_0| \ge 1$ and $|\bar{\boldsymbol{\ell}}_0| + |\bar{\boldsymbol{\ell}}_1| + \cdots + |\bar{\boldsymbol{\ell}}_{t-i}| = t$. Following the same steps as in Lemma 6.1 (ii) we get

$$\|\partial^i_{x_n} L^{t-i}_\mu(f)\|_{\bar c} \le t!\,\bigl(\ln(c/\bar c)\bigr)^{-t}\,\|f\|_c\,\|\mu\|_c^{\,t-i}.$$

Notice that the power $t-i$ on the last factor is due to the fact that only $t-i$ factors involve the components of the vector field $\mu$.
Conclusion

In this paper we provided algorithms that compute (feedback) linearizing coordinates for single-input control systems. The algorithms are based on the successive rectification of one vector field at a time using explicit convergent power series of functions. The algorithms do not require an a priori check of the (feedback) linearization conditions of Theorem 1.1 (which are usually very