
Southern Illinois University Carbondale
OpenSIUC

Articles and Preprints — Department of Mathematics

6-19-2010

State and Feedback Linearizations of Single-Input Control Systems

Issa Amadou Tall
Southern Illinois University Carbondale, itall@math.siu.edu

Follow this and additional works at: http://opensiuc.lib.siu.edu/math_articles

Published in Tall, I. A. (2010). State and feedback linearizations of single-input control systems. Systems & Control Letters, 59(7), 449-451. doi: 10.1016/j.sysconle.2010.05.006

Recommended Citation: Tall, Issa A. "State and Feedback Linearizations of Single-Input Control Systems." (Jun 2010).

This Article is brought to you for free and open access by the Department of Mathematics at OpenSIUC. It has been accepted for inclusion in Articles and Preprints by an authorized administrator of OpenSIUC. For more information, please contact opensiuc@lib.siu.edu.

State and Feedback Linearizations of Single-Input Control Systems


Issa Amadou Tall
Southern Illinois University Carbondale, MC 4408, 1245 Lincoln Drive, Carbondale, IL 62901, USA, itall@math.siu.edu.


Abstract

In this paper we address the problems of state (resp. feedback) linearization of nonlinear single-input control systems using state (resp. feedback) coordinate transformations. Although necessary and sufficient geometric conditions were provided in the early eighties, finding the state (resp. feedback) linearizing coordinates still requires solving systems of partial differential equations. We provide here a solution to both problems by defining algorithms that compute explicitly the linearizing state (resp. feedback) coordinates for any nonlinear control system that is indeed state (resp. feedback) linearizable. Each algorithm is performed in a maximum of n − 1 steps (n being the dimension of the system), and both are made possible by an explicit solution of the flow-box, or straightening, theorem. We illustrate with several examples borrowed from the literature.
1. Introduction and Preliminaries
In the late seventies and early eighties the problem of transforming a nonlinear control system, via change of coordinates and feedback, into a linear one was introduced; it is known today as feedback linearization. Feedback classification was first applied to linear systems, for which a complete picture has been obtained: the controllability, observability, reachability, and realization of linear systems are expressed in very simple algebraic terms. A crucial property of linear controllable systems is that they can be stabilized by linear feedback controllers. Because of the simplicity of their analysis and design, because several physical systems can be modeled by linear dynamics, and because some nonlinear phenomena are just hidden linear systems, it is not surprising that the linearization problems were (and still are) of paramount importance and have attracted much attention. Uncovering the hidden linear properties of nonlinear control systems turns out to be useful in analyzing those systems, though some global properties might be lost in the process. This paper proposes a way of finding the linearizing coordinates. To give a brief description of the linearization problems, we first recall some basic facts about linear systems.
1.1. Linear Systems
We consider linear systems of the form

    Λ :  ẋ = Fx + Gu = Fx + ∑_{i=1}^{m} G_i u_i,    y = Hx,

where x ∈ R^n, Fx and G_1, ..., G_m are, respectively, linear and constant vector fields on R^n, Hx is a linear vector field with values in R^p, and u = (u_1, ..., u_m)^T ∈ R^m.

Preprint submitted to Systems & Control Letters

To any linear system Λ we attach two geometric objects: (a) the controllability space

    C_n = span [G  FG  ···  F^{n−1}G],

an n × (nm) matrix whose columns are those of the matrices F^{i−1}G, i = 1, ..., n, and (b) the observability space

    O_n = span [H^T  (HF)^T  ···  (HF^{n−1})^T]^T,

an (np) × n matrix whose rows are those of the matrices HF^{i−1}, i = 1, ..., n. The system is controllable (resp. observable) if and only if dim C_n = n (resp. rank O_n = n).
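The two rank tests can be checked numerically. A minimal sketch (the system below is illustrative, not taken from the paper):

```python
import numpy as np

# Illustrative single-input, single-output linear system on R^3
# (m = p = 1); not one of the paper's examples.
n = 3
F = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-1.0, -2.0, -3.0]])
G = np.array([[0.0], [0.0], [1.0]])
H = np.array([[1.0, 0.0, 0.0]])

# Controllability space C_n = [G  FG  ...  F^{n-1}G]
C = np.hstack([np.linalg.matrix_power(F, i) @ G for i in range(n)])
# Observability space O_n = [H; HF; ...; HF^{n-1}]
O = np.vstack([H @ np.linalg.matrix_power(F, i) for i in range(n)])

# dim C_n = n and rank O_n = n: controllable and observable
print(np.linalg.matrix_rank(C), np.linalg.matrix_rank(O))  # 3 3
```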
By a linear change of coordinates x̃ = Tx and a linear feedback u = Kx + Lv, where T, K, and L are matrices of appropriate sizes, T and L being invertible, the system is transformed into an equivalent linear system

    Λ̃ :  ẋ̃ = F̃x̃ + G̃v,    y = H̃x̃,

with F̃x̃ = T(F + GK)T^{−1}x̃, G̃ = TGL, and H̃ = HT^{−1}. It is shown in the literature [2], [14] that the dimension of C_n and the rank of O_n (hence the controllability and observability) are two invariants of the feedback classification of linear systems. The problem of feedback classification for linear systems is to find linear state coordinates w = Tx and a linear feedback u = Kx + Lv that map Λ into a simpler linear system Λ̃. It is a classical result of linear control theory (see, e.g., [2], [14]) that any linear controllable system is feedback equivalent to the following Brunovský canonical form (single-input case):

    Λ_Br :  ẇ = Aw + bv,    w ∈ R^n,  v ∈ R,
April 28, 2010

where

        | 0 1 0 ··· 0 |         | 0 |
        | 0 0 1 ··· 0 |         | 0 |
    A = | ⋮ ⋮ ⋮ ⋱ ⋮ | ,    b = | ⋮ | ,
        | 0 0 0 ··· 1 |         | 0 |
        | 0 0 0 ··· 0 |         | 1 |

that is, A is the n × n upper-shift matrix (ones on the superdiagonal, zeros elsewhere) and b = (0, ..., 0, 1)^T.
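As a quick sanity check, the Brunovský pair (A, b) just displayed is controllable for every n; a small numerical sketch:

```python
import numpy as np

# Brunovsky pair: A is the upper-shift matrix, b = (0, ..., 0, 1)^T.
n = 5
A = np.eye(n, k=1)
b = np.zeros((n, 1))
b[-1, 0] = 1.0

# [b, Ab, ..., A^{n-1} b] = [e_n, e_{n-1}, ..., e_1] is a column
# permutation of the identity, hence the pair is controllable.
C = np.hstack([np.linalg.matrix_power(A, i) @ b for i in range(n)])
assert np.linalg.matrix_rank(C) == n
```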

When K = 0 and L = 1, that is, when only a linear change of coordinates is applied, the system Λ_Br is replaced by Λ_λ : ẇ = A_λ w + bv, w ∈ R^n, v ∈ R, where A_λ is the matrix A with the first column replaced by λ = (λ_1, ..., λ_n)^T. In the case of multi-input linear control systems, we can find positive integers κ_1 ≥ ··· ≥ κ_m with ∑_{i=1}^{m} κ_i = n (called controllability, Brunovský, or Kronecker indices) such that Λ_Br is a cascade of single-input linear systems Λ^1_Br, ..., Λ^m_Br:

    Λ^i_Br :  ẇ^i = A_i w^i + b_i v_i,    w^i ∈ R^{κ_i},  v_i ∈ R,

with A = diag{A_1, ..., A_m} and b = diag{b_1, ..., b_m}. For a complete description and geometric interpretation of the Brunovský controllability indices we refer to the literature [2], [11], [12], [13], [14], [25] and references therein.

1.2. Nonlinear Systems and Linearization Problems

Consider a smooth (resp. analytic) control-affine system

    Σ :  ẋ = f(x) + g(x)u = f(x) + ∑_{i=1}^{m} g_i(x)u_i,    x ∈ R^n,

around an equilibrium (x_e, u_e), that is, f(x_e) + g(x_e)u_e = 0. We assume that f, g_1, ..., g_m are smooth (resp. analytic) and that (x_e, u_e) = (0, 0) ∈ R^n × R^m, or simply f(0) = 0. Let

    Σ̃ :  ẋ̃ = f̃(x̃) + g̃(x̃)v = f̃(x̃) + ∑_{i=1}^{m} g̃_i(x̃)v_i,    x̃ ∈ R^n,

be another smooth (resp. analytic) control-affine system. The systems Σ and Σ̃ are called feedback equivalent if there exists a transformation

    Γ :  x̃ = φ(x),    u = α(x) + β(x)v

that maps Σ into Σ̃, that is, such that

    (PDEs)   dφ(x)·(f(x) + g(x)α(x)) = f̃(φ(x)),    dφ(x)·(g(x)β(x)) = g̃(φ(x)).

We will briefly write Γ = (φ, α, β) and put Σ̃ = Γ(Σ). When α ≡ 0 and β ≡ id_m, that is, when only a change of coordinates is applied, we say that Σ and Σ̃ are state equivalent, and we simply write Σ̃ = φ(Σ). The following two problems were considered in the late 1970s by Brockett [4] and Krener [16].

Problem 1. When does there exist a local diffeomorphism w = φ(x), defining new coordinates w = (w_1, ..., w_n)^T, in which the transformed system takes the linear form

    Λ :  ẇ = Fw + Gu = Fw + ∑_{i=1}^{m} G_i u_i,    w ∈ R^n,  u ∈ R^m ?

Problem 2. When does there exist a (local) feedback transformation Γ = (φ, α, β) that takes Σ into a linear system

    Λ :  ẇ = Aw + Bv = Aw + ∑_{i=1}^{m} b_i v_i,    w ∈ R^n,  v ∈ R^m ?

When Problem 1 (resp. Problem 2) is solvable, the system is called state linearizable, shortly S-linearizable (resp. feedback linearizable, shortly F-linearizable). Problem 1 was completely solved by Krener [16], and Problem 2 partially by Brockett [4] for m = 1 and β constant. A generalization was obtained independently by Hunt and Su [11] and by Jakubczyk and Respondek [13], who gave necessary and sufficient geometric conditions in terms of Lie brackets of the vector fields defining the system. Indeed, attach to Σ the sequence of nested distributions D^1 ⊂ D^2 ⊂ ··· ⊂ D^n, where

    D^k = span { ad^q_f g_i :  0 ≤ q ≤ k − 1,  1 ≤ i ≤ m },    k = 1, ..., n,

with ad^0_f g_i = g_i and ad^l_f g_i = [f, ad^{l−1}_f g_i] for all l ≥ 1.

Theorem 1.1 (i) A control system Σ : ẋ = f(x) + g(x)u is locally state equivalent to a linear controllable system Λ : ẇ = Fw + Gu if and only if

    (S1)  dim span {g(x), ad_f g(x), ..., ad^{n−1}_f g(x)} = n;
    (S2)  [ad^q_f g, ad^r_f g] = 0,  0 ≤ q < r ≤ n.

(ii) A control system Σ : ẋ = f(x) + g(x)u is locally equivalent, via a feedback transformation Γ = (φ, α, β), to a linear controllable system Λ : ẇ = Aw + bv if and only if

    (F1)  dim span {g(x), ad_f g(x), ..., ad^{n−1}_f g(x)} = n;
    (F2)  D^{n−1} is involutive, that is, [D^{n−1}, D^{n−1}] ⊂ D^{n−1}.

If the transformation Γ = (φ, α, β) linearizes Σ, then (PDEs) should hold with f̃(φ(x)) = Aφ(x) and g̃(φ(x)) = B. Although the conditions (S1) and (S2) (resp. (F1) and (F2)) provide a way of testing the state (resp. feedback) linearizability of a system, they offer little on how to find the state (resp. feedback) linearizing group except by solving (PDEs), which is, in general, not straightforward. Indeed, for the single-input case, the solvability of (PDEs) is equivalent to finding a function h with h(0) = 0 such that

    L_g(h) = 0,  L_g L_f(h) = 0,  ...,  L_g L^{n−2}_f(h) = 0,  L_g L^{n−1}_f(h) ≠ 0,

where, for any vector field τ and any function h, L_τ(h) = (∂h/∂x)·τ(x) is the Lie derivative of h along τ. We propose here to give a complete solution to both problem 1 and
problem 2 without solving the corresponding partial differential equations. We will provide an algorithm giving explicit solutions in each case. Recall that we previously obtained explicit solutions for a few subclasses of control-affine systems, namely strict feedforward forms, strict feedforward nice forms, and feedforward forms, for which linearizing coordinates were found without solving the corresponding PDEs (see [28], [29], [31]). Indeed, for those subclasses we exhibited algorithms that can be performed using a maximum of n(n+1)/2 steps, each involving composition and integration of functions only (but not solving PDEs), followed by a sequence of n + 1 derivations. What played the main role in finding those algorithms was the strict feedforward structure, that is, the fact that each component of the system depends only on higher variables. In this paper we consider general control-affine systems, for which we provide state and feedback linearizing algorithms that can each be implemented in a maximum of n steps. Those algorithms are based, in part, on the explicit solution of the flow-box theorem [32], and they differ completely from those outlined in [28], [31] (see also [18], [19]). Another approach was proposed in [24], based on successive integrations of differential 1-forms. It relies on the successive rectification of vector fields via the characteristic method, using quotient manifolds in order to reduce, at each step, the dimension of the system by one. The difference between our approach and the latter is twofold: (a) explicit formulas are given in terms of convergent series without solving any PDE or ODE; (b) the algorithm provides a sequence of control-affine systems without restriction to any manifold or performing a quotient along some direction. We address here the single-input case; the generalization to multi-input control systems is under consideration and expected to appear elsewhere. Let us mention that linearization techniques have been very useful and are still of interest nowadays.
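The Lie-bracket tests (S1)-(S2) of Theorem 1.1 are mechanical to verify with a computer algebra system. A minimal sketch on a three-dimensional single-input system (the vector fields below are chosen for illustration):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
x = sp.Matrix([x1, x2, x3])

def bracket(f, g):
    # Lie bracket [f, g] = (dg/dx) f - (df/dx) g
    return g.jacobian(x) * f - f.jacobian(x) * g

# Illustrative system (n = 3, m = 1)
f = sp.Matrix([x2 - 2*x2*x3 + x3**2, x3, 0])
g = sp.Matrix([4*x2*x3, -2*x3, 1])

ad1 = sp.simplify(bracket(f, g))        # ad_f g
ad2 = sp.simplify(bracket(f, ad1))      # ad_f^2 g

# (S1): g, ad_f g, ad_f^2 g span R^3
assert sp.Matrix.hstack(g, ad1, ad2).rank() == 3
# (S2): the brackets among g, ad_f g, ad_f^2 g all vanish
for a in (g, ad1, ad2):
    for c in (g, ad1, ad2):
        assert sp.simplify(bracket(a, c)) == sp.zeros(3, 1)
```

Since both conditions hold, this system is S-linearizable by Theorem 1.1.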
If Problem 1 or Problem 2 is solvable with a controllable pair (A, b), then the equilibrium of Σ can be stabilized by the feedback law

    u = β(x)^{−1} ( −α(x) + ∑_{j=1}^{n} k_j φ_j(x) ),

where the polynomial p(λ) = λ^n + ∑_{j=1}^{n} k_j λ^{j−1} is Hurwitz. This can be used to improve the dynamical behavior of chaotic systems, as can be seen for the Lorenz control system in [26].

Feedback linearization techniques have also been applied to optimal control problems (e.g., minimizing time) and have regained some interest recently. In [7] the authors used a pseudospectral method to solve optimal control problems of feedback linearizable dynamics subject to mixed state and control constraints. As mentioned by the authors, such problems frequently arise in astronautical applications where stringent performance requirements demand optimality over feedback linearizing controls. Mayer's problem has also been considered in [1] (see also [26]), and an optimal solution obtained for globally feedback linearizable time-invariant systems subject to path and actuator constraints. Recall that Mayer's problem consists of determining u(t) and x(t), with t ∈ [t_0, t_f], that minimize a functional cost J = φ(x(t_f), t_f) subject to the dynamics ẋ = f(x) + g(x)u and inequality constraints s(x, u) ≤ 0, c(x) ≤ 0, when initial states are given and terminal states satisfy ψ(x(t_0), x(t_f)) = 0. In all these problems, however, either the dynamics are assumed to be already linear or linearizing coordinates are known through the natural outputs.

Let us mention that, due to the difficulty of solving the partial differential equations on the one hand, and to the fact that many systems are not feedback linearizable on the other, exact feedback linearization has been extended in various ways. The notions of partial linearization, approximate linearization, pseudo-linearization, extended linearization, etc., have been introduced in the literature [3], [5], [6], [8], [9], [15], [17], [21], [36] to offset the difficulties associated with exact linearization. Partial linearization is sought when the system fails to satisfy the integrability conditions; it relies on the idea of finding the largest subsystem that can be linearized. Approximate linearization was first developed in [17] and later generalized in [15] using Taylor series expansions up to some degree. The changes of coordinates and feedback obtained in this case are polynomial and linearize the system up to some degree; they are obtained by a step-by-step algorithm or by the use of outputs of the system defining a relative degree. In many of the methods proposed, the integrability conditions are either weakened or applied to a specific class of systems (we refer the interested reader to [5] for a survey and to the references therein).

The paper is organized as follows. In Section 1.3 we give some definitions and notations to be used throughout the paper. The first main result, on state linearization, is given in Section 2, where an algorithm is presented; the feedback case is considered in Section 4. Illustrative examples follow each result and are given in Section 3 and Section 5. A constructive solution of the flow-box theorem, as well as the convergence of the series, is presented in Section 6, followed by a conclusion.

1.3. Notations and Definitions

For simplicity of exposition we first consider single-input control systems

    Σ :  ẋ = f(x) + g(x)u,    x ∈ R^n,  u ∈ R.
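As recalled in the introduction, once linearizing coordinates are available, stabilization reduces to pole placement for the Brunovský pair (A, b). A numerical sketch for n = 3 (the gains below are illustrative):

```python
import numpy as np

# In the linearizing coordinates w, the system reads w' = Aw + bv, so a
# stabilizing linear feedback v = k^T w is found by pole placement.
n = 3
A = np.eye(n, k=1)
b = np.zeros((n, 1))
b[-1, 0] = 1.0

# A + b k^T is in companion form; k = (-1, -3, -3) gives the
# characteristic polynomial (s + 1)^3, which is Hurwitz.
k = np.array([[-1.0, -3.0, -3.0]])
eigs = np.linalg.eigvals(A + b @ k)
assert np.all(eigs.real < 0)   # closed loop is asymptotically stable
```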
The case of multi-input systems is more involved and will be addressed elsewhere. Let 0 ≤ k ≤ n − 1 be an integer.

Definition 1.2 We say that Σ is Brunovský k-linear if

    g(x) = b,  ad_f g(x) = Ab,  ...,  ad^{n−k−1}_f g(x) = A^{n−k−1}b,

where (A, b) is the Brunovský canonical pair.

We will denote hereafter the coordinates in which the system is Brunovský k-linear by the bolded variables x_k = (x_{k1}, ..., x_{kn})^T and the system by Σ^Br_k. It follows easily that a Brunovský k-linear system takes the form

               ẋ_{kj} = F_{kj}(x_{k1}, ..., x_{k,k+1}),    if 1 ≤ j ≤ k
               ẋ_{k,k+1} = F_{k,k+1}(x_{k1}, ..., x_{k,k+1}) + x_{k,k+2}
    Σ^Br_k :       ⋮
               ẋ_{k,n−1} = F_{k,n−1}(x_{k1}, ..., x_{k,k+1}) + x_{kn}
               ẋ_{kn} = F_{kn}(x_{k1}, ..., x_{k,k+1}) + u.

A more compact representation of Σ^Br_k is obtained as

    Σ^Br_k :  ẋ_k = F_k(x_{k1}, ..., x_{k,k+1}) + A x̄_k + bu,    x_k ∈ R^n,

where x̄_k = (0, ..., 0, x_{k,k+2}, x_{k,k+3}, ..., x_{kn})^T is a vector whose first k + 1 components are zero. Moreover, in the coordinates w ≜ φ_1(x_1) the system Σ (actually Σ^Br_0) takes the simpler linear form

             ẇ_1 = λ_1 w_1 + w_2
             ẇ_2 = λ_2 w_1 + w_3
    Λ_λ :        ⋮                 that is,  Λ_λ :  ẇ = A_λ w + bu,
             ẇ_{n−1} = λ_{n−1} w_1 + w_n
             ẇ_n = λ_n w_1 + u,

where λ_1, ..., λ_n are constant real numbers. The Brunovský k-linear forms will play a crucial role in the state linearization algorithm. For the feedback linearization algorithm in Section 4, the Brunovský k-linear forms are replaced by the feedback k-forms defined as follows.

Definition 1.3 A control-affine system Σ : ẋ = f(x) + g(x)u is said to be in (FB)_k-form, and we denote it Σ^FB_k, if in some coordinates x_k = (x_{k1}, ..., x_{kn})^T it takes the form

               ẋ_{kj} = F_{kj}(x_{k1}, ..., x_{k,k+1}),    if 1 ≤ j ≤ k
               ẋ_{k,k+1} = F_{k,k+1}(x_{k1}, ..., x_{k,k+2})
    Σ^FB_k :       ⋮
               ẋ_{k,n−1} = F_{k,n−1}(x_{k1}, ..., x_{kn})
               ẋ_{kn} = F_{kn}(x_{k1}, ..., x_{kn}) + u.

For simplicity we chose the coefficient of the control input u to be 1, but this is not a restriction.

2. Main Results: S-Linearizability

The first result is as follows; it states that any S-linearizable system can be transformed into a linear form via a sequence of explicit coordinate changes, each giving rise to a Brunovský k-linear system.

Theorem 2.1 Consider a controllable system

    Σ :  ẋ = f(x) + g(x)u,    x ∈ R^n,  u ∈ R.

Assume it is S-linearizable (denote Σ ≜ Σ^Br_n and x ≜ x_n). There exists a sequence of explicit coordinate changes φ_n(x_n), φ_{n−1}(x_{n−1}), ..., φ_1(x_1) that gives rise to a sequence of Brunovský k-linear systems Σ^Br_n, Σ^Br_{n−1}, ..., Σ^Br_0 such that Σ^Br_{k−1} = φ_k(Σ^Br_k) for any 1 ≤ k ≤ n. The Brunovský k-linear system Σ^Br_k is mapped into the Brunovský (k − 1)-linear system Σ^Br_{k−1} if and only if

    (S_{k+1}) ≜  ∂²F_k / ∂x²_{k,k+1} = 0.    (2.1)

The condition (2.1) remains the main criterion for the linearizing algorithm; it is a simplified version of Theorem 1.1 (S2). It merely means that the nonlinear vector field F_k(x_{k1}, ..., x_{k,k+1}) should be affine with respect to the variable x_{k,k+1}. At each step we need to check whether that condition is satisfied, proceed if it is, and stop otherwise. The proof of this theorem relies mainly on the flow-box theorem, for which we recently gave an explicit solution [32] (see below), and on Theorem 1.1 (S2).

Theorem 2.2 Let θ be a smooth vector field on R^n and 1 ≤ k ≤ n any integer such that θ_k(0) ≠ 0, and put θ̄_k(x) = 1/θ_k(x).
(i) Define z = φ(x) by its components as follows:

    φ_j(x) = x_j + ∑_{s=1}^{∞} ((−1)^s x_k^s / s!) L^{s−1}_θ (θ̄_k θ_j)(x),
                                                                        (2.2)
    φ_k(x) = ∑_{s=1}^{∞} ((−1)^{s+1} x_k^s / s!) L^{s−1}_θ (θ̄_k)(x),

for any 1 ≤ j ≤ n, j ≠ k. The diffeomorphism z = φ(x) satisfies φ_*(θ) = ∂/∂z_k.
(ii) The diffeomorphism x = ψ(z) given by its components

    ψ_j(z) = z_j + ∑_{s=1}^{∞} (z_k^s / s!) ∑_{i=0}^{s−1} (−1)^i C^i_s ∂^i_{z_k} L^{s−i−1}_θ (θ̄_k θ_j)(z),
                                                                        (2.3)
    ψ_k(z) = ∑_{s=1}^{∞} (z_k^s / s!) ∑_{i=0}^{s−1} (−1)^i C^i_s ∂^i_{z_k} L^{s−i−1}_θ (θ̄_k)(z),

for any 1 ≤ j ≤ n, j ≠ k, is the inverse of z = φ(x), that is, such that ∂ψ(z)/∂z_k = θ(ψ(z)).

Above, φ_*(θ) = (∂φ/∂x)(ψ(z)) θ(ψ(z)) is the map of tangent spaces induced by the diffeomorphism z = φ(x), and we have adopted the notation

    ∂_{z_k} h = ∂h/∂z_k,    ∂^i_{z_k} h = ∂^i h/∂z_k^i,  i ≥ 2.

The following remarks are of paramount importance here.

R1. The expressions above are not series around the origin or in the variable x_k, as the coefficients L^{s−1}_θ(θ̄_k θ_j)(x) are evaluated at x = (x_1, ..., x_n) and might well depend on x_k.

R2. If the vector field θ is independent of some variable x_l (l ≠ k), then the diffeomorphism φ(x) is also independent of the variable x_l (except for a linear dependence).

R3. If any of the components of θ(x) is zero, say θ_j(x) = 0, then φ_j(x) = x_j.

A proof of the theorem and the convergence of the series will be given in Section 6. In Section 3 we illustrate with a few examples; in particular, Example 3.4 will justify the fact that the expressions (2.2)-(2.3) of Theorem 2.2 are not Taylor series at the origin. For further details we refer to [32].
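Formula (2.2) can be implemented directly once the Lie derivatives L^{s−1}_θ are generated recursively. The sketch below applies it to the polynomial vector field θ = (4x₂x₃, −2x₃, 1)^T, for which the series terminates, and checks the straightening property φ_*(θ) = ∂/∂z₃ (the truncation order N is an implementation choice, not from the paper):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
x = sp.Matrix([x1, x2, x3])

# Vector field to be straightened; theta_3(0) = 1 != 0, so we may
# take the straightened direction k = 3 (0-based index 2).
theta = sp.Matrix([4*x2*x3, -2*x3, 1])
k = 2

def L(h):
    # Lie derivative of the scalar h along theta
    return sum(sp.diff(h, x[i]) * theta[i] for i in range(3))

# Series (2.2): phi_j = x_j + sum_s (-1)^s x_k^s/s! L^{s-1}(theta_j).
# Here the Lie derivatives eventually vanish, so N = 8 terms are exact.
N = 8
phi = []
for j in range(3):
    if j == k:
        phi.append(x[k])   # theta_k = 1, so phi_k reduces to x_k
        continue
    term, acc = theta[j], x[j]
    for s in range(1, N + 1):
        acc += (-1)**s * x[k]**s / sp.factorial(s) * term
        term = L(term)
    phi.append(sp.expand(acc))

Phi = sp.Matrix(phi)
# Straightening check: (dPhi/dx) theta = e_k, i.e. phi_* theta = d/dz_k
assert sp.simplify(Phi.jacobian(x) * theta) == sp.Matrix([0, 0, 1])
print(Phi.T)   # [x1 - 2*x2*x3**2 - x3**4, x2 + x3**2, x3]
```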

2.1. Linearizing Coordinates

In this section we define an algorithm that shows how to compute the linearizing coordinates for the system. The algorithm also stands as a proof of Theorem 2.1. Although the algorithm generates a sequence of new coordinates x_n, x_{n−1}, ..., x_1 as stated in Theorem 2.1, at each step, say Step n − k, we will reset the coordinates of the system as x, i.e., set x = x_k, and take the coordinates of its transform as z, i.e., put z = x_{k−1}. Moreover, the corresponding system Σ^Br_k will be renamed Σ : ẋ = f(x) + g(x)u, and its transform Σ^Br_{k−1} will be renamed Σ̃ : ż = f̃(z) + g̃(z)u.

A. (S)-Algorithm. Consider a linearly controllable system

    Σ :  ẋ = f(x) + g(x)u,    x ∈ R^n,  u ∈ R.

Without loss of generality take g(0) = b = (0, ..., 0, 1)^T. This algorithm consists of n − 1 steps.

Step 0. Set Σ ≜ Σ^Br_n and x ≜ x_n = (x_{n1}, ..., x_{nn})^T. Apply Theorem 2.2 with θ = g(x) to construct a change of coordinates z = φ(x), given by (2.2), such that φ_*(g)(z) = ∂_{z_n}. This change of coordinates transforms Σ into

    Σ̃ :  ż = f̃(z) + g̃(z)u = (φ_* f)(z) + (φ_* g)(z)u,    z ∈ R^n,

where g̃ = b. Denote x_{n−1} ≜ z and φ_n ≜ φ. It follows that the change of coordinates x_{n−1} = φ_n(x_n) transforms Σ^Br_n into

    Σ^Br_{n−1} :  ẋ_{n−1} = F_{n−1}(x_{n−1}) + A x̄_{n−1} + bu,    x_{n−1} ∈ R^n,

where x̄_{n−1} ≡ 0 and F_{n−1}(x_{n−1}) = f̃(x_{n−1}) = φ_{n*}(f).

Step 1. Reset the variable x ≜ x_{n−1} and Σ ≜ Σ^Br_{n−1} : ẋ = f(x) + g(x)u with g(x) = b and f(x) = F_{n−1}(x_1, ..., x_n). For Σ to be S-linearizable, Theorem 1.1 (S2) should be satisfied, which is equivalent to

    [ad^q_f g, ad^r_f g] = 0,    0 ≤ q, r ≤ n − 1.    (2.4)

Taking q = 0 and r = 1 we get in particular [g, ad_f g] = 0, or equivalently (because g = ∂_{x_n})

    (S_n) ≜  ∂²f / ∂x²_n = 0.

It follows that f should be affine with respect to the variable x_n. If this condition fails, then the system is not S-linearizable and the algorithm stops. Otherwise, the vector field f decomposes uniquely as

    f(x_1, ..., x_n) = F_{n−1}(x_1, ..., x_{n−1}) + x_n θ(x_1, ..., x_{n−1}).

Because g and ad_f g are linearly independent, we have θ(0) ≠ 0. Apply Theorem 2.2 to define a change of coordinates z = φ(x) such that φ_*(θ) = ∂_{z_{n−1}}. Denote z ≜ x_{n−2} and φ ≜ φ_{n−1}. The diffeomorphism x_{n−2} = φ_{n−1}(x_{n−1}) transforms Σ^Br_{n−1} into

    Σ^Br_{n−2} :  ẋ_{n−2} = F_{n−2}(x_{n−2}) + A x̄_{n−2} + bu,    x_{n−2} ∈ R^n,

where x̄_{n−2} = (0, ..., 0, x_{n−2,n})^T and F_{n−2}(x_{n−2}) = φ_{n−1*}(F_{n−1}) is a function of the variables x_{n−2,1}, ..., x_{n−2,n−1}.

Step n − k. Assume that Σ^Br_n has been taken, via a composition x_k = φ_{k+1} ∘ ··· ∘ φ_n(x) of diffeomorphisms, into

    Σ^Br_k :  ẋ_k = F_k(x_{k1}, ..., x_{k,k+1}) + A x̄_k + bu,    x_k ∈ R^n,

where x̄_k = (0, ..., 0, x_{k,k+2}, x_{k,k+3}, ..., x_{kn})^T, and the last n − k components of the vector field F_k(x_k) are zero. Once again reset the variable x ≜ x_k and denote Σ^Br_k simply by Σ : ẋ = f(x) + g(x)u with g(x) = b and

    f(x) = F_k(x_1, ..., x_{k+1}) + A x̄_k,

where x̄_k = (0, ..., 0, x_{k+2}, x_{k+3}, ..., x_n)^T. Notice that in these coordinates

    g = ∂_{x_n},  ad_f g = −∂_{x_{n−1}},  ...,  ad^{n−k−1}_f g = (−1)^{n−k−1} ∂_{x_{k+1}},

which implies that ad^{n−k}_f g = (−1)^{n−k} ∂F_k/∂x_{k+1}. For r = n − k − 1 and q = r + 1, the condition [ad^q_f g, ad^r_f g] = 0 of Theorem 1.1 (S2) is equivalent to

    (S_{k+1}) ≜  ∂²F_k / ∂x²_{k+1} = ∂²f / ∂x²_{k+1} = 0.

If the condition fails to be satisfied, then the system is not state linearizable and the algorithm stops. If it is satisfied, then F_k is affine with respect to the variable x_{k+1} and decomposes as

    F_k(x_k) = f_k(x_1, ..., x_k) + x_{k+1} θ(x_1, ..., x_k),

where θ is a nonsingular vector field on R^n that depends exclusively on the variables x_1, ..., x_k. By Theorem 2.2 we can construct a change of coordinates z = φ(x) such that φ_*(θ)(z) = ∂_{z_k}. Moreover, the components of φ are such that

    φ_j(x) = x_j + ϕ_j(x_1, ..., x_k),    1 ≤ j ≤ n.    (2.5)

This change of coordinates transforms Σ into

    Σ̃ :  ż = f̃(z) + g̃(z)u = (φ_* f)(z) + (φ_* g)(z)u,

where g̃(z) = (φ_* g)(z) = (0, ..., 0, 1)^T and

    f̃(z) = (φ_* F_k)(z) + [z_{k+1} − ϕ_{k+1}(φ^{−1}(z))] (φ_* θ)(z) + (φ_*(A x̄))(z).

Because the first k components of A x̄ are zero, (2.5) implies that (φ_*(A x̄))(z) = (0, ..., 0, z_{k+2}, ..., z_n, 0)^T. We then deduce that f̃(z) = F_{k−1}(z) + A x̄_{k−1}, where F_{k−1}(z) = (φ_* F_k)(z) − ϕ_{k+1}(φ^{−1}(z)) ∂_{z_k} depends exclusively on the variables z_1, ..., z_k, and

    A x̄_{k−1} = z_{k+1} ∂_{z_k} + (0, ..., 0, z_{k+2}, z_{k+3}, ..., z_n, 0)^T = (0, ..., 0, z_{k+1}, z_{k+2}, ..., z_n, 0)^T

is such that its first k components are zero. Notice that when k = 0, the expression above reduces simply to F_0(z) = λ z_1, where λ = (λ_1, ..., λ_n)^T.

This ends the general step and shows that a sequence of explicit coordinate changes φ_n(x_n), ..., φ_1(x_1) can be constructed whose composition z = φ_1 ∘ ··· ∘ φ_n(x_n) takes the original system Σ into the linear form Λ_λ.

B. Summary of Algorithm. Start with a system

    Σ :  ẋ = f(x) + g(x)u,    x ∈ R^n,  u ∈ R.

Step 0. Normalize the vector field g ↦ g = (0, ..., 0, 1)^T. Apply a linear change of coordinates to transform the linearization such that (∂f/∂x)(0) = A_λ.

Step n − k. If the condition

    (S_{k+1}) ≜  ∂²f / ∂x²_{k+1} = 0

fails, the algorithm stops: the system is not S-linearizable. If (S_{k+1}) holds, then decompose the vector field f as

    f(x_1, ..., x_{k+1}) = F(x_1, ..., x_k) + x_{k+1} θ(x_1, ..., x_k).

Apply Theorem 2.2 to construct a change of coordinates z = φ(x) ∈ R^n that rectifies the nonsingular vector field

    θ(x) = θ_1(x) ∂_{x_1} + ··· + θ_n(x) ∂_{x_n},

that is, such that φ_*(θ)(z) = ∂_{z_k}. Find the transform Σ̃ of the system in the precedent step. For k = n − 1, n − 2, ..., 2 repeat Step n − k. End if the system is linear or the algorithm fails.

3. State Linearization: Examples

In what follows we illustrate with a few examples.

Example 3.1 Consider the single-input control system

                              ẋ_1 = x_2 − 2x_2x_3 + x_3² + 4x_2x_3 u
    Σ : ẋ = f(x) + g(x)u ,    ẋ_2 = x_3 − 2x_3 u
                              ẋ_3 = u

with f(x) = (x_2 − 2x_2x_3 + x_3², x_3, 0)^T and g(x) = (4x_2x_3, −2x_3, 1)^T. First rectify the vector field θ(x) ≜ g(x) by applying Theorem 2.2 with n = 3 and θ_3(x) = 1. Since

    L_θ(θ_1) = −8x_3² + 4x_2,  L²_θ(θ_1) = −24x_3,  L³_θ(θ_1) = −24,

we have L^{s−1}_θ(θ_1) = 0 for all s ≥ 5, and hence

    z_1 = φ_1(x) = x_1 + ∑_{s=1}^{∞} ((−1)^s x_3^s / s!) (L^{s−1}_θ θ_1)(x)
        = x_1 − 4x_2x_3² − 4x_3⁴ + 2x_2x_3² + 4x_3⁴ − x_3⁴
        = x_1 − 2x_2x_3² − x_3⁴.

Likewise, L_θ(θ_2) = −2 and L^{s−1}_θ(θ_2) = 0, s ≥ 3, yielding

    z_2 = φ_2(x) = x_2 + ∑_{s=1}^{∞} ((−1)^s x_3^s / s!) (L^{s−1}_θ θ_2)(x)
        = x_2 − x_3(−2x_3) + (1/2!)x_3²(−2) = x_2 + x_3².

We apply the change of coordinates

    z_1 = x_1 − 2x_2x_3² − x_3⁴,  z_2 = x_2 + x_3²,  z_3 = x_3

to transform the original system into

                              ż_1 = z_2 − 2z_2z_3
    Σ̃ : ż = f̃(z) + g̃(z)u ,    ż_2 = z_3
                              ż_3 = u,

where g̃(z) = (0, 0, 1)^T and f̃(z) = (z_2 − 2z_2z_3, z_3, 0)^T. The vector field f̃(z) decomposes as

    f̃(z) = (z_2, 0, 0)^T + z_3(−2z_2, 1, 0)^T.

The next step is to rectify θ(z) = (−2z_2, 1, 0)^T. Theorem 2.2 with k = 2 and θ_2(z) = 1 yields

    w_1 = z_1 + ∑_{s=1}^{∞} ((−1)^s z_2^s / s!) (L^{s−1}_θ θ_1)(z)
        = z_1 − z_2(−2z_2) + (1/2!)z_2²(−2) = z_1 + z_2²,
    w_2 = z_2,
    w_3 = z_3.

The system is then transformed, under these changes of coordinates, into the linear Brunovský form Λ_Br. The linearizing coordinates for the original system are thus obtained as the composition of the two coordinate changes:

    w_1 = x_1 − 2x_2x_3² − x_3⁴ + (x_2 + x_3²)² = x_1 + x_2²,
    w_2 = x_2 + x_3²,
    w_3 = x_3.

Of course, these linearizing coordinates could have been obtained directly or by other methods. The emphasis here is on the applicability of the method to any linearizable system.
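The composed coordinates of Example 3.1 can be verified symbolically; the sketch below checks that they indeed produce the Brunovský chain ẇ = (w₂, w₃, u):

```python
import sympy as sp

x1, x2, x3, u = sp.symbols('x1 x2 x3 u')
x = sp.Matrix([x1, x2, x3])

# Dynamics of Example 3.1
f = sp.Matrix([x2 - 2*x2*x3 + x3**2, x3, 0])
g = sp.Matrix([4*x2*x3, -2*x3, 1])

# Composed linearizing coordinates w = (x1 + x2^2, x2 + x3^2, x3)
w = sp.Matrix([x1 + x2**2, x2 + x3**2, x3])

# w' = (dw/dx)(f + g u) should be the Brunovsky chain (w2, w3, u)
wdot = sp.expand(w.jacobian(x) * (f + g*u))
assert wdot == sp.Matrix([w[1], w[2], u])
```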

Example 3.2 We consider the following example


and

X
(1)s xs3 s1
x 1 = x2 + ((1/2)x2 (1/12)x3 x4 ) u

(x)
=
x
+
L (2 )(x)
2
2

s!
x 2 = x3 + (1/2)x3 u
s=1
: x = f (x)+g(x)u ,

x 3 = x4 + x4 u
(1)s xs3

= x2 x3 2 (x) +
(x2 + x3 + 1)
x 4 = u.
s!
s=2
= (x2 + x3 + 1)ex3 1.
Because of the strict feedforward structure, we showed in
[28] (using a 4-step algorithm) that the change of coordinates

z1 = x1 (1/24) 12x2 x4 4x3 x24 + x44

z = x (1/2) x x (1/3)x3
2
2
3 4
4
z = (x) ,
2

z
=
x

(1/2)x
3
3

z4 = x4
(3.1)
linearizes the system. We can recover such coordinates
directly by applying the algorithm given in the proof. Denote by f (x) = (x2 , x3 , x4 , 0)T and

(1 )(z) = 0
To find the inverse first notice that zi 3 Lsi1

if (i, s) 6= (0, 1), which yields


s1
!
s
X
P
z3
i i i
si1
1 (z) = z1 +
(1) Cs zn L
(1 )(z)
s!
s=1

i=0

z1 + (1/2!)z32 1 (z) = z1 + (1/2)z32 .

From zi 3 Lsi1
(2 )(z) = 0 for all i 2, we deduce

s1
X

(2 )(z)
(1)i Csi zi 3 Lsi1

i=0

= Ls1
(2 )(z) sz3 Ls2
(2 )(z) = z2 + z3 + 1 s.

(x) , g(x) = ((1/2)x2 (1/12)x3 x4 , (1/2)x3 , x4 , 1) .

By Theorem 2.2 (ii) we get the 2nd component of (z) as


The first step consists of rectifying the control vector field
!
s1
via Theorem 2.2. Since 3 = 1, hence 3 = 1 we have
s
X
P
z3
i i i
si1

2
(z) = z2 +
(1) Cs z3 L
(2 )(z)
s!
L (1 ) = (1/2) (x3 /2)(1/12) x4 + x3 = (1/6)x3 (1/12)x24 , 2
s=1
i=0

and L2 (1 ) = 16 x4 16 x4 = 0, i.e., Ls (1 ) = 0, s 2. Thus


1 (x) =
=

x1 x4 1 (x) + (1/2)x24 L (1 )

z2 +

P
s=1

z3s
s! (z2

= (z2 + 1)e

x1 (1/2)x2 x4 + (1/6)x3 x24 (1/24)x34 .

z3

+ z3 + 1)

P
s=1

z3s
s! s

z3 1.

It is straightforward to verify that the inverse is

x1 = 1 (z) = z1 + (1/2)z3
2
3 2
2 (x) = x2 x4 2 (x) + (1/2)x4 L (2 ) (1/6)x4 L (2 ) x = 1 (z) ,
x2 = 2 (z) = (z2 + 1)ez3 z3 1
3
3

= x2 (1/2)x3 x4 + (1/4)x4 (1/12)x4


x3 = 3 (z) = z3 .
= x2 (1/2)x3 x4 + (1/6)x34 .
Example 3.4 Consider the non singular vector field
Similarly L (3 ) = 1 and Ls1
(3 ) = 0, s 3. Hence

(x) = (x3 )x1 + x3 , x R3 , where is a flat function, that is, and all its derivatives are zero at x3 = 0. A
3 (x) = x3 x4 3 (x) + (1/2)x24 L (2 )
well-known example is the function defined by (0) = 0,
2
2
2
= x3 x4 + (1/2)x4 = x3 (1/2)x4 .
and (x3 ) = exp(1/x23 ) if x3 6= 0. It is straightforward to
Because 4 (x) = 1, we get 4 (x) = x4 and the change
check that Ls1
(1 )(x) = (s1) (x3 ) for all s 1, where

(k)
of coordinates (3.1) rectifies the control vector field g and
(x3 ) is the kth derivative of . Should (2.2) have been
linearizes the system. Notice that the algorithm described
a series around 0 or at xk = 0 the straightening diffeomorin [28] allowed only to find (3.1) by computing one compophism would have been identity:

nent at a time (holding other components identity), start


X
(1)s xs3 s1

1 (x) = x1 +
ing from 3 then 2 and finally 1 and updating the sysL (1 )(0) = x1

s!

tem after each step. A composition of different coordinates


s=1

changes gave (3.1). However, Theorem 2.2 allows to comX


(1)s xs3 s1
L (2 )(0) = x2

(x)
=
x
+
z = (x) ,
2
2
pute those components independently to each other.
.
s!

s=1

X
Example 3.3 Consider (x) = x3 x1 + (x2 + x3 )x2 + x3

(1)s1 xs3 s1

3
s1

(x)
=
L (1)(0) = x3
3
in R . Here L (1 ) = 1 and L (1 ) = 0 for s 3 and
s!
s=1
s1
L (2 ) = x2 + x3 + 1 for all s 2. It follows that
which is impossible.
However we can verify easily that

R x3
X
(1)s xs3 s1

(x)
=
x

(u)
du which coincides with
1
1
L (1 )(x)
1 (x) = x1 +
0
s!
s=1

X
(1)s xs3 (s1)
= x1 x3 1 (x) + (1/2!)x23 L (1 )(x)

(x3 ).

(x)
=
x
+
1
1
s!
= x1 (1/2)x23
s=1
Also L (2 )= 21 x4 , L2 (2 )= 12 and Ls (2 )=0, s 3 implies

Indeed,

R x3
0

(u) du =

X
(1)s xs

brings (F B) into the Brunovsk


y canonical form Br . Consider : x = f (x) + g(x)u and recall Definition 1.3
that is in (F B)k -form, if in some coordinates xk =
(xk1 , . . . , xkn ), it takes the form

x kj = Fkj (xk1 , . . . , xkk+1 ), if 1 j k

x kk+1 = Fkk+1 (xk1 , . . . , xkk+2 )


FB
...
k :

x
= Fkn1 (xk1 , . . . , xkn )

kn1

x kn = Fkn (xk1 , . . . , xkn ) + u,

(s1) (x3 ) because

s!
s=1
the two functions coincide when x3 = 0 and it is enough to
verify that their derivatives are also equal. The derivative
of the right hand side gives after simplification

X
(1)s xs1
3

s=1

(s 1)!

(s1) (x3 )

X
(1)s xs

s!

s=1

(s) (x3 ) = (x3 ).

Now to find the inverse of the normalizing coordinates, let


us apply Theorem 2.2 (ii) with n = 3 and k = 3. First we
have Ls = (s) (x3 )x1 for all s 1. We thus have
s1
!
s
X
P
z3
i i i
si1
(z) = z +
(1) Cs z3 (L
)(z)
s!
s=1

= z+

P
s=1

z3s
s!

i=0
s1
X

$$= z_1 + \sum_{s=1}^{\infty} \frac{(-1)^{s+1} z_3^s}{s!}\, \varphi^{(s-1)}(z_3).$$
It clearly follows that $\psi(z) = \left( z_1 + \int_0^{z_3} \varphi(s)\,ds,\; z_2,\; z_3 \right)$, which was predictable directly by inverting $z = \phi(x)$.

4. Main Results: F-Linearizable Systems

Below we give our main result, that is, an algorithm allowing to construct explicitly feedback linearizing coordinates. We first recall the following well-known result.

Theorem 4.1 A control system $\Sigma\colon \dot{x} = f(x) + g(x)u$ is locally F-equivalent to a linear controllable system if and only if it is S-equivalent to a feedback form
$$(FB)\colon \begin{cases} \dot{z}_1 = f_1(z_1, z_2)\\ \dot{z}_2 = f_2(z_1, z_2, z_3)\\ \quad\vdots\\ \dot{z}_{n-1} = f_{n-1}(z_1, \ldots, z_n)\\ \dot{z}_n = f_n(z_1, \ldots, z_n) + g_n(z_1, \ldots, z_n)u.\end{cases}$$

The proof of Theorem 4.1 is straightforward and can be found in the literature (e.g. [11], [12], [13], [25]). Let $f = (f_1, \ldots, f_n)$, $g = (0, \ldots, 0, g_n)$ and $h(z) = z_1$. It follows that the feedback transformation $\Gamma \triangleq (\tilde{\phi}, \alpha, \beta)$ defined by $w = \tilde{\phi}(z)$, $u = \alpha(z) + \beta(z)v$, where
$$\tilde{\phi}_1(z) = h(z), \quad \tilde{\phi}_2(z) = L_f(h), \quad \ldots, \quad \tilde{\phi}_n(z) = L_f^{n-1}(h),$$
$$\alpha(z) = -\frac{L_f^n(h)}{L_g L_f^{n-1}(h)} \quad \text{and} \quad \beta(z) = \frac{1}{L_g L_f^{n-1}(h)},$$
brings $(FB)$ into the Brunovský canonical form $Br$.

Theorem 4.2 Consider a linearly controllable system
$$\Sigma\colon \dot{x} = f(x) + g(x)u, \quad x \in \mathbb{R}^n,\; u \in \mathbb{R}.$$
Assume it is F-linearizable (let $\Sigma \triangleq \Sigma_n^{FB}$ and $x \triangleq x_n$). There exists a sequence of explicit coordinates changes $\phi_n(x_n), \phi_{n-1}(x_{n-1}), \ldots, \phi_2(x_2)$ that gives rise to a sequence of $(FB)_k$-forms $\Sigma_{n-1}^{FB}, \Sigma_{n-2}^{FB}, \ldots, \Sigma_1^{FB}$ such that for any $2 \leq k \leq n$ we get $\Sigma_{k-1}^{FB} = (\phi_k)_* \Sigma_k^{FB}$. Moreover, in the coordinates $z \triangleq \phi_2(x_2)$ the system (actually $\Sigma_1^{FB}$) takes the feedback form $(FB)$.

A direct consequence of this result is the following corollary.

Corollary 4.3 Consider a linearly controllable system $\Sigma$ and assume it is F-linearizable. Then $\Sigma$ is linearizable by the feedback transformation $w = \tilde{\phi}(\phi(x))$, $u = \alpha(\phi(x)) + \beta(\phi(x))v$, where $z = \phi(x)$ is the diffeomorphism taking $\Sigma$ into the feedback form $(FB)$, and $\Gamma = (\tilde{\phi}, \alpha, \beta)$ the transformation taking $(FB)$ into the Brunovský form $Br$.

The proof of Theorem 4.2 follows from the algorithm below.

A. (F)-Linearizing Algorithm. Consider the system $\Sigma\colon \dot{x} = f(x) + g(x)u$, $x \in \mathbb{R}^n$, $u \in \mathbb{R}$, and assume it is F-linearizable. Applying a linear feedback $z = Tx$, $u = Kx + Lv$, if necessary, we assume that $\frac{\partial f}{\partial x}(0) = A$ and $g(0) = b$, where $(A, b)$ is the Brunovský canonical pair. The algorithm below consists of a maximum of $n-1$ steps.

Step 1. Set $\Sigma \triangleq \Sigma_n^{FB}$ and $x \triangleq x_n = (x_{n,1}, \ldots, x_{n,n})^T$. Apply Theorem II.2 ([33]) with $\theta = g(x)$ to construct a change of coordinates $z = \phi(x)$ such that $(\phi_* g)(z) = \frac{\partial}{\partial z_n}$. If we denote $x_{n-1} \triangleq z$ and $\phi_n \triangleq \phi$, it thus follows that the change of coordinates $x_{n-1} = \phi_n(x_n)$ takes $\Sigma_n^{FB}$ into
$$\Sigma_{n-1}^{FB}\colon \begin{cases} \dot{x}_{n-1,1} = F_{n-1,1}(x_{n-1,1}, \ldots, x_{n-1,n})\\ \dot{x}_{n-1,2} = F_{n-1,2}(x_{n-1,1}, \ldots, x_{n-1,n})\\ \quad\vdots\\ \dot{x}_{n-1,n-1} = F_{n-1,n-1}(x_{n-1,1}, \ldots, x_{n-1,n})\\ \dot{x}_{n-1,n} = F_{n-1,n}(x_{n-1,1}, \ldots, x_{n-1,n}) + u.\end{cases}$$
Remark that this first step is independent of whether $\Sigma$ is F-linearizable or not. It depends only on the fact that the vector field $g$ is nonsingular, and hence, can be rectified.
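The rectifying coordinates of Theorem II.2 ([33]) used in Step 1 are explicit series: with $\bar\theta_k = 1/\theta_k$ and $\bar\theta = \bar\theta_k\theta$, one takes $\phi_j = x_j + \sum_{s\geq 1} \frac{(-1)^s x_k^s}{s!} L_{\bar\theta}^{s-1}(\bar\theta_k\theta_j)$ for $j \neq k$ and $\phi_k = \sum_{s\geq 1} \frac{(-1)^{s+1} x_k^s}{s!} L_{\bar\theta}^{s-1}(\bar\theta_k)$, as in the examples of Section 5. The sketch below is ours (sympy; the helper names `lie` and `rectify` and the truncation order are assumptions, not from the paper) and truncates the series symbolically:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
X = [x1, x2, x3]

def lie(vf, h, X):
    """Lie derivative of the scalar h along the vector field vf."""
    return sum(v * sp.diff(h, xi) for v, xi in zip(vf, X))

def rectify(theta, X, k, N=6):
    """Truncated rectifying coordinates phi for theta with theta[k](0) != 0:
    phi_j = x_j + sum_{s>=1} (-1)^s     x_k^s/s! L_tb^{s-1}(tk*theta_j),  j != k,
    phi_k =       sum_{s>=1} (-1)^(s+1) x_k^s/s! L_tb^{s-1}(tk),
    where tk = 1/theta_k and tb = tk*theta; series truncated at order N."""
    tk = 1 / theta[k]
    tb = [sp.simplify(tk * c) for c in theta]
    phi = []
    for j, xj in enumerate(X):
        if j == k:
            term, out, off = tk, sp.Integer(0), 1
        else:
            term, out, off = tk * theta[j], xj, 0
        for s in range(1, N + 1):
            out += (-1) ** (s + off) * X[k] ** s / sp.factorial(s) * term
            term = lie(tb, term, X)   # advance to the next Lie derivative
        phi.append(sp.simplify(out))
    return phi

# Control vector field of Example 5.1: theta = g = (0, -x2, 1 + x3), k = 3.
theta = [sp.Integer(0), -x2, 1 + x3]
phi = rectify(theta, X, 2)
```

On this data the truncation reproduces $\phi_1 = x_1$ exactly, and agrees with the closed forms $\phi_2 = x_2(1+x_3)$ and $\phi_3 = \ln(1+x_3)$ obtained in Example 5.1, up to the truncation order in $x_3$.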

Step n−k. Assume that a sequence of explicit coordinates changes $\phi_n, \ldots, \phi_{k+1}$ were found whose composition $x_k = \phi_{k+1} \circ \cdots \circ \phi_n(x_n)$ takes $\Sigma_n^{FB}$ into the $(FB)_k$-form
$$\Sigma_k^{FB}\colon \dot{x}_k = F_k(x_k) + bu, \quad x_k \in \mathbb{R}^n,$$
where (recall that $\bar{k} = k$)
$$F_{k,j}(x_k) = \begin{cases} F_{k,j}(x_{k,1}, \ldots, x_{k,k+1}), & 1 \leq j \leq k\\ F_{k,j}(x_{k,1}, \ldots, x_{k,j+1}), & k+1 \leq j \leq n-1\\ F_{k,j}(x_{k,1}, \ldots, x_{k,n}), & j = n.\end{cases}$$
Once again reset the variable $x \triangleq x_k$ and denote $\Sigma_k^{FB}$ simply by $\Sigma\colon \dot{x} = f(x) + g(x)u$ with $g(x) = b$ and
$$f_j(x) = \begin{cases} f_j(x_1, \ldots, x_{k+1}), & 1 \leq j \leq k\\ f_j(x_1, \ldots, x_{j+1}), & k+1 \leq j \leq n,\end{cases}$$
where the last component $f_n$ depends only on $x_1, \ldots, x_n$. We showed in Section 6 (6.1) that there exist smooth functions $\pi(x) = \pi(x_1, \ldots, x_{k+1})$, $F_j(x) = F_j(x_1, \ldots, x_k)$ and $\alpha_j(x) = \alpha_j(x_1, \ldots, x_k)$ for $1 \leq j \leq k$ such that $f_j(x_1, \ldots, x_{k+1}) = F_j(x) + \alpha_j(x)\pi(x)$, $1 \leq j \leq k$, with $\frac{\partial \pi}{\partial x_{k+1}}(0) \neq 0$. Moreover, $\alpha_k(0) \neq 0$ because $\frac{\partial f_k}{\partial x_{k+1}}(0) \neq 0$. Define the nonsingular vector field $\theta(x) = \alpha_1(x)\frac{\partial}{\partial x_1} + \cdots + \alpha_k(x)\frac{\partial}{\partial x_k} \in \mathbb{R}^k$ and apply Theorem II.2 ([33]) to construct a change of coordinates $\tilde{z} = \phi(x_1, \ldots, x_k) \in \mathbb{R}^k$ such that $(\phi_* \theta)(\tilde{z}) = \frac{\partial}{\partial \tilde{z}_k}$. Extend such change of coordinates in $\mathbb{R}^n$ (still called $\phi$) by
$$z = \phi(x) = (\phi_1(x), \ldots, \phi_k(x), x_{k+1}, \ldots, x_n)^T.$$
The inverse $x = \psi(z) = \phi^{-1}(z)$ is also obtained by Theorem II.2 ([33]). Clearly, the inverse is of the form
$$x = \psi(z) = (\psi_1(z), \ldots, \psi_k(z), z_{k+1}, \ldots, z_n)^T.$$
The change of coordinates transforms the system into
$$\tilde{\Sigma}\colon \dot{z} = \tilde{f}(z) + \tilde{g}(z)u = \phi_* f(z) + \phi_* g(z)u,$$
where $\tilde{g}(z) = (0, \ldots, 0, 1)^T$ and
$$\tilde{f}(z) = \phi_* f(z) = \phi_* \left( \sum_{j=1}^{k} f_j(x_1, \ldots, x_{k+1}) \frac{\partial}{\partial x_j} + \sum_{j=k+1}^{n} f_j(x_1, \ldots, x_{j+1}) \frac{\partial}{\partial x_j} \right).$$
It is easy to see that the second term is equivalent to
$$\phi_* \left( \sum_{j=k+1}^{n} f_j(x_1, \ldots, x_{j+1}) \frac{\partial}{\partial x_j} \right) = \sum_{j=k+1}^{n} f_j(\psi(z)) \frac{\partial}{\partial z_j}. \qquad (4.1)$$
The first term rewrites
$$\sum_{j=1}^{k} f_j(x) \frac{\partial}{\partial x_j} = \sum_{j=1}^{k} F_j(x_1, \ldots, x_k) \frac{\partial}{\partial x_j} + \pi(x) \sum_{j=1}^{k} \alpha_j(x_1, \ldots, x_k) \frac{\partial}{\partial x_j},$$
whose image is
$$\sum_{j=1}^{k} \tilde{F}_j(z_1, \ldots, z_k) \frac{\partial}{\partial z_j} + \pi(\psi(z)) \frac{\partial}{\partial z_k}. \qquad (4.2)$$
We deduce from (4.2) that the first $k-1$ components depend only on the variables $z_1, \ldots, z_k$ and the $k$th component depends on $z_1, \ldots, z_{k+1}$. On the other hand, (4.1) shows that the $j$th component ($j = k+1, \ldots, n$) depends on the variables $z_1, \ldots, z_{j+1}$. We thus conclude that
$$\tilde{f}_j(z) = \begin{cases} \tilde{f}_j(z_1, \ldots, z_k), & 1 \leq j \leq k-1\\ \tilde{f}_j(z_1, \ldots, z_{j+1}), & k \leq j \leq n,\end{cases}$$
where the last component $\tilde{f}_n$ depends only on $z_1, \ldots, z_n$. Denote $x_{k-1} \triangleq z$ and $\phi_k \triangleq \phi$. Thus the change of coordinates $x_{k-1} = \phi_k(x_k)$ brings the system $\Sigma_k^{FB}$ into
$$\Sigma_{k-1}^{FB}\colon \begin{cases} \dot{x}_{k-1,j} = F_{k-1,j}(x_{k-1,1}, \ldots, x_{k-1,k}), & \text{if } 1 \leq j \leq k-1\\ \dot{x}_{k-1,k} = F_{k-1,k}(x_{k-1,1}, \ldots, x_{k-1,k+1})\\ \quad\vdots\\ \dot{x}_{k-1,n-1} = F_{k-1,n-1}(x_{k-1,1}, \ldots, x_{k-1,n})\\ \dot{x}_{k-1,n} = F_{k-1,n}(x_{k-1,1}, \ldots, x_{k-1,n}) + u.\end{cases}$$
This completes the induction and the algorithm; consequently, we can construct a sequence of explicit coordinates changes $\phi_n(x_n), \phi_{n-1}(x_{n-1}), \ldots, \phi_2(x_2)$ whose composition $z = \phi_2 \circ \cdots \circ \phi_n(x_n)$ takes the original system into the $(FB)$ form.

B. Summary of Algorithm. Start with a system $\Sigma\colon \dot{x} = f(x) + g(x)u$, $x \in \mathbb{R}^n$, $u \in \mathbb{R}$.

Step 0. Normalize the vector field $g \mapsto \tilde{g} = (0, \ldots, 0, 1)^T$ and apply a linear feedback to put the linearization in Brunovský form (not necessary but highly recommended).

Step n−k. If the condition
$$(F_{k+1})\colon \quad \frac{\partial^2 f_j}{\partial x_{k+1}^2} = \eta_{n-k}(x)\, \frac{\partial f_j}{\partial x_{k+1}}, \quad 1 \leq j \leq k,$$
fails ($\eta_{n-k}(x)$ not the same for the first $k$ components), then the system is not feedback linearizable and the algorithm stops. If $(F_{k+1})$ is satisfied, then decompose the first $k$ components $f_1, \ldots, f_k$ as follows (see (6.1)):
$$f_j(x_1, \ldots, x_{k+1}) = F_j(x) + \alpha_j(x)\pi(x), \quad 1 \leq j \leq k.$$
Apply Theorem II.2 ([33]) to construct a change of coordinates $z = \phi(x) \in \mathbb{R}^n$ rectifying the nonsingular vector field
$$\theta(x) = \alpha_1(x)\frac{\partial}{\partial x_1} + \cdots + \alpha_k(x)\frac{\partial}{\partial x_k} + 0\cdot\frac{\partial}{\partial x_{k+1}} + \cdots + 0\cdot\frac{\partial}{\partial x_n},$$
that is, such that $(\phi_* \theta)(z) = \frac{\partial}{\partial z_k}$. Compute the transform of the preceding system. Repeat Step n−k for $k = n-1, \ldots, 2$. End if the system is in $(FB)$-form or the algorithm fails.

5. Feedback Linearization: Examples

Example 5.1 Consider a single-input control system
$$\Sigma\colon \dot{x} = f(x) + g(x)u, \quad \begin{cases} \dot{x}_1 = x_2(1 + x_3)\\ \dot{x}_2 = x_3(1 + x_1) - x_2 u\\ \dot{x}_3 = x_1 + (1 + x_3)u\end{cases}$$

with $f(x) = (x_2(1+x_3),\; x_3(1+x_1),\; x_1)^T$ and $g(x) = (0,\; -x_2,\; 1+x_3)^T$. We first rectify the vector field $g(x)$. Put $\theta(x) = g(x)$ and apply Theorem II.2 ([33]) with $n = 3$ and $\bar\theta_3(x) = (1+x_3)^{-1}$, thus $\bar\theta = -x_2(1+x_3)^{-1}\frac{\partial}{\partial x_2} + \frac{\partial}{\partial x_3}$. Since $\theta_1 = 0$ and $\theta_2(x) = -x_2$, we have $\phi_1(x) = x_1$ on one side, and
$$L_{\bar\theta}(\bar\theta_3\theta_2) = 2x_2(1+x_3)^{-2}, \quad L_{\bar\theta}^2(\bar\theta_3\theta_2) = -6x_2(1+x_3)^{-3}$$
on the other, and recurrently
$$L_{\bar\theta}^{s-1}(\bar\theta_3\theta_2) = (-1)^s s!\, x_2(1+x_3)^{-s}.$$
It follows that
$$z_2 = \phi_2(x) = x_2 + \sum_{s=1}^{\infty} \frac{(-1)^s x_3^s}{s!}\, L_{\bar\theta}^{s-1}(\bar\theta_3\theta_2)(x) = x_2 + x_2\sum_{s=1}^{\infty}\left(\frac{x_3}{1+x_3}\right)^s = x_2(1+x_3).$$
To calculate $\phi_3(x)$, notice that $L_{\bar\theta}(\bar\theta_3) = -(1+x_3)^{-2}$ and $L_{\bar\theta}^2(\bar\theta_3) = 2(1+x_3)^{-3}$. Thus a simple recurrence shows that $L_{\bar\theta}^{s-1}(\bar\theta_3) = (-1)^{s-1}(s-1)!\,(1+x_3)^{-s}$, for $s \geq 1$, which implies
$$z_3 = \phi_3(x) = \sum_{s=1}^{\infty} \frac{(-1)^{s+1} x_3^s}{s!}\, L_{\bar\theta}^{s-1}(\bar\theta_3)(x) = \sum_{s=1}^{\infty} \frac{1}{s}\left(\frac{x_3}{1+x_3}\right)^s = \sum_{s=1}^{\infty} \int_0^{x_3} \left(\frac{t}{1+t}\right)^{s-1}\left(\frac{t}{1+t}\right)' dt = \int_0^{x_3} \frac{dt}{1+t} = \ln(1+x_3).$$
We apply the change of coordinates $z_1 = x_1$, $z_2 = x_2(1+x_3)$, $z_3 = \ln(1+x_3)$ to transform the original system into
$$\tilde\Sigma\colon \dot{z} = \tilde{f}(z) + \tilde{g}(z)u, \quad \begin{cases} \dot{z}_1 = z_2\\ \dot{z}_2 = (1+z_1)e^{z_3}(e^{z_3} - 1) + z_1 z_2 e^{-z_3}\\ \dot{z}_3 = z_1 e^{-z_3} + u.\end{cases}$$
The system is in $(FB)$-form and can be put into the linear Brunovský form $Br\colon \dot{w}_1 = w_2$, $\dot{w}_2 = w_3$, $\dot{w}_3 = v$ via
$$w_1 = h(z) = z_1, \quad w_2 = L_{\tilde f}h(z) = z_2, \quad w_3 = L_{\tilde f}^2 h(z) = (1+z_1)e^{z_3}(e^{z_3}-1) + z_1z_2e^{-z_3}, \quad v = L_{\tilde f}^3 h(z) + L_{\tilde g}L_{\tilde f}^2 h(z)\,u.$$
The composition of the two-step changes of coordinates gives linearizing coordinates
$$w_1 = x_1, \quad w_2 = x_2(1+x_3), \quad w_3 = x_3(1+x_1)(1+x_3) + x_1x_2,$$
and feedback for the original system
$$v = x_2(1+x_3)(x_2 + x_3 + x_3^2) + x_1(1+x_1)(1+3x_3) + \left[(1+x_1)(1+x_3)(1+2x_3) - x_1x_2\right]u.$$
Such linearizing coordinates and feedback could have been obtained by other methods. We want to point out that the method is applicable to all feedback linearizable systems.

Example 5.2 Consider a single-input control system
$$\Sigma\colon \dot{x} = f(x) + g(x)u, \quad \begin{cases} \dot{x}_1 = x_2 - x_4^2\\ \dot{x}_2 = x_4 + 2x_1^2x_4 + 2x_4u\\ \dot{x}_3 = x_1^2\\ \dot{x}_4 = x_1 + x_4^2 + u\end{cases}$$
with $f(x) = (x_2 - x_4^2,\; x_4 + 2x_1^2x_4,\; x_1^2,\; x_1 + x_4^2)^T$ and $g(x) = (0,\; 2x_4,\; 0,\; 1)^T$. This system is not feedback linearizable, as it can be checked that $[g, \operatorname{ad}_f g] \notin \operatorname{span}\{g, \operatorname{ad}_f g\}$. We want to show that the algorithm provides such information without having to compute the involutivity of the distributions. We first start by rectifying the control vector field $g$. Identify $\theta = g(x)$ with $\theta_4 = 1$. We calculate the component
$$\phi_2(x) = x_2 + \sum_{s=1}^{\infty} \frac{(-1)^s x_4^s}{s!}\, L_{\theta}^{s-1}(\theta_2)(x) = x_2 + \sum_{s=1}^{\infty} \frac{(-1)^s x_4^s}{s!}\, L_{\theta}^{s-1}(2x_4)(x) = x_2 - x_4^2.$$
Since $\theta_1, \theta_3, \theta_4$ are constants, then $\phi_1(x) = x_1$, $\phi_3(x) = x_3$, and $\phi_4(x) = x_4$. The change of coordinates $z_1 = x_1$, $z_2 = x_2 - x_4^2$, $z_3 = x_3$, $z_4 = x_4$ takes the system into
$$\tilde\Sigma\colon \dot{z} = \tilde f(z) + \tilde g(z)u, \quad \begin{cases} \dot{z}_1 = z_2\\ \dot{z}_2 = z_4 - 2z_1z_4 + 2z_1^2z_4 - 2z_4^3\\ \dot{z}_3 = z_1^2\\ \dot{z}_4 = z_1 + z_4^2 + u\end{cases}$$
where $\tilde g = (0, 0, 0, 1)^T$ and $\tilde f(z) = (z_2,\; z_4 - 2z_1z_4 + 2z_1^2z_4 - 2z_4^3,\; z_1^2,\; z_1 + z_4^2)^T$. Clearly,
$$\frac{\partial \tilde f}{\partial z_4} = (0,\; 1 - 2z_1 + 2z_1^2 - 6z_4^2,\; 0,\; 2z_4)^T, \quad \frac{\partial^2 \tilde f}{\partial z_4^2} = (0,\; -12z_4,\; 0,\; 2)^T,$$
from which we deduce that $\frac{\partial^2 \tilde f_j}{\partial z_4^2} = \eta_1 \frac{\partial \tilde f_j}{\partial z_4}$, $1 \leq j \leq 3$, fails. The algorithm ends: the system is not F-linearizable.

Example 5.3 Consider the single-input control system [12]
$$\Sigma\colon \dot{x} = f(x) + g(x)u, \quad \begin{cases} \dot{x}_1 = e^{x_2}u\\ \dot{x}_2 = x_1 + x_2^2 + e^{x_2}u\\ \dot{x}_3 = x_1 - x_2\end{cases}$$
with $f(x) = (0,\; x_1 + x_2^2,\; x_1 - x_2)^T$ and $g(x) = (e^{x_2},\; e^{x_2},\; 0)^T$. We first rectify the vector field $g(x)$. Denote $\theta(x) = g(x)$ and apply Theorem II.2 ([33]) with $n = 3$ and $\bar\theta_2(x) = e^{-x_2}$.

Hence $\bar\theta = \frac{\partial}{\partial x_1} + \frac{\partial}{\partial x_2}$. Since $\theta_3 = 0$, then $\phi_3(x) = x_3$. Because $L_{\bar\theta}^{s-1}(\bar\theta_2\theta_1) = 0$ for all $s \geq 2$, we obtain
$$z_1 = \phi_1(x) = x_1 + \sum_{s=1}^{\infty} \frac{(-1)^s x_2^s}{s!}\, L_{\bar\theta}^{s-1}(\bar\theta_2\theta_1)(x) = x_1 - x_2(\bar\theta_2\theta_1)(x) = x_1 - x_2.$$
To compute $\phi_2$, notice that $L_{\bar\theta}^{s-1}(\bar\theta_2) = (-1)^{s-1}e^{-x_2}$ for all $s \geq 1$. It thus follows that
$$z_2 = \phi_2(x) = \sum_{s=1}^{\infty} \frac{(-1)^{s+1} x_2^s}{s!}\, L_{\bar\theta}^{s-1}(\bar\theta_2)(x) = \sum_{s=1}^{\infty} \frac{x_2^s}{s!}\, e^{-x_2} = 1 - e^{-x_2}.$$
The change of coordinates
$$z = \phi(x) = (x_1 - x_2,\; 1 - e^{-x_2},\; x_3)^T,$$
whose inverse $x = \psi(z) = (z_1 - \ln(1-z_2),\; -\ln(1-z_2),\; z_3)^T$ can be obtained directly or by applying Theorem II.2 (ii) (see [33]), takes the original system into
$$\begin{cases} \dot{z}_1 = -z_1 + \ln(1-z_2) - (\ln(1-z_2))^2\\ \dot{z}_2 = (1-z_2)\left[z_1 - \ln(1-z_2) + (\ln(1-z_2))^2\right] + u\\ \dot{z}_3 = z_1.\end{cases}$$
A permutation of the variables $\tilde z_1 = z_3$, $\tilde z_2 = z_1$, $\tilde z_3 = z_2$ yields a system in feedback form
$$(FB)\colon \begin{cases} \dot{\tilde z}_1 = \tilde z_2\\ \dot{\tilde z}_2 = -\tilde z_2 + \ln(1-\tilde z_3) - (\ln(1-\tilde z_3))^2\\ \dot{\tilde z}_3 = (1-\tilde z_3)\left[\tilde z_2 - \ln(1-\tilde z_3) + (\ln(1-\tilde z_3))^2\right] + u,\end{cases}$$
that can be linearized by
$$w_1 = \tilde z_1, \quad w_2 = \tilde z_2, \quad w_3 = -\tilde z_2 + \ln(1-\tilde z_3) - (\ln(1-\tilde z_3))^2, \quad v = \dot{w}_3.$$
We thus deduce that the change of coordinates and feedback
$$w_1 = x_3, \quad w_2 = x_1 - x_2, \quad w_3 = -x_1 - x_2^2, \quad v = -2x_2(x_1 + x_2^2) - (1 + 2x_2)e^{x_2}u$$
brings $\Sigma$ into the Brunovský form $Br\colon \dot{w}_1 = w_2$, $\dot{w}_2 = w_3$, $\dot{w}_3 = v$. Notice that such change of coordinates was given in [12]. However, there the system was coupled with the given output $y = h(x) = x_3$, which made finding them straightforward.

6. Appendix: Proofs of Results

Below we establish an equivalence between the involutivity conditions of Theorem 1.1 and a sequence of easily computable conditions $(F_n), \ldots, (F_1)$, each stating the fact that the second derivative of $f$ with respect to some variable is proportional to its first derivative with respect to the same variable. This constitutes the core of the algorithm.

Simple Involutivity Conditions. Consider the system $\Sigma\colon \dot{x} = f(x) + g(x)u$ and assume without loss of generality that $g(x) = (0, \ldots, 0, 1)^T$ and
$$f_j(x) = \begin{cases} f_j(x_1, \ldots, x_{k+1}), & 1 \leq j \leq k\\ f_j(x_1, \ldots, x_{j+1}), & k+1 \leq j \leq n,\end{cases}$$
where $1 \leq k \leq n-1$ and $f_n$ depends only on $x_1, \ldots, x_n$.

Claim: If the distributions
$$\mathcal{D}_j(x) = \operatorname{span}\left\{g(x),\, \operatorname{ad}_f g(x),\, \ldots,\, \operatorname{ad}_f^{j-1} g(x)\right\}, \quad 1 \leq j \leq n,$$
are involutive, then there is a function $\eta_{n-k}$ such that
$$(F_{k+1})\colon \quad \frac{\partial^2 f_j}{\partial x_{k+1}^2} = \eta_{n-k}(x)\, \frac{\partial f_j}{\partial x_{k+1}}, \quad 1 \leq j \leq k.$$
Moreover, functions $\pi(x) = \pi(x_1, \ldots, x_{k+1})$, $F_j(x) = F_j(x_1, \ldots, x_k)$ and $\alpha_j(x) = \alpha_j(x_1, \ldots, x_k)$ exist such that
$$f_j(x_1, \ldots, x_{k+1}) = F_j(x) + \alpha_j(x)\pi(x), \quad 1 \leq j \leq k, \qquad (6.1)$$
with $\pi(x)$ depending exclusively on $\eta_{n-k}(x)$.

Proof: Remark that the vector field $f$ can be written as
$$f(x) = \sum_{j=1}^{k} f_j(x_1, \ldots, x_{k+1})\frac{\partial}{\partial x_j} + \sum_{j=k+1}^{n} f_j(x_1, \ldots, x_{j+1})\frac{\partial}{\partial x_j},$$
and that the function $\pi$ given above is independent of $j$; otherwise the decomposition (6.1) would have been trivial. For any $1 \leq j \leq n$ denote by $\Delta_j = \operatorname{span}\left\{\frac{\partial}{\partial x_{n-j+1}}, \ldots, \frac{\partial}{\partial x_n}\right\}$ the module generated over the ring of smooth functions, that is, each element of $\Delta_j$ is a linear combination of the vector fields $\frac{\partial}{\partial x_{n-j+1}}, \ldots, \frac{\partial}{\partial x_n}$ whose coefficients are smooth functions. We first verify easily that
$$\operatorname{ad}_f g = -\frac{\partial f_{n-1}}{\partial x_n}\frac{\partial}{\partial x_{n-1}} - \frac{\partial f_n}{\partial x_n}\frac{\partial}{\partial x_n} = \beta_{n-1}(x)\frac{\partial}{\partial x_{n-1}} + \gamma_{n-1}(x),$$
where $\beta_{n-1}(x) = -\frac{\partial f_{n-1}}{\partial x_n}$ and $\gamma_{n-1}(x) \in \Delta_1$. An induction argument implies that for any $1 \leq j \leq n-k-1$, we have
$$\operatorname{ad}_f^j g = \beta_{n-j}(x)\frac{\partial}{\partial x_{n-j}} + \gamma_{n-j}(x),$$
where $\beta_{n-j}(x) = (-1)^j \prod_{i=1}^{j} \frac{\partial f_{n-i}}{\partial x_{n-i+1}}$ and $\gamma_{n-j}(x) \in \Delta_j$. In particular, for $j = n-k-1$ we have
$$\operatorname{ad}_f^{n-k-1} g = \beta_{k+1}(x)\frac{\partial}{\partial x_{k+1}} + \gamma_{k+1}(x),$$

where $\gamma_{k+1}(x) \in \Delta_{n-k-1}$. The Lie bracket with $f$ gives
$$\operatorname{ad}_f^{n-k} g = \left[\sum_{j=1}^{k} f_j(x_1, \ldots, x_{k+1})\frac{\partial}{\partial x_j} + \sum_{j=k+1}^{n} f_j(x_1, \ldots, x_{j+1})\frac{\partial}{\partial x_j},\; \beta_{k+1}\frac{\partial}{\partial x_{k+1}} + \gamma_{k+1}\right] = -\beta_{k+1}(x)\sum_{j=1}^{k} \frac{\partial f_j}{\partial x_{k+1}}\frac{\partial}{\partial x_j} + \gamma_k,$$
where $\gamma_k(x) \in \Delta_{n-k} = \operatorname{span}\{\frac{\partial}{\partial x_{k+1}}, \ldots, \frac{\partial}{\partial x_n}\}$. This is due to the following facts:
(i) $\operatorname{ad}_f^{n-k-1} g \in \Delta_{n-k}$;
(ii) $f_j(x_1, \ldots, x_{j+1})\frac{\partial}{\partial x_j} \in \Delta_{n-k}$, $k+1 \leq j \leq n$;
(iii) $[f_j(x_1, \ldots, x_{k+1})\frac{\partial}{\partial x_j}, \Delta_{n-k}] = f_j(\cdot)\,[\frac{\partial}{\partial x_j}, \Delta_{n-k}]$ modulo the terms displayed above;
(iv) $[\Delta_{n-k}, \Delta_{n-k}] \subset \Delta_{n-k}$.
A simple calculation shows (using items (i)–(iv)) that
$$\left[\operatorname{ad}_f^{n-k} g,\; \operatorname{ad}_f^{n-k-1} g\right] = (\beta_{k+1})^2(x)\sum_{j=1}^{k} \frac{\partial^2 f_j}{\partial x_{k+1}^2}\frac{\partial}{\partial x_j} + \hat\gamma_k(x),$$
where $\hat\gamma_k(x) \in \Delta_{n-k}$. On the other hand, the involutivity of $\mathcal{D}_{n-k+1}$ implies that
$$\left[\operatorname{ad}_f^{n-k} g,\; \operatorname{ad}_f^{n-k-1} g\right] = \sum_{j=k}^{n} \lambda_{n-j}\operatorname{ad}_f^{n-j} g = \lambda_{n-k}\operatorname{ad}_f^{n-k} g + \hat\delta_k,$$
for some smooth functions $\lambda_0, \lambda_1, \ldots, \lambda_{n-k}$, with $\hat\delta_k \in \Delta_{n-k}$. Comparing the two Lie brackets it follows that
$$\frac{\partial^2 f_j}{\partial x_{k+1}^2} = \eta_{n-k}(x)\,\frac{\partial f_j}{\partial x_{k+1}}, \quad 1 \leq j \leq k,$$
that is, the condition $(F_{k+1})$. Notice that $\eta_{n-k} = \eta_{n-k}(x_1, \ldots, x_{k+1})$ depends exclusively on the variables $x_1, \ldots, x_{k+1}$ since the components $f_j$ depend only on such variables. A double integration shows that there exist functions $F_j(x)$ and $\alpha_j(x)$, $1 \leq j \leq k$, such that
$$f_j(x_1, \ldots, x_{k+1}) = F_j(x_1, \ldots, x_k) + \alpha_j(x_1, \ldots, x_k)\pi(x),$$
where
$$\pi(x) = \int_0^{x_{k+1}} \exp\left( \int_0^{t} \eta_{n-k}(x_1, \ldots, x_k, s)\,ds \right) dt$$
depends exclusively on $\eta_{n-k}$ but not on the components. This achieves the proof of the claim.

Proof of Theorem 2.2. Below we first give a brief proof of the constructive approach for rectifying nonsingular vector fields (Theorem 2.2), and we later address the convergence of the series.

Proof of Theorem 2.2 (i). Notice that for any diffeomorphism $z = \phi(x)$ the two following conditions are equivalent:
(a) $(\phi_* \theta)(z) = \frac{\partial}{\partial z_n}$;
(b) $L_\theta(\phi_j)(x) = 0$ for $1 \leq j \leq n-1$, and $L_\theta(\phi_n)(x) = 1$.
For that reason we will show that condition (b) holds. To start, let us take $1 \leq j \leq n-1$. Using $L_\theta = \theta_n L_{\bar\theta}$, it follows directly that
$$L_\theta(\phi_j)(x) = L_\theta(x_j) + \sum_{s=1}^{\infty} \frac{(-1)^s x_n^{s-1}}{(s-1)!}\,\theta_n(x)\, L_{\bar\theta}^{s-1}(\bar\theta_n\theta_j)(x) + \sum_{s=1}^{\infty} \frac{(-1)^s x_n^{s}}{s!}\,\theta_n(x)\, L_{\bar\theta}^{s}(\bar\theta_n\theta_j)(x) = 0,$$
since the $s = 1$ term of the first series equals $-\theta_n\bar\theta_n\theta_j = -\theta_j = -L_\theta(x_j)$ and the remaining terms of the two series cancel pairwise. Similarly,
$$L_\theta(\phi_n)(x) = \sum_{s=1}^{\infty} \frac{(-1)^{s+1} x_n^{s-1}}{(s-1)!}\,\theta_n(x)\, L_{\bar\theta}^{s-1}(\bar\theta_n)(x) + \sum_{s=1}^{\infty} \frac{(-1)^{s+1} x_n^{s}}{s!}\,\theta_n(x)\, L_{\bar\theta}^{s}(\bar\theta_n)(x) = \theta_n(x)\bar\theta_n(x) = 1.$$
This ends the sketch of proof of Theorem 2.2 (i).

Proof of Theorem 2.2 (ii). The proof of the inverse is constructive. It is enough to show it in the case $k = n$, that is, we suppose $(\phi_*\theta) = \frac{\partial}{\partial z_n}$. The general case follows by first applying the permutation
$$\tilde{x} = \tau(x), \quad \begin{cases} \tilde{x}_j = \tau_j(x) = x_j, & j \neq k,\; j \neq n,\\ \tilde{x}_k = \tau_k(x) = x_n,\\ \tilde{x}_n = \tau_n(x) = x_k.\end{cases}$$
We look for a change of coordinates $x = \psi(z)$ that satisfies $\frac{\partial \psi(z)}{\partial z_n} = \theta(\psi(z))$. First, we extend $\theta$ in $\mathbb{R}^{n+1}$ as
$$\tilde\theta(x, y) = \tilde\theta_1(x, y)\frac{\partial}{\partial x_1} + \cdots + \tilde\theta_n(x, y)\frac{\partial}{\partial x_n} + \tilde\theta_{n+1}(x, y)\frac{\partial}{\partial y},$$
where $\tilde\theta_j = \theta_j(x)$ for $1 \leq j \leq n$, and $\tilde\theta_{n+1} = \theta_n(x)$. We want to emphasize here the fact that the components $\tilde\theta_n(x, y)$ and $\tilde\theta_{n+1}(x, y)$ are both equal to $\theta_n(x)$. Because $\tilde\theta(0) \neq 0$ there exists a change of coordinates $(z, w) = \tilde\phi(x, y)$ such that $\tilde\phi_*\tilde\theta = \frac{\partial}{\partial z_n} + \frac{\partial}{\partial w}$. An inverse $(x, y) = \tilde\psi(z, w)$ should thus satisfy
$$\frac{\partial \tilde\psi}{\partial z_n} + \frac{\partial \tilde\psi}{\partial w} = \tilde\theta(\tilde\psi(z, w)). \qquad (6.2)$$
To complete the proof we will show that $\frac{\partial \psi_j(z)}{\partial z_n} = \theta_j(\psi(z))$ for all $1 \leq j \leq n$, where $\psi_j(z) \triangleq \tilde\psi_j(z, w)\big|_{w=z_n}$; this indeed follows from the fact that
$$\frac{\partial \psi_j(z)}{\partial z_n} = \frac{\partial \tilde\psi_j}{\partial z_n}(z, z_n) + \frac{\partial \tilde\psi_j}{\partial w}(z, z_n) = \tilde\theta_j(\tilde\psi(z, z_n)) = \theta_j(\psi(z)).$$
This ends the proof-sketch of Theorem 2.2.
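As a concrete check of the constructive inverse just sketched, one can verify symbolically that the pair $\phi$, $\psi$ obtained for Example 5.3 are mutually inverse. The sketch below is ours (sympy; the `compose` helper is an assumption, not from the paper):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)
z1, z2, z3 = sp.symbols('z1 z2 z3', real=True)

# Forward coordinates z = phi(x) computed in Example 5.3 ...
phi = [x1 - x2, 1 - sp.exp(-x2), x3]
# ... and their inverse x = psi(z), as given by Theorem 2.2 (ii).
psi = [z1 - sp.log(1 - z2), -sp.log(1 - z2), z3]

def compose(outer, inner, outer_vars):
    """Plug the map `inner` into the map `outer` (outer_vars: outer's arguments)."""
    subs = dict(zip(outer_vars, inner))
    return [sp.simplify(c.subs(subs)) for c in outer]

round_trip_x = compose(psi, phi, [z1, z2, z3])  # psi(phi(x)): should return x
round_trip_z = compose(phi, psi, [x1, x2, x3])  # phi(psi(z)): should return z
```

Both round trips reduce to the identity map, confirming that the series-built inverse agrees with direct inversion of $z = \phi(x)$.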

Convergence. Let us first introduce some useful notation. For any $x \in \mathbb{R}^n$ we put $x = (x_1, \ldots, x_n)$. For the subset $\mathbb{N}^n \subset \mathbb{R}^n$ of $n$-tuples of integers we use a bolded variable to denote its elements. Given two $n$-tuples $m = (m_1, \ldots, m_n)$ and $\ell = (\ell_1, \ldots, \ell_n)$ we say that $\ell \leq m$ if and only if $\ell_i \leq m_i$ for all $1 \leq i \leq n$, and we denote $m! = m_1! \cdots m_n!$, $x^m = x_1^{m_1} \cdots x_n^{m_n}$, and $|m| = m_1 + \cdots + m_n$. Let $f$ be an analytic function with $f(x) = \sum_m f_m x^m$ its Taylor series expansion, where the $f_m$ are constant coefficients, and let $\theta = \theta_1\frac{\partial}{\partial x_1} + \cdots + \theta_n\frac{\partial}{\partial x_n}$ be an analytic vector field. For any $\rho > 0$ we define the norm $\|\cdot\|_\rho$ by $\|f\|_\rho = \sum_m |f_m|\,\rho^{|m|}$ and extend the norm to vector fields by $\|\theta\|_\rho = \max\{\|\theta_1\|_\rho, \ldots, \|\theta_n\|_\rho\}$.

Define the operator $\tilde\partial \triangleq \frac{\partial}{\partial z_n} + \frac{\partial}{\partial w}$ and rewrite (6.2) as $\tilde\partial\tilde\psi = \tilde\theta(\tilde\psi(z, w))$. Applying the operator again on both sides gives (we put $\tilde\partial^2 \triangleq \tilde\partial \circ \tilde\partial$)
$$\tilde\partial^2\tilde\psi(z, w) = \tilde\partial\,\tilde\theta(\tilde\psi(z, w)) = \frac{\partial\tilde\theta}{\partial(x, y)}(\tilde\psi(z, w))\,\tilde\partial\tilde\psi(z, w) = \frac{\partial\tilde\theta}{\partial(x, y)}(\tilde\psi(z, w))\,\tilde\theta(\tilde\psi(z, w)) = (L_{\tilde\theta}\tilde\theta)(\tilde\psi(z, w)).$$
A simple recurrence argument yields
$$\tilde\partial^s\tilde\psi(z, w) = (L_{\tilde\theta}^{s-1}\tilde\theta)(\tilde\psi(z, w)), \quad \text{for all } s \geq 1.$$
Define $\partial_w^s \triangleq \frac{\partial^s}{\partial w^s}$ and $\partial_{z_n}^s \triangleq \frac{\partial^s}{\partial z_n^s}$ for all $s \geq 1$. Since on the one hand $\partial_w = \tilde\partial - \partial_{z_n}$ and on the other hand $\partial_{z_n}$ commutes with $\tilde\partial$, it follows that
$$\partial_w^s\tilde\psi = \sum_{i=0}^{s} (-1)^i C_s^i\, \partial_{z_n}^i\tilde\partial^{s-i}\tilde\psi = \sum_{i=0}^{s-1} (-1)^i C_s^i\, \partial_{z_n}^i(L_{\tilde\theta}^{s-i-1}\tilde\theta)(\tilde\psi(z, w)) + (-1)^s\partial_{z_n}^s\tilde\psi.$$
Taking $\tilde\psi(z, 0) = (z, 0)$, we get
$$\partial_w^s\tilde\psi\big|_{w=0} = \sum_{i=0}^{s-1} (-1)^i C_s^i\, \partial_{z_n}^i(L_{\tilde\theta}^{s-i-1}\tilde\theta)(z, 0).$$
A Taylor series expansion of $\tilde\psi(z, w)$ with respect to $w$ at $w = 0$ is
$$\tilde\psi(z, w) = \begin{pmatrix} z\\ 0 \end{pmatrix} + \sum_{s=1}^{\infty} \frac{w^s}{s!}\left( \sum_{i=0}^{s-1} (-1)^i C_s^i\, \partial_{z_n}^i(L_{\tilde\theta}^{s-i-1}\tilde\theta)(z, 0) \right).$$
Let us define $\psi(z)$ by its components in the following way: for any $1 \leq j \leq n$ we set $\psi_j(z) = \tilde\psi_j(z, w)\big|_{w=z_n}$. Since for any $1 \leq j \leq n$, $\tilde\theta_j(x, y) = \theta_j(x)$ is independent of the variable $y$, it follows that $L_{\tilde\theta}^s\tilde\theta_j = L_\theta^s\theta_j$ for all $s \geq 0$. We then deduce that
$$\psi_j(z) = z_j + \sum_{s=1}^{\infty} \frac{z_n^s}{s!}\left( \sum_{i=0}^{s-1} (-1)^i C_s^i\, \partial_{z_n}^i L_\theta^{s-i-1}(\theta_j)(z) \right).$$

(i) We now prove the convergence of the series
$$\phi_j(x) = x_j + \sum_{s=1}^{\infty} \frac{(-1)^s x_k^s}{s!}\, L_{\bar\theta}^{s-1}(\bar\theta_k\theta_j)(x).$$
Assume $\theta_k(0) \neq 0$, put $\bar\theta_k = 1/\theta_k$ and take $f = \bar\theta_k\theta_j$. Choose $\rho > 0$ such that $\|\bar\theta_k\theta_j\|_\rho = \delta_j(\rho) < +\infty$ for all $1 \leq j \leq n$ and put $\delta(\rho) = \max\{\delta_1(\rho), \ldots, \delta_n(\rho)\}$. Using Lemma 6.1 (ii) below we obtain, for any $0 < \mu < \rho$ and any $s \geq 1$, that
$$\|L_{\bar\theta}^{s-1}(f)\|_\mu \leq (s-1)!\,(-\ln(\mu/\rho))^{-s+1}\,\|f\|_\rho\,(\delta(\rho))^{s-1} \leq (s-1)!\,(-\ln(\mu/\rho))^{-s+1}\,(\delta(\rho))^{s}. \qquad (6.3)$$
Hence the norm of the series $\phi_j(x)$ can be bounded by
$$\|\phi_j\|_\mu \leq \mu + \sum_{s=1}^{\infty} \frac{\mu^s}{s!}\,\|L_{\bar\theta}^{s-1}(f)\|_\mu \leq \mu + \mu\,\delta(\rho)\sum_{s=1}^{\infty} \frac{1}{s}\left( \frac{\mu\,\delta(\rho)}{-\ln(\mu/\rho)} \right)^{s-1}.$$
The series converges provided $\delta(\rho)/(-\ln(\mu/\rho)) < 1$, that is, if we choose $\mu < \rho\,e^{-\delta(\rho)}$.

(ii) To prove the convergence of the series
$$\psi_j(z) = z_j + \sum_{s=1}^{\infty} \frac{z_n^s}{s!}\left( \sum_{i=0}^{s-1} (-1)^i C_s^i\, \partial_{z_n}^i L_\theta^{s-i-1}(\theta_j)(z) \right)$$
we use Lemma 6.1 (iii). Taking $f = \theta_j$ we can estimate the component $\psi_j$ as follows:
$$\|\psi_j\|_\mu \leq \mu + \sum_{s=1}^{\infty} \frac{\mu^s}{s!}\left( \sum_{i=0}^{s-1} C_s^i\, \|\partial_{z_n}^i L_\theta^{s-i-1}(\theta_j)\|_\mu \right) \leq \mu + \sum_{s=1}^{\infty} \frac{\mu^s}{s!}\left( \sum_{i=0}^{s-1} C_s^i\,(s-1)!\,(-\ln(\mu/\rho))^{-s+1}\,\|f\|_\rho\,\|\theta\|_\rho^{s-i-1} \right)$$
$$\leq \mu + \frac{\|f\|_\rho}{\delta(\rho)}\sum_{s=1}^{\infty} \frac{\mu^s}{s}\,\frac{(1+\delta(\rho))^{s} - 1}{(-\ln(\mu/\rho))^{s-1}},$$
using $\sum_{i=0}^{s-1} C_s^i\,\delta(\rho)^{s-i-1} = [(1+\delta(\rho))^s - 1]/\delta(\rho)$. The series is convergent provided we choose $\mu(1+\delta(\rho))/(-\ln(\mu/\rho)) < 1$, that is, whenever $\mu < \rho\,e^{-1-\delta(\rho)}$.

To complete the proof we need to establish Lemma 6.1 below. Before, some more notation is needed. Let us denote by $\partial_i\colon C^\infty(\mathbb{R}^n) \to C^\infty(\mathbb{R}^n)$ the derivation operator with $\partial_i(f) = \frac{\partial f}{\partial x_i}$. For $\ell = (\ell_1, \ldots, \ell_n)$ we get
$$\partial^\ell(f) = \partial_1^{\ell_1} \cdots \partial_n^{\ell_n}(f) = \frac{\partial^{\ell_1 + \cdots + \ell_n} f}{\partial x_1^{\ell_1} \cdots \partial x_n^{\ell_n}}.$$

Lemma 6.1 Let $f$ (resp. $\theta$) be an analytic function (resp. vector field). Let $s \geq 1$ and $t \geq 0$ be given integers and $0 < \mu < \rho$ two positive real numbers. Define
$$M = \sup_{m_i \geq \ell_i}\left[ \frac{m_0!}{(m_0-\ell_0)!} \cdots \frac{m_s!}{(m_s-\ell_s)!}\,(\mu/\rho)^{|m|} \right],$$
where, for convenience of notation, we put $m = m_0 + \cdots + m_s$ and $\ell = \ell_0 + \cdots + \ell_s$, with $|\ell_0| + \cdots + |\ell_s| = s$. Then we have the following inequalities:
(i) $M \leq s!\,(-\ln(\mu/\rho))^{-s}$;
(ii) $\|L_\theta^s(f)\|_\mu \leq s!\,(-\ln(\mu/\rho))^{-s}\,\|f\|_\rho\,\|\theta\|_\rho^{s}$;
(iii) $\|\partial_{x_n}^i L_\theta^{t-i}(f)\|_\mu \leq t!\,(-\ln(\mu/\rho))^{-t}\,\|f\|_\rho\,\|\theta\|_\rho^{t-i}$.

Proof of Lemma 6.1. For the vector field $\theta$ we write $\partial^\ell(\theta) = (\partial^\ell\theta_1)\frac{\partial}{\partial x_1} + \cdots + (\partial^\ell\theta_n)\frac{\partial}{\partial x_n}$. It is easy to see that
$$L_\theta(f) = \sum_{j=1}^{n} \theta_j\,\frac{\partial f}{\partial x_j} = \sum_{j_1=1}^{n} \partial^{\ell_0}(f)\,\partial^{\ell_1}(\theta_{j_1}),$$
where $|\ell_0| = 1$ and $|\ell_1| = 0$, with $\ell_0$ an $n$-tuple whose components, except the $(j_1)$th component, are zero. By an inductive argument we check that for any $s \geq 1$ the successive Lie derivatives yield
$$L_\theta^s(f) = \sum_J \sum_{|\ell|=s} \partial^{\ell_0}(f)\,\partial^{\ell_1}(\theta_{j_1}) \cdots \partial^{\ell_{s-1}}(\theta_{j_{s-1}})\,\partial^{\ell_s}(\theta_{j_s}), \qquad (6.4)$$
where $J = \{j_1, \ldots, j_s,\; 1 \leq j_i \leq n\}$ and the second summation is taken over some $n$-tuples $\ell_i = (\ell_{i,1}, \ldots, \ell_{i,n})$, $i = 0, 1, \ldots, s$, with $\ell_s = 0$, $|\ell_0| \geq 1$ and $|\ell_0| + |\ell_1| + \cdots + |\ell_s| = s$. Let the Taylor expansions of the analytic functions $f, \theta_{j_1}, \ldots, \theta_{j_s}$ be represented by
$$f(x) = \sum_{m_0} f_{m_0} x^{m_0} \quad \text{and} \quad \theta_{j_i}(x) = \sum_{m_i} (\theta_{j_i})_{m_i} x^{m_i}$$
for all $1 \leq i \leq s$. It follows easily that
$$\partial^{\ell_0}(f) = \sum_{m_0 \geq \ell_0} \frac{m_0!}{(m_0-\ell_0)!}\,f_{m_0}\,x^{m_0-\ell_0},$$
and for any $1 \leq i \leq s$,
$$\partial^{\ell_i}(\theta_{j_i}) = \sum_{m_i \geq \ell_i} \frac{m_i!}{(m_i-\ell_i)!}\,(\theta_{j_i})_{m_i}\,x^{m_i-\ell_i}.$$
(i) Because $m_i!/(m_i-\ell_i)! \leq (m_i)^{\ell_i}$ for all $0 \leq i \leq s$, we deduce that
$$\sup_{m_i \geq \ell_i}\left[ \frac{m_0!}{(m_0-\ell_0)!} \cdots \frac{m_s!}{(m_s-\ell_s)!}\,(\mu/\rho)^{|m|} \right] \leq \sup_{|\ell|=s} (m_0)^{|\ell_0|} \cdots (m_s)^{|\ell_s|}\,(\mu/\rho)^{|m|} \leq \sup\left( |m_0| + \cdots + |m_s| \right)^{s}(\mu/\rho)^{|m|},$$
which implies that $M \leq s!\,(-\ln(\mu/\rho))^{-s}$. The inequality follows from Stirling's formula $s! = \sqrt{2\pi s}\,(s/e)^s e^{\epsilon_s} \geq (s/e)^s$, $\epsilon_s > 0$, and the fact that the maximum of $x^s(\mu/\rho)^x$, $s > 0$, is
$$\left( \frac{s}{\ln(\rho/\mu)} \right)^{s} e^{-s}.$$
(ii) For any $0 < \mu < \rho$ we have the following estimates:
$$\|L_\theta^s(f)\|_\mu = \Big\| \sum_J \sum_{|\ell|=s} \partial^{\ell_0}(f)\,\partial^{\ell_1}(\theta_{j_1}) \cdots \partial^{\ell_s}(\theta_{j_s}) \Big\|_\mu \leq \sum_J \sum_{|\ell|=s} \sum_{m_i \geq \ell_i} |f_{m_0}|\,|(\theta_{j_1})_{m_1}| \cdots |(\theta_{j_s})_{m_s}|\;\frac{m_0!}{(m_0-\ell_0)!} \cdots \frac{m_s!}{(m_s-\ell_s)!}\;\mu^{|m|-s}$$
$$= \mu^{-s} \sum_J \sum_{|\ell|=s} \sum_{m_i \geq \ell_i} \Big[ |f_{m_0}|\,\rho^{|m_0|}\,|(\theta_{j_1})_{m_1}|\,\rho^{|m_1|} \cdots |(\theta_{j_s})_{m_s}|\,\rho^{|m_s|} \Big]\,\frac{m_0!}{(m_0-\ell_0)!} \cdots \frac{m_s!}{(m_s-\ell_s)!}\,(\mu/\rho)^{|m|}$$
$$\leq \mu^{-s}\,\|f\|_\rho\,\|\theta\|_\rho^{s}\,\sup_{m_i \geq \ell_i}\Big[ \frac{m_0!}{(m_0-\ell_0)!} \cdots \frac{m_s!}{(m_s-\ell_s)!}\,(\mu/\rho)^{|m|} \Big].$$
Using item (i) above it follows that
$$\|L_\theta^s(f)\|_\mu \leq s!\,(-\ln(\mu/\rho))^{-s}\,\|f\|_\rho\,\|\theta\|_\rho^{s}.$$
Formula (6.3) follows directly if we replace $s$ by $s-1$, the vector field $\theta$ by $\bar\theta$, and the function $f$ by $\bar\theta_k\theta_j$, taking into account that $\|f\|_\rho \leq \delta(\rho)$.
(iii) Consider (6.4) where $s$ is replaced by $t-i$, that is,
$$L_\theta^{t-i}(f) = \sum_J \sum \partial^{\ell_0}(f)\,\partial^{\ell_1}(\theta_{j_1}) \cdots \partial^{\ell_{t-i}}(\theta_{j_{t-i}}),$$
with $\ell_{t-i} = 0$, $|\ell_0| \geq 1$ and $|\ell_0| + |\ell_1| + \cdots + |\ell_{t-i}| = t-i$. Differentiating $i$ times with respect to $x_n$ we get
$$\partial_{x_n}^i L_\theta^{t-i}(f) = \sum_J \sum \partial^{\bar\ell_0}(f)\,\partial^{\bar\ell_1}(\theta_{j_1}) \cdots \partial^{\bar\ell_{t-i}}(\theta_{j_{t-i}}), \qquad (6.5)$$
with $|\bar\ell_0| \geq 1$ and $|\bar\ell_0| + |\bar\ell_1| + \cdots + |\bar\ell_{t-i}| = t$. Following the same steps as in Lemma 6.1 (ii) we get $\|\partial_{x_n}^i L_\theta^{t-i}(f)\|_\mu \leq t!\,(-\ln(\mu/\rho))^{-t}\,\|f\|_\rho\,\|\theta\|_\rho^{t-i}$. Notice that the power $t-i$ on the last term is due to the fact that only $t-i$ factors involve the components of the vector field $\theta$.

Conclusion

In this paper we provided algorithms allowing to compute (feedback) linearizing coordinates for single-input control systems. The algorithms are based on a successive rectification of one vector field at a time using explicit convergent power series of functions. The algorithms do not require an a priori checking of the (feedback) linearization conditions of Theorem 1.1 (which are usually very hard). Indeed, at each step those conditions are replaced with the fact that the second derivative of a certain vector field is zero (state linearization) or proportional to its first derivative with respect to the same variable (feedback case). Thus at each step, the previous system is transformed into a new system which is a cascade between an affine lower-dimensional system and a linear one (or feedback form) whose first variable acts as control input for the lower system. The extension of our results to the multi-input case is in progress. We expect to apply the explicit solving of the flow box theorem to the Frobenius theorem by finding coordinates that simultaneously rectify a given set of vector fields. The algorithms will then be generalized to multi-input systems.

References

[1] J. Alvarez-Gallegos, Nonlinear regulation of a Lorenz system by feedback linearization techniques, Dynamics & Control, (1994), pp. 277-298.
[2] P. J. Antsaklis and A. N. Michel, Linear Systems, McGraw-Hill, 1997.
[3] A. Banaszuk, J. Hauser, Approximate feedback linearization: A homotopy operator approach, Proceedings of the American Control Conference, Baltimore, Maryland (1994), pp. 1690-1694.
[4] R. W. Brockett, Feedback invariants for nonlinear systems, in Proc. IFAC Congress, Helsinki, 1978.
[5] G. O. Guardabassi, S. M. Savaresi, Approximate linearization via feedback - an overview, Automatica, 37 (2001), pp. 1-15.
[6] K. Guemghar, B. Srinivasan, D. Bonvin, Approximate input-output linearization of nonlinear systems using the observability normal form, European Control Conference (2003).
[7] Q. Gong, W. Kang and I. M. Ross, A pseudospectral method for the optimal control of constrained feedback linearizable systems, IEEE Trans. Autom. Contr., 51 (2006), pp. 1115-1129.
[8] L. Guzzella, A. Isidori, On approximate linearization of nonlinear control systems, International Journal of Robust and Nonlinear Control, 3(3), pp. 261-276.
[9] J. Hauser, S. Sastry, P. Kokotovic, Nonlinear control via approximate input-output linearization: The ball and beam example, IEEE Trans. Autom. Contr., 37:3 (1992), pp. 392-398.
[10] M-T. Ho, Y-W. Tu, and H-S. Lin, Controlling a Ball and Wheel System using Full-State-Feedback Linearization: A Testbed for Nonlinear Control Design, IEEE Control Systems Magazine, 29(5), October 2009, pp. 93-101.
[11] L. R. Hunt and R. Su, Linear equivalents of nonlinear time varying systems, in Proceedings of Mathematical Theory of Networks & Systems, Santa Monica, CA, USA, (1981), pp. 119-123.
[12] A. Isidori, Nonlinear Control Systems, 3rd ed., Springer, London, 1995.
[13] B. Jakubczyk and W. Respondek, On linearization of control systems, Bull. Acad. Polon. Sci. Ser. Math., 28 (1980), pp. 517-522.
[14] T. Kailath, Linear Systems, Prentice Hall Information and System Sciences Series, USA, 1980.
[15] W. Kang, Approximate linearization of nonlinear control systems, Systems & Control Letters, 23:1 (1994), pp. 43-52.
[16] A. J. Krener, On the equivalence of control systems and the linearization of nonlinear systems, SIAM Journal on Control, 11 (1973), pp. 670-676.
[17] A. J. Krener, Approximate linearization by state feedback and coordinate change, Systems & Control Letters, 5(3) (1984), pp. 181-185.
[18] M. Krstic, Feedback linearizability and explicit integrator forwarding controllers for classes of feedforward systems, IEEE Trans. Automat. Control, 49 (2004), pp. 1668-1682.
[19] M. Krstic, Explicit Forwarding Controllers - Beyond Linearizable Class, in Proceedings of the 2005 American Control Conference, Portland, Oregon, USA, (2005), pp. 3556-3561.

[20] M. Krstic, Feedforward systems linearizable by coordinate change, in Proceedings of the 2004 American Control Conference, Boston, Massachusetts, USA, (2004), pp. 4348-4353.
[21] D. A. Lawrence, W. J. Rugh, Input-output pseudolinearization for nonlinear systems, IEEE Trans. Autom. Contr., 39:11 (1994), pp. 2207-2218.
[22] C. Liqun and L. Yanzhu, Control of the Lorenz chaos by the exact linearization, Applied Mathematics & Mechanics, 19 (1998), pp. 67-73.
[23] A. S. Morse, Structural Invariants of Linear Multivariable Systems, SIAM Journal of Control, 11(3) (1973), pp. 446-465.
[24] Ph. Mullhaupt, Quotient submanifolds for static feedback linearization, Systems & Control Letters, 55 (2006), pp. 549-557.
[25] H. Nijmeijer and A. J. van der Schaft, Nonlinear Dynamical Control Systems, Springer-Verlag, New York, 1990.
[26] M. Schlemmer and S. K. Agrawal, Globally feedback linearizable time-invariant systems: optimal solution for Mayer's problem, J. of Dynam. Syst., Measurem. & Contr., 122:2 (2000), pp. 343-347.
[27] A. Serrani, A. Isidori, C. I. Byrnes, and L. Marconi, Recent advances in output regulation of nonlinear systems, in Nonlinear Control in the Year 2000, A. Isidori, F. Lamnabhi-Lagarrigue and W. Respondek (eds), LNCIS vol. 259, 2 (2000), pp. 409-419.
[28] I. A. Tall and W. Respondek, On Linearizability of Strict Feedforward Systems, in Proceedings of the 2008 American Control Conference, Seattle, Washington, USA, pp. 1929-1934.
[29] I. A. Tall and W. Respondek, Feedback Linearizable Strict Feedforward Systems, in Proceedings of the 47th IEEE Conference on Decision and Control, (2008), Cancún, Mexico, pp. 2499-2504.
[30] I. A. Tall, Linearizable Feedforward Systems: A Special Class, in Proceedings of the 17th IEEE Conference on Control Applications, Multi-conference on Systems and Control, San Antonio, TX (2008), pp. 1201-1206.
[31] I. A. Tall, (Feedback) Linearizable Feedforward Systems: A Special Class, to appear in IEEE Trans. Autom. Contr.
[32] I. A. Tall, Flow Box Theorem and Beyond, to be submitted.
[33] I. A. Tall, State Linearization of Nonlinear Control Systems: An Explicit Algorithm, in 48th IEEE Conference on Decision and Control, China, pp. 7448-7453.
[34] I. A. Tall, Explicit Feedback Linearization of Nonlinear Control Systems, in 48th IEEE Conference on Decision and Control, China, pp. 7454-7459.
[35] I. A. Tall and W. Respondek, Analytic Normal Forms and Symmetries of Strict Feedforward Control Systems, to appear in International Journal of Robust and Nonlinear Control.
[36] S. Talwar, N. Sri Namachchivaya, and P. G. Voulgaris, Approximate Feedback Linearization: A Normal Form Approach, J. Dyn. Sys., Meas., Control, 118:2 (1996), pp. 201-210.

