
52nd IEEE Conference on Decision and Control

December 10-13, 2013. Florence, Italy

Designing Purely Decentralized Controllers to Stabilize Non-Minimum-Phase Double Integrator Networks with General Sensing Topologies

Artur Cook, Sandip Roy, Yan Wan

The first two authors are with the School of Electrical Engineering and Computer Science at Washington State University, and the last author is with the Electrical Engineering Department at the University of North Texas. This work has been generously supported by National Science Foundation Grants ECS-0901137, CNS-1035369, and CNS-1058124. Correspondence should be sent to sroy@eecs.wsu.edu.


Abstract— This article examines whether purely decentralized controllers can be designed to stabilize networks of double-integrator agents with general observation topologies and identical non-minimum-phase internal dynamics. A new control architecture is proposed that permits stabilization of such non-minimum-phase double-integrator networks. This design provides an alternative to solutions that require information exchange (of controller memory variables) between agents.

I. INTRODUCTION
The problem of controlling a team of autonomous agents
with networked sensing or communication capabilities has
been intensely studied in the Controls community, with both
motion-control and algorithm-development goals in mind
(see, e.g., [1], [2], [3], [4], [5]). In recent years, one thrust of
this research has been on developing controllers for increasingly sophisticated but also constrained agents, including
ones with increasingly complicated linear internal dynamics
[4], [6], [7] and those subject to actuator saturation and delay
[8], [9], [10]. Novel control schemes have been developed
that permit networks with such complicated agents to achieve
a range of control tasks, including notably stabilization,
formation-control, and synchronization tasks. There is now
a very wide literature on the control of multi-agent teams; see, e.g., [5] for an overview of recent results.
An interesting dichotomy has arisen in this research that is relevant to our development here: a number of studies seek purely decentralized solutions in which each agent uses only its measurements of network dynamics in feedback [4], [9], [10], [8], while other efforts permit communication of local storage or memory variables in accordance with the measurement graph topology (i.e., an agent that measures the dynamics of another may also receive storage variables from that agent [6], [7]). While there have been significant advances along both tracks, controller design has been achieved for a wider class of agent models and network topologies, and in a more systematic way, when communication of memory variables among the agents is permitted. That is, communication of memory variables appears to allow relaxation of several restrictions on both the network topology and the agents' dynamics, that arise when purely decentralized controllers are used. The existing research thus advocates for allowing communication of memory variables to permit control of sophisticated/constrained autonomous-agent teams, in cases where such communication is feasible at reasonable cost. It remains an open question, however, as to whether communication of memory variables is needed for control, or if yet-to-be-discovered solutions that are purely decentralized are possible.

In this article, we examine the gap between purely-decentralized and communication-allowing solutions to autonomous-agent control, using a network with identical non-minimum-phase double-integrator agents as a concrete case study. For this generalized double-integrator-network model, existing purely-decentralized solutions cannot be used to achieve stabilization for some network topologies and/or agent dynamics for which communications-allowing solutions are possible. However, using Wang and Davison's classical result [11], we determine that these further restrictions on the network topology and agent dynamics are, in fact, not necessary: in theory, a purely decentralized solution can be obtained for the same class of models for which communications-allowing solutions are possible. Finally, we introduce a new decentralized control architecture that permits purely-decentralized stabilization of the network model in full generality, and discuss both the merits and limitations of this control scheme.

The remainder of the article is organized as follows. In Section II, the generalized double-integrator-network model is formulated. In Section III, conditions for purely-decentralized stabilization are obtained using [11], and existing decentralized and communication-allowing control strategies are discussed in the context of this result. In Section IV, a new purely-decentralized control scheme that works in full generality is introduced.
II. MODEL FORMULATION


A network of $n$ autonomous agents, labeled $1, \ldots, n$, is considered. Each agent $i \in \{1, \ldots, n\}$ is modeled as having double-integrator internal dynamics:

$\dot{x}_i = A x_i + B u_i$,   (1)

where $x_i \in \mathbb{R}^2$ is the agent's state, $u_i \in \mathbb{R}$ is its input, $A = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$, $B = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$, and the dynamics evolves in continuous time ($t \in \mathbb{R}^+$). Also, each agent is modeled as making a single observation, which in general is a linear combination
across multiple agents of a particular scalar output statistic.


Specifically, agent $i$ has available the scalar measurement

$y_i = \sum_{j=1}^{n} g_{ij} z_j$,   (2)

where the scalar local output statistic $z_j$ is given by

$z_j = c^T x_j$,   (3)

the local output matrix $c = \begin{bmatrix} c_1 \\ c_2 \end{bmatrix}$ is a two-element vector
that specifies the (homogeneous) local output statistic, and
the weights gij specify how the local output statistics of
multiple agents are combined in the observation. We find
it convenient to define a matrix G = [gij ], and term this
matrix of weights as the topology matrix since it specifies the
sensing/communication topology among the agents (see e.g.
[12], [4]). We stress that each agent is assumed to only have
available the measurement yi , not the local output zi ; such
models have been termed non-introspective in the literature,
e.g. [7]. We also stress that no restrictions are placed on the topology matrix: each agent's measurements may be arbitrary linear combinations of the output statistics, and so the weights are arbitrary (positive or negative) real numbers.
We refer to the above-described model as the generalized double-integrator network (GDIN), noting that the special case where $c = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$ has been widely considered in the literature under the heading of double-integrator network. We also refer to the triple $(c, A, B)$ as the full agent model, since it specifies the input-to-local-output dynamics of each agent. A particular focus of this article will be on the case that the full agent model is non-minimum-phase. In this case, we will refer to the network model as a non-minimum-phase double-integrator network (NMPDIN).
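As a point of reference (this computation is not part of the original development, but follows directly from (1) and (3)), the input-to-local-output transfer function of each agent is

$c^T (sI - A)^{-1} B = \dfrac{c_2 s + c_1}{s^2}$,

which has a single finite zero at $s = -c_1/c_2$ when $c_2 \ne 0$. The full agent model is therefore non-minimum-phase exactly when $c_1$ and $c_2$ have opposite signs, and the pair $(c, A)$ is observable exactly when $c_1 \ne 0$.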
We find it convenient to represent the full open-loop dynamics of the GDIN as a single state-space system. Specifically, we define $x = [x_{1,1}, \ldots, x_{1,n} \,|\, x_{2,1}, \ldots, x_{2,n}]^T$, $u = [u_1, \ldots, u_n]^T$, and $y = [y_1, \ldots, y_n]^T$, where $x_{j,i}$ is the $j$th entry ($j = 1, 2$) of $x_i$. In this notation, the open-loop dynamics is given by

$\dot{x} = \begin{bmatrix} 0 & I_n \\ 0 & 0 \end{bmatrix} x + \begin{bmatrix} 0 \\ I_n \end{bmatrix} u, \qquad y = \begin{bmatrix} c_1 G & c_2 G \end{bmatrix} x.$   (4)
Controllers have been developed that permit networked multi-agent systems to complete a range of cooperative tasks, including stabilization, synchronization, consensus, formation, formation-tracking, partial synchronization, and self-partitioning tasks, among others (e.g., [4], [12], [13], [14], [15]). Here, for ease of presentation, we focus on controller design for asymptotic stabilization of the GDIN. The controller design that we obtain for the canonical stabilization problem can be straightforwardly adapted for other cooperative tasks, but details are not included here.

The focus of this article is on purely decentralized control of the GDIN. A decentralized controller is one in which each agent's input at time $t$, $u_i(t)$, is determined from current and previous observations made by that agent ($y_i(\tau)$, $0 \le \tau \le t$). Purely decentralized schemes will also briefly be compared with ones that allow communication of memory variables between graphical neighbors (that is, ones in which agent $i$ can use agent $j$'s memory variables, if $g_{ij} \ne 0$).

III. CONDITIONS FOR DECENTRALIZED STABILIZATION

Using Wang and Davison's classical result, we obtain necessary and sufficient conditions on a GDIN for stabilization using a decentralized dynamic linear time-invariant controller. The state-of-the-art in decentralized and communications-allowing control of the GDIN is then discussed, in the context of this existence result. Here is the condition for stabilization:

Theorem 1: A decentralized dynamic LTI controller can be applied to the GDIN to achieve asymptotic stability, if and only if the topology matrix $G$ has full rank and $(c, A)$ is observable (which happens if and only if $c_1 \ne 0$).
Proof: The result of Wang and Davison [11] states that a
decentralized LTI controller can be used to stabilize a system
if and only if its decentralized fixed modes are all in the open
left-half of the complex plane (OLHP). Since in our case all
eigenvalues of the open-loop system are zero, decentralized
stabilization is possible if and only if 0 is not a decentralized
fixed mode. Per the definition of a decentralized fixed mode
[11], zero is a decentralized fixed mode of the GDIN if and
only if, for any $n \times n$ diagonal matrix $K$, the following determinant is equal to zero:

$\det\left( \begin{bmatrix} 0 & I_n \\ 0 & 0 \end{bmatrix} + \begin{bmatrix} 0 \\ I_n \end{bmatrix} K \begin{bmatrix} c_1 G & c_2 G \end{bmatrix} \right) = \det \begin{bmatrix} 0 & I_n \\ c_1 K G & c_2 K G \end{bmatrix}.$   (5)

This algebraic condition immediately allows us to verify the conditions on $G$ and the pair $(c, A)$, as follows.
Sufficiency: Let us choose $K = I$, and use the notation $M$ for the quantity inside the determinant in Equation (5), i.e. $M = \begin{bmatrix} 0 & I_n \\ c_1 K G & c_2 K G \end{bmatrix}$. We can prove that the determinant is not zero by proving that the matrix has only the zero vector in its null space. To do so, let us consider the possibility that an arbitrary vector (say $v = \begin{bmatrix} v_1^T & v_2^T \end{bmatrix}^T$, where each $v_i$ is $n$-dimensional) is in the null space of $M$, i.e. the $2n$
equations M v = 0 are satisfied. However, from the first n
equations, it follows immediately that v2 = 0. Also, since G
is full rank and $c_1 \ne 0$, it further follows that $v_1 = 0$, and thus only the zero vector is in the null space of matrix $M$.
Therefore, the determinant is not zero, and stabilizability is
proved.
Necessity: If either $G$ is not full rank or $c_1 = 0$, there exists a vector $q_n \ne 0$ in the nullspace of $c_1 G$. Then $v = \begin{bmatrix} q_n^T & 0^T \end{bmatrix}^T$ is in the nullspace of the matrix $M$, for any $K$.
Hence, the determinant of interest is 0. It follows that zero
is a fixed mode of the GDIN, and stabilization is impossible.
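The fixed-mode test in (5) is also easy to probe numerically. The sketch below is an illustration only (not code from the paper): it samples random diagonal matrices $K$, and a single $K$ with nonzero determinant certifies that 0 is not a decentralized fixed mode.

```python
import numpy as np

def zero_is_fixed_mode(G, c1, c2, trials=20, tol=1e-9, seed=0):
    """Probe the condition in Equation (5): zero is a decentralized fixed mode only if
    det([[0, I], [c1*K*G, c2*K*G]]) vanishes for EVERY diagonal K.  One diagonal K with
    a nonzero determinant therefore certifies that 0 is NOT a fixed mode."""
    rng = np.random.default_rng(seed)
    n = G.shape[0]
    Z, I = np.zeros((n, n)), np.eye(n)
    for _ in range(trials):
        K = np.diag(rng.uniform(-1.0, 1.0, size=n))
        M = np.block([[Z, I],
                      [c1 * K @ G, c2 * K @ G]])
        if abs(np.linalg.det(M)) > tol:
            return False       # certificate found: 0 is not a decentralized fixed mode
    return True                # no certificate found (suggestive only, not a proof)

# Full-rank G with c1 != 0: Theorem 1 predicts "False" (no fixed mode at the origin).
print(zero_is_fixed_mode(np.array([[0.0, 1.0], [-1.0, 0.0]]), c1=1.0, c2=-1.0))
```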



For the GDIN, the condition for decentralized stabilization (or absence of closed-right-half-plane fixed modes) is identical to the condition for (centralized) stabilizability and detectability. Hence, decentralized stabilization using dynamic LTI controllers can be achieved in all cases
that centralized control (by any type of controller) can be
achieved. This equivalence also immediately implies that
purely-decentralized LTI controllers can achieve stabilization
in all cases that an information-transmission-allowing controller or a non-linear time-varying decentralized controller
(see e.g. [17]) achieves stabilization.
Although application of Wang and Davison's classical result indicates that decentralized stabilization of the GDIN can be achieved in full generality, this analysis does not directly yield a practical controller design: it is an existence result.
Several recent research efforts have proposed new controller
architectures for multi-agent networks, that yield practical
decentralized controllers for the GDIN. Two decentralized
control schemes are particularly relevant. First, the multi-lead-compensator paradigm introduced in [9], [10] permits
stabilization of the GDIN for arbitrary full-rank G, but only
for agents that are minimum-phase. On the other hand, the
observer-based design developed in e.g. [8] can be used
for an arbitrary full agent model, but requires G to have a
special form. Specifically, the authors in [8] (which focuses
on synchronization, not stabilization) assumed a directed-Laplacian topology matrix G; similar designs for stabilization can be obtained for grounded-Laplacian matrices G or
(more broadly) for matrices G whose eigenvalues can be
placed in a single open half plane through diagonal scaling,
but the method does not work for arbitrary full-rank G.
Several further controller designs are also available, including a passivity-based approach (see e.g. [16]); however, these methods also only apply to a subset of stabilizable GDINs. In
sum, the existing approaches do not permit design of stabilizing purely-decentralized controllers, for non-minimum-phase
agents and arbitrary full-rank graph matrices.
In comparison with purely decentralized controllers,
schemes that allow communication of memory variables can
be designed to achieve synchronization or stabilization, for
a much broader class of multi-agent networks (including
ones with heterogeneous local agent models [7]). While
most of the communications-allowing control schemes have
been focused on synchronization problems, these results can
be straightforwardly adapted to stabilization problems. In
particular, it is easy to check that a communications-allowing
controller can be designed for the GDIN in full generality.

IV. A NEW CONTROL SCHEME

A new feedback-control architecture is introduced that permits design of fully decentralized linear state-space feedback controllers for the GDIN, even when the full agent model is non-minimum-phase and the graph topology is arbitrary. Thus, the gap between the stabilizability condition obtained from Wang and Davison and the models addressed by existing purely-decentralized controllers is resolved. The result is presented in three steps: 1) a foundational lemma regarding stabilization of a matrix through diagonal scaling is revisited; 2) the proposed control architecture is introduced, and controller parameters are designed to achieve stability; 3) the controllers designed via this approach are shown to be implementable as linear state-space systems. Once the control design has been presented, we also briefly compare and contrast the proposed purely-decentralized controller with ones that use communication of memory variables. Several further remarks about the control architecture, including its use in pole-placement design, are also made. Finally, an example is presented to illustrate the design.
The controller design that we propose depends critically on a classical result on stabilizing a matrix via diagonal scaling, that was developed originally by Fisher and Fuller for numerical-computation applications in the 1950s (and re-discovered independently by Corfmat and Morse in 1973 [18], and yet again by our group in 2006 [12], with static-decentralized-control-design applications in mind). For the reader's convenience, let us present the result here as a lemma:
Lemma 1: If an $n \times n$ matrix $B$ has a nested sequence of $n$ principal submatrices all of full rank, then an $n \times n$ diagonal matrix $J$ can be found such that the eigenvalues of $JB$ are in the open left-half plane.

Remark: The matrix $B$ has a nested sequence of $n$ principal submatrices of full rank if it has a $1 \times 1$ (principal) submatrix of full rank that is a principal submatrix of a $2 \times 2$ principal submatrix of full rank, which is itself a principal submatrix of a $3 \times 3$ principal submatrix of full rank, and so forth.
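Lemma 1 is an existence result; its classical proofs are constructive, but even a crude numerical search illustrates it. The sketch below is an illustration only (it samples diagonal scalings at random rather than following the Fisher-Fuller construction, and the matrix at the end is an arbitrary example with full-rank nested leading principal submatrices):

```python
import numpy as np

def search_stabilizing_diagonal(B, trials=20000, seed=0):
    """Random-search illustration of Lemma 1: look for a diagonal J such that all
    eigenvalues of J @ B lie in the open left-half plane.  The lemma itself is proved
    constructively; sampling is used here purely for illustration."""
    rng = np.random.default_rng(seed)
    n = B.shape[0]
    best_J, best_abscissa = None, np.inf
    for _ in range(trials):
        # Signed gains spread over several orders of magnitude.
        diag = rng.choice([-1.0, 1.0], size=n) * 10.0 ** rng.uniform(-3.0, 1.0, size=n)
        J = np.diag(diag)
        abscissa = np.max(np.linalg.eigvals(J @ B).real)   # spectral abscissa of J*B
        if abscissa < best_abscissa:
            best_J, best_abscissa = J, abscissa
    return best_J, best_abscissa    # stabilizing exactly when best_abscissa < 0

# A matrix whose nested leading principal submatrices are all full rank:
B = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])
J, a = search_stabilizing_diagonal(B)
print("achieved spectral abscissa:", a)
```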
Let us now introduce the special control architecture that
we will use for stabilization of the double-integrator network.
The proposed control architecture for each agent comprises
a linear pre-compensator of order 2 (i.e., a pre-compensator
consisting of two integrations, which augment the state by
two memory variables) along with a linear feedback of the
local output and its first two derivatives. Specifically, we
propose using a controller of the following form for each
agent:
$\dot{u}_i = v_i$
$\dot{v}_i = z k_i y_i + \beta k_i \dot{y}_i + \gamma k_i \ddot{y}_i + \delta_i u_i + \alpha v_i$   (6)

where the GDIN inputs $u_i(t)$ and the signals $v_i(t)$ are controller memory (state) variables, $k_i$ and $\delta_i$ are gain parameters that may be different for each agent, and $z$, $\alpha$, $\beta$, and $\gamma$ are scalar gain parameters. We note that the proposed
control architecture is fully decentralized. We also note that
the proposed control uses derivatives of the output, and so
cannot be directly implemented as a linear state-space system
in the specified architecture. However, we will later check
that its transfer function is proper, and hence verify that a
state-space implementation is possible.
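For later reference, the "little algebra" behind this properness claim can be spelled out (this intermediate step is added here for convenience, using the gain names as written in (6)): taking Laplace transforms of (6) with zero initial conditions gives $sU_i = V_i$ and $sV_i = k_i(\gamma s^2 + \beta s + z) Y_i + \delta_i U_i + \alpha V_i$, so that $(s^2 - \alpha s - \delta_i) U_i(s) = k_i(\gamma s^2 + \beta s + z) Y_i(s)$. The numerator and denominator have equal degree, i.e. the controller transfer function is proper, as verified in (13) below.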
Let us now present and prove the key result of this section,


that a controller in the above architecture can be designed to stabilize the GDIN in full generality, i.e. whenever $G$ has full rank and the full agent model is observable ($c_1 \ne 0$).
For convenience in presentation, we limit ourselves to the case that $c_2 \ne 0$ (the case where the full agent model has an invariant zero), noting that the case where $c_2 = 0$ has already been addressed in the literature [9]. The design is achieved by applying a time-scale separation; specifically, the closed-loop poles are placed at three time scales. Here is the key
result; the design is made explicit in the proof:
Theorem 2: Consider a GDIN with full rank G, and c1 , c2
non-zero. A controller of the form (6) can be designed so
that the poles of the closed-loop system are all in the OLHP.
Proof: The state of the closed-loop dynamics is specified by the 4n-element vector $\begin{bmatrix} x^T & u^T & v^T \end{bmatrix}^T$, where $v = [v_1, \ldots, v_n]^T$. In these coordinates, the state matrix of the closed-loop system is:

$A_{cl} = \begin{bmatrix} 0 & I_n & 0 & 0 \\ 0 & 0 & I_n & 0 \\ 0 & 0 & 0 & I_n \\ z c_1 K G & z c_2 K G + \beta c_1 K G & (\beta c_2 + \gamma c_1) K (G + \hat{b}) & \gamma c_2 K G + \alpha I \end{bmatrix},$   (7)

where $K = \mathrm{diag}(k_1, \ldots, k_n)$, $\hat{b} = \frac{1}{\beta c_2 + \gamma c_1} K^{-1} \Delta$, and $\Delta = \mathrm{diag}(\delta_1, \ldots, \delta_n)$. Here, we have chosen to phrase the state matrix in terms of $\hat{b}$ rather than $\Delta$ for convenience in design; the gains $\delta_i$ can be computed from $\hat{b}$.
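As a sanity check on (7), the closed-loop matrix can be assembled directly in a few lines. The sketch below is illustrative only (numpy-based, with the Greek gain names used in (6)-(7) above); it is not code from the original paper.

```python
import numpy as np

def closed_loop_matrix(G, c1, c2, k, delta, alpha, beta, gamma, z):
    """Assemble the closed-loop state matrix A_cl of Equation (7).

    k and delta are length-n arrays holding the per-agent gains k_i and delta_i;
    alpha, beta, gamma and z are the scalar gains of controller (6)."""
    n = G.shape[0]
    K, Delta = np.diag(k), np.diag(delta)
    Z, I = np.zeros((n, n)), np.eye(n)
    row4 = [z * c1 * K @ G,
            z * c2 * K @ G + beta * c1 * K @ G,
            (beta * c2 + gamma * c1) * K @ G + Delta,   # equals (beta*c2+gamma*c1)*K*(G + b_hat)
            gamma * c2 * K @ G + alpha * I]
    return np.block([[Z, I, Z, Z],
                     [Z, Z, I, Z],
                     [Z, Z, Z, I],
                     row4])

# Example usage (parameters as in the worked example at the end of the paper,
# with signs as reconstructed there):
# np.linalg.eigvals(closed_loop_matrix(G, 1.0, -1.0, k=[-1.0, -1.0],
#     delta=[-3e-5, -3e-5], alpha=-1.0, beta=2e-9, gamma=1e-3, z=1e-15))
```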
To commence the design, we choose $\alpha = -1$. Also, we claim that $K$ and $\hat{b}$ can be designed so that: 1) the eigenvalues of $c_1 K(G + \hat{b})$ are in the OLHP, 2) the maximum magnitude among the eigenvalues of $K(G + \hat{b})$ is 1, and 3) $\hat{b}$ is small compared to $G$ in the sense that its largest-magnitude entry is much smaller than the smallest-magnitude eigenvalue of $G$ (say, less by a scale factor $f$, where $f \ll 1$). To show that this is possible, consider $\hat{b} = cI$, where $c$ is a small constant. We claim that there exists $c^* > 0$ such that, for all $c \in (0, c^*]$, all principal submatrices of $G + \hat{b} = G + cI$ are of full rank. To see this, consider $G_q + cI$, where $G_q$ is a principal submatrix of $G$ (and the identity matrix $I$ is of commensurate dimension); this quantity is not of full rank only if $-c$ is an eigenvalue of $G_q$. However, since each principal submatrix $G_q$ has a finite number of eigenvalues, and $G$ only has a finite number of principal submatrices, all principal submatrices of $G + cI$ have full rank except for a finite set of values $c$. It immediately follows that all principal submatrices of $G + cI$ have full rank for $c \in (0, c^*]$, for some $c^*$. Choosing sufficiently small $c$ in this range, we immediately obtain a $\hat{b}$ that is small compared to $G$ in the sense that its largest-magnitude entry is much smaller than the smallest-magnitude eigenvalue of $G$. Further, for this $\hat{b}$, $K$ can be designed so that the eigenvalues of $c_1 K(G + \hat{b})$
are in the OLHP based on Lemma 1, since $K(G + \hat{b})$ has a nested sequence of $n$ principal submatrices of full rank. Finally, through a scaling of $K$, the maximum magnitude among the eigenvalues of $K(G + \hat{b})$ can certainly be made 1. For the design of $K$ and $\hat{b}$ that we have obtained, let us consider the real parts of the eigenvalues of $K(G + \hat{b})$. Let us denote the minimum among the absolute values of these real parts (i.e., the minimum distance of an eigenvalue from the $j\omega$-axis) as $\rho$.

We are now ready to specify the full time-scale design. To do so, let us define a real scalar time-scale parameter $\epsilon$, where $0 < \epsilon \ll \rho$. The remaining controller parameters are defined in terms of $\epsilon$, as follows: $\gamma = \epsilon$, $\beta = 2\epsilon^3$, and $z = \epsilon^5$. For this choice of gains, let us characterize the eigenvalues of the closed-loop system matrix $A_{cl}$.

To begin the eigen-analysis, notice that the lower-right block of $A_{cl}$ is equal to $-I$ plus a perturbation of order $\epsilon$ ($O(\epsilon)$), and hence its eigenvalues are within $O(\epsilon)$ of $-1$. Through a classical time-scale transformation, it is easy to check that (upon transformation to the new coordinates) the remainder of the closed-loop state matrix constitutes a small perturbation to the bottom-right block, and hence the closed-loop eigenvalues (to within a small perturbation) are the union of the eigenvalues of this bottom-right block and the following Schur form:

$A_{cl1} = \begin{bmatrix} 0 & I & 0 \\ 0 & 0 & I \\ 0 & 0 & 0 \end{bmatrix} - \begin{bmatrix} 0 \\ 0 \\ I \end{bmatrix} (\gamma c_2 K G + \alpha I)^{-1} H,$   (8)

where

$H = \begin{bmatrix} z c_1 K G & z c_2 K G + \beta c_1 K G & (\beta c_2 + \gamma c_1) K (G + \hat{b}) \end{bmatrix}.$   (9)
Simplifying the Schur form and eliminating terms of lower order from each block on the last row, we obtain that the Schur form (to within a small perturbation) is

$A_{cl1} = \begin{bmatrix} 0 & I & 0 \\ 0 & 0 & I \\ z c_1 K G & \beta c_1 K G & \gamma c_1 K (G + \hat{b}) \end{bmatrix}.$   (10)
It remains to characterize the eigenvalues of Acl1 , so as
to characterize all eigenvalues of the closed-loop system.
Let us now focus on the bottom-right block of Acl1 . By
construction, the eigenvalues of this bottom-right block are
in the OLHP, and further they are $O(\epsilon)$. Again, it is easy to
check that, upon a classical time-scale state transformation,
the remainder of the matrix Acl1 constitutes a small perturbation of the lower-right block. Hence, the eigenvalues
are the union of those of the lower-right block and those of the following Schur form:

$A_{cl2} = \begin{bmatrix} 0 & I \\ 0 & 0 \end{bmatrix} - \begin{bmatrix} 0 \\ I \end{bmatrix} \left( \gamma c_1 K (G + \hat{b}) \right)^{-1} \begin{bmatrix} z c_1 K G & \beta c_1 K G \end{bmatrix}.$   (11)

To simplify this expression, we recall that $\hat{b} = cI$ has been designed to be small compared to the eigenvalues of $G$ (by a scale factor of $f$), and hence $(\gamma c_1 K (G + \hat{b}))^{-1}$ is a


perturbation (of order $f$) from $(\gamma c_1 K G)^{-1}$. Using this fact


to simplify $A_{cl2}$, we obtain:

$A_{cl2} = \begin{bmatrix} 0 & I \\ -\frac{z}{\gamma} I & -\frac{\beta}{\gamma} I \end{bmatrix},$   (12)
where we have excluded lower-order contributions to each block on the bottom row. Substituting for $z$, $\beta$, and $\gamma$ and re-arranging $A_{cl2}$, we immediately find that the eigenvalues of $A_{cl2}$ are (arbitrarily close to) those of the $2 \times 2$ matrix $\begin{bmatrix} 0 & 1 \\ -\epsilon^4 & -2\epsilon^2 \end{bmatrix}$, or equivalently the roots of $s^2 + 2\epsilon^2 s + \epsilon^4 = (s + \epsilon^2)^2$. Thus, all $2n$ eigenvalues of $A_{cl2}$ are (approximately) equal to $-\epsilon^2$. We have thus verified that all $4n$ eigenvalues of $A_{cl}$ are in the OLHP, for the presented design.

We have thus shown that a second-order pre-compensator together with multi-derivative feedback for each agent can be used for stabilization of the GDIN. Finally, let us verify that the proposed control architecture admits implementation as a state-space feedback controller. To do so, let us compute the transfer function for each agent's full proposed controller (i.e., the transfer function from $y_i$ to $u_i$). With a little algebra, we obtain that the transfer function is

$\dfrac{U_i(s)}{Y_i(s)} = \dfrac{\gamma k_i s^2 + \beta k_i s + z k_i}{s^2 - \alpha s - \delta_i}.$   (13)

Since the controller's transfer function is proper, it follows immediately that a state-space implementation is possible.
Thus, our result (together with existing results for the basic
DIN) allows design of second-order linear state-space controllers for the GDIN in full generality, as formalized in the
following theorem:
Theorem 3: Consider a GDIN for which the pair $(c, A)$ is observable (i.e., $c_1 \ne 0$) and $G$ has full rank. Second-order linear state-space controllers can be designed for each
agent so that the closed-loop dynamics of the GDIN are
asymptotically stable.
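Theorem 3 rests on the properness of (13); a state-space realization can be produced mechanically, for instance with scipy. The sketch below is an illustration only (the parameter values are arbitrary and the Greek gain names follow (6) above); it is not code from the original paper.

```python
import numpy as np
from scipy.signal import tf2ss

def controller_realization(k_i, delta_i, alpha, beta, gamma, z):
    """Second-order state-space realization of the per-agent controller (13),
    U_i(s)/Y_i(s) = k_i*(gamma*s^2 + beta*s + z) / (s^2 - alpha*s - delta_i)."""
    num = [k_i * gamma, k_i * beta, k_i * z]
    den = [1.0, -alpha, -delta_i]
    return tf2ss(num, den)   # proper (relative degree 0) -> 2 states plus feedthrough

# Arbitrary illustrative gains; any choice with the structure above yields a 2-state realization.
A, B, C, D = controller_realization(k_i=1.0, delta_i=-0.1, alpha=-1.0,
                                    beta=0.002, gamma=0.1, z=1e-5)
print(A.shape, D)   # (2, 2) and nonzero feedthrough D = k_i*gamma
```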
Several remarks about the above result are worthwhile:

1) We note that the design uses three time scales. The fastest time-scale is imposed by the controller to permit design even when the full local dynamics is non-minimum phase. The intermediate time-scale is governed by the network's graph matrix G, but incorporates a feedback of the controller's internal variables that allows placement of all of these eigenvalues in the OLHP. The intermediate time-scale design achieves an approximate inversion of the network sensing interaction for arbitrary full-rank G, thus permitting design of the slow-time-scale dynamics for stabilization.

2) The design can be straightforwardly modified so that the 2n slowest eigenvalues are placed in groups of n at two desired locations in the complex plane. We refer the reader to [9] for further discussion of such group-pole-placement design.

3) The controller applied at each network agent has modest complexity: two memory variables are used.

4) This study resolves the gap between purely decentralized controller designs and communication-allowing designs, for the GDIN. It remains an open question, however, if the presented design philosophy can be adapted to more complex multi-agent systems. Much further study is also needed regarding adapting the design to overcome actuator saturation, delay, and topological variation. We have developed some preliminary simulations suggesting that a scaling of the controller gains can be used to permit stabilization under actuator saturation; however, further effort is needed to verify the result. We stress that communications-allowing designs may have many advantages over the purely-decentralized design presented here, but our design does provide an alternative in the case that communication of memory variables in accordance with the network graph is not possible.

5) Either the decentralized controller design introduced here or the one given in [10] can be used when the full agent model is minimum phase. For the NMPDIN, however, only the design presented here is stabilizing.

6) It is worth stressing that, although time-scale separation techniques are very common in controller design, high-gain feedback of an output derivative of order equal to the relative degree of the plant is uncommon. Further study is needed to evaluate e.g. the external stability and robustness of such designs; however, we believe that the design presented here is a promising first step. We also stress that such a high-gain solution is needed only if the agent model is non-minimum phase and the graph matrix has eigenvalues with both negative and positive real parts.

Example

Let us consider stabilization of a NMPDIN with $G = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}$, $c_1 = 1$, and $c_2 = -1$. We note that the eigenvalues of the topology matrix $G$ in this case cannot be placed in a single half plane through diagonal scaling, so previously-developed purely-decentralized control schemes cannot be used. In fact, this NMPDIN is particularly difficult to control, in that the two agents have no local information (in addition to being non-minimum phase). Using the design method developed above, we obtain the following stabilizing controller design: $k_1 = k_2 = -1$, $\alpha = -1$, $\gamma = \epsilon = 10^{-3}$, $\delta_1 = \delta_2 = -3 \times 10^{-5}$, $\beta = 2 \times 10^{-9}$, and $z = 10^{-15}$.
For this design, we would expect the 8 eigenvalues of the closed-loop system to be in the OLHP, at three time scales: two eigenvalues should be near $-1$, two eigenvalues should have magnitudes near $10^{-3}$ (and real parts significantly larger in magnitude than $10^{-6}$), and four eigenvalues should be near $-10^{-6}$. In fact, the eight eigenvalues of the closed-loop system are: $-1.00 \pm j2.00\mathrm{E}{-3}$, $-2.60\mathrm{E}{-5} \pm j1.00\mathrm{E}{-3}$, $-1.12\mathrm{E}{-6} \pm j1.5\mathrm{E}{-7}$, and $-8.82\mathrm{E}{-7} \pm j9.5\mathrm{E}{-8}$ (where E represents the base-10 exponent). We note that the two modes corresponding to the middle time scale are quite oscillatory: this is not surprising given that neither agent has local measurements.
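The diagonal-scaling obstruction invoked at the start of this example is easy to confirm numerically. The brief sketch below is an illustration only (not from the paper): it sweeps diagonal scalings K and reports the best achievable spectral abscissa of KG.

```python
import numpy as np

G = np.array([[0.0, 1.0],
              [-1.0, 0.0]])   # topology matrix of the example above

rng = np.random.default_rng(0)
best = np.inf
for _ in range(10000):
    K = np.diag(rng.uniform(-10.0, 10.0, size=2))
    best = min(best, np.max(np.linalg.eigvals(K @ G).real))

# For K = diag(k1, k2), K @ G = [[0, k1], [-k2, 0]] has eigenvalues +/- sqrt(-k1*k2):
# a symmetric real pair or a purely imaginary pair, so the spectral abscissa cannot
# be made negative; the sweep should report a value that is zero up to rounding error.
print("best spectral abscissa over sampled diagonal scalings:", best)
```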


REFERENCES

[1] W. Ren and R. Beard, Distributed consensus in multi-vehicle cooperative control: theory and applications, Springer: New York, 2008.
[2] S. Roy, A. Saberi, and A. Stoorvogel, eds., International Journal of Robust and Nonlinear Control, Special Issue on Communicating-Agent Networks, vol. 17, no. 10-11, Dec. 2006.
[3] L. Xiao and S. Boyd, Fast linear iterations for distributed averaging, Systems and Control Letters, vol. 53, no. 1, pp. 65-78, Sep. 2004.
[4] T. Yang, S. Roy, Y. Wan, and A. Saberi, Constructing consensus controllers for networks with general linear agents, International Journal of Robust and Nonlinear Control, vol. 21, no. 11, pp. 1237-1256, Jul. 2011.
[5] Y. Cao, W. Yu, W. Ren, and G. Chen, An overview of recent progress in the study of distributed multi-agent coordination, arXiv:1207.3231.
[6] Z. Li, Z. Duan, G. Chen, and L. Huang, Consensus of multi-agent systems and synchronization of networks: a unified viewpoint, IEEE Transactions on Circuits and Systems I, vol. 57, no. 1, Jan. 2010.
[7] H. F. Grip, T. Yang, A. Saberi, and A. A. Stoorvogel, Output synchronization for heterogeneous networks of non-introspective agents.
[8] X. Wang, A. Saberi, A. Stoorvogel, H. F. Grip, and T. Yang, Consensus in the network with uniform constant communication delay, submitted to Automatica.
[9] Y. Wan, S. Roy, A. Saberi, and A. Stoorvogel, The design of multi-lead-compensators for stabilization and pole placement in double-integrator networks, IEEE Transactions on Automatic Control, vol. 55, no. 12, pp. 2870-2875.
[10] Y. Wan, S. Roy, A. Saberi, and A. Stoorvogel, A multiple-derivative and multiple-delay paradigm for decentralized control: uniform rank systems, Dynamics of Continuous, Discrete, and Impulsive Systems, Special Issue in Honor of H. Khalil's 60th Birthday, vol. 17, no. 6, pp. 883-907, 2010.
[11] S. H. Wang and E. Davison, On the stabilization of decentralized control systems, IEEE Transactions on Automatic Control, vol. 18, no. 5, pp. 473-478, 1973.
[12] S. Roy, A. Saberi, and K. Herlugson, Formation and alignment of distributed sensing agents with double-integrator dynamics and actuator saturation, in Sensor Network Operations (S. Phoha, T. Laporta, and C. Griffin, eds.), 2006.
[13] S. Roy, L. Chen, and A. Saberi, On the information flow required for tracking control in networks of mobile sensing agents, IEEE Transactions on Mobile Computing, vol. 10, no. 4, pp. 519-531, Apr. 2011.
[14] K. Mathia, G. Lafferriere, and A. Williams, Cooperative control of unmanned vehicle formations, in Proceedings of the Euro UAV 2006 Conference, Paris, France, Jun. 2006.
[15] Y. Wan, S. Roy, A. Saberi, and B. Lesieutre, A stochastic automaton-based algorithm for flexible and distributed network partitioning, in Proceedings of the 2005 IEEE Swarm Intelligence Symposium, Pasadena, CA, Jun. 2005.
[16] J. Zhao, D. J. Hill, and T. Liu, Passivity-based output synchronization of dynamical networks with non-identical nodes, in Proceedings of the IEEE Conference on Decision and Control, Atlanta, GA, Dec. 2010.
[17] Z. Gong and M. Aldeen, Stabilization of decentralized control systems, Journal of Mathematical Systems, Estimation, and Control, vol. 7, no. 1, pp. 1-16, 1997.
[18] J. Corfmat and A. Morse, Stabilization with decentralized feedback control, IEEE Transactions on Automatic Control, vol. 18, no. 6, pp. 679-682, 1973.

