
Consensus Algorithms

Flocking and Swarms


Bachelor's Thesis, SA104X

Christopher Mårtensson    Linus Sjövall


880706-7478 890414-0517
Email: cmarte@kth.se    Email: sjovall@kth.se
Phone: 0739996911    Phone: 0730466929
Engineering Physics    Vehicle Engineering

Supervisors:
Ulf Jönsson, Xiaoming Hu, Yuecheng Yang

Department of Mathematics, Optimization and Systems Theory


Royal Institute of Technology
Stockholm, Sweden

May 11, 2011


Abstract
An interesting field of mathematics is the study of swarming and flocking. Using graph theory, one can describe a system of agents that transfer information between each other. With the help of certain algorithms it is possible to update the agents' information in order to reach consensus between the agents. If the information relates to the position, the velocity, or the acceleration of each agent, a behaviour similar to that of animal flocks or insect swarms is observed. Several other applications also exist, for example in systems of multiple robots where no central coordination is possible or simply not desired.
In this paper, different algorithms used to change the agents' information states will be studied in order to determine the requirements under which the entire set of agents achieves consensus. First, the case where agents receive information from a non-changing set of agents will be studied. Specifically, a particular algorithm will be considered in which each agent's information is determined by a linear function of the information states of all agents from which information is received. A requirement for this particular algorithm to reach consensus is that every agent both receives information from and sends information to every other agent, directly or indirectly through other agents. If all information transfers are weighted equally, the consensus achieved will be the average of all initial information states. Consensus can also be reached under the looser condition that there exists an agent that sends information to every other agent, directly or indirectly.
The changes in the system's behaviour under different consensus algorithms will be discussed, and computer simulations of these will be provided. An interesting case is when the information (often referring to location, velocity or acceleration) is received only from agents within a given distance, so that information is received from different agents at different times. This results in nonlinear algorithms, and mostly simulations and interpretations will be given. An observation is that whether consensus is achieved or not depends partially on the initial information states of the agents and on the maximum distance for information transfer.

Sammanfattning
An interesting area of mathematics is the description of the phenomenon of flocks and swarms. Using graph theory, one can describe a system of agents that send information between one another, together with algorithms that describe how each agent's information should be updated so that consensus is reached.
If the information describes a position or a movement in space, a behaviour resembling that of animal flocks or insect swarms can be observed. Many other areas of application also exist, for example in systems of robots where central control is missing and internal decision making is desired.
In this report, different algorithms for updating an agent's state will be examined in order to determine the requirements for consensus to be reached. The first part treats a simpler case where each agent receives information from a non-changing set of agents. Specifically, an algorithm will be studied in which an agent's state is determined by a linear function of the states of the agents from which information is received. A requirement for this algorithm to reach consensus is that every agent both sends information to and receives information from all other agents, directly or indirectly via other agents. If all information transfers are weighted equally, all agents will reach the average of the agents' initial values. Consensus can also be reached under less restrictive conditions, if there exists an agent that sends information to all other nodes (directly or indirectly).
Changes in the system's behaviour under different update algorithms will be studied, and computer simulations of these phenomena will be given. An interesting case is when the information (often position, velocity or acceleration) can only be received from the agents within a given distance; the set of agents with which information is exchanged then changes over time. This results in nonlinear algorithms, and primarily simulations and interpretations of these will be given. An observation is that whether consensus is reached or not depends strongly on the initial density of the agents and on the maximum distance at which information transfer can take place.

Acknowledgements
We would like to thank our supervisors, Ulf Jönsson, Xiaoming Hu, and Yuecheng Yang, for all the time and help they have given us in the work on this bachelor's thesis.


Contents
1 Introduction 5

2 Aim 5

3 Graph Theory 5
3.1 The Definition of a Graph . . . . . . . . . . . . . . . . . . . . . . 6
3.2 The Adjacency Matrix . . . . . . . . . . . . . . . . . . . . . . . . 6
3.2.1 Unweighted Adjacency Matrix . . . . . . . . . . . . . . . 7
3.2.2 Weighted Adjacency Matrix . . . . . . . . . . . . . . . . . 7
3.3 The Laplacian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
3.4 Results from Matrix Theory . . . . . . . . . . . . . . . . . . . . . 8
3.4.1 The Gershgorin Circle Theorem . . . . . . . . . . . . . . . 8
3.4.2 Perron-Frobenius Theorem . . . . . . . . . . . . . . . . . 8

4 Linear Consensus in Continuous Time 9


4.1 The Consensus Algorithm . . . . . . . . . . . . . . . . . . . . . . 9
4.2 Convergence of a Linear Differential Equation . . . . . . . . . . . 9
4.3 Eigenvalues of Laplacian . . . . . . . . . . . . . . . . . . . . . . . 11
4.4 Convergence of the Consensus Algorithm . . . . . . . . . . . . . 11
4.5 Simulations and Results . . . . . . . . . . . . . . . . . . . . . . . 14

5 Swarms and Flocking 14


5.1 Edges based on distance . . . . . . . . . . . . . . . . . . . . . . . 14
5.1.1 Changes to the Update Law . . . . . . . . . . . . . . . . . 14
5.1.2 Results and Simulations . . . . . . . . . . . . . . . . . . . 16
5.2 Collision Avoidance . . . . . . . . . . . . . . . . . . . . . . . . . 16
5.2.1 Changes to the Update Law . . . . . . . . . . . . . . . . . 16
5.2.2 Simulations . . . . . . . . . . . . . . . . . . . . . . . . . . 20
5.3 Leaders . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
5.3.1 The Dynamics of Leaders . . . . . . . . . . . . . . . . . . 23
5.3.2 Simulations . . . . . . . . . . . . . . . . . . . . . . . . . . 23

6 Discussion 24

References 26

Appendices 27

A Java Application 27

B Matlab Simulations 33


1 Introduction
Distributed consensus is a concept that is used in a wide range of applications,
from human decision making to advanced robotic systems. The basic principle
of distributed consensus is that agents in a group communicate with each other
in order to reach a consensus, rather than each agent making decisions for itself
or having a universal source giving instructions to all agents. In more detail,
each agent receives information from a set of other agents in the group. The
agents then adjust their own information state depending on the information
received from other agents with the goal to reach a consensus, an agreement,
amongst all agents in the group. The information that is observed differs from
application to application. In the case of swarming of insects or flocking of
animals, the information of interest is the position and velocity of each agent.
In the example of a network of temperature sensors, the relevant information
would be the measured temperature.
The manner in which each agent's information changes depending on the
information input from other agents can be described by a consensus algorithm.
The consensus algorithm varies depending on the application and the model. In
the case of swarming and flocking, the consensus algorithm may be a function
of the position and velocity of each agent. If, additionally, the velocity, position
or acceleration of each agent is given by the consensus algorithm, a model for
the behaviour of agents in flocks or swarms may be obtained.
Graph theory can be used to describe the communication links between the agents in a group, and it supports the mathematical analysis of different consensus algorithms.

2 Aim
The aim of this bachelor's thesis is to present a rigorous and thorough investigation of a certain basic consensus algorithm associated with a non-changing graph, eventually providing a sufficient condition for consensus, as well as to consider other consensus algorithms more suitable for the modelling of swarms and flocks and to carry out consensus studies of these.

3 Graph Theory
Let us observe a group of agents transferring information between one another
through some sort of connection. The connection between some agents may be
of such nature that information is only sent one way, while other connections
are such that information transfers are sent both ways over that connection.
An example of this is shown in Figure 1, where the arrows represent informa-
tion transfers. In the figure some connections send information both ways and
some are only one-way connections. The layout of the connections, including
the direction of the information transfers, is known as the information trans-
fer topology. One might desire to weigh the information received from certain
agents differently, as the information held by some agents may be of greater
importance or greater reliability than others. A way of describing the informa-
tion transfer topology and the weight of each connection for a network of agents
is through the use of graph theory. We will see that the use of graph theory


Figure 1: Illustration of a graph.

enables us to mathematically describe the network's properties, and will be of use to us in tackling future problems.

3.1 The Definition of a Graph


A graph is defined as $G = (N, E)$, where $N = \{1, 2, \ldots, n\}$ and $E \subseteq N \times N$. In graph theory each agent can be represented by a node. With $n$ agents, and therefore $n$ nodes, the set $V = \{v_i : i \in N\}$ contains all nodes $v_i$, where the node indices belong to the finite index set $N$.
The elements of $E$ are called the edges of the graph $G$; $E$ is the set of connections between the nodes, more specifically the set of all pairs $(i, j)$ such that node $j$ receives information from node $i$. We use $N_i$ to denote the neighbouring nodes of node $i$, i.e.
$$N_i = \{ j \in N : (j, i) \in E \}.$$
If for a certain graph we have that $(j, i) \in E$ for every $(i, j) \in E$, the graph is called an undirected graph. In the case of consensus among agents this corresponds to information travelling both ways over a connection between two agents. If all of the arrows in Figure 1 pointed both ways, it would describe an undirected graph. If a graph is not undirected it is referred to as a directed graph, or digraph. The graph illustrated in Figure 1 is a directed graph. If the graph is such that for any node $i$ there exists a directed path to any other node $j$, the graph is called a strongly connected graph. If there exists a certain node $i$ such that any other node $j$ can be reached from $i$ via a directed path, the set of such paths forms a spanning tree and the graph is said to contain a spanning tree.
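As a small illustration of these notions (our own sketch, not part of the thesis; we use Python here rather than the Matlab and Java of the appendices, and the helper names are our own), a directed graph can be stored as an edge set, with strong connectivity checked by a reachability search from every node:

```python
# Sketch (assumed helper names): a directed graph G = (N, E) as an edge set,
# with strong connectivity checked by reachability from every node.

def neighbours_out(E, i):
    """Nodes that receive information from node i, i.e. all j with (i, j) in E."""
    return {j for (a, j) in E if a == i}

def reachable(N, E, start):
    """All nodes reachable from `start` along directed edges."""
    seen, stack = {start}, [start]
    while stack:
        i = stack.pop()
        for j in neighbours_out(E, i):
            if j not in seen:
                seen.add(j)
                stack.append(j)
    return seen

def strongly_connected(N, E):
    """True if every node can reach every other node via a directed path."""
    return all(reachable(N, E, i) == set(N) for i in N)

N = {1, 2, 3, 4}
E_cycle = {(1, 2), (2, 3), (3, 4), (4, 1)}   # a directed cycle
E_chain = {(1, 2), (2, 3), (3, 4)}           # node 4 cannot reach node 1
```

For `E_chain`, node 1 reaches every other node, so the graph contains a spanning tree rooted at node 1 even though it is not strongly connected.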

3.2 The Adjacency Matrix


There are two matrices that play a central role in graph theory and that further describe the manner in which information is transferred between nodes. One is the adjacency matrix, which corresponds to a certain graph. The definition of the adjacency matrix varies depending on whether or not the information transfers between nodes are weighted differently.


3.2.1 Unweighted Adjacency Matrix


The adjacency matrix $A = [a_{ij}]_{i,j=1}^{n}$ corresponding to a certain graph $G$ with unweighted information transfers has the elements
$$a_{ij} = \begin{cases} 1 & \text{if } (j, i) \in E \\ 0 & \text{otherwise.} \end{cases}$$

3.2.2 Weighted Adjacency Matrix

The adjacency matrix $A = [a_{ij}]_{i,j=1}^{n}$ corresponding to a certain graph $G$ with weighted information transfers has the elements
$$a_{ij} = \begin{cases} \alpha_{ij} > 0 & \text{if } (j, i) \in E \\ 0 & \text{otherwise,} \end{cases}$$
where $\alpha_{ij}$ is the weight of the edge $(j, i)$.

By these definitions, if $G$ is undirected then $A$ is symmetric, that is, $A = A^T$.

3.3 The Laplacian


Another matrix associated with a certain graph $G$ is the Laplacian. The Laplacian $L = [l_{ij}]_{i,j=1}^{n}$ is defined as
$$l_{ij} = \begin{cases} \sum_{j \neq i} a_{ij} & i = j \\ -a_{ij} & i \neq j, \end{cases} \quad (1)$$
where the $a_{ij}$ are the elements of the unweighted or weighted adjacency matrix. By this definition, if $G$ is undirected then $L$ is symmetric, that is, $L = L^T$. Additionally, we can provide the following valuable property of the Laplacian.

Theorem 1. Any vector $u \in \operatorname{span}(\mathbf{1})$, where $\mathbf{1} = (1, 1, \ldots, 1)^T \in \mathbb{R}^n$, satisfies the equation $Lu = 0$.

Proof. We have that
$$u \in \operatorname{span}(\mathbf{1}) \implies u = \alpha \mathbf{1}$$
for some constant $\alpha$. We then get
$$Lu = 0 \iff L(\alpha \mathbf{1}) = 0 \iff \alpha L\mathbf{1} = 0$$
and since $\alpha$ is arbitrary we must only show that $L\mathbf{1} = 0$. From the definition of $L$ we get, for the $i$:th component of $L\mathbf{1}$,
$$[L\mathbf{1}]_i = l_{ii} + \sum_{j \neq i} l_{ij} = \sum_{j \neq i} a_{ij} + \sum_{j \neq i} (-a_{ij}) = 0$$
for all $i \in \{1, 2, \ldots, n\}$. This completes the proof.
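Definition (1) and Theorem 1 are easy to check numerically. The following is a sketch of ours (the weighted digraph is arbitrary), using the convention above that $a_{ij}$ weighs the edge $(j, i)$:

```python
import numpy as np

def laplacian(A):
    """Definition (1): l_ii = sum_{j != i} a_ij, and l_ij = -a_ij for i != j."""
    off = A - np.diag(np.diag(A))        # off-diagonal part of A
    return np.diag(off.sum(axis=1)) - off

# a weighted adjacency matrix for a four-node digraph (weights arbitrary)
A = np.array([[0.0, 2.0, 0.0, 1.0],
              [1.0, 0.0, 3.0, 0.0],
              [0.0, 1.0, 0.0, 1.0],
              [2.0, 0.0, 1.0, 0.0]])
L = laplacian(A)
u = 3.7 * np.ones(4)                     # any u in span(1)
residual = L @ u                         # should vanish by Theorem 1
```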


3.4 Results from Matrix Theory


Here some results from matrix theory will be presented. They provide valuable
information regarding the eigenvalues of a matrix. These results will be of great
interest later when we will be required to study the eigenvalues of the Laplacian
in order to determine the convergence of the consensus algorithms.

3.4.1 The Gershgorin Circle Theorem


The Gershgorin circle theorem, as described in [1], states that all the eigenvalues of a matrix $A$ are located in discs that depend only on the elements $a_{ij}$ of $A$. (Here $A$ denotes an arbitrary matrix, not the adjacency matrix.)

Theorem 2 (Gershgorin circle theorem). Let $a_{ij}$ be the elements of the matrix $A$, and let $R_i$ be defined for each row $i$ of $A$ by
$$R_i = \sum_{j \neq i} |a_{ij}|, \quad 1 \leq i \leq n.$$
Then every eigenvalue of $A$ is located in at least one of the discs centred at $a_{ii}$ with radius $R_i$.
Proof. If $\lambda$ is an eigenvalue of $A$ and $\xi$ is the corresponding eigenvector, then $A\xi = \lambda\xi$. This can be rewritten as $\sum_{j=1}^{n} a_{ij}\xi_j = \lambda \xi_i$ for all $i$, which is equivalent to
$$\sum_{\substack{j=1 \\ j \neq i}}^{n} a_{ij}\xi_j = \lambda \xi_i - a_{ii}\xi_i, \quad \forall i. \quad (2)$$
If $i$ is chosen so that $\xi_i$ is the element of $\xi$ with the largest absolute value, and $\xi \neq 0$, then $\xi_i \neq 0$ and (2) can be rewritten as
$$|\lambda - a_{ii}| = \left| \frac{\sum_{j \neq i} a_{ij}\xi_j}{\xi_i} \right| \leq \frac{\sum_{j \neq i} |a_{ij}||\xi_j|}{|\xi_i|}$$
but since $|\xi_i| \geq |\xi_j|$ for all $j \in \{1, \ldots, n\}$ it follows that
$$|\lambda - a_{ii}| \leq \frac{\sum_{j \neq i} |a_{ij}||\xi_j|}{|\xi_i|} \leq \sum_{\substack{j=1 \\ j \neq i}}^{n} |a_{ij}| = R_i.$$
This means that $\lambda$ lies within a disc centred at $a_{ii}$ with radius $R_i$.
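The theorem lends itself to a quick numerical check. A sketch of ours follows; the matrix here is random and has nothing to do with graphs, matching the remark that $A$ is an arbitrary matrix:

```python
import numpy as np

def gershgorin_discs(A):
    """(centre, radius) pairs: centre a_ii, radius R_i = sum_{j != i} |a_ij|."""
    centres = np.diag(A)
    radii = np.abs(A).sum(axis=1) - np.abs(centres)
    return list(zip(centres, radii))

def in_some_disc(lam, discs, tol=1e-9):
    """True if the (possibly complex) eigenvalue lam lies in at least one disc."""
    return any(abs(lam - c) <= r + tol for c, r in discs)

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6))              # an arbitrary real matrix
discs = gershgorin_discs(A)
eigs = np.linalg.eigvals(A)
all_covered = all(in_some_disc(lam, discs) for lam in eigs)
```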

3.4.2 Perron-Frobenius Theorem


The Perron-Frobenius theorem provides us with a guarantee that the eigenvalue of a certain matrix $A$ with largest modulus is a simple eigenvalue, that is, the eigenvalue has algebraic (and therefore geometric) multiplicity 1. We will use the version of the Perron-Frobenius theorem presented in [3].

Theorem 3 (Perron-Frobenius). Let $A$ be an irreducible nonnegative $n \times n$ matrix. Then the eigenvalue of $A$ with largest modulus, $\rho(A)$, is a simple eigenvalue of $A$.

The Laplacian matrix is irreducible if the corresponding graph is strongly connected.


4 Linear Consensus in Continuous Time


In this chapter we will take a first look at a simple consensus algorithm with a
constant, non-changing graph and we will eventually reach a requirement under
which it is guaranteed that consensus will be achieved using this algorithm.
Moreover, we will determine the final value of the nodes when consensus has
been achieved. We will start by defining the consensus algorithm.

4.1 The Consensus Algorithm


A simple algorithm that aims towards reaching consensus amongst the agents is
$$\dot{x}_i(t) = \sum_{j \in N_i} a_{ij}\,(x_j(t) - x_i(t)), \quad t \geq 0,$$
where $x_i(t)$ is the information state of node $i$ at time $t$, with $x_i(0)$ being the initial state of node $i$. The algorithm can be interpreted as saying that the change of agent $i$'s information is a weighted sum of the differences between the information of the agents connected to agent $i$ and agent $i$'s own information.
With the Laplacian $L$ from (1), one can rewrite the consensus algorithm for all agents in the more compact form
$$\dot{x} = -Lx. \quad (3)$$
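The algorithm can be simulated directly. A sketch of ours (not the thesis's Matlab/Simulink model) using forward-Euler steps of $\dot{x} = -Lx$ on a directed cycle, which is strongly connected and in fact balanced, so all states should approach the average of the initial values:

```python
import numpy as np

# adjacency matrix of a directed cycle: node i receives from node i+1 (mod 4)
A = np.array([[0., 1., 0., 0.],
              [0., 0., 1., 0.],
              [0., 0., 0., 1.],
              [1., 0., 0., 0.]])
L = np.diag(A.sum(axis=1)) - A

x = np.array([1.0, -2.0, 4.0, 0.5])      # initial information states, mean 0.875
d = 0.01
for _ in range(5000):                    # forward-Euler steps of x' = -Lx
    x = x + d * (-L @ x)

spread = x.max() - x.min()               # how far from consensus we are
```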

4.2 Convergence of a Linear Differential Equation


A reasonable question to ask is whether or not the algorithm in (3) converges. Consider a differential equation of the form
$$\dot{x}(t) = M x(t) \quad (4)$$
where $M$ is a constant $n \times n$ matrix. We will express the solution to the above equation and show when the given solution converges. A proof that the solutions are correct and do indeed solve the above equation is given, for example, in [2].
If $\operatorname{rank}(M) = n$ and $M$ is real and symmetric, then $M$ has $n$ linearly independent eigenvectors $\eta_k$. Equation (4) then has a fundamental set of solutions $S = \{\eta_1 e^{\lambda_1 t}, \eta_2 e^{\lambda_2 t}, \ldots, \eta_n e^{\lambda_n t}\}$ and the general solution to (4) is a linear combination of the elements of $S$, that is,
$$x(t) = c_1 \eta_1 e^{\lambda_1 t} + c_2 \eta_2 e^{\lambda_2 t} + \ldots + c_n \eta_n e^{\lambda_n t}$$
for constants $c_k$, $k \in \{1, 2, \ldots, n\}$.
If, on the other hand, $M$ is not symmetric and $\operatorname{rank}(M) < n$, an eigenvalue $\lambda_k$ of $M$ may be repeated, that is, it may have an algebraic multiplicity of $h_k \geq 2$. If this is the case, the eigenvalue may also have a geometric multiplicity of less than $h_k$. In other words, there may be fewer than $h_k$ linearly independent eigenvectors associated with this $\lambda_k$; we let $l_k > 0$ denote the number of missing eigenvectors. This means that we cannot guarantee that $M$'s eigenvectors are all linearly independent, and thus the set $S$ above is not a fundamental set of solutions for the problem (4). The solution to (4) must be modified.


If $LI_k$ is the set of $h_k - l_k$ linearly independent eigenvectors corresponding to the eigenvalue $\lambda_k$ of $M$, we define
$$S_{1,k} = \{ \eta e^{\lambda_k t} : \eta \in LI_k \}.$$
It can be shown that there exist vectors $\eta_{k,m} \neq 0$ such that, with the set
$$S_{2,k} = \left\{ \sum_{i=1}^{m} t^i \eta_{k,i}\, e^{\lambda_k t} : l_k > 0,\; m \in \{1, 2, \ldots, l_k\} \right\},$$
a fundamental set of solutions for (4) is
$$S = \{ S_{1,k} \cup S_{2,k} : k \in \{1, 2, \ldots, N\} \}$$
where $N$ is the number of distinct eigenvalues of $M$. The general solution is then a linear combination of the elements in $S$. With the use of this general solution we state the following theorems:
Theorem 4. Consider equation (4). A sufficient condition for the solution of the equation to converge is that every eigenvalue $\lambda_k$ of $M$ satisfies $\operatorname{Re}\{\lambda_k\} < 0$. An exception can be made if an eigenvalue $\lambda_k$ has equal algebraic and geometric multiplicity; then $\operatorname{Re}\{\lambda_k\} \leq 0$ is sufficient.

Proof. We are interested in the behaviour of the solution as $t \to \infty$. We observe three different cases: $\operatorname{Re}\{\lambda_k\} < 0$, $\operatorname{Re}\{\lambda_k\} = 0$ and $\operatorname{Re}\{\lambda_k\} > 0$.
We see that if $\operatorname{Re}\{\lambda_k\} < 0$, then all of the elements $s \in S_{1,k} \cup S_{2,k}$ tend to 0 as $t$ grows large. If $\operatorname{Re}\{\lambda_k\} = 0$, the elements in $S_{1,k}$ become the elements in $LI_k$ and thus converge. The elements in $S_{2,k}$, on the other hand, diverge as $t \to \infty$. Finally, if $\operatorname{Re}\{\lambda_k\} > 0$, we see that all the elements in both $S_{1,k}$ and $S_{2,k}$ diverge as $t \to \infty$.
For the solution to converge, it is sufficient to require that all of the elements in $S$ converge. Thus we conclude that a sufficient requirement for this is $\operatorname{Re}\{\lambda_k\} < 0$ for all $k \in \{1, 2, \ldots, N\}$. We can allow ourselves the exception $\operatorname{Re}\{\lambda_k\} \leq 0$ for those $\lambda_k$ for which $S_{2,k} = \emptyset$, that is, those $\lambda_k$ with both algebraic and geometric multiplicity $h_k$. This completes the proof.
Corollary 1. If the matrix $M$ in problem (4) has a simple eigenvalue at the origin, and all other eigenvalues have negative real parts, the solution $x(t)$ converges to the one-dimensional nullspace of $M$.

Proof. If $\lambda_k = 0$ is a simple eigenvalue, then $S_{2,k} = \emptyset$. Since all other eigenvalues have negative real parts, all terms in $S$ except $\eta_k e^{\lambda_k t}$, where $\eta_k$ is the eigenvector corresponding to $\lambda_k$, disappear for large $t$. Since $\lambda_k = 0$ we get for the solution $x(t)$ that
$$\lim_{t \to \infty} x(t) = c\, \eta_k$$
for some constant $c$, that is to say, the solution converges to a multiple of $\eta_k$. Since $x = c\eta_k$ satisfies the equation $Mx = 0$, $c\eta_k$ belongs to the nullspace of $M$. Also, since $\lambda_k$ is a simple eigenvalue it has geometric multiplicity 1, and thus $\eta_k$ forms a basis for the nullspace of $M$, which means that the nullspace of $M$ is one-dimensional. This completes the proof.


4.3 Eigenvalues of Laplacian


In order to apply the above results, especially the results of Corollary 1, to our
case with equation (3), it is obvious that we must have more information about
the eigenvalues of L. We will use an idea from [4], using the Perron-Frobenius
and Gershgorin Circle Theorem to satisfy the requirements of Corollary 1. We
state the following theorems. The first theorem provides information about the
possible positions of the eigenvalues using the Gershgorin Circle Theorem (The-
orem 2). The second theorem gives us a requirement under which the eigenvalue
at the origin is simple, using the Perron-Frobenius Theorem (Theorem 3).
Theorem 5. The eigenvalues of $L$ all have positive real parts or lie at the origin.

Proof. From Theorem 2, the Gershgorin circle theorem, we have that all of the eigenvalues of $L$ are located in at least one of the discs centred at $l_{ii}$ with radius $R_i = \sum_{j \neq i} |l_{ij}|$, for all $i \in \{1, 2, \ldots, n\}$. Let us examine these discs. By the definition of $L$ we have
$$l_{ii} = \sum_{j \neq i} a_{ij} = \sum_{j \neq i} |a_{ij}| = \sum_{j \neq i} |l_{ij}| = R_i$$
since $a_{ij} \geq 0$ and $a_{ij} \in \mathbb{R}$. Thus all of the discs are centred at $l_{ii}$ and extend from the origin to $2 l_{ii}$. The disc corresponding to the largest $l_{ii}$ encircles all other discs, and therefore all eigenvalues of $L$ are contained in the disc centred at $\max_i\{l_{ii}\}$ with radius $\max_i\{l_{ii}\}$. Since
$$l_{ii} = \sum_{j \neq i} a_{ij} \geq 0,$$
the largest disc only encircles points at the origin or with positive real parts. Thus all of the eigenvalues of $L$ are either at the origin or have positive real parts. This completes the proof.
Theorem 6. If the graph corresponding to $L$ is strongly connected, the eigenvalue $\lambda = 0$ of $L$ is a simple eigenvalue.

Proof. First we must prove that the matrix $L$ has an eigenvalue at the origin; this, however, is obvious from Theorem 1. We apply Theorem 3 to the matrix $cI - L$, where $I$ is the identity matrix. If the graph corresponding to $L$ is strongly connected, then $L$ is an irreducible matrix. It can be shown that if $L$ is an irreducible matrix, then so is $cI - L$. We choose $c$ large enough so that $cI - L$ is a nonnegative matrix. The theorem then tells us that the largest eigenvalue of $cI - L$, or equivalently the eigenvalue of $L$ located at the origin, is simple. If the zero eigenvalue of $L$ is simple, so is the zero eigenvalue of $-L$. This completes the proof.
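Theorems 5 and 6 can be illustrated numerically. A sketch of ours with a small strongly connected digraph (unit weights, chosen arbitrarily):

```python
import numpy as np

# a_ij = 1 means node i receives from node j; this graph is strongly connected
A = np.array([[0., 1., 1., 0.],
              [0., 0., 1., 0.],
              [0., 0., 0., 1.],
              [1., 1., 0., 0.]])
L = np.diag(A.sum(axis=1)) - A

eigs = np.linalg.eigvals(L)
n_zero = int(np.sum(np.abs(eigs) < 1e-8))                  # eigenvalues at the origin
min_positive_real = min(e.real for e in eigs if abs(e) > 1e-8)
```

For this particular graph the nonzero eigenvalues work out to $2$ and $2 \pm i$, all with positive real part, and the zero eigenvalue is simple, as the theorems predict.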

4.4 Convergence of the Consensus Algorithm


We are now ready to establish a theorem that both guarantees the convergence
of the consensus algorithm given in (3) and that consensus is achieved. The
theorem also states what consensus will be achieved. This theorem is a version
of the theorem presented in [5].


Theorem 7. Consider equation (3). Let the graph $G$ corresponding to the Laplacian $L$ be strongly connected. Then it holds that
i) a consensus is reached for all initial states $x(0)$;
ii) there exists a $\gamma \neq 0$ satisfying $\gamma^T L = 0$, and the consensus achieved is
$$\lim_{t \to \infty} x(t) = \alpha \mathbf{1} = (\alpha, \alpha, \ldots, \alpha)^T \quad (5)$$
where
$$\alpha = \frac{\gamma^T x(0)}{\sum_{i=1}^{n} \gamma_i}. \quad (6)$$

Proof.
i) We begin by showing that the system converges. To use Corollary 1, we observe that the system (3) is the same as the system (4) with $M = -L$. It is clear from the corollary that if we can show that $-L$ has a simple eigenvalue at the origin and all other eigenvalues have negative real parts, the solution converges. Equivalently, we can show that $L$ has a simple eigenvalue at the origin and all other eigenvalues have positive real parts. The fact that all eigenvalues of $L$ have positive real parts or are located at the origin is provided by Theorem 5. The fact that the eigenvalue located at the origin is simple is given by Theorem 6. Thus the requirements of Corollary 1 are met and the system converges.
To show that consensus is achieved, we apply Corollary 1, which states that the system converges to the one-dimensional nullspace of $-L$. From Theorem 1 we have that the vector $\mathbf{1}$ belongs to the nullspace of $L$. If $L\mathbf{1} = 0$ then $-L\mathbf{1} = 0$ as well, and thus $\mathbf{1}$ belongs to the nullspace of $-L$ as well. Since the nullspace of $-L$ is one-dimensional, it is $\operatorname{span}(\mathbf{1})$. Thus the system converges to $\alpha \mathbf{1}$ for some constant $\alpha$, which means that consensus is achieved for any initial state $x(0)$.
ii) First we show that there exists a $\gamma \neq 0$ such that
$$\gamma^T L = 0$$
or, taking the transpose,
$$L^T \gamma = 0.$$
Since the nullspace of $L$ is one-dimensional we have that
$$\operatorname{rank}(L) = n - 1.$$
It can be shown (see for example [7]) that
$$\operatorname{rank}(L) = n - 1 \implies \operatorname{rank}(L^T) = n - 1,$$
thus the nullspace of $L^T$ is one-dimensional, and therefore there exists a vector $\gamma \neq 0$ such that $L^T \gamma = 0$ and $\gamma^T L = 0$.
To show what consensus is achieved, we consider the quantity
$$y(t) = \gamma^T x(t).$$
It holds that $y(t)$ is an invariant quantity, since
$$\dot{y}(t) = -\gamma^T L x(t) = 0.$$
Thus
$$\lim_{t \to \infty} y(t) = y(0)$$
$$\lim_{t \to \infty} \gamma^T x(t) = \gamma^T x(0)$$
$$\alpha\, \gamma^T \mathbf{1} = \gamma^T x(0)$$
$$\alpha = \frac{\gamma^T x(0)}{\sum_{i=1}^{n} \gamma_i}.$$
This completes the proof.
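Theorem 7 can be checked end to end: compute a left null vector $\gamma$ of $L$, predict $\alpha$ from (6), and compare with a simulation of (3). A sketch of ours follows; the weights are arbitrary and the SVD is simply one convenient way to obtain the nullspace of $L^T$:

```python
import numpy as np

A = np.array([[0., 2., 0.],
              [1., 0., 1.],
              [0., 3., 0.]])               # strongly connected, not balanced
L = np.diag(A.sum(axis=1)) - A

_, _, Vt = np.linalg.svd(L.T)              # nullspace of L^T is spanned by the
gamma = Vt[-1]                             # right-singular vector of the
                                           # smallest singular value
x0 = np.array([3.0, -1.0, 2.0])
alpha = (gamma @ x0) / gamma.sum()         # predicted consensus value (6)

x = x0.copy()
d = 0.002
for _ in range(20000):                     # forward-Euler integration of (3)
    x = x - d * (L @ x)
```

For this graph, $\gamma \propto (3, 6, 2)^T$ by hand, so the predicted consensus is $(9 - 6 + 4)/11 = 7/11$, which the simulation reproduces.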
A special case of Theorem 7 can be made for balanced graphs.

Definition 1. A graph $G$ is balanced if
$$\sum_{j \neq i} a_{ij} = \sum_{j \neq i} a_{ji} \quad \forall i,$$
or in other words, the total weight of all edges entering a node is equal to the total weight of all edges exiting the node.
Corollary 2. Consider Theorem 7. If the graph $G$ is also balanced, the consensus reached is
$$\lim_{t \to \infty} x(t) = \alpha \mathbf{1} = (\alpha, \alpha, \ldots, \alpha)^T$$
where
$$\alpha = \frac{1}{n} \sum_{i=1}^{n} x_i(0),$$
i.e., the consensus reached is the average of the initial values.

Proof. From part ii) of Theorem 7, there exists a $\gamma$ satisfying $\gamma^T L = 0$, and the consensus reached depends on $\gamma$. If $\gamma = \mathbf{1}$ we have that
$$\left[ \mathbf{1}^T L \right]_i = \sum_{j} l_{ji} = \sum_{j \neq i} l_{ji} + l_{ii}.$$
Using the definition of the Laplacian (1),
$$\sum_{j \neq i} l_{ji} + l_{ii} = \sum_{j \neq i} (-a_{ji}) + \sum_{j \neq i} a_{ij},$$
and from the fact that the graph $G$ is balanced,
$$\sum_{j \neq i} (-a_{ji}) + \sum_{j \neq i} a_{ij} = -\sum_{j \neq i} a_{ij} + \sum_{j \neq i} a_{ij} = 0.$$
So $\gamma = \mathbf{1}$ solves $\gamma^T L = 0$ and, from Theorem 7, the consensus reached is
$$\lim_{t \to \infty} x(t) = \alpha \mathbf{1} = (\alpha, \alpha, \ldots, \alpha)^T \quad (7)$$
with
$$\alpha = \frac{\gamma^T x(0)}{\sum_{i=1}^{n} \gamma_i} = \frac{1}{n} \sum_{i=1}^{n} x_i(0), \quad (8)$$
thus the corollary is true.
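A quick numerical check of Corollary 2 (our sketch, with an arbitrarily chosen balanced weight matrix):

```python
import numpy as np

# row sums equal column sums, so the graph is balanced
A = np.array([[0., 1., 0., 2.],
              [2., 0., 1., 0.],
              [0., 2., 0., 1.],
              [1., 0., 2., 0.]])
balanced = np.array_equal(A.sum(axis=0), A.sum(axis=1))

L = np.diag(A.sum(axis=1)) - A
x = np.array([4.0, 0.0, -2.0, 6.0])
avg = x.mean()                             # average of the initial values, 2.0
d = 0.005
for _ in range(10000):                     # forward-Euler integration of (3)
    x = x - d * (L @ x)
```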
Remark 1. Theorem 7 also holds under the weaker condition that the graph
contains a spanning tree. This is discussed in [5] and its references.


Figure 2: An example of a graph with four nodes and pre-determined edges.

4.5 Simulations and Results


To illustrate the behaviour of a few agents represented as points in $\mathbb{R}^2$, some simulations based on the consensus algorithm described in (3) have been carried out. One possible configuration with pre-determined communication paths is shown in Figure 2. Since the agents together with the connection paths form a strongly connected graph, we expect all the agents to converge to a single location. The simulations have been done in Matlab with Simulink. The result is illustrated in Figure 3.

5 Swarms and Flocking


In this section we turn our attention to some modifications of the consensus algorithm stated above. The changes are motivated by the attempt to create a model that mimics the behaviour of groups of animals or insects with regard to flocking and swarming. We let $x_i$ denote a point in $\mathbb{R}^2$ or $\mathbb{R}^3$. A point $x_i \in \mathbb{R}^2$ may be interpreted as an animal in a flock located on the ground, viewed from above, with horizontal position $x_{i,1}$ and vertical position $x_{i,2}$. Similarly, a point $x_i \in \mathbb{R}^3$ may represent the position of a fly in a swarm of flies in the air.

5.1 Edges based on distance


One might think that individuals striving to be part of a flock or swarm have the desire to be near the other individuals in their vicinity, their nearest neighbours. Whether individuals lack this desire for individuals far away due to poor sight or simply instinct, the result remains the same. In this section the consensus algorithm stated above is adjusted in an attempt to mimic this behaviour.

5.1.1 Changes to the Update Law


We begin by observing the update law
$$u_i = \sum_{j \in N_i} a_{ij}\,(x_j - x_i) \quad (9)$$


Figure 3: Simulation of a simple consensus between the positions of four agents (snapshots at times 0, 0.5, 1, 2 and 3).

that together with the dynamics
$$\dot{x}_i(t) = u_i$$
forms the consensus algorithm discussed in the sections above. As we have seen in the previous sections, this algorithm causes the nodes to strive for a state where consensus is achieved; in the case of positions, this corresponds to the nodes converging to the same position. To allow each node to only travel towards its nearest neighbours, i.e. any nodes within a set distance $r$, we define the set of edges as
$$E = \{ (i, j) : \|x_i - x_j\| \leq r \}.$$
The set of edges is generally not constant as in the previous sections, but depends on the current position of each node.
Following the discussion in the sections above, we could perhaps expect that if the graph is strongly connected at every moment in time, the system will converge. Using the algorithm discussed in the previous section, we have that the position of each node at any moment lies inside the convex hull spanned by the initial positions of all nodes. Therefore, if $r \geq R$ where $R = \max \{ \|x_i(0) - x_j(0)\| : i, j \in N \}$, we get that for every pair of nodes $(i, j)$ at any time
$$\|x_i - x_j\| \leq r \implies E = \{ (i, j) : i, j \in N \}$$
and thus the graph is strongly connected at every moment in time, and we could perhaps expect the system to converge.
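The distance-based edge set can be written down directly. A small sketch of ours (the function name is our own):

```python
import numpy as np

def edges_within(X, r):
    """The distance-based edge set E = {(i, j), i != j : ||x_i - x_j|| <= r}."""
    n = len(X)
    return {(i, j) for i in range(n) for j in range(n)
            if i != j and np.linalg.norm(X[i] - X[j]) <= r}

X = np.array([[0.0, 0.0],
              [0.2, 0.0],
              [1.0, 0.0]])
E_small = edges_within(X, 0.3)   # only the two closest agents are connected
E_large = edges_within(X, 1.0)   # r >= R: every pair of agents is connected
```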


5.1.2 Results and Simulations


Rather than using Matlab, a Java application was written to simulate the following, for higher flexibility and to provide a simple interface for easily testing different input values (see Appendix A). In Figures 4 and 5, simulations done in this application using the update law above are presented. The number of nodes is 100 and the initial states have been chosen pseudorandomly with an approximately uniform distribution within a disc of radius 0.5. The largest distance for connection, $r$, is set to 0.3. All of the simulations have been done in discrete time, using the equation
$$x_i(t + d) = x_i(t) + d\, \dot{x}_i(t)$$
with the discrete time step $d$ set to 0.001. This equation is an approximation and is not exact, but it becomes more accurate for small $d$. This $d$ will be used throughout the entire chapter, as the results of the simulations should then be accurate enough within a margin of confidence. An interesting fact that can be drawn from these plots is that even though the initial positions in the different simulations are relatively similar and the simulation parameters are identical, the outcomes of the simulations are quite different. Furthermore, most initial states of the simulations give a strongly connected graph. Despite the fact that the graph is strongly connected in the initial state, the agents may not converge to a single location. This implies that a stronger restriction on the initial states than the initial graph being strongly connected is required for consensus.
In Figure 6 the percentage of the initial positions that lead to consensus between all agents on one position is plotted against the maximum radius $r$ at which a connection between two agents is established. From the simulations we can observe a shift around a certain $r$: for most initial states below it consensus is not reached, while for most initial states above it consensus is reached. This is done for different numbers of agents; with fewer agents, a larger $r$ is needed for a high probability that the agents reach consensus. These plots are based on 100 simulations for each value of $r$, with $0.01 \leq r \leq 0.50$ in steps of 0.01 (resulting in a total of 5000 simulations for each plot).
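The discrete-time scheme above can be sketched as follows (our Python reconstruction, not the thesis's Java application; here $r$ is deliberately chosen larger than the diameter of the initial disc, so the graph stays complete and consensus is guaranteed):

```python
import numpy as np

def step(X, r, d):
    """One discrete-time step x_i(t+d) = x_i(t) + d*u_i(t), with unweighted
    edges between all pairs of agents within distance r."""
    diff = X[None, :, :] - X[:, None, :]      # diff[i, j] = x_j - x_i
    dist = np.linalg.norm(diff, axis=2)
    mask = (dist <= r) & (dist > 0.0)         # exclude self-loops
    U = (diff * mask[:, :, None]).sum(axis=1)
    return X + d * U

rng = np.random.default_rng(1)
n = 20
# roughly uniform initial positions within a disc of radius 0.5
ang = rng.uniform(0.0, 2.0 * np.pi, n)
rad = 0.5 * np.sqrt(rng.uniform(0.0, 1.0, n))
X = np.c_[rad * np.cos(ang), rad * np.sin(ang)]
centroid0 = X.mean(axis=0)

# r larger than the diameter of the disc, so the graph remains complete
for _ in range(1500):
    X = step(X, r=1.1, d=0.001)

spread = np.linalg.norm(X - X.mean(axis=0), axis=1).max()
```

With distance-based, symmetric edges the centroid is preserved at every step, so the agents contract towards their initial centroid.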

5.2 Collision Avoidance


In this section we will consider another characteristic of flocking animals or
swarming insects. The previous model is not very realistic in the regard that
animals will not gather in too close proximity, but rather attempt to keep a
certain desired distance between one another. Let us denote this desired distance
by r_d. This section is partially inspired by the discussion of collision avoidance
found in [6].

5.2.1 Changes to the Update Law


To achieve collision avoidance between nodes we have to adjust the update
law (9). One idea is to multiply the current update law by a function
φ(||x_i - x_j||) so that, in the direction from node i to node j, the update law is
positive for ||x_i - x_j|| > r_d, negative for ||x_i - x_j|| < r_d, zero for ||x_i - x_j|| = r_d,

Figure 4: Simulation of agents with connections based on distance (panels
(a)-(f) at times 0.001, 0.025, 0.050, 0.075, 0.100 and 0.125).

Figure 5: Simulation of agents with connections based on distance (panels
(a)-(i) at times 0.001, 0.050, 0.100, 0.150, 0.200, 0.250, 0.300, 0.350 and
0.400).
Figure 6: Plots of the percentage of the simulations that reached consensus
compared to the maximum distance for a connection; each graph is for n nodes
(panels (a)-(f) for n = 10, 25, 50, 75, 100 and 125).

and also grows large in magnitude, and negative, as ||x_i - x_j|| → 0. A function
that satisfies these requirements is for example

    φ(x) = 1 - r_d / x².

Our new update function now has the form

    u_i = Σ_{j ∈ N_i} a_ij (x_j - x_i) φ(||x_i - x_j||).        (10)

In particular we see that

    lim_{||x_i - x_j|| → 0} ||u_i||
        = lim_{||x_i - x_j|| → 0} || Σ_{j ∈ N_i} a_ij (x_j - x_i) φ(||x_i - x_j||) ||
        = lim_{||x_i - x_j|| → 0} || Σ_{j ∈ N_i} a_ij ( (x_j - x_i) - r_d (x_j - x_i) / ||x_i - x_j||² ) ||,

where the term in the direction of (x_i - x_j) grows large, that is, away from
node j, which is in accordance with our requirements on the function φ. Using
this new update law will make two nodes travel towards each other until
the distance between them is less than r_d; they will then repel each other and
eventually settle at distance r_d from each other. In larger groups of
nodes, a sort of crowding phenomenon will appear, where only the nodes in the
direct vicinity of a certain node will repel it, whilst the node will be attracted
by other nodes further away than r_d, causing the nodes in the large group to
settle at a smaller distance than r_d.
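A minimal Java sketch of update law (10) with φ(x) = 1 - r_d/x² follows. The class and helper names are ours, and all weights a_ij are set to 1 for illustration; the neighbourhood N_i consists of the agents within the connection radius r:

```java
import java.awt.geom.Point2D;

public class Repulsion {
    // phi(x) = 1 - rd / x^2: the scalar weighting applied to the
    // attraction between two connected agents in update law (10).
    static double phi(double x, double rd) {
        return 1.0 - rd / (x * x);
    }

    // u_i = sum over neighbours j of (x_j - x_i) * phi(||x_i - x_j||),
    // with unit weights and N_i = all agents within radius r of agent i.
    static Point2D.Double ui(Point2D.Double[] x, int i, double r, double rd) {
        Point2D.Double u = new Point2D.Double(0, 0);
        for (int j = 0; j < x.length; j++) {
            double dist = x[i].distance(x[j]);
            if (j != i && dist <= r) {
                double w = phi(dist, rd);
                u.setLocation(u.getX() + (x[j].getX() - x[i].getX()) * w,
                              u.getY() + (x[j].getY() - x[i].getY()) * w);
            }
        }
        return u;
    }

    public static void main(String[] args) {
        Point2D.Double[] x = { new Point2D.Double(0, 0),
                               new Point2D.Double(0.2, 0) };
        // Well separated agents: phi > 0, net attraction towards agent 1.
        System.out.println(ui(x, 0, 0.3, 0.01).getX());
        // Very close agents: phi < 0, so agent 0 is pushed away instead.
        x[1].setLocation(0.05, 0);
        System.out.println(ui(x, 0, 0.3, 0.01).getX());
    }
}
```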

5.2.2 Simulations
Examples of simulations with collision avoidance where the agents converge to
one group and thus reach consensus are shown in Figure 7. Another example,
where the agents do not reach consensus and instead form two different groups, can be
observed in Figure 8. These simulations were done with 100 agents spawning in
a circle of radius 0.5, with the connection radius set to r = 0.3 and r_d = 0.01.
In Figure 9 the relation between the consensus rate and the number of spawning
nodes for a number of simulations can be observed. The nodes are spawned in a
circular area with radius 0.5, with the maximum distance for communication
between nodes r = 0.3 and the desired distance between agents r_d =
0.01. The plot is based on the results of 100 simulations for each number of
spawning nodes. As expected, a higher initial density seems to increase the
probability of convergence.
The second parameter examined is the maximum distance r for a connection
between two agents. Plots of its relation to the consensus rate can be observed
in Figure 10. The plot is, as before, based on 100 simulations at each r, with the
agents spawning in a circle with a radius of 0.5. Another possibility is to keep
the desired distance r_d constant when changing the maximum radius r. The
result is interesting, as a certain combination of r and r_d seems to give a high
convergence rate, as seen in Figure 11.

Figure 7: Simulation of consensus between the horizontal and vertical position
of a set of agents where consensus is reached, with connections based on
distance and collision avoidance (panels (a)-(d) at times 0.001, 0.050, 0.100
and 0.150).

Figure 8: Simulation of consensus between the horizontal and vertical position
of a set of agents where consensus is not reached, with connections based on
distance and collision avoidance (panels (a)-(d) at times 0.001, 0.050, 0.100
and 0.150).

Figure 9: The percentage of the simulations that reached consensus compared
to the number of agents. Based on 100 simulations at each value of n.

Figure 10: The percentage of the simulations that reached consensus compared
to the maximum distance for a connection. Based on 100 simulations at each
value of r, with r_d = 0.025r.

Figure 11: The percentage of the simulations that reached consensus compared
to the maximum distance for a connection. Based on 100 simulations at each
value of r, with constant r_d = 0.01.

5.3 Leaders
Leaders are agents that not only follow the other agents in the group but also
desire to move towards a certain target state. In animal flocks this
could correspond to an animal having knowledge about a location with food or
water. A number of leaders can be introduced, and an interesting possibility is
to give the connections to leaders a higher weight in order to simulate that the
leaders of the flock are chosen leaders, and that the individuals therefore follow
them more than other individuals.

5.3.1 The Dynamics of Leaders


A target location is introduced, with position denoted by τ. The dynamics
for the leaders, with indices belonging to the set Z, is changed. Weights are
introduced to adjust the leaders' desire to follow the flock and to move towards
the target; these are denoted by w_f and w_τ respectively. The new dynamics for
the leaders is

    ẋ_i(t) = w_f u_i + w_τ (τ - x_i) / ||τ - x_i||,        i ∈ Z,

with the same u_i as in (10),

    u_i = Σ_{j ∈ N_i} a_ij (x_j - x_i) φ(||x_i - x_j||).
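As a sketch, the leader velocity can be evaluated for one agent as follows. The class and method names are illustrative (not the application's), and the flocking term u_i is passed in precomputed:

```java
import java.awt.geom.Point2D;

public class LeaderStep {
    // Leader velocity: wf * u_i + wt * (tau - x_i) / ||tau - x_i||,
    // i.e. the flocking term blended with a unit vector towards the target.
    static Point2D.Double leaderVelocity(Point2D.Double xi, Point2D.Double ui,
            Point2D.Double tau, double wf, double wt) {
        double dist = xi.distance(tau);
        return new Point2D.Double(
                wf * ui.getX() + wt * (tau.getX() - xi.getX()) / dist,
                wf * ui.getY() + wt * (tau.getY() - xi.getY()) / dist);
    }

    public static void main(String[] args) {
        Point2D.Double xi = new Point2D.Double(0, 0);
        Point2D.Double ui = new Point2D.Double(0.1, 0); // flocking term
        Point2D.Double tau = new Point2D.Double(1, 1);  // target location
        // Weights as in the simulation of Section 5.3.2.
        System.out.println(leaderVelocity(xi, ui, tau, 0.1, 2.0));
    }
}
```

Because the target term is normalised to unit length, w_τ directly bounds how fast a leader moves towards the target regardless of how far away it is.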

5.3.2 Simulations
A simulation with 100 agents, of which five were leaders, is shown in Figure
12. The input parameters used were r = 0.3 and r_d = 0.01 together with the
weights w_τ = 2.0, w_f = 0.1. Variation of the weights strongly affects
the result: if w_τ is too big the leaders outrun the other agents, while a big w_f
makes the leaders slow, so that it takes longer for the flock to reach the target
but the possibility that the flock will eventually reach it becomes larger.

6 Discussion
In the first part, well-defined conditions for convergence of certain algorithms
can be set up; although these algorithms are simpler than what is applicable in most
real-world scenarios, they give a basic understanding of the consensus algorithms
used later. For the more advanced algorithms, relations to the convergence
conditions of the simpler consensus algorithms can be made.
A lot of analysis can be done based on the simulations, and since the final
result depends on the random initial state, analyses based on large numbers of
simulations are desirable.
Using connections based on the distance between nodes and having collision
avoidance are some of the basic rules that are needed to make the model more
relevant to the applications. The agents' dependence on the input parameters
is generally high, but as the simulations show, there are ranges in which
small changes in the configuration increase the probability of convergence by a
significant amount.
More properties can be added, and interesting results appear when leaders
that steer the group in a specific direction are introduced. Other variations could
also be added, such as obstacles that the agents need to avoid.

Figure 12: Simulation of agents with leaders (panels (a)-(i) at times 0.000,
0.100, 0.200, 0.400, 0.600, 0.800, 1.000, 1.500 and 1.900).


References
[1] Roger A. Horn and Charles R. Johnson, Matrix Analysis, Cambridge, U.K.:
Cambridge University Press, 1999.
[2] William E. Boyce and Richard C. DiPrima, Elementary Differential Equations
and Boundary Value Problems, Eighth Edition, John Wiley & Sons Inc, 2005.
[3] Turker Biyikoglu, Josef Leydold and Peter F. Stadler, Laplacian Eigenvectors
of Graphs, Springer, 2007.
[4] Reza Olfati-Saber and Richard M. Murray, Consensus Problems in Networks
of Agents With Switching Topology and Time-Delays, IEEE Transactions on
Automatic Control, Vol. 49, No. 9, September 2004.
[5] Reza Olfati-Saber, J. Alex Fax and Richard M. Murray, Consensus and
Cooperation in Networked Multi-Agent Systems, Proceedings of the IEEE,
Vol. 95, No. 1, January 2007.
[6] Reza Olfati-Saber, Flocking for Multi-Agent Dynamic Systems: Algorithms
and Theory, IEEE Transactions on Automatic Control, Vol. 51, No. 3, March
2006.
[7] Howard Anton, Chris Rorres, Elementary Linear Algebra with Supplemental
Applications, 10th Edition, International Student Version, John Wiley & Sons
Inc, 2011.

Figure 13: The Java application used for simulations.

A Java Application
A Java application (shown in Figure 13) was built to easily test the behaviour
of the agents when the input parameters are changed. The application uses the
algorithms described in the report with data from the user. There is also a
possibility to run a series of simulations with pre-specified input parameters
and have the results saved to a text file for later usage. An algorithm to determine
if consensus is reached has been implemented.
The significant parts of the code, where the algorithms are implemented, have
been attached. The application and the full source code can be downloaded
from https://sjovall.org/KTH/SA104X-public/sim/javaapp/.

The Java Class Agents


This is where the algorithms are implemented.

import java.awt.geom.Point2D;
import java.util.ArrayList;
import java.util.Arrays;

public class Agents {

    private int n = 1000;
    public ArrayList<Point2D.Double> x0 = new ArrayList<Point2D.Double>();
    private ArrayList<Point2D.Double> xStart = new ArrayList<Point2D.Double>();
    public double R = 0.3;
    public double d = 0.0001;
    private int spawnpattern = 1;
    private double minR = 0.01;
    private boolean finished;
    private double currentTime = 0;
    private boolean leaders;
    private int leadersNum = 5;
    private double leadersFollowTarget = 1;
    private double leadersFollowFlock = 0;

    private Point2D.Double target = new Point2D.Double(1, 1);

    // private double maxVelocity = 1;

    public void setLeaders(boolean leaders) {
        this.leaders = leaders;
    }

    public void setLeadersFollowTarget(double leadersFollowTarget) {
        this.leadersFollowTarget = leadersFollowTarget;
    }

    public void setLeadersFollowFlock(double leadersFollowFlock) {
        this.leadersFollowFlock = leadersFollowFlock;
    }

    public void setCurrentTime(double currentTime) {
        this.currentTime = currentTime;
    }

    public boolean isFinished() {
        return finished;
    }

    public Agents() {
        finished = false;
        leaders = false;
    }

    public int getSpawnpattern() {
        return spawnpattern;
    }

    public void setSpawnpattern(int spawnpattern) {
        this.spawnpattern = spawnpattern;
    }

    public void setMinR(double minR) {
        this.minR = minR;
    }

    public void setR(double r) {
        R = r;
    }

    public void setN(int N) {
        this.n = N;
    }

    public void setD(double d) {
        this.d = d;
    }

    // One discrete time step of length d for all agents.
    public void cycle() {
        ArrayList<Point2D.Double> dx = new ArrayList<Point2D.Double>();
        if (minR == 0) {
            dx = calcDiffBasic();
        } else {
            dx = calcDiffMinDist();
        }

        if (leaders) {
            for (int i = 0; i < leadersNum; i++) {
                Point2D.Double toTarget = new Point2D.Double(
                        (target.getX() - x0.get(i).getX()),
                        (target.getY() - x0.get(i).getY()));
                toTarget.x = toTarget.x / toTarget.distance(0, 0);
                toTarget.y = toTarget.y / toTarget.distance(0, 0);
                Point2D.Double newDx = new Point2D.Double(
                        (leadersFollowFlock * dx.get(i).getX()
                                + leadersFollowTarget * toTarget.getX()),
                        (leadersFollowFlock * dx.get(i).getY()
                                + leadersFollowTarget * toTarget.getY()));
                dx.set(i, newDx);
            }
        }

        for (int i = 0; i < x0.size(); i++) {
            Point2D.Double xi = x0.get(i);
            Point2D.Double dxi = dx.get(i);
            x0.set(i, new Point2D.Double(xi.getX() + dxi.getX() * d,
                    xi.getY() + dxi.getY() * d));
        }
        this.currentTime = this.currentTime + d;

        if (converged(dx)) {
            finished = true;
        }
    }

    private boolean converged(ArrayList<Point2D.Double> dx) {
        for (int i = 0; i < dx.size(); i++) {
            if (dx.get(i).distance(0, 0) >= 0.1) {
                return false;
            }
        }
        if (dx.size() < 1) {
            return false;
        }
        return true;
    }

    // Consensus is reached when all agents belong to one connected group.
    public boolean isConsensus() {
        boolean[] belong2group = new boolean[x0.size()];
        Arrays.fill(belong2group, false);
        belong2group[0] = true;

        belong2group = inGroup(belong2group, 0);

        for (int i = 0; i < belong2group.length; i++) {
            if (belong2group[i] == false) {
                return false;
            }
        }

        return true;
    }

    public int isConsensusNum() {
        if (isConsensus()) {
            return 1;
        } else {
            return 0;
        }
    }

    // Recursively marks every agent within distance R of an agent
    // already in the group.
    private boolean[] inGroup(boolean[] group, int agent) {
        for (int i = 0; i < x0.size(); i++) {
            if (x0.get(agent).distance(x0.get(i)) <= R && group[i] != true) {
                group[i] = true;
                group = inGroup(group, i);
            }
        }
        return group;
    }

    // Update law (10) with collision avoidance, phi(x) = 1 - minR / x^2.
    private ArrayList<Point2D.Double> calcDiffMinDist() {
        ArrayList<Point2D.Double> dx = new ArrayList<Point2D.Double>();
        for (int i = 0; i < x0.size(); i++) {
            Point2D.Double xi = x0.get(i);
            Point2D.Double dxi = new Point2D.Double();
            for (int j = 0; j < x0.size(); j++) {
                Point2D.Double xj = x0.get(j);
                double d = xi.distance(xj);
                if (d <= R && j != i) {
                    dxi.setLocation(dxi.getX() + xj.getX() - xi.getX()
                            - (xj.getX() - xi.getX()) * minR / Math.pow(d, 2),
                            dxi.getY() + xj.getY() - xi.getY()
                                    - (xj.getY() - xi.getY()) * minR / Math.pow(d, 2));
                }
            }
            dx.add(dxi);
        }
        return dx;
    }

    // Basic update law without collision avoidance.
    private ArrayList<Point2D.Double> calcDiffBasic() {
        ArrayList<Point2D.Double> dx = new ArrayList<Point2D.Double>();
        for (int i = 0; i < n; i++) {
            Point2D.Double xi = x0.get(i);
            Point2D.Double dxi = new Point2D.Double();
            for (int j = 0; j < n; j++) {
                Point2D.Double xj = x0.get(j);
                double d = xi.distance(xj);
                if (d <= R && j != i) {
                    dxi.setLocation(dxi.getX() + xj.getX() - xi.getX(),
                            dxi.getY() + xj.getY() - xi.getY());
                }
            }
            dx.add(dxi);
        }
        return dx;
    }

    public void setup() {
        xStart = new ArrayList<Point2D.Double>();

        switch (spawnpattern) {
        case 0:
            // Uniform in the unit square.
            for (int i = 0; i < n; i++) {
                xStart.add(new Point2D.Double(Math.random(), Math.random()));
            }
            break;
        case 1:
            // Uniform in a disc of radius 0.5, by rejection sampling.
            for (int i = 0; i < n; i++) {
                Point2D.Double p = new Point2D.Double(Math.random(),
                        Math.random());
                Point2D.Double middle = new Point2D.Double(0.5, 0.5);
                while (p.distance(middle) > 0.5) {
                    p = new Point2D.Double(Math.random(), Math.random());
                }
                xStart.add(p);
            }
            break;
        }
        restart();
    }

    public void restart() {
        x0 = new ArrayList<Point2D.Double>();
        for (int i = 0; i < xStart.size(); i++) {
            x0.add(xStart.get(i));
        }
        this.currentTime = 0;
        finished = false;
        System.out.println("isConsensus is " + isConsensus());
    }

    public int getN() {
        return n;
    }

    public double getR() {
        return R;
    }

    public double getD() {
        return d;
    }

    public double getMinR() {
        return minR;
    }

    public double getCurrentTime() {
        return currentTime;
    }
}

B Matlab Simulations
Early simulations were mostly done in Matlab. The Matlab scripts and
Simulink files are located at https://sjovall.org/KTH/SA104X-public/sim/
matlab/.
