Вы находитесь на странице: 1из 5

Particle Swarm Optimiser with Neighbourhood Operator

P.N.Suganthan
Department of Computer Science and Electrical Engineering
University of Queensland St Lucia QLD 4072, Australia,
Email : suganocsee.uq. edu.au

Abstract In recent years population based meth- tion is maintained through out (i.e. it is not necessary
ods such as genetic algorithms, evolutionary pro- to apply operators such as recombination, selection, etc.
gramming, evolution strategies and genetic pro- t o the population) and constructive cooperation between
gramming have been increasingly employed to particles.
solve a variety of optimisation problems. Re-
cently, another novel population based optimi- Recent studies by Angeline [2] showed t h a t although
sation algorithm - namely the particle swarm op- PSO discovered reasonable quality solutions much faster
timisation ( P S O ) algorithm, was introduced by than other evolutionary algorithms, it did not possess
Eberhart and Kennedy [3,7]. Although the P S O the ability t o perform a fine grain search t o improve upon
algorithm possesses some attractive properties, the quality of the solutions as the number of generations
its solution quality has been somewhat inferior was increased. Whereas other evolutionary algorithms
to other evolutionary optimisation algorithms [2]. almost always continued t o improve the quality of the
In this paper, we propose a number of techniques solutions as the number of generations was increased and
to improve the standard P S O algorithm. Simi- in the end generated much better solutions than those
lar techniques have been employed in the context generated by the PSO algorithm.
of self-organising maps and neural-gas networks
[9,111. In this paper, we propose a number of improvements
for the standard PSO algorithm. Similar techniques have
been used widely in the context of the self-organising
1 Introduction
maps [9] and neural-gas networks [ll]. We introduce
In recent years population based methods such as genetic a variable neighbourhood operator. During the initial
algorithms [5], evolutionary programming [4], evolution stages of the optimisation, the PSO algorithm’s neigh-
strategies [la] and genetic programming [lo] have been bourhood will be an individual particle itself. As the
increasingly employed t o solve a variety of optimisation number of generations increases, the neighbourhood will
problems. These multiple solutions based stochastic al- be gradually extended t o include all particles. In other
gorithms are less likely t o get trapped in local minima words, the variable GBEST in the PSO algorithm is re-
t,han single solution based methods such as the steep- placed by LBEST (i.e. local best solution) where a local
est descent algorithm. Due t o this advantage, numerous neighbourhood size is gradually increased. In addition,
problerns have been formulated in order t o be solved by the magnitudes of the random walk and inertia weight in
evolutionary algorithms. the PSO are also gradually adjusted in order t o perform
Recently, another novel population based optimisa- a fine grain search during the final stages of optimisa-
tion algorithm - namely the particle swarm optimisa- tion. We study function optimisation t o illustrate the
tion (PSO) algorithm, was introduced by Eberhart and performance of the modified PSO algorithm.
Kennedy [3,7]. T h e underlying motivation for t,he devel-
opment of PSO algorithm was social behaviour of ani- In the next section, we will present our modified PSO
mals such as bird flocking, fish schooling, animal herd- algorithm. In Section 3, we explain the test problem -
ing and swarming theory. T h e PSO algorithm possesses namely the function optimisation. Experimental proce-
some attractive properties such as memory, (i.e. every dures are described in Section 4. T h e paper is concluded
particle remembers its best solution), the initial popula- in Section 5.

0-7803-5536-9/99/$10.00 0 1999 IEEE 1958


2 Modified Particle Swarm Opti- It is obvious from equation ( l ) ,t h a t we accelerate ev-
ery particle in a direction between its personal best and
mization Algorithm the global best. It was also observed t h a t committing
Table 1: T h e Original PSO Algorithm too early in the search to the global best found so far is
more likely t o confine the search around a local minima,
because the global best found early in the search may
PSO1: Initialise positions and associated velocity of all
well be a poor local minima. Some effort [6,13]has been
particles (potential solutions) in t,he population
made t o consider a local best solution instead of either
randomly in the D dimension space.
P B E S T or G B E S T . In this paper, we propose t o re-
P S 0 2 : Evaluate the fittness value of all particles.
place the G B E S T component by a local best ( L B E S T )
P S 0 3 : Compare the PBESTU of every particle with
solution. We dynamically increase t h e neighbourhood in
its current fitness value. If the current fitness value is
order t o change the L B E S T from P B E S T t o G B E S T
better, then assign t h e current fitness value t o
with increasing number of iterations. T h e neighbour-
PBESTU and assign the current coordinates t o
hood may be defined in two different ways. T h e simplest
P B E S T z n [d] coordinat,es.
and fastest method is t o consider particles just above
P S 0 4 : Det,errnine the current best fitness value in the
and below a specific particle for which a neighbourhood
whole population and its coordinates. If t,he current.
is sought, as in [6]. Alternatively, we could calculate
best fitness value is better than the G B E S T , then
distances between particles and choose a fraction of par-
assign the current best, fitness value t o G B E S T and
t,icles t h a t are close t o the particle for which a neighbour-
assign the current coordinates t o G B E S T z [ [ d ]
hood is sought. This fraction can be increased gradually
coordinates.
t o include all particles during final stages of search pro-
P S 0 5 : Change velocities using t8hefollowing equation:
cess. This method can be computational intensive, if the
dimension of the parameter space is large. We investi-
gate both alternatives.

Table 2: T h e Modified PSO Algorithm

where C1 = Cz = 2.0 PSO1: Initialise positions and associated velocity of a


+
P S 0 6 : Move each particle t o Presentzn[d] w [ [ d ] number of particles (potential solution) randomly.
P S 0 7 : Repeat steps PSO2-PSO6 until a stop crkerion PSO2: Evaluate the fitness value of all particles.
is satisfied O R a prespecified number of iterat,ions is P S 0 3 : Compare the P B E S T [ ]of every particle with
completed. its current fitness value. If the current fitness value is
better, then assign the current fitness value t o
As we indicated before, the PSO algorithm is very PBESTO and assign the current coordinates to
similar t,o other evolutionary algorithms, in t h a t the al- P B E S T z [ [ d ]coordinates.
gorit,hm is randomly initialised with a potential solution P S 0 4 : For every particle define a neighbourhood and
population. In the PSO method, each particle (or po- determine t8helocal best fitness value and its
tential solution) is assigned with a random velocity and coordinates for every particle.
flown through the solution parameter space. Each po- P S 0 5 : Change velocities using the following equation:
tential solution retains the coordinates and the best fit-
ness value associated with the best solution t h a t particle 4lPl = W ( t )* 4 l [ d l +
achieved so far. This solution is referred t o as PBEST, C , ( t )* r a n d ( ) l * ( P B E S T z [ ] [ d-] Presentz[][d])+
i.e. the personal best solution. T h e PSO algorithm also C,(t) * r a n d 0 2 * ( L B E S T z [ l [ d -] Presentz[][d])(2)
maintains the coordinates and the value of the best so-
lution achieved by the whole populat.ion. This solution P S 0 6 : Move each particle t o PresentzU[d] ~ [ [ d ] +
is known as GBEST, i.e. the global best, solution. In P S 0 7 : Change parameter values using t h e following
our irnplement,ation, we require the best solution within equation:
a neighbourhood. This solution is called LBEST solu-
tion. This solution can be discovered by inspecting all
W(2) = W“ + (WO - W“)[l - t / K ] (3)
P B E S T solutions wit,hin the neighbourhood. T h e orig- Cl(2) = c y + (CY - C,”)[l - t / K ] (4)
inal PSO algorithm is summarised in the table below. C,(t) = C,- + (C,”- C7)[1- t / K ] (5)

1959
where K and t are total number of T h e fourth function is the generalised Griewank function
iterations/generations and current iteration/generation expressed as follows.
number. Superscripts cm and 0 denotes parameter n
values at the start and in the end of search process. 1 "
PS08: Repeat steps PSO2-PSO7 until a stop criterion
is satisfied O R a prespecified number of iterations is
These functions have been used by several evolutionary
completed.
computation researchers [ l ] .

In addition, we introduced time decay t o weight val-


ues in equation (1). In the original algorithm, W, C1 and
4 Experimental Results
Cz all are assigned with fixed values through out the In our experiments, 20, 30 and 50 dimensions are consid-
search process. In our investigations, we attempted t o ered. In all cases, 40 particles are used. Maximum num-
decay the values of these paramet,ers, so t h a t the par- ber of iterations is fixed at 1000. We had 50 trial runs for
ticles make very large movement,s in the early stages, evpry instance of t,hese problems. W ( t ) was adapted as
t,hereby scanning the whole parameter space for good lo- follows: Winitial = 0.95, Wjinal = 0.2 weight adaptation:
cal minima and then in the final stages particles perform W(t) = (Winilia, - 0.2)* ( M A X I T E R ?- ITER.) 0.2 +
a fine grain search. However, our experiments indicated Likewise, parameters C1 and Cz are also initially set t o
t,hat fixed values for C1 and Cz generated bett'er solu- 3 and gradually decreased t o 0.25. However, varying C1
t,ions, though not necessarily 2 for both parameters as and Cz lowered the quality of the solution. Hence, we
shown in equation ( 1 ) . In Table 2 , we summarise the kept them fixed at 2.0.
modified PSO algorithm. T h e standard algorithm provided by Dr Yuhui Shi and
my modified algorithm were run using t h e above parame-
ter values. In my modified algorithm, the G B E S T com-
3 Function Optimisation ponent in the update equation was replaced by L B E S T
component. T h e L B E S T was gradually incremented t o
In our study, we employed the modified PSO algorithm
become the G B E S T . T h e neighbourhood operator used
t o perform function optimisation. T h e test functions
t o obtain the L B E S T is explained below.
t,hat we use are commonly used in the evolutionary com-
T h e L B E S T is defined to be the best solution within
putation literature [a]. We considered four functions -
a neighbourhood of a particle t o be updated. In or-
two of which are unimodal and t,he other two have a
der t o define a neighbourhood, we calculate distances
number of local minima. All functions are designed t o
between the candidate particle (to be adapted) and all
have minima at or near the origin. T h e first function is
other particles. We also obtain the m a x i m u m distance
t,he sphere rnodel given by
(rnaz-dist) between the candidatmeparticle and any other
n particle in the population. We then obtain a ratio us-
.fl(.) = x: (6) ing these distances as follows: dist[l]/rnaz-dist for ev-
i=l
ery particle 1 in the population. We also defined a
where x is a real-valued vector of dimension n and zi is fract,ion as a function of iteration ( I T E R ) as follows:
the ith element of the vector. Although it is a unimodal *
frac = (3 .O * IT ER+ 0.6 M A X IT E R )/ M A X IT ER. If
function, it differentiates well between good local opti- frac (fraction) is less than 0.9, we search for the LBEST
misers and poor local optimisers. T h e second function is solution among particles which satisfy the following con-
the Rosenbrock function given by dition: frac > d i s t [ l ] / r n a z d i s t . If t h e frac is less t h a n
0.9, we use the G B E S T in the update equation. T h e
results are summarised in Table 3.
In table 3, the first value is the worst result. T h e
second value is the aggregate error for 50 runs. We can
T h e third function is t h e generalised Rastrigrin function observe t h a t the aggregate error is often better for the
given by the following equation. modified algorithm. T h e worst error is marginally worse
for the modified algorithm. Further investigations are
n
h(2)= C(z?
- locOs(2Tzi) + 10) (8)
necessary, particularly along t h e directions detailed be-
low.
i=l

1960
My experiments also showed t h a t C1 and Cz had t o 4. L.J.Fogel, “Evolutionary programming in perspec-
be constants t o generate good quality solutions, but not tive: T h e top-down view”, In Computational Intel-
necessarily a value of 2.0 for both parameters gave the ligence: Imitating Life, J.M.Zurada, R.J.Marks I1
best result. For instance, C1 = 2.5, C2 = 1.5 gave an and C.Goldberg, Eds, IEEE Press, Piscataway, N J ,
aggregate error of 0.391 for function 4. This result is 1994.
better than those reported above. On the other hand
5. D.E.Goldberg, Genetic algorithms in search, op-
C1 = 0.0, C2 = 2.5 gave an aggregate error of near zero
for function 2. Hence, it would be interesting t o develop timisation and machine learning, Addison-Wesley,
MA, 1989.
an algorithm (genetic based?) t o search for the best
combination of parameters for the PSO for every problem 6. J.Kennedy, “The particle swarm: Social adaptation
separately. Further, the computation of frac can also of knowledge”, In Proc. of IEEE Int. Conf. on Euo-
be adjusted in an efficient manner by replacing 3.0 and lutionary Computation, Indianapolis, USA, 1997.
0.6 factors by problem specific variables obtained by an
efficient search algorithm. 7. J.Kennedy and R.Eberhart, “Particle swarm optmi-
mization”, In Proc. of IEEE Int. Conf. on Neural
Networks, Perth, Australia, December 1995.
5 Conclusions 8. J.Kennedy and R.Eberhart, “A discrete binary ver-
sion of the particle swarm algorithm”, In Proc. of
In this paper, we investigated the PSO algorithm. In
IEEE Int. Conf. on Systems, Man an,d Cybernetics,
particular, we proposed a number of improvements such
Orlando, FL, USA, 1997.
as gradually increasing local neighbourhood, time vary-
ing random walk and inertia weight values and two al- 9. T.Kohonen, Self-organising maps, Springer Verlag,
ternative schemes for determining the L E EST solution 1995; “The self-organising maps”, Proceedings of the
for every particle. Our experimental results showed some IEEE, Vol. 78, No. 9, 1464-1480, 1990.
advantages of using a neighbourhood operator. We found
10. J.R.Koza, Genetic programming: On the program-
t h a t the time varying values for C1 and C;!did not in fact
ming of computers b y means of natural selection,
perform worse than the system with fixed values. How-
MIT Press, Cambridge, MA, 1992.
ever, we found out t h a t there existed different combina-
tion of values for C1 and C2 (not just 2.0) t o yield good 11. T.M.Martinetz, S.G.Berkovich and K.J.Schulten,
solutions for different problems. We are currently inves- “Neural-gas network for vector quantisation and its
tigating the means t o determine these values efficiently application t o time-series prediction”, IEEE Trans-
for different problems. actions on Neural Networks, Vol. 4 , No. 4, 558-569,
1994.

12. I.Rechenberg, “Evolution strategy”, In Com.puta-


References tional Intelligence: Imitating Life, J.M.Zurada,
R.J.Marks I1 and C.Goldberg, Eds, IEEE Press, Pis-
P.Angeline, “Using selection t o improve particle cataway, NJ, 1994.
-swarm optimisation” , In Proc. of Int. Conference
13. Y S h i and R.Eberhart, “A modified particle swarm
on Evolutionary Computation, Alaska, USA, May
optimizer”, In Proc. o f IEEE Int. Conf. on Euolu-
1998.
tionary Computation, Anchorage, USA, May 1998.
P.Angeline, “Evolutionary optimisation versus par- 14. X.Yao, “Evolutionary artificial neural networks”,
ticle swarm optimization: Philosophy and perfor- Encyclopedia of Computer Science and Technology,
mance difference”, In Proc. of Evolutionary Pro- Eds A.Kent, et al., Vol. 33, 137-170, Marcel Dekker
gramming Conference, San Diago, USA, 1998. Inc. NY, 1995; also in Int. J of Neural Systems,
Vol. 4 , NO. 3, 203-222, 1993.
3. R.Eberhart and J.Kennedy, “A new optimizer using
particle swarm theory”, In Proc. of sixth lnt. Sym.
on Micro Machine and Human Science, Nagoya,
Japan, October 1995.

1961
Table 3: Function optimisation results

function I dimension I standard algo. modified algo.


20 0 0
0 0
fl 30 0.000006 0.000005
0.000043 0.000034
50 0.8929 0.953
9.162 8.80
20 0.0108 0.0244
0.02891 0.0318
f2 30 0.189051 0.00552
0.2125 0.0194
50 0.1669 0.1611
0.5635 0.4104
20 51.74 49.26
1516.1 1443.7
f3 30 106.59 109.45
3506.85 3447.99
50 383.0 431.4
11745.3 11557.5
20 0.1104 0.1399
1.705 1.48
f4 30 0.0615 0.0659
0.662 0.5778
50 0.0528 0.0615
0.783 0.7614

1962

Вам также может понравиться