PARTICLE SWARM OPTIMIZATION

Particle swarm optimization (PSO) is a heuristic global optimization technique put
forward by Kennedy and Eberhart in 1995. It makes few or no assumptions about the
problem being optimized and can search very large spaces of candidate solutions. It
developed from swarm intelligence and was inspired by the movement behavior of bird
and fish flocks. PSO simulates bird flocking behavior. Consider the following
situation: a group of birds is randomly searching for food in an area. Suppose there
is only one piece of food in the area being searched, and no bird knows where it is,
but in each iteration every bird knows how far away the food is. The best strategy
to find the food is therefore to follow the bird that is nearest to it. PSO takes
this scenario and applies it to finding solutions of optimization problems. In
particle swarm optimization, each single solution is a "bird" in the search space,
called a "particle". Each particle has a fitness value, evaluated by the fitness
function to be optimized, and a velocity that directs its flight. The particles
follow the current optimum particles and fly through the problem space. PSO is
initialized with a group of random particles (solutions) and then searches for the
optimum by updating generations. In each iteration, every particle is updated by two
"best" values. The first is the best solution (fitness) the particle has achieved so
far; it is called pbest and is stored. The second "best" value tracked by the
particle swarm optimizer is the best value obtained so far by any particle in the
population. This value is the global best and is called gbest. When a particle takes
only part of the population as its topological neighbors, the best value is a local
best and is called lbest.
The PSO algorithm works by having a population (called a swarm) of candidate
solutions (called particles). The particles are moved around in the search space
according to a few simple formulae. Each particle's movement is guided by its own
best known position in the search space as well as by the whole swarm's best known
position. When improved positions are discovered, they in turn come to guide the
movements of the swarm. The process is repeated, and by doing so it is hoped, but
not guaranteed, that a satisfactory solution will eventually be discovered.
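
In symbols, a common formulation of these update rules (with inertia weight ω and
acceleration coefficients φ_p, φ_g, which are not specified in the text above and
are named here only for completeness) is:

v_{i,d} \leftarrow \omega\, v_{i,d} + \varphi_p r_p (p_{i,d} - x_{i,d}) + \varphi_g r_g (g_d - x_{i,d}),
\qquad x_{i,d} \leftarrow x_{i,d} + v_{i,d}

where r_p, r_g \sim U(0,1) are fresh random numbers, p_i is the particle's best known
position, and g is the swarm's best known position.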

Algorithm
1. For each particle i = 1, ..., S do:
      Initialize the particle's position with a uniformly distributed random vector:
      x_i ~ U(b_lo, b_up), where b_lo is the lower boundary and b_up is the upper
      boundary of the search space.
      Initialize the particle's best known position to its initial position: p_i ← x_i
      If f(p_i) < f(g), update the swarm's best known position: g ← p_i
      Initialize the particle's velocity: v_i ~ U(-|b_up - b_lo|, |b_up - b_lo|)
2. Until a termination criterion is met (e.g. a maximum number of iterations has
   been performed), repeat:
      For each particle i = 1, ..., S do:
         Choose random numbers: r_p, r_g ~ U(0,1)
         For each dimension d = 1, ..., n do:
            Update the particle's velocity:
            v_{i,d} ← ω v_{i,d} + φ_p r_p (p_{i,d} - x_{i,d}) + φ_g r_g (g_d - x_{i,d})
         Update the particle's position: x_i ← x_i + v_i
         If f(x_i) < f(p_i) then:
            Update the particle's best known position: p_i ← x_i
            If f(p_i) < f(g), update the swarm's best known position: g ← p_i
3. Now g holds the best found solution.
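
As a concrete illustration of the steps above, the following is a minimal Python
sketch of the algorithm. The function name pso, the sphere test function, the swarm
size, and the coefficient values (omega, phi_p, phi_g) are illustrative assumptions,
not taken from the text.

import random

def pso(f, b_lo, b_up, n_dims, swarm_size=30, iterations=100,
        omega=0.7, phi_p=1.5, phi_g=1.5):
    # 1. Initialize particles with random positions and velocities (illustrative defaults).
    span = b_up - b_lo
    x = [[random.uniform(b_lo, b_up) for _ in range(n_dims)] for _ in range(swarm_size)]
    v = [[random.uniform(-span, span) for _ in range(n_dims)] for _ in range(swarm_size)]
    p = [xi[:] for xi in x]          # each particle's best known position (pbest)
    g = min(p, key=f)[:]             # swarm's best known position (gbest)

    # 2. Repeat until the termination criterion (here: a fixed iteration count) is met.
    for _ in range(iterations):
        for i in range(swarm_size):
            r_p, r_g = random.random(), random.random()
            for d in range(n_dims):
                # Velocity update: inertia + pull toward pbest + pull toward gbest.
                v[i][d] = (omega * v[i][d]
                           + phi_p * r_p * (p[i][d] - x[i][d])
                           + phi_g * r_g * (g[d] - x[i][d]))
            # Position update.
            x[i] = [x[i][d] + v[i][d] for d in range(n_dims)]
            if f(x[i]) < f(p[i]):
                p[i] = x[i][:]       # new personal best
                if f(p[i]) < f(g):
                    g = p[i][:]      # new global best

    # 3. g now holds the best found solution.
    return g

# Example usage: minimize the sphere function f(x) = sum(x_d^2) in 2 dimensions.
if __name__ == "__main__":
    sphere = lambda xs: sum(xd * xd for xd in xs)
    best = pso(sphere, b_lo=-5.0, b_up=5.0, n_dims=2)
    print("best position:", best, "value:", sphere(best))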
