

Paper:

Evolving Particle Swarm Optimization Implemented by a Genetic Algorithm


Jenn-Long Liu
Department of Information Management, I-Shou University 1, Section 1, Hsueh-Cheng Rd., Ta-Hsu Hsiang, Kaohsiung County, Taiwan 840, Taiwan E-mail: jlliu@isu.edu.tw [Received April 15, 2007; accepted September 22, 2007]

Particle swarm optimization (PSO) is a promising evolutionary approach in which each particle moves over the search space with a velocity that is adjusted according to the flying experiences of the particle and its neighbors, so that the particle flies toward better and better search areas over the course of the search process. Although PSO is effective in solving global optimization problems, several crucial user-input parameters, such as the cognitive and social learning rates, affect the performance of the algorithm because the search process of a PSO algorithm is nonlinear and complex. Consequently, a PSO with well-selected parameter settings is likely to yield good performance. This work develops an evolving PSO that uses Clerc's PSO to evaluate the fitness of the objective function and a genetic algorithm (GA) to evolve the optimal design parameters for use in the PSO. The crucial design parameters studied herein include the cognitive and social learning rates as well as the constriction factor of Clerc's PSO. Several benchmark cases are run to generalize a set of optimal parameters via the evolving PSO. Furthermore, the resulting parameters are applied to the engineering optimization of a pressure vessel design.

Keywords: evolving PSO, genetic algorithm, cognitive and social learning rates

1. Introduction
Several well-known representative evolutionary computation techniques have been developed to cope with global optimization problems. These techniques include genetic algorithms (GAs) [1], genetic programming (GP) [2], evolutionary programming (EP) [3], evolution strategies (ES) [4], the ant colony system (ACS) [5], and particle swarm optimization (PSO) [6]. Among these methods, the ACS and PSO are inspired by the cooperative social behavior of agents. The ACS uses artificial ant pheromones, a simple yet highly effective biological mechanism, to look for the shortest path between a food source and the ants' nest. The PSO creates a population of flying particles, each of which is

able to adjust its velocity, according to its own flying experience and the group's best experience, to locate a new position. Therefore, each particle in the PSO can be regarded as a cooperating agent. The original PSO algorithm, proposed by Kennedy and Eberhart in 1995 [6], is based on visualizing the movement of organisms in a flock of birds; it models the individual behaviors of exploration and exploitation mathematically, in the hope of simulating the search behavior of a swarm seen in nature. In the original PSO, particles fly over the search space and memorize the best position encountered. Each particle adjusts its velocity vector, based on its previous velocity vector and the influence of its local best solution and the group's best solution, and then moves to its next position. The momentum of each particle tends to keep it from being trapped at local optima, yet because each particle considers both its own memory and that of its neighbors, the entire swarm tends to converge on the global solution. As shown in Ref. [7], PSO has compared favorably to GAs. Although PSO obtains the global or near-global optima effectively in a variety of unimodal or multimodal functions, many user-input parameters are associated with the convergence rate and the global solution of the particle swarm optimizer. The parameters for running a PSO include the cognitive and social learning rates, population size, neighborhood size, asynchronous or synchronous updates, and additional damping factors for the velocity vector. Zhang et al. studied the effects of the magnitude of the sum of the cognitive and social learning rates, the velocity constraint, and the population size on the performance of Clerc's PSO [8]. The parameters were incremented over different intervals.
They indicated that choosing the sum of the cognitive and social learning rates, the first important parameter, in the range [4.05, 4.3] is generally reasonable, and that a population size of 30 may be a good choice. However, they assumed the cognitive learning rate to be equal to the social learning rate, which still needs further investigation because the ratio of the cognitive and social learning rates plays an important role in the algorithmic efficiency of the PSO. Besides, Carlisle and Dozier proposed an off-the-shelf PSO with a set of parameter settings [9]. They distilled the best general PSO parameter settings with a

Journal of Advanced Computational Intelligence and Intelligent Informatics, Vol.12 No.3, 2008


population size of 30, a global neighborhood, and updates applied asynchronously. Moreover, they suggested that appropriate cognitive and social learning rates are 2.8 and 1.3, respectively. Instead of the parameter study procedures of the two aforementioned references [8, 9], which incremented the parameters over different intervals, this work proposes an evolving PSO based on the evolutionary computation of a GA to evolve the optimal set of the cognitive and social learning rates as well as the constriction factor intelligently. From the results of benchmark cases and the practical optimization problem of a pressure vessel design, the proposed evolving PSO significantly enhances the efficiency of Clerc's PSO.
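The evolving-PSO idea described above, a GA whose individuals encode PSO parameter settings and whose fitness values are obtained by actually running a PSO, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the sphere objective, all population sizes and iteration counts, and the function names (`run_pso`, `evolve_parameters`) are our own assumptions, and only the cognitive and social learning rates are evolved here.

```python
import math
import random

rng = random.Random(1)

def sphere(x):
    return sum(xi * xi for xi in x)

def run_pso(c1, c2, dim=2, n=15, iters=60):
    """Short constriction-type PSO run; returns the best objective value
    found. This serves as the (noisy) fitness of a parameter pair (c1, c2)."""
    phi = c1 + c2
    # Constriction factor; fall back to the common value 0.729 when
    # phi <= 4 (a simplification for this sketch).
    K = (2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))
         if phi > 4 else 0.729)
    x = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    v = [[0.0] * dim for _ in range(n)]
    pbest, pval = [p[:] for p in x], [sphere(p) for p in x]
    g = min(range(n), key=pval.__getitem__)
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                v[i][d] = K * (v[i][d]
                               + c1 * rng.random() * (pbest[i][d] - x[i][d])
                               + c2 * rng.random() * (gbest[d] - x[i][d]))
                x[i][d] += v[i][d]
            f = sphere(x[i])
            if f < pval[i]:
                pbest[i], pval[i] = x[i][:], f
                if f < gval:
                    gbest, gval = x[i][:], f
    return gval

def evolve_parameters(pop_size=8, generations=5):
    """Toy GA over (c1, c2): truncation selection, blend crossover,
    Gaussian mutation. Fitness of an individual = best value reached
    by a short PSO run using that individual's learning rates."""
    pop = [(rng.uniform(0.5, 4.0), rng.uniform(0.5, 4.0))
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda p: run_pso(*p))
        elite = scored[: pop_size // 2]           # keep the better half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            w = rng.random()
            # Blend crossover plus small Gaussian mutation, kept positive.
            child = tuple(max(0.1, w * ai + (1 - w) * bi + rng.gauss(0, 0.1))
                          for ai, bi in zip(a, b))
            children.append(child)
        pop = elite + children
    return min(pop, key=lambda p: run_pso(*p))
```

The outer GA never touches the objective function directly; it only sees how well each (c1, c2) pair makes the inner swarm perform, which is the core of the evolving-PSO design.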

2. Particle Swarm Optimizers

2.1. Original PSO of Kennedy and Eberhart
Suppose that the i-th particle is flying over a hyperplane space, with its position and velocity denoted by x_i and v_i. The best previous position of the i-th particle is recorded and represented as pbest_i, and the best position found among all the particles (N_particle of them) is represented by the symbol gbest. Consequently, the next flying velocity and position of the particle are updated at iteration k+1 using the following heuristic equations:

    v_i^(k+1) = v_i^k + c1*rand1*(pbest_i - x_i^k) + c2*rand2*(gbest - x_i^k) . . . (1)

    x_i^(k+1) = x_i^k + v_i^(k+1),   i = 1, 2, ..., N_particle . . . (2)

where c1 and c2 are the cognitive and social learning rates, respectively. These two parameters control the relative importance of the memory (position) of the particle itself versus the memory of the neighborhood, and are often both set to the same value to give each component equal weight. The variables rand1 and rand2 are two random functions uniformly distributed in the range [0,1]. As shown in Eq. (1), the two random values are generated independently, and the velocity of the particle is updated according to its current position, its previous best position, and the previous best position of its neighbors. After the velocity of the particle is updated from Eq. (1), the position is updated by adding the velocity vector to the current position to locate the next position. The stability and convergence of the algorithm have been analyzed theoretically by Clerc and Kennedy [10], and using dynamic system theory by Trelea [11].

2.2. Modified PSO of Shi and Eberhart
The original PSO of Kennedy and Eberhart is effective in determining optimal solutions in static environments, but it suffered from poor performance in locating a changing extremum. It was also necessary to impose a maximum velocity Vmax to prevent the particles from exploding, because no mechanism existed for controlling the velocity of a particle. In 1998, Shi and Eberhart showed that PSO searches wide areas effectively, but tends to lack local search precision [12]. They introduced a control parameter called the inertia weight, w, to damp the velocities over time, allowing the swarm to converge more accurately and efficiently [13]. The modified PSO velocity update is reformulated as follows:

    v_i^(k+1) = w*v_i^k + c1*rand1*(pbest_i - x_i^k) + c2*rand2*(gbest - x_i^k) . . . (3)

Equation (3) represents a dynamically adapting formulation for velocity, resulting in better fine-tuning ability. Inspection of Eq. (3) reveals that a large inertia weight facilitates global exploration while a small value facilitates local search. Consequently, a dynamically adjustable formulation for the inertia weight is suitable for achieving a balance between global and local exploration, and thus faster search results. By introducing a linearly decreasing inertia weight into the original version of PSO, the performance of PSO has been significantly improved through parameter studies of the inertia weight [13, 14].

2.3. Clerc's PSO
In 1999, Clerc proposed the use of a constriction factor, K, that improves the ability of the original PSO to constrain and control velocities [15]. Later, Eberhart and Shi found that K, combined with constraints on the maximum allowable velocity vector (Vmax), significantly improved the PSO performance [16]. Also, when using the PSO with a constriction factor, setting Vmax to Xmax on each dimension is the best approach. The constriction coefficient K implements a velocity control, effectively eliminating the tendency of some particles to spiral into ever-increasing velocity oscillations. The formulation of K is expressed as follows:

    K = 2 / |2 - phi - sqrt(phi^2 - 4*phi)| . . . (4)

where phi = c1 + c2 and phi > 4. The Kennedy and Eberhart original PSO velocity update then becomes

    v_i^(k+1) = K*[v_i^k + c1*rand1*(pbest_i - x_i^k) + c2*rand2*(gbest - x_i^k)] . . . (5)

Clearly, the constriction factor K in Eq. (5) can be seen as a damping factor that controls the magnitude of the flying velocity of a particle. From the experiments in the literature, Clerc's PSO has the potential to effectively avoid particles being trapped in local optima while possessing a fast convergence capability, and it was shown to have superior performance to the standard and modified PSOs. As shown in Eq. (4), the value of phi, defined as the sum of the cognitive and social learning rates, highly affects the constriction factor K, and is thus a very important parameter for achieving a good PSO with high performance. In general, when Clerc's constriction PSO is used, the common value of phi is set to 4.1 and the constriction factor K is approximately 0.729. This is equivalent to the Shi and Eberhart modified PSO
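As a concrete illustration of Eqs. (4) and (5), the following sketch implements the constriction-factor PSO on a simple test function. It is a minimal reading of the update rules above, not the paper's code; the `sphere`-style objective, the function names, and all parameter defaults are our own assumptions.

```python
import math
import random

def constriction_factor(phi):
    # Eq. (4): K = 2 / |2 - phi - sqrt(phi^2 - 4*phi)|, valid for phi > 4.
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

def clerc_pso(objective, dim, bounds, c1=2.05, c2=2.05,
              n_particles=30, iterations=200, seed=0):
    """Minimize `objective` with the constriction-factor PSO of Eqs. (4)-(5)."""
    rng = random.Random(seed)
    K = constriction_factor(c1 + c2)   # phi = c1 + c2 must exceed 4
    lo, hi = bounds
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]
    pbest_val = [objective(xi) for xi in x]
    g = min(range(n_particles), key=pbest_val.__getitem__)
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iterations):
        for i in range(n_particles):
            for d in range(dim):
                # Eq. (5): velocity update damped by the constriction factor K.
                v[i][d] = K * (v[i][d]
                               + c1 * rng.random() * (pbest[i][d] - x[i][d])
                               + c2 * rng.random() * (gbest[d] - x[i][d]))
                x[i][d] += v[i][d]          # Eq. (2): position update
            val = objective(x[i])
            if val < pbest_val[i]:          # update personal best
                pbest[i], pbest_val[i] = x[i][:], val
                if val < gbest_val:         # update global best
                    gbest, gbest_val = x[i][:], val
    return gbest, gbest_val
```

With the common setting c1 = c2 = 2.05, phi = 4.1 and `constriction_factor(4.1)` evaluates to approximately 0.7298, matching the K of roughly 0.729 quoted above.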

