
2010 Sixth International Conference on Natural Computation (ICNC 2010)

An Integrated Algorithm Based on Artificial Bee Colony and Particle Swarm Optimization
Xiaohu Shi1,2, Yanwen Li3, Haijun Li4, Renchu Guan1, Liupu Wang1 and Yanchun Liang1*

1College of Computer Science and Technology, Jilin University, Changchun 130012, Jilin, China
2State Key Lab. for Novel Software Technology, Nanjing University, Nanjing 210093, Jiangsu, China
3College of Computer, Northeast Normal University, Changchun 130117, China
4School of Computer Science, Yantai University, Yantai 264005, Shandong, China
*Corresponding author: Yanchun Liang (ycliang@jlu.edu.cn)

Abstract - Based on particle swarm optimization (PSO) and artificial bee colony (ABC), a novel hybrid swarm intelligence algorithm is developed in this paper. Two information exchanging processes are introduced to share valuable information mutually between the particle swarm and the bee colony. Numerical results show that the proposed method is effective and performs better than both PSO and ABC.

Keywords - particle swarm optimization; artificial bee colony; hybrid swarm intelligence

I. INTRODUCTION

Swarm Intelligence (SI) has become an emerging and interesting area in the field of optimization during the past decade. SI is defined as any attempt to design algorithms or distributed problem-solving devices inspired by the collective behavior of social insect colonies and other animal societies [1]. Considering immune cells as a swarm, and simulating the cooperative anti-virus mechanism of the population, de Castro and Von Zuben developed Artificial Immune Systems (AIS) [2]. Inspired by the food-searching behavior of ant colonies, Dorigo proposed Ant Colony Optimization (ACO) in 1992 [3]. ACO mimics the way real ants exchange information by means of pheromone trails during their search for food. Unlike ACO, an ant colony clustering algorithm was developed by simulating the ergates' behavior of collecting and throwing eggs or dead bodies into heaps in a distributed way [4]. Particle Swarm Optimization (PSO) was originally developed by Eberhart and Kennedy in 1995 [5], inspired by the metaphor of social interaction in bird flocks and fish schools. PSO begins with a randomly generated population of potential solutions and finds the optimum by having the swarm follow the best particle. PSO has become one of the most popular SI algorithms and has been successfully applied to many research fields, such as numerical optimization [6], the travelling salesman problem [7], artificial neural network construction [8], the job shop scheduling problem [9-10], clustering [11], vehicle routing [12], multi-objective optimization [13], system control [14], and so on.

The artificial bee colony (ABC) algorithm is a recently developed SI algorithm. Inspired by the foraging behavior of bee colonies, Karaboga proposed this new method for numerical optimization [15]. ABC has advantages in its memory, local search, and solution improvement mechanisms, so it achieves excellent performance on optimization problems [16-19] and has attracted much attention in the related research fields. Karaboga and Basturk extended ABC from unconstrained to constrained optimization problems by introducing a constraint handling method [20]; Chong et al. developed a bee colony optimization algorithm for the job shop scheduling problem [21]; Wong and Chong proposed an efficient ABC-based method for the traveling salesman problem [22]; Kang et al. applied the ABC method to structural inverse analysis [23]; Fathian et al. proposed a honey-bee mating optimization algorithm for clustering [24]. For an excellent review of ABC-based methods, one can refer to Ref. [25].

In this paper, a novel integrated SI algorithm is developed by synthesizing particle swarm optimization and artificial bee colony. The approach is initialized with two sub-systems, one of PSO and one of ABC, which are then run in parallel. While the two sub-systems are executing, two information exchanging processes share information between them. To validate the effectiveness of the proposed algorithm, it is applied to four benchmark optimization functions together with the PSO and ABC methods. Comparison results show that the integrated algorithm performs much better than both PSO and ABC.

II. BACKGROUND

A. Particle Swarm Optimization (PSO)
In this section, standard PSO is introduced briefly. Suppose the solution space of the problem is D-dimensional and the size of the particle swarm is m. Each particle is then defined by two D-dimensional vectors: one denotes the particle's location and the other its velocity. The location of a particle is a potential solution of the problem, so a fitness function must be defined according to the problem; the principle for its definition is that the higher the fitness, the better the solution. Each particle memorizes two items, namely its own historical best position and the best position of the whole population. Denote the location, velocity, and historical best position of the ith particle as Xi, Vi, and Pi, and the best position of the whole population as Pg, respectively. Here i = 1, 2, ..., m, and all four of the above vectors are D-dimensional.
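The bookkeeping described above can be written down directly. The sketch below is purely illustrative (the function names and the choice of fitness are our own, not from the paper): each particle carries a location, a velocity, and a personal best, and the swarm tracks a global best.

```python
import random

# Illustrative bookkeeping for standard PSO (Sec. II.A); names are our own.
# Each particle memorizes its location X_i, velocity V_i, and personal best P_i;
# the swarm memorizes the global best P_g. Convention: higher fitness is better.

def init_swarm(m, D, lo, hi, fitness):
    X = [[random.uniform(lo, hi) for _ in range(D)] for _ in range(m)]
    V = [[0.0] * D for _ in range(m)]   # velocities start at rest
    P = [x[:] for x in X]               # personal bests = initial positions
    Pg = max(P, key=fitness)[:]         # global best of the population
    return X, V, P, Pg

# Example: minimize the sphere function by maximizing fitness = -sum(x_i^2).
fit = lambda x: -sum(v * v for v in x)
X, V, P, Pg = init_swarm(m=5, D=3, lo=-100, hi=100, fitness=fit)
print(len(X), len(V), len(P), len(Pg))  # 5 5 5 3
```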

978-1-4244-5961-2/10/$26.00 ©2010 IEEE


The PSO approach begins with an initial particle population, whose locations and velocities are both randomly generated. Then the velocities and locations are updated according to the following two equations:

v_id = w v_id + c1 r1 (p_id - x_id) + c2 r2 (p_gd - x_id),   (1)
x_id = x_id + v_id,   (2)

where i = 1, 2, ..., m, d = 1, 2, ..., D, w is an inertia parameter, c1 and c2 are learning rates, r1 and r2 are random real values in the interval [0, 1], v_id ∈ [-v_max, v_max], and v_max is a designated value. When the fitness of the best population location reaches a designated value, or after a defined upper limit on the number of iterations, the program outputs the best solution and terminates.

B. Artificial Bee Colony (ABC)
The real bee colony is one of the natural societies with the most specialized social division of labor. The very simplified model of honey bee society that the ABC algorithm adopts consists of three kinds of bees: employed bees, onlooker bees, and scout bees [16]. The goal of the whole colony is to maximize the nectar amount. Employed bees exploit a previously explored nectar source and share information about it with the onlooker bees. Onlookers wait in the hive and decide which food source to exploit depending on the information shared by the employed bees. The responsibility of the scouts is to find new valuable food sources; they search the space near the hive randomly.

In the ABC algorithm, suppose again that the solution space of the problem is D-dimensional and that the numbers of employed bees and onlooker bees are both SN, which equals the number of food sources. A food source represents a possible solution, and its fitness can be computed accordingly. Each employed bee is associated with a food source. The bee associated with the ith food source searches for a new solution according to Eq. (3):

x'_id = x_id + φ_id (x_id - x_kd),   (3)

where i = 1, 2, ..., SN, d = 1, 2, ..., D, φ_id is a random real number in the range [-1, 1], and k is a randomly selected index in the colony. The new solution X'_i = {x'_i1, x'_i2, ..., x'_iD} is compared with the original X_i = {x_i1, x_i2, ..., x_iD}, and the better one is retained. Next, each onlooker bee selects a food source to exploit with the probability

p_i = fit_i / Σ_{j=1}^{SN} fit_j,   (4)

where fit_i is the fitness of the solution X_i. The onlooker bee then searches for a new solution at the selected food source site by Eq. (3), in the same way as the employed bees. After all the employed bees have exploited a new solution and the onlooker bees have been allocated food sources, if a source is found whose fitness has not been improved for a given number of steps (denoted by limit), it is abandoned, and the employed bee associated with it becomes a scout and makes a random search by Eq. (5):

x_id = x_d,min + r (x_d,max - x_d,min),   (5)

where r is a random real number in the range [0, 1], and x_d,min and x_d,max are the lower and upper bounds of the dth dimension of the problem space. The main steps of ABC are summarized below:
(1) Initialize the population
(2) Place the employed bees on their food sources
(3) Place the onlookers on the food sources depending on their nectar amounts
(4) Send the scouts to the search area to discover new food sources
(5) Memorize the best food source found so far
(6) If the requirements are met, output the best solution; otherwise go to step (2)

III. INTEGRATED ALGORITHM BASED ON ABC AND PSO

PSO and ABC are two outstanding swarm intelligence methods. In the PSO algorithm, a particle position is updated according to the experience of the particle itself and of the whole swarm, so the algorithm is very simple to apply and very fast. For the ABC method, the most significant features are the self-improvement of solutions and its local search ability. In this paper, we combine the merits of both PSO and ABC to obtain better performance than either original algorithm. Thus, an integrated algorithm based on ABC and PSO (IABAP) is proposed.

IABAP begins with two sub-systems of ABC and PSO and executes them in parallel. The main idea of the approach is to enhance the quality of the two sub-systems mutually by transferring better information between them. This is achieved by two information exchanging processes, called Information Exchanging Process1 and Information Exchanging Process2, respectively. The first forwards better information from the particle swarm to the bee colony, and the second does the reverse.

The bee colony receives information from the particle swarm through the scouts. During each iteration, the scouts, if any, can obtain information from the particle swarm with a designated probability. For such a scout, a food position is randomly selected from the particle swarm as its initial search point instead of being generated by Eq. (5), and particles with higher fitness values are more likely to be chosen as information sources. On the other hand, the particles obtain bee colony information by adjusting their velocity updating rule with a small given probability. In the conventional PSO algorithm, velocities are updated according to the particles' historical best positions, the best position of the whole swarm, and their current velocities. If a particle satisfies the given probability, Information Exchanging Process2 is applied to it: its velocity is updated with an additional item of information shared from the bee colony. The updating rule is written as

v_id = w v_id + c1 r1 (p_id - x_id) + c2 r2 (p_gd - x_id) + c3 r3 (p_ad - x_id),   (6)

where c3 is a learning rate, r3 is a random real value in the interval [0, 1], and P_a = {p_a1, p_a2, ..., p_aD} is a randomly selected food source from the ABC sub-system. As above, food sources with higher fitness values are preferred. Therefore, the main steps of the algorithm can be described as follows:
(1) Initialize the ABC and PSO sub-systems respectively.


(2) Execute the employed bees' search and the onlookers' selection and search processes in the bee colony.
(3) Execute the scouts' search process. The starting search points are determined according to a given probability: either they are randomly produced by Eq. (5) or they are obtained through Information Exchanging Process1.
(4) Execute the particles' search process. For each particle, if the small given probability is satisfied, Information Exchanging Process2 is adopted and the particle's velocity and position are updated according to Eq. (6) and Eq. (2); otherwise, they are updated according to Eq. (1) and Eq. (2).
(5) If the best individual in one of the two sub-systems satisfies the termination criterion, memorize the best solution as the final solution and stop; otherwise go to step (2).
The flow chart of IABAP is shown in Fig. 1.
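The steps above can be condensed into a runnable sketch. This is an illustrative implementation under our own simplifying assumptions, not the authors' reference code: minimization with fitness defined as 1/(1 + f), the Sphere function as the objective, one-dimension perturbation in Eq. (3) as in standard ABC, and fitness-proportional roulette selection for both exchange processes.

```python
import random

# Illustrative sketch of IABAP (not the authors' reference code).
# Assumptions: minimize f, fitness = 1/(1 + f); p_exch1 and p_exch2 are the
# two exchange probabilities (0.5 and 0.01 in Sec. IV of the paper).

def sphere(x):
    return sum(v * v for v in x)

def fitness(x):
    return 1.0 / (1.0 + sphere(x))

def roulette(pop):
    # fitness-proportional selection (Eq.(4); also used by both exchanges)
    fits = [fitness(x) for x in pop]
    r, acc = random.uniform(0.0, sum(fits)), 0.0
    for x, f in zip(pop, fits):
        acc += f
        if acc >= r:
            return x
    return pop[-1]

def iabap(D=5, SN=10, m=10, iters=500, lo=-100.0, hi=100.0,
          w=0.8, c1=1.2, c2=0.8, c3=1.0, limit=100,
          vmax=10.0, p_exch1=0.5, p_exch2=0.01):
    rand = random.uniform
    foods = [[rand(lo, hi) for _ in range(D)] for _ in range(SN)]
    trials = [0] * SN
    X = [[rand(lo, hi) for _ in range(D)] for _ in range(m)]
    V = [[0.0] * D for _ in range(m)]
    P = [x[:] for x in X]                       # personal bests
    g = min(P, key=sphere)[:]                   # swarm's global best

    def bee_move(i):
        # Eq.(3): perturb one randomly chosen dimension, keep the better source
        k = random.choice([j for j in range(SN) if j != i])
        d = random.randrange(D)
        new = foods[i][:]
        new[d] += rand(-1.0, 1.0) * (foods[i][d] - foods[k][d])
        if fitness(new) > fitness(foods[i]):
            foods[i], trials[i] = new, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(SN):                     # step (2): employed bees
            bee_move(i)
        for _ in range(SN):                     # step (2): onlookers via Eq.(4)
            bee_move(foods.index(roulette(foods)))
        for i in range(SN):                     # step (3): scouts + Process1
            if trials[i] >= limit:
                if random.random() < p_exch1:
                    foods[i] = roulette(X)[:]   # seed from the particle swarm
                else:
                    foods[i] = [rand(lo, hi) for _ in range(D)]   # Eq.(5)
                trials[i] = 0
        for i in range(m):                      # step (4): particles + Process2
            use_ex2 = random.random() < p_exch2
            pa = roulette(foods) if use_ex2 else None
            for d in range(D):
                V[i][d] = (w * V[i][d]
                           + c1 * random.random() * (P[i][d] - X[i][d])
                           + c2 * random.random() * (g[d] - X[i][d]))
                if use_ex2:                     # Eq.(6): extra bee-colony term
                    V[i][d] += c3 * random.random() * (pa[d] - X[i][d])
                V[i][d] = max(-vmax, min(vmax, V[i][d]))
                X[i][d] += V[i][d]
            if sphere(X[i]) < sphere(P[i]):
                P[i] = X[i][:]
        g = min(P, key=sphere)[:]               # step (5): memorize the best
    return min(P + foods, key=sphere)

best = iabap(D=3, iters=300)
print(sphere(best))  # typically very small
```

The parameter defaults above mirror the values reported in Sec. IV (w = 0.8, c1 = 1.2, c2 = 0.8, c3 = 1.0, limit = 100, exchange probabilities 0.5 and 0.01); population sizes and iteration counts are reduced for a quick run.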

Figure 1. Flow chart of IABAP. (The ABC and PSO sub-systems run in parallel; in each generation, probability conditions decide whether a scout's starting point comes from Information Exchanging Process1 and whether a particle's velocity update uses Information Exchanging Process2, until the termination criterion is met and the final solution is output.)

IV. NUMERICAL RESULTS

In the experiments, the results of the IABAP, ABC, and PSO algorithms are compared on four benchmark functions at five different dimensions from 10 to 200. All the experiments were executed on a PC with a 2.54 GHz processor and 4 GB of memory. Table I gives the detailed information on these four functions. In ABC, the food source number SN is set to 40, and limit is 100. The parameters of PSO are as follows: w is 0.8, c1 and c2 are 1.2 and 0.8, and the population size is 60. In IABAP, c3 is set to 1.0, the given probabilities of Information Exchanging Process1 and Process2 are 0.5 and 0.01, and the other parameters are set the same as those of ABC and PSO.

The comparison results for the four benchmark functions are listed in Tables II-V. Each experiment was repeated 50 times, and the mean values and standard deviations were calculated. In the tables, the best mean value among the three methods is listed in boldface. From the tables, it is easy to see that the proposed IABAP method performs best of the three algorithms. For the Sphere and Griewank functions, IABAP possesses all the best results from dimension 10 to dimension 200. For the Rosenbrock function, the ABC method performs best at dimension 30, and IABAP beats ABC and PSO at the other four dimensions. For the Rastrigin function, ABC shares the best result with IABAP at dimension 10, while IABAP holds the best results at the other four dimensions. As an example, Fig. 2 gives a comparison of the error evolution curves of a typical run on the Griewank function at dimension 200. From the tables and Fig. 2, it can be concluded that ABC and IABAP are much better than PSO overall, and IABAP performs best among all three methods.

V. CONCLUSIONS

Based on two recently developed swarm intelligence algorithms, namely particle swarm optimization and artificial bee colony, an integrated swarm intelligence approach is proposed. Two information exchanging processes are introduced for information communication between the particle swarm and the bee colony, which are expected to improve the performance of the algorithm. Numerical simulations are performed to examine the effectiveness of the proposed method: it is applied to four selected benchmark functions together with the PSO and ABC methods. Numerical results show that the proposed algorithm performs best on these four benchmark functions.

ACKNOWLEDGMENT

The authors are grateful for the support of the National Natural Science Foundation of China (60703025, 10872077), the Science-Technology Development Project of Jilin Province of China (20090152, 20080708), the National High-Technology Development Project (2009AA02Z307), and the Special Fund for Basic Scientific Research of Central Colleges, Jilin University.

REFERENCES

[1] Bonabeau E., Dorigo M., Theraulaz G. Swarm Intelligence: From Natural to Artificial Systems. Oxford University Press, NY, 1999.
[2] de Castro L.N., Von Zuben F.J. Artificial immune systems, Part I: Basic theory and applications. Technical Report RT DCA 01/99, FEEC/Unicamp, Brazil, 1999.
[3] Dorigo M. Optimization, Learning and Natural Algorithms (in Italian). PhD thesis, Dipartimento di Elettronica, Politecnico di Milano, Italy, 1992.
[4] Monmarché N. On data clustering with artificial ants. In: A.A. Freitas (Ed.), AAAI-99 and GECCO-99 Workshop on Data Mining with Evolutionary Algorithms: Research Directions, Orlando, Florida, 1999, pp. 23-26.
[5] Kennedy J., Eberhart R.C. Particle swarm optimization. Proceedings of the IEEE International Conference on Neural Networks, Perth, Australia, IEEE Service Center, Piscataway, NJ, 1995, (4): 1942-1948.


[6] Shi X.H., Liang Y.C., Lee H.P., Lu C., Wang Q.X. Particle swarm optimization-based algorithms for TSP and generalized TSP. Information Processing Letters, 2007, (103): 169-176.
[7] Shi X.H., Liang Y.C., Lee H.P., Lu C., Wang L.M. An improved GA and a novel PSO-GA-based hybrid algorithm. Information Processing Letters, 2005, 93(5): 255-261.
[8] Kiranyaz S., Ince K., Yildirim A., Gabbouj M. Evolutionary artificial neural networks by multi-dimensional particle swarm optimization. Neural Networks, 2009, 22(10): 1448-1462.
[9] Liu H.B., Abraham A., Hassanien A.E. Scheduling jobs on computational grids using a fuzzy particle swarm optimization algorithm. Future Generation Computer Systems, 2009, doi:10.1016/j.future.2009.05.022.
[10] Lin T.L., Horng S.J., Kao T.W., Chen Y.H., Run R.S., Chen R.J., Lai J.L., Kuo I.H. An efficient job-shop scheduling algorithm based on particle swarm optimization. Expert Systems with Applications, 2010, 37(3): 2629-2636.
[11] Yang F.Q., Sun T.L., Zhang C.H. An efficient hybrid data clustering method based on K-harmonic means and Particle Swarm Optimization. Expert Systems with Applications, 2009, 36(6): 9847-9852.
[12] Marinakis Y., Marinaki M. A hybrid genetic Particle Swarm Optimization Algorithm for the vehicle routing problem. Expert Systems with Applications, 2010, (37): 1446-1455.
[13] Wang Y.J., Yang Y.P. Particle swarm with equilibrium strategy of selection for multi-objective optimization. European Journal of Operational Research, 2010, 200(1): 187-197.
[14] Zamani M., Ghartemani M.K., Sadati N., Parniani M. Design of a fractional order PID controller for an AVR using particle swarm optimization. Control Engineering Practice, 2009, 17(12): 1380-1387.
[15] Karaboga D. An idea based on honey bee swarm for numerical optimization. Technical Report TR06, Erciyes University, Engineering Faculty, Computer Engineering Department, 2005.
[16] Basturk B., Karaboga D. An artificial bee colony (ABC) algorithm for numeric function optimization. IEEE Swarm Intelligence Symposium 2006, Indianapolis, Indiana, USA, May 2006.
[17] Karaboga D., Basturk B. A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm. Journal of Global Optimization, 2007, 39(3): 459-471.
[18] Karaboga D., Basturk B. On the performance of artificial bee colony (ABC) algorithm. Applied Soft Computing, 2008, 8(1): 687-697.
[19] Karaboga D., Akay B. A comparative study of Artificial Bee Colony algorithm. Applied Mathematics and Computation, 2009, 214(1): 108-132.
[20] Karaboga D., Basturk B. Artificial Bee Colony (ABC) optimization algorithm for solving constrained optimization problems. Lecture Notes in Computer Science, 2007, 8: 789-798.
[21] Chong C.S., Low M.Y.H., Sivakumar A.I., Gay K.L. A bee colony optimization algorithm to job shop scheduling. Proceedings of the Winter Simulation Conference, 2006, pp. 1954-1961.
[22] Wong L.P., Chong C.S. An efficient bee colony optimization algorithm for Traveling Salesman Problem using frequency-based pruning. Proceedings of the IEEE International Conference on Industrial Informatics (INDIN), 2009, pp. 775-782.
[23] Kang F., Li J.J., Xu Q. Structural inverse analysis by hybrid simplex artificial bee colony algorithms. Computers and Structures, 2009, 87: 861-870.
[24] Fathian M., Amiri B., Maroosi A. Application of honey-bee mating optimization algorithm on clustering. Applied Mathematics and Computation, 2007, 190: 1502-1513.
[25] Karaboga D., Akay B. A survey: algorithms simulating bee swarm intelligence. Artificial Intelligence Review, 2009, 31: 61-85.

TABLE I. TEST FUNCTIONS USED IN THE EXPERIMENTS

Function     Formula                                                          Range           Minimum value
Sphere       f(X) = Σ_{i=1}^{n} x_i^2                                         [-100, 100]     f(0) = 0
Rosenbrock   f(X) = Σ_{i=1}^{n-1} [100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2]     [-30, 30]       f(1) = 0
Griewank     f(X) = (1/4000) Σ_{i=1}^{n} x_i^2 - Π_{i=1}^{n} cos(x_i/√i) + 1  [-600, 600]     f(0) = 0
Rastrigin    f(X) = Σ_{i=1}^{n} [x_i^2 - 10 cos(2πx_i) + 10]                  [-5.12, 5.12]   f(0) = 0
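The four test functions in Table I can be transcribed directly; the sketch below is an illustrative transcription (function and variable names are our own), with the global minima from the last column of the table as a check.

```python
import math

# Illustrative transcriptions of the four benchmark functions in Table I.
def sphere(x):
    return sum(v * v for v in x)

def rosenbrock(x):
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1.0) ** 2
               for i in range(len(x) - 1))

def griewank(x):
    s = sum(v * v for v in x) / 4000.0
    p = math.prod(math.cos(v / math.sqrt(i + 1)) for i, v in enumerate(x))
    return s - p + 1.0

def rastrigin(x):
    return sum(v * v - 10.0 * math.cos(2.0 * math.pi * v) + 10.0 for v in x)

# Global minima listed in Table I: f(0), f(1), f(0), f(0) are all zero.
print(sphere([0.0] * 5), rosenbrock([1.0] * 5),
      griewank([0.0] * 5), rastrigin([0.0] * 5))
```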

TABLE II. COMPARISON RESULTS FOR THE SPHERE FUNCTION

                         Dimension
Results on Sphere   10          30          50          100         200
PSO     Mean        1.10E-05    4.46E-15    4.82E-13    2.80E-06    1.65E-01
        Std         1.79E-05    1.34E-14    1.55E-12    1.1E-05     0.289588
ABC     Mean        6.41E-17    6.46E-16    1.67E-15    9.88E-15    1.13E-13
        Std         1.48E-17    8.528E-17   3.03E-16    4.49E-15    1.06E-13
IABAP   Mean        4.80E-17    5.38E-16    1.34E-15    4.41E-15    2.23E-14
        Std         1.11E-17    6.48E-17    1.79E-16    8.84E-16    9.04E-15

TABLE III. COMPARISON RESULTS FOR THE ROSENBROCK FUNCTION

                           Dimension
Results on Rosenbrock   10          30          50          100         200
PSO     Mean            1.03E+01    7.99E-01    1.31E+00    1.13E+02    5.19E+02
        Std             29.83842    1.612401    2.089912    43.75557    116.5106
ABC     Mean            8.45E-02    1.06E-01    2.68E-01    1.21E+00    3.34E+00
        Std             0.078007    0.141269    0.379678    2.493321    4.379442
IABAP   Mean            8.00E-02    1.14E-01    1.17E-01    1.44E-01    4.10E-01
        Std             0.078156    0.214262    0.199364    0.247321    0.472663

TABLE IV. COMPARISON RESULTS FOR THE GRIEWANK FUNCTION

                          Dimension
Results on Griewank   10          30          50          100         200
PSO     Mean          2.63E-09    3.34E-15    2.88E-13    1.02E-07    3.56E-05
        Std           2.73E-09    4.07E-15    1.19E-12    6.50E-07    5.25E-05
ABC     Mean          8.88E-17    6.57E-16    1.76E-15    9.69E-15    1.19E-13
        Std           4.49E-17    1.42E-16    2.85E-16    4.91E-15    1.23E-13
IABAP   Mean          3.11E-17    4.73E-16    9.99E-16    2.22E-15    1.57E-14
        Std           5.04E-17    4.92E-17    1.96E-16    3.17E-16    8.50E-15

TABLE V. COMPARISON RESULTS FOR THE RASTRIGIN FUNCTION

                           Dimension
Results on Rastrigin   10          30          50          100         200
PSO     Mean           1.12E+02    9.57E+02    1.95E+03    4.55E+03    8.65E+03
        Std            61.02547    341.4607    442.7091    892.5579    1794.175
ABC     Mean           0.00E+00    3.69E-15    1.19E-11    2.00E-02    1.12E+00
        Std            0           4.72E-15    4.17E-11    0.140701    1.020167
IABAP   Mean           0.00E+00    1.42E-16    3.43E-14    3.90E-10    1.53E-04
        Std            0           4.87E-16    4.66E-14    1.6E-09     0.001043
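The statistics in Tables II-V follow the protocol stated in Sec. IV: each experiment is repeated 50 times and the mean and standard deviation of the final error are reported. This can be sketched as follows; `run_once` is our own placeholder standing in for one complete run of PSO, ABC, or IABAP, and we use the sample standard deviation (the paper does not say which variant was used).

```python
import random
import statistics

# Illustrative experimental protocol for Tables II-V: repeat each experiment
# 50 times and report the mean and standard deviation of the final error.
def run_once(dim):
    # placeholder: pretend an optimizer run returns a small final error
    return random.random() * 1e-6

def repeat_experiment(dim, runs=50):
    errors = [run_once(dim) for _ in range(runs)]
    return statistics.mean(errors), statistics.stdev(errors)

mean, std = repeat_experiment(dim=200)
print(mean, std)
```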

Figure 2. Error evolution curves (log10 of the error versus iteration number, 0 to 20000) of a typical run on the Griewank function at dimension 200, comparing PSO, ABC, and IABAP.

