Abstract—The artificial bee colony (ABC) algorithm is a relatively new optimization technique which has been shown to be competitive with other population-based algorithms. However, ABC has a shortcoming in its solution search equation, which is good at exploration but poor at exploitation. To address this issue, we first propose an improved ABC method called CABC, in which a modified search equation is applied to generate a candidate solution and thereby improve the search ability of ABC. Furthermore, we use the orthogonal experimental design (OED) to form an orthogonal learning (OL) strategy for variant ABCs to discover more useful information from the search experiences. Owing to OED's ability to sample a small number of well-representative combinations for testing, the OL strategy can construct a more promising and efficient candidate solution. In this paper, the OL strategy is applied to three versions of ABC, i.e., the standard ABC, global-best-guided ABC (GABC), and CABC, which yields OABC, OGABC, and OCABC, respectively. The experimental results on a set of 22 benchmark functions demonstrate the effectiveness and efficiency of the modified search equation and the OL strategy. The comparisons with some other ABCs and several state-of-the-art algorithms show that the proposed algorithms significantly improve the performance of ABC. Moreover, OCABC offers the highest solution quality, fastest global convergence, and strongest robustness among all the contenders on almost all the test functions.

Index Terms—Artificial bee colony (ABC) algorithm, orthogonal experimental design (OED), orthogonal learning (OL), search equation.

I. INTRODUCTION

OPTIMIZATION techniques play a very important role in engineering design, operational research, information science, and related areas. They can be divided into two categories: derivative-based methods and derivative-free methods. For a variety of reasons, there have always been many real-world problems where derivatives are unavailable or unreliable. Thus, as an important branch of derivative-free methods, evolutionary algorithms (EAs) have shown considerable success in solving optimization problems characterized as nonconvex, discontinuous, nondifferentiable, and so on, and have attracted more and more attention in recent years. The most prominent EAs proposed in the literature are the genetic algorithm (GA) [1], particle swarm optimization (PSO) [2], differential evolution (DE) [3], ant colony optimization [4], biogeography-based optimization [5], the artificial bee colony (ABC) algorithm [6], and so on.

In this paper, we concentrate on ABC, developed by Karaboga [6] in 2005 based on simulating the foraging behavior of the honeybee swarm. Numerical comparisons demonstrated that the performance of ABC is competitive with that of other population-based algorithms, with the advantage of employing fewer control parameters [7]–[10]. Due to its simplicity and ease of implementation, ABC has captured much attention and has been applied to solve many practical optimization problems [11]–[16] since its invention.

However, similar to other EAs, ABC also suffers from poor convergence. The reasons are as follows. It is well known that both exploration and exploitation are necessary for EAs. In practice, the two aspects contradict each other, so in order to achieve good optimization performance they must be well balanced. While the solution search equation of ABC, which is used to generate new candidate solutions based on the information of previous solutions, is good at exploration, it is poor at exploitation [17], which results in poor convergence.

To improve the performance of ABC, one active research trend is to study its search equation. Until now, various search equations have been suggested, such as [17]–[23]. The most representative one is that inspired by PSO. Zhu and Kwong [17] proposed a global best (gbest)-guided ABC (GABC) by incorporating the information of the gbest solution into the solution search equation of ABC to improve the exploitation. However, like PSO, it may cause an "oscillation" phenomenon because the guidance of the two terms in the search equation may point in opposite directions, which weakens the search ability of the algorithm and slows convergence. We note, however, that a well-designed search equation is usually beneficial to the performance of the algorithm. Hence, in this paper, we randomly choose a solution instead of the current solution as the base of a modified search equation, which can bring in more information and produce a promising candidate solution that balances exploration and exploitation. We use the modified search equation in place of the old one in ABC and obtain a new algorithm named CABC.

On the other hand, the search equation of ABC, which acts much like a blind mutation operator searching a randomly selected dimension of the solution vector, lacks an effective learning mechanism. This is because one individual may have good values on some dimensions of the solution vector while another individual may have good values on some other dimensions. At the same time, we note that the orthogonal experimental design (OED) offers an ability to discover the best combination of levels for different factors with a reasonably small number of experimental samples [32]. Therefore, in this paper, the OED is used to form an orthogonal learning (OL) strategy for ABC to discover and preserve useful information from the search experiences. Owing to the OED's orthogonal test ability and prediction ability, the OL strategy can bring better learning efficiency to ABC, which results in faster convergence speed and better global optimization performance. This learning strategy is applicable to ABC, GABC, and CABC; accordingly, the OL strategy-based ABCs are termed OABC, OGABC, and OCABC, respectively. Moreover, the experimental results on a set of benchmark functions demonstrate the effectiveness of the modified search equation and the OL strategy in solving complex numerical optimization problems when compared with other algorithms.

The rest of this paper is organized as follows. In Section II, the standard ABC algorithm is presented, and the research works on improved ABCs are reviewed. In Section III, the OCABC algorithm is proposed based on the new solution search equation and the OL strategy. Section IV presents and discusses the experimental results. Finally, the conclusion is drawn in Section V.

Manuscript received December 29, 2011; revised April 28, 2012 and July 19, 2012; accepted September 27, 2012. Date of publication October 18, 2012; date of current version May 10, 2013. This work was supported in part by the National Nature Science Foundation of China under Grants 60974082 and 11126287 and in part by the Fundamental Research Funds for the Central Universities under Grant K5051270002. This paper was recommended by Associate Editor H. Takagi.

W. Gao and S. Liu are with Xidian University, Xi'an 710071, China (e-mail: gaoweifeng2004@126.com; liusanyang@126.com).

L. Huang is with the China University of Petroleum, Qingdao 266580, China (e-mail: huangzxp@yahoo.cn).

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TSMCB.2012.2222373

Each food source is initialized by

x_{i,j} = x_{min,j} + rand(0, 1)(x_{max,j} − x_{min,j})    (2.1)

where i = 1, 2, ..., SN, j = 1, 2, ..., D, and D is the number of optimization parameters; x_{min,j} and x_{max,j} are the lower and upper bounds for dimension j, respectively.

After the initialization, the population of the food sources (solutions) is subjected to repeated cycles of the search processes of the employed bees, onlookers, and scouts. Each employed bee always remembers its previous best position and produces a new position within its neighborhood in its memory. If the new food source has equal or better quality than the old source, the old source is replaced by the new source. Otherwise, the old source is retained. After all employed bees finish their search process, they will share the information about nectar amounts and positions of food sources with onlookers. Each onlooker chooses a food source depending on the probability value associated with the food source and searches the area within its neighborhood to generate a new candidate solution; then, as in the case of the employed bee, the greedy selection is applied again. When the nectar of a food source is abandoned, the corresponding employed bee becomes a scout. The scout will randomly generate a new food source to replace the abandoned position.

An onlooker bee chooses a food source depending on the probability value p_i associated with that food source, where

p_i = fit_i / ( Σ_{j=1}^{SN} fit_j )    (2.2)
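The foraging cycle described above can be sketched compactly in Python. This is an illustrative sketch, not the paper's implementation: the function and variable names are ours, the fitness transform shown is the one commonly used with ABC for minimization (an assumption here), and the one-dimension perturbation stands in for the paper's own search equations, which are given later.

```python
import random

def abc_cycle(population, f, limit, trials, bounds):
    """One cycle of a standard ABC for minimization (illustrative sketch).

    population: list of solution vectors; f: objective to minimize;
    trials: per-source stagnation counters used by the scout phase;
    bounds: (lower, upper) bound vectors, as in (2.1).
    """
    SN, D = len(population), len(population[0])

    def fitness(x):
        # Common ABC fitness transform for minimization (an assumption here).
        fx = f(x)
        return 1.0 / (1.0 + fx) if fx >= 0 else 1.0 + abs(fx)

    def neighbor(i):
        # Perturb one randomly chosen dimension using a random partner k != i.
        k = random.choice([s for s in range(SN) if s != i])
        j = random.randrange(D)
        v = population[i][:]
        v[j] += random.uniform(-1, 1) * (population[i][j] - population[k][j])
        return v

    def greedy(i, v):
        # Keep the better of the old source X_i and the candidate V_i.
        if fitness(v) >= fitness(population[i]):
            population[i], trials[i] = v, 0
        else:
            trials[i] += 1

    # Employed-bee phase: every food source is searched once.
    for i in range(SN):
        greedy(i, neighbor(i))

    # Onlooker-bee phase: probabilities (2.2), p_i = fit_i / sum_j fit_j,
    # are computed once after the employed-bee phase, then sampled SN times.
    fits = [fitness(x) for x in population]
    total = sum(fits)
    for _ in range(SN):
        i = random.choices(range(SN), weights=[ft / total for ft in fits])[0]
        greedy(i, neighbor(i))

    # Scout phase: abandon any source whose counter exceeded `limit`.
    lower, upper = bounds
    for i in range(SN):
        if trials[i] > limit:
            population[i] = [random.uniform(lower[j], upper[j]) for j in range(D)]
            trials[i] = 0
```

Because the greedy selection never accepts a worse source, each source's objective value is nonincreasing between scout resets, which is the exploitation mechanism the paper argues is too weak on its own.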
solution search equation to improve the exploitation. The experimental results show that GABC can outperform ABC in most experiments. Banharnsakun et al. [18] presented a modified search equation for the onlooker bees. In their method, the best feasible solutions found so far are shared globally among the entire population. Thus, the new candidate solutions are more likely to be close to the current best solution. Li et al. [19] proposed an improved ABC by introducing not only the best solutions found so far but also the inertia weight and acceleration coefficients to modify the search process. Tsai et al. [20] introduced the concept of universal gravitation into the consideration of the attraction between the employed bees and the onlooker bees and proposed the interactive ABC. Coelho and Alotto [21] developed a novel alternative search equation in which a parameter is responsible for the balance between the Gaussian and the uniform distribution. Inspired by DE, Gao and Liu [22] proposed two improved solution search equations. Furthermore, in order to achieve a better balance between the exploration and exploitation of the two different solution search equations, they used a selective probability P to control the frequency of introducing the two different solution search equations and obtained a new search mechanism. Alam et al. [23] introduced an adaptation scheme of the mutation step size for the solution search equation of ABC to find a suitable scaling factor.

2) Hybridization of ABC with other operators [24]–[31]. For example, Basturk and Karaboga [24] proposed a modified ABC by controlling the frequency of perturbation and introducing the ratio of the variance operator. Kang et al. [25] developed a hybrid simplex ABC which combines the Nelder–Mead simplex method with ABC. Kang et al. [26] used Rosenbrock's rotational direction method to implement the exploitation phase and proposed the Rosenbrock ABC. Alatas [27] introduced chaotic maps into the initialization and chaotic search into the scout bee phase and proposed the chaotic ABC. Zhao et al. [28] presented a hybrid swarm intelligent approach. The main idea of the approach is to obtain the parallel computation merit of GA and the self-improvement merit of ABC by sharing information between the GA population and the bee colony. Duan et al. [29] adopted ABC to increase the local search capacity as well as the randomness of the populations. In this way, the improved quantum EA can jump out of premature convergence and find the optimal value. Shi et al. [30] developed a novel hybrid swarm intelligent algorithm. In their method, two information exchanging processes are introduced to share valuable information mutually between the particle swarm and the bee colony. Sumpavakup et al. [31] proposed a cultural-based bee colony algorithm. The cultural algorithm operates in two spaces, i.e., the population space and the belief space. In this hybrid approach, ABC works in the population space.

It is necessary to emphasize that our work falls in both of the categories.
1014 IEEE TRANSACTIONS ON CYBERNETICS, VOL. 43, NO. 3, JUNE 2013
TABLE I
Result comparisons of IABC1, IABC2, and CABC on 30-dimensional functions f1–f20 and 100-dimensional functions f21 and f22 with 100 000 function evaluations (FES)
TABLE IV
FA of the OED method
2) FA: The FA can evaluate the effects of individual factors on the experimental result, rank the most effective factors, and determine the best level for each factor. The FA is based on the experimental results of all the M cases of the OA. The FA results are shown in Table IV, and the process is described as follows.

Let f_m denote the experimental result of the mth (m = 1, ..., M) combination and S_{nq} denote the effect of the qth (q = 1, ..., Q) level in the nth (n = A, B, C) factor

S_{nq} = ( Σ_{m=1}^{M} f_m × z_{mnq} ) / ( Σ_{m=1}^{M} z_{mnq} )    (3.6)

where z_{mnq} is 1 if the level of the nth (n = A, B, C) factor of the mth (m = 1, ..., 9) combination is q (q = 1, ..., 3); otherwise, z_{mnq} is 0. In this way, the effect of each level on each factor can be calculated and compared, as shown in Table IV. For example, when we calculate the effect of level 1 on factor A (A1), the experimental results of C1, C2, and C3 are summed up in (3.6) [32] because only these combinations are involved in A1. Then, the sum is divided by the number of such combinations (3 in this case) to yield S_{nq} (S_{A1} in this case). After all the S_{nq}'s are calculated, the best level of each factor can be determined by selecting the level of each factor that provides the highest-quality S_{nq}. As the example shown in Table IV is a maximization problem, the larger the S_{nq}, the better the qth level on factor n is; for minimization, the reverse holds. Thus, the best combination (A3, B2, C2) can be obtained.

3) OL: Owing to the poor learning mechanism in ABC, it does not carry out a systematic search. We use the OED to form an OL strategy for ABC to construct an efficient and promising candidate solution. At the same time, in order to add more variation to the search in our framework, the following formulation is used to generate a transmission vector T_i which will take part in the OL:

T_i = X_k + rand(0, 1)(X_best − X_k)    (3.7)

where rand(0, 1) denotes a uniformly distributed random number in (0, 1), k is an integer uniformly chosen from the range [1, SN] and also different from i, and X_best is the individual with the best fitness in the current population.

The OL is used to combine the information of T_i and X_i to form a better candidate solution V_i. However, if the OL were applied to every pair of T_i and X_i, the number of function evaluations would be SN · (M + 1) at each generation when L_M(Q^N) is used, whereas the standard ABC spends only SN function evaluations. It is therefore unwise to apply the OL to every pair of T_i and X_i. Hence, in order to save computational cost and keep the implementation simple, the OL is used only once at each generation. By doing this, the overhead of the OL in preparing experiments at each generation is significantly reduced. How to use the OL to generate a candidate solution V_i will be described in the next section.

C. Algorithmic Framework

It is important to point out that, in this paper, we intend to make use of the OL to improve the search ability of the standard ABC, GABC, and CABC. As an example of this, an OL strategy-based CABC (OCABC), which combines CABC with OL, is presented in Fig. 5. The main steps of the algorithm are summarized as follows. After the initialization, the algorithm repeats the cycle of the search processes of the employed bees, onlooker bees, and scout bees. In the employed bee phase, a randomly selected individual from the population uses the OL to construct a candidate solution, while the other individuals employ (3.2) to generate a candidate solution. A greedy selection is applied between X_i and V_i, and the better one is selected. After all the employed bees complete their searches, the probability values p_i (i = 1, 2, ..., SN) are calculated by (2.2). Next, the algorithm enters the onlooker phase. Each onlooker bee chooses a food source site with a probability related to its nectar amount and produces a candidate solution by (3.2). Moreover, the greedy selection is again applied between X_i and V_i, and the better one is selected. Finally, the scout process is executed. Then, the algorithm returns to the employed bee phase and repeats the aforementioned steps until a specified stop condition is met. Accordingly, ABC and GABC combined with OL are called OABC and OGABC, respectively.

Here, the construction process of a candidate solution V_i by the OL is described in the following six steps.

Step 1) Generate an L_M(2^N) OA where M = 2^⌈log2(D+1)⌉; a number of OAs can be found at http://www2.research.att.com/~njas/oadir/.
Step 2) According to the OA, make up M tested solutions Z_j (j = 1, ..., M) by selecting the corresponding value from T_i or X_i.
Step 3) Evaluate each tested solution Z_j (j = 1, ..., M), and record the best (with best fitness) solution Z_b.
Step 4) Calculate the effect of each level on each factor, and determine the best level for each factor using (3.6).
Step 5) Derive a predictive solution Z_p with the levels determined in Step 4), and evaluate Z_p.
Step 6) Compare f(Z_b) with f(Z_p); the better solution is used as the vector V_i.

From the aforementioned analysis, it can be seen that a better solution can be obtained with a small number of function evaluations by the OL strategy. In other words, it would be impossible to obtain such a solution at so small a computational cost without the help of the OL strategy.
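The six-step construction above can be sketched in Python. This is a minimal sketch under stated assumptions: the two-level OA is built here with the Sylvester-type parity construction rather than taken from the published tables the paper points to in Step 1), all names are ours, minimization of f is assumed (Table IV's worked example is a maximization problem, where the comparisons flip), and T_i from (3.7) is assumed to be computed beforehand.

```python
import math

def ol_candidate(Xi, Ti, f):
    """Construct a candidate V_i from X_i and T_i with the OL strategy (sketch).

    Follows Steps 1)-6): build a two-level OA with M = 2^ceil(log2(D+1)) rows,
    evaluate the M tested solutions, run the factor analysis (3.6), and keep
    the better of the best tested solution Z_b and the predictive solution Z_p.
    """
    D = len(Xi)
    M = 2 ** math.ceil(math.log2(D + 1))

    # Step 1) Sylvester-type construction: entry (m, c) is the parity of m & c.
    # Columns c = 1..M-1 form a balanced two-level OA; we use the first D.
    def level(m, c):
        return bin(m & c).count("1") % 2  # 0 -> take X_i, 1 -> take T_i

    # Step 2) M tested solutions Z_m, assembled dimension by dimension.
    Z = [[(Ti[j] if level(m, j + 1) else Xi[j]) for j in range(D)]
         for m in range(M)]

    # Step 3) Evaluate each tested solution and record the best one, Z_b.
    fZ = [f(z) for z in Z]
    Zb = Z[min(range(M), key=fZ.__getitem__)]

    # Steps 4)-5) Factor analysis (3.6): S_{nq} is the mean result over the
    # rows in which factor n sits at level q; the better level per factor
    # builds the predictive solution Z_p.
    Zp = []
    for j in range(D):
        S = [[], []]
        for m in range(M):
            S[level(m, j + 1)].append(fZ[m])
        S0, S1 = sum(S[0]) / len(S[0]), sum(S[1]) / len(S[1])
        Zp.append(Xi[j] if S0 <= S1 else Ti[j])  # smaller mean wins (minimization)

    # Step 6) The better of Z_b and Z_p becomes the candidate V_i.
    return Zb if f(Zb) <= f(Zp) else Zp
```

Note that row m = 0 reproduces X_i itself, so the returned candidate is never worse than X_i on f, and the whole construction costs M + 1 evaluations, matching the SN · (M + 1) count discussed above.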
GAO et al.: NOVEL ABC ALGORITHM BASED ON MODIFIED SEARCH EQUATION AND OL 1017
Therefore, we conclude that the OL strategy can improve the exploitation ability of the algorithm.

IV. EXPERIMENTAL STUDIES ON FUNCTION OPTIMIZATION PROBLEMS

A. Benchmark Functions and Parameter Settings

In this section, ABC-based algorithms are applied to minimize a set of 20 scalable benchmark functions of dimensions D = 15, 30, or 60 [9], [17], [36], [38] and a set of 2 functions of dimensions D = 50, 100, or 200 [36], [38], as shown in Table V. For convenience, according to the different dimensions, the functions are divided into low-, middle-, and high-dimensional functions. For example, the low-dimensional functions contain the 15-dimensional functions f1–f20 and the 50-dimensional functions f21 and f22, etc.

Summarized in Table V are the 22 scalable benchmark functions. f1–f6 and f8 are continuous unimodal functions. f7 is a discontinuous step function, and f9 is a noisy quartic function. f10 is the Rosenbrock function, which is unimodal for D = 2 and 3 but may have multiple minima in high-dimension cases [40]. f11–f22 are multimodal, and the number of their local minima increases exponentially with the problem dimension. In addition, f14 is the only bound-constrained function investigated in this paper. Table V also gives the global optimal value (column 3). Moreover, an acceptable value "Accept" (column 4) is defined for each test function. If a solution found by an algorithm falls between the acceptable value and the actual global optimum (column 3), the run is judged to be successful.

For a fair comparison among the ABCs, they are tested using the same parameter settings, i.e., the population size SN = 100 [8] and limit = 200 [24]. Furthermore, we set the maximum number of function evaluations to be 50 000,
TABLE V
Benchmark functions used in experiments
100 000, and 200 000 for the low-, middle-, and high-dimensional cases, respectively. All the results reported in this section are obtained from 50 independent runs. For clarity, the results of the best algorithms are marked in boldface.

B. Comparison Among ABC Algorithms

The mean and the standard deviation of the results obtained by each algorithm for f1–f22 are summarized in Tables VI–VIII. An interesting result is that the six ABC-based algorithms have most reliably found the minimum of the Step function f7. It is a region rather than a point in f7 that is the optimum; hence, this problem is relatively easy to solve with a 100% success rate. Important observations about the solution accuracy of the different algorithms can be made from the results presented in Tables VI–VIII on the other 21 test functions. First, the performance comparison among ABC, GABC, and CABC is discussed. For the low-dimensional functions, CABC is significantly better than ABC and GABC on 21 test functions. For the middle- and high-dimensional functions, CABC outperforms ABC and GABC on 20 and 21 test functions, respectively, except that ABC is better than GABC and CABC on the Griewank function f13. Second, the solutions obtained by the ABCs with OL are compared with the ones obtained by the ABCs without OL. For the low- and middle-dimensional functions, OABC, OGABC, and OCABC are always better than, or at least comparable to, ABC, GABC, and CABC on 20, 19, and 20 test functions, respectively. For the high-dimensional functions, OABC, OGABC, and OCABC are superior to ABC, GABC, and CABC on 19, 19, and 17 test functions. Finally, among the three OL-based ABCs, no method can surpass OCABC on any test function except that OABC performs better than OCABC on the Rosenbrock function (f10). Moreover, OCABC is better than the five competitors on most test functions.

According to the aforementioned analyses, it can be concluded that the modified search equation and the OL strategy are beneficial to the ABC performance. What is more, the ABC with both the OL and the modified search equation is much better than the ABC with either or neither of them on most test functions.

In order to compare the convergence rate and reliability of the different algorithms, more experimental results are given and compared in Tables IX–XI. The results given there are the average FES (AVEN) needed to reach the threshold expressed by the acceptable solutions specified in Table V. In addition, the successful rates (SR%) of the 50 independent runs for each function are also compared. Note that the AVEN is calculated only over the runs that have been "successful." For convenience of illustration, we also plot the convergence graphs for some test functions in Fig. 6.
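The two reliability measures just defined can be stated precisely in code. The sketch below (function and variable names are ours) computes SR% over all runs and AVEN only over the successful ones, exactly as described above, assuming each run records its best-so-far objective value after every function evaluation.

```python
def aven_and_sr(runs, accept):
    """Compute AVEN and SR% as used in Tables IX-XI (illustrative sketch).

    runs: per independent run, the list of best-so-far objective values
    after each function evaluation. A run is "successful" once the best
    value reaches the acceptable threshold `accept` or below (minimization).
    """
    evals_to_success = []
    for history in runs:
        for n_evals, best in enumerate(history, start=1):
            if best <= accept:
                evals_to_success.append(n_evals)
                break
    sr = 100.0 * len(evals_to_success) / len(runs)
    # AVEN is averaged only over the runs that were "successful".
    aven = (sum(evals_to_success) / len(evals_to_success)
            if evals_to_success else float("nan"))
    return aven, sr
```

Averaging only over successful runs keeps AVEN meaningful when SR% is below 100, at the cost of making the two measures interdependent, which is why the tables report them together.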
TABLE VI
Result comparisons of ABCs on 15-dimensional functions f1–f20 and 50-dimensional functions f21 and f22

TABLE VII
Result comparisons of ABCs on 30-dimensional functions f1–f20 and 100-dimensional functions f21 and f22

TABLE VIII
Result comparisons of ABCs on 60-dimensional functions f1–f20 and 200-dimensional functions f21 and f22
TABLE IX
Convergence speed and successful rate comparisons of ABCs on 15-dimensional functions f1–f20 and 50-dimensional functions f21 and f22

TABLE X
Convergence speed and successful rate comparisons of ABCs on 30-dimensional functions f1–f20 and 100-dimensional functions f21 and f22
It can be observed from Tables IX–XI and Fig. 6 that CABC is faster than ABC and GABC and that OCABC is faster than OABC and OGABC on all the test functions. This indeed shows the advantages of the OL and the modified search equation in constructing a promising candidate solution to make the search faster. In particular, in reasonable agreement with the fact that GABC is always faster than ABC and slower than CABC, OGABC is observed to be faster than OABC and slower than OCABC, and OCABC is the fastest algorithm among the six contenders.

Furthermore, the successful rates shown in Tables IX–XI also indicate that the OL and the modified search equation are very promising in bringing a high reliability to ABC. In particular, OCABC achieves the highest algorithm reliability, with a 100% successful rate on all the test functions except the Michalewicz function (f22) with D = 200.

Experimental results and comparisons verify that the ABC with both the OL and the modified search equation performs better than the ABC with either or neither of them on most test functions. Moreover, the OL and the modified search equation work together to improve the performance of ABC rather than contradicting each other. Thus, OCABC is the best algorithm among the six contenders in terms of solution accuracy, convergence speed, and algorithm reliability.
TABLE XI
Convergence speed and successful rate comparisons of ABCs on 60-dimensional functions f1–f20 and 200-dimensional functions f21 and f22

TABLE XII
Comparisons between OCABC and several variant EAs on optimizing 30- or 100-dimensional functions

TABLE XIII
Comparisons between OCABC and DEs on optimizing 30-dimensional functions

TABLE XIV
Comparisons between OCABC and PSOs on optimizing 30-dimensional functions
evolutionary programming (CEP) [44], and the EA based on level-set evolution and Latin squares (LEA) [38]. The results of the compared algorithms are all taken directly from their corresponding references; NA indicates that the results are not available in the corresponding reference. The results show that OCABC yields good performance and does best on 7 of the 9 test functions with the smallest FES, while, owing to the advantages of the OED method, OGA/Q surpasses OCABC on the Sphere and Schwefel 2.22 functions.

Further experimental results are listed in Tables XIII and XIV. OCABC is compared with the standard DE [3], DE with self-adapting control parameters (jDE) [45], DE with strategy adaptation (SaDE) [46], and adaptive DE (JADE) with an optional external archive [47] in Table XIII. The results of these variant DEs are based on the reports in the literature [47], [48]. Additionally, OCABC is compared with the standard PSO, the "fully informed" PSO (FIPS) [49], the self-organizing hierarchical PSO with time-varying acceleration coefficients (HPSO-TVAC) [50], the comprehensive learning PSO (CLPSO) [51], and the orthogonal learning PSO (OLPSO-G) [32] in Table XIV. The results of these variant PSOs are based on the reports in the literature [32]. It is clear that OCABC works best in almost all the cases and achieves overall better performance than the other competitive algorithms.

D. Discussions

OCABC, OLPSO-G, and OGA/Q are three population-based algorithms which use the OED to enhance the performance of the algorithms. However, the way of using the OED is
[28] H. Zhao, Z. Pei, J. Jiang, R. Guan, C. Wang, and X. Shi, "A hybrid swarm intelligent method based on genetic algorithm and artificial bee colony," in Proc. ICSI, 2010, vol. 6145, Lect. Notes Comput. Sci., pp. 558–565.
[29] H. B. Duan, C. F. Xu, and Z. H. Xing, "A hybrid artificial bee colony optimization and quantum evolutionary algorithm for continuous optimization problems," Int. J. Neural Syst., vol. 20, no. 1, pp. 39–50, Feb. 2010.
[30] X. Shi, Y. Li, H. Li, R. Guan, L. Wang, and Y. Liang, "An integrated algorithm based on artificial bee colony and particle swarm optimization," in Proc. 6th Int. Conf. Nat. Comput., 2010, vol. 5, pp. 2586–2590.
[31] C. Sumpavakup, S. Chusanapiputt, and I. Srikun, "A hybrid cultural-based bee colony algorithm for solving the optimal power flow," in Proc. IEEE Int. Midwest Symp. Circuits Syst., 2011, pp. 1–4.
[32] Z. H. Zhan, J. Zhang, Y. Li, and Y. H. Shi, "Orthogonal learning particle swarm optimization," IEEE Trans. Evol. Comput., vol. 15, no. 6, pp. 832–847, Dec. 2011.
[33] S. Y. Ho, H. S. Lin, W. H. Liauh, and S. J. Ho, "OPSO: Orthogonal particle swarm optimization and its application to task assignment problems," IEEE Trans. Syst., Man, Cybern. A, Syst., Humans, vol. 38, no. 2, pp. 288–298, Mar. 2008.
[34] Q. F. Zhang and Y. W. Leung, "An orthogonal genetic algorithm for multimedia multicast routing," IEEE Trans. Evol. Comput., vol. 3, no. 1, pp. 53–62, Apr. 1999.
[35] Y. Wang, Z. X. Cai, and Q. F. Zhang, "Enhancing the search ability of differential evolution through orthogonal crossover," Inf. Sci., vol. 185, no. 1, pp. 153–177, Feb. 2012.
[36] Y. W. Leung and Y. Wang, "An orthogonal genetic algorithm with quantization for global numerical optimization," IEEE Trans. Evol. Comput., vol. 5, no. 1, pp. 41–53, Feb. 2001.
[37] Math. Stat. Res. Group, Chinese Acad. Sci., Orthogonal Design (in Chinese). Beijing, China: People Educ. Pub., 1975.
[38] Y. P. Wang and C. Y. Dang, "An evolutionary algorithm for global optimization based on level-set evolution and latin squares," IEEE Trans. Evol. Comput., vol. 11, no. 5, pp. 579–595, Oct. 2007.
[39] J. T. Tsai, T. K. Liu, and J. H. Chou, "Hybrid Taguchi-genetic algorithm for global numerical optimization," IEEE Trans. Evol. Comput., vol. 8, no. 4, pp. 365–377, Aug. 2004.
[40] Y. W. Shang and Y. H. Qiu, "A note on the extended Rosenbrock function," Evol. Comput., vol. 14, no. 1, pp. 119–126, Mar. 2006.
[41] C. Y. Lee and X. Yao, "Evolutionary programming using mutations based on the Levy probability distribution," IEEE Trans. Evol. Comput., vol. 8, no. 1, pp. 1–13, Feb. 2004.
[42] X. Yao, Y. Liu, and G. M. Lin, "Evolutionary programming made faster," IEEE Trans. Evol. Comput., vol. 3, no. 2, pp. 82–102, Jul. 1999.
[43] Q. Zhang, J. Sun, E. Tsang, and J. Ford, "Hybrid estimation of distribution algorithm for global optimization," Eng. Comput., vol. 21, no. 1, pp. 91–107, Jan. 2004.
[44] K. Chellapilla, "Combining mutation operators in evolutionary programming," IEEE Trans. Evol. Comput., vol. 2, no. 3, pp. 91–96, Sep. 1998.
[45] J. Brest, S. Greiner, B. Boskovic, M. Mernik, and V. Zumer, "Self-adapting control parameters in differential evolution: A comparative study on numerical benchmark problems," IEEE Trans. Evol. Comput., vol. 10, no. 6, pp. 646–657, Dec. 2006.
[46] A. K. Qin, V. L. Huang, and P. N. Suganthan, "Differential evolution algorithm with strategy adaptation for global numerical optimization," IEEE Trans. Evol. Comput., vol. 13, no. 2, pp. 398–417, Apr. 2009.
[47] J. Zhang and A. C. Sanderson, "JADE: Adaptive differential evolution with optional external archive," IEEE Trans. Evol. Comput., vol. 13, no. 5, pp. 945–958, Oct. 2009.
[48] W. Y. Gong, Z. H. Cai, C. X. Ling, and H. Li, "Enhanced differential evolution with adaptive strategies for numerical optimization," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 41, no. 2, pp. 397–413, Apr. 2011.
[49] R. Mendes, J. Kennedy, and J. Neves, "The fully informed particle swarm: Simpler, maybe better," IEEE Trans. Evol. Comput., vol. 8, no. 3, pp. 204–210, Jun. 2004.
[50] A. Ratnaweera, S. Halgamuge, and H. Watson, "Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients," IEEE Trans. Evol. Comput., vol. 8, no. 3, pp. 240–255, Jun. 2004.
[51] J. J. Liang, A. K. Qin, P. N. Suganthan, and S. Baskar, "Comprehensive learning particle swarm optimizer for global optimization of multimodal functions," IEEE Trans. Evol. Comput., vol. 10, no. 3, pp. 281–295, Jun. 2006.

Wei-feng Gao received the B.S. degree in applied mathematics from the School of Science, Xidian University, Xi'an, China, in 2008, where he is currently working toward the Ph.D. degree. His current research interests include evolutionary algorithms and their applications in the real world.

San-yang Liu received the M.S. degree in applied mathematics from Xidian University, Xi'an, China, in 1984 and the Ph.D. degree in computational mathematics from Xi'an Jiaotong University, Xi'an, in 1989. After finishing the Ph.D. degree, he spent one year at the University Paul Sabatier, Toulouse, France, as a postdoctoral fellow. He is currently a Professor and the Dean of the School of Science, Xidian University. His research interests include nonlinear optimization, combinatorial optimization, network optimization, system reliability, and so on.

Ling-ling Huang received the Ph.D. degree in applied mathematics from Xidian University, Xi'an, China, in 2012. She is currently a Lecturer at the China University of Petroleum, Qingdao, China. Her major research interests concentrate on optimization theory, methods, and applications.