
A new method to optimize dynamic environments with global changes using the hen-and-chickens algorithm

Mostafa Zarei, Hamid Parvin*, Marzieh Dadvar

Department of Computer Engineering, Boushehr Branch, Islamic Azad University, Boushehr, Iran
* parvin@alumni.iust.ac.ir

Abstract. Many methods for optimization have been proposed, and a large family of them are swarm-intelligence algorithms. Algorithms of this type can find solutions that are close to the global optimum, and they cover both maximization and minimization problems. Optimization problems can be divided into two general categories: static and dynamic. Most problems in the real world are dynamic in nature, and they are hard to solve because the location of the optimum changes over time; an appropriate algorithm is therefore required. This paper presents the hen-and-chickens algorithm, a swarm-intelligence algorithm derived from nature. It is a technically efficient, population-based algorithm for dynamic optimization that works on the laws of probability, and it can be seen as an instance of behaviorism in artificial intelligence. Each member of the population, called a chicken, performs the individual and group behaviors prescribed by the proposed algorithm and thereby moves toward the final answer. To estimate density, the algorithm uses a memory combined with Euclidean clustering, and good past solutions are stored in this memory. The efficiency of the proposed method is tested with the well-known moving peaks benchmark simulator. Experimental results on the moving peaks benchmark show that the proposed method achieves acceptable performance in solving dynamic optimization problems.

Keywords: Collective Intelligence, Dynamic Optimization Problems, Moving


Peaks Benchmark

1 Introduction

In the real world we often face complex and important problems that need to be optimized [1]-[5]. Optimizing them can save time, reduce costs, and yield many other benefits [6]-[16]. In general, optimization problems can be put into two basic groups: 1) static optimization problems and 2) dynamic optimization problems [17]-[31].
In static optimization problems the landscape is fixed and does not change, so tracking the optimum is a comparatively easy task [32]-[42]. Since most real-world problems are dynamic, algorithms in these environments must not only find the optimum but also follow it in a desirable manner as it moves. The best algorithm in such a setting is the one whose average error against the best solution at every moment in time is lowest. Whenever the objective function, the problem instance, or a constraint of an optimization problem changes, the optimum may change as well; when this happens, the old solution must be adapted to the new situation. A standard way to face such dynamics is to treat every change as a brand-new optimization problem to be solved from scratch. This is often impractical, because solving a problem from the beginning, without reusing any information, is very time consuming. For complex problems exact optimization is infeasible, so random search methods are used to reach a near-optimal solution. Algorithms of this type can obtain an acceptable solution within an acceptable time, but there is no guarantee of obtaining the best solution. Among random search methods, evolutionary algorithms, which are derived from nature, hold a special place. Given the complexity of many dynamic optimization problems, much work has been done on developing ways to improve the efficiency of algorithms for dynamic optimization. Since evolutionary algorithms are inherently adaptive, they appear to be perfect candidates for solving such problems.
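For reference, the performance measure implied here (the average error of the best solution found at every moment in time) is the standard offline error. In the usual notation, which is a standard definition rather than one stated in this paper,

\[
E_{\text{offline}} = \frac{1}{T}\sum_{t=1}^{T}\bigl(f(o_t) - f(x^{*}_t)\bigr),
\]

where \(f(o_t)\) is the fitness of the global optimum at evaluation \(t\) and \(f(x^{*}_t)\) is the fitness of the best solution found since the last environment change.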
Group behavior has been observed in most animal species. Some species are guided by a leading member of the group; lions, monkeys, and deer are clear examples. More interesting, however, are species that live in groups without any leader. Each member is self-organizing and, without a guide, can satisfy its natural needs in the environment; birds, fish, and flocks of sheep behave this way. These animals have no knowledge of the overall behavior of the whole group, nor even of the environment in which they live. Instead, each member moves by exchanging information with its neighbors, and this simple interaction between individuals gives rise to the more complex behavior of the group as a whole.
The hen-and-chickens algorithm (HCSA) is a swarm-intelligence algorithm that works on a population and performs a random search. It is based on the social behavior of chickens and combines random search, swarm behavior, and behaviorism. The algorithm has features such as high convergence speed, flexibility, and fault tolerance, which make it acceptable for solving optimization problems. In this article it is presented as a method for solving dynamic optimization problems and is described in detail in the following sections.
As stated, there are three main problems in dynamic environments. The first is detecting changes in the environment, so that the algorithm can respond to them in a timely manner. The second is following the optimum once it has moved. The third, and the most fundamental, is the loss of diversity: the optimum is mobile, and once a group has converged around it, that convergence dramatically reduces the algorithm's efficiency after the next change.
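A common way to meet the first requirement, which the paper does not spell out, so the snippet below is an illustrative assumption, is to re-evaluate one or more stored sentinel solutions at every iteration; if their fitness no longer matches the cached value, the landscape has changed.

    def environment_changed(fitness, sentinels, cached_values, tol=1e-9):
        """Detect a change by re-evaluating stored sentinel solutions.

        fitness       : callable, the current objective function
        sentinels     : positions whose fitness was cached earlier
        cached_values : fitness values recorded before this iteration
        """
        for x, old in zip(sentinels, cached_values):
            if abs(fitness(x) - old) > tol:  # fitness shifted -> landscape moved
                return True
        return False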
2 Related Works

Regarding dynamic environments, a variety of techniques have been suggested recently. Some of them are reviewed below.
Yazdani et al. [43] proposed a novel multi-swarm algorithm for optimization in dynamic environments based on particle swarm optimization; it tries to maximize diversity in the environment. A collaborative model for tracking optima in dynamic environments, CESO, has been proposed in [44]. CESO employs two subpopulations of the same size to recognize and follow the moving optimal solutions in dynamic environments.
In 2015, Ozsoydan and Baykasoglu [45] presented a multi-population algorithm based on the firefly algorithm to solve dynamic optimization problems. In that method, chaos mapping was used to generate the initial population, and the multi-population structure of the firefly algorithm was used to maintain diversity at an appropriate level.
Hu and Eberhart provided a method based on particle swarm optimization, namely RPSO, for dynamic environments, in which random movement of the particles occurs whenever the necessary diversity is lost [46].
Sadeghi et al. [47] presented an algorithm based on particle swarms for solving dynamic optimization problems. In this method, an explicit memory is used to quickly track the changed optimum, together with an appropriate strategy for updating the memory: the best past solutions that are not too old are kept in the memory and used to track the optimum in the future.
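As a rough sketch of such a memory strategy (the exact update rule of [47] is not given here, so the aging threshold and replacement policy below are assumptions):

    class SolutionMemory:
        """Keep good past solutions; drop entries remembered from too many
        environment changes ago (the aging policy is an assumption)."""

        def __init__(self, capacity=10, max_age=5):
            self.capacity, self.max_age = capacity, max_age
            self.entries = []  # (position, fitness, change_counter_when_stored)

        def update(self, position, fitness_value, change_counter):
            self.entries.append((position, fitness_value, change_counter))
            # discard solutions that are too old
            self.entries = [e for e in self.entries
                            if change_counter - e[2] <= self.max_age]
            # keep only the best `capacity` entries (maximization assumed)
            self.entries.sort(key=lambda e: e[1], reverse=True)
            del self.entries[self.capacity:]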
In [48] the SPSO method has been provided for dynamic environments. The SPSO algorithm is capable of dynamically distributing the particles into groups, and it has been designed around the concept of a "species". The center of a species, called the species core, is the particle with the best fitness in the species, and all particles located within a predefined radius of the species core belong to the same species. The algorithm converges the groups to several local optima instead of converging them to a single global optimum; hence, several sub-populations develop in parallel.
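The species construction can be sketched as follows (a minimal reading of the SPSO idea; the radius handling in [48] may differ in detail):

    import numpy as np

    def form_species(positions, fitnesses, radius):
        """Group particles around species seeds, best fitness first (SPSO idea)."""
        order = np.argsort(fitnesses)[::-1]        # best particles first
        seeds, species = [], {}
        for i in order:
            for s in seeds:                        # join an existing species?
                if np.linalg.norm(positions[i] - positions[s]) <= radius:
                    species[s].append(i)
                    break
            else:                                  # no seed close enough: new seed
                seeds.append(i)
                species[i] = [i]
        return species                             # seed index -> member indices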
The idea of using several populations is very useful, especially in multimodal environments. Here, the idea of multiple groups is used to divide the total search space into several sub-spaces, so that each group converges on one of the possible peaks.
In FMSO [49] a parent group is used as a base for identifying promising areas, and a set of child groups is used for local search in their own sub-spaces. Each child group has its own search area, a sphere of radius r centered at the best particle of that group; any particle whose distance from that best particle is less than r therefore belongs to the child group.
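In other words, membership of a child group reduces to a Euclidean-distance test against that group's best particle; a minimal sketch with our own variable names:

    import numpy as np

    def in_child_swarm(x, swarm_best, r):
        """A particle belongs to the child group whose best particle
        is less than r away (the spherical search area of FMSO)."""
        return np.linalg.norm(np.asarray(x) - np.asarray(swarm_best)) < r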
In [50] an algorithm named EA-KDTree has been suggested. In this method, explored and unexplored areas are separated adaptively: the algorithm divides the search space into several regions and covers them in order to detect the changing optimum. A simple rule estimates, for each region, how promising it is for finding the optimum, and a special data structure called the KD-Tree is used to memorize the search regions and thereby increase the convergence speed.
In [51] a particle swarm algorithm based on clustering has been proposed. The particles are divided into clusters by k-means, and each particle does a local search in its own cluster. A parameter named the crowd factor is also used to increase diversity: if a particle crowd occurs in some area of the search space, some of the particles crowded there will randomly migrate to a less crowded area. This is a strategy to encourage tracking of multi-peak functions and to prevent the accumulation of particles on the peaks; a sketch follows below. In this algorithm, instead of using a single group of s particles that searches for the optimal n-dimensional solution vector, one group of particles is assigned per dimension; so n particle groups are used, each with s particles.
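The crowd-factor rule can be pictured like this (a sketch under our own assumptions about the thresholds; [51] defines the factor more precisely):

    import numpy as np

    def relieve_crowding(positions, crowd_radius, max_in_area, bounds, rng):
        """If too many particles share one small area, randomly relocate
        the surplus to less crowded parts of the search space."""
        positions = positions.copy()
        for i in range(len(positions)):
            dists = np.linalg.norm(positions - positions[i], axis=1)
            if np.sum(dists < crowd_radius) > max_in_area:
                lo, hi = bounds
                positions[i] = rng.uniform(lo, hi, size=positions.shape[1])
        return positions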
In [52] a particle swarm optimizer based on cellular automata, called CellularPSO, has been suggested. The main idea of this approach is to utilize local interactions in the cellular automaton (CA) and to divide the population of particles among the CA cells. Each group tries to find a local optimum, and together this leads to finding the global optimum. The method uses a cellular automaton with c^d equal cells in a d-dimensional environment, so every particle in the search space can be assigned to a cell of the automaton; this concept is used to maintain diversity in the search space. In addition, a particle-density concept is applied per cell: a threshold on the maximum number of particles allowed in a cell prevents all particles from converging on a single cell, so only a portion of the particles searches in any one area of the search space while the other particles search elsewhere.
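Assigning a particle to a CA cell amounts to quantizing each coordinate; a minimal sketch assuming an axis-aligned grid with c partitions per dimension:

    import numpy as np

    def cell_index(x, lower, upper, c):
        """Map a d-dimensional position to a cell of a CA with c^d equal cells."""
        x = np.asarray(x, dtype=float)
        frac = (x - lower) / (upper - lower)           # normalize to [0, 1)
        idx = np.minimum((frac * c).astype(int), c - 1)
        return tuple(idx)                              # e.g. (3, 0, 7, 1, 9) in 5-D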
In [53], Blackwell et al. offered the adaptive algorithm AmQSO for dynamic environments. In AmQSO the number of groups is not specified in advance; it grows as the environment changes and new peaks are found, and whenever all groups have converged, an anti-convergence operator creates a new free group that helps to find new local optima.
A special kind of particle swarm algorithm proposed for dynamic environments is the mQSO algorithm [54]. In this algorithm, the particles are divided into two categories:
1- Neutral particles: the particles of the standard particle swarm algorithm; they are responsible for the rapid convergence of the swarm toward the intended optimum.
2- Quantum particles: introduced in [54], inspired by atomic models, as a tool to maintain a certain level of diversity within a group; they are placed at random positions to preserve group diversity.
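Quantum particles are typically re-sampled uniformly inside a hypersphere (a "cloud") of radius r_cloud around the swarm attractor; one standard way to draw such a point (our sketch, not necessarily the exact sampler of [54]):

    import numpy as np

    def quantum_position(center, r_cloud, rng):
        """Sample a point uniformly inside a d-dimensional ball around `center`."""
        d = len(center)
        direction = rng.normal(size=d)
        direction /= np.linalg.norm(direction)         # uniform direction
        radius = r_cloud * rng.uniform() ** (1.0 / d)  # uniform in volume
        return np.asarray(center) + radius * direction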
The mQSO algorithm is known as the particle swarm algorithm based on quantum particles, and its total population is divided into several groups. It includes three diversity operators, namely quantum particles, disposal (exclusion), and anti-convergence. The disposal operator uses a mechanism to prevent premature convergence and create diversity: if it finds two overlapping groups, it re-initializes the worse one. When all groups have converged, the anti-convergence operator re-initializes the worst group [54].
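The disposal test reduces to comparing the distance between two group attractors with a disposal radius; sketched under our own assumed data layout:

    import numpy as np

    def apply_disposal(best_positions, best_fits, r_excl, reinitialize):
        """If two groups' best positions overlap (distance < r_excl),
        re-initialize the group with the worse best fitness.
        `reinitialize` receives the index of the losing group."""
        for i in range(len(best_positions)):
            for j in range(i + 1, len(best_positions)):
                d = np.linalg.norm(np.asarray(best_positions[i])
                                   - np.asarray(best_positions[j]))
                if d < r_excl:
                    reinitialize(i if best_fits[i] < best_fits[j] else j)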
Kamosi et al. provided a multi-population particle swarm optimization method which is suitable for dynamic environments and performs much better than the conventional methods; it builds on the standard particle swarm algorithm [55]. In their method the algorithm begins its work by creating a parent group, and each child group then updates its position and velocity equations if it is active. After the update, each group is checked: if its convergence radius (the maximum distance between two particles in a group) is less than the overall convergence threshold and the fitness of the group is less than the fitness of the best position found among all groups, the group becomes inactive; in its place, among all inactive groups whose convergence radii exceed the threshold, the one with the highest fitness becomes active. Also, if two groups fall within each other's radii, the one with lower fitness is removed from the search space [56].
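The convergence radius used in this activation rule is simply the largest pairwise distance within a group; a direct sketch:

    import numpy as np

    def convergence_radius(positions):
        """Maximum distance between any two particles of one group."""
        positions = np.asarray(positions)
        diffs = positions[:, None, :] - positions[None, :, :]
        return np.linalg.norm(diffs, axis=-1).max()

    def should_deactivate(positions, best_fit, global_best_fit, r_conv):
        # deactivate a converged group that is not the global leader
        return convergence_radius(positions) < r_conv and best_fit < global_best_fit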

3 Method and Results

The quasi-code of the hen-and-chickens algorithm is shown in Figure 1.

For i = 1 to cycle
    For j = 1 to h
        If ( … )
        {
            Save( … )
        }
        Else if ( … )
        {
            // child nears to hen & nears to best & far from worst
        }
        Else if ( … )
        {
            …
        }
        Else
        {
            Wander
        }

Fig. 1. The quasi-code of the proposed algorithm.
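Because the branch conditions in the quasi-code were lost in extraction, the following Python sketch only illustrates the structure that can be recovered from Fig. 1: each chick either moves toward its hen and the best known solution while moving away from the worst, or wanders randomly. The weights, the wander probability, and the variable names are illustrative assumptions, not the authors' settings.

    import numpy as np

    def hcsa_step(flocks, hens, best, worst, rng,
                  w_hen=0.5, w_best=0.3, w_worst=0.2, wander_prob=0.1):
        """One assumed iteration of the hen-and-chickens idea."""
        for flock, hen in zip(flocks, hens):       # flock: (chicks x dims) array
            for k in range(len(flock)):
                if rng.uniform() < wander_prob:    # exploration: wander
                    flock[k] = flock[k] + rng.normal(scale=1.0, size=flock[k].shape)
                else:                              # child nears hen & best, flees worst
                    flock[k] = (flock[k]
                                + w_hen * (hen - flock[k])
                                + w_best * (best - flock[k])
                                - w_worst * (worst - flock[k]))
        return flocks

With flocks stored as a list of (chicks x dimensions) arrays, this step would be iterated for `cycle` iterations, re-selecting the hens and the global best and worst positions from the fitness values between steps.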

The default setting for moving peaks benchmark [57] is indicated in Table 1.

Table 1. Standard Setting for moving peaks benchmark [57].

Parameter                  Value
p (number of peaks)        10
change frequency           every 5000 evaluations
height severity            7.0
width severity             1.0
peak shape                 cone
basic function             no
shift length s             1.0
number of dimensions D     5
search range A             [0, 100]
peak heights H             [30.0, 70.0]
peak widths W              [1, 12]
initial peak height I      50.0
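For reference, with the cone peak shape of Table 1 the moving peaks benchmark evaluates a solution as the maximum over all peaks of height minus width times the distance to the peak center (Branke's standard formulation [57]); a minimal sketch:

    import numpy as np

    def mpb_fitness(x, centers, heights, widths):
        """Cone-shaped moving peaks: f(x) = max_i (H_i - W_i * ||x - C_i||)."""
        x = np.asarray(x, dtype=float)
        dists = np.linalg.norm(centers - x, axis=1)  # distance to every peak center
        return np.max(heights - widths * dists)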

The default setting for proposed algorithm is indicated in Table 2.

Table 2. Standard Setting for proposed algorithm.

Parameter Value
5
1
10
100
5

The proposed algorithm was compared with the mQSO [54], FMSO [49], CellularPSO [52] and Multi-Swarm [56] algorithms. For mQSO, the mQSO10(5+5q) configuration was used, in which 10 groups were created and each group had 5 neutral particles as well as 5 quantum particles. In addition, the quantum radius for this algorithm was set to 0.5, while the disposal radius and the convergence radius were both equal to 31.5. For the FMSO algorithm, the maximum number of child groups was 10, while the disposal radius between child groups, the number of particles in the parent group, and the number of particles in the child groups were 25, 100 and 10, respectively. For CellularPSO, a 5-dimensional cellular automaton with 10^5 cells and a Moore neighborhood with cell radius 2 was used in the search space. The maximum particle speed equaled the neighborhood radius, the maximum number of particles per cell was 10, and the local search radius was set to 0.5; in addition, all particles do a local search for one stage after observing a change in the environment. For the Multi-Swarm method, the number of particles in the parent group and in the child groups was 5 and 10, respectively; the disposal radius between child groups and the radius of the quantum particles were 30 and 0.5, respectively.
This section explains the experiments conducted on the proposed model at frequencies from 500 to 10,000 evaluations and with the number of peaks ranging from 1 to 200. The default setting for the moving peaks function is shown in Table 1. The reported result for every algorithm is the mean offline error with a 95% confidence interval over 100 runs. The proposed algorithm was compared with the mQSO10(5+5q), FMSO, CellularPSO, Multi-SwarmPSO, AmQSO [53], HdPSO [58], FTMPSO [43], CDEPSO [59] and DPSABC [60] algorithms.
The offline error and the standard error obtained from the experiments in environments with different dynamics are shown in Tables 3 to 6; better results are shown in bold. As can be seen, the gap between the offline error of the proposed algorithm and that of the other algorithms grows as the environment frequency increases and as the space becomes more complex (an increase in the number of peaks). The reason is that the proposed algorithm can obtain better solutions faster after a change in the environment; owing to its wide diversity, almost all peaks are covered by the particles.
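The reported statistic can be reproduced from per-run offline errors as a mean with a 95% confidence half-width (the usual 1.96 x standard-error normal approximation; our sketch):

    import numpy as np

    def offline_error_stats(run_errors):
        """Mean offline error and 95% confidence half-width over independent runs."""
        e = np.asarray(run_errors, dtype=float)
        mean = e.mean()
        half_width = 1.96 * e.std(ddof=1) / np.sqrt(len(e))
        return mean, half_width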

Table 3. A comparison between the offline error and the standard error of the proposed
method with other methods for f = 500.

m    mQSO(5,5q)   CellularPSO  FMSO         Multi-SwarmPSO  AdaptivemQSO  FTMPSO      DPSABC      HCSA
1    33.67(3.42)  3.02(0.32)   13.46(0.73)  7.58(0.9)       5.46(0.30)    1.76(0.09)  2.77(0.00)  4.01(0.21)
5    11.91(0.76)  5.77(0.56)   9.63(0.49)   9.45(0.4)       5.48(0.19)    2.93(0.18)  -           3.78(0.24)
10   9.62(0.34)   5.37(0.42)   9.35(0.37)   18.26(0.3)      5.95(0.09)    3.91(0.19)  3.42(0.00)  3.54(0.12)
20   9.07(0.25)   6.82(0.34)   8.84(0.28)   17.34(0.3)      6.45(0.16)    4.83(0.19)  3.12(0.00)  3.27(0.16)
30   8.80(0.21)   7.10(0.39)   8.81(0.24)   16.39(0.4)      6.60(0.14)    5.05(0.21)  3.69(0.00)  3.19(0.11)
40   8.55(0.21)   7.05(0.41)   8.94(0.24)   15.34(0.4)      6.85(0.13)    -           -           3.14(0.13)
50   8.72(0.20)   8.97(0.32)   8.62(0.23)   5.54(0.2)       7.04(0.10)    4.98(0.15)  3.22(0.00)  3.10(0.10)
100  8.54(0.16)   7.34(0.31)   8.54(0.21)   2.87(0.6)       7.39(0.13)    5.31(0.11)  3.01(0.00)  3.06(0.13)
200  8.19(0.17)   7.48(0.19)   8.28(0.18)   11.52(0.6)      7.52(0.12)    5.52(0.21)  3.16(0.00)  3.01(0.12)

4 Conclusion

In this paper a new approach to dynamic optimization has been proposed. The method is based on a behavior taken from nature and has been thoroughly evaluated on the moving peaks benchmark. The proposed method has been effective under all tested conditions of the benchmark, and from the experimental results we conclude that it ranks among the best recent dynamic optimizers.

References


1. Minyoung Kim: Sparse inverse covariance learning of conditional Gaussian mixtures for
multiple-output regression. Applied Intelligence, 44(1): 17-29
2. M. Tanveer, K. Shubham, M. Aldhaifallah, K. S. Nisar: An efficient implicit regularized
Lagrangian twin support vector regression. Applied Intelligence, 44(4): 831-848
3. S. Balasundaram, Yogendra Meena: Training primal twin support vector regression via
unconstrained convex minimization. Applied Intelligence, 44(4): 931-955
4. Liming Yang, Yannan Qian: A sparse logistic regression framework by difference of convex
functions programming. Applied Intelligence, 45(2): 241-254
5. Sungwan Bang, HyungJun Cho, Myoungshic Jhun: Adaptive lasso penalised censored
composite quantile regression. IJDMB, 15(1): 22-46
7. Jianwei Ding, Yingbo Liu, Li Zhang, Jianmin Wang, Yonghong Liu: An anomaly detection
approach for multiple monitoring data series based on latent correlation probabilistic model.
Applied Intelligence, 44(2): 340-361
8. Qing Cai, Lijia Ma, Maoguo Gong, Dayong Tian: A survey on network community
detection based on evolutionary computation. IJBIC, 8(2): 84-98
9. Hamid Alishavandi, Gholam Hosein Gouraki, Hamid Parvin: An enhanced dynamic
detection of possible invariants based on best permutation of test cases. Comput. Syst. Sci.
Eng. 31(1) (2016)
10. H. Parvin, B. Minaei-Bidgoli, H. Alinejad-Rokny: A new imbalanced learning and decision tree method for breast cancer diagnosis. Journal of Bionanoscience, 7(6): 673-678 (2013)
11.Hamid Parvin, Hamid Alinejad-Rokny, Behrouz Minaei-Bidgoli, Sajad Parvin: A new
classifier ensemble methodology based on subspace learning. J. Exp. Theor. Artif. Intell.
25(2): 227-250 (2013)
12.Hamid Parvin, Behrouz Minaei-Bidgoli, Hamid Alinejad-Rokny, William F. Punch: Data
weighing mechanisms for clustering ensembles. Computers & Electrical Engineering 39(5):
1433-1450 (2013)
13.Hamid Parvin, Behrouz Minaei-Bidgoli: A clustering ensemble framework based on elite
selection of weighted clusters. Adv. Data Analysis and Classification 7(2): 181-208 (2013)
14.Hosein Alizadeh, Behrouz Minaei-Bidgoli, Hamid Parvin: Optimizing Fuzzy Cluster
Ensemble in String Representation. IJPRAI 27(2) (2013)
15.Parvin H, Beigi A, Mozayani N. A clustering ensemble learning method based on the ant
colony clustering algorithm. Int J Appl Comput Math 2012;11(2):286–302.
16.Hamid Parvin, Behrouz Minaei-Bidgoli: A clustering ensemble framework based on
selection of fuzzy weighted clusters in a locally adaptive clustering algorithm. Pattern Anal.
Appl. 18(1): 87-112 (2015)
17.Pavel Novoa-Hernández, Carlos Cruz Corona, David A. Pelta: Self-adaptation in dynamic
environments - a survey and open issues. IJBIC, 8(1): 1-13
18.Aderemi Oluyinka Adewumi, Akugbe Martins Arasomwan: On the performance of particle
swarm optimisation with(out) some control parameters for global optimisation. IJBIC, 8(1):
14-32
19.Hui Wang, Wenjun Wang, Hui Sun, Shahryar Rahnamayan: Firefly algorithm with random
attraction. IJBIC, 8(1): 33-41
20.Mauro Castelli, Leonardo Vanneschi, Ales Popovic: Parameter evaluation of geometric
semantic genetic programming in pharmacokinetics. IJBIC, 8(1): 42-50
21.B. Srinivasa Rao, K. Vaisakh: Multi-objective adaptive clonal selection algorithm for
solving optimal power flow problem with load uncertainty. IJBIC, 8(2): 67-83
22.Luneque Del Rio de Souza e Silva Junior, Nadia Nedjah: Distributed strategy for robots
recruitment in swarm-based systems. IJBIC, 8(2): 99-108
23.Zhengxuan Jia, Haibin Duan, Yuhui Shi: Hybrid brain storm optimisation and simulated
annealing algorithm for continuous optimisation problems. IJBIC, 8(2): 109-121
24.Praveen Ranjan Srivastava: Test case optimisation a nature inspired approach using
bacteriologic algorithm. IJBIC, 8(2): 122-131
25.Zixiang Xu, Ahmet Ünveren, Adnan Acan: Probability collectives hybridised with
differential evolution for global optimisation. IJBIC, 8(3): 133-153
26.Valentín Osuna-Enciso, Erik Cuevas, Diego Oliva, Humberto Sossa, Marco A. Pérez
Cisneros: A bio-inspired evolutionary algorithm: allostatic optimisation. IJBIC, 8(3): 154-
169
27.Mitul Kumar Ahirwal, Anil Kumar, Girish Kumar Singh: Study of ABC and PSO
algorithms as optimised adaptive noise canceller for EEG/ERP. IJBIC, 8(3): 170-183
28.Taher Niknam, Abdollah Kavousi-Fard: Optimal energy management of smart renewable
micro-grids in the reconfigurable systems using adaptive harmony search algorithm. IJBIC,
8(3): 184-194
29.Muhammad Asif Khan, Waseem Shahzad, Abdul Rauf Baig: Protein classification via an
ant-inspired association rules-based classifier. IJBIC, 8(1): 51-65
30.Chien-Pang Lee, Wen-Shin Lin: Using the two-population genetic algorithm with distance-
based k-nearest neighbour voting classifier for high-dimensional data. IJDMB, 14(4): 315-
331
31.Mingmin Zhu, Sanyang Liu, Jiewei Jiang: A hybrid method for learning multi-dimensional
Bayesian network classifiers based on an optimization model. Applied Intelligence, 44(1):
123-148
32.Mariela Cerrada, René-Vinicio Sanchez, Fannia Pacheco, Diego Cabrera, Grover Zurita,
Chuan Li: Hierarchical feature selection based on relative dependency for gear fault
diagnosis. Applied Intelligence, 44(3): 687-703
33.Hamid Parvin, Hosein Alizadeh, Behrouz Minaei-Bidgoli: A New Method for Constructing
Classifier Ensembles. JDCTA 3(2): 62-66 (2009)
34.H. Parvin, H. Alinejad-Rokny, M. Asadi, An Ensemble Based Approach for Feature
Selection, Journal of Applied Sciences Research, 7(9), 33-43 (2011).
35.Parvin H, Alizadeh H, Minaei-Bidgoli B, Analoui M. CCHR: combination of classifiers
using heuristic retraining. In: International conference on networked computing and
advanced information management (NCM 2008); 2008.
36.Hamid Parvin, Hosein Alizadeh, Mahmood Fathy, Behrouz Minaei-Bidgoli: Improved Face
Detection Using Spatial Histogram Features. IPCV 2008: 381-386
37.H. Parvin, H. Alinejad-Rokny, S. Parvin, A Classifier Ensemble of Binary Classifier
Ensembles, International Journal of Learning Management Systems, 1(2), 37-47 (2013)
38.Hosein Alizadeh, Behrouz Minaei-Bidgoli, Hamid Parvin: To improve the quality of cluster
ensembles by selecting a subset of base clusters. J. Exp. Theor. Artif. Intell. 26(1): 127-150
(2014)
39.Hosein Alizadeh, Behrouz Minaei-Bidgoli, Hamid Parvin: Cluster ensemble selection based
on a new cluster stability measure. Intell. Data Anal. 18(3): 389-408 (2014)
40.Behrouz Minaei-Bidgoli, Hamid Parvin, Hamid Alinejad-Rokny, Hosein Alizadeh, William
F. Punch: Effects of resampling method and adaptation on clustering ensemble efficacy.
Artif. Intell. Rev. 41(1): 27-48 (2014)
41.Hamid Parvin, Miresmaeil Mirnabibaboli, Hamid Alinejad-Rokny: Proposing a classifier
ensemble framework based on classifier selection and decision tree. Eng. Appl. of AI 37:
34-42 (2015)
42.Parvin, H., Mohammadi, M., & Rezaei, Z. (2012). Face identification based on Gabor-
wavelet features. International Journal of Digital Content Technology & its Application,
6(1), 247-255.
43. D. Yazdani, B. Nasiri, A. Sepas-Moghaddam, and M. R. Meybodi, "A novel multi-swarm
algorithm for optimization in dynamic environments based on particle swarm
optimization," Applied Soft Computing, 2013.
44. Lung, R.I., Dumitrescu, D.: A Collaborative Model for Tracking Optima in Dynamic
Environments. In: IEEE Congress on Evolutionary Computation, (2007) pp.564–567.
45. F.B. Ozsoydan and A. Baykasoglu, "A multi-population firefly algorithm for dynamic
optimization problems", Evolving and Adaptive Intelligent Systems (EAIS), 2015 IEEE
International Conference. Pp 1-7. DOI: 10.1109/EAIS.2015.7368777.
46. X. Hu, R.C. Eberhart, “Adaptive particle swarm optimization: detection and response to
dynamic systems”. In: IEEE Congress on Evolutionary Computation, Honolulu, HI, USA,
vol. 2, pp. 1666–1670, 2002.
47. S. Sadeghi, H. Parvin and F. Rad, "Particle Swarm Optimization for Dynamic
Environments", Springer International Publishing, 14th Mexican International Conference
on Artificial intelligence, MICAI 2015, PP 260-269, October 2015.
48. A. Petrowski, “A clearing procedure as a niching method for genetic algorithms”. In: Proc.
of the 2003 Conference on Evolutionary Computation, pp.798–803.IEEE Press. 2003.
49. S. Yang and C. Li, “Fast Multi-Swarm Optimization for Dynamic Optimization Problems,”
Proc, Int’l Conf. Natural Computation, vol. 7, no. 3, pp. 624-628, 2008.
50. T.T. Nguyen, "Solving dynamic optimization problems by combining Evolutionary Algorithms with KD-Tree", Soft Computing and Pattern Recognition (SoCPaR), International Conference, pp. 247-252, 2013.
51. S. Yang and C. Li, “A clustering particle swarm optimizer for dynamic optimization,” in
Proc. Congr. Evol. Comput., 2009, pp. 439–446.
52.A.B. Hashemi, M.R Meybodi, “Cellular PSO: A PSO for Dynamic Environments”.
Advances in Computation and Intelligence, pp. 422–433, 2009.
53.T. M. Blackwell and J. Branke and X. Li, “Particle swarms for dynamic optimization
problems.” Swarm Intelligence. Springer Berlin Heidelberg,. pp. 193-217, 2008.
54.T. M. Blackwell and J. Branke, “Multi-swarms, exclusion and anti-convergence in dynamic
environments”. IEEE Transactions on Evolutionary Computation, Vol.10, pp.459–472,
2006.
55. J. Kennedy, R.C. Eberhart, "Particle Swarm Optimization", Proceedings of IEEE International Conference on Neural Networks, Piscataway, NJ, pp. 1942-1948, 1995.
56.M. Kamosi, A.B. Hashemi, M.R. Meybodi, “A New Particle Swarm Optimization
Algorithm for Dynamic Environments”. SEMCCO. pp. 129-138, 2010.
57. http://www.aifb.uni-karlsruhe.de/~jbr/MovPeaks/
58.M. Kamosi, Hashemi, A. B. and Meybodi, M. R., "A Hibernating
Multi-Swarm Optimization Algorithm for Dynamic Environments," in
Proceedings of World Congress on Nature and Biologically Inspired Computing,
NaBIC, Kitakyushu, Japan, 2010, pp. 370-376.
59. J. K. Kordestani, A. Rezvanian, and M. R. Meybodi, "CDEPSO: a bi-population hybrid
approach for dynamic optimization problems," Applied intelligence, vol. 40, pp. 682-694,
2014.
60. N. Baktash and M. R. Meybodi, "A New Hybrid Model of PSO and
ABC Algorithms for Optimization in Dynamic Environment," Int’l Journal of
Computing Theory Engineering, vol. 4, pp. 362-364, 2012.
