Nikolaos E. Karkalos
Angelos P. Markopoulos
J. Paulo Davim
Computational
Methods for
Application in
Industry 4.0
SpringerBriefs in Applied Sciences
and Technology
Series editor
Joao Paulo Davim, Aveiro, Portugal
More information about this series at http://www.springer.com/series/10623
Nikolaos E. Karkalos
Laboratory of Manufacturing Technology, School of Mechanical Engineering
National Technical University of Athens
Athens, Greece

Angelos P. Markopoulos
Laboratory of Manufacturing Technology, School of Mechanical Engineering
National Technical University of Athens
Athens, Greece

J. Paulo Davim
Department of Mechanical Engineering
University of Aveiro
Aveiro, Portugal
This Springer imprint is published by the registered company Springer International Publishing AG
part of Springer Nature
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
1.1 Introduction
Since the first industrial revolution, engineers have always sought to resolve problems related to the operation and maintenance of machinery. They have also aimed at improving the efficiency of manufacturing processes and, more generally, at the organization of production and related subjects. As might be anticipated, systematic approaches for the scientific study of industry-related problems were established and solutions were proposed. However, with the introduction of computers and the development of computational methods, a promising new era for solving industry-related problems emerged, as advanced computational techniques proved capable of providing approximate but sufficiently accurate solutions. In particular, optimization methods can be employed when it is desired to increase the efficiency of manufacturing processes by determining the optimum process parameters, or when the solution of hard production problems, such as scheduling, is required.
In advanced manufacturing, intelligence is a key element for future development and progress. Intelligent production is already incorporated into industrial practice to some extent; however, it is expected to play a major role in the near future and to affect manufacturing business globally, at every level, so that enterprises will be flexible enough to respond swiftly to production changes. One very important and contemporary concept related to advanced and intelligent manufacturing is Industry 4.0.
Industry 4.0 can be conceived as the merger of manufacturing technology methods and processes with Information Technology (IT) [1–3]. The concept of Industry 4.0, a term which actually refers to the “4th industrial revolution”, was introduced in the German-speaking countries in 2011 under the name “Industrie 4.0” [1, 3, 4] and has since become a topic of high interest in the field of industrial engineering. This industrial revolution was evoked by considerable advances in IT and electronics, such as advances in networks, the Internet, and embedded electronic systems.
The Internet of Things (IoT) is stated to be the “key enabler for Industry 4.0”, along with Cyber-Physical Systems (CPS) [1, 7, 15], as it directly enables the various constituents of the manufacturing system to integrate into the network of the modern factory and to interact by providing information and cooperating with the other components of this system, using technological features such as RFID, sensors, actuators, or other embedded electronics [3, 6, 7, 11].
The proper integration of IoT and CPS will eventually enable the creation of the “smart factory”. Specifically, smart factories are factories able to assist humans and machines in performing their tasks. This capability is offered by CPS, which can effectively interconnect the physical and virtual worlds in these factories [1]. Smart factories can achieve both vertically and horizontally integrated production systems, e.g., networked manufacturing systems within one smart factory and across multiple smart factories, respectively [5, 7]. Within smart factories, the goal of decentralized decision-making can be achieved by using CPS and IoT alongside big data analytics, and even artificial intelligence methods can be employed to help machines complete complex tasks and exhibit self-optimization and self-configuration capabilities [6]. Finally, within smart factories, end-to-end integration along the value chain of the product life cycle will also be possible [5].
Outside the German-speaking world, related concepts have been introduced in various technologically advanced countries and multinational entities, such as the United States (US), China, the European Union (EU), Japan, India, France, the United Kingdom (UK), South Korea, and Singapore [1, 2, 7, 11, 13]. More specifically, in the US there exist several initiatives close to the Industry 4.0 concept. For example, the “Industrial Internet” concept proposed the integration of complex physical machinery and embedded, networked systems in order to predict, control, and plan for the improvement of business and societal outcomes [7]. Other related concepts in the US include “Integrated Industry”, “Smart Industry”, and “Smart Manufacturing” [1, 2, 7, 13]. In China, the initiative “Made in China 2025” proposes an upgrade of industry, relevant to Industry 4.0, in order to achieve green, open, and shared development, alongside the “Internet Plus” initiative [2, 13]. The EU has also proposed an initiative entitled “Factories of the Future”, related to the Horizon 2020 programs [2, 13]. Apart from these countries and political entities, various large companies have taken similar initiatives, such as the Industrial Internet Consortium (IIC), a cooperation between AT&T, Cisco, General Electric, IBM, and Intel [13].
As is evident, Industry 4.0 relies on IT technologies to a significant degree. Apart from the implementation of the required infrastructure, e.g., hardware, embedded systems, and networks, it is also important to implement the required software, e.g., for monitoring or decision-making processes for the CPS. For that reason, it is possible to include algorithms and methods pertinent to soft computing, such as Artificial Neural Networks (ANN), or computational methods for optimization, such as Evolutionary Algorithms (EA), as these methods are capable of providing fast simulation models or of resolving industry-related problems efficiently. Especially in the case of hard problems such as the job shop scheduling problem, it is possible to include a rapidly converging optimization method as part of the decision-making system, which will receive real-time information from the physical processes in the industrial environment and provide reliable results. Thus, it is possible to enable a higher degree of reasoning for the CPS. During the last decades, nontraditional optimization techniques with generic capabilities, such as Genetic Algorithms (GA), Swarm Optimization Algorithms, and others, have been introduced to the field of industrial engineering in order to facilitate the solution of industry-related problems. These methods are presented in this book, with a view to informing interested readers of the available optimization techniques and assisting in the selection of an appropriate technique for each problem. Furthermore, the details regarding the computational efficiency of each technique, provided in each section of this book, are important to consider in the case of actual integration of these algorithms into the decision-making processes of the CPS.
As will be shown in this book, the field of optimization methods in industrial practice remains active and of great importance; it is thus necessary to acquire a general knowledge of the most useful methods and their applications in order to apply them efficiently in real-life situations. For this reason, computational methods which can be used by intelligent systems within the concept of Industry 4.0 have been gathered and presented. These methods include Evolutionary-Based and Swarm Intelligence-Based methods, as well as some further methods, such as the Simulated Annealing method. Each method is explained in its fundamental aspects, while notable bibliography is provided for further reading.

Specifically, the present book is structured as follows: in the current chapter, the general aspects of optimization are summarized and a brief literature survey is presented with a view to underlining the popularity and relevance of the described methods in the field of industrial engineering. In Chap. 2, several evolutionary-based metaheuristics, e.g., the Imperialist Competitive Algorithm, the Biogeography-Based Optimization method, and the Teaching-Learning-Based Optimization method, are presented, with emphasis on the most popular of them, namely the Genetic Algorithm. In Chap. 3, the Swarm Intelligence-Based methods, such as Particle Swarm Optimization, Artificial Bee Colony optimization, Ant Colony Optimization, and others, are described. Finally, in Chap. 4, other optimization techniques not pertinent to the aforementioned categories, such as the Simulated Annealing method, are presented.
engineering are assembly line worker assignment, the facility layout problem, the workforce allocation problem, cell formation and task scheduling, the vehicle routing problem, the job shop scheduling problem, single or parallel machine scheduling, and the optimal allocation of manufacturing processes, among others.
Metaheuristics can generally be classified into several categories according to some of their characteristics. In the present work, a classification is necessary, as many methods share common characteristics and it is more appropriate to present them successively. Although classifications with different criteria are possible, the classification performed in the present work distinguishes between classical Evolutionary-Based Metaheuristics and Swarm Intelligence-Based Metaheuristics by the following criterion: methods which involve the creation of offspring with improved characteristics at each generation are classified in the first category, whereas algorithms that involve the cooperation of individuals for the accomplishment of a common task are classified in the second category. Moreover, other important methods which should be presented separately are presented in Chap. 4. These methods either use a different type of metaphor than the majority of the methods described in the other sections, which are mostly bioinspired, population-based methods, or contain no metaphor at all. In fact, two of them are inspired by physics, namely Simulated Annealing and the Electromagnetism-like Mechanism, and two of them are not related to any real-life process, i.e., Tabu Search and Response Surface Methodology, the latter of which is not a metaheuristic. However, these methods play a considerable role in the solution of production-related optimization problems, as will be shown in the next subsection, and deserve mention.
It is worth mentioning that the field of metaheuristics is progressing very rapidly, and as new variants and hybrid algorithms emerge at a relatively high rate, it would be difficult to cover every metaheuristic method in detail. However, as the literature survey indicates, the methods presented cover a wide range of older and newer metaheuristics frequently utilized for industrial problems. In the current work, only algorithms which have been employed at least once for this type of problem are presented. Each method is briefly presented, aiming to inform readers about its basic characteristics; details on the significance of method parameters and notable variants are then described with a view to aiding the selection of an appropriate method for their needs. Before proceeding to the methods' description, a short literature survey is conducted with a view to informing readers about the significance and popularity of each of the methods presented afterwards.
In order to present the significance of the use of metaheuristics in the field of engineering, but also to demonstrate the popularity of each of these methods, a literature survey was conducted in journals of the field. In total, about 1500 scientific papers were found in the online databases of major publishers such as Elsevier, Springer, and Taylor & Francis during the literature survey. Results were obtained separately for each method presented in the current work and were then aggregated according to the general category of methods to which they belong. More specifically, in this survey, the search results for each method belonging to each of the major categories of methods presented in the current work are summed over the last 10 years, i.e., 2007–2017. In Fig. 1.1, the total results concerning the number of journal articles relevant to each category of methods are presented, whereas in Fig. 1.2, the detailed results for each year from 2007 to the present can be observed.
From Fig. 1.1, it can be seen that about one out of two papers concerning the application of metaheuristics to industrial engineering problems is related to evolutionary algorithms, and the rest of the papers are almost equally divided between the other two categories; namely, 767, 316, and 410 papers were found for each category, respectively. Thus, the genetic algorithm and other related methods, including several variants, are well suited to this type of problem, although they are not problem-specific methods. However, it should not be underestimated that the Swarm Intelligence (SI) methods, although generally more recent than the other types of methods, have enjoyed considerable popularity during the last decade, as have more “traditional” methods such as Simulated Annealing and Tabu Search, which were placed in the third category.
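The shares quoted above follow directly from the per-category counts; as a quick arithmetic check (using only the paper counts reported in this section):

```python
# Per-category paper counts reported for 2007-2017 (see Fig. 1.1).
papers = {"evolutionary": 767, "swarm intelligence": 316, "other": 410}

total = sum(papers.values())               # 1493 papers in total
ea_share = papers["evolutionary"] / total  # about one out of two
print(total, f"{ea_share:.1%}")            # prints: 1493 51.4%
```

The evolutionary-algorithm share of roughly 51% is what the text summarizes as "about one out of two papers".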
From the more detailed graph in Fig. 1.2, some of the aforementioned observations can be directly confirmed and other trends can be observed.

As for the EA-based algorithms, the results presented in Fig. 1.1 are again confirmed, as the number of relevant publications is higher every year than that of all the other types of methods, and almost every year they constitute about 50% of the total number of publications. Temporary increases and decreases in the number of such publications are observed, but about 60–80 publications are regularly reported every year. For the SI-based methods, it is interesting to see that the number of publications has risen considerably since 2007, especially after 2011–2012, and now fluctuates around 30 each year, with some exceptions. The number of publications using SI-based methods was initially lower than that of the two other categories, but since 2011 it has been almost equal to that of the category of other computational methods. Finally, the total number of publications has reached about 140 per year in recent years, implying that these methods constitute a considerable part of industrial engineering-related research.
References
1. Hermann M, Pentek T, Otto B (2016) Design principles for industrie 4.0 scenarios. In: 2016
49th Hawaii international conference on system sciences (HICSS). Koloa, USA, pp 3928–3937
2. Qian F, Zhong W, Du W (2017) Fundamental theories and key technologies for smart and
optimal manufacturing in the process industry. Engineering 3:154–160
3. Roblek V, Meško M, Krapež A (2016) A complex view of industry 4.0. SAGE Open
6:2158244016653987
4. Kagermann H, Lukas W-D, Wahlster W (2011) Industrie 4.0: Mit dem Internet der Dinge auf dem Weg zur 4. industriellen Revolution. VDI Nachrichten 13:11
5. Liu Y, Xu X (2016) Industry 4.0 and cloud manufacturing: a comparative analysis. J Manuf
Sci Eng 139:34701–34708
6. Aiman Kamarul Bahrin M, Othman F, Hayati Nor Azli N, Farihin Talib M (2016) Industry 4.0:
a review on industrial automation and robotic. Jurnal Teknologi 78:137–143
7. Thoben K-D, Wiesner S, Wuest T (2017) “Industrie 4.0” and smart manufacturing—a review
of research issues and application examples. Int J Autom Technol 11(1):4–16
8. Zhong RY, Xu X, Klotz E, Newman ST (2017) Intelligent manufacturing in the context of
industry 4.0: a review. Engineering 3:616–630
9. Lu Y (2017) Industry 4.0: a survey on technologies, applications and open research issues. J
Ind Inf Integr 6:1–10
10. Wang S, Wan J, Zhang D, Li D, Zhang C (2016) Towards smart factory for industry 4.0: a
self-organized multi-agent system with big data based feedback and coordination. Comput
Netw 101:158–168
11. Shrouf F, Ordieres J, Miragliotta G (2014) Smart factories in industry 4.0: a review of the
concept and of energy management approached in production based on the internet of things
paradigm. In: 2014 IEEE international conference on industrial engineering and engineering
management. Bandar Sunway, Malaysia, pp 697–701
12. Tamás P, Illes B, Dobos P (2016) Waste reduction possibilities for manufacturing systems in
the industry 4.0. IOP Conf Ser Mater Sci Eng 161(1):012074
13. Liao Y, Deschamps F, de Loures EFR, Ramos LFP (2017) Past, present and future of Industry
4.0—a systematic literature review and research agenda proposal. Int J Prod Res 55:3609–3629
14. Illés B, Tamás P, Dobos P, Skapinyecz R (2017) New challenges for quality assurance of
manufacturing processes in industry 4.0. Solid State Phenom 261:481–486
15. Tamás Péter, Illes B (2016) Process improvement trends for manufacturing systems in industry
4.0. Acad J Manuf Eng 14:119–125
16. Rao SS (2009) Introduction to optimization. Wiley, Hoboken
17. Lin M-H, Tsai J-F, Yu C-S (2012) A review of deterministic optimization methods in engineering and management. Math Probl Eng 2012:756023
18. Rothlauf F (2011) Design of modern heuristics: principles and application, 1st edn. Springer
Publishing Company Inc., Berlin
19. Wang X, Damodaran M (2000) Comparison of deterministic and stochastic optimization algo-
rithms for generic wing design problems. J Aircr 37:929–932
20. Talbi E-G (2009) Metaheuristics: from design to implementation. Wiley, Hoboken
Chapter 2
Evolutionary-Based Methods
2.1 Introduction
Among the most important metaheuristics of the first category presented in this work is the Genetic Algorithm (GA). It is a nature-inspired (biology-inspired) method pertinent to the process of natural selection and to the inheritance of characteristics through genes [1]. The method was initially developed by Holland [2], belongs to the greater family of evolutionary algorithms, and is considered one of the earliest metaheuristic methods. Because it is by far the most popular method in many fields of science, a considerable number of modifications and variants of the original algorithm have been proposed in the relevant literature, aiming at the amelioration of its performance.
The crossover operator is inspired by the processes of biological reproduction and crossover, as the new solution is derived from more than one “parent” solution belonging to older generations. This technique is actually the main exploration mechanism of the GA [5]. Several techniques exist for the implementation of the crossover operator, such as swapping data from parent solutions based on a single-point or two-point division of each solution vector; combinations of data from more than two parents can also be performed.
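As an illustration, the single-point and two-point variants described above can be sketched as follows (a minimal Python sketch; the list-based chromosome representation and function names are assumptions made here, not taken from the book):

```python
import random

def one_point_crossover(p1, p2):
    """Swap the tails of two parent chromosomes at one random cut point."""
    cut = random.randint(1, len(p1) - 1)
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

def two_point_crossover(p1, p2):
    """Exchange the middle segment between two random cut points."""
    a, b = sorted(random.sample(range(1, len(p1)), 2))
    return p1[:a] + p2[a:b] + p1[b:], p2[:a] + p1[a:b] + p2[b:]

parents = ([0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1])
child1, child2 = one_point_crossover(*parents)
```

Note that either variant only recombines existing genes; no gene values are created or lost, which is why a separate mutation operator is still needed for diversity.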
The mutation operator aims at creating a “genetically diverse” solution for the newer generation of the population by modifying some or all of the variables of a candidate solution vector. This process is similar to the biological mutation process, which alters the genes of a chromosome. This operator is also very important, as it can guarantee that new search areas will be visited, preventing the algorithm from stopping in the area of a local optimum. Usually, the occurrence of mutation for a chromosome (candidate solution) is governed by a predefined mutation probability. This factor regulates the occurrence of mutation so that it is best suited to each problem and phase of the optimization process.
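A common gene-wise realization of this operator can be sketched as follows (a hedged sketch for real-valued chromosomes; the uniform-replacement scheme and the default probability are illustrative choices, not prescribed by the book):

```python
import random

def mutate(chromosome, p_mut=0.05, low=0.0, high=1.0):
    """Gene-wise mutation: each variable is replaced by a fresh random
    value from [low, high] with probability p_mut, so unexplored regions
    of the search space can still be reached."""
    return [random.uniform(low, high) if random.random() < p_mut else gene
            for gene in chromosome]

original = [0.2, 0.4, 0.6, 0.8]
mutant = mutate(original, p_mut=0.5)
```

Raising `p_mut` pushes the search toward exploration, while lowering it preserves good solutions, which is exactly the tuning trade-off the text describes.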
in the population, as a small number of chromosomes will restrict a thorough search of the search space, whereas a large number of chromosomes will unnecessarily increase the computational cost. Another interesting feature of the GA is its inherent parallelism, which enables it to modify a set of solutions simultaneously [1]. For more details on parallel GA implementations, the work of Cantú-Paz [5], which includes a thorough analysis of a considerable number of parallel algorithms, is recommended.
GA has been employed in almost every domain pertinent to industrial engineering. Examples of such applications include the production of forged components [7], flow shop scheduling [8], the customer order scheduling problem [9], parallel machine scheduling [10], hybrid flow shops [11], assembly/disassembly manufacturing networks [12], and the supply chain problem [13]. For a more detailed view of the applications of GA, the interested reader is advised to consult relevant works in the literature, such as the comprehensive review by Chaudhry and Luo [14] or the work of Aytug et al. [15].
Apart from the features of GA discussed in the previous subsection, due to the popularity of the GA method and the fact that it constitutes one of the earliest metaheuristics, several variants of GA exist in the relevant literature. These variants add features to the original GA formulation or combine other heuristics and metaheuristics with GA in order to improve its performance. It is reported several times in the relevant literature that the simplest version of GA can exhibit premature convergence without such enhanced features [1]. As the number of such variants is large, it would be difficult to present an exhaustive list in the present work; however, some notable variants of GA are presented afterwards.
An interesting modification to the original GA is the use of adaptive parameter control to modify the GA parameters according to the evaluation of the candidate solutions based on some appropriate measure, such as one related to the convergence of the GA [16]. Usually, the parameters which can be adjusted are the crossover or mutation probabilities, with a view to preventing the algorithm from becoming trapped in local extrema. Another modified approach of the GA presented in the relevant literature is the concurrent GA, which can be employed in cases of coupled problems, where multiple objectives must be considered simultaneously [17]. At first, the whole optimization problem is replaced by a problem of finding feasible solutions. Then, for the same population of candidate solutions, genetic operation processes are performed in parallel. Afterwards, the new generations of solutions from each individual process are combined into a new set of offspring, which becomes the next population of candidate solutions. In fact, a number of chromosomes from each individual problem are retained, and the new set of offspring is created using elements from all these chromosomes. The Concurrent Genetic Algorithm (CGA) is implemented using real-number coding, and every decision variable from an individual process is used as one gene.
The multi-population concept has also been successfully introduced to GA [18]. This variant involves a hierarchical structure within the population, comprising clusters of individuals arranged in levels. Each cluster is composed of a leader, which is the best individual of the cluster, and supporters. Crossover is conducted between members of the same cluster, and new individuals replace the previous ones only if they are fitter. Additionally, a migration process occurs between all clusters, after which only the best and the migrated individuals are retained, whereas all other individuals are replaced by new ones. This process is conducted after the populations in the clusters have converged and new (better) individuals are no longer being inserted.
For bi-level problems, an approach called coevolutionary GA is often adopted [19]. This type of problem involves two levels: the second or lower level (“follower”) is in fact part of the constraints of the main or upper-level (“leader”) problem. In some cases, the coevolutionary scheme is applied to different subpopulations within the same search space, but applying it to different search spaces is also possible [19]. In the latter approach, the coevolutionary GA is hierarchical, applied at two different levels, namely the leader and follower search spaces. For the search in the follower search space, instead of solving the exact problem, an approximate solution is first computed and then some “good solutions” are evolved by a modified GA, ensuring feasibility while reducing the computational cost considerably.
Coevolution algorithms can be further divided into two categories depending on whether the subpopulations compete or cooperate [20]. By dividing the total population into subpopulations, the search space for each subpopulation becomes significantly smaller and can be searched more easily. Another multi-population coevolution method is described in [21].
Furthermore, Rajkumar and Shahabudeen [1] proposed the use of the Nawaz-Enscore-Ham (NEH) heuristic alongside classical randomized initialization, the use of several crossover operators, each with a different probability, and a set of different mutation operators. In order to avoid premature convergence, they employed an elitist strategy and a hypermutation strategy, which increased the mutation probability to further continue the search procedure. Finally, Shukla et al. [22] presented and compared the application of five variants of evolutionary algorithms to the inventory routing problem.
Finally, in some cases it is usual to hybridize metaheuristics with other similar techniques or soft computing methods in order to produce methods with enhanced capabilities. Several hybridizations of GA, i.e., combinations with other methods, exist, such as combinations with the Simulated Annealing method, Particle Swarm Optimization, or Response Surface Methodology [23], as well as combinations with soft computing methods such as Artificial Neural Networks and Fuzzy Logic Systems, which can result in effective and low-cost optimization frameworks.
Although the genetic algorithm is by far the most popular evolutionary algorithm, there also exist some important evolutionary algorithms closely related to GA which are worth mentioning. Some of them, such as Differential Evolution and the Memetic Algorithm, will be presented briefly afterwards.
different points around the imperialist [44]; larger deviations enhance global search, whereas smaller deviations favor local search [45]. The revolution operator is used to generate random changes in some countries, e.g., to replace the weakest colony with a random new solution [41]. This process favors exploration and can lead to the avoidance of local optima in the early stages of the optimization process [45], but it was not included in the original version of the algorithm [41].
After this step, the total power of each empire is calculated, and the competition operator is then used by strong imperialist empires to take over the colonies of weak empires [40, 45], see Fig. 2.2; in fact, the weakest colony of the weakest empire is released, and the other empires can capture it according to a probability value related to their power [41]. During the assimilation process, if a colony is fitter than the imperialist, it replaces it. When weak empires lose all their colonies, they are eliminated, and when the colonies of the last remaining empire are almost equally strong, the algorithm stops [41, 46]. Alternatively, the algorithm can terminate after a maximum number of iterations is reached [41]. The important parameters of ICA and their effects on the algorithm are presented thoroughly in [45]. Generally, ICA can be viewed as equivalent to the GA, as it represents the evolution of human societies instead of biological evolution.
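The competition step just described can be sketched roughly as follows; the dictionary-based empire representation and the power-proportional roulette selection are illustrative assumptions made here, not the book's own implementation:

```python
import random

def imperialistic_competition(empires):
    """Release the weakest colony of the weakest empire and let another
    empire capture it with probability proportional to its power.
    Each empire is assumed to be a dict with a scalar 'power' and a
    'colonies' list ordered so that the weakest colony comes last."""
    weakest_empire = min(empires, key=lambda e: e["power"])
    if not weakest_empire["colonies"]:
        return  # a colonyless empire would be eliminated instead
    colony = weakest_empire["colonies"].pop()      # released colony
    rivals = [e for e in empires if e is not weakest_empire]
    weights = [e["power"] for e in rivals]
    winner = random.choices(rivals, weights=weights)[0]
    winner["colonies"].append(colony)
```

Repeating this step drains weak empires of their colonies, which is what eventually triggers their elimination in the full algorithm.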
The ICA algorithm has also been used for the solution of multi-objective problems after the necessary modifications [47]. In the work of Karimi et al. [46], a term used in the electromagnetism-like mechanism method is chosen for the implementation of movement during the assimilation step, and the Taguchi method was employed in order to determine the values of the algorithm parameters. In another work [43], an elitist approach is followed by copying the imperialists (best solutions) to the next generation, and an additional regrouping step is added, which aims at the increase
Fig. 2.2 Weakest colony takeover by the strongest empire [45] (reproduced with permission)
Finally, the mutation operator accounts for tremendous changes in the habitat due to several possible causes, and can modify several SIVs of a solution, functioning in a similar way to the mutation operator of the GA (exploration) [52]. This operator can be implemented in three different ways [56]. Mutation is governed by the solution probability value; the lowest solution probability is exhibited by solutions with very high or very low HSI, as both are considered extreme cases, and consequently they are more likely to mutate, increasing diversity [51, 54]. For low-HSI solutions, this approach is advantageous as it enables them to evolve into better ones, and for high-HSI solutions, it gives them a chance to further improve their values [51]. However, an elitist feature in the algorithm ensures that high-HSI solutions will be carried over to the next generation by setting their immigration rate to zero.
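The inverse relation between solution probability and mutation rate can be written compactly. The linear form below follows the commonly used BBO formulation; the symbol names and the default maximum rate are choices made here for illustration:

```python
def bbo_mutation_rate(p_solution, p_max_solution, m_max=0.01):
    """BBO mutation rate sketch: m(s) = m_max * (1 - P(s) / P_max).
    Habitats with a low membership probability P(s), i.e. the very best
    and the very worst ones, receive the highest mutation rate, which
    increases population diversity."""
    return m_max * (1.0 - p_solution / p_max_solution)

# An extreme (improbable) habitat mutates at the full rate ...
rate_extreme = bbo_mutation_rate(p_solution=0.0, p_max_solution=0.2)  # 0.01
# ... while the most probable (mid-HSI) habitat is never mutated.
rate_typical = bbo_mutation_rate(p_solution=0.2, p_max_solution=0.2)  # 0.0
```

In a full implementation, the elitist feature mentioned above would exempt the best habitats from this rule despite their low membership probability.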
An improved BBO algorithm was proposed by Lin [52]. In this approach, initialization is performed by the Opposition-Based Learning (OBL) method and the NEH heuristic, a local search method is added in the migration step, and the Variable Local Search (VLS) mechanism is used to modify the mutation operator. The latter is used so as not to produce a completely random solution, which would decrease the HSI value of a habitat considerably. In the work of Paslar et al. [54], the migration operator was applied separately to each solution vector, a penalty function method was used in order to handle constraints, and a modified habitat update procedure was implemented. For this algorithm, several variants as well as hybridizations with other metaheuristics exist, such as with Particle Swarm Optimization and the Artificial Bee Colony algorithm.
BBO has been applied several times in the field of industrial engineering, for example to permutation flow shop scheduling [52, 57], flexible manufacturing system scheduling [54], minimization of nonproductive time [56], and supply chain network design [58].
represents the teacher. The procedure is divided into two main parts, the teaching
and the learning phase. The teacher is a highly qualified person, attempting to transmit
his knowledge to the students so as their mean level of knowledge approaches his
level (teacher phase). It is considered that the better the quality of teaching, the
greater will be the influence on the students’ performance as the teacher will bring the
22 2 Evolutionary-Based Methods
students closer to his level [59]. During this process, the students (candidate
solutions) are updated according to the difference between their average performance
and the teacher's performance, as seen in Eq. 2.1:

X_new = X_old + r ∗ (X_teacher − T_F ∗ Mean) (2.1)
In Eq. 2.1, X_new denotes the new solution for a learner (student), X_old denotes the
previous solution of the same learner, X_teacher is the solution corresponding to the
teacher, r is a random number in the range [0,1], T_F is the teaching factor, which
can be interpreted as a heuristic step and can take the value 1 or 2, and Mean is the
mean of the group of learners at the current iteration. If the new solution is better
than the previous one, it is accepted; otherwise, the old value is retained [61]. The use of
random variables in the calculation of the new solution implies that there are students
who learn the subjects to a high degree, students who are mediocre, and students
who are not affected by the teacher, whose only hope is to learn afterwards from
good students [62].
Moreover, students can also learn from other students, as they discuss their
subjects and study in groups; this is called the learning phase in TLBO.
In fact, students learn from other students with more knowledge: two students
are randomly chosen and compared, and the new solution, calculated based on the
difference of their levels, is accepted if it brings an improvement
over the old one [59]. The learning phase can be easily implemented as follows:
X_new = X_old + r ∗ (X_i − X_j) (2.2)
In Eq. 2.2, it is evident that if learner j has a better performance than learner i, the two terms
in the parenthesis are swapped to (X_j − X_i). Finally, the best student becomes the teacher
at the end of the iteration. These processes of teaching and learning are repeated
until the termination criteria are met. Generally, the method is relatively simple in its
standard version, apart from the typical parameters, such as the population size, the
maximum number of iterations, and the random variables, which regulate the movement
of the students' knowledge level towards that of the teacher and of the best students [63].
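The teaching and learning phases above can be condensed into a few lines of Python; this is an illustrative minimisation sketch with our own function names, not code from [59]:

```python
import random

def tlbo_step(population, fitness):
    """One TLBO iteration (minimisation), following Eqs. 2.1 and 2.2.

    `population` is a list of lists (the learners); `fitness` maps a
    solution to a scalar to be minimised.
    """
    dim = len(population[0])
    # --- teacher phase (Eq. 2.1): move learners towards the teacher ---
    teacher = min(population, key=fitness)
    mean = [sum(x[d] for x in population) / len(population) for d in range(dim)]
    for i, old in enumerate(population):
        tf = random.choice((1, 2))        # teaching factor T_F, 1 or 2
        r = random.random()
        new = [old[d] + r * (teacher[d] - tf * mean[d]) for d in range(dim)]
        if fitness(new) < fitness(old):   # greedy acceptance
            population[i] = new
    # --- learner phase (Eq. 2.2): learn from a random classmate ---
    for i, old in enumerate(population):
        other = population[random.randrange(len(population))]
        r = random.random()
        if fitness(old) < fitness(other):
            new = [old[d] + r * (old[d] - other[d]) for d in range(dim)]
        else:
            new = [old[d] + r * (other[d] - old[d]) for d in range(dim)]
        if fitness(new) < fitness(old):
            population[i] = new
    return population
```

The greedy acceptance in both loops mirrors the rule that a new solution is kept only if it is better than the old one, so the best fitness in the population can never deteriorate between iterations.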
As in the case of other methods, modified versions of this method have also been
proposed. In an improved approach presented in [60], a training phase is added before
the teaching phase, in order to improve the quality of teachers. This approach also
considers more than one teacher in the population and two additional parameters,
namely the proportion of teachers in the population (η) and the training intensity
(ls ). The training phase is related to a local search procedure in the neighborhoods of
good solutions-teachers, which aims to intensify the search in favorable regions. In the
work of Rao [61] an elitist approach is also reported, in which the better solutions are
kept in the next generation as well. Moreover, in another work, modifications of the
original TLBO based on neighborhood search are presented [63]. First, “learning
groups” consisting of several learners are formed and, consequently, changes to
the original teaching and learning phases are made. As for the teaching stage, a
hybridization between the TLBO teaching phase and a Gaussian sampling learning
2.6 Teaching-Learning-Based Optimization Method 23
based on neighborhood search is proposed. Similarly, during the learning stage, there
is a possibility of choosing either the TLBO learning phase or a neighborhood search,
which is equivalent to learning from the “neighborhood” teacher. These modifications
aim to improve the balance between global and local search. Furthermore, in the
Elitist-TLBO approach, a mutation operator is added to improve some of the flaws
of the original TLBO [64]. A more detailed analysis of TLBO algorithm is conducted
in [62].
Some of the applications of TLBO in engineering problems include hybrid flow
shop scheduling [59], optimal allocation of manufacturing resources [64], and assem-
bly line balancing [65].
to contribute more to the evolution of new ideas in the community [70], and
this observation is used to determine the individuals inside each sub-memeplex,
as higher ranked frogs have a higher probability of being selected [74].
The changes of meme components are related to leaping steps, which lead the frogs to
new positions. The frog communities are able to evolve separately, but after some
iterations a mixing (shuffling) process between individuals of different communities
is performed in order to increase the quality of the memes [70]. Furthermore, inside
each complex, sub-complexes are formed, usually by more than two individuals,
which act as parents and produce new solutions. This process corresponds to the
aforementioned local search process. The parents are ranked and there exists a chance
to produce new offspring originating from them or produce a random offspring. Then,
the offspring can replace the worst individual of the sub-complex [72]. When this
process is finished, evaluation of fitness of all frogs occurs, after which the new
global best is determined.
In fact, after the first generation is created, the individuals of every sub-complex
(sub-memeplex) are evaluated and the best and worst of each sub-complex, denoted
as X b and X w , respectively, as well as the global best individual, denoted as X g ,
are determined. Only the worst individual of the sub-complex can change during
each cycle, according to the best of the sub-complex or the global best, but there
is also a possibility that a random solution is produced. More specifically, an
attempt is first made to improve the worst solution according to the local best; if
the result is not acceptable, a second attempt is made according to the global best,
and finally, if no improvement is observed, a random solution is generated [72]. The improvement
according to local best solution is performed as follows:
D_i = rand() ∗ (X_b − X_w) (2.3)

X_new,w = X_old,w + D_i (2.4)
In Eqs. 2.3 and 2.4, D_i represents the frog-leaping step size for the i-th frog and lies
in the range [−D_max, D_max], where D_max is the maximum allowed step. rand() is
a random number in the range [0,1], and X_new,w and X_old,w are the new and the
previous solution of the worst frog, respectively. As mentioned
above, if X_new,w is not an improved solution, X_b is replaced in Eq. 2.3 by X_g.
After several cycles are completed, the shuffling process starts [73]. The shuffling
process corresponds to a global information exchange and afterwards sorting of
individuals is conducted and new memeplexes are produced. Finally, the iterative
process ends when the termination criteria are met. Owing to the influence of the
Memetic Algorithm and its Swarm Intelligence characteristics, SFLA is considered
to benefit from both types of optimization methods [72].
In [73], the addition of a genetic mutation operator was proposed after the shuffling
process takes place in order to avoid premature convergence of the algorithm and
escape local optima. Furthermore, in [75], some modifications to enhance the speed
of SFLA are proposed, such as the simultaneous update of each frog according to the
local and global best instead of updating only the worst frog, the creation of new
memeplexes not after every iteration, and the replacement of the random update option
for the worst individual by a different strategy. In the work of Mora-Melia et al.
[71], the number of frogs inside the sub-memeplexes is defined more generally, as
a percentage rather than an integer, and leaping steps are allowed to be even larger than
the maximum leaping step in order to enhance the ability to escape local optima.
For the interested reader, some other variants of the SFLA are reported in [75] and an
analysis of the importance of each parameter of the SFLA is presented in [71]. Some
applications of the SFLA in the field of engineering include long-term generation
maintenance scheduling [72], traveling salesman problem [74], gray project selection
scheduling [76], and multiprocessor scheduling [77].
Swimming is performed when the current value of J is better than the previous one;
the bacterium keeps moving in the same direction, up to a maximum number of
movements. After the tumbling and swimming operations, the fitness of each
bacterium is updated [80]. A special term
in the calculation of J serves for the representation of variations in attractiveness
of regions with different number of bacteria and for the prevention of two bacteria
having the same position using attraction and repellent terms [78, 81]. The swarming
step involves the aggregation of bacteria into groups and movement in concentric
paths [79]. Next, the reproduction steps involve the creation of the new generation
of bacteria; after the sorting of bacteria according to their fitness, the least healthy
bacteria die and the healthier can split into two individuals and produce new members
of the population in order to keep its size constant [79, 80]. Then, there is a possibility
that gradual or sudden changes occur in the population of bacteria, so that
some random bacteria die and are replaced by others inside the search space; this
strategy is used to prevent the algorithm from getting stuck in local optima [79].
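A minimal sketch of the tumble-and-swim step for a single bacterium might look as follows; the swarming (attraction and repellent) terms and the reproduction and elimination-dispersal events described above are omitted, and all names are our own:

```python
import math
import random

def chemotaxis(theta, cost, step_size=0.1, max_swim=4):
    """One chemotactic move of a single bacterium (minimisation).

    Tumble: pick a random unit direction. Swim: keep stepping in that
    direction while the cost J improves, up to `max_swim` steps. This is
    a simplified single-bacterium sketch of the step described above.
    """
    direction = [random.gauss(0.0, 1.0) for _ in theta]
    norm = math.sqrt(sum(d * d for d in direction)) or 1.0
    direction = [d / norm for d in direction]            # tumble
    j_last = cost(theta)
    for _ in range(max_swim):
        trial = [t + step_size * d for t, d in zip(theta, direction)]
        j_new = cost(trial)
        if j_new < j_last:                               # swim while improving
            theta, j_last = trial, j_new
        else:
            break
    return theta, j_last
```

Because a trial step is accepted only when it lowers J, the returned cost is never worse than the starting one, which is exactly the swim rule stated in the text.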
For more details on the algorithm, the interested reader is advised to consult the thorough
study of the BFO algorithm by Das et al. [79]. In the years since the introduction
of BFO, several modifications of this method have also been proposed. Kasaiezadeh
et al. [82] proposed a spiral bacterial foraging method in order to address deficiencies
of the original BFO such as premature convergence, need to tune algorithm param-
eters, and low speed of convergence. In this approach, gradient-based operators and
a multi-agent structure, which inherently possesses Swarm Intelligence character-
istics, are also employed to ameliorate the efficiency of BFO. Zhao et al. [83, 84]
added a Differential Evolutionary mutation operator to the chemotaxis step in order
to improve the motion of bacteria, if tumble fails. They also added a chaotic local
search process before the reproduction step, in order to further avoid local optima.
Muñoz et al. [81] noted the complexity of BFO regarding its nested architecture and
various parameters involved and proposed a simplified version of BFO with different
initialization processes and removal of cell-to-cell communication feature. Finally,
Li et al. [85] proposed a variant of BFO with varying population, which involves
the use of a more detailed model for the description of bacteria foraging process.
Some of the applications of the BFO algorithm in the field of production engineering
include task scheduling in cellular manufacturing systems [80], permutation flow shop
scheduling [84], and job shop scheduling [81].
References
1. Rajkumar R, Shahabudeen P (2009) An improved genetic algorithm for the flowshop scheduling
problem. Int J Prod Res 47:233–249
2. Holland JH (1992) Adaptation in natural and artificial systems: an introductory analysis with
applications to biology, control and artificial intelligence. MIT Press, Cambridge
3. Wang L, Tang D (2011) An improved adaptive genetic algorithm based on hormone modulation
mechanism for job-shop scheduling problem. Expert Syst Appl 38:7243–7250
4. Goldberg DE (1989) Genetic algorithms in search, optimization and machine learning.
Addison-Wesley Longman Publishing Co., Inc., Boston
5. Cantú-Paz E (1998) A survey of parallel genetic algorithms. Calc paralleles, Reseaux Syst
Repartis 10:141–171
6. Zhou A, Qu B-Y, Li H, Zhao S-Z, Suganthan PN, Zhang Q (2011) Multiobjective evolutionary
algorithms: a survey of the state of the art. Swarm Evol Comput 1:32–49
7. Denkena B, Behrens B-A, Charlin F, Dannenberg M (2012) Integrative process chain opti-
mization using a genetic algorithm. Prod Eng 6:29–37
8. Cui W-W, Lu Z, Zhou B, Li C, Han X (2016) A hybrid genetic algorithm for non-permutation
flow shop scheduling problems with unavailability constraints. Int J Comput Integr Manuf
29:944–961
9. Liu C-H (2009) Lot streaming for customer order scheduling problem in job shop environments.
Int J Comput Integr Manuf 22:890–907
10. Woo Y-B, Jung S, Kim BS (2017) A rule-based genetic algorithm with an improvement heuris-
tic for unrelated parallel machine scheduling problem with time-dependent deterioration and
multiple rate-modifying activities. Comput Ind Eng 109:179–190
11. Cho H-M, Jeong I-J (2017) A two-level method of production planning and scheduling for
bi-objective reentrant hybrid flow shops. Comput Ind Eng 106:174–181
12. Nahas N, Nourelfath M, Gendreau M (2014) Selecting machines and buffers in unreliable
assembly/disassembly manufacturing networks. Int J Prod Econ 154:113–126
13. Diabat A, Al-Salem M (2015) An integrated supply chain problem with environmental con-
siderations. Int J Prod Econ 164:330–338
14. Chaudhry SS, Luo W (2005) Application of genetic algorithms in production and operations
management: a review. Int J Prod Res 43:4083–4101
15. Aytug H, Khouja M, Vergara FE (2003) Use of genetic algorithms to solve production and
operations management problems: a review. Int J Prod Res 41:3955–4009
16. Akgündüz OS, Tunalı S (2010) An adaptive genetic algorithm approach for the mixed-model
assembly line sequencing problem. Int J Prod Res 48:5157–5179
17. Huang H, Wang Z (2010) Solving coupled task assignment and capacity planning problems
for a job shop by using a concurrent genetic algorithm. Int J Prod Res 48:7507–7522
18. Toledo CFM, França PM, Morabito R, Kimms A (2009) Multi-population genetic algorithm to
solve the synchronized and integrated two-level lot sizing and scheduling problem. Int J Prod
Res 47:3097–3119
19. Li H, Fang L (2014) Co-evolutionary algorithm: an efficient approach for bilevel programming
problems. Eng Optim 46:361–376
20. Maneeratana K, Boonlong K, Chaiyaratana N (2005) Co-operative co-evolutionary genetic
algorithms for multi-objective topology design. Comput Aided Des Appl 2:487–496
21. Yu B, Zhao H, Xue D (2017) A multi-population co-evolutionary genetic programming
approach for optimal mass customisation production. Int J Prod Res 55:621–641
22. Shukla N, Tiwari MK, Ceglarek D (2013) Genetic-algorithms-based algorithm portfolio for
inventory routing problem with stochastic demand. Int J Prod Res 51:118–137
23. Kucukkoc I, Karaoglan AD, Yaman R (2013) Using response surface design to determine the
optimal parameters of genetic algorithm and a case study. Int J Prod Res 51:5039–5054
24. Storn R, Price K (1997) Differential evolution—a simple and efficient heuristic for global
optimization over continuous spaces. J Glob Optim 11:341–359
25. Gnanavel Babu A, Jerald J, Noorul Haq A, Muthu Luxmi V, Vigneswaralu TP (2010) Scheduling
of machines and automated guided vehicles in FMS using differential evolution. Int J Prod Res
48:4683–4699
26. Peng W, Huang M (2014) A critical chain project scheduling method based on a differential
evolution algorithm. Int J Prod Res 52:3940–3949
27. Neri F, Tirronen V (2010) Recent advances in differential evolution: a survey and experimental
analysis. Artif Intell Rev 33:61–106
28. Nourmohammadi A, Zandieh M (2011) Assembly line balancing by a new multi-objective
differential evolution algorithm based on TOPSIS. Int J Prod Res 49:2833–2855
29. Chen H, Zhou S, Li X, Xu R (2014) A hybrid differential evolution algorithm for a two-stage
flow shop on batch processing machines with arbitrary release times and blocking. Int J Prod
Res 52:5714–5734
53. Rabiee M, Jolai F, Asefi H, Fattahi P, Lim S (2016) A biogeography-based optimisation algo-
rithm for a realistic no-wait hybrid flow shop with unrelated parallel machines to minimise
mean tardiness. Int J Comput Integr Manuf 29:1007–1024
54. Paslar S, Ariffin MKA, Tamjidy M, Hong TS (2015) Biogeography-based optimisation for
flexible manufacturing system scheduling problem. Int J Prod Res 53:2690–2706
55. Mukherjee R, Chakraborty S (2012) Selection of EDM process parameters using biogeography-
based optimization algorithm. Mater Manuf Process 27:954–962
56. Tamjidy M, Paslar S, Baharudin BTHT, Hong TS, Ariffin MKA (2015) Biogeography based
optimization (BBO) algorithm to minimise non-productive time during hole-making process.
Int J Prod Res 53:1880–1894
57. Lin J, Zhang S (2016) An effective hybrid biogeography-based optimization algorithm for the
distributed assembly permutation flow-shop scheduling problem. Comput Ind Eng 97:128–136
58. Yang G-Q, Liu Y-K, Yang K (2015) Multi-objective biogeography-based optimization for
supply chain network design under uncertainty. Comput Ind Eng 85:145–156
59. Rao RV, Savsani VJ, Vakharia DP (2011) Teaching–learning-based optimization: a novel
method for constrained mechanical design optimization problems. Comput Des 43:303–315
60. Shen J, Wang L, Zheng H (2016) A modified teaching–learning-based optimisation algorithm
for bi-objective re-entrant hybrid flowshop scheduling. Int J Prod Res 54:3622–3639
61. Rao RV (2016) Teaching learning based optimization algorithm: and its engineering applica-
tions. Springer International Publishing, Switzerland
62. Črepinšek M, Liu S-H, Mernik L (2012) A note on teaching–learning-based optimization
algorithm. Inf Sci 212:79–93
63. Zou F, Wang L, Hei X, Chen D, Jiang Q, Li H (2014) Bare-bones teaching-learning-based
optimization. Sci World J 136920
64. Zhang W, Zhang S, Guo S, Yang Y, Chen Y (2017) Concurrent optimal allocation of distributed
manufacturing resources using extended teaching-learning-based optimization. Int J Prod Res
55:718–735
65. Tuncel G, Aydin D (2014) Two-sided assembly line balancing using teaching–learning based
optimization algorithm. Comput Ind Eng 74:291–299
66. Nara K, Takeyama T, Kim H (1999) A new evolutionary algorithm based on sheep flocks hered-
ity model and its application to scheduling problem. In: 1999 IEEE international conference
on systems, man, and cybernetics, Tokyo, Japan, pp 503–508
67. Kim H, Ahn B (2001) A new evolutionary algorithm based on sheep flocks heredity model.
In: 2001 IEEE Pacific Rim conference on communications, computers and signal processing,
Victoria, BC, Canada, pp 514–517
68. Chakaravarthy GV, Marimuthu S, Ponnambalam SG, Kanagaraj G (2014) Improved sheep
flock heredity algorithm and artificial bee colony algorithm for scheduling m-machine flow
shops lot streaming with equal size sub-lot problems. Int J Prod Res 52:1509–1527
69. Anandaraman C (2011) An improved sheep flock heredity algorithm for job shop scheduling
and flow shop scheduling problems. Int J Ind Eng Comput 2(4):749–764
70. Eusuff M, Lansey K, Pasha F (2006) Shuffled frog-leaping algorithm: a memetic meta-heuristic
for discrete optimization. Eng Optim 38:129–154
71. Mora-Melia D, Iglesias-Rey P, Martínez-Solano F, Muñoz-Velasco P (2016) The efficiency of
setting parameters in a modified shuffled frog leaping algorithm applied to optimizing water
distribution networks. Water 8:182
72. Samuel GG, Rajan CCA (2014) A modified shuffled frog leaping algorithm for long-term
generation maintenance scheduling. In: Pant M, Deep K, Nagar A, Bansal JC (eds) Proceedings
of the third international conference on soft computing for problem solving, Springer India,
New Delhi, pp 11–24
73. Bhattacharjee KK, Sarmah SP (2014) Shuffled frog leaping algorithm and its application to
0/1 knapsack problem. Appl Soft Comput 19:252–263
74. Luo X, Yang Y, Li X (2008) Solving TSP with shuffled frog-leaping algorithm. In: 2008 eighth
international conference on intelligent systems design and applications, Kaohsiung, Taiwan,
pp 228–232
75. Wang L, Gong Y (2013) A fast shuffled frog leaping algorithm. In: 2013 ninth international
conference on natural computation (ICNC), Shenyang, China, pp 369–373
76. Amirian H, Sahraeian R (2017) Solving a grey project selection scheduling using a simulated
shuffled frog leaping algorithm. Comput Ind Eng 107:141–149
77. Tripathy B, Dash S, Padhy SK (2015) Multiprocessor scheduling and neural network training
methods using shuffled frog-leaping algorithm. Comput Ind Eng 80:154–158
78. Passino KM (2002) Biomimicry of bacterial foraging for distributed optimization and control.
IEEE Control Syst 22:52–67
79. Das S, Biswas A, Dasgupta S, Abraham A (2009) Bacterial foraging optimization algorithm:
theoretical foundations, analysis, and applications. In: Abraham A, Hassanien A-E, Siarry P,
Engelbrecht A (eds) Foundations of computational intelligence volume 3: global optimization.
Springer, Berlin, Heidelberg, pp 23–55
80. Liu C, Wang J, Leung JY-T, Li K (2016) Solving cell formation and task scheduling in cellular
manufacturing system by discrete bacteria foraging algorithm. Int J Prod Res 54:923–944
81. Muñoz MA, Halgamuge SK, Alfonso W, Caicedo EF (2010) Simplifying the bacteria foraging
optimization algorithm. In: IEEE congress on evolutionary computation, Barcelona, Spain, pp
1–7
82. Kasaiezadeh A, Khajepour A, Waslander SL (2014) Spiral bacterial foraging optimization
method: Algorithm, evaluation and convergence analysis. Eng Optim 46:439–464
83. Zhao F, Jiang X, Zhang C, Wang J (2015) A chemotaxis-enhanced bacterial foraging algorithm
and its application in job shop scheduling problem. Int J Comput Integr Manuf 28:1106–1121
84. Zhao F, Liu Y, Shao Z, Jiang X, Zhang C, Wang J (2016) A chaotic local search based bacterial
foraging algorithm and its application to a permutation flow-shop scheduling problem. Int J
Comput Integr Manuf 29:962–981
85. Li MS, Ji TY, Tang WJ, Wu QH, Saunders JR (2010) Bacterial foraging algorithm with varying
population. Biosystems 100:185–197
Chapter 3
Swarm Intelligence-Based Methods
3.1 Introduction
The term “Swarm Intelligence” refers directly to the collective behavior of a group
of animals, which follow very basic rules, or to an Artificial Intelligence approach,
which aims at the solution of a problem using algorithms based on collective behavior
of social animals. Over the past three decades, several algorithms based on the observation
of the behavior of groups of animals have been developed, such as Particle Swarm
Optimization, which stems from the observation of flocks of birds. Some of the most established
Swarm Intelligence (SI) methods include the Ant Colony Optimization method, the
Harmony Search method, and the Artificial Bee Colony algorithm. From the short
literature survey which was conducted and presented in Chap. 1, it was derived that
Particle Swarm Optimization is by far the most popular method, followed by Ant
Colony Optimization, Artificial Bee Colony, and Harmony Search.
Due to the large number of metaphor-based methods developed in the category
of SI-based methods, several scientists have expressed concerns about the novelty
of some of them, and thus the use of these methods and the derivation of new ones have
also attracted criticism [1]. In the present work, methods which have been employed
at least a couple of times in the field of engineering, as well as their variants, are
presented.
Kennedy and Eberhart [2]. According to its inventors, this method has ties to artifi-
cial life and swarming theory, but also possesses elements of evolutionary methods.
In fact, this method was created based on sociobiological observations of bird
behavior, and these basic observations were then transformed into an algorithm. Birds
always travel in groups, without collisions between them, and adjust their position
and velocity because this reduces the effort needed to search for food and appropriate
shelter [2, 3].
The basic concept behind the PSO method is that the whole set of particles, which
actually represent the candidate solutions, is displaced in the search space according
to each particle's best known position and the best known position of the swarm, in
order to approach the global optimum point. Particles are assumed to have no mass
or volume, and the update of their positions is performed using fundamental equations
from physics, with some additional elements to ensure that the basic functions
of metaheuristics are performed [4], see Fig. 3.1. It is to be noted that PSO and SI
methods differ from other population-based algorithms such as Genetic Algorithms
and other Evolutionary-Based Algorithms because generally the total number of
population members does not change, but information is shared between members,
so as they can be directed towards the optimum point [5].
At first, a randomly initialized population of n particles (candidate solutions), which
can move in the multidimensional search space with velocities updated according to
personal and global best values, is generated [5]. The number of particles is problem
Fig. 3.1 Swarm updates location in each iteration [7] (reproduced with permission)
3.2 Particle Swarm Optimization 35
dependent and usually no rules are used to determine it [3]. A local search may also
be applied to a group of particles to enhance the exploitation capabilities [5].
The particle position (X i ) and velocity (V i ) are updated as follows:
V_i^(t+1) = w ∗ V_i^t + c1 ∗ rand() ∗ (P_i^t − X_i^t) + c2 ∗ rand() ∗ (P_g^t − X_i^t) (3.1)

X_i^(t+1) = X_i^t + V_i^(t+1) (3.2)
As can be seen from Eq. 3.1, three different terms are applied simultaneously
in order to determine the new position of a particle [6, 7]. The first term of Eq. 3.1
represents the current speed of the particle (the inertia of the movement) and can be
balanced by a weight coefficient w. The second term is the cognition term,
which gives the swarm the ability to search the whole search space and
avoid local minima, based on each particle's own previous best value P_i^t, i.e., its experience [3].
The third term is called the social term; it reflects the information shared between
the swarm and the particles and leads them towards good solutions [8]. The two constants
in the last two terms, i.e., c1 and c2, regulate the effect of the personal and global best
values (P_i^t and P_g^t) on the outcome and are also called acceleration factors [3, 6]. Finally, the
rand() functions return a number in the range [0,1]. The average velocity is shown to
decrease when approaching a good solution, and it is larger for large-scale problems
[3].
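Equations 3.1 and 3.2 translate almost directly into code; the sketch below updates a single particle, with typical (assumed) coefficient values rather than values prescribed by the text:

```python
import random

def pso_update(x, v, p_best, g_best, w=0.7, c1=1.5, c2=1.5):
    """Velocity and position update of Eqs. 3.1 and 3.2 for one particle.

    `x`, `v`, `p_best` are the particle's position, velocity, and personal
    best; `g_best` is the swarm's best known position. The coefficient
    values are common choices, not prescribed by the text.
    """
    new_v = [w * vi
             + c1 * random.random() * (pb - xi)     # cognition term
             + c2 * random.random() * (gb - xi)     # social term
             for vi, xi, pb, gb in zip(v, x, p_best, g_best)]   # Eq. 3.1
    new_x = [xi + vi for xi, vi in zip(x, new_v)]               # Eq. 3.2
    return new_x, new_v
```

Setting w = 1 and c1 = c2 = 0 makes the particle drift with its current velocity only, which isolates the role of the inertia term in Eq. 3.1.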
As PSO is the oldest and most popular SI method, several modifications and numerous
variants have been introduced since its invention to address some of its shortcomings
[8]. For example, in [6], an enhanced PSO method, including mutation, crossover,
and shift operators that help the method escape more easily from local optima, is
presented. For the interested reader, more detailed information on PSO method, its
variants, and general applications is presented in [4, 9]. Some of these applications
concerning manufacturing engineering are single machine tardiness problem [5],
process planning optimization [10], no-wait flow shop problem [11], flexible flow
shop problem [12], assembly sequence planning and assembly line balancing [13],
and truck scheduling cross-dock [14].
A forager bee, initially called unemployed, flies away from the beehive in order
to find and evaluate food sources. Food sources are evaluated based on their distance
from the beehive, their nutrient value and taste, and the degree of difficulty for their
extraction. The forager bee associated with a food source is called an employed bee; it
transfers information about food sources back to the beehive and informs the other
bees about these sources. Moreover, a percentage of the bees either search
for food sources randomly (scouts) or use the information from the employed bees to
find a suitable food source (onlookers). A very particular fact about bees is that the
exchange of information is conducted in the form of a “dance”, which depends
on the quality of the food sources. After returning to the beehive, an employed bee can
attempt to recruit new foragers by dancing and then return to the same food source,
continue foraging with the recruiting bees, or stop foraging [16].
In this algorithm, candidate solutions (feasible solutions) are represented by the
position of food sources and the evaluation of the candidate solution is represented
by the evaluation of the nectar amount of each food source, i.e., the quality of food
source [16]. The number of employed bees, or equivalently of onlooker bees, is equal to the
number of candidate solutions; it is then assumed that half of the bee population
are employed bees and half are onlookers [17]. These two types of bees carry out
the exploitation process, as will be described below [16, 18]. Initially, food
sources are randomly distributed in the search space, according to Eq. 3.3:
x_ij = x_j^min + (x_j^max − x_j^min) ∗ r (3.3)
In the last equation, i = 1,…, NP and j = 1,…, D, where NP represents the number of
food sources and D the number of decision variables. The term r represents a random
number in the range 0–1. At each step of the algorithm, the search for better solutions
is conducted by means of three food search processes by employed, onlooker, and
scout bees. Employed bees choose a modified food source position, slightly different
from the one stored in their memory, and evaluate it; if it has a higher amount of
nectar, it is chosen as a new food source and is memorized by the bee, while the old
is abandoned [16]. The route of a bee foraging for nectar can be seen in Fig. 3.2. The
neighborhood solutions for this phase of the algorithm are produced as follows:
v_ij = x_ij + ϕ_ij ∗ (x_ij − x_kj) (3.4)
In Eq. 3.4, j is a randomly chosen integer in the range 1–D and k ∈ {1, …,
NP} is the index of a random food source, different from food source x_i. The term ϕ_ij is a random
number in the range [−1,1] and v_ij is the modified solution in the neighborhood of
x_ij. After the employed bees have completed their work, they share their information
with the onlookers, which evaluate all current food sources and choose one of them
according to a probability value, dependent on the fitness of each food source. The
onlookers also modify the food source position to produce a new one and choose
it, if it is better than the previous one. At last, the scout bees are sent to find new
food sources randomly, which is implemented by a random selection process and
3.3 Artificial Bee Colony Method 37
Fig. 3.2 Bee route in the nectar foraging procedure [18] (reproduced with permission)
serves for diversification purposes [16, 19]. These new random food sources replace
abandoned food sources, i.e., food position, which were not improved after some
iterations. This is implemented in the algorithm as follows:
x_i^j = x_min^j + rand() ∗ (x_max^j − x_min^j) (3.5)

In Eq. 3.5, x_i^j is the new food source which replaces the abandoned x_i and j is a random
integer, in the range 1–D. It is to be noted that the determination of the abandoned
food sources is implemented using a counter, which increases each time an employed
bee cannot find any good modified solution, and it is set to zero once a better solution
is found [19]. Moreover, initially, the ABC algorithm allowed only one scout bee per
generation [20]. These steps are performed in the same way until termination criteria
are met. Basic parameters of the ABC algorithm are the number of food sources, the
number of members of each bee type [21], the maximum number of iterations, and the
“limit” variable, which determines the abandonment of a food position.
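The core of the algorithm, Eqs. 3.3–3.5 together with the trial counter and the “limit” parameter, can be sketched as follows; the onlooker phase with its fitness-proportional selection is omitted for brevity, and all names are illustrative:

```python
import random

def init_source(lo, hi):
    """Random food source (Eq. 3.3); lo and hi are per-dimension bounds."""
    return [l + (h - l) * random.random() for l, h in zip(lo, hi)]

def neighbour(sources, i):
    """Eq. 3.4: perturb one randomly chosen dimension j of source i,
    using another randomly chosen source k and phi in [-1, 1]."""
    v = list(sources[i])
    j = random.randrange(len(v))
    k = random.choice([m for m in range(len(sources)) if m != i])
    v[j] = v[j] + random.uniform(-1.0, 1.0) * (v[j] - sources[k][j])
    return v

def abc_cycle(sources, trials, fitness, lo, hi, limit=20):
    """One employed-bee plus scout pass (minimisation). The onlooker
    phase and its probabilistic source selection are left out."""
    for i in range(len(sources)):
        cand = neighbour(sources, i)
        if fitness(cand) < fitness(sources[i]):
            sources[i], trials[i] = cand, 0      # greedy choice, reset counter
        else:
            trials[i] += 1                       # unsuccessful trial
        if trials[i] > limit:                    # abandoned source -> scout
            sources[i], trials[i] = init_source(lo, hi), 0   # Eq. 3.5
    return sources, trials
```

The `trials` list implements the counter described above: it grows with every unsuccessful modification and is reset to zero whenever a better solution is found; exceeding `limit` triggers the scout replacement of Eq. 3.5.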
As for applications of ABC in industrial engineering, several exist for problems
such as job shop scheduling [22], capacitated vehicle routing problem [23],
multi-factory parallel machine problem [24], two-machine flow shops [25], and job
shop scheduling problem [26].
Ant Colony Optimization (ACO) constitutes one of the most popular SI metaheuris-
tics. It was originally proposed by Dorigo in his Ph.D. thesis [27]. Ants constitute
a social species whose foraging behavior has been shaped by millions of years
of evolution. Specifically, they are able to communicate with other ants by
depositing chemical substances as they travel in search of food; these substances guide
the ants and enable them to effectively pass around any object lying in their path, thus
determining the shortest path in every case. This type of behavior of some species, which
involves a self-organization mechanism without the need for direct communication,
is termed stigmergy in the field of biology.
In this method, the candidate solutions are represented by ants, which wander in
a specific area around their nest, actually the search space, and seek food. Ants
create different paths on the ground when searching for food in various directions
[28]. When they find food, they travel back to their nest, leaving a substance
called pheromone along their path [29]. When other ants are near this path, they may
be attracted by the pheromone trails and follow this path. An important factor for
determining the best paths is the attractive strength of pheromone on a path. If a
path is shorter, it is more frequently traversed by ants and thus pheromone levels are
high, attracting more individuals. On the contrary, if the path is longer, the ants
require more time to travel to the food source and pheromone levels are reduced due
to evaporation. This evaporation mechanism is essential to the method, as it is directly
related to the diversification capability and can help the method evade local optima
[28].
The algorithm is implemented based on natural observations. Initially, a population
of ants is randomly distributed on a graph and the initial pheromone for each edge
is defined [30]. Usually, the initial value of pheromone is very low and almost equal
for each path. More specifically, for each ant, a starting node is randomly selected
and the ant moves to the next nodes until a tour is completed. The movements are
conducted according to a transition rule [31] and local information on the nodes. In
fact, moves are conducted based on a probability function dependent on the amount
of pheromone in an edge and the desirability of this path. The desirability can be
computed by a heuristic and is defined in a different way according to each problem.
For example, for the case in which short edges are desired, it is defined as the inverse
of the length of each path [32]. When a path is completed, it represents a potential
solution to the problem, which is subsequently evaluated [31]. For each edge/path
crossed, the pheromone level is updated based on the fitness of the candidate solution
corresponding to the path and the process continues until the termination criteria are
met [30]. In Fig. 3.3, the development of pheromone trails and the eventual dominance
of the pheromone level of the shortest path are shown for various cases.
Two additional features are present in the ACO method, namely pheromone trail
evaporation and daemon actions [31]. Pheromone trail evaporation helps to avoid
premature convergence of the algorithm, as it allows ants to follow new paths, while
daemon actions can act as an alternative to pheromone evaporation. Daemon actions
are generally optional processes that cannot be carried out by individual ants and
require a rather centralized way of application; an example of these actions is a local search
procedure. Important parameters of the method include the number of ants, maximum
number of iterations, relative importance of pheromone, and evaporation rate.
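The probabilistic transition rule, the inverse-length desirability, and the evaporation/deposit cycle described above can be sketched for a tiny symmetric travelling salesman instance. The instance, parameter values, and function names below are assumptions made for illustration, not taken from the source.

```python
import random, math

# Tiny symmetric TSP instance on the unit square (assumed for illustration)
coords = [(0, 0), (1, 0), (1, 1), (0, 1)]
n = len(coords)
dist = [[math.dist(a, b) for b in coords] for a in coords]

ALPHA, BETA = 1.0, 2.0   # relative importance of pheromone vs. desirability
RHO = 0.5                # evaporation rate
tau = [[1.0] * n for _ in range(n)]   # initial pheromone: low and uniform

def build_tour():
    """Construct one ant's tour using the probabilistic transition rule."""
    start = random.randrange(n)
    tour, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        i = tour[-1]
        # desirability = inverse of the edge length (heuristic information)
        weights = [(j, (tau[i][j] ** ALPHA) * ((1.0 / dist[i][j]) ** BETA))
                   for j in unvisited]
        total = sum(w for _, w in weights)
        r, acc = random.random() * total, 0.0
        for j, w in weights:            # roulette-wheel selection of the next node
            acc += w
            if acc >= r:
                tour.append(j)
                unvisited.remove(j)
                break
    return tour

def tour_length(tour):
    return sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))

random.seed(0)
best = None
for _ in range(50):                          # iterations
    tours = [build_tour() for _ in range(5)]  # 5 ants
    for i in range(n):                        # evaporation on every edge
        for j in range(n):
            tau[i][j] *= (1.0 - RHO)
    for t in tours:                           # deposit: shorter tour, more pheromone
        d = tour_length(t)
        for k in range(n):
            i, j = t[k], t[(k + 1) % n]
            tau[i][j] += 1.0 / d
            tau[j][i] += 1.0 / d
    cand = min(tours, key=tour_length)
    if best is None or tour_length(cand) < tour_length(best):
        best = cand
print(round(tour_length(best), 3))   # 4.0, the perimeter of the unit square
```

The evaporation step corresponds to the diversification mechanism discussed above: without it, early pheromone on a mediocre path would keep attracting ants indefinitely.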
As the ACO method is one of the earliest metaheuristics, several modifications have
been proposed since its invention, aiming at the improvement of its performance.
In the work of Adubi and Misra [29], four main variants of the ACO method are
presented. In the work of Wong et al. [33], a modification for the ACO method is
proposed, in which a two-stage procedure is considered. Finally, for a more compre-
hensive review of the ACO method, the interested reader can consult works in the
relevant literature such as [31, 34, 35].
As the ACO method originates from a metaphor concerning the determination of
the shortest path, it is very popular for various scheduling problems which occur in
production engineering, such as vehicle routing [36] and course timetabling. More
specifically, some of the problems solved with ACO are job shop scheduling [37,
38], process planning and scheduling [33], makespan minimization on parallel batch
processing machines [39], flexible manufacturing problems [40], and safety stock
placement [41].
Fig. 3.3 Pheromone trails for (a) undisturbed route from nest to food source, (b) introduction of
obstacle, (c) ants on two different routes and (d) eventual dominance of the shortest route [32]
(reproduced with permission)
soil content update, elite IWDs are defined to widen the global update process,
and a local search process is added as well. Furthermore, Alijla et al. [45] proposed
modifications to the selection process of the second node by replacing the fitness
proportionate selection method with two other ranking methods. The IWD method has
been applied to some engineering-related problems such as job shop scheduling [43, 46].
Wang et al. [49] and a thorough review and analysis of this method is also presented
in the work of Manjarres et al. [52].
Concerning engineering, several applications of the HS method can be observed
in the relevant literature, with subjects such as maximization of production rate and
workload balancing in assembly lines [53], the single machine scheduling problem [54],
supply chain coordination with routing [55], and medium-term sales forecasting [56].
3.7 Firefly Algorithm

This algorithm is inspired by the communication between fireflies using flash signals,
termed generally as the social behavior of fireflies [57]. The two basic uses of flash
signals are to attract partners or potential prey, or even to act as a warning to
predators [58]. It was proposed by Yang [59] and since then, several modifications
have been proposed. This algorithm functions in a similar way to standard SI
methods such as the PSO algorithm and has been mostly employed for continuous
optimization problems, although works on discrete optimization problems exist as
well [57, 60]. The candidate solutions are represented by a population of fireflies,
which move towards more attractive fireflies with higher levels of light intensity
[61]. By observing the basic characteristics of firefly behavior, a metaheuristic was
constructed.
More specifically, in the initial formulation of firefly metaheuristic, the first step
involves the random generation of an initial population of fireflies and the definition
of initial parameter values [61]. The basic parameters of the firefly algorithm are the
attractiveness β_0, the light absorption coefficient γ, and the randomization parameter
α [57]. Then, the fireflies can move, updating their current position based on three
basic rules [60–62]:
1. Fireflies are assumed to be unisexual, so the attraction does not depend on their
sex.
2. The brighter fireflies attract the less bright ones, but if there is no brighter firefly
a random move within the search space is performed.
3. The brightness of a firefly is directly correlated to the objective function values.
Consequently, at every iteration, the update of a firefly position (X_i) is conducted
as follows:
X_i^{t+1} = X_i^t + β_0 e^{−γ r_ij^2} (x_j^t − x_i^t) + α (rand − 1/2)   (3.6)
The first term of Eq. 3.6 represents the current position of the firefly, the second
term represents the attraction mechanism, and the third represents a random walk.
The form of this equation shows that both movement towards the better solution and
random move are conducted at the same time. Randomization parameter values can
affect the convergence to a great extent, as large values of α increase the exploration
capabilities of the algorithm and low values of α increase the exploitation capabilities
of the algorithm. In order to compute the distance between fireflies (r_ij), the Cartesian
distance is usually employed [61]. From Eq. 3.6, it can be seen that attractiveness is
correlated to distance as light intensity decreases exponentially with the distance and
attractiveness is considered proportional to the light intensity. In some formulations
of the Firefly Algorithm, the third term is written as α ε_i, with ε_i being a random
vector derived from a Gaussian distribution or a Lévy distribution [62–64].
The quality of the candidate solutions/fireflies is assessed by the values of the
objective function, which represents the desirability of fireflies according to their
brightness. So, each firefly is compared to another firefly and, if it is less bright,
it updates its position; otherwise it does not move. After this process is applied to all
fireflies, evaluation is performed again and new brightness levels are computed and
the process continues until termination criteria are met.
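Rules 1–3 and the update of Eq. 3.6 can be sketched in minimal form as follows. The parameter values and the sphere objective are assumptions, and the random move of the single brightest firefly (rule 2) is omitted for brevity; this is a sketch, not the canonical implementation.

```python
import math, random

BETA0, GAMMA, ALPHA = 1.0, 1.0, 0.2   # β_0, γ, α of Eq. 3.6 (assumed values)

def objective(x):                      # assumed test function (sphere, minimized)
    return sum(v * v for v in x)

def move(xi, xj):
    """Move the firefly at xi towards the brighter firefly at xj (Eq. 3.6)."""
    r2 = sum((a - b) ** 2 for a, b in zip(xi, xj))   # squared Cartesian distance r_ij^2
    beta = BETA0 * math.exp(-GAMMA * r2)             # attractiveness decays with distance
    return [a + beta * (b - a) + ALPHA * (random.random() - 0.5)
            for a, b in zip(xi, xj)]

random.seed(2)
fireflies = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(8)]
init_best = min(objective(f) for f in fireflies)

for _ in range(100):
    for i in range(len(fireflies)):
        for j in range(len(fireflies)):
            # rule 3: brightness is inversely related to the minimized objective
            if objective(fireflies[j]) < objective(fireflies[i]):
                fireflies[i] = move(fireflies[i], fireflies[j])

final_best = min(objective(f) for f in fireflies)
print(final_best <= init_best)   # True: the population best never worsens
```

Because a firefly moves only towards a strictly brighter one, the best objective value in the population is non-increasing over the whole run, which is what the final comparison confirms.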
A disadvantage of the original formulation of the firefly algorithm is that the
randomization parameter α is assumed constant for all iterations. It has been reported
that this assumption can lead to local optima instead of the global optimum, as the
similarity between fireflies increases after a number of iterations. This is anticipated,
as the definition of α does not take into account each firefly separately and thus it
cannot be changed dynamically according to each phase of the optimization process.
Furthermore,
analysis of the trajectories followed by the fireflies has shown that the global best
firefly acts as the attractor of the whole population. Thus, a good strategy to avoid
potential trapping in local optima is for a firefly to have separate searching ability
[61]. So, it was proposed that this parameter should be varied by a stepwise strategy
as follows:
α_i(t+1) = α_i(t) − (α_i(t) − α_min) ∗ e^{−|X_gbest(t) − X_i,best(t)| ∗ (t/maxiter)}   (3.7)
From Eq. 3.7, it can be seen that the value of the randomization parameter depends on
both personal (X_i,best) and global (X_gbest) best positions and the number of iterations.
In another approach, α values are allowed to vary between a lower and an upper value,
according to the number of iterations performed [58]. Other variants, which aim to
enhance the algorithm’s capabilities, include the use of clusters of fireflies instead of
a unified population [63] or the use of chaotic maps for tuning parameters γ and β
[64]. Finally, a detailed review of the various firefly algorithm variants can be found
in [62]. As far as optimization problems in engineering are concerned, several works
conducted with the aid of Firefly algorithm are reported on subjects such as flow
shop problem [57], job shop scheduling problem [58], manufacturing cell formation
[60], vehicle routing problem [65], and cross-docks scheduling [66].
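The stepwise strategy of Eq. 3.7 translates directly into a small update function, shown below for one dimension. The bound α_min, the iteration budget, and the sample values are assumptions for illustration.

```python
import math

ALPHA_MIN = 0.01   # assumed lower bound for the randomization parameter
MAX_ITER = 100     # assumed iteration budget (maxiter in Eq. 3.7)

def update_alpha(alpha_i, t, x_gbest, x_ibest):
    """Stepwise randomization-parameter update of Eq. 3.7 (one dimension)."""
    gap = abs(x_gbest - x_ibest)              # |X_gbest(t) - X_i,best(t)|
    decay = math.exp(-gap * (t / MAX_ITER))   # decays faster for distant fireflies
    return alpha_i - (alpha_i - ALPHA_MIN) * decay

# A firefly whose personal best is far from the global best keeps a larger alpha,
# i.e., it retains a separate searching ability (more exploration).
a_near = update_alpha(0.5, 50, x_gbest=1.0, x_ibest=0.99)
a_far = update_alpha(0.5, 50, x_gbest=1.0, x_ibest=5.0)
print(a_far > a_near)   # True
```

This matches the motivation above: fireflies already close to the global best reduce their random component and exploit, while distant ones keep exploring.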
Cuckoo Search (CS) algorithm is inspired by the breeding behavior of some species
of cuckoos, which lay their eggs in other species’ nests and was proposed as an
optimization algorithm by Yang and Deb [67]. Since then, the algorithm has achieved
Fruit Fly Optimization (FFO) algorithm is another metaheuristic for finding the global
optimum based on the food search conducted by fruit flies. It was proposed by Pan
and was applied for the first time in financial test cases in 2012 [74]. In this algorithm,
the search for the optimum point is conducted by fruit flies, which search for the ideal
food location and according to the results of their search, guide the whole swarm
towards favorable points in the search space. Fruit flies are capable of smelling food
from remote locations around the swarm's central location, even at a distance of 40 km
[74, 75], and afterwards fly towards this location [76]. Thus, for the search process,
both smell (“osphresis”) and vision capabilities of fruit flies are utilized [74].
Initially, the fruit fly swarm is randomly generated and for each fruit fly, a random
direction is generated with respect to the initial position of the swarm. This is calculated
according to the previous swarm location and random values. Then, the smell foraging
search can take place, during which the distance between the food source location
and the origin is computed to find the smell concentration judgment value. Finally,
smell concentration is determined according to the objective function values [74].
If the random direction is determined by X_i and Y_i, the smell concentration
judgment value can be calculated as follows:
Dist_i = √(X_i^2 + Y_i^2)   (3.8)

S_i = 1/Dist_i   (3.9)

Smell_i = f(S_i)   (3.10)
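Equations 3.8–3.10 translate almost directly into code. The sketch below performs one smell-based (osphresis) search step around the swarm location; the smell function f, the number of flies, and the step size are assumptions, not from the source.

```python
import math, random

def objective(s):                  # assumed smell function f(S_i), maximized
    return -(s - 0.2) ** 2         # best where the judgment value S_i = 0.2

def smell_step(swarm_x, swarm_y, n_flies=20, step=1.0):
    """One osphresis foraging step of the fruit fly swarm."""
    best = None
    for _ in range(n_flies):
        # random direction and distance around the current swarm location
        xi = swarm_x + random.uniform(-step, step)
        yi = swarm_y + random.uniform(-step, step)
        dist_i = math.sqrt(xi ** 2 + yi ** 2)   # Eq. 3.8: distance to the origin
        s_i = 1.0 / dist_i                      # Eq. 3.9: smell judgment value
        smell_i = objective(s_i)                # Eq. 3.10: smell concentration
        if best is None or smell_i > best[0]:
            best = (smell_i, xi, yi)
    return best

random.seed(3)
smell, x, y = smell_step(4.0, 3.0)   # swarm starts at distance 5 from the origin
print(smell <= 0.0)                  # True: this smell function is bounded above by 0
```

In the full algorithm, the vision phase would then move the whole swarm towards the location (x, y) of the best-smelling fly before the next smell step.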
the fruit fly to move towards good solutions in a chaotic way. Finally, some other
variants can be found in the work of Xing and Gao [75].
Concerning production-related problems, a few are reported to be solved
with the aid of the FFO method, such as blocking flow shop scheduling [78] and flexible
job shop scheduling [76].
In Eq. 3.11, rand is a random number in the range 0–1, x_i is the previous position
of a hunter, x_i' is the new position of the hunter, and x_i^L is the position of the leader. In
contrast to the real case, the hunters do not know, in advance, the optimum solution
but rather follow the current best solution dynamically. If the movement towards the
leader is not successful and a worse point is found, the hunter returns to its previous
position [79]. Positions are also corrected according to the relative positions of each
hunter in order to ensure that they cooperate better and hunt more efficiently [79,
80]. This movement is regulated by the HGCR and PCR parameters or HGCR and
Ra parameters in a later version of the algorithm [80], which aim to improve the
search with better solutions globally and locally, respectively [79]. If a new position
is worse than the previous, the hunter moves back to its previous position.
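The movement towards the leader with fallback to the previous position can be sketched as follows. Since Eq. 3.11 itself is not reproduced above, the step-size parameter MML (maximum movement toward leader) and the sphere objective are assumptions introduced only for this illustration.

```python
import random

MML = 0.3   # assumed maximum-movement-toward-leader parameter

def objective(x):                    # assumed test function (sphere, minimized)
    return sum(v * v for v in x)

def move_towards_leader(hunter, leader):
    """Move a hunter towards the leader; revert if the new point is worse."""
    candidate = [h + random.random() * MML * (l - h)
                 for h, l in zip(hunter, leader)]
    # the hunter keeps the new position only if it improves on the old one
    return candidate if objective(candidate) < objective(hunter) else hunter

random.seed(4)
leader = [0.5, -0.5]                 # current best solution, followed dynamically
hunter = [3.0, 2.0]
for _ in range(20):
    hunter = move_towards_leader(hunter, leader)
    if objective(hunter) < objective(leader):   # a better hunter becomes the leader
        leader, hunter = hunter, leader
print(objective(leader) <= 0.5)      # True: the leader is only ever improved
```

The leadership swap mirrors the rule above that a herd member finding a better solution becomes the new leader, so the leader's objective value can never deteriorate.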
Moreover, if the positions of hunters are too close, their positions are properly
reorganized before the process continues with the next step. This process is necessary
in order to avoid local optima [79]. Alternatively, the reorganization process may
occur after a number of iterations [82]. When a member of the herd finds a better
solution, then it becomes the leader. A reorganization marks the end of an epoch; an
epoch is reached after a predefined number of iterations or after a local optimum is
detected [80, 82, 83]. Reorganization can be performed as follows:
In Eq. 3.12, parameters α and β are related to the convergence rate of the
algorithm; if their value is large, a slow convergence is obtained, which is useful for
problems with several local optima. Parameter EN counts the number of epochs that
the hunting group has been trapped until the current step. It is to be noted that in a
couple of studies [81, 84], a different way of repositioning of the hunters is presented,
by adding a perturbation to the hunters’ position according to their distance from the
prey. In the HuS method, the past vectors are stored in the hunting group matrix, similarly
to the Tabu Search and HS methods, but the difference is that the comparison is conducted
only with the previous solution [80]. In order to enhance the method’s efficiency,
modifications such as Lévy flight for the search process are also proposed [85].
This method has already been applied on several occasions to production-related subjects
such as truck scheduling in the multi-door cross-dock problem [81] and no-wait flow
shop scheduling [84].
The Migrating Birds Optimization (MBO) algorithm was proposed by Duman et al. [86]
and is inspired by the flight of a flock of migrating birds. More specifically,
this algorithm attempts to imitate the famous “V-formation” of migrating birds,
used by birds to fly over long distances [86, 87]. One of the birds is assumed to lead the
entire flock and the other birds follow the leader along two lines. This V-formation
is instinctive for the migrating birds and is shown to be beneficial for the flight. It
minimizes the total energy required for the flight, because the birds in the two lines
exploit the air turbulence from the leader’s flapping [87, 88]. As the leading bird
consumes more energy, it gets tired after some time and is replaced by another bird
[88].
At each iteration of the MBO algorithm, candidate solutions at the neighborhood
of each bird are evaluated, starting from the leading bird, and then improved candidate
solutions replace the older ones [86, 89]. It is to be noted that a unique mechanism, the
so-called “benefit mechanism”, exists in this method; if no improved solutions can
be found among the neighbors of a bird, the unused neighbors of the front solution
are shared between the two birds and the best of them can be chosen instead [86, 89].
This mechanism is actually the most important feature that distinguishes MBO from
other metaheuristics and considerably favors the rapid convergence of
the algorithm [87].
Generally, the numerical parameters of the algorithm include [86, 88, 89] the
number of initial birds-solutions (n), the number of neighbor solutions considered
(k), the number of neighbor solutions to be shared with the following solution (x),
and the number of tours or laps to be conducted (m) and maximum iterations number
(K). Some of these parameters correspond to actual characteristics of the flock; for
example, k is thought of as the speed of the birds in real situations, x represents the
wing distance of the birds, and m the number of wing flaps [87, 88]. Careful
choice of the various MBO parameters is crucial in order to find the optimum solution
within a reasonable time period; suggestions on optimum values of these parameters
can be found in [86] and [88].
At first, n initial birds-candidate solutions are randomly generated and positioned
in a V-formation. Then, the neighborhood of the leading solution is searched for
potentially better solutions and after that the same process, including the “benefit”
mechanism, is applied to the other birds in the formation, down to the “tails” of the
formation [86, 89]. This process is repeated several times (tours), after
which the leading bird becomes the last and one of the second-row birds becomes
the first. It is to be noted that the leading bird will move to the tail of the line from
which the current leading bird originated, but the next time this movement will be
conducted in the opposite line [87]. In another approach, this movement was also
performed by exchanging the leader with the best performing bird [90]. When the
termination criteria are met, the process is finished.
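The tour structure, the benefit mechanism, and the leader rotation described above can be sketched as follows. This is a heavily simplified, assumption-laden illustration: the formation is linearized into a single line instead of two, and the real-valued solutions, Gaussian neighbor moves, and sphere objective are not prescribed by the source.

```python
import random

N, K, X, M = 5, 3, 1, 10   # birds n, neighbors k, shared neighbors x, tours m

def objective(s):                      # assumed test function (sphere, minimized)
    return sum(v * v for v in s)

def neighbors(sol, count):
    """Generate `count` random neighbors of a solution (Gaussian moves)."""
    return [[v + random.gauss(0, 0.3) for v in sol] for _ in range(count)]

random.seed(5)
flock = [[random.uniform(-3, 3) for _ in range(2)] for _ in range(N)]
init_best = min(objective(b) for b in flock)

for _ in range(M):                     # one tour along the (linearized) formation
    shared = []                        # unused neighbors handed backwards
    for i in range(N):                 # i = 0 is the leading bird
        # the leader evaluates k own neighbors; followers k - x own ones
        own = neighbors(flock[i], K if i == 0 else K - X)
        cand = sorted(own + shared, key=objective)   # "benefit" mechanism
        if objective(cand[0]) < objective(flock[i]):
            flock[i] = cand[0]         # an improved candidate replaces the bird
            cand = cand[1:]
        shared = cand[:X]              # best unused neighbors go to the next bird
    flock = flock[1:] + flock[:1]      # the tired leader moves to the tail

final_best = min(objective(b) for b in flock)
print(final_best <= init_best)         # True: birds are replaced only by improvements
```

Note how each follower still considers k candidates in total: k − x neighbors of its own plus the x best unused neighbors passed back from the bird in front, which is exactly the sharing arrangement described above.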
As for other metaheuristics, several modifications have been proposed also for
MBO. In order to improve the global search ability, it was proposed that multiple
flocks of birds could be used simultaneously (Enhanced MBO method), generated by
heuristic methods or randomly [89]. After m tours are completed, the best solutions
are shared among the various flocks [89]. Furthermore, Alkaya et al. [91] designed an
effective MBO algorithm with a specific neighbor generating function. According to
this approach, neighbors are obtained only within D-dimensional spheres. In another
study focusing on neighbor search part of the MBO algorithm, several neighbor
operators were compared according to their performance on various benchmarks.
Moreover, in the work of Soto et al. [90], parallel procedures for the various sorting
processes conducted in MBO are used, in order to further enhance its efficiency.
Finally, other ways of neighborhood search and suggestions on values of MBO
parameters are presented in the work of Benkalai et al. [92]. As for the applications
of MBO in engineering, although this method is relatively new, it has already been
employed to solve the no-wait flow shop scheduling problem [89], the flow shop
sequencing problem [88], and machine-part cell formation problems [92, 93].
References
1. Sörensen K (2015) Metaheuristics—the metaphor exposed. Int Trans Oper Res 22:3–18
2. Kennedy J, Eberhart R (1995) Particle swarm optimization. In: IEEE international conference
on neural networks, Perth, WA, Australia, pp 1942–1948
3. Khare A, Rangnekar S (2013) A review of particle swarm optimization and its applications
in solar photovoltaic system. Appl Soft Comput 13:2997–3006
4. Poli R, Kennedy J, Blackwell T (2007) Particle swarm optimization. Swarm Intell 1:33–57
5. Fatih Tasgetiren M, Liang Y-C, Sevkli M, Gencyilmaz G (2006) Particle swarm optimization
and differential evolution for the single machine total weighted tardiness problem. Int J Prod
Res 44:4737–4754
6. Guo YW, Li WD, Mileham AR, Owen GW (2009) Optimisation of integrated process planning
and scheduling using a particle swarm optimisation approach. Int J Prod Res 47:3775–3796
7. Tsai C-Y, Kao I-W (2011) Particle swarm optimization with selective particle regeneration
for data clustering. Expert Syst Appl 38:6565–6576
8. Esmin AAA, Coelho RA, Matwin S (2015) A review on particle swarm optimization algorithm
and its variants to clustering high-dimensional data. Artif Intell Rev 44:23–45
9. Zhang Y, Wang S, Ji G (2015) A comprehensive survey on particle swarm optimization
algorithm and its applications. Math Probl Eng 931256
10. Wang YF, Zhang YF, Fuh JYH (2012) A hybrid particle swarm based method for process
planning optimisation. Int J Prod Res 50:277–292
11. Samarghandi H, ElMekkawy TY (2014) Solving the no-wait flow-shop problem with
sequence-dependent set-up times. Int J Comput Integr Manuf 27:213–228
12. Attar SF, Mohammadi M, Tavakkoli-Moghaddam R, Yaghoubi S (2014) Solving a new multi-
objective hybrid flexible flowshop problem with limited waiting times and machine-sequence-
dependent set-up time constraints. Int J Comput Integr Manuf 27:450–469
13. Che ZH (2017) A multi-objective optimization algorithm for solving the supplier selection
problem with assembly sequence planning and assembly line balancing. Comput Ind Eng
105:247–259
14. Keshtzari M, Naderi B, Mehdizadeh E (2016) An improved mathematical model and a hybrid
metaheuristic for truck scheduling in cross-dock problems. Comput Ind Eng 91:197–204
15. Karaboga D (2005) An idea based on honey bee swarm for numerical optimization. Technical
report TR-06, Erciyes University, Engineering Faculty, Kayseri
16. Karaboga D, Akay B (2009) A comparative study of artificial bee colony algorithm. Appl
Math Comput 214:108–132
17. Dunder E, Gumustekin S, Cengiz MA (2018) Variable selection in gamma regression models
via artificial bee colony algorithm. J Appl Stat 45:8–16
18. Karaboga D, Basturk B (2008) On the performance of artificial bee colony (ABC) algorithm.
Appl Soft Comput 8:687–697
19. Bulut O, Tasgetiren MF (2014) An artificial bee colony algorithm for the economic lot schedul-
ing problem. Int J Prod Res 52:1150–1170
20. Li X, Yang G (2016) Artificial bee colony algorithm with memory. Appl Soft Comput
41:362–372
21. Hemamalini S, Simon SP (2010) Artificial bee colony algorithm for economic load dispatch
problem with non-smooth cost functions. Electr Power Components Syst 38:786–803
22. Lei D, Guo X (2013) Scheduling job shop with lot streaming and transportation through a
modified artificial bee colony. Int J Prod Res 51:4930–4941
23. Ng KKH, Lee CKM, Zhang SZ, Wu K, Ho W (2017) A multiple colonies artificial bee
colony algorithm for a capacitated vehicle routing problem and re-routing strategies under
time-dependent traffic congestion. Comput Ind Eng 109:151–168
24. Yazdani M, Gohari S, Naderi B (2015) Multi-factory parallel machine problems: improved
mathematical models and artificial bee colony algorithm. Comput Ind Eng 81:36–45
25. Wang X, Xie X, Cheng TCE (2013) A modified artificial bee colony algorithm for order
acceptance in two-machine flow shops. Int J Prod Econ 141:14–23
26. Zhang R, Song S, Wu C (2013) A hybrid artificial bee colony algorithm for the job shop
scheduling problem. Int J Prod Econ 141:167–178
27. Dorigo M (1992) Optimization, learning and natural algorithms (in Italian). Ph.D. thesis,
Dipartimento di Elettronica, Politecnico di Milano
28. Osman H, Baki MF (2014) Balancing transfer lines using benders decomposition and ant
colony optimisation techniques. Int J Prod Res 52:1334–1350
29. Adubi SA, Misra S (2014) A comparative study on the ant colony optimization algorithms.
In: 2014 11th international conference on electronics, computer and computation (ICECCO),
Abuja, Nigeria, pp 1–4
30. Shyu SJ, Yin PY, Lin BMT, Haouari M (2003) Ant-tree: an ant colony optimization approach
to the generalized minimum spanning tree problem. J Exp Theor Artif Intell 15:103–112
31. Cordon O, Herrera F, Stützle T (2003) A review on the ant colony optimization metaheuristic:
basis, models and new trends. Mathware Soft Comput 9(2–3)
32. Zecchin AC, Simpson AR, Maier HR, Leonard M, Roberts AJ, Berrisford MJ (2006) Appli-
cation of two ant colony optimisation algorithms to water distribution system optimisation.
Math Comput Model 44:451–468
33. Wong TN, Zhang S, Wang G, Zhang L (2012) Integrated process planning and
scheduling—multi-agent system with two-stage ant colony optimisation algorithm. Int J Prod
Res 50:6188–6201
34. Maniezzo V, Gambardella LM, de Luigi F (2004) Ant colony optimization. In: Onwubolu GC,
Babu BV (eds) New optimization techniques in engineering. Springer, Berlin, pp 101–121
35. Blum C (2005) Ant colony optimization: introduction and recent trends. Phys Life Rev
2:353–373
36. Yakıcı E (2017) A heuristic approach for solving a rich min-max vehicle routing problem
with mixed fleet and mixed demand. Comput Ind Eng 109:288–294
37. Seo M, Kim D (2010) Ant colony optimisation with parameterised search space for the job
shop scheduling problem. Int J Prod Res 48:1143–1154
38. Huang R-H (2010) Multi-objective job-shop scheduling with lot-splitting production. Int J
Prod Econ 124:206–213
39. Chen H, Du B, Huang GQ (2010) Metaheuristics to minimise makespan on parallel batch
processing machines with dynamic job arrivals. Int J Comput Integr Manuf 23:942–956
40. Yagmahan B, Yenisey MM (2008) Ant colony optimization for multi-objective flow shop
scheduling problem. Comput Ind Eng 54:411–420
41. Moncayo-Martínez LA, Zhang DZ (2013) Optimising safety stock placement and lead time
in an assembly supply chain using bi-objective MAX–MIN ant system. Int J Prod Econ
145:18–28
42. Hosseini HS (2007) Problem solving by intelligent water drops. In: 2007 IEEE congress on
evolutionary computation, Singapore, Singapore, pp 3226–3231
43. Niu SH, Ong SK, Nee AYC (2012) An improved intelligent water drops algorithm for achiev-
ing optimal job-shop scheduling solutions. Int J Prod Res 50:4192–4205
44. Hosseini HS (2009) The intelligent water drops algorithm, a nature inspired swarm based
optimization algorithm. Int J Bio-Inspired Comput 1:71–79
45. Alijla BO, Wong L-P, Lim CP, Khader AT, Al-Betar MA (2014) A modified intelligent water
drops algorithm and its application to optimization problems. Expert Syst Appl 41:6555–6569
46. Niu SH, Ong SK, Nee AYC (2013) An improved intelligent water drops algorithm for solving
multi-objective job shop scheduling. Eng Appl Artif Intell 26:2431–2442
47. Geem ZW, Kim JH, Loganathan GV (2001) A new heuristic optimization algorithm: harmony
search. Simulation 76:60–68
48. Yang X-S (2009) Harmony search as a metaheuristic algorithm. In: Geem ZW (ed) Music-
inspired harmony search algorithm: theory and applications. Springer, Berlin, pp 1–14
49. Wang X, Gao X-Z, Zenger K (2015) The overview of harmony search. In: Wang X, Gao X-Z,
Zenger K (eds) An introduction to harmony search optimization method. Springer Interna-
tional Publishing, Cham, pp 5–11
50. Gao XZ, Govindasamy V, Xu H, Wang X, Zenger K (2015) Harmony search method: theory
and applications. Comput Intell Neurosci 258491
51. Mahdavi M, Fesanghary M, Damangir E (2007) An improved harmony search algorithm for
solving optimization problems. Appl Math Comput 188:1567–1579
52. Manjarres D, Landa-Torres I, Gil-Lopez S, Del Ser J, Bilbao MN, Salcedo-Sanz S, Geem
ZW (2013) A survey on applications of the harmony search algorithm. Eng Appl Artif Intell
26:1818–1831
53. Purnomo HD, Wee H-M (2014) Maximizing production rate and workload balancing in a
two-sided assembly line using harmony search. Comput Ind Eng 76:222–230
54. Zammori F, Braglia M, Castellano D (2014) Harmony search algorithm for single-machine
scheduling problem with planned maintenance. Comput Ind Eng 76:333–346
55. Alaei S, Setak M (2015) Multi objective coordination of a supply chain with routing and
service level consideration. Int J Prod Econ 167:271–281
56. Wong WK, Guo ZX (2010) A hybrid intelligent model for medium-term sales forecasting in
fashion retail supply chains using extreme learning machine and harmony search algorithm.
Int J Prod Econ 128:614–624
57. Vahedi Nouri B, Fattahi P, Ramezanian R (2013) Hybrid firefly-simulated annealing algorithm
for the flow shop problem with learning effects and flexible maintenance activities. Int J Prod
Res 51:3501–3515
58. Rohaninejad M, Kheirkhah AS, Vahedi Nouri B, Fattahi P (2015) Two hybrid tabu search—
firefly algorithms for the capacitated job shop scheduling problem with sequence-dependent
setup cost. Int J Comput Integr Manuf 28:470–487
59. Yang X-S (2009) Firefly algorithms for multimodal optimization. In: Watanabe O, Zeugmann
T (eds) Stochastic algorithms: foundations and applications. Springer, Berlin, pp 169–178
60. Sayadi MK, Hafezalkotob A, Naini SGJ (2013) Firefly-inspired algorithm for discrete opti-
mization problems: an application to manufacturing cell formation. J Manuf Syst 32:78–84
61. Yu S, Su S, Lu Q, Huang L (2014) A novel wise step strategy for firefly algorithm. Int J
Comput Math 91:2507–2513
62. Fister I, Fister I, Yang X-S, Brest J (2013) A comprehensive review of firefly algorithms.
Swarm Evol Comput 13:34–46
63. Hackl A, Magele C, Renhart W (2016) Extended firefly algorithm for multimodal optimiza-
tion. In: 2016 19th international symposium on electrical apparatus and technologies (SIELA),
Bourgas, Bulgaria, pp 1–4
64. Gandomi AH, Yang X-S, Talatahari S, Alavi AH (2013) Firefly algorithm with chaos. Com-
mun Nonlinear Sci Numer Simul 18:89–98
65. Alinaghian M, Naderipour M (2016) A novel comprehensive macroscopic model for time-
dependent vehicle routing problem with multi-alternative graph to reduce fuel consumption:
a case study. Comput Ind Eng 99:210–222
66. Madani-Isfahani M, Tavakkoli-Moghaddam R, Naderi B (2014) Multiple cross-docks
scheduling using two meta-heuristic algorithms. Comput Ind Eng 74:129–138
67. Yang XS, Deb S (2009) Cuckoo search via levy flights. In: 2009 world congress on nature &
biologically inspired computing (NaBIC), Coimbatore, India, pp 210–214
68. Yildiz AR (2013) Cuckoo search algorithm for the selection of optimal machining parameters
in milling operations. Int J Adv Manuf Technol 64:55–61
69. Bulatović RR, Đorđević SR, Đorđević VS (2013) Cuckoo search algorithm: a metaheuristic
approach to solving the problem of optimum synthesis of a six-bar double dwell linkage.
Mech Mach Theory 61:1–13
70. Gandomi AH, Yang X-S, Alavi AH (2013) Cuckoo search algorithm: a metaheuristic approach
to solve structural optimization problems. Eng Comput 29:17–35
71. Mohamad AB, Zain AM, Nazira Bazin NE (2014) Cuckoo search algorithm for optimization
problems—a literature review and its applications. Appl Artif Intell 28:419–448
72. Valian E, Tavakoli S, Mohanna S, Haghi A (2013) Improved cuckoo search for reliability
optimization problems. Comput Ind Eng 64:459–468
73. Kanagaraj G, Ponnambalam SG, Jawahar N (2013) A hybrid cuckoo search and genetic
algorithm for reliability–redundancy allocation problems. Comput Ind Eng 66:1115–1124
74. Pan W-T (2012) A new fruit fly optimization algorithm: taking the financial distress model
as an example. Knowl-Based Syst 26:69–74
75. Xing B, Gao W-J (2016) Innovative computational intelligence: a rough guide to 134 clever
algorithms. Springer International Publishing, Switzerland
76. Zheng X, Wang L (2016) A knowledge-guided fruit fly optimization algorithm for dual
resource constrained flexible job-shop scheduling problem. Int J Prod Res 54:5554–5566
77. Mitić M, Vuković N, Petrović M, Miljković Z (2015) Chaotic fruit fly optimization algorithm.
Knowl-Based Syst 89:446–458
78. Han Y, Gong D, Li J, Zhang Y (2016) Solving the blocking flow shop scheduling problem
with makespan using a modified fruit fly optimisation algorithm. Int J Prod Res 54:6782–6797
79. Oftadeh R, Mahjoob MJ (2009) A new meta-heuristic optimization algorithm: hunting search.
In: 2009 fifth international conference on soft computing, computing with words and percep-
tions in system analysis, decision and control, Famagusta, Cyprus, pp 1–5
80. Oftadeh R, Mahjoob MJ, Shariatpanahi M (2010) A novel meta-heuristic optimization
algorithm inspired by group hunting of animals: hunting search. Comput Math with Appl
60:2087–2098
81. Yazdani M, Naderi B, Mousakhani M (2015) a model and metaheuristic for truck scheduling
in multi-door cross-dock problems. Intell Autom Soft Comput 21:633–644
82. Bouzaida S, Sakly A, M’Sahli F (2014) Extracting TSK-type neuro-fuzzy model using the
hunting search algorithm. Int J Gen Syst 43:32–43
83. Zare K, Hashemi SM (2012) A solution to transmission-constrained unit commitment using
hunting search algorithm. In: 2012 11th international conference on environment and electrical
engineering, Venice, Italy, pp 941–946
84. Naderi B, Khalili M, Khamseh AA (2014) Mathematical models and a hunting search
algorithm for the no-wait flowshop scheduling with parallel machines. Int J Prod Res
52:2667–2681
85. Dogan E (2014) Solving design optimization problems via hunting search algorithm with
Levy flights. Struct Eng Mech 52(2):351–358
86. Duman E, Uysal M, Alkaya AF (2012) Migrating birds optimization: a new metaheuristic
approach and its performance on quadratic assignment problem. Inf Sci 217:65–77
87. Tongur V, Ülker E (2016) The analysis of migrating birds optimization algorithm with neigh-
borhood operator on traveling salesman problem. In: Lavangnananda K, Phon-Amnuaisuk
S, Engchuan W, Chan JH (eds) Intelligent and evolutionary systems. Springer International
Publishing, Cham, pp 227–237
88. Tongur V, Erkan Ü (2014) Migrating birds optimization for flow shop sequencing problem. J
Comput Commun 2:142
89. Gao KZ, Suganthan PN, Chua TJ (2013) An enhanced migrating birds optimization algorithm
for no-wait flow shop scheduling problem. In: 2013 IEEE symposium on computational
intelligence in scheduling (CISched), Singapore, Singapore, pp 9–13
90. Soto R, Crawford B, Almonacid B, Paredes F (2016) Efficient parallel sorting for migrating
birds optimization when solving machine-part cell formation problems. Sci Program 9402503
91. Alkaya AF, Algin R, Sahin Y, Agaoglu M, Aksakalli V (2014) Performance of migrating
birds optimization algorithm on continuous functions. In: Tan Y, Shi Y, Coello CAC (eds)
Advances in swarm intelligence. Springer International Publishing, Cham, pp 452–459
92. Benkalai I, Rebaine D, Gagné C, Baptiste P (2017) Improving the migrating birds optimization
metaheuristic for the permutation flow shop with sequence-dependent set-up times. Int J Prod
Res 55:6145–6157
93. Yang X-S (2012) Flower pollination algorithm for global optimization. In: Durand-Lose J,
Jonoska N (eds) Unconventional computation and natural computation. Springer, Berlin, pp
240–249
94. Zhang M, Pratap S, Huang GQ, Zhao Z (2017) Optimal collaborative transportation service
trading in B2B e-commerce logistics. Int J Prod Res 55:5485–5501
References 55
95. Ibanez S (2012) Optimizing size thresholds in a plant-pollinator interaction web: towards a
mechanistic understanding of ecological networks. Oecologia 170:233–242
96. Nabil E (2016) A modified flower pollination algorithm for global optimization. Expert Syst
Appl 57:192–203
97. Abdelaziz AY, Ali ES (2015) Static VAR compensator damping controller design based on
flower pollination algorithm for a multi-machine power system. Electr Power Compon Syst
43:1268–1277
98. He X, Yang X-S, Karamanoglu M, Zhao Y (2017) Global convergence analysis of the
flower pollination algorithm: a discrete-time Markov chain approach. Procedia Comput Sci
108:1354–1363
99. Bozorgi A, Bozorg-Haddad O, Chu X (2018) Anarchic society optimization (ASO) algorithm.
In: Bozorg-Haddad O (ed) Advanced optimization by nature-inspired algorithms. Springer
Singapore, Singapore, pp 31–38
100. Ahmadi-Javid A (2011) Anarchic society optimization: a human-inspired method. In: 2011
IEEE congress of evolutionary computation (CEC), New Orleans, LA, pp 2586–2592
101. Ahmadi-Javid A, Hooshangi-Tabrizi P (2015) A mathematical formulation and anarchic soci-
ety optimisation algorithms for integrated scheduling of processing and transportation oper-
ations in a flow-shop environment. Int J Prod Res 53:5988–6006
102. Bozorg-Haddad O, Latifi M, Bozorgi A, Rajabi M-M, Naeeni S-T, Loáiciga HA (2018)
Development and application of the anarchic society algorithm (ASO) to the optimal operation
of water distribution networks. Water Sci Technol Water Supply 18:318–332
103. Shayeghi H (2012) Anarchic society optimization based pid control of an automatic voltage
regulator (AVR) system. Electr Electron Eng 2(4):199–207
104. Ahmadi-Javid A, Hooshangi-Tabrizi P (2017) Integrating employee timetabling with schedul-
ing of machines and transporters in a job-shop environment: a mathematical formulation and
an anarchic society optimization algorithm. Comput Oper Res 84:73–91
Chapter 4
Other Computational Methods
for Optimization
4.1 Introduction
The last chapter of the present work is dedicated to methods that bear few or
no similarities to the methods presented in the two previous chapters but are
nevertheless worth mentioning due to their popularity or promising capabilities
in the field of industrial engineering. These methods include Simulated Annealing,
Tabu Search, the Electromagnetism-like Mechanism, and Response Surface Methodology.
More specifically, the Simulated Annealing method is inspired by the metallurgical
process of annealing, and its objective function is related to the reduction of the
internal energy of the system by appropriate variation of its temperature. The Tabu
Search method exhibits essentially no nature-inspired characteristics; its basic
feature is a list of unacceptable moves, which is used to prevent the solution
process from becoming trapped in a local optimum point. The Electromagnetism-like
Mechanism uses the natural mechanism of attraction and repulsion in electromagnetism
in order to lead the solution process to the global optimum point. Finally, Response
Surface Methodology is a more generic method, including not only optimization
capabilities but also Design of Experiments and experimental results analysis
features. As in the previous chapters, apart from the description of the basic
features of each method, examples of industrial engineering problems solved by
means of the aforementioned methods are also presented.
4.2 Simulated Annealing

Step 1. Create the initial state of the system (initial temperature and random
        initial feasible solution)
Step 2. While the termination criteria are not met
  Step 2a. Compute the temperature with an appropriate function (annealing
           schedule)
  Step 2b. Pick a random neighbor (by a small change/perturbation of the
           previous solution)
  Step 2c. If the acceptance probability is higher than a random value in (0,1),
           then the state of the random neighbor becomes the new state
Step 3. The final state is provided
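The steps above can be sketched in code. The following minimal Python illustration (not from the book) uses an exponential cooling schedule and the Metropolis acceptance probability, which are common but not the only possible choices; the test function and all parameter values are arbitrary assumptions for demonstration.

```python
import math
import random

def simulated_annealing(f, x0, t0=100.0, alpha=0.95, steps=2000):
    """Minimal SA sketch: exponential cooling, Metropolis acceptance."""
    x = best = x0
    t = t0
    for _ in range(steps):
        t *= alpha                                 # Step 2a: annealing schedule
        neighbor = x + random.uniform(-0.5, 0.5)   # Step 2b: small perturbation
        delta = f(neighbor) - f(x)
        # Step 2c: accept improvements outright; accept worse moves when
        # exp(-delta / t) is higher than a random value in (0, 1)
        if delta < 0 or math.exp(-delta / max(t, 1e-12)) > random.random():
            x = neighbor
        if f(x) < f(best):
            best = x
    return best                                    # Step 3: final state

# Usage: minimize f(x) = (x - 3)^2, whose optimum is at x = 3
random.seed(0)
result = simulated_annealing(lambda x: (x - 3.0) ** 2, x0=0.0)
```

Accepting worse moves with a temperature-dependent probability is what distinguishes SA from plain hill climbing and lets it escape local optima early in the run.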
As the SA method is one of the earliest computational methods for solving opti-
mization problems, it is anticipated that a large number of related works exists in the
literature. More specifically, in the field of industrial and manufacturing engineering,
several works exist on subjects such as dynamic facility layout [8–10], cell formation
[3], hybrid vehicle routing [5], integrated process planning and scheduling [11],
integrated lot sizing and scheduling [12], and truck scheduling [13].
4.3 Tabu Search

Tabu Search (TS) constitutes one of the earliest metaheuristic methods and was
proposed by Glover [14]. The TS method is based on local search, but its rules
improve the performance of local search methods in order to prevent trapping
in a local minimum point. The simplest form of the TS method, as described by its
inventor, contains the basic element of the method, which is a short-term memory
list of forbidden moves. In the work of Glover [14], several applications of the TS
method are reported, and it is considered that the TS method can be rather beneficial
for combinatorial optimization problems such as scheduling [15].
TS involves a local search procedure, in which an initial feasible solution is
improved until the optimum point is reached. A basic feature of TS method is the
use of several memory structures, which can guide the search to avoid premature
convergence and explore a satisfactory portion of the total search space. The simplest
list is the short-term list, in which the last n visited solutions are stored so that
they are not revisited until they are eventually deleted from the list [16]. The size
of a tabu list is often dependent on the size of the problem, and there exists no single
rule that provides the exact size for every problem [17, 18]. This list can essentially
prevent the algorithm from returning to recently searched solutions and reaching a local
optimum [15]. Apart from the search procedure, intensification and diversification
schemes can also be applied during the search, aided by two special types of tabu lists
[19]. An intermediate-term list is related to the intensification process and implements
a thorough exploration of promising areas of the search space. The intermediate-term
list records the number of consecutive iterations for which various solution components
have been present in the current solution [17, 18]. The third list type is the long-term
list, which serves diversification processes and helps the search procedure move to new
search regions when a local optimum is reached. This memory is a frequency memory,
storing how often moves have been performed or solutions visited, and it can be
used either to restart the search from a different initial solution (restart diversification)
or simply to penalize frequently performed moves (continuous diversification) [17, 18]. The three
different memory lists can occasionally overlap.
It is important to note that, although the elements in a short-term tabu list indicate
restricted moves, exceptions can be allowed when no current solution is better than
a restricted one. The rule that enables the use of a restricted move is called an
aspiration criterion, and the most popular one is, in fact, allowing a tabu move that
is better than the current best solution [15]. The use of all types of lists is
recommended in the case of hard optimization problems; otherwise, the short-term
list can be sufficient.
At first, an initial solution is generated and is set as the best solution. The tabu
list is also initialized and the search begins with moves in the search space until
the termination criteria, e.g., maximum number of iterations or threshold fitness
value or maximum number of iterations with no improvement in fitness value, are
met. These moves are essentially transitions between various points in the search
space [17]. At every iteration, a list of candidate solutions, namely the neighborhood
solutions, is generated, and each of these solutions is checked as to whether it is a
tabu one and whether it improves the best solution. This neighborhood search applies
a small perturbation to the previous solution in order to obtain a slightly modified one
[15]. In order not to evaluate a large number of candidates, sometimes the first
non-tabu move that is better than the current solution, or a solution that satisfies the
aspiration criteria, is chosen (best feasible one) [15, 19]. When the tabu list is full,
the first element is pushed out so that a new one can enter the list.
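As an illustration only (not taken from the cited works), the loop described above can be sketched in Python with a fixed-tenure short-term list and the aspiration criterion mentioned earlier; the toy objective, integer neighborhood, and parameter values are assumptions:

```python
from collections import deque

def tabu_search(f, x0, tenure=5, max_iter=100):
    """Minimal TS sketch: short-term tabu list plus aspiration criterion."""
    current = best = x0
    tabu = deque([x0], maxlen=tenure)  # a full list pushes out its oldest entry
    for _ in range(max_iter):
        # Neighborhood: small perturbations of the current solution
        neighbors = [current - 1, current + 1]
        # Admissible moves: non-tabu ones, plus tabu moves that beat the
        # best solution found so far (aspiration criterion)
        admissible = [n for n in neighbors if n not in tabu or f(n) < f(best)]
        if not admissible:
            break
        current = min(admissible, key=f)  # move to best admissible neighbor
        tabu.append(current)
        if f(current) < f(best):
            best = current
    return best

# Usage: minimize f(x) = (x - 7)^2 over the integers, starting from 0
best = tabu_search(lambda x: (x - 7) ** 2, x0=0)
```

Note that the move to the best admissible neighbor is made even when it worsens the current solution; the tabu list then prevents an immediate return, which is exactly how TS escapes local optima.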
The TS method is also used in conjunction with other methods in order to enhance
their capabilities. This method is rather simple to implement in its original
formulation, as it involves only three parameters, namely the tabu size (or tenure),
the maximum number of iterations, and the number of neighbors [16]. As it is one of
the earliest metaheuristics, many modifications have been proposed to improve this
method. In the
work of Fiechter [20], the earliest effort to parallelize the TS method efficiently is
presented, in conjunction with the various types of memory lists, intended to apply
it in large optimization problems. Furthermore, in the work of Hung et al. [21], an
enhanced parallel TS is presented in order to address two serious issues of previous
TS variants, namely the uncertainty about the optimality of the final solution and
its quality. For that reason, a hash table is employed to save
all local optimum points. The algorithm starts by assigning a random initial solution
to each processor, and the search continues until a local optimum is found. If the local
optimum is stored in the hash table, search is terminated and restarted from a new
random solution, otherwise the solution is added to the hash table and the process
continues [21]. Thus, the hash table acts as a long-term memory, including all local
minima encountered, which can effectively end a search when a local optimum is
reached. Furthermore, in this approach, the subspaces associated with each local
optimum are determined as the set of all points that lead to the same local optimum. This
definition is also useful for indirectly determining the percentage of the explored area.
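The effect of this hash table can be illustrated with a small sequential sketch (an assumption-laden simplification, not the parallel implementation of Hung et al.): local optima are stored in a set, and any run that reaches an already-stored optimum is cut short so that the search restarts elsewhere. The descent routine, the toy function, and the sweep of starting points are all illustrative choices.

```python
def greedy_descent(f, x):
    """Descend to the nearest local optimum over an integer neighborhood."""
    while True:
        better = [n for n in (x - 1, x + 1) if f(n) < f(x)]
        if not better:
            return x                  # local optimum reached
        x = min(better, key=f)

def memory_guided_search(f, starts):
    """Hash-table memory sketch: store every local optimum found and cut
    short any run that reaches an already-stored optimum."""
    seen = set()                      # hash table of known local optima
    best = None
    for x0 in starts:
        opt = greedy_descent(f, x0)
        if opt in seen:
            continue                  # basin already explored: restart
        seen.add(opt)
        if best is None or f(opt) < f(best):
            best = opt
    return best, seen

# f has two local minima, at x = 2 and x = 10
f = lambda x: abs(x - 2) * abs(x - 10)
best, seen = memory_guided_search(f, range(-5, 20))
```

Here the set of starting points that descend to the same stored optimum corresponds to the subspace of that optimum, which is what makes the explored fraction of the search space measurable.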
In the work of Dhingra and Bennage [22], a comparison of three different
strategies for TS, regarding the use of the aspiration criterion and a penalty function
for infeasible moves, is conducted. Demir et al. [17] employed a dynamic tabu tenure
(tabu list size) and a long-term memory function, after investigating various
parameters of the algorithm, in order to improve the performance of TS. In the work of
Lei and Wu [23], the creation of a multi-objective TS algorithm is described.
In the work of Glover et al. [24], a more detailed treatment of the TS method
is provided, which the interested reader may consult. Moreover, a brief summary of
advances concerning the TS method can be found in the work of Gendreau [25].
Various industry-related applications solved by the TS method are reported,
such as the robotic cell scheduling problem [15], the bin-packing problem
[19], the quadratic assignment and generalized assignment problems [16, 18], and buffer
allocation in production lines [17].
to find potentially better solutions in the proximity of the initial points. In order for
the candidate points to be closer to the optimum, selection of a neighboring point is
also governed by a random variable [26, 29]. If the selected new position is better
than a previous one, it replaces the previous one and the search ends; if one of these
positions is better than the global best, the global best is also updated [27]. The local
search procedure provides the EM method with a good balance between exploration and
exploitation, and it can be applied either to all points, as mentioned above, or only to
the current best point [30].
During the third stage, a force is exerted on each charged particle (candidate
solution), according to the contributions of the other particles, using the superposition
principle. The calculation of the force is the “core” of the EM method, as it determines
the direction and step of movement [26]. In a real electromagnetic field, the force
is proportional to the charges of particles and inversely proportional to the distance
between them. Thus, at first, a “charge” value is calculated for each particle according
to its fitness and the fitness of other particles in such a way that particles with higher
fitness have higher charge values; however, these charges are not characterized as
positive or negative [27]. This calculation is performed as follows:
q^{i} = \exp\left( -n \, \frac{ f(x^{i}) - f(x^{best}) }{ \sum_{k=1}^{m} \left( f(x^{k}) - f(x^{best}) \right) } \right)    (4.1)

In Eq. 4.1, q^{i} represents the charge of particle x^{i} and n is the number of dimensions
of the problem. When the charges are calculated, the force can be computed as follows,
for each of the m particles:

F^{i} = \sum_{j \neq i}^{m} \begin{cases} \left( x^{j} - x^{i} \right) \dfrac{ q^{i} q^{j} }{ \left\| x^{j} - x^{i} \right\|^{2} }, & \text{if } f(x^{j}) < f(x^{i}) \\ \left( x^{i} - x^{j} \right) \dfrac{ q^{i} q^{j} }{ \left\| x^{j} - x^{i} \right\|^{2} }, & \text{if } f(x^{j}) \geq f(x^{i}) \end{cases}    (4.2)

x^{i} = x^{i} + \lambda \, \dfrac{ F^{i} }{ \left\| F^{i} \right\| } \, (RNG)    (4.3)
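For illustration, Eqs. 4.1–4.3 can be implemented directly. In the following Python sketch (an illustrative reading of the formulas, not the original EM code), RNG is taken as the remaining feasible range in each direction, the best point is kept unmoved, and the helper name `em_step`, the guards against division by zero, and all parameter choices are assumptions.

```python
import numpy as np

def em_step(X, f, lower, upper, rng):
    """One EM iteration sketch: charges (Eq. 4.1), forces (Eq. 4.2),
    and movement (Eq. 4.3) for a population X of m particles in n dims."""
    m, n = X.shape
    fit = np.array([f(x) for x in X])
    best = fit.argmin()
    denom = np.sum(fit - fit[best]) or 1.0          # guard against zero
    q = np.exp(-n * (fit - fit[best]) / denom)      # Eq. 4.1: charges
    F = np.zeros_like(X)
    for i in range(m):
        for j in range(m):
            if j == i:
                continue
            d = X[j] - X[i]
            coef = q[i] * q[j] / (np.dot(d, d) or 1e-12)
            # Eq. 4.2: attraction towards better points, repulsion otherwise
            F[i] += d * coef if fit[j] < fit[i] else -d * coef
    X_new = X.copy()
    for i in range(m):
        if i == best:
            continue                                # the best point stays
        step = F[i] / (np.linalg.norm(F[i]) or 1.0)
        # Eq. 4.3: RNG limits the move to the remaining feasible range
        rng_vec = np.where(step > 0, upper - X[i], X[i] - lower)
        X_new[i] = X[i] + rng.random() * step * rng_vec
    return np.clip(X_new, lower, upper)

# Usage on a tiny population, minimizing f(x) = ||x||^2 in [-5, 5]^2
rng = np.random.default_rng(0)
X = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
X_new = em_step(X, lambda x: np.dot(x, x), lower=-5.0, upper=5.0, rng=rng)
```

Because charges are always positive, the direction of each pairwise force term (attraction or repulsion) is decided by the fitness comparison rather than by a charge sign, exactly as stated above.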
4.5 Response Surface Methodology

Response Surface Methodology (RSM) is a widely used method for the determination
of the correlation between several explanatory variables and response variables
for a specific problem. This method was first proposed by Box and Wilson in 1951
[33] and consists of several steps, including the Design of Experiments (DoE) for a
given problem, the analysis of results, and the determination of optimum parameters
by fitting the results to a second-degree polynomial model. As the RSM method involves
such a variety of processes, it can also be defined as a collection of mathematical and
statistical techniques [34].
It is common to conduct experiments by a factorial or fractional factorial design
before analyzing the results and using the RSM method to determine the optimum
parameters [35, 36]; usually, a Central Composite Design (CCD) or a Box–Behnken design
is preferred. More specifically, if a first-order model is to be selected next, a 2^k
factorial design or a Plackett–Burman design is more suitable, while if a second-order
model is to be selected, a 3^k factorial design, a CCD, or a Box–Behnken design is preferred
[37]. After the results are obtained, the first step involves the selection of the most
appropriate function, which can model the correlation between input and output
variables adequately. Usually, for that reason, functions containing first- or second-
order terms, namely linear, squared, or cross-product terms are employed to model
the majority of problems [38]. It should be noted that the failure of a lower-order
model does not directly imply that a higher-order model is more suitable, as
higher-order models have other deficiencies [35]. For the determination of the model
coefficients, the
least squares method is usually used, and statistical tests such as Analysis of Variance
(ANOVA) can be performed to assess the goodness of fit of the model and the
statistical significance of its terms [35, 36]; if it is proven that some of the terms can
be excluded from the model, this indicates that a variable, or its interaction with
other variables, does not contribute to the explanation of the correlation
between the input and output variables. The use of an appropriate DoE method in the
previous step is advantageous not only because it reduces the number of necessary
experiments, and thus time and cost, but also because it leads to the establishment of
a more accurate model, as it can provide minimal errors [34].
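As a concrete aside, the CCD mentioned above can be constructed directly in coded units. In this hypothetical sketch (the helper name and defaults are assumptions, not from the book), the axial distance defaults to the value (2^k)^(1/4) that makes the design rotatable, rotatability being one of the fundamental RSM design properties.

```python
import itertools
import numpy as np

def central_composite_design(k, alpha=None, n_center=1):
    """Coded-unit CCD: 2^k factorial + 2k axial + center points."""
    if alpha is None:
        alpha = (2 ** k) ** 0.25     # axial distance for a rotatable design
    factorial = np.array(list(itertools.product([-1.0, 1.0], repeat=k)))
    axial = np.zeros((2 * k, k))     # one +alpha and one -alpha run per factor
    for i in range(k):
        axial[2 * i, i] = -alpha
        axial[2 * i + 1, i] = alpha
    center = np.zeros((n_center, k))
    return np.vstack([factorial, axial, center])

# Usage: a two-factor CCD with 4 factorial, 4 axial, and 1 center run
design = central_composite_design(2)
```

The axial runs are what allow the pure quadratic terms of a second-order model to be estimated, which a plain 2^k factorial design cannot do.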
The next step of the RSM procedure is actually the optimization process, in order
to find the optimum parameters that minimize the response. After a starting point is
selected, the optimization process is conducted in two steps. The first step involves
the use of a first-order model and steepest descent method in order to obtain rapid
convergence from the initial point to the vicinity of the optimum point. Then, the
second step involves the use of a second-order model in order to obtain higher
accuracy, when searching for the optimum point in a smaller area, which is indicated
by the first-step procedure. After a candidate point (a stationary point) is detected, it is
examined in order to verify that it is in fact a point of minimum response and
not a saddle point [36].
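The second-order step can be sketched numerically (a generic illustration, not a procedure from the book): fit y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2 by least squares, solve for the stationary point where the gradient vanishes, and inspect the eigenvalues of the Hessian to distinguish a minimum from a saddle point. The function names and the synthetic grid data are assumptions.

```python
import numpy as np

def fit_second_order(x1, x2, y):
    """Least-squares fit of a full second-order model in two factors."""
    A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef  # b0, b1, b2, b11, b22, b12

def stationary_point(coef):
    """Solve grad y = 0 and classify the point via the Hessian."""
    b0, b1, b2, b11, b22, b12 = coef
    B = np.array([[2 * b11, b12], [b12, 2 * b22]])   # Hessian matrix
    xs = np.linalg.solve(B, -np.array([b1, b2]))     # stationary point
    eig = np.linalg.eigvalsh(B)
    if (eig > 0).all():
        kind = "minimum"
    elif (eig < 0).all():
        kind = "maximum"
    else:
        kind = "saddle point"
    return xs, kind

# Synthetic responses from y = (x1 - 1)^2 + (x2 + 2)^2 on a 5 x 5 grid
g1, g2 = np.meshgrid(np.linspace(-3, 3, 5), np.linspace(-3, 3, 5))
x1, x2 = g1.ravel(), g2.ravel()
y = (x1 - 1) ** 2 + (x2 + 2) ** 2
xs, kind = stationary_point(fit_second_order(x1, x2, y))
```

The eigenvalue check is the algebraic counterpart of the saddle-point inspection described above: a stationary point is a minimum of the fitted response only when the Hessian is positive definite.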
Some of the fundamental properties of RSM designs include orthogonality, rotatability,
and uniformity.
RSM has been applied to several problems related to industry such as no-wait two-
machine flow shop scheduling [39], truck scheduling problem [40, 41], maintenance
planning policy, and preventive maintenance [42, 43].
4.6 General Conclusions
The book presents the main metaheuristics related to industrial and manufacturing
engineering problems. As it attempts to cover a significant part of the relevant
literature, this work is divided into three main chapters, each containing a category
of computational methods for optimization, namely Evolutionary-Based methods,
Swarm Intelligence-Based methods, and other important methods (SA, TS, EM, RSM),
together with their variants. The fundamental features of both well-established
and newly proposed, yet promising, methods are clearly described, and selected
literature references, pertinent to Industry 4.0 applications, are provided in
each subsection. The application of these methods for the time- and cost-efficient
solution of important and hard industrial problems renders them appropriate for
inclusion within the framework of Industry 4.0, e.g., towards the creation of more
sophisticated decision-making systems. A statistical analysis concerning the popularity
of the presented methods is also conducted in the present work, revealing their frequent
use in industry-related problems during the last decade and emphasizing the need to
present the significant amount of gathered knowledge. Thus, the presentation of
computational and statistical methods conducted in this book is considered helpful
both for the selection of appropriate methods for the solution of hard industry-related
problems and for the implementation of these methods.
References
38. Onwubolu GC (2006) Selection of drilling operations parameters for optimal tool loading using
integrated response surface methodology: a tribes approach. Int J Prod Res 44:959–980
39. Rabiee M, Zandieh M, Jafarian A (2012) Scheduling of a no-wait two-machine flow shop with
sequence-dependent setup times and probable rework using robust meta-heuristics. Int J Prod
Res 50:7428–7446
40. Amini A, Tavakkoli-Moghaddam R (2016) A bi-objective truck scheduling problem in a cross-
docking center with probability of breakdown for trucks. Comput Ind Eng 96:180–191
41. Vahdani B, Zandieh M (2010) Scheduling trucks in cross-docking systems: robust meta-
heuristics. Comput Ind Eng 58:12–24
42. Rivera-Gómez H, Gharbi A, Kenné JP (2013) Joint production and major maintenance planning
policy of a manufacturing system with deteriorating quality. Int J Prod Econ 146:575–587
43. Berthaut F, Gharbi A, Kenné J-P, Boulet J-F (2010) Improved joint preventive maintenance
and hedging point policy. Int J Prod Econ 127:60–72