
SPRINGER BRIEFS IN APPLIED SCIENCES AND TECHNOLOGY
MANUFACTURING AND SURFACE ENGINEERING

Nikolaos E. Karkalos
Angelos P. Markopoulos
J. Paulo Davim

Computational Methods for Application in Industry 4.0
SpringerBriefs in Applied Sciences and Technology

Manufacturing and Surface Engineering

Series editor
João Paulo Davim, Aveiro, Portugal
More information about this series at http://www.springer.com/series/10623
Nikolaos E. Karkalos
Angelos P. Markopoulos
J. Paulo Davim

Computational Methods for Application in Industry 4.0

Nikolaos E. Karkalos
Laboratory of Manufacturing Technology, School of Mechanical Engineering
National Technical University of Athens
Athens, Greece

J. Paulo Davim
Department of Mechanical Engineering
University of Aveiro
Aveiro, Portugal

Angelos P. Markopoulos
Laboratory of Manufacturing Technology,
School of Mechanical Engineering
National Technical University of Athens
Athens
Greece

SpringerBriefs in Applied Sciences and Technology
ISSN 2191-530X ISSN 2191-5318 (electronic)
Manufacturing and Surface Engineering
ISSN 2365-8223 ISSN 2365-8231 (electronic)
ISBN 978-3-319-92392-5 ISBN 978-3-319-92393-2 (eBook)
https://doi.org/10.1007/978-3-319-92393-2
Library of Congress Control Number: 2018942914

© The Author(s) 2019


This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part
of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission
or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar
methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt from
the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this
book are believed to be true and accurate at the date of publication. Neither the publisher nor the
authors or the editors give a warranty, express or implied, with respect to the material contained herein or
for any errors or omissions that may have been made. The publisher remains neutral with regard to
jurisdictional claims in published maps and institutional affiliations.

Printed on acid-free paper

This Springer imprint is published by the registered company Springer International Publishing AG
part of Springer Nature
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface

The concept of Industry 4.0 is closely related to the incorporation of advancements in Information Technology into manufacturing technology and systems. As its
name implies, it is a revolution in industry, aiming at a higher level of automation
and digitization that will lead to the overall improvement of processes, services, and
products. Several tools and technologies, such as Internet of Things, cloud com-
puting, and big data analytics, are already used for this purpose. However, the shift
from the automated manufacturing concept to intelligent manufacturing is of par-
ticular importance. In advanced manufacturing, intelligence is a key element for
future development and progress. The aforementioned shift can be accomplished by
including algorithms and methods relevant to soft computing or computational
methods, for optimization in industrial practice. These methods are capable of
providing fast simulation models or resolving industry-related problems efficiently.
Especially in the case of hard engineering problems, it is possible to adopt a rapidly
converging optimization method as a part of the decision-making system, which
will receive information in real time from the physical processes in the industrial
environment and provide reliable results. Computational optimization techniques
have proven to be critical for this purpose and a lot of research has already been
performed in this field. As a result, many methods and various techniques have
been proposed and applied in manufacturing technology problems.
It is necessary to acquire a general knowledge of the most useful optimization
methods in order to be able to apply them efficiently in real-life situations. In this
book of the SpringerBriefs series, computational methods, which can be used by
intelligent systems within the concept of Industry 4.0, are gathered and presented. In
Chap. 1, an introduction to the concepts of Industry 4.0 and optimization is provided.
Furthermore, a literature survey is conducted in order to exhibit the importance of the
computational methods that will be presented in the next chapters of the book.
Chapter 2 includes some of the most frequently used Evolutionary-Based methods,
starting with Genetic Algorithm and its main variations, presented over the years of
application of this method. In addition, Imperialist Competitive Algorithm,
Biogeography-Based Optimization, Teaching-Learning-Based Optimization, Sheep
Flock Heredity Algorithm, Shuffled Frog-Leaping Algorithm and Bacteria Foraging


Optimization are presented and discussed. In Chap. 3, Swarm Intelligence-Based
methods are introduced. The chapter includes Particle Swarm Optimization,
Artificial Bee Colony, Ant Colony Optimization, Intelligent Water Drops
Algorithm, Harmony Search Algorithm, Firefly Algorithm, Cuckoo Search
Algorithm, Fruit Fly Optimization, Hunting Search, Migrating Birds Optimization,
Flower Pollination Algorithm, and Anarchic Society Optimization. Finally, Chap. 4
includes methods that bear some or no similarity to the methods presented in the
two previous chapters but are nevertheless worth mentioning due to their application in
the field of industrial engineering. Simulated Annealing, Tabu Search,
Electromagnetism-like Mechanism, and Response Surface Methodology are
analyzed.
The purpose of the book is not to be exhaustive in the list of methods and
algorithms that exist in the relevant literature. It rather aims at presenting and
discussing some important or promising techniques, which, however, already have
some application. With more than 20 methods presented, a thorough demonstration
of their capabilities and applications, and a vast bibliography of uses, it is expected
that the reader will get acquainted with this kind of computational methods. The
experienced user may review the advancements of the past years, draw new ideas,
or use this volume as a reference book. Most of the cited works pertain
to intelligent manufacturing applications, appropriate to be used within the
framework of Industry 4.0.

Athens, Greece Nikolaos E. Karkalos
Athens, Greece Angelos P. Markopoulos
Aveiro, Portugal J. Paulo Davim
Contents

1 General Aspects of the Application of Computational Methods in Industry 4.0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 General Aspects and Definitions on Optimization Methods . . . . . 4
1.3 Literature Survey . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2 Evolutionary-Based Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.2 Genetic Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.2.1 Basic Concepts and Terminology . . . . . . . . . . . . . . . . . . 12
2.2.2 GA Pseudocode and Further Details . . . . . . . . . . . . . . . . 13
2.2.3 Notable Variants of GA Algorithm . . . . . . . . . . . . . . . . . 14
2.3 Other Evolutionary Algorithms Related to GA . . . . . . . . . . . . . . 16
2.3.1 Differential Evolution . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.3.2 Memetic Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.4 Imperialist Competitive Algorithm . . . . . . . . . . . . . . . . . . . . . . . 17
2.5 Biogeography-Based Optimization Algorithm . . . . . . . . . . . . . . . 19
2.6 Teaching-Learning-Based Optimization Method . . . . . . . . . . . . . 20
2.7 Sheep Flock Heredity Algorithm . . . . . . . . . . . . . . . . . . . . . . . . 23
2.8 Shuffled Frog-Leaping Algorithm . . . . . . . . . . . . . . . . . . . . . . . 24
2.9 Bacteria Foraging Optimization Algorithm . . . . . . . . . . . . . . . . . 26
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3 Swarm Intelligence-Based Methods . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.2 Particle Swarm Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.3 Artificial Bee Colony Method . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.4 Ant Colony Optimization Algorithm . . . . . . . . . . . . . . . . . . . . . 38
3.5 Intelligent Water Drops Algorithm . . . . . . . . . . . . . . . . . . . . . . . 40


3.6 Harmony Search Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.7 Firefly Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
3.8 Cuckoo Search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
3.9 Fruit Fly Optimization Algorithm . . . . . . . . . . . . . . . . . . . . . . . 45
3.10 Hunting Search Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
3.11 Migrating Birds Optimization Algorithm . . . . . . . . . . . . . . . . . . 47
3.12 Flower Pollination Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . 49
3.13 Anarchic Society Optimization Algorithm . . . . . . . . . . . . . . . . . 50
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
4 Other Computational Methods for Optimization . . . . . . . . . . . . . . . 57
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
4.2 Simulated Annealing Method . . . . . . . . . . . . . . . . . . . . . . . . . . 57
4.3 Tabu Search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
4.4 Electromagnetism-Like Mechanism Algorithm . . . . . . . . . . . . . . 61
4.5 Response Surface Methodology . . . . . . . . . . . . . . . . . . . . . . . . . 63
4.6 General Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Chapter 1
General Aspects of the Application
of Computational Methods
in Industry 4.0

1.1 Introduction

Since the beginning of the first industrial revolution, engineers have constantly
attempted to resolve problems related to the operation and maintenance of machinery.
They have also aimed at improving the efficiency of manufacturing processes
and, more generally, at the organization of production and other related subjects. As
anticipated, systematic approaches for the scientific study of industry-related
problems were established and solutions were proposed. However, after the introduction
of computers and the development of computational methods, a new promising
era for solving industry-related problems emerged, as advanced computational
techniques were capable of providing approximate yet highly accurate solutions.
Especially when it is desired to increase the efficiency of manufacturing processes
by determining the optimum process parameters, or when the solution of hard
production-based problems, such as scheduling, is required, optimization methods
can be employed.
In advanced manufacturing, intelligence is a key element for future development
and progress. Intelligent production is incorporated into industrial practice to some
extent; however, it is expected to play a major role in the near future. It is also
expected to affect manufacturing business globally, at any level, so that enterprises
will be flexible enough to respond to production changes swiftly. One very important
and contemporary concept, related to advanced and intelligent manufacturing, is the
concept of Industry 4.0.
Industry 4.0 can be conceived as the merging of manufacturing technology methods
and processes with Information Technology (IT) [1–3]. The concept of Industry
4.0, a term which actually refers to the “4th industrial revolution”, was introduced
initially in the German-speaking areas, in 2011, by the name of “Industrie 4.0” [1,
3, 4] and has currently become a topic of high interest in the field of industrial engi-
neering. This industrial revolution was evoked by the considerable advances in IT
and electronics, such as advances in networks and Internet or embedded electronic


systems, which have given rise to technologies, such as Cyber-Physical Systems
(CPS), Internet of Things (IoT), cloud computing, and big data analytics [5, 6].
One of the basic characteristics of the 4th industrial revolution is the shift from the
automated manufacturing concept to the intelligent manufacturing concept, which
involves the integration of organizations, equipment, humans, and products, as well
as the cooperation between them and their ability to communicate in real time [2, 3,
7]. Thus, a higher level of automation and digitization will be achieved and more than
before, artificial intelligence will play a considerable role, enhanced by the ability
to manipulate and evaluate large amounts of data and exchange information within a
fully networked system [3, 8]. Through the use of technologies of many disciplines,
the establishment of efficient, collaborative, and sustainable industrial production is
intended.
A considerable economic impact is expected from the introduction of Industry
4.0 concept into actual industries, due to the increase of effectiveness and efficiency
[1, 9], as well as the creation of new business models, which can take into account
the contribution of Industry 4.0 methods [1]. The most important capability of the
industries of the new era will be their higher level of adaptability to the dynamic
requirements of individual customers. The latter demand highly customized products
with a relatively small lot size [3, 7, 9, 10]. In order to fulfill the requirements of
such customers, several characteristics relevant to Industry 4.0 methods should be
harnessed. For example, the ability to transmit information in real time can effectively
reduce the time required in order to meet the customer needs, and the ability of using
flexible and reconfigurable processes for “smart products” facilitates the creation of
specialized products, while at the same time it becomes feasible for the industry to
make a profit, even at small production sizes [3, 7, 8, 11]. Furthermore, it is important
to note that the ability to monitor energy consumption in real time leads also to high
levels of energy efficiency and consequently to reduction of costs and implementation
of the green manufacturing concept [10–12].
Despite the fact that the Industry 4.0 concept has already become a topic of major
importance in the industrial engineering sector, it still lacks a proper definition and it
appears that there is no universal agreement that it can be accepted as a new
industrial revolution [1, 13]. However, several researchers relying on the description
of Industry 4.0 by Kagermann [4] were able to determine its basic constituents and
characteristics. Thus, CPS, IoT and “smart factory” can be identified as key features
of the Industry 4.0 concept [3]. More specifically, CPS actually constitute a fusion of
the physical and virtual world of the modern industries; embedded systems can
monitor and control physical processes, e.g., product quality inspection, machine tool
maintenance, energy consumption management, and physical systems’ responses
have an impact on the calculations performed in the virtual world [1, 14]. Essentially,
the virtual world is created as a model or “copy” of the physical world in a factory
in order to facilitate monitoring and decision-making processes by the embedded
systems [5, 8]. There is direct and real-time communication between the physical
and the virtual worlds, with the capability of acting autonomously [7]; in order to
achieve this, many aforementioned technologies such as IoT, big data analytics, and
cloud computing are employed [10].
IoT is stated as the “key enabler for Industry 4.0”, along with CPS [1, 7, 15], as
it directly enables various constituents of the manufacturing system to integrate into
the network of the modern factory and to be able to interact by providing information
and cooperate with other components of this system, by using technological features
such as RFID, sensors or actuators, or other embedded electronics [3, 6, 7, 11].
The proper integration of IoT and CPS will eventually enable the creation of the
“smart factory”. Specifically, smart factories are factories with the ability
to assist humans and machines in performing their tasks. This capability is offered
by CPS, which can effectively interconnect the physical and virtual world in these
factories [1]. Smart factories can achieve both vertically and horizontally integrated
production systems, e.g., networked manufacturing systems within the smart factory
and multiple smart factories, respectively [5, 7]. Within the smart factories, the goal of
decentralization of decision-making can be achieved by using CPS and IoT, alongside
big data analytics, and even artificial intelligence methods can be employed in order
to aid the machines to complete complex tasks and exhibit self-optimization and
self-configuration capabilities [6]. Finally, within the smart factories, end-to-end
integration will be also possible, related to the value chain of the product life cycle
[5].
Outside the German-speaking world, various relevant concepts have been intro-
duced in several technologically advanced countries or multinational entities, such
as: United States (US), China, European Union (EU), Japan, India, France, United
Kingdom (UK), South Korea, and Singapore [1, 2, 7, 11, 13]. More specifically, in
the US, there exist several initiatives close to the Industry 4.0 concept. For exam-
ple, the “Industrial Internet” concept proposed the integration of complex physical
machinery and embedded, networked systems in order to predict, control and plan
for the improvement of business and societal outcomes [7]. Other related concepts
in the US include the “Integrated Industry”, “Smart Industry”, and “Smart manu-
facturing” [1, 2, 7, 13]. In China, the initiative “Made in China 2025” proposes an
upgrade of the industry, relevant to Industry 4.0, in order to achieve a green, open
and shared development alongside the “Internet Plus” initiative [2, 13]. The EU has
also proposed an initiative entitled “Factories of the Future”, related also to Horizon
2020 programs [2, 13]. Apart from these countries or political entities, various large
companies have also taken similar initiatives such as the Industrial Internet Consor-
tium (IIC), which is a cooperation between AT&T, Cisco, General Electric, IBM and
Intel companies [13].
As it becomes evident, Industry 4.0 relies on IT technologies to a significant
degree. Apart from the implementation of the required infrastructure, e.g., hardware,
embedded systems, networks, it is also important to implement the software required,
e.g., for monitoring or decision-making processes for the CPS. For that reason, it
is possible to include algorithms and methods pertinent to soft computing, such as
Artificial Neural Networks (ANN) or computational methods for optimization, such
as Evolutionary Algorithms (EA), as these methods are capable of providing fast
simulation models or resolving industry-related problems efficiently. Especially in the
case of hard problems such as the job shop scheduling problem, it is possible to
include a rapidly converging optimization method as a part of the decision-making
system, which will receive information in real time from the physical processes in
the industrial environment and provide reliable results. Thus, it is possible to enable a
higher degree of reasoning for the CPS. During the last decades, nontraditional opti-
mization techniques, with generic capabilities, such as Genetic Algorithms (GA),
Swarm Optimization Algorithms, or others, were introduced to the field of industrial
engineering in order to facilitate the solution of industry-related problems. These
methods are intended to be presented in this book, with a view to inform the inter-
ested reader of the optimization techniques available and to assist in the selection
of appropriate optimization techniques for each problem. Furthermore, the details
regarding the computational efficiency of each technique, which are provided in each
section of this book, should be considered in the case of actual integration
of these algorithms in the decision-making processes of CPS.
As it will be shown in this book, the field of optimization methods in indus-
trial practice is always active and of great importance and thus, it is necessary to
acquire a general knowledge of the most useful methods and their applications in
order to be able to apply them efficiently in real-life situations. Thus, computational
methods, which can be used by intelligent systems within the concept of Indus-
try 4.0, were gathered and presented. These methods include Evolutionary-Based
and Swarm Intelligence-Based methods; however, some more methods are included,
such as Simulated Annealing method. Each method is explained in its fundamental
aspects, while some notable bibliography is provided for further reading.
Specifically, the outline of the present book is structured as follows: in the current
chapter, the general aspects of optimization are summarized and a brief literature sur-
vey is presented with a view to underline the popularity and relevance of the described
methods in the field of industrial engineering. In Chap. 2, several evolutionary-based
metaheuristics, e.g., the Imperialist Competitive algorithm, the Biogeography-Based
Optimization method or the Teaching-Learning Based Optimization method are pre-
sented, with emphasis set on the most popular of them, namely the Genetic Algorithm.
In Chap. 3, the Swarm Intelligence-Based methods, such as Particle Swarm Opti-
mization, Artificial Bee Colony optimization, Ant Colony Optimization, and others,
are described. Finally, in Chap. 4, other optimization techniques, not pertinent to the
aforementioned categories, such as Simulated Annealing method, are presented.

1.2 General Aspects and Definitions on Optimization


Methods

An optimization problem is essentially the problem of determining the best solution
from the feasible solutions of this problem. Optimization problems may involve
either continuous or discrete value variables; the latter problems are also called com-
binatorial optimization problems. Furthermore, one of the most important aspects of
optimization problems is that they can be also subjected, or not, to several restrictions.
A continuous optimization problem can be formulated generally as follows [16]:

• minimize F(x) with respect to x,
• subject to g_i(x) ≤ 0, i = 1, …, m and h_i(x) = 0, i = 1, …, p
where F(x) is the function that is required to be optimized and is referred to as the
objective function. F(x) is usually a function of multiple variables, so x is usually
defined as a vector; this vector is referred to as the vector of design variables of the
problem. For most engineering optimization problems, the lower and upper bound
of design variables value is defined according to the exact requirements of each
problem, or defined as a reasonable range for each problem, otherwise. Furthermore,
h_i(x) represents the equality constraints and g_i(x) the inequality constraints
that are imposed.
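
As a minimal illustration of this formulation, the following Python sketch (the specific functions and the penalty coefficient rho are illustrative assumptions, not part of the text) encodes a two-variable objective with one inequality and one equality constraint, folding constraint violations into a single penalized objective, as many stochastic optimizers expect:

def objective(x):
    # F(x): a simple two-variable objective to be minimized
    return (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2

def g(x):
    # inequality constraint, feasible when g(x) <= 0
    return x[0] + x[1] - 1.0

def h(x):
    # equality constraint, feasible when h(x) = 0
    return x[0] - 2.0 * x[1]

def penalized_objective(x, rho=1e3):
    # violations enter as quadratic penalty terms, so an optimizer
    # only ever evaluates a single scalar function value
    violation = max(g(x), 0.0) ** 2 + h(x) ** 2
    return objective(x) + rho * violation

With this wrapping, an unconstrained minimizer applied to penalized_objective approximately solves the constrained problem, at the cost of tuning rho.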
When the purpose of optimization is the determination of the best solution accord-
ing to multiple goals, then a multi-objective optimization problem arises. In order to
solve these problems, some modifications are required and the use of Pareto charts
is essential for the determination of favorable solutions, according to the criteria set.
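
The central notion in the multi-objective case is Pareto dominance. The short helper below (an illustrative sketch, not taken from the text) checks whether one objective vector dominates another when all objectives are minimized:

def dominates(f_a, f_b):
    # f_a Pareto-dominates f_b if it is no worse in every objective
    # and strictly better in at least one of them
    no_worse = all(a <= b for a, b in zip(f_a, f_b))
    strictly_better = any(a < b for a, b in zip(f_a, f_b))
    return no_worse and strictly_better

# example: (1.0, 2.0) dominates (1.5, 2.0) under minimization
assert dominates((1.0, 2.0), (1.5, 2.0))

The set of solutions not dominated by any other forms the Pareto front from which favorable trade-offs are selected.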
Although the optimization methods are actually related with the mathematical
problems of determining the extrema of a function, optimization problems in the
field of engineering are nowadays usually resolved with the aid of special compu-
tational methods. Optimization can be conducted either by a deterministic method,
in which an initial solution is guided towards the region of optimum solution in a
deterministic way, by taking advantage of analytical properties of the problem [17]
or by a stochastic method, which incorporates stochastic processes and random vari-
ables in order to provide more flexibility and to be capable of avoiding convergence
to a local optimum instead of the global one. Deterministic or exact optimization methods can
guarantee the determination of an optimal solution, while stochastic methods cannot
[18]. Generally, there are methods that may require no further information except for
the objective function, e.g., zero-order methods, or they may require the computation
of first, second, and rarely third derivatives of this function with respect to the design
variables in order to establish a better guidance of the solution towards the optimum
point, i.e., a technique mostly related to deterministic optimization methods. Some
of the main differences between deterministic and stochastic techniques are the con-
vergence speed or computational cost and also the ability to prevent the optimization
process from premature convergence in local optima. Usually, deterministic algo-
rithms can provide a quicker solution but most of them cannot handle the situation
when the solution is attracted to a local optimum [19]. The stochastic optimization
algorithms described in the present work require no information about the problem
to be solved other than the objective function, the design variable bounds, and the
constraints, so they can be employed as a “black box” without the need to intervene
in the physics or other details of the optimization problem.
As far as computational optimization techniques are concerned, many categories
of methods exist; especially before presenting details of the methods, which will
be presented in the current work, it is important to define two basic categories of
methods closely related to optimization, namely heuristics and metaheuristics.
Heuristic techniques constitute a common approach for determining appropriate
and feasible solutions for various problems without ensuring that these solutions are
optimal. These techniques often involve the use of practical methods or even trial-
and-error method in order to approach the solution of a problem. Regarding computer
science, heuristic techniques are employed to solve a large variety of problems. The
characteristics of these techniques are the relatively low computational cost and high
speed but also lower accuracy and no guarantee that the optimal solution can be
determined; however, in many practical problems for which the optimal solution is
hard to find in reasonable time, heuristic solutions can be acceptable. It is
to be noted that these algorithms can be combined with optimization techniques to
enhance their capabilities.
Metaheuristics are considered to be a heuristic or generally a similar procedure of
a higher level which manages the optimization procedure. In fact, they are capable
of guiding underlying heuristics to solve optimization problems [20]. These algo-
rithms do not ensure that a globally optimum solution will be determined, but they
are capable of determining good solutions by searching large regions and performing
better than other optimization techniques or simple heuristics. It is usual
for metaheuristics to employ stochastic procedures and they can be used for generic,
problem independent applications. One of the basic features of these methods is the
appropriate way to balance between global and local search capability, termed also
as exploration and exploitation (or diversification and intensification), respectively.
Methods should be capable of exploring many promising regions of the search space
and of moving away from regions of local optima, to avoid converging to a suboptimal
solution. However, they should also be able to conduct local search in the proximity
of promising solutions in order to find a potentially better neighboring solution.
A considerable number of metaheuristics, and in fact the majority of recently
invented ones, are based on metaphors related to nature. It is generally assumed by
researchers that natural processes, derived from thousands or even millions of
years of evolution, possess remarkably well-optimized features, which can be interpreted
as algorithms. However, this cannot be sufficient in some cases and hybridization of
different metaheuristics or other techniques is performed. The reason for the devel-
opment of such methods is that some of the metaheuristics are based on metaphors
which cannot guarantee efficient performance. For example, some algorithms are
weak in the exploration part and are prone to trapping in local minima, whereas
others cannot conduct efficient local search in the vicinity of promising candidate
solutions.
As far as optimization problems are concerned, it should be necessary to mention
some of the most important problems encountered in the field of engineering. These
problems arise usually from the everyday practice in industry and most of them are
shown to be almost impossible to be solved by regular optimization methods, such
as exact optimization methods [20], which are capable of solving problems whose
cost grows polynomially with the problem size [18]. For difficult problems (NP-
Hard), the cost of solution required by an exact method grows exponentially with the
problem size, so it is necessary to use a metaheuristic in order to obtain a solution
in reasonable time [18]. Some of the problems often encountered in production
engineering are assembly line worker assignment, facility layout problem, workforce
allocation problem, cell formation and task scheduling, vehicle routing problem,
job shop scheduling problem, single or parallel machine scheduling, and optimal
allocation of manufacturing processes, among others.
Metaheuristics can be generally classified into several categories according to
some of their characteristics. In the present work, a classification is necessary as many
methods share common characteristics and is more proper to present them succes-
sively. Although classifications with different criteria may be possible, the classifi-
cation performed in the present work distinguishes between classical Evolutionary-
Based Metaheuristics and Swarm Intelligence-Based Metaheuristics with the fol-
lowing criterion: methods which involve the creation of offspring with improved
characteristics at each generation are classified in the first category, whereas algo-
rithms that involve the cooperation of individuals for the accomplishment of a com-
mon task are classified in the second category. Moreover, other important methods
which should be presented separately from the others are presented in Chap. 4. The
aforementioned methods use a different type of metaphor than the majority of
methods described in the other sections, which are mostly bioinspired, population-
based methods, or they contain no metaphor at all. In fact, two of them are inspired by
physics, namely Simulated Annealing and Electromagnetism-like Mechanism, and
two of them are not related to any real-life process, i.e., Tabu Search and Response
Surface Methodology, which is not a metaheuristic. However, these methods play a
considerable role in the solution of production-related optimization problems, as
will be shown in the next section, and deserve to be mentioned.
It is worth mentioning that the field of metaheuristics is a very rapidly progressing
one, and as new variants and hybrid algorithms emerge at a relatively high rate, it
could be difficult to cover every metaheuristic method in detail. However, as the
literature survey indicates, the methods presented cover a wide range of older and
newer metaheuristics frequently utilized for industrial problems. In the current work,
only algorithms which have been at least once employed for this type of problems are
presented. Each method is briefly presented aiming to inform the readers about the
basic characteristics and then details on the significance of method parameters and
notable variants are also described with a view to aid the selection of an appropriate
method for their needs. Before proceeding to the methods’ description, a short liter-
ature survey is conducted with a view to inform the readers about the significance
and popularity of each of the methods which will be presented afterwards.

1.3 Literature Survey

In order to present the significance of the use of metaheuristics in the field of engi-
neering, but also to demonstrate the popularity of each of these methods, a literature
survey was conducted in journals of the field. In total, about 1500 scientific papers
were searched in online databases of major publishing companies such as Elsevier,
Springer, and Taylor & Francis during the literature survey. Results were obtained
separately for each method, which will be presented in the current work and then
were aggregated according to the general category of methods to which they belong.
More specifically, in this survey, search results for each method belonging to each
of the major categories of methods present in the current work are summed for the
last 10 years, i.e., 2007–2017. In Fig. 1.1, the total results concerning the number
of journal articles relevant to each category of methods are presented, whereas in
Fig. 1.2, the detailed results for each year from 2007 to present can be observed.
From Fig. 1.1, it can be seen that about one out of two papers concerning the
application of metaheuristics for industrial engineering problems is related to evo-
lutionary algorithms, and the rest of the papers are almost equally divided between the
other two categories, namely 767, 316, and 410 papers for each category, respectively.
Thus, genetic algorithm and other related methods, including several variants, are
sufficiently suited for this type of problems, although they are not problem-based
methods. However, it should not be underestimated that the Swarm Intelligence (SI)
methods, although generally being more recent compared to other types of methods,
have considerable popularity during the last decade, as well as more “traditional”
methods such as Simulated Annealing and Tabu Search, which were categorized in
the third category.
From the more detailed graph in Fig. 1.2, some of the aforementioned observations
can be directly confirmed and other trends can be observed.

Fig. 1.1 Percentage of articles referring to Evolutionary-Based, Swarm Intelligence-Based and other methods
Fig. 1.2 Number of publications per year for each method

As for the EA-based algorithms, it can be easily observed that the results presented
in Fig. 1.1 are again confirmed, as the number of relevant publications is higher every
year than the publications with all the other types of methods, and almost
every year they constitute about 50% of the total number of publications. A trend of
temporary increase or decrease of the number of such publications is observed, but
every year about 60–80 publications are regularly reported. For the SI-based methods,
it is interesting to see that the number of publications has considerably risen since
2007, especially after 2011–2012, and it fluctuates around 30 each year, with some
exceptions. The number of publications using SI-based methods was initially lower
than that of the other two categories, but since 2011 it has been almost equal to that of
the category related to the other computational methods. Finally, the total number of publications
has been close to or over 140 per year in recent years, implying that these methods constitute a considerable
part of industrial engineering-related research.
References

1. Hermann M, Pentek T, Otto B (2016) Design principles for industrie 4.0 scenarios. In: 2016
49th Hawaii international conference on system sciences (HICSS). Koloa, USA, pp 3928–3937
2. Qian F, Zhong W, Du W (2017) Fundamental theories and key technologies for smart and
optimal manufacturing in the process industry. Engineering 3:154–160
3. Roblek V, Meško M, Krapež A (2016) A complex view of industry 4.0. SAGE Open
6:2158244016653987
4. Kagermann H, Lukas W-D, Wahlster W (2011) Industrie 4.0: Mit dem Internet der Dinge
auf dem Weg zur 4. industriellen Revolution. VDI Nachrichten 13:11
5. Liu Y, Xu X (2016) Industry 4.0 and cloud manufacturing: a comparative analysis. J Manuf
Sci Eng 139:34701–34708
6. Aiman Kamarul Bahrin M, Othman F, Hayati Nor Azli N, Farihin Talib M (2016) Industry 4.0:
a review on industrial automation and robotic. Jurnal Teknologi 78:137–143
7. Thoben K-D, Wiesner S, Wuest T (2017) “Industrie 4.0” and smart manufacturing—a review
of research issues and application examples. Int J Autom Technol 11(1):4–16
8. Zhong RY, Xu X, Klotz E, Newman ST (2017) Intelligent manufacturing in the context of
industry 4.0: a review. Engineering 3:616–630
9. Lu Y (2017) Industry 4.0: a survey on technologies, applications and open research issues. J
Ind Inf Integr 6:1–10
10. Wang S, Wan J, Zhang D, Li D, Zhang C (2016) Towards smart factory for industry 4.0: a
self-organized multi-agent system with big data based feedback and coordination. Comput
Netw 101:158–168
11. Shrouf F, Ordieres J, Miragliotta G (2014) Smart factories in industry 4.0: a review of the
concept and of energy management approached in production based on the internet of things
paradigm. In: 2014 IEEE international conference on industrial engineering and engineering
management. Bandar Sunway, Malaysia, pp 697–701
12. Tamás P, Illes B, Dobos P (2016) Waste reduction possibilities for manufacturing systems in
the industry 4.0. IOP Conf Ser Mater Sci Eng 161(1):012074
13. Liao Y, Deschamps F, de Loures EFR, Ramos LFP (2017) Past, present and future of Industry
4.0—a systematic literature review and research agenda proposal. Int J Prod Res 55:3609–3629
14. Illés B, Tamás P, Dobos P, Skapinyecz R (2017) New challenges for quality assurance of
manufacturing processes in industry 4.0. Solid State Phenom 261:481–486
15. Tamás P, Illes B (2016) Process improvement trends for manufacturing systems in industry
4.0. Acad J Manuf Eng 14:119–125
16. Rao SS (2009) Introduction to optimization. Wiley, Hoboken
17. Lin M-H, Tsai J-F, Yu C-S (2012) A review of deterministic optimization methods in engineer-
ing and management. Math Probl Eng 2012:756023
18. Rothlauf F (2011) Design of modern heuristics: principles and application, 1st edn. Springer
Publishing Company Inc., Berlin
19. Wang X, Damodaran M (2000) Comparison of deterministic and stochastic optimization algo-
rithms for generic wing design problems. J Aircr 37:929–932
20. Talbi E-G (2009) Metaheuristics: from design to implementation. Wiley, Hoboken
Chapter 2
Evolutionary-Based Methods

2.1 Introduction

In the current chapter, several metaheuristics involving the evolution of a population
in order to create new generations of genetically superior individuals are
presented. These algorithms are usually significantly influenced by the most promi-
nent (and earliest) among them, the Genetic Algorithm (GA). Details about their basic
characteristics and function, as well as some important variants, are described and
applications in the field of industrial engineering are highlighted. A detailed descrip-
tion of the basic features of the genetic algorithm is presented at the beginning of
this chapter and afterwards, other Evolutionary Algorithms (EA) are summarized.
Specifically, both relatively older and well-established, as well as newer but promis-
ing methods are included, namely Differential Evolution, Memetic Algorithm,
Imperialist Competitive Algorithm, Biogeography-Based Optimization algorithm,
Teaching-Learning-Based optimization, Sheep Flock Heredity algorithm, Shuffled
Frog-Leaping algorithm and Bacteria Foraging Optimization algorithm.

2.2 Genetic Algorithm

Among the most important metaheuristics of the first category presented in this
work is the genetic algorithm (GA). This method is a nature-inspired (biology-
inspired) method pertinent to natural selection process and inheritance of charac-
teristics through genes [1]. This method was developed initially by Holland [2] and
belongs to the greater family of evolutionary algorithms; this method is considered
as one of the earliest metaheuristic methods. Due to the fact that this method is by
far the most popular one in many fields of science, a considerable amount of modifi-
cations and variants of the original form of this algorithm have been proposed in the
relevant literature, aiming at the amelioration of its performance and considerable
reduction of its computational cost. Furthermore, from the detailed presentation of
various methods in the current work, it will become clear that many of these meth-
ods were influenced by the GA by imitating the function of the evolutionary process,
which is a basic characteristic of this algorithm. Moreover, some of the other methods
have adopted some other features of GA, such as the mutation operator, which aims
at the establishment of a better balance between the exploration and exploitation
capabilities of these methods.

2.2.1 Basic Concepts and Terminology

GA considers candidate solutions as chromosomes, which are composed of a
sequence of genes [3]. Initially, a set of randomly generated candidate solutions is
created and constitutes the first generation of the population. These initial candidate
solutions usually cover a large area of the search space. Each candidate solution-
chromosome is evaluated according to the objective function, which is related to the target of
the optimization problem, and then subsequent processes lead to the creation of the
next generations until the algorithm is terminated. The purpose is to determine high
fitness individuals and ultimately inherit their superior genes to the next generations
to produce potentially better individuals, i.e., the principle of survival of the fittest
according to Goldberg [4]; however, although it is supposed that unfavorable char-
acteristics are eliminated, according to the Darwinian theory of evolution [5], in
GA, less fit individuals are not totally excluded from the selection and reproduction
process, as there is a possibility that they can contribute to an offspring with better
characteristics. In general, the various stages involved in the optimization process
apart from the initialization stage are the process of creating the new individuals and
the application of various genetic operators, which are particularly useful in order
to direct the algorithm efficiently towards the optimum solution.
The candidate solutions, as in almost every other metaheuristic, need to be properly
encoded before the algorithm starts. In the GA metaheuristic, the chromosomes
can be represented in binary form, i.e., classical or modified such as binary gray
encoding, by integers, floating point numbers or even symbols [3]. Afterwards, the
initial population can be generated, respecting the lower and upper bounds for each
decision variable-gene and the fitness is evaluated by the objective function of the
problem.
The selection process is essential for the creation of new generations and occurs
after individuals from the previous generation are evaluated. In general, high-quality
individuals are the most probable to be selected but there is also a possibility that
lower ranked solutions will contribute to the creation of offspring as some of their
genes may provide favorable characteristics to the new population. Various methods
have been used to implement the selection process, such as tournament methods and
ranking methods, to name just a few [3].
The crossover operator aims at combining elements from the existing solutions,
with a view to form the new generation of solutions. It is considered as the equivalent
of biological reproduction and crossover, as the new solution is derived from more
than one “parent” solutions, which belong to older generations. This technique is
actually the main exploration mechanism for GA [5]. Several techniques exist for
the implementation of the crossover operator, such as swapping data from parent
solutions based on a single or two points division of each solution vector, but also
combination of data from more than two parents can be performed.
The mutation operator aims at creating a “genetically diverse” solution for the
newer generation of the population by modifying some or all of the variables of a
candidate solution vector. This process is similar to the biological mutation process,
which alters the genes of a chromosome. This operator is also very important as it
can guarantee that new search areas will be visited in order to prevent the algorithm
from stopping in the area of local optimum. Usually, the occurrence of mutation for a
chromosome-candidate solution is related to a predefined mutation probability. This
factor can regulate the occurrence of mutation so as it is best suited to each problem
and phase of the optimization process.
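
The operators described above can be sketched in a few lines of Python for a binary encoding; the fragment below (the mutation probability value is an illustrative assumption) implements a single-point crossover and a bit-flip mutation:

import random

def one_point_crossover(parent_a, parent_b):
    # swap gene segments of the two parents around a random cut point
    point = random.randint(1, len(parent_a) - 1)
    return (parent_a[:point] + parent_b[point:],
            parent_b[:point] + parent_a[point:])

def bit_flip_mutation(chromosome, p_mut=0.01):
    # each gene is flipped independently with probability p_mut
    return [1 - gene if random.random() < p_mut else gene
            for gene in chromosome]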

2.2.2 GA Pseudocode and Further Details

According to the basic characteristics of a GA presented in the previous subsection,
it is possible to present the GA in the form of pseudocode; see Fig. 2.1.
It is to be noted that in the case of multi-objective optimization, techniques
such as the Non-dominated Sorting Genetic Algorithm (NSGA) are employed to determine
the best solutions, using also a Pareto chart. A comprehensive review of
multi-objective optimization with evolutionary algorithms is presented in [6].
Step 0. Start the algorithm.
Step 1. Generate the initial random population.
Step 2. Calculate the fitness of all individuals.
Step 3. If the stop criterion is satisfied, End.
Step 4. Else:
    Select individuals (new chromosomes) for producing the next generation.
    If p < p_cross, perform crossover between two or more selected individuals.
    If p < p_mut, perform mutation on individuals.
    Go to Step 2.
Fig. 2.1 Pseudocode for GA
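
A minimal runnable counterpart to the pseudocode of Fig. 2.1 is sketched below in Python for a real-valued encoding; the population size, tournament selection, elitism, and the sphere test function are illustrative choices of this sketch, not prescriptions of the text (it assumes a problem dimension of at least 2):

import random

def ga_minimize(fitness, dim, bounds, pop_size=30, generations=100,
                p_cross=0.9, p_mut=0.1):
    lo, hi = bounds
    # Step 1: generate the initial random population
    pop = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(generations):          # Step 3: stop criterion
        pop.sort(key=fitness)             # Step 2: evaluate fitness
        new_pop = pop[:2]                 # elitism: keep the two best
        while len(new_pop) < pop_size:
            # Step 4: tournament selection of two parents
            a = min(random.sample(pop, 3), key=fitness)
            b = min(random.sample(pop, 3), key=fitness)
            child = a[:]
            if random.random() < p_cross:   # single-point crossover
                point = random.randint(1, dim - 1)
                child = a[:point] + b[point:]
            if random.random() < p_mut:     # mutate one random gene
                i = random.randrange(dim)
                child[i] = min(hi, max(lo, child[i]
                               + random.gauss(0.0, 0.1 * (hi - lo))))
            new_pop.append(child)
        pop = new_pop
    return min(pop, key=fitness)

# usage: minimize the 5-dimensional sphere function over [-5, 5]^5
best = ga_minimize(lambda x: sum(v * v for v in x), dim=5, bounds=(-5.0, 5.0))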


One of the important details, which should be carefully treated, is the number of
individuals in the population, as a small number of chromosomes will restrict thorough
search of the search space and a large number of chromosomes will unnecessarily
increase the computational cost. Another interesting feature of the GA is its inherent
parallelism, which enables it to modify a set of solutions simultaneously [1]. For more
details on parallel GA implementations, the work of Cantú-Paz [5], which includes a
thorough analysis of a considerable amount of parallel algorithms, is recommended.
GA has been employed in almost every domain pertinent to industrial engineering.
Examples of such applications of GA include production of forged components [7],
flow shop scheduling [8], customer order scheduling problem [9], parallel machine
scheduling [10], hybrid flow shops [11], assembly/disassembly manufacturing net-
works [12], as well as supply chain problem [13]. For a more detailed view on the
applications of GA, the interested reader is advised to consult relevant works in the
literature such as the comprehensive review by Chaudhry and Luo [14] or the work
of Aytug et al. [15].

2.2.3 Notable Variants of GA Algorithm

Apart from the features of GA which were discussed in the previous subsection,
due to the popularity of the GA method and due to the fact that it constitutes one of the
earliest metaheuristics, several variants of GA exist in the relevant literature. These
variants add more features to the original GA formulation or consist of combinations
of other heuristics and metaheuristics with GA in order to improve its performance. It
is reported several times in the relevant literature that the simplest version of GA can
exhibit premature convergence without some enhanced features [1]. As the number
of such variants is large, it would be difficult to present an exhaustive list of these
variants in the present work; however, some notable variants of GA are presented
afterwards.
An interesting modification to the original GA is the use of adaptive parameter
control to modify the GA parameters, according to the evaluation of the candidate
solutions based on some appropriate measure, such as the ones related to the conver-
gence of the GA [16]. Usually, the parameters which can be adjusted are the crossover
or mutation probabilities with a view to prevent the algorithm from trapping in local
extrema. Another modified approach of the GA presented in the relevant literature
is the concurrent GA algorithm, which can be employed in cases of coupled prob-
lems, where it is required to consider multiple objectives simultaneously [17]. At
first, the whole optimization problem is replaced by a problem of finding feasible
solutions. Then, for the same population of candidate solutions, genetic operation
processes are performed in parallel. Afterwards, the new generation of solutions,
for each individual process, is combined to a new set of offspring, which becomes
the next population of candidate solutions. In fact, a number of chromosomes from
each individual problem are retained and the new set of offspring is created using
elements from all these chromosomes. The Concurrent Genetic Algorithm (CGA)
is performed using real number coding and every decision variable from individual
process is used as one gene.
The multi-population concept has also been successfully introduced for GA [18].
This variant involves a hierarchical structure within the population comprised of
clusters of individuals arranged in levels. Each cluster is composed of a leader,
which is the best individual of the cluster and supporters. Crossover is conducted
between members of the same cluster and new individuals replaced the previous only
if they are fitter. Additionally, a migration process occurs between all clusters and
then only the best individuals and migrated individuals are retained whereas all the
other individuals are replaced by new. This process is conducted after the populations
in the clusters have converged and new (better) individuals are not inserted.
For cases of bi-level problems, an approach called coevolutionary GA is often
adopted [19]. This type of problems involves two levels: the second or lower level
(“follower”) is in fact a part of the constraints of the main/upper level (“leader”)
problem. At some cases, the coevolutionary scheme is applied for different subpop-
ulations within the same search space, but applying this scheme for different search
spaces is also possible [19]. In the latter approach, the coevolutionary GA is hierar-
chical, applied in two different levels, namely the leader and follower search space.
For the search in the follower search space, instead of solving the exact problem, an
approximate solution is first computed and then some “good solutions” are evolved
by a modified GA to ensure feasibility as well as to reduce the computational cost
considerably.
Coevolution algorithms can be further divided into two categories depending
on whether the subpopulations compete or cooperate [20]. By dividing the total
population into subpopulations, it is evident that the search space for each of the
subpopulations is significantly smaller and it can be more easily searched. Another
multi-population coevolution method is described in [21].
Furthermore, Rajkumar and Shahabudeen [1] proposed the use of a Nawaz-
Enscore-Ham (NEH) heuristic along with the classical randomized initialization,
the use of several crossover operators, each with different probabilities, as well as
a set of different mutation operators. In order to avoid premature convergence, they
employed an elitist strategy and hypermutation strategy, which increased the muta-
tion probability to further continue the search procedure. Finally, Shukla et al. [22]
presented and compared the application of five variants of evolutionary algorithms
to the inventory routing problem.
Finally, it is usual in some cases to hybridize metaheuristics with other similar
techniques or soft computing methods in order to produce methods with enhanced
capabilities. As for the hybridizations of GA, i.e., combinations with other methods,
there exist several such as combination with Simulated Annealing method, Particle
Swarm Optimization or Response Surface Methodology [23], as well as combina-
tions with soft computing methods such as Artificial Neural Networks and Fuzzy
Logic System, which can result in effective and low-cost optimization frameworks.
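
As a minimal illustration of the adaptive parameter control discussed at the beginning of this subsection, the fragment below (the stagnation window and scaling factor are illustrative assumptions) raises the mutation probability when the best fitness stagnates, in the spirit of the hypermutation strategy mentioned above:

def adapt_mutation_rate(p_mut, best_history, window=10,
                        p_min=0.01, p_max=0.5, factor=1.5):
    # If the best fitness (minimization) has not improved over the last
    # `window` generations, increase p_mut to push the search away from
    # a local optimum; otherwise decay it back towards its minimum.
    if len(best_history) > window and best_history[-1] >= best_history[-window]:
        return min(p_max, p_mut * factor)
    return max(p_min, p_mut / factor)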
2.3 Other Evolutionary Algorithms Related to GA

Although genetic algorithm is by far the most popular evolutionary algorithm, there
exist also some important evolutionary algorithms, closely related to GA, which are
worth mentioning. Thus, some of them, such as Differential Evolution and the
Memetic Algorithm, will be briefly presented afterwards.

2.3.1 Differential Evolution

The Differential Evolution (DE) method also belongs to the evolutionary algorithms
group of optimization methods. It was proposed by Storn and Price [24]
and it is considered to be a rather simple and inexpensive optimization method. Its
function is similar to that of other EA methods but it has also special characteristics.
Furthermore, as DE is a long established method, several variants exist, with the
main difference between them being the mutation operator.
In its original form, the procedure of the DE method starts from the initializa-
tion of the population of candidate solutions of the optimization problem; candidate
solutions are randomly generated and should respect the defined lower and upper
bounds for each design variable. A basic characteristic of DE is the computation
of differences between random solutions from the population, denoted as difference
vectors [25]. After evaluation of the initial population is carried out, the best fit
individual is defined as the base vector for the difference calculations. Then, the
difference between random solutions is calculated and multiplied by a random mul-
tiplication factor F, called also differential weight [24]. By adding this vector with
the base vector, several new candidate solutions are created, namely the challenger
vector population [25]. These individuals are later crossed over with other individu-
als. Before the new iteration begins, the new best fit is defined as the base vector and
the process carries on until termination criteria are met. In DE, mutation is imple-
mented by the use of the differential weight in order to help the search procedure, if
it is close to premature convergence; this operator can be also implemented in other
ways in order to balance exploration and exploitation better or speed up the conver-
gence [26]. For more details on DE method, it is recommended to consult the work
of Neri and Tirronen [27]. DE method is often applied for industrial engineering
optimization problems such as critical chain project scheduling [26], assembly line
balancing [28], flow shop or job shop scheduling problems [29, 30].
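
As an illustration of the above procedure, a minimal Python sketch of DE follows. It implements a best-vector variant with binomial crossover, in line with the description above; the objective function f, the population size, the differential weight F, and the crossover rate CR are illustrative assumptions rather than prescribed values.

import numpy as np

def de_minimize(f, bounds, pop_size=30, F=0.8, CR=0.9, iters=200, seed=0):
    """Sketch of Differential Evolution with the best individual as base vector.
    bounds: NumPy array of shape (dim, 2) with lower/upper bounds."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)  # random initialization
    fit = np.array([f(x) for x in pop])
    for _ in range(iters):
        base = pop[fit.argmin()]            # best-fit individual as base vector
        for i in range(pop_size):
            r1, r2 = rng.choice(pop_size, size=2, replace=False)
            # difference vector scaled by the differential weight F
            mutant = np.clip(base + F * (pop[r1] - pop[r2]), lo, hi)
            mask = rng.random(dim) < CR     # binomial crossover
            mask[rng.integers(dim)] = True
            trial = np.where(mask, mutant, pop[i])
            f_trial = f(trial)
            if f_trial < fit[i]:            # greedy selection
                pop[i], fit[i] = trial, f_trial
    return pop[fit.argmin()], fit.min()

For instance, de_minimize(lambda x: np.sum(x**2), np.array([[-5.0, 5.0]] * 3)) would minimize a simple sphere function over three variables.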

2.3.2 Memetic Algorithm

The Memetic Algorithm (MA) is an evolutionary algorithm based on the notion of
memes. Memes are directly related to ideas and useful information and are thought to

be equivalent to the chromosomes of the GA. Actually, MA represents the transmission
of knowledge in a society, but also its modification and evolution throughout
the years [31]. The Memetic Algorithm was proposed by Moscato [32] and combines
a global search GA (population-based) and an individual-based local search [33].
The local search feature is very important for the memetic algorithm and there is
a need for robust local search operators to conduct effective exploitation [34]. The
local search procedure can also involve learning mechanisms such as a Lamarckian
or Baldwinian learning process [31].
At first, the initial population is randomly generated and evaluated. Then parents
for new solutions are selected according to their fitness by a stochastic process such
as tournament selection [35]. Afterwards, a crossover operator is applied to create
new offspring and guide the algorithm towards better solutions. The next step is the
local search, which corresponds to the lifetime learning process of individuals in
order to improve their knowledge. In the final step, it is decided whether the new
solution can be added to the population in order to replace older members. Some of
the applications of MA in production-related problems are arc routing problem [33],
traveling salesman problem [36], job shop scheduling problem [37], scheduling of
machines and Autonomous Guided Vehicles (AGV) [38], and selective pickup and
delivery problem [39].
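
A minimal sketch of this loop in Python follows, assuming a bit-string representation, tournament selection, one-point crossover, and a simple first-improvement bit-flip hill climber as the local search operator; all of these operator choices are illustrative, not the only ones used in the literature.

import random

def memetic_optimize(f, dim=20, pop_size=30, gens=100, ls_steps=10):
    """Sketch of a Memetic Algorithm on bit strings (maximization of f)."""
    pop = [[random.randint(0, 1) for _ in range(dim)] for _ in range(pop_size)]

    def hill_climb(x):
        # lifetime learning: first-improvement bit-flip local search
        for _ in range(ls_steps):
            i = random.randrange(dim)
            y = x[:]
            y[i] ^= 1
            if f(y) > f(x):
                x = y
        return x

    for _ in range(gens):
        parents = [max(random.sample(pop, 3), key=f) for _ in range(2)]  # tournaments
        cut = random.randrange(1, dim)               # one-point crossover
        child = hill_climb(parents[0][:cut] + parents[1][cut:])
        worst = min(range(pop_size), key=lambda i: f(pop[i]))
        if f(child) > f(pop[worst]):                 # replace the worst member
            pop[worst] = child
    return max(pop, key=f)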

2.4 Imperialist Competitive Algorithm

The Imperialist Competitive Algorithm (ICA) is another interesting evolutionary
algorithm, which was proposed by Atashpaz-Gargari and Lucas [40]. The concept
of imperialism is that a country desires to extend its influence out of its borders [40].
Thus, the basic concept of this algorithm is related to the creation of empires by pow-
erful and wealthy countries, by accumulating several other countries as colonies. For
the ICA, the candidate solutions are represented as countries and the best countries-
solutions after the evaluation are considered as “imperialist” countries, which possess
colonies and form empires [41]. The power of each empire is computed according
to the power of the imperialist country and a percentage of the mean power of its
colonies; during the process, imperialist countries try to enlarge their empires
by conquering other countries [40, 41].
The basic function of the ICA can be described as follows: at first, a population of
candidate countries-solutions is randomly generated within the search space of the
optimization problem. The objective/cost function is used to determine the power
of each country and subsequently determine the type of each country, i.e., which
countries will become imperialists and which will become colonies [40]. The three
most important operators of ICA are assimilation, imperialistic competition, and
revolution. The assimilation operator is implemented using crossover and mutation
processes and is used to direct the colonies towards the imperialist countries and get
improved by altering their attributes [41–43]. During this step, colonies do not move
straight towards the imperialist, but a deviation can also occur in order to examine

different points around the imperialist [44]; larger deviations enhance global search
whereas smaller deviations favor a local search [45]. The revolution operator is used
to generate random changes in some countries, e.g., to replace the weakest colony
with a random new solution [41]. This process favors exploration and can lead to
avoidance of local optima in the early stages of the optimization process [45], but it
was not included in the original version of the algorithm [41].
After this step, the total power of each empire is calculated and then, the compe-
tition operator is used by strong imperialist empires to take over colonies of weak
empires [40, 45], see Fig. 2.2; in fact, the weakest colony of the weakest empire is
released and the other empires can capture it according to a probability value related
to their power [41]. During the assimilation process, if a colony is more fit than the
imperialist, then it replaces it. When weak empires lose all their colonies, they are
eliminated and when the colonies of the last remaining empire are almost equally
strong, the algorithm stops [41, 46]. Alternatively, the algorithm can terminate after
a maximum number of iterations is attained [41]. Important parameters of ICA and
their effect on the algorithm are presented thoroughly in [45]. Generally, ICA can
be viewed as an analogue of the GA, as it represents the evolution of human
societies instead of biological evolution.
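
As a small illustration, a sketch of the assimilation move is given below; the coefficients beta and gamma are illustrative, and applying the deviation independently per coordinate is a simplification of the angular deviation described above.

import numpy as np

rng = np.random.default_rng(1)

def assimilate(colony, imperialist, beta=2.0, gamma=np.pi / 4):
    """Sketch of the ICA assimilation move: the colony travels a random
    fraction (scaled by beta) of its distance to the imperialist, with a
    random deviation so that points around the imperialist are examined."""
    d = imperialist - colony
    step = beta * rng.random() * d                 # move towards the imperialist
    deviation = rng.uniform(-gamma, gamma, colony.shape) * np.abs(d)
    return colony + step + deviation

If the assimilated colony turns out to be fitter than its imperialist, the two swap roles, as noted above.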
ICA algorithm has been used for the solution of multi-objective problems by per-
forming the necessary modifications [47]. In the work of Karimi et al. [46], a term
used in the electromagnetism-like mechanism method is chosen for the implementa-
tion of movement during the assimilation step and Taguchi method was employed in
order to determine the algorithm parameters’ values. In another work [43], an elitist
approach is followed, copying the imperialists (best solutions) to the next generation,
and an additional regrouping step is added, which aims at increasing the diversity
of the optimization process. Finally, Moradinasab et al. [48] proposed an adaptive
ICA and introduced a new operator, called the “global war” operator, which creates
new countries; after merging and sorting of the old and new populations, a new
population is formed.

Fig. 2.2 Weakest colony takeover by the strongest empire [45] (reproduced with permission)
Some of the production-related problems, for which the ICA method has been
used, include process planning and scheduling [41], flexible flow shop scheduling
[46], assembly flow shop scheduling [42], single machine scheduling [43], production
network scheduling [49], and cellular manufacturing problem [50] among others.

2.5 Biogeography-Based Optimization Algorithm

Biogeography-Based Optimization (BBO) is an optimization method inspired by
the choice of appropriate habitats by animal species and was initially proposed by
Simon [51]. The term biogeography describes the scientific study of animal and
plant species distribution in geographic space throughout the evolution of species
(geological history); biogeography constitutes a separate branch of biology.
biology. Biogeography models can describe the migration of species from one place
to another, the way they arise and become extinct. As in the natural process, BBO
algorithm employs two main mechanisms-operators, namely migration of species and
mutation, and evaluates the candidate solutions according to the Habitat Suitability
Index (HSI) [51].
At first, an initial population of candidate solutions-habitats, of size N, consti-
tuting an ecosystem, is randomly generated within the search space. Each design
variable is considered as a Suitability Index Variable (SIV) and the objective func-
tion is related to the HSI. Habitats with high HSI value are highly suitable habitats,
with many species and the number of species in each habitat is affected by migration,
i.e., both emigration and immigration. High HSI habitats have high emigration rate
and low immigration rate due to the greater number of species which exist there and
are considered more static than low HSI habitats. These good solutions can exist for
a longer time than low HSI habitats [51]. The migration operator actually implements
the exploitation process [52]; for the modification of solutions according
to migration, namely habitat modification, an immigration (λ) and an emigration
rate (μ), which are dependent on the fitness values of candidate solutions [53], are
defined and they are employed for the exchange of “information” between different
solutions-habitats, especially from good to poor solutions [51, 54]. More specifically,
if a solution is chosen to be modified, its immigration rate is used to decide whether
each of its SIVs should be modified; if so, the emigration rates determine which of
the other solutions will give a randomly chosen SIV to the former solution [51].
It is to be noted that the immigration rate is a monotonically
nonincreasing function of HSI and emigration rate is a monotonically nondecreasing
function of HSI [55]. This operator helps to preserve good solutions so that they do
not vanish early [56].
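
The migration mechanism just described can be sketched as follows, using a linear model for the immigration rate λ and the emigration rate μ; the maximum rates I and E and the roulette-wheel selection of the emigrating habitat are illustrative modeling choices.

import numpy as np

rng = np.random.default_rng(2)

def bbo_migration(pop, fitness, I=1.0, E=1.0):
    """Sketch of one BBO migration sweep with linear rates.
    pop: (n, d) array of habitats; fitness: HSI values (higher is better)."""
    n, d = pop.shape
    order = np.argsort(-fitness)          # rank habitats, best (highest HSI) first
    pop, fitness = pop[order], fitness[order]
    k = np.arange(1, n + 1)
    lam = I * k / n                       # immigration: low for good habitats
    mu = E * (1 - k / n)                  # emigration: high for good habitats
    new_pop = pop.copy()
    for i in range(n):
        for j in range(d):
            if rng.random() < lam[i]:     # immigrate this SIV?
                # roulette-wheel choice of the emigrating habitat based on mu
                src = rng.choice(n, p=mu / mu.sum())
                new_pop[i, j] = pop[src, j]
    return new_pop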

Finally, the mutation operator accounts for a tremendous change in the habitat
due to several reasons and can modify several SIVs of a solution, functioning in a
similar way as the mutation operator of the GA (exploration) [52]. This operator
can be implemented in three different ways [56]. Mutation is governed by the solution
probability value; the lowest solution probability is exhibited by very high and very
low HSI solutions, as they are both considered extreme cases, and consequently they
are more likely to mutate, increasing the diversity [51, 54]. For low HSI solutions,
this approach is advantageous as it enables these solutions to evolve to better ones,
and for high HSI solutions, it gives them a chance to further improve their value [51].
However, an elitist feature in the algorithm ensures that high HSI solutions will be
carried to the next generation, by setting their immigration rate to zero.
An improved BBO algorithm was proposed by Lin [52]. In this approach, the
initialization was performed with the Opposition-Based Learning (OBL) method and
the NEH heuristic, a local search method was added to the migration step, and the VLS
mechanism, i.e., variable local search, was used to modify the mutation operator. The
latter is employed so as not to produce a completely random solution, which would
considerably decrease the HSI value of a habitat. In the work of Paslar et al. [54],
the migration operator was applied separately to each solution vector, a penalty
function method was used in order to handle constraints and a modified habitat
update procedure was implemented. For this algorithm, several variants as well as
hybridizations with other metaheuristics exist, such as Particle Swarm Optimization
and Artificial Bee Colony algorithm.
BBO has been applied several times for applications in the field of industrial engi-
neering such as permutation flow shop scheduling [52, 57], flexible manufacturing
system scheduling [54], minimization of nonproductive time [56], and supply chain
network design [58].

2.6 Teaching-Learning-Based Optimization Method

The Teaching-Learning-Based Optimization (TLBO) method is a metaheuristic based
on the influence of teaching on the progress of young students. This method was first
proposed by Rao et al. [59] for mechanical design optimization problems as other
algorithms, such as GA, were considered complicated and computationally expensive
for these types of problems. This method models several phases of the teaching and
learning process in education; good teachers can improve the average performance
of students in a class more significantly than mediocre teachers, and students can
also learn from the knowledge of better performing students [59]. In Fig. 2.3, the
flowchart of the method is shown.
In TLBO, the design variables are interpreted as subjects taught to the students
and the objective function is related to their performance on these subjects. The
teacher, being the best solution, has the highest performance on the subjects [59]. At
first, a population of students-candidate solutions is randomly generated and after
evaluation is conducted on them, namely a ranking phase [60], the best solution

Fig. 2.3 TLBO flowchart [59] (reproduced with permission)

represents the teacher. The procedure is divided into two main parts, the teaching
and the learning phase. The teacher is a highly qualified person, attempting to transmit
his knowledge to the students so that their mean level of knowledge approaches his
level (teacher phase). It is considered that the better the quality of teaching, the
greater will be the influence on the students’ performance as the teacher will bring the

students closer to his level [59]. During this process, students-candidate solutions are
updated according to the difference between their average performance and their
teacher’s performance, as can be seen in Eq. 2.1:

X_{new} = X_{old} + r \cdot (X_{teacher} - T_F \cdot Mean) \quad (2.1)

In Eq. 2.1, X_{new} denotes the new solution for a learner-student, X_{old} denotes the
previous solution for the same learner, X_{teacher} is the solution corresponding to the
teacher, r is a random number in the range [0,1], T_F is the teaching factor, which
can be interpreted as a heuristic step and can take the value 1 or 2, and Mean is the
mean of the group of learners at the current iteration. If the new solution is better
than the previous one, it is accepted; otherwise, the old value is retained [61]. The use
of random variables in the calculation of the new solution implies that there are
students who learn the subjects to a high degree, students who are mediocre, and
students who are not affected by the teacher, whose only hope is to learn afterwards
from good students [62].
Moreover, students can also learn from other students, as they discuss their
subjects and study in groups; this is called the learning phase in TLBO.
In fact, students learn from other students with more knowledge: two students
are randomly chosen and compared, a new solution is calculated based on the
difference of their levels, and it is accepted only if it brings an improvement
compared to the old one [59]. The learning phase can be implemented as follows:
 
X_{new} = X_{old} + r \cdot (X_i - X_j) \quad (2.2)

In Eq. 2.2, it is evident that if learner j has better performance than learner i, the two
terms in the parenthesis are swapped to (X_j - X_i). Finally, the best student becomes
the teacher at the end of the iteration. These processes of teaching and learning are
repeated until the termination criteria are met. Generally, the method is relatively
simple in its standard version; its typical parameters are the population size, the
maximum number of iterations, and the random variables which regulate the movement
of the students’ knowledge level towards that of the teacher and of the best students [63].
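
A minimal sketch of one TLBO iteration, following Eqs. 2.1 and 2.2 for a minimization problem, is given below; the objective function f and the population shape are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(3)

def tlbo_iteration(pop, f):
    """Sketch of one TLBO iteration (minimization), per Eqs. 2.1 and 2.2."""
    n, d = pop.shape
    fit = np.array([f(x) for x in pop])
    teacher = pop[fit.argmin()]
    mean = pop.mean(axis=0)
    for i in range(n):                               # teacher phase (Eq. 2.1)
        TF = rng.integers(1, 3)                      # teaching factor: 1 or 2
        new = pop[i] + rng.random(d) * (teacher - TF * mean)
        if f(new) < fit[i]:
            pop[i], fit[i] = new, f(new)
    for i in range(n):                               # learner phase (Eq. 2.2)
        j = rng.choice([k for k in range(n) if k != i])
        diff = pop[i] - pop[j] if fit[i] < fit[j] else pop[j] - pop[i]
        new = pop[i] + rng.random(d) * diff
        if f(new) < fit[i]:
            pop[i], fit[i] = new, f(new)
    return pop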
As in the case of other methods, modified versions of this method have also been
proposed. In an improved approach presented in [60], a training phase is added before
the teaching phase, in order to improve the quality of teachers. This approach also
considers more than one teacher in the population and two additional parameters,
namely the proportion of teachers in the population (η) and the training intensity
(l_s). The training phase is related to a local search procedure in the neighborhoods of
good solutions-teachers, which aims to intensify the search in favorable regions. In the
work of Rao [61] an elitist approach is also reported, in which the better solutions are
kept in the next generation as well. Moreover, in another work, modifications of the
original TLBO based on neighborhood search are presented [63]. At first, “learning
groups” consisting of several learners are formed and, consequently, changes in
the original teaching and learning phases are performed. As for the teaching stage, a
hybridization between the TLBO teaching phase and a Gaussian sampling learning

based on neighborhood search is proposed. Similarly, during the learning stage, there
is a possibility of choosing the TLBO learning phase or a neighborhood search, which
is the equivalent of learning from the “neighborhood” teacher. These modifications
aim to improve the balance between global and local search. Furthermore, in the
Elitist-TLBO approach, a mutation operator is added to improve some of the flaws
of the original TLBO [64]. A more detailed analysis of TLBO algorithm is conducted
in [62].
Some of the applications of TLBO in engineering problems include hybrid flow
shop scheduling [59], optimal allocation of manufacturing resources [64], and assem-
bly line balancing [65].

2.7 Sheep Flock Heredity Algorithm

The Sheep Flock Heredity Algorithm (SFHA) is a metaheuristic inspired by the
behavior of sheep in flocks controlled by shepherds, proposed by Nara et al. [66]. This
algorithm exhibits many characteristics similar to the other evolutionary computa-
tional algorithms, as it is related to the inheritance of genetic characteristics between
generations of sheep both for sheep within the same flock and sheep from other
flocks [66]. Usually, sheep live within a specific flock, which is supervised by
the shepherds. This leads to inheritance of characteristics only locally; rarely, however,
sheep from different flocks can mix, creating individuals with mixed characteristics.
More specifically, in this method, the candidate solution is represented by a sheep
flock (chromosome—string) and it is composed of sub-chromosomes/sub-strings
with the same structural configuration as the chromosome (sheep) [66, 67]. A real-
life example of such structure is the maintenance schedule for a series of machines.
After the initial population is generated and the initial flocks-solutions are eval-
uated, the crossover and mutation operators are applied on individual parameters of
each flock (sub-chromosome) and then these operators are applied between different
flocks (chromosome level). It is to be noted that the application of genetic operators
is conducted with a specific probability value, as in GA. An improved variant of
SFHA is the Improved SFHA, in which two modifications are introduced. First, the
use of a single mutation process instead of pairwise mutation in order to control the
mutation behavior and improve the local search procedure, and second the use of a
robust-replace heuristic to improve the exploring ability of the algorithm [68, 69].
Although this algorithm is not very commonly used, it is closely related to
topics in the field of production engineering. For example, it was recently applied in
a flow shop scheduling problem [68].

2.8 Shuffled Frog-Leaping Algorithm

The Shuffled Frog-Leaping Algorithm (SFLA) is a metaheuristic algorithm introduced
by Eusuff, Lansey, and Pasha in 2003 [70]. This optimization algorithm is related to
the behavior of a group of frogs, in search for the location with the optimum amount
of food. In order to achieve this goal, there is a continuous information (meme)
exchange between the frogs. This information exchange can lead the population
of frogs to desirable regions. In SFLA, the candidate solutions are represented by
frogs/carriers of memes and design variables are represented by memotypes [71].
Frogs are randomly generated and after their initial evaluation, they are sorted in
descending order and they are divided into groups/communities called memeplexes
[70, 72]. In SFLA, at each iteration, the population consists of several such complexes
(memeplexes), each of them performing a local search [72]. In total, there are m
memeplexes containing n frogs each, for a total population of m * n frogs. Frogs are
assigned to memeplexes in turn, i.e., the first frog to the first memeplex, the second
to the second, up to the mth, after which the next frog is assigned again to the first
memeplex, until all m memeplexes contain n frogs each [73]. After that step, the
local search process is performed for N_s steps
in order to evolve the frogs within the memeplexes, as can be seen in Fig. 2.4; for
that reason, a subset of the memeplex called sub-memeplex is formed, consisting of
q frogs, with q < n [71]. This process aims to prevent the premature convergence in
local minima and enables an efficient local exploration step. The value of q is crucial
as it affects the efficiency of local search and the level of meme transference. The
value of m affects the level of global meme transference and n regulates the variance
of solution inside the memeplex [74].
Fig. 2.4 Memetic evolution at an evolutionary step [71]

Inside a memeplex, information and ideas are exchanged between members with
a view to improving the quality of their memes. Frogs with better memes are allowed
to contribute more to the evolution of new ideas in the community [70] and
so this observation is used for the determination of individuals inside each sub-
memeplex, as the higher ranked frogs have higher probability to be selected [74].
The changes of meme components are related to leaping steps, which lead frogs to
new position. The frog communities are able to evolve separately but after some
iterations, a mixing (shuffling process) between individuals of different communities
is performed in order to increase the quality of memes [70]. Furthermore, inside
each complex, sub-complexes are formed, usually by more than two individuals,
which act as parents and produce new solutions. This process corresponds to the
aforementioned local search process. The parents are ranked and there exists a chance
to produce new offspring originating from them or produce a random offspring. Then,
the offspring can replace the worst individual of the sub-complex [72]. When this
process is finished, evaluation of fitness of all frogs occurs, after which the new
global best is determined.
In fact, after the first generation is created, the individuals of every sub-complex
(sub-memeplex) are evaluated and the best and worst of each sub-complex, denoted
as X_b and X_w, respectively, as well as the global best individual, denoted as X_g,
are determined. Only the worst individual of the sub-complex can change during
each cycle, according to the best of the sub-complex or the global best, but there
is also a possibility that a random solution can be produced. More specifically, the
worst solution is first altered according to the local best; if the result is not
acceptable, it is altered according to the global best, and finally, if no improvement
is observed, a random solution is generated [72]. The improvement
according to local best solution is performed as follows:

D_i = rand() \cdot (X_b - X_w) \quad (2.3)
X_{new,w} = X_{old,w} + D_i \quad (2.4)

In Eqs. 2.3 and 2.4, D_i represents the frog-leaping step size for the i-th frog and lies
in the range [-D_{max}, D_{max}], where D_{max} is the maximum allowed step. rand() is
a random number in the range [0,1], and X_{new,w} and X_{old,w} are the new and previous
solutions for the worst frog, respectively. As mentioned above, if X_{new,w} is not an
improved solution, X_b is replaced in Eq. 2.3 by X_g.
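
This update can be sketched as follows for a minimization problem; the bounds and the maximum step D_max are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(4)

def update_worst_frog(X_w, X_b, X_g, f, lo, hi, D_max):
    """Sketch of the SFLA worst-frog update (Eqs. 2.3 and 2.4, minimization).
    Tries the local best X_b first, then the global best X_g, and finally
    falls back to a randomly generated frog."""
    for leader in (X_b, X_g):
        D = np.clip(rng.random(X_w.shape) * (leader - X_w), -D_max, D_max)
        X_new = np.clip(X_w + D, lo, hi)             # Eqs. 2.3-2.4
        if f(X_new) < f(X_w):                        # accept only improvements
            return X_new
    return lo + rng.random(X_w.shape) * (hi - lo)    # random replacement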
After several cycles are completed, the shuffling process starts [73]. The shuffling
process corresponds to a global information exchange and afterwards sorting of
individuals is conducted and new memeplexes are produced. Finally, the iterative
process ends when the termination criteria are met. Due to the influence of the Memetic
Algorithm and its Swarm Intelligence characteristics, SFLA is considered to benefit
from both types of optimization methods [72].
In [73], the addition of a genetic mutation operator was proposed after the shuffling
process takes place in order to avoid premature convergence of the algorithm and
escape local optima. Furthermore, in [75], some modifications to enhance the speed
of SFLA are proposed, such as the simultaneous update of each frog according to the
local and global best instead of updating only the worst frog, the creation of new
memeplexes not after every iteration, and the replacement of the random update option
for the worst individual by a different strategy. In the work of Mora-Melia et al.
[71], the number of frogs inside the sub-memeplexes is defined more generally, as
a percentage rather than an integer, and leaping steps are allowed to be even larger than
the maximum leaping step in order to enhance the ability of escaping local optima.
For the interested reader, some other variants of the SFLA are reported in [75] and an
analysis of the importance of each parameter of the SFLA is presented in [71]. Some
applications of the SFLA in the field of engineering include long-term generation
maintenance scheduling [72], traveling salesman problem [74], gray project selection
scheduling [76], and multiprocessor scheduling [77].

2.9 Bacteria Foraging Optimization Algorithm

The Bacteria Foraging Optimization (BFO) algorithm is a bioinspired optimization
algorithm based on the foraging behavior of bacteria species such as E. coli
and was proposed by Passino [78]. More specifically, this algorithm is based on the
chemotaxis behavior of this type of bacteria, which enables them to detect nutrients in
their environment and move appropriately towards them, also avoiding regions with
noxious environment. The bacteria can detect nutrients from gradients of chemical
substances and move in response to these chemical stimuli in small steps. Further-
more, communication is also conducted between different bacteria. The search for
nutrients is very important as it serves for maximization of the energy obtained by the
bacteria. Two types of movement of bacteria can be performed by flagella, namely
tumbling and swimming, according to the direction in which the flagella are
rotated [79]. Bacteria tend to tumble frequently in a less favorable environment, but in
a friendly environment, when traveling in a direction with increasing nutrient con-
centrations [80], they move for longer distances and after consuming a sufficient
quantity of food, they can increase in length and produce copies of themselves [79].
Finally, when a sudden change occurs, bacteria may be eliminated or moved to remote
locations [79].
Thus, in this method, an initial population of bacteria-candidate solutions is
randomly generated, and then the individuals
are evaluated. The objective function J is represented by the nutrient concentration
value; the function J represents the “cost” of being in a specific position on the nutri-
ent surface; for J < 0, the bacterium is in a nutrient-rich area, for J = 0 in a neutral
area and for J > 0 in a noxious environment [80].
Four basic processes-steps can be observed in BFO algorithm namely chemotaxis,
swarming, reproduction, and elimination-dispersal, resembling the natural process
of bacteria foraging [79]. It is worth mentioning that some researchers consider
chemotaxis and swarming as a single process [80]. During the chemotaxis step,
bacteria swim, tumble, or alternate between the two processes, i.e., a tumble
is followed by several swimming steps [78, 80]. Tumbling can change the direction
of movement but swimming cannot, so direction is only determined by tumbling.
2.9 Bacteria Foraging Optimization Algorithm 27

Swimming is performed while the current value of J is better than the previous one when
moving in a given direction, up to a maximum number of movements. After tumbling and
swimming operations, the fitness of each bacterium is updated [80]. A special term
in the calculation of J serves for the representation of variations in attractiveness
of regions with different numbers of bacteria and for the prevention of two bacteria
having the same position using attraction and repellent terms [78, 81]. The swarming
step involves the aggregation of bacteria into groups and movement in concentric
paths [79]. Next, the reproduction step involves the creation of the new generation
of bacteria; after the sorting of bacteria according to their fitness, the least healthy
bacteria die and the healthier can split into two individuals and produce new members
of the population in order to keep its size constant [79, 80]. Then, there is a possibility
that gradual or sudden changes may occur to the population of bacteria and thus,
some random bacteria die and get replaced by others inside the search space; this
strategy is used to prevent the algorithm from stopping in local optima [79].
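
A minimal sketch of the chemotaxis step for a single bacterium follows, for a minimization setting; the step size C and the maximum number of swims are illustrative, and the swarming, reproduction, and elimination-dispersal steps are omitted for brevity.

import numpy as np

rng = np.random.default_rng(5)

def chemotaxis_step(theta, J, C=0.1, max_swims=4):
    """Sketch of one BFO chemotaxis step (minimization of cost J).
    theta: current position of the bacterium; C: step size."""
    delta = rng.standard_normal(theta.shape)         # tumble: random direction
    delta /= np.linalg.norm(delta)
    new = theta + C * delta
    swims = 0
    # swim: keep moving in the same direction while the cost improves
    while J(new) < J(theta) and swims < max_swims:
        theta = new
        new = theta + C * delta
        swims += 1
    return theta if J(theta) <= J(new) else new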
For more details on the algorithm, the interested reader is advised to consult a thor-
ough study on the BFO algorithm by Das et al. [79]. In the years after the introduction
of BFO, several modifications to this method have also been proposed. Kasaiezadeh
et al. [82] proposed a spiral bacterial foraging method in order to address deficiencies
of the original BFO such as premature convergence, need to tune algorithm param-
eters, and low speed of convergence. In this approach, gradient-based operators and
a multi-agent structure, which inherently possesses Swarm Intelligence character-
istics, are also employed to ameliorate the efficiency of BFO. Zhao et al. [83, 84]
added a Differential Evolution mutation operator to the chemotaxis step in order
to improve the motion of bacteria, if tumble fails. They also added a chaotic local
search process before the reproduction step, in order to further avoid local optima.
Muñoz et al. [81] noted the complexity of BFO regarding its nested architecture and
various parameters involved and proposed a simplified version of BFO with different
initialization processes and removal of the cell-to-cell communication feature. Finally,
Li et al. [85] proposed a variant of BFO with varying population, which involves
the use of a more detailed model for the description of bacteria foraging process.
Some of the applications of the BFO algorithm in the field of production engineering
include task scheduling in cellular manufacturing systems [80], permutation flow shop
scheduling [84], and job shop scheduling [81].

References

1. Rajkumar R, Shahabudeen P (2009) An improved genetic algorithm for the flowshop scheduling
problem. Int J Prod Res 47:233–249
2. Holland JH (1992) Adaptation in natural and artificial systems: an introductory analysis with
applications to biology, control and artificial intelligence. MIT Press, Cambridge
3. Wang L, Tang D (2011) An improved adaptive genetic algorithm based on hormone modulation
mechanism for job-shop scheduling problem. Expert Syst Appl 38:7243–7250
4. Goldberg DE (1989) Genetic algorithms in search, optimization and machine learning.
Addison-Wesley Longman Publishing Co., Inc., Boston

5. Cantú-Paz E (1998) A survey of parallel genetic algorithms. Calc paralleles, Reseaux Syst
Repartis 10:141–171
6. Zhou A, Qu B-Y, Li H, Zhao S-Z, Suganthan PN, Zhang Q (2011) Multiobjective evolutionary
algorithms: a survey of the state of the art. Swarm Evol Comput 1:32–49
7. Denkena B, Behrens B-A, Charlin F, Dannenberg M (2012) Integrative process chain opti-
mization using a genetic algorithm. Prod Eng 6:29–37
8. Cui W-W, Lu Z, Zhou B, Li C, Han X (2016) A hybrid genetic algorithm for non-permutation
flow shop scheduling problems with unavailability constraints. Int J Comput Integr Manuf
29:944–961
9. Liu C-H (2009) Lot streaming for customer order scheduling problem in job shop environments.
Int J Comput Integr Manuf 22:890–907
10. Woo Y-B, Jung S, Kim BS (2017) A rule-based genetic algorithm with an improvement heuris-
tic for unrelated parallel machine scheduling problem with time-dependent deterioration and
multiple rate-modifying activities. Comput Ind Eng 109:179–190
11. Cho H-M, Jeong I-J (2017) A two-level method of production planning and scheduling for
bi-objective reentrant hybrid flow shops. Comput Ind Eng 106:174–181
12. Nahas N, Nourelfath M, Gendreau M (2014) Selecting machines and buffers in unreliable
assembly/disassembly manufacturing networks. Int J Prod Econ 154:113–126
13. Diabat A, Al-Salem M (2015) An integrated supply chain problem with environmental con-
siderations. Int J Prod Econ 164:330–338
14. Chaudhry SS, Luo W (2005) Application of genetic algorithms in production and operations
management: a review. Int J Prod Res 43:4083–4101
15. Aytug H, Khouja M, Vergara FE (2003) Use of genetic algorithms to solve production and
operations management problems: a review. Int J Prod Res 41:3955–4009
16. Akgündüz OS, Tunalı S (2010) An adaptive genetic algorithm approach for the mixed-model
assembly line sequencing problem. Int J Prod Res 48:5157–5179
17. Huang H, Wang Z (2010) Solving coupled task assignment and capacity planning problems
for a job shop by using a concurrent genetic algorithm. Int J Prod Res 48:7507–7522
18. Toledo CFM, França PM, Morabito R, Kimms A (2009) Multi-population genetic algorithm to
solve the synchronized and integrated two-level lot sizing and scheduling problem. Int J Prod
Res 47:3097–3119
19. Li H, Fang L (2014) Co-evolutionary algorithm: an efficient approach for bilevel programming
problems. Eng Optim 46:361–376
20. Maneeratana K, Boonlong K, Chaiyaratana N (2005) Co-operative co-evolutionary genetic
algorithms for multi-objective topology design. Comput Aided Des Appl 2:487–496
21. Yu B, Zhao H, Xue D (2017) A multi-population co-evolutionary genetic programming
approach for optimal mass customisation production. Int J Prod Res 55:621–641
22. Shukla N, Tiwari MK, Ceglarek D (2013) Genetic-algorithms-based algorithm portfolio for
inventory routing problem with stochastic demand. Int J Prod Res 51:118–137
23. Kucukkoc I, Karaoglan AD, Yaman R (2013) Using response surface design to determine the
optimal parameters of genetic algorithm and a case study. Int J Prod Res 51:5039–5054
24. Storn R, Price K (1997) Differential evolution—a simple and efficient heuristic for global
optimization over continuous spaces. J Glob Optim 11:341–359
25. Gnanavel Babu A, Jerald J, Noorul Haq A, Muthu Luxmi V, Vigneswaralu TP (2010) Scheduling
of machines and automated guided vehicles in FMS using differential evolution. Int J Prod Res
48:4683–4699
26. Peng W, Huang M (2014) A critical chain project scheduling method based on a differential
evolution algorithm. Int J Prod Res 52:3940–3949
27. Neri F, Tirronen V (2010) Recent advances in differential evolution: a survey and experimental
analysis. Artif Intell Rev 33:61–106
28. Nourmohammadi A, Zandieh M (2011) Assembly line balancing by a new multi-objective
differential evolution algorithm based on TOPSIS. Int J Prod Res 49:2833–2855
29. Chen H, Zhou S, Li X, Xu R (2014) A hybrid differential evolution algorithm for a two-stage
flow shop on batch processing machines with arbitrary release times and blocking. Int J Prod
Res 52:5714–5734

30. Wisittipanich W, Kachitvichyanukul V (2012) Two enhanced differential evolution algorithms
for job shop scheduling problems. Int J Prod Res 50:2757–2773
31. Moscato P, Cotta C (2006) A gentle introduction to memetic algorithms. In: Glover FKG (ed)
Handbook of metaheuristics, 1st edn. Springer Science & Business Media, pp 105–144
32. Moscato P (1989) On evolution, search, optimization, genetic algorithms and martial arts:
towards memetic algorithms. Caltech Concurrent Computation Program, C3P Report 826
33. Liu T, Jiang Z, Geng N (2013) A memetic algorithm with iterated local search for the capacitated
arc routing problem. Int J Prod Res 51:3075–3084
34. Wang H, Yang S, Ip WH, Wang D (2012) A memetic particle swarm optimisation algorithm
for dynamic multi-modal optimisation problems. Int J Syst Sci 43:1268–1283
35. Neri F, Cotta C (2012) Memetic algorithms and memetic computing optimization: a literature
review. Swarm Evol Comput 2:1–14
36. Wang Y, Chen Y, Lin Y (2017) Memetic algorithm based on sequential variable neighborhood
descent for the minmax multiple traveling salesman problem. Comput Ind Eng 106:105–122
37. Gao L, Zhang G, Zhang L, Li X (2011) An efficient memetic algorithm for solving the job
shop scheduling problem. Comput Ind Eng 60:699–705
38. Lacomme P, Larabi M, Tchernev N (2013) Job-shop based framework for simultaneous
scheduling of machines and automated guided vehicles. Int J Prod Econ 143:24–34
39. Ting C-K, Liao X-L (2013) The selective pickup and delivery problem: formulation and a
memetic algorithm. Int J Prod Econ 141:199–211
40. Atashpaz-Gargari E, Lucas C (2007) Imperialist competitive algorithm: an algorithm for opti-
mization inspired by imperialistic competition. In: 2007 IEEE congress on evolutionary com-
putation. Singapore, pp 4661–4667
41. Lian K, Zhang C, Gao L, Li X (2012) Integrated process planning and scheduling using an
imperialist competitive algorithm. Int J Prod Res 50:4326–4343
42. Seidgar H, Kiani M, Abedi M, Fazlollahtabar H (2014) An efficient imperialist competi-
tive algorithm for scheduling in the two-stage assembly flow shop problem. Int J Prod Res
52:1240–1256
43. Yousefi M, Yusuff RM (2013) Minimising earliness and tardiness penalties in single machine
scheduling against common due date using imperialist competitive algorithm. Int J Prod Res
51:4797–4804
44. Afshar A, Emami Skardi MJ, Masoumi F (2015) Optimizing water supply and hydropower
reservoir operation rule curves: an imperialist competitive algorithm approach. Eng Optim
47:1208–1225
45. Hosseini S, Al Khaled A (2014) A survey on the imperialist competitive algorithm meta-
heuristic: implementation in engineering domain and directions for future research. Appl Soft
Comput 24:1078–1094
46. Karimi N, Zandieh M, Najafi AA (2011) Group scheduling in flexible flow shops: a hybridised
approach of imperialist competitive algorithm and electromagnetic-like mechanism. Int J Prod
Res 49:4965–4977
47. Bilel N, Mohamed N, Zouhaier A, Lotfi R (2016) An improved imperialist competitive algo-
rithm for multi-objective optimization. Eng Optim 48:1823–1844
48. Moradinasab N, Shafaei R, Rabiee M, Ramezani P (2013) No-wait two stage hybrid flow shop
scheduling with genetic and adaptive imperialist competitive algorithms. J Exp Theor Artif
Intell 25:207–225
49. Behnamian J, Ghomi SMTF (2012) Incorporating transportation time in multi-agent production
network scheduling. Int J Comput Integr Manuf 25:1111–1128
50. Shirzadi S, Tavakkoli-Moghaddam R, Kia R, Mohammadi M (2017) A multi-objective impe-
rialist competitive algorithm for integrating intra-cell layout and processing route reliability in
a cellular manufacturing system. Int J Comput Integr Manuf 30:839–855
51. Simon D (2008) Biogeography-based optimization. IEEE Trans Evol Comput 12:702–713
52. Lin J (2016) A hybrid discrete biogeography-based optimization for the permutation flow shop
scheduling problem. Int J Prod Res 54:4805–4814

53. Rabiee M, Jolai F, Asefi H, Fattahi P, Lim S (2016) A biogeography-based optimisation algo-
rithm for a realistic no-wait hybrid flow shop with unrelated parallel machines to minimise
mean tardiness. Int J Comput Integr Manuf 29:1007–1024
54. Paslar S, Ariffin MKA, Tamjidy M, Hong TS (2015) Biogeography-based optimisation for
flexible manufacturing system scheduling problem. Int J Prod Res 53:2690–2706
55. Mukherjee R, Chakraborty S (2012) Selection of EDM process parameters using biogeography-
based optimization algorithm. Mater Manuf Process 27:954–962
56. Tamjidy M, Paslar S, Baharudin BTHT, Hong TS, Ariffin MKA (2015) Biogeography based
optimization (BBO) algorithm to minimise non-productive time during hole-making process.
Int J Prod Res 53:1880–1894
57. Lin J, Zhang S (2016) An effective hybrid biogeography-based optimization algorithm for the
distributed assembly permutation flow-shop scheduling problem. Comput Ind Eng 97:128–136
58. Yang G-Q, Liu Y-K, Yang K (2015) Multi-objective biogeography-based optimization for
supply chain network design under uncertainty. Comput Ind Eng 85:145–156
59. Rao RV, Savsani VJ, Vakharia DP (2011) Teaching–learning-based optimization: a novel
method for constrained mechanical design optimization problems. Comput Des 43:303–315
60. Shen J, Wang L, Zheng H (2016) A modified teaching–learning-based optimisation algorithm
for bi-objective re-entrant hybrid flowshop scheduling. Int J Prod Res 54:3622–3639
61. Rao RV (2016) Teaching learning based optimization algorithm: and its engineering applica-
tions. Springer International Publishing, Switzerland
62. Črepinšek M, Liu S-H, Mernik L (2012) A note on teaching–learning-based optimization
algorithm. Inf Sci 212:79–93
63. Zou F, Wang L, Hei X, Chen D, Jiang Q, Li H (2014) Bare-bones teaching-learning-based
optimization. Sci World J 136920
64. Zhang W, Zhang S, Guo S, Yang Y, Chen Y (2017) Concurrent optimal allocation of distributed
manufacturing resources using extended teaching-learning-based optimization. Int J Prod Res
55:718–735
65. Tuncel G, Aydin D (2014) Two-sided assembly line balancing using teaching–learning based
optimization algorithm. Comput Ind Eng 74:291–299
66. Nara K, Takeyama T, Kim H (1999) A new evolutionary algorithm based on sheep flocks hered-
ity model and its application to scheduling problem. In: 1999 IEEE international conference
on systems, man, and cybernetics, Tokyo, Japan, pp 503–508
67. Kim H, Ahn B (2001) A new evolutionary algorithm based on sheep flocks heredity model.
In: 2001 IEEE Pacific Rim conference on communications, computers and signal processing,
Victoria, BC, Canada, pp 514–517
68. Chakaravarthy GV, Marimuthu S, Ponnambalam SG, Kanagaraj G (2014) Improved sheep
flock heredity algorithm and artificial bee colony algorithm for scheduling m-machine flow
shops lot streaming with equal size sub-lot problems. Int J Prod Res 52:1509–1527
69. Anandaraman C (2011) An improved sheep flock heredity algorithm for job shop scheduling
and flow shop scheduling problems. Int J Ind Eng Comput 2(4):749–764
70. Eusuff M, Lansey K, Pasha F (2006) Shuffled frog-leaping algorithm: a memetic meta-heuristic
for discrete optimization. Eng Optim 38:129–154
71. Mora-Melia D, Iglesias-Rey P, Martínez-Solano F, Muñoz-Velasco P (2016) The efficiency of
setting parameters in a modified shuffled frog leaping algorithm applied to optimizing water
distribution networks. Water 8:182
72. Samuel GG, Rajan CCA (2014) A modified shuffled frog leaping algorithm for long-term
generation maintenance scheduling. In: Pant M, Deep K, Nagar A, Bansal JC (eds) Proceedings
of the third international conference on soft computing for problem solving, Springer India,
New Delhi, pp 11–24
73. Bhattacharjee KK, Sarmah SP (2014) Shuffled frog leaping algorithm and its application to
0/1 knapsack problem. Appl Soft Comput 19:252–263
74. Luo X, Yang Y, Li X (2008) Solving TSP with shuffled frog-leaping algorithm. In: 2008 eighth
international conference on intelligent systems design and applications, Kaohsiung, Taiwan,
pp 228–232

75. Wang L, Gong Y (2013) A fast shuffled frog leaping algorithm. In: 2013 ninth international
conference on natural computation (ICNC), Shenyang, China, pp 369–373
76. Amirian H, Sahraeian R (2017) Solving a grey project selection scheduling using a simulated
shuffled frog leaping algorithm. Comput Ind Eng 107:141–149
77. Tripathy B, Dash S, Padhy SK (2015) Multiprocessor scheduling and neural network training
methods using shuffled frog-leaping algorithm. Comput Ind Eng 80:154–158
78. Passino KM (2002) Biomimicry of bacterial foraging for distributed optimization and control.
IEEE Control Syst 22:52–67
79. Das S, Biswas A, Dasgupta S, Abraham A (2009) Bacterial foraging optimization algorithm:
theoretical foundations, analysis, and applications. In: Abraham A, Hassanien A-E, Siarry P,
Engelbrecht A (eds) Foundations of computational intelligence volume 3: global optimization.
Springer, Berlin, Heidelberg, pp 23–55
80. Liu C, Wang J, Leung JY-T, Li K (2016) Solving cell formation and task scheduling in cellular
manufacturing system by discrete bacteria foraging algorithm. Int J Prod Res 54:923–944
81. Muñoz MA, Halgamuge SK, Alfonso W, Caicedo EF (2010) Simplifying the bacteria foraging
optimization algorithm. In: IEEE congress on evolutionary computation, Barcelona, Spain, pp
1–7
82. Kasaiezadeh A, Khajepour A, Waslander SL (2014) Spiral bacterial foraging optimization
method: Algorithm, evaluation and convergence analysis. Eng Optim 46:439–464
83. Zhao F, Jiang X, Zhang C, Wang J (2015) A chemotaxis-enhanced bacterial foraging algorithm
and its application in job shop scheduling problem. Int J Comput Integr Manuf 28:1106–1121
84. Zhao F, Liu Y, Shao Z, Jiang X, Zhang C, Wang J (2016) A chaotic local search based bacterial
foraging algorithm and its application to a permutation flow-shop scheduling problem. Int J
Comput Integr Manuf 29:962–981
85. Li MS, Ji TY, Tang WJ, Wu QH, Saunders JR (2010) Bacterial foraging algorithm with varying
population. Biosystems 100:185–197
Chapter 3
Swarm Intelligence-Based Methods

3.1 Introduction

The term “Swarm Intelligence” refers directly to the collective behavior of a group
of animals, which follow very basic rules, or to an Artificial Intelligence approach,
which aims at the solution of a problem using algorithms based on collective behavior
of social animals. For over three decades, several algorithms have been developed
based on the observation of the behavior of groups of animals, such as Particle Swarm
Optimization, which derives from the observation of flocks of birds. Some of the most established
Swarm Intelligence (SI) methods include the Ant Colony Optimization method, the
Harmony Search method, and the Artificial Bee Colony algorithm. From the short
literature survey which was conducted and presented in Chap. 1, it was derived that
Particle Swarm Optimization is by far the most popular method, followed by Ant
Colony Optimization, Artificial Bee Colony, and Harmony Search.
Due to the large number of metaphor-based methods developed in the category
of SI-based methods, several scientists have expressed their concern about the novelty
of some of them, and thus the use of these methods and the derivation of new ones have
also attracted criticism [1]. In the present work, methods which have been employed
at least a couple of times in the field of engineering, as well as their variants, are
presented.

3.2 Particle Swarm Optimization

The Particle Swarm Optimization (PSO) method is a computational method which aims
to determine the optimal values of design parameters for a given problem by repre-
senting the candidate solutions with moving particles, which move within the search
space according to specific velocities and throughout the process they are guided to
the optimum points by better performing individuals. This method was invented by


Kennedy and Eberhart [2]. According to its inventors, this method has ties to artifi-
cial life and swarming theory, but also possesses elements of evolutionary methods.
In fact, this method was created based on sociobiological observations of bird
behavior, and some basic observations were then transformed into an algorithm. Birds
always travel in groups, without collisions between them, and adjust their position
and velocity because doing so reduces the effort of searching for food and appropriate
shelter [2, 3].
The basic concept behind PSO method is that the whole number of particles, rep-
resenting actually the candidate solutions, is displaced in the search space according
to their best known position and the best known position of the swarm in order to
approach the global optimum point. Particles are assumed to have no mass or
volume, and the update of their positions is performed using fundamental equations
from physics with some additional elements in order to ensure that the basic functions
of metaheuristics are performed [4], see Fig. 3.1. It is to be noted that PSO and SI
methods differ from other population-based algorithms such as Genetic Algorithms
and other Evolutionary-Based Algorithms because generally the total number of
population members does not change, but information is shared between members,
so as they can be directed towards the optimum point [5].
At first, a randomly initialized population of n particles-candidate solutions, which
can move in the multidimensional search space with velocity updated according to
personal and global best values, is generated [5]. The number of particles is problem

Fig. 3.1 Swarm updates location in each iteration [7] (reproduced with permission)

dependent and usually no rules are used to determine it [3]. A local search may also
be applied to a group of particles to enhance the exploitation capabilities [5].
The particle position (X_i) and velocity (V_i) are updated as follows:

V_i^{t+1} = w V_i^t + c_1 \cdot rand() \cdot (P_i^t - X_i^t) + c_2 \cdot rand() \cdot (P_g^t - X_i^t) \quad (3.1)
X_i^{t+1} = X_i^t + V_i^{t+1} \quad (3.2)

As can be seen from Eq. 3.1, three different terms are applied simultaneously
in order to determine the new position of a particle [6, 7]. The first term of Eq. 3.1
represents the current speed of the particle (inertia of the movement) and can be
balanced by a weight coefficient w. The second term represents the cognition term,
which gives the swarm the ability to search the whole search space and avoid local
minima, based on each particle's own previous best value P_i^t, or experience [3].
The third term is called the social term and reflects the information shared between
the swarm and the particles, leading towards good solutions [8]. The two constants
in the last two terms, i.e., c_1 and c_2, regulate the effect of the personal and global
best values P_i^t and P_g^t on the outcome and are also called acceleration factors [3, 6].
Finally, the rand() functions return a number in the range [0,1]. The average velocity
is shown to decrease when approaching a good solution and is larger for a large-scale
problem [3].
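
A minimal sketch of the standard PSO loop implementing Eqs. 3.1 and 3.2 follows; the swarm size, inertia weight, and acceleration factors are illustrative choices.

import numpy as np

rng = np.random.default_rng(6)

def pso_minimize(f, bounds, n=30, w=0.7, c1=1.5, c2=1.5, iters=200):
    """Sketch of standard PSO (Eqs. 3.1 and 3.2, minimization).
    bounds: NumPy array of shape (dim, 2) with lower/upper bounds."""
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    X = lo + rng.random((n, dim)) * (hi - lo)            # particle positions
    V = np.zeros((n, dim))                               # particle velocities
    P = X.copy()                                         # personal best positions
    p_fit = np.array([f(x) for x in X])
    g = P[p_fit.argmin()]                                # global best position
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        V = w * V + c1 * r1 * (P - X) + c2 * r2 * (g - X)   # Eq. 3.1
        X = np.clip(X + V, lo, hi)                          # Eq. 3.2
        fit = np.array([f(x) for x in X])
        better = fit < p_fit
        P[better], p_fit[better] = X[better], fit[better]
        g = P[p_fit.argmin()]
    return g, p_fit.min()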
As PSO is the oldest and most popular SI method, several modifications and numer-
ous variants have been introduced since its invention to address some of its shortcom-
ings [8]. For example in [6], an enhanced PSO method including mutation, crossover,
and shift operators to help the method to escape more easily from local optima is
presented. For the interested reader, more detailed information on PSO method, its
variants, and general applications is presented in [4, 9]. Some of these applications
concerning manufacturing engineering are single machine tardiness problem [5],
process planning optimization [10], no-wait flow shop problem [11], flexible flow
shop problem [12], assembly sequence planning and assembly line balancing [13],
and truck scheduling cross-dock [14].

3.3 Artificial Bee Colony Method

The Artificial Bee Colony (ABC) method is another foraging behavior-inspired
optimization method. In fact, it originates from the observation of the behavior of honey bees
and was proposed about a decade ago, in 2005, in a technical report by Karaboga
[15]. As an algorithm, it can also be categorized in SI methods as it involves the
cooperation between individuals for a common purpose. Specifically, honey bees act
collectively in order to collect their food from places at a distance from their
beehive. As in real life, several types of honey bees are employed in this method,
each of which performs a different job for the bee colony.

A forager bee, initially called unemployed, flies away from the beehive in order
to find and evaluate food sources. Food sources are evaluated based on their distance
from the beehive, their nutrient value and taste, and the degree of difficulty for their
extraction. The forager bee associated to a food source is called an employed bee and
transfers information about food sources back to the beehive and informs the other
bees about these sources. Moreover, a percentage of the bees are either searching
for food source randomly (scout) or use the information from the employed bees to
find a suitable food source (onlooker). A very particular fact about bees is that the
exchange of information is conducted in the form of a “dance”, which is dependent
on the quality of food sources. After returning to the beehive, an employed bee can
attempt to recruit new foragers by dancing and return to the same food source,
continue to forage with the recruited bees, or stop foraging [16].
In this algorithm, candidate solutions (feasible solutions) are represented by the
position of food sources and the evaluation of the candidate solution is represented
by the evaluation of the nectar amount of each food source, i.e., the quality of food
source [16]. The number of employed bees or the onlooker bees is equal to the
number of candidate solutions. Then, it is assumed that half of the population of bees
are employed and half of them are onlookers [17]. These two types of bees carry out
the exploitation process, as it will be described afterwards [16, 18]. Initially, food
sources are randomly distributed in the search space, according to Eq. 3.3:
 
x_{ij} = x_j^{min} + (x_j^{max} - x_j^{min}) \cdot r \quad (3.3)

In the last equation, i = 1,…, NP and j = 1,…, D, where NP represents the number of
food sources and D the number of decision variables. The term r represents a random
number in the range 0–1. At each step of the algorithm, the search for better solutions
is conducted by means of three food search processes by employed, onlooker, and
scout bees. Employed bees choose a modified food source position, slightly different
from the one stored in their memory, and evaluate it; if it has a higher amount of
nectar, it is chosen as the new food source and is memorized by the bee, while the old
one is abandoned [16]. The route of a bee foraging for nectar can be seen in Fig. 3.2. The
neighborhood solutions for this phase of the algorithm are produced as follows:
 
v_{ij} = x_{ij} + \phi_{ij} \cdot (x_{ij} - x_{kj}) \quad (3.4)

In Eq. 3.4, j is a randomly chosen integer in the range 1–D and k ∈ {1, …,
NP} is the index of a random food source, different from food source x_i. The term
\phi_{ij} is a random number in the range [-1,1] and v_{ij} is the modified solution in the
neighborhood of x_{ij}. After employed bees have completed their work, they share their information
with the onlookers, which evaluate all current food sources and choose one of them
according to a probability value, dependent on the fitness of each food source. The
onlookers also modify the food source position to produce a new one and choose
it, if it is better than the previous one. At last, the scout bees are sent to find new
food sources randomly, which is implemented by a random selection process and

Fig. 3.2 Bee route in the nectar foraging procedure [18] (reproduced with permission)

serves for diversification purposes [16, 19]. These new random food sources replace
abandoned food sources, i.e., food positions which were not improved after some
iterations. This is implemented in the algorithm as follows:

x_i^j = x_{min}^j + rand() \cdot (x_{max}^j - x_{min}^j) \quad (3.5)

In Eq. 3.5, x_i^j is the new food source to replace the abandoned x_i and j is a random
integer, in the range 1–D. It is to be noted that the determination of the abandoned
food sources is implemented using a counter, which increases each time an employed
bee cannot find any good modified solution, and it is set to zero once a better solution
is found [19]. Moreover, initially, the ABC algorithm allowed only one scout bee per
generation [20]. These steps are performed in the same way until termination criteria
are met. Basic parameters of the ABC algorithm are the number of food sources, the

number of members of each bee type [21], the maximum number of iterations, and the
“limit” variable, which determines the abandonment of a food position.
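
For illustration, a sketch of the employed-bee phase implementing Eq. 3.4 follows, for a minimization problem; the trial-counter handling for the "limit" mechanism is included, and all names and parameter choices are illustrative.

import numpy as np

rng = np.random.default_rng(7)

def employed_bee_phase(foods, fit, trials, f, lo, hi):
    """Sketch of the ABC employed-bee phase (Eq. 3.4, minimization).
    foods: (NP, D) array of food sources; trials: abandonment counters."""
    NP, D = foods.shape
    for i in range(NP):
        j = rng.integers(D)                              # random dimension to modify
        k = rng.choice([m for m in range(NP) if m != i]) # a different food source
        v = foods[i].copy()
        phi = rng.uniform(-1.0, 1.0)
        v[j] = np.clip(foods[i, j] + phi * (foods[i, j] - foods[k, j]),
                       lo[j], hi[j])
        if f(v) < fit[i]:                                # greedy selection
            foods[i], fit[i] = v, f(v)
            trials[i] = 0                                # reset the "limit" counter
        else:
            trials[i] += 1                               # one step closer to abandonment
    return foods, fit, trials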
As for applications of ABC to industrial engineering, there exist several to prob-
lems such as job shop scheduling [22], capacitated vehicle routing problem [23],
multi-factory parallel machine problem [24], two-machine flow shops [25], and job
shop scheduling problem [26].

3.4 Ant Colony Optimization Algorithm

Ant Colony Optimization (ACO) constitutes one of the most popular SI metaheuris-
tics. It was originally proposed by Dorigo in his Ph.D. thesis [27]. Ants constitute
a social species and have determined their foraging behavior after millions of years
of evolution. Specifically, they are able to communicate with other ants by the
deposition of chemical substances as they travel to find food in order to guide them
and effectively pass around every object that lies in their path, thus determining the
shortest path in every case. This type of behavior of some species, which involves a
self-organization method without need of direct communication, is termed as stig-
mergy in the field of biology.
In this method, the candidate solutions are represented by ants, which wander in
a specific area around their nest, actually the search space, and seek for food. Ants
create different paths on the ground when searching for food in various directions
[28]. When they find their food they travel back to their nest leaving a substance
called pheromone along their path [29]. When other ants are near this path they may
get attracted by the pheromone trails and follow this path. An important factor for
determining the best paths is the attractive strength of pheromone in a path. If this
path is shorter it is more frequently traversed by ants and thus pheromone levels are
high, attracting more other individuals. On the contrary, if the path is longer, the ants
require more time to travel to the food source and pheromone levels are reduced due
to evaporation. This evaporation mechanism is essential to the method as it is directly
related to the diversification capability and can lead the method to evade local optima
[28].
The algorithm is implemented based on natural observations. Initially, a population of ants is randomly distributed in a graph and an initial pheromone value for each edge is defined [30]. Usually, the initial value of pheromone is very low and almost equal
for each path. More specifically, for each ant, a starting node is randomly selected
and the ant moves to the next nodes until a tour is completed. The movements are
conducted according to a transition rule [31] and local information on the nodes. In
fact, moves are conducted based on a probability function dependent on the amount
of pheromone in an edge and the desirability of this path. The desirability can be
computed by a heuristic and is defined in a different way according to each problem.
For example, for the case in which short edges are desired, it is defined as the inverse
of the length of each path [32]. When a path is completed, it represents a potential
solution to the problem, which is subsequently evaluated [31]. For each edge/path
crossed, the pheromone level is updated based on the fitness of the candidate solution
corresponding to the path and the process continues until the termination criteria are
met [30]. In Fig. 3.3, pheromone trails development and the eventual dominance of
the pheromone level of the shortest path are shown for various different cases.
Two additional features are present in the ACO method, namely pheromone trail
evaporation and daemon actions [31]. Pheromone trail evaporation helps to avoid
premature convergence of the algorithm, as it allows ants to follow new paths, while daemon actions can act as an alternative to pheromone evaporation. Daemon actions are generally optional processes that cannot be carried out by individual ants and require a rather centralized way of application; an example of these actions is a local search
procedure. Important parameters of the method include the number of ants, maximum
number of iterations, relative importance of pheromone, and evaporation rate.
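As an illustration of the transition rule and the evaporation mechanism described above, the sketch below uses the random-proportional rule commonly found in the ACO literature, in which the probability of an edge grows with its pheromone level τ and its heuristic desirability η. The data structures and parameter values are illustrative assumptions, not the formulation of any specific variant.

```python
import random

def choose_next_node(current, unvisited, tau, eta, alpha=1.0, beta=2.0):
    """Pick the next node with probability proportional to
    tau[i][j]**alpha * eta[i][j]**beta (roulette-wheel selection)."""
    weights = [(tau[current][j] ** alpha) * (eta[current][j] ** beta)
               for j in unvisited]
    r, acc = random.random() * sum(weights), 0.0
    for j, w in zip(unvisited, weights):
        acc += w
        if acc >= r:
            return j
    return unvisited[-1]

def evaporate(tau, rho=0.5):
    """Pheromone evaporation: tau <- (1 - rho) * tau on every edge."""
    for row in tau:
        for j in range(len(row)):
            row[j] *= (1.0 - rho)
```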
As ACO method is one of the earliest metaheuristics, several modifications have
been proposed since its invention, aiming at the improvement of its performance.
In the work of Adubi and Misra [29], four main variants of the ACO method are
presented. In the work of Wong et al. [33], a modification for the ACO method is
proposed, in which a two-stage procedure is considered. Finally, for a more compre-
hensive review of the ACO method, the interested reader can consult works in the
relevant literature such as [31, 34, 35].
As the ACO method originates from a metaphor concerning the determination of the shortest path, it is very popular for various scheduling problems which occur in production engineering, such as vehicle routing [36] and course timetabling. More
specifically, some of the problems solved with ACO are job shop scheduling [37,
38], process planning and scheduling [33], makespan minimization on parallel batch
processing machines [39], flexible manufacturing problems [40], and safety stock
placement [41].

Fig. 3.3 Pheromone trails for a undisturbed route from nest to food source, b introduction of obsta-
cle, c ants on two different routes and d eventual dominance of the shortest route [32] (reproduced
with permission)

3.5 Intelligent Water Drops Algorithm

The Intelligent Water Drops (IWD) algorithm is a metaheuristic inspired by water drops and the flow of rivers according to their environment. This algorithm was
proposed by Shah-Hosseini [42]. Usually, rivers are flowing from a higher point, until
they join the sea; their paths contain several turns according to the topography of the
regions crossed. Water drops in a river are moving mainly due to gravitational force,
but their path is also modified by the presence of various obstacles such as rocks;
however, in any case, water can find the optimum path by various mechanisms such
as removing soil from the river bed. Based on these basic observations of water flow
in rivers, the IWD algorithm was implemented with a view to navigate efficiently
through graphs representing optimization problems and eventually determine the
optimum path for each case.
For the IWD algorithm, two basic properties are considered: the velocity of the
water drops and the amount of soil in a path [42]. Higher velocity can lead to larger
quantities of soil removed and the amount of soil can affect water velocity and thus,
lead to reduction or increase of water velocity in a path [43]. Water drops move in
finite length steps called paths and after they have crossed a path, their velocity, as
well as the amount of soil in the path, is updated. The more preferable paths are those
containing lower amount of soil, as they are easier to be traversed.
The two properties of the IWD vary throughout the iterative process; the velocity increases in a nonlinear way with the inverse of the soil amount between two locations, and the soil carried by the drop increases as soil is removed from a path, in a way correlated to the inverse of the time needed for the IWD to move from a location to the next following
discrete steps [42, 44]. The ultimate goal is to find the best path from a source to a
destination. Thus, at first the problem is represented appropriately by a graph structure
and parameters of the algorithm are initialized, as well as initial soil values for the paths and, for each water drop, a velocity value, a soil content value, and a visited node list [42]. The
number of IWD is usually assumed equal to the number of nodes [44]. Then, each
IWD is randomly placed on a node and the destination node is selected according to
a probability value, which is dependent on soil amount of the path and according to
the visited nodes list [42, 44]. Generally, the IWD prefers paths with low soil content
[44]. Afterwards, velocity of IWD is updated, the amount of soil removed from the
path is computed, and soil content of the path and the IWD is calculated [42]. These
steps are repeated until a tour is completed by the IWD and the length of the total
path is estimated. Before the next iteration starts, the soil content of the edges forming the minimum-length path is also updated, and the optimization ends after the termination criteria are met. This algorithm bears some resemblance to the ACO algorithm, in
which the ants move at paths, and based on the pheromone that they leave behind,
optimum paths are determined [44].
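A minimal sketch of one IWD transition, following the qualitative rules above (velocity grows nonlinearly with the inverse of the path soil; the soil removed depends on the travel time), is given below. The specific constants av, bv, cv, as, bs, cs and the update weight rho are illustrative assumptions.

```python
def iwd_move(vel, soil_path, distance,
             av=1.0, bv=0.01, cv=1.0, as_=1.0, bs=0.01, cs=1.0, rho=0.9):
    """One IWD move over a path: update the drop velocity, compute the
    soil removed and return the updated path soil (constants illustrative)."""
    vel_new = vel + av / (bv + cv * soil_path ** 2)    # velocity update
    time = distance / vel_new                          # time to cross the path
    delta_soil = as_ / (bs + cs * time ** 2)           # soil removed from path
    soil_path_new = (1.0 - rho) * soil_path - rho * delta_soil
    return vel_new, soil_path_new, delta_soil          # the drop collects delta_soil
```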
Several modifications were proposed for the IWD algorithm by Niu et al. [43].
More specifically, the initial amount of soil is set to be different for every path, the second node for the path of the IWD is chosen not only according to the amount of soil but according to a modified conditional probability, bounds are set on the
soil content update, elite IWDs are defined to widen the global update process,
and local search process is added, as well. Furthermore, Alijla et al. [45] proposed
modifications to the selection process of the second node by replacing the fitness
proportionate selection method with two other ranking methods. IWD method has
been applied to some engineering-related problems in job shop scheduling [43, 46].

3.6 Harmony Search Algorithm

Harmony Search (HS) algorithm is a metaheuristic, proposed by Geem et al. [47], which is inspired by music and specifically by the harmony achieved by perfect
melodies, containing the appropriate notes. More specifically, the process described
by this algorithm resembles the way jazz musicians improvise in order to obtain a high
aesthetic standard. Three basic characteristics of a musical instrument affect its quality: the pitch (frequency), the timbre, which is related to the harmonic content and specifically the waveform modulations, and the amplitude, which is related to the level of sound [48]. The use of different notes in the composition alters
these characteristics and produces a different level of aesthetic standard.
By conducting similar observations, the HS algorithm was established. In this
method, the candidate solution is represented by a harmony composed of notes or
pitch of a musical instrument, which are the design variables. The perfect harmony
can be determined by three basic mechanisms. The first one involves the usage of a
harmony memory list in order to preserve the best solutions and the random selection
of components from existing solutions to create a new one. The second one is the
pitch adjustment, which is employed to vary slightly a solution, eventually evading
local optima [47] and is similar to the mutation operator of the GA. Finally, the
third mechanism is randomization, namely generation of random solutions within
the acceptable range, which aims to increase the diversity of the solutions and conduct
an exploration of a larger portion of the search space [48, 49].
After initial harmonies are generated and the harmony memory is created, often having a size of 50–100 solutions [50], a new solution may be produced by using one of the three mechanisms [48]. If the new harmony is better than the previous one, it is accepted into the harmony memory and the worst harmony is excluded from the memory. The aesthetic standard of the harmony represents the
objective function and it is used to evaluate the harmony. The basic parameters of the
algorithm are the harmony memory size, i.e., the size of the memory, which contains
solution vectors, the pitch adjusting rate (PAR), pitch limits and finally, bandwidth
and harmony memory acceptance rate (HMAR). HMAR and PAR are considered as
the most important parameters and affect also the speed of convergence. Among the
deficiencies of HS is the relatively weak local search ability [49].
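The improvisation of a single new harmony from the three mechanisms (memory consideration, pitch adjustment, randomization) can be sketched as below; the HMAR, PAR and bandwidth values used here are illustrative assumptions.

```python
import random

def improvise(memory, lo, hi, hmar=0.9, par=0.3, bw=0.05):
    """Build one new harmony variable by variable: draw from the harmony
    memory with probability HMAR (pitch-adjusted with probability PAR,
    within bandwidth bw), otherwise pick a random value in range."""
    new = []
    for j in range(len(lo)):
        if random.random() < hmar:                 # memory consideration
            value = random.choice(memory)[j]
            if random.random() < par:              # pitch adjustment
                value += random.uniform(-bw, bw)
        else:                                      # randomization
            value = random.uniform(lo[j], hi[j])
        new.append(min(max(value, lo[j]), hi[j]))  # keep within pitch limits
    return new
```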
In the work of Gao et al. [50], some of the variants of HS method were discussed.
In the work of Mahdavi et al. [51], the use of varying PAR and bandwidth values was
proposed, as their values need to be different at different stages of the optimization
process. For more details on HS method, a review study is presented in the work of
Wang et al. [49] and a thorough review and analysis of this method is also presented
in the work of Manjarres et al. [52].
Concerning engineering, several applications of the HS method can be observed
in the relevant literature with subjects such as maximization of production rate and
workload balancing in assembly line [53], single machine scheduling problem [54],
supply chain, routing [55], and medium-term sales forecasting [56].

3.7 Firefly Algorithm

This algorithm is inspired by the communication between fireflies using flash signals,
termed generally as the social behavior of fireflies [57]. The two basic applications
of flash signals are to attract partners or potential prey or even act as a warning for
predators [58]. It was proposed by Yang [59] and since then, several modifications
have been proposed. This algorithm functions in a similar way to standard SI methods such as the PSO algorithm and has been mostly employed for continuous optimization problems, although works on discrete optimization problems exist as well [57, 60]. The candidate solutions are represented by a population of fireflies,
which move towards more attractive fireflies, with higher levels of light intensity
[61]. By observing the basic characteristics of firefly behavior, a metaheuristic was
constructed.
More specifically, in the initial formulation of firefly metaheuristic, the first step
involves the random generation of an initial population of fireflies and the definition
of initial parameter values [61]. The basic parameters of the firefly algorithm are the attractiveness $\beta_0$, the light absorption coefficient $\gamma$, and the randomization parameter $\alpha$ [57]. Then, the fireflies can move, updating their current position based on three basic rules [60–62]:
1. Fireflies are assumed to be unisexual, so the attraction does not depend on their
sex.
2. The brighter fireflies attract the less bright ones, but if there is no brighter firefly
a random move within the search space is performed.
3. The brightness of a firefly is directly correlated to the objective function values.
Consequently, at every iteration, the update of a firefly position ($X_i$) is conducted as follows:

$X_i^{t+1} = X_i^t + \beta_0 e^{-\gamma r_{ij}^2}\left(x_j^t - x_i^t\right) + \alpha\left(rand - \tfrac{1}{2}\right)$ (3.6)

The first term of Eq. 3.6 represents the current position of the firefly, the second
term represents the attraction mechanism, and the third represents a random walk.
The form of this equation shows that both movement towards the better solution and
random move are conducted at the same time. Randomization parameter values can affect the convergence to a great extent, as large values of $\alpha$ increase the exploration capabilities of the algorithm and low values of $\alpha$ increase the exploitation capabilities of the algorithm. In order to compute the distance between fireflies ($r_{ij}$), Cartesian
distance is usually employed [61]. From Eq. 3.6, it can be seen that attractiveness is
correlated to distance as light intensity decreases exponentially with the distance and
attractiveness is considered proportional to the light intensity. In some formulations of the Firefly Algorithm, the third term is written as $\alpha \varepsilon_i$, with $\varepsilon_i$ being a random vector derived from a Gaussian or a Lévy distribution [62–64].
The quality of the candidate solutions/fireflies is assessed by the values of the
objective function, which represents the desirability of fireflies according to their
brightness. So, each firefly is compared to another firefly and if it is less bright,
updates its position, otherwise it does not move. After this process is applied to all
fireflies, evaluation is performed again and new brightness levels are computed and
the process continues until termination criteria are met.
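A direct transcription of Eq. 3.6 into Python might look as follows; it is a sketch only, with the values of β0, γ and α taken as illustrative assumptions.

```python
import math
import random

def move_firefly(xi, xj, beta0=1.0, gamma=1.0, alpha=0.2):
    """Move firefly i towards the brighter firefly j (Eq. 3.6): attraction
    decays exponentially with the squared distance, plus a random walk."""
    r2 = sum((a - b) ** 2 for a, b in zip(xi, xj))  # squared Cartesian distance
    beta = beta0 * math.exp(-gamma * r2)            # distance-dependent attractiveness
    return [a + beta * (b - a) + alpha * (random.random() - 0.5)
            for a, b in zip(xi, xj)]
```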
A disadvantage of the original formulation of the firefly algorithm is that the randomization parameter $\alpha$ is assumed constant for all iterations. It has been reported that this assumption can lead to local optima instead of the global optimum, as the similarity between fireflies increases after a number of iterations. This is anticipated, as the definition of $\alpha$ does not take into account each firefly separately and thus it cannot be changed dynamically according to each phase of the optimization process. Furthermore,
analysis of the trajectories followed by the fireflies has shown that the global best
firefly acts as the attractor of the whole population. Thus, a good strategy to avoid
potential trapping in local optima is for a firefly to have separate searching ability
[61]. So, it was proposed that this parameter should be varied by a stepwise strategy
as follows:
 
$\alpha_i(t+1) = \alpha_i(t) - \left(\alpha_i(t) - \alpha_{min}\right) \cdot e^{-\left(\left|X_{gbest}(t) - X_{i,best}(t)\right| \cdot \left(t/maxiter\right)\right)}$ (3.7)

From Eq. 3.7, it can be seen that the value of the randomization parameter depends on both the personal ($X_{i,best}$) and global ($X_{gbest}$) best positions and the number of iterations. In another approach, $\alpha$ values are allowed to vary between a lower and an upper value, according to the number of iterations performed [58]. Other variants, which aim to
enhance the algorithm’s capabilities, include the use of clusters of fireflies instead of
a unified population [63] or the use of chaotic maps for tuning parameters γ and β
[64]. Finally, a detailed review of the various firefly algorithm variants can be found
in [62]. As far as optimization problems in engineering are concerned, several works
conducted with the aid of Firefly algorithm are reported on subjects such as flow
shop problem [57], job shop scheduling problem [58], manufacturing cell formation
[60], vehicle routing problem [65], and cross-docks scheduling [66].

3.8 Cuckoo Search

Cuckoo Search (CS) algorithm is inspired by the breeding behavior of some species
of cuckoos, which lay their eggs in other species’ nests and was proposed as an
optimization algorithm by Yang and Deb [67]. Since then, the algorithm has achieved
popularity, particularly in the engineering field. Cuckoos often adopt an aggressive reproduction strategy, as they choose to lay their eggs in the nests of other birds and
even remove other eggs from the nests to increase the probability of their offspring
being hatched. Moreover, through the course of their evolution, some species have evolved in such a way that their eggs resemble the host species' eggs and their chicks can even imitate the sound of the host chicks. However, there is always a possibility that the host species throws the cuckoo eggs out of the nest or abandons the nest [67]. This natural behavior was transformed into a metaheuristic capable of producing solutions to hard optimization problems.
At first, an initial population consisting of host nests is randomly generated within
the search space and then the iterative process starts. The process of CS optimization
algorithm is based on several fundamental rules. Each cuckoo can deliver a single
egg at each moment and dump it in a random nest. The total number of nests is fixed
during the process and a probability value determines whether the cuckoo egg will
be discovered by the host bird, which can either throw this egg away or abandon this
nest and build a new one. This rule implies that a percentage of the nests is being
replaced by new ones [67]. Moreover, the nests with the highest quality eggs will be
carried over to the next generations, something that is similar to the elitism feature
of other metaheuristics such as GA.
The eggs in the nest represent the candidate solutions, which can be potentially
replaced by new ones, namely the cuckoo eggs. The objective function is used to
evaluate the quality of nests [67]. In the original version of the algorithm, one of
the nests, except the best one, is selected randomly at each iteration [68]. For the
generation of new solutions, a Lévy flight, imitating the flight behavior of birds when searching for food according to the Lévy distribution, is conducted to ensure that solutions are not close to each other and can reach the global extrema [67, 69].
After a new solution is created, it is compared to the solution of the randomly selected
nest and, if it is better, it takes its position. Furthermore, a percentage of host nests
is abandoned and new nests are built using also Lévy flights [67, 70]. As aforemen-
tioned, the best solutions are kept and the last step of each iteration involves ranking
of existing solutions.
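One common way to produce the Lévy-distributed steps mentioned above is Mantegna's algorithm, sketched below; the step scale and the bias towards the current best solution are illustrative assumptions rather than the exact formulation of [67].

```python
import math
import random

def levy_step(beta=1.5):
    """Draw a Lévy-distributed step length using Mantegna's algorithm."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0.0, sigma)
    v = random.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

def cuckoo_egg(x, best, scale=0.01):
    """Generate a new candidate (cuckoo egg) around x via a Lévy flight,
    here biased by the distance to the current best solution."""
    return [xi + scale * levy_step() * (xi - bi) for xi, bi in zip(x, best)]
```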
The advantage of this method is that it can be easily implemented as it requires
only a couple of parameters [67]. A modified version of CS method involves the
replacement of all nests, except for the best one, by new solutions at each step using
Lévy flight and a different strategy concerning the replacement of worse nests [68].
For the interested reader, a comprehensive review of CS is presented in [71]. CS
method has been already applied to several industry-related problems such as selec-
tion of optimal machining parameters [69], reliability problems [72], and allocation
problems [73].

3.9 Fruit Fly Optimization Algorithm

Fruit Fly Optimization (FFO) algorithm is another metaheuristic for finding the global
optimum based on the food search conducted by fruit flies. It was proposed by Pan
and was applied for the first time in financial test cases in 2012 [74]. In this algorithm,
the search for the optimum point is conducted by fruit flies, which search for the ideal
food location and according to the results of their search, guide the whole swarm
towards favorable points in the search space. Fruit flies are capable of smelling food
from remote locations around the swarm central location, even at 40 km distance
[74, 75] and afterwards fly towards this location [76]. Thus, for the search process,
both smell (“osphresis”) and vision capabilities of fruit flies are utilized [74].
Initially, the fruit fly swarm is randomly generated and for each fruit fly, a random
direction is generated in respect to the initial position of the swarm. This is calculated
according to the previous swarm location and random values. Then, the smell foraging
search can take place, during which the distance between the food source location
and the origin is computed to find the smell concentration judgment value. Finally,
smell concentration is determined according to the objective function values [74].
If the random direction can be determined by $X_i$ and $Y_i$, the smell concentration judgment value can be calculated as follows:

$Dist_i = \sqrt{X_i^2 + Y_i^2}$ (3.8)
$S_i = 1/Dist_i$ (3.9)
$Smell_i = f(S_i)$ (3.10)

In the last equation, $f(S_i)$ is termed as a smell concentration judgment function, which can represent the objective function of the problem. After every fruit fly has
terminated the smell search process and all positions are evaluated, the swarm can
move from its location to the position of the best food source with the highest smell
concentration, with the aid of vision-based search process and the iterative procedure
is repeated until termination criteria are met. This algorithm is one of the easiest to
be implemented due to the fact that it does not contain many parameters [74].
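Equations 3.8–3.10 translate almost directly into code; the sketch below performs one smell-based probe around the swarm location, with the search radius taken as an illustrative assumption.

```python
import math
import random

def smell_probe(swarm_x, swarm_y, objective, radius=1.0):
    """One smell-based probe: random direction around the swarm location,
    distance to the origin (Eq. 3.8), judgment value S (Eq. 3.9) and
    smell concentration (Eq. 3.10)."""
    xi = swarm_x + random.uniform(-radius, radius)
    yi = swarm_y + random.uniform(-radius, radius)
    dist = math.sqrt(xi ** 2 + yi ** 2)   # Eq. 3.8
    si = 1.0 / dist                       # Eq. 3.9
    return xi, yi, objective(si)          # Eq. 3.10: smell concentration
```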
The simplest form of FFO is shown to possess several drawbacks: initialization is completely random, as is the smell-based search, and the lack of an effective local search is also noted. So, it was proposed that the initialization
should be partially conducted using a special heuristic search method and partially
in a random way. A neighborhood search was added to improve the search ability
and the exploration capability of the algorithm. A probability value was also added
for local search, in order to improve the exploration capability of the algorithm, and
an update criterion was defined for the vision-based search process [74]. In another
approach, a knowledge base built according to the experience of elite fruit flies and
the definition of multiple sub-swarms was proposed to increase effectiveness of FFO
[76]. Furthermore, Mitić et al. [77] proposed the introduction of chaotic maps to
FFO, by adding a chaotic parameter a to the smell-based search step, which enables
the fruit fly to move towards good solutions in a chaotic way. Finally, some other
variants can be found in the work of Xing and Gao [75].
Concerning production-related problems, a few are reported to be solved with the aid of the FFO method, such as blocking flow shop scheduling [78] and flexible job shop scheduling [76].

3.10 Hunting Search Algorithm

Hunting Search (HuS) is a metaheuristic algorithm, which imitates the behavior of a herd of animals when trying to catch their prey. This algorithm was first proposed
by Oftadeh and Mahjoob [79]. Animals hunting in herds, such as lions or wolves,
often tend to encircle their prey and eventually catch it, after ensuring that it is not
possible to evade the encirclement [79, 80]. Using this cooperative way of hunting,
they can catch even bigger animals [81]. The hunting animals move also according
to their relative positions with other hunters and if their prey manages to escape,
they attempt to organize the encircling movement again [79]. The same approach,
with three basic steps, namely moving, correcting, and repositioning, is followed in
the Hunting Search algorithm in order to effectively determine the optimum solution
[81].
At first, the positions of the hunters, stored in the hunting group matrix, are
initialized randomly and evaluated according to the objective function. Each hunter
represents a candidate solution of the optimization problem [79]. The important
numerical parameters of the algorithm are [80]: number of epochs (NE), iterations per
epoch (IE), hunting group size (HGS), maximum movement toward leader (MML),
position correction rate (PCR), hunting group consideration rate (HGCR), distance
radius (Ra), and reorganization parameters (α, β). In the original version of the
algorithm, only the HGS, MML, HGCR, and PCR parameters were defined.
The hunters are always moving towards the leading hunter, which is nearest to the
prey, representing the lowest value of fitness function. This movement is regulated
by a random variable and a maximum movement value, as follows:
 
$x_i' = x_i + rand \cdot MML \cdot \left(x_i^L - x_i\right)$ (3.11)

In Eq. 3.11, rand is a random number in the range 0–1, $x_i$ is the previous position of a hunter, $x_i'$ is the new position of the hunter, and $x_i^L$ is the position of the leader. In
contrast to the real case, the hunters do not know, in advance, the optimum solution
but rather follow the current best solution dynamically. If the movement towards the
leader is not successful and a worse point is found, the hunter returns to its previous
position [79]. Positions are also corrected according to the relative positions of each
hunter in order to ensure that they cooperate better and hunt more efficiently [79,
80]. This movement is regulated by the HGCR and PCR parameters or HGCR and
Ra parameters in a later version of the algorithm [80], which aim to improve the
search with better solutions globally and locally, respectively [79]. If a new position
is worse than the previous, the hunter moves back to its previous position.
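The movement towards the leader, together with its fall-back to the previous position, can be sketched as follows (minimization is assumed, and the MML value is an illustrative assumption).

```python
import random

def move_towards_leader(x, x_leader, f, mml=0.3):
    """Eq. 3.11: move a hunter towards the leader, keeping the move only
    if it improves the objective; otherwise return to the old position."""
    candidate = [xi + random.random() * mml * (xl - xi)
                 for xi, xl in zip(x, x_leader)]
    return candidate if f(candidate) < f(x) else x
```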
Moreover, if the positions of hunters are too close, their positions are properly
reorganized before the process continues with the next step. This process is necessary
in order to avoid local optima [79]. Alternatively, the reorganization process may
occur after a number of iterations [82]. When a member of the herd finds a better
solution, then it becomes the leader. A reorganization marks the end of an epoch; an
epoch is reached after a predefined number of iterations or after a local optimum is
detected [80, 82, 83]. Reorganization can be performed as follows:

$x_i' = x_i^L \pm rand \cdot \left(\max(x_i) - \min(x_i)\right) \cdot \alpha \cdot \exp(-\beta \cdot EN)$ (3.12)

In Eq. 3.12, parameters α and β are related to the convergence rate of the algorithm; if their value is large, a slow convergence is obtained, which is useful for problems with several local optima. Parameter EN counts the number of epochs during which the hunting group has remained trapped up to the current step. It is to be noted that in a
couple of studies [81, 84], a different way of repositioning of the hunters is presented,
by adding a perturbation to the hunters’ position according to their distance from the
prey. In HuS method, the past vectors are stored in the hunting group matrix, similarly
to Tabu Search and HS methods but the difference is that comparison is conducted
only with the previous solution [80]. In order to enhance the method’s efficiency,
modifications such as Lévy flight for the search process are also proposed [85].
This method is already applied at several occasions for production-related subjects
such as track scheduling in multi-door cross-dock problem [81], and no-wait flow
shop scheduling [84].

3.11 Migrating Birds Optimization Algorithm

Migrating Birds Optimization (MBO) algorithm was proposed by Duman et al. [86]
and is inspired from the details of flight of a flock of migrating birds. More specifi-
cally, this algorithm attempts to imitate the famous “V-formation” of migrating birds,
used by birds to fly at long distances [86, 87]. One of the birds is assumed to lead the
entire flock and the other birds follow the leader along two lines. This V-formation
is instinctive for the migrating birds and is shown to be beneficial for the flight. It
minimizes the total energy required for the flight, because the birds at the two lines
use the air turbulence from the leader’s flapping [87, 88]. As the first bird-leader is
consuming more energy, it gets tired after some time and is replaced by another bird
[88].
At each iteration of the MBO algorithm, candidate solutions at the neighborhood
of each bird are evaluated, starting from the leading bird, and then improved candidate
solutions replace the older ones [86, 89]. It is to be noted that a unique mechanism, the
so-called “benefit mechanism”, exists in this method; if no improved solutions can
be found among the neighbors of a bird, the unused neighbors of the front solution
are shared between the two birds and the best of them can be chosen instead [86, 89].
This mechanism is actually the most important feature that distinguishes MBO from
the other metaheuristics and is considerably favorable for the rapid convergence of
the algorithm [87].
Generally, the numerical parameters of the algorithm include [86, 88, 89] the
number of initial birds-solutions (n), the number of neighbor solutions considered
(k), the number of neighbor solutions to be shared with the following solution (x),
and the number of tours or laps to be conducted (m) and maximum iterations number
(K). Some of these parameters are compared to actual parameters of the flock; for
example k is thought as the speed of birds in real situations, x represents the wing
distance of bird and m the number of flutter or flapping of birds [87, 88]. Careful
choice of the various MBO parameters is crucial in order to find the optimum solution
within a reasonable time period; suggestions on optimum values of these parameters
can be found in [86] and [88].
At first, n initial birds-candidate solutions are randomly generated and positioned
in a V-formation. Then, the neighborhood of the leading solution is searched for
potentially better solutions; after that, the same process, including the “benefit” mechanism, is applied to the other birds in the formation until the “tails” of the formation are reached [86, 89]. This process is repeated several times, called tours, after
which the leading bird becomes the last and one of the second row birds becomes
the first. It is to be noted that the leading bird will move to the tail of the line, from
which the current leading bird originated, but the next time this movement will be
conducted in the opposite line [87]. In another approach, this movement was also
performed by exchanging the leader with the best performing bird [90]. When the
termination criteria are met, the process is finished.
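A simplified, single-line sketch of one pass down the formation with the benefit mechanism is given below (minimization assumed); in the actual algorithm the sharing alternates between the two lines of the V-formation, which is omitted here for brevity.

```python
def mbo_pass(flock, neighbors_of, evaluate, x_share=1):
    """One pass from the leader towards the tail: each bird tries its own
    neighbors plus the unused neighbors shared by the bird in front, and
    the x best unused candidates are passed on (benefit mechanism)."""
    shared = []                                    # neighbors handed backwards
    for i, bird in enumerate(flock):
        candidates = sorted(neighbors_of(bird) + shared, key=evaluate)
        if candidates and evaluate(candidates[0]) < evaluate(bird):
            flock[i] = candidates.pop(0)           # replace with improvement
        shared = candidates[:x_share]              # best unused, shared on
    return flock
```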
As for other metaheuristics, several modifications have been proposed also for
MBO. In order to improve the global search ability, it was proposed that multiple
flocks of birds could be used simultaneously (Enhanced MBO method), generated by
heuristic methods or randomly [89]. After m tours are completed, the best solutions
are shared among the various flocks [89]. Furthermore, Alkaya et al. [91] designed an
effective MBO algorithm with a specific neighbor generating function. According to
this approach, neighbors are obtained only within D-dimensional spheres. In another study, focusing on the neighbor search part of the MBO algorithm, several neighbor operators were compared according to their performance on various benchmarks.
Moreover, in the work of Soto et al. [90], parallel procedures for the various sorting
processes conducted in MBO are used, in order to further enhance its efficiency.
Finally, other ways of neighborhood search and suggestions on values of MBO
parameters are presented in the work of Benkalai et al. [92]. As for the applications
of MBO in engineering, although this method is relatively new, it has been already
employed to solve no-wait flow shop scheduling problem [89], flow shop sequencing
problem [88], and machine-part cell formation problems [92, 93].
3.12 Flower Pollination Algorithm

Flower (or plant) Pollination Algorithm (FPA) constitutes a group of metaheuristics inspired by the natural process of plant pollination. As with most other bioinspired methods, such algorithms were relatively recently proposed, initially by Yang
spired methods, such algorithms were relatively recently proposed, initially by Yang
[93]; however, Zhang et al. [94] also proposed another novel pollination algorithm,
which was inspired by the work of Ibanez [95]. In nature, the pollination process for plants is conducted either in a biotic (90%) or an abiotic (10%) way. This process
is usually carried out by insects, which can show preference to specific species of
plants and neglect others. In fact, this behavior constitutes an important parameter
for FPA and is termed as the “flower constancy”. Furthermore, it is observed that the
reproduction ratio is correlated to the similarity of flowers [96].
The biotic pollination process is conducted by cross-pollination. Insects can fly
at long distances to carry pollen from different plants; this process, referred to as
global pollination as well, can be modeled by Lévy flight process. On the other hand,
self-pollination is conducted on the same plant or flower and represents the local
pollination process, whose occurrence is regulated by a probability value [96]. In
this approach, abiotic pollination is classified among the local pollination processes;
in nature, this pollination method can refer to pollination by diffusion through the
air [93]. In the approach presented in [94], a more detailed representation of the
pollination process is conducted, including also the population of pollinators, which
feed on the nectar of flowers as well as pollen grains.
In the original FPA, the solution is represented by a flower or pollen and the
population of flowers considered represents the population of solutions. In the sim-
plest form of the FPA, it is assumed that each plant contains exactly one flower
or pollen [93]. The global and local search processes, performing exploration and
exploitation, respectively, are represented by the global and local or self-pollination
processes [97]. In the FPA version presented by Yang [93], at first, the initial popu-
lation of size n is generated and the initial global best is determined after evaluation.
Then, an iterative process is taking place for every individual of the population and
global or local pollination can occur according to a probability value. For the local
pollination process, it was proposed that it can be further enhanced with the aid
of a clonal selection process [96]. Afterwards, fitness evaluation is performed and
new solutions replace the previous, if they are superior to them. The convergence of
FPA algorithm was studied in the work of He et al. [98] and it was found that this
method can converge quickly and achieve excellent solutions. As it is a relatively
new metaheuristic, it has not found yet many applications regarding engineering but
there exist relevant works such as [94], in which an FPA algorithm was applied in
the field of e-commerce logistics.
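The switch between global and local pollination can be sketched as follows; the switch probability and the Gaussian stand-in for the Lévy step are illustrative assumptions.

```python
import random

def pollinate(population, best, p_switch=0.8):
    """One FPA generation: with probability p_switch perform global
    pollination towards the global best (Lévy-like step, approximated
    here by a Gaussian draw), otherwise local pollination mixing two
    randomly chosen flowers."""
    new_pop = []
    for x in population:
        if random.random() < p_switch:             # global (biotic) pollination
            step = random.gauss(0.0, 1.0)          # placeholder for a Lévy step
            cand = [xi + step * (bi - xi) for xi, bi in zip(x, best)]
        else:                                      # local (self-)pollination
            a, b = random.sample(population, 2)
            eps = random.random()
            cand = [xi + eps * (ai - bi) for xi, ai, bi in zip(x, a, b)]
        new_pop.append(cand)
    return new_pop
```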

3.13 Anarchic Society Optimization Algorithm

Anarchic Society Optimization (ASO) algorithm is inspired by human societies, in which members act adventurously, often without obeying specific rules, attempting to find themselves in better situations; it is also a rare case of a metaheuristic based
to find themselves in better situations; it is also a rare case of a metaheuristic based
on political ideology [99]. This method was proposed by Ahmadi-Javid [100] and is
suitable both for discrete and continuous optimization problems [101]. A basic feature
of the ASO method is that the position of each member is affected by three different
movement policies, according to its current position, to the other members’ position,
as well as past positions [100]. So, at first, an initial population of society members-
candidate solutions is generated and the fitness for each member is computed. Then
the three aforementioned policies are applied for society members.
More specifically, in order to calculate the contribution of the current position to
the determination of the next position, the so-called fickleness index is first calcu-
lated, according to the fitness of each member and its neighboring members, chosen
according to a neighboring function [100]. This represents the will of members to alter their current position due to dissatisfaction with it [100, 102]. The next step
involves the alignment of members’ movement according to the best individual,
similarly to many SI-based methods; however, as the members of this society tend to
be adventurous and anarchic, there exists a probability value, the external irregular-
ity index, which determines whether each member will behave according to the best
individual or will move irregularly [100]. It is to be noted that the highest level of
irregularity occurs when the diversity of the society is increasing, i.e., when the variation of objective function values for society members is high. Finally, the members can
also move according to their past values, but given that they are behaving irregularly,
there exists a probability value, the internal irregularity index, which is employed
for the determination of whether the best or a random previous move will be selected
[100, 103].
After the movement policies are determined, an important step before the end of each iteration of ASO is performed, namely the application of a suitable combination rule in order to decide the final movement of each member [100]. This combination
rule can involve the choice of the most favorable of the three policies (elitism), a suc-
cessive application of each policy to alter each member’s position (sequential rule),
and random selection or hybrid technique, such as crossover [101, 102, 104]. Finally,
a review of ASO is presented in the work of Bozorgi et al. [99]. This method has
already been employed for problems concerning flow shop and job shop environment
[101, 104].
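The combination step can be sketched as below for the elitist and random rules (minimization assumed); here `moves` stands for the three candidate positions produced by the current-, society- and past-based policies.

```python
import random

def combine_policies(moves, fitness, rule="elitist"):
    """Combine the three ASO movement policies into one final move:
    the elitist rule keeps the fittest candidate position, while the
    random rule simply picks one of the three."""
    if rule == "elitist":
        return min(moves, key=fitness)
    return random.choice(moves)
```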

References

1. Sörensen K (2015) Metaheuristics—the metaphor exposed. Int Trans Oper Res 22:3–18
2. Kennedy J, Eberhart R (1995) Particle swarm optimization. In: IEEE international conference
on neural networks, Perth, WA, Australia, pp 1942–1948
3. Khare A, Rangnekar S (2013) A review of particle swarm optimization and its applications
in solar photovoltaic system. Appl Soft Comput 13:2997–3006
4. Poli R, Kennedy J, Blackwell T (2007) Particle swarm optimization. Swarm Intell 1:33–57
5. Fatih Tasgetiren M, Liang Y-C, Sevkli M, Gencyilmaz G (2006) Particle swarm optimization
and differential evolution for the single machine total weighted tardiness problem. Int J Prod
Res 44:4737–4754
6. Guo YW, Li WD, Mileham AR, Owen GW (2009) Optimisation of integrated process planning
and scheduling using a particle swarm optimisation approach. Int J Prod Res 47:3775–3796
7. Tsai C-Y, Kao I-W (2011) Particle swarm optimization with selective particle regeneration
for data clustering. Expert Syst Appl 38:6565–6576
8. Esmin AAA, Coelho RA, Matwin S (2015) A review on particle swarm optimization algorithm
and its variants to clustering high-dimensional data. Artif Intell Rev 44:23–45
9. Zhang Y, Wang S, Ji G (2015) A comprehensive survey on particle swarm optimization
algorithm and its applications. J Math Probl Eng 931256
10. Wang YF, Zhang YF, Fuh JYH (2012) A hybrid particle swarm based method for process
planning optimisation. Int J Prod Res 50:277–292
11. Samarghandi H, ElMekkawy TY (2014) Solving the no-wait flow-shop problem with
sequence-dependent set-up times. Int J Comput Integr Manuf 27:213–228
12. Attar SF, Mohammadi M, Tavakkoli-Moghaddam R, Yaghoubi S (2014) Solving a new multi-
objective hybrid flexible flowshop problem with limited waiting times and machine-sequence-
dependent set-up time constraints. Int J Comput Integr Manuf 27:450–469
13. Che ZH (2017) A multi-objective optimization algorithm for solving the supplier selection
problem with assembly sequence planning and assembly line balancing. Comput Ind Eng
105:247–259
14. Keshtzari M, Naderi B, Mehdizadeh E (2016) An improved mathematical model and a hybrid
metaheuristic for truck scheduling in cross-dock problems. Comput Ind Eng 91:197–204
15. Karaboga D (2005) An idea based on honey bee swarm for numerical optimization. Technical report TR-06, Erciyes Engineering Faculty, Kayseri
16. Karaboga D, Akay B (2009) A comparative study of artificial bee colony algorithm. Appl
Math Comput 214:108–132
17. Dunder E, Gumustekin S, Cengiz MA (2018) Variable selection in gamma regression models
via artificial bee colony algorithm. J Appl Stat 45:8–16
18. Karaboga D, Basturk B (2008) On the performance of artificial bee colony (ABC) algorithm.
Appl Soft Comput 8:687–697
19. Bulut O, Tasgetiren MF (2014) An artificial bee colony algorithm for the economic lot schedul-
ing problem. Int J Prod Res 52:1150–1170
20. Li X, Yang G (2016) Artificial bee colony algorithm with memory. Appl Soft Comput
41:362–372
21. Hemamalini S, Simon SP (2010) Artificial bee colony algorithm for economic load dispatch
problem with non-smooth cost functions. Electr Power Components Syst 38:786–803
22. Lei D, Guo X (2013) Scheduling job shop with lot streaming and transportation through a
modified artificial bee colony. Int J Prod Res 51:4930–4941
23. Ng KKH, Lee CKM, Zhang SZ, Wu K, Ho W (2017) A multiple colonies artificial bee
colony algorithm for a capacitated vehicle routing problem and re-routing strategies under
time-dependent traffic congestion. Comput Ind Eng 109:151–168
24. Yazdani M, Gohari S, Naderi B (2015) Multi-factory parallel machine problems: improved
mathematical models and artificial bee colony algorithm. Comput Ind Eng 81:36–45
25. Wang X, Xie X, Cheng TCE (2013) A modified artificial bee colony algorithm for order
acceptance in two-machine flow shops. Int J Prod Econ 141:14–23
26. Zhang R, Song S, Wu C (2013) A hybrid artificial bee colony algorithm for the job shop
scheduling problem. Int J Prod Econ 141:167–178
27. Dorigo M (1992) Optimization, learning and natural algorithms (in Italian). Ph.D. thesis, Dipartimento di Elettronica, Politecnico di Milano
28. Osman H, Baki MF (2014) Balancing transfer lines using benders decomposition and ant
colony optimisation techniques. Int J Prod Res 52:1334–1350
29. Adubi SA, Misra S (2014) A comparative study on the ant colony optimization algorithms.
In: 2014 11th international conference on electronics, computer and computation (ICECCO),
Abuja, Nigeria, pp 1–4
30. Shyu SJ, Yin PY, Lin BMT, Haouari M (2003) Ant-tree: an ant colony optimization approach
to the generalized minimum spanning tree problem. J Exp Theor Artif Intell 15:103–112
31. Cordon O, Herrera F, Stützle T (2003) A review on the ant colony optimization metaheuristic:
basis, models and new trends. Mathware Soft Comput 9 (2–3)
32. Zecchin AC, Simpson AR, Maier HR, Leonard M, Roberts AJ, Berrisford MJ (2006) Appli-
cation of two ant colony optimisation algorithms to water distribution system optimisation.
Math Comput Model 44:451–468
33. Wong TN, Zhang S, Wang G, Zhang L (2012) Integrated process planning and
scheduling—multi-agent system with two-stage ant colony optimisation algorithm. Int J Prod
Res 50:6188–6201
34. Maniezzo V, Gambardella LM, de Luigi F (2004) Ant colony optimization. In: Onwubolu GC,
Babu BV (eds) New optimization techniques in engineering. Springer, Berlin, pp 101–121
35. Blum C (2005) Ant colony optimization: introduction and recent trends. Phys Life Rev
2:353–373
36. Yakıcı E (2017) A heuristic approach for solving a rich min-max vehicle routing problem
with mixed fleet and mixed demand. Comput Ind Eng 109:288–294
37. Seo M, Kim D (2010) Ant colony optimisation with parameterised search space for the job
shop scheduling problem. Int J Prod Res 48:1143–1154
38. Huang R-H (2010) Multi-objective job-shop scheduling with lot-splitting production. Int J
Prod Econ 124:206–213
39. Chen H, Du B, Huang GQ (2010) Metaheuristics to minimise makespan on parallel batch
processing machines with dynamic job arrivals. Int J Comput Integr Manuf 23:942–956
40. Yagmahan B, Yenisey MM (2008) Ant colony optimization for multi-objective flow shop
scheduling problem. Comput Ind Eng 54:411–420
41. Moncayo-Martínez LA, Zhang DZ (2013) Optimising safety stock placement and lead time
in an assembly supply chain using bi-objective MAX–MIN ant system. Int J Prod Econ
145:18–28
42. Hosseini HS (2007) Problem solving by intelligent water drops. In: 2007 IEEE congress on
evolutionary computation, Singapore, Singapore, pp 3226–3231
43. Niu SH, Ong SK, Nee AYC (2012) An improved intelligent water drops algorithm for achiev-
ing optimal job-shop scheduling solutions. Int J Prod Res 50:4192–4205
44. Hosseini HS (2009) The intelligent water drops algorithm, a nature inspired swarm based
optimization algorithm. J Int J Bio-Inspired Comput 1:71–79
45. Alijla BO, Wong L-P, Lim CP, Khader AT, Al-Betar MA (2014) A modified intelligent water
drops algorithm and its application to optimization problems. Expert Syst Appl 41:6555–6569
46. Niu SH, Ong SK, Nee AYC (2013) An improved intelligent water drops algorithm for solving
multi-objective job shop scheduling. Eng Appl Artif Intell 26:2431–2442
47. Geem ZW, Kim JH, Loganathan GV (2001) A new heuristic optimization algorithm: harmony
search. Simulation 76:60–68
48. Yang X-S (2009) Harmony search as a metaheuristic algorithm. In: Geem ZW (ed) Music-
inspired harmony search algorithm: theory and applications. Springer, Berlin, pp 1–14
49. Wang X, Gao X-Z, Zenger K (2015) The overview of harmony search. In: Wang X, Gao X-Z,
Zenger K (eds) An introduction to harmony search optimization method. Springer Interna-
tional Publishing, Cham, pp 5–11
50. Gao XZ, Govindasamy V, Xu H, Wang X, Zenger K (2015) Harmony search method: theory
and applications. Comput Intell Neurosci 258491
51. Mahdavi M, Fesanghary M, Damangir E (2007) An improved harmony search algorithm for
solving optimization problems. Appl Math Comput 188:1567–1579
52. Manjarres D, Landa-Torres I, Gil-Lopez S, Del Ser J, Bilbao MN, Salcedo-Sanz S, Geem
ZW (2013) A survey on applications of the harmony search algorithm. Eng Appl Artif Intell
26:1818–1831
53. Purnomo HD, Wee H-M (2014) Maximizing production rate and workload balancing in a
two-sided assembly line using harmony search. Comput Ind Eng 76:222–230
54. Zammori F, Braglia M, Castellano D (2014) Harmony search algorithm for single-machine
scheduling problem with planned maintenance. Comput Ind Eng 76:333–346
55. Alaei S, Setak M (2015) Multi objective coordination of a supply chain with routing and
service level consideration. Int J Prod Econ 167:271–281
56. Wong WK, Guo ZX (2010) A hybrid intelligent model for medium-term sales forecasting in
fashion retail supply chains using extreme learning machine and harmony search algorithm.
Int J Prod Econ 128:614–624
57. Vahedi Nouri B, Fattahi P, Ramezanian R (2013) Hybrid firefly-simulated annealing algorithm
for the flow shop problem with learning effects and flexible maintenance activities. Int J Prod
Res 51:3501–3515
58. Rohaninejad M, Kheirkhah AS, Vahedi Nouri B, Fattahi P (2015) Two hybrid tabu search—
firefly algorithms for the capacitated job shop scheduling problem with sequence-dependent
setup cost. Int J Comput Integr Manuf 28:470–487
59. Yang X-S (2009) Firefly algorithms for multimodal optimization. In: Watanabe O, Zeugmann
T (eds) Stochastic algorithms: foundations and applications. Springer, Berlin, pp 169–178
60. Sayadi MK, Hafezalkotob A, Naini SGJ (2013) Firefly-inspired algorithm for discrete opti-
mization problems: an application to manufacturing cell formation. J Manuf Syst 32:78–84
61. Yu S, Su S, Lu Q, Huang L (2014) A novel wise step strategy for firefly algorithm. Int J
Comput Math 91:2507–2513
62. Fister I, Fister I, Yang X-S, Brest J (2013) A comprehensive review of firefly algorithms.
Swarm Evol Comput 13:34–46
63. Hackl A, Magele C, Renhart W (2016) Extended firefly algorithm for multimodal optimiza-
tion. In: 2016 19th international symposium on electrical apparatus and technologies (SIELA),
Bourgas, Bulgaria, pp 1–4
64. Gandomi AH, Yang X-S, Talatahari S, Alavi AH (2013) Firefly algorithm with chaos. Com-
mun Nonlinear Sci Numer Simul 18:89–98
65. Alinaghian M, Naderipour M (2016) A novel comprehensive macroscopic model for time-
dependent vehicle routing problem with multi-alternative graph to reduce fuel consumption:
a case study. Comput Ind Eng 99:210–222
66. Madani-Isfahani M, Tavakkoli-Moghaddam R, Naderi B (2014) Multiple cross-docks
scheduling using two meta-heuristic algorithms. Comput Ind Eng 74:129–138
67. Yang XS, Deb S (2009) Cuckoo search via levy flights. In: 2009 world congress on nature &
biologically inspired computing (NaBIC), Coimbatore, India, pp 210–214
68. Yildiz AR (2013) Cuckoo search algorithm for the selection of optimal machining parameters
in milling operations. Int J Adv Manuf Technol 64:55–61
69. Bulatović RR, Đorđević SR, Đorđević VS (2013) Cuckoo search algorithm: a metaheuristic approach to solving the problem of optimum synthesis of a six-bar double dwell linkage. Mech Mach Theory 61:1–13
70. Gandomi AH, Yang X-S, Alavi AH (2013) Cuckoo search algorithm: a metaheuristic approach
to solve structural optimization problems. Eng Comput 29:17–35
71. Mohamad AB, Zain AM, Nazira Bazin NE (2014) Cuckoo search algorithm for optimization
problems—a literature review and its applications. Appl Artif Intell 28:419–448
72. Valian E, Tavakoli S, Mohanna S, Haghi A (2013) Improved cuckoo search for reliability
optimization problems. Comput Ind Eng 64:459–468
73. Kanagaraj G, Ponnambalam SG, Jawahar N (2013) A hybrid cuckoo search and genetic
algorithm for reliability–redundancy allocation problems. Comput Ind Eng 66:1115–1124
74. Pan W-T (2012) A new fruit fly optimization algorithm: taking the financial distress model
as an example. Knowl-Based Syst 26:69–74
75. Xing B, Gao W-J (2016) Innovative computational intelligence: a rough guide to 134 clever
algorithms. Springer International Publishing, Switzerland
76. Zheng X, Wang L (2016) A knowledge-guided fruit fly optimization algorithm for dual
resource constrained flexible job-shop scheduling problem. Int J Prod Res 54:5554–5566
77. Mitić M, Vuković N, Petrović M, Miljković Z (2015) Chaotic fruit fly optimization algorithm.
Knowl-Based Syst 89:446–458
78. Han Y, Gong D, Li J, Zhang Y (2016) Solving the blocking flow shop scheduling problem
with makespan using a modified fruit fly optimisation algorithm. Int J Prod Res 54:6782–6797
79. Oftadeh R, Mahjoob MJ (2009) A new meta-heuristic optimization algorithm: hunting search.
In: 2009 fifth international conference on soft computing, computing with words and percep-
tions in system analysis, decision and control, Famagusta, Cyprus, pp 1–5
80. Oftadeh R, Mahjoob MJ, Shariatpanahi M (2010) A novel meta-heuristic optimization
algorithm inspired by group hunting of animals: hunting search. Comput Math with Appl
60:2087–2098
81. Yazdani M, Naderi B, Mousakhani M (2015) A model and metaheuristic for truck scheduling
in multi-door cross-dock problems. Intell Autom Soft Comput 21:633–644
82. Bouzaida S, Sakly A, M’Sahli F (2014) Extracting TSK-type neuro-fuzzy model using the
hunting search algorithm. Int J Gen Syst 43:32–43
83. Zare K, Hashemi SM (2012) A solution to transmission-constrained unit commitment using
hunting search algorithm. In: 2012 11th international conference on environment and electrical
engineering, Venice, Italy, pp 941–946
84. Naderi B, Khalili M, Khamseh AA (2014) Mathematical models and a hunting search
algorithm for the no-wait flowshop scheduling with parallel machines. Int J Prod Res
52:2667–2681
85. Dogan E (2014) Solving design optimization problems via hunting search algorithm with
Levy flights. Struct Eng Mech 52(2):351–358
86. Duman E, Uysal M, Alkaya AF (2012) Migrating birds optimization: a new metaheuristic
approach and its performance on quadratic assignment problem. Inf Sci 217:65–77
87. Tongur V, Ülker E (2016) The analysis of migrating birds optimization algorithm with neigh-
borhood operator on traveling salesman problem. In: Lavangnananda K, Phon-Amnuaisuk
S, Engchuan W, Chan JH (eds) Intelligent and evolutionary systems. Springer International
Publishing, Cham, pp 227–237
88. Tongur V, Erkan Ü (2014) Migrating birds optimization for flow shop sequencing problem. J
Comput Commun 2:142
89. Gao KZ, Suganthan PN, Chua TJ (2013) An enhanced migrating birds optimization algorithm
for no-wait flow shop scheduling problem. In: 2013 IEEE symposium on computational
intelligence in scheduling (CISched), Singapore, Singapore, pp 9–13
90. Soto R, Crawford B, Almonacid B, Paredes F (2016) Efficient parallel sorting for migrating
birds optimization when solving machine-part cell formation problems. Sci Program 9402503
91. Alkaya AF, Algin R, Sahin Y, Agaoglu M, Aksakalli V (2014) Performance of migrating
birds optimization algorithm on continuous functions. In: Tan Y, Shi Y, Coello CAC (eds)
Advances in swarm intelligence. Springer International Publishing, Cham, pp 452–459
92. Benkalai I, Rebaine D, Gagné C, Baptiste P (2017) Improving the migrating birds optimization
metaheuristic for the permutation flow shop with sequence-dependent set-up times. Int J Prod
Res 55:6145–6157
93. Yang X-S (2012) Flower pollination algorithm for global optimization. In: Durand-Lose J,
Jonoska N (eds) Unconventional computation and natural computation. Springer, Berlin, pp
240–249
94. Zhang M, Pratap S, Huang GQ, Zhao Z (2017) Optimal collaborative transportation service
trading in B2B e-commerce logistics. Int J Prod Res 55:5485–5501
95. Ibanez S (2012) Optimizing size thresholds in a plant-pollinator interaction web: towards a
mechanistic understanding of ecological networks. Oecologia 170:233–242
96. Nabil E (2016) A modified flower pollination algorithm for global optimization. Expert Syst
Appl 57:192–203
97. Abdelaziz AY, Ali ES (2015) Static VAR compensator damping controller design based on
flower pollination algorithm for a multi-machine power system. Electr Power Compon Syst
43:1268–1277
98. He X, Yang X-S, Karamanoglu M, Zhao Y (2017) Global convergence analysis of the
flower pollination algorithm: a discrete-time Markov chain approach. Procedia Comput Sci
108:1354–1363
99. Bozorgi A, Bozorg-Haddad O, Chu X (2018) Anarchic society optimization (ASO) algorithm.
In: Bozorg-Haddad O (ed) Advanced optimization by nature-inspired algorithms. Springer
Singapore, Singapore, pp 31–38
100. Ahmadi-Javid A (2011) Anarchic society optimization: a human-inspired method. In: 2011
IEEE congress of evolutionary computation (CEC), New Orleans, LA, pp 2586–2592
101. Ahmadi-Javid A, Hooshangi-Tabrizi P (2015) A mathematical formulation and anarchic soci-
ety optimisation algorithms for integrated scheduling of processing and transportation oper-
ations in a flow-shop environment. Int J Prod Res 53:5988–6006
102. Bozorg-Haddad O, Latifi M, Bozorgi A, Rajabi M-M, Naeeni S-T, Loáiciga HA (2018)
Development and application of the anarchic society algorithm (ASO) to the optimal operation
of water distribution networks. Water Sci Technol Water Supply 18:318–332
103. Shayeghi H (2012) Anarchic society optimization based pid control of an automatic voltage
regulator (AVR) system. Electr Electron Eng 2(4):199–207
104. Ahmadi-Javid A, Hooshangi-Tabrizi P (2017) Integrating employee timetabling with schedul-
ing of machines and transporters in a job-shop environment: a mathematical formulation and
an anarchic society optimization algorithm. Comput Oper Res 84:73–91
Chapter 4
Other Computational Methods for Optimization

4.1 Introduction

The last chapter of the present work is dedicated to methods that bear few or no similarities to the methods presented in the two previous chapters but are nevertheless worth mentioning due to their popularity or promising capabilities in the field of industrial engineering. These methods include Simulated Annealing, Tabu Search, the Electromagnetism-like Mechanism, and Response Surface Methodology.
More specifically, Simulated Annealing method is related to the metallurgical pro-
cess of annealing and its objective function is related to the reduction of the internal
energy of the system, by appropriate variation of its temperature. Tabu Search method
exhibits essentially no nature-inspired characteristics, as its basic feature is a list of
unacceptable moves, which is used to prevent the solution process to get trapped
in a local optimum point. Electromagnetism-like Mechanism is using the natural
mechanism of attraction-repulsion in electromagnetism, in order to lead the solution
process to the global optimum point. Finally, Response Surface Methodology is a
more generic method including not only optimization capabilities but also Design of
Experiments and experimental results analysis features. As in the previous chapters,
apart from the description of basic features of each method, examples from indus-
trial engineering problems solved by means of the aforementioned methods are also
presented.

4.2 Simulated Annealing Method

The Simulated Annealing (SA) method is employed to solve optimization problems in a very wide range of fields. This method was inspired by the annealing process applied for the heat treatment of materials in metallurgy, which aims to alter their physical properties by altering their structure. This thermal process involves heating and
cooling of materials according to a predefined schedule, which depends on the type of material and is directly related to the thermodynamic free energy of the system. The SA method
was formulated by Kirkpatrick et al. [1]. Although SA shares common features with some other stochastic optimization methods, it exhibits distinctive features that make it improper to classify under the categories presented in Chaps. 2 and 3; thus, this method is presented in a separate chapter.
The basic idea behind the SA method is that the objective function is treated as the internal energy of a material, which must be reduced to the state of minimum possible energy. Then, by checking the energy levels of states neighboring the current one, the system may move to another state, with a view to reaching the state of lowest energy. The candidate solutions from the neighborhood belong to the set of feasible solutions for the given search space. Initially, the temperature of the system is set at a relatively high value and is allowed to cool down as the procedure continues. The probability of accepting a new state is computed by a special function with respect to the energies of the two states, i.e., current and new, and the global temperature. Thus, the temperature can represent the level of disorder in the optimization process [2]. As anticipated, states with lower energy are preferable.
After the neighborhood solutions are checked, two paths may be taken. If the new solution is better, it is accepted. On the other hand, if it is worse, the decision is made by a probability function based on the difference between the current and new solutions and on the current temperature. This probability function can be formulated similarly to the Boltzmann or even the Cauchy function, the latter being considered more capable of preventing trapping in local optima. When the temperature is high, states corresponding to worse solutions may be accepted, allowing the algorithm to evade local optima and premature convergence [3]. This is not allowed when the temperature is lower, because the solution must then be guided to the area in which the optimum solution will be determined. At each temperature, a number of steps are conducted before it is decreased by some method (decrement rule). Cooling schemes can be geometric, exponential, or linear [4]. Usually, less time is spent at higher temperatures and more at lower ones. This behavior of the SA algorithm is advantageous for cases with many local optima. To further enhance the capability of the algorithm to evade local optima, a restart strategy is often implemented, which restarts the optimization process from a new random solution if a given number of iterations pass without improvement [5]. The pseudocode for SA can be seen in Fig. 4.1.
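
For a minimization problem, the Boltzmann-type acceptance rule mentioned above is commonly written as follows (a standard formulation, given here for illustration rather than taken from a specific reference cited in this chapter):

P(\text{accept}) =
\begin{cases}
1 & \text{if } \Delta E \le 0 \\
\exp\left( -\Delta E / T \right) & \text{if } \Delta E > 0
\end{cases}

where \Delta E = f(x_{new}) - f(x_{current}) is the change in the objective (energy) and T is the current temperature. A typical geometric decrement rule is T_{k+1} = \alpha T_k, with \alpha commonly chosen in the range (0.8, 0.99).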
If the initial temperature is not high enough or is decreased too rapidly, the annealed material will contain many defects and imperfections [6]. The initial temperature is set at a value that leads to the acceptance of a wide range of solutions, in order to explore large areas of the search space. For example, the initial temperature can be set at a value that allows about 90% of moves to be accepted. Furthermore, the chosen neighborhood and cooling scheme/schedule play a decisive role in the performance of the SA method [7].

Step 1. Create the initial state of the system (initial temperature and random
initial feasible solution)
Step 2. While the termination criteria are not met:
Step 2a. Compute the temperature with an appropriate function (annealing
schedule)
Step 2b. Pick a random neighbor (by a small change / perturbation of
the previous solution)
Step 2c. If the acceptance probability is higher than a random value in (0,1),
then the state of the random neighbor becomes the new state
Step 3. The final state is provided

Fig. 4.1 Pseudocode of simulated annealing algorithm
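
To make the procedure of Fig. 4.1 concrete, a minimal Python sketch of SA with Boltzmann acceptance and a geometric cooling schedule is given below; the objective function, the Gaussian neighborhood, and the parameter values (t0, alpha, steps_per_temp, t_min) are illustrative assumptions and are not taken from the cited references.

import math
import random

def simulated_annealing(f, x0, neighbor, t0=100.0, alpha=0.95,
                        steps_per_temp=50, t_min=1e-3):
    """Minimize f starting from x0; 'neighbor' perturbs a solution."""
    current, f_current = x0, f(x0)
    best, f_best = current, f_current
    t = t0
    while t > t_min:
        for _ in range(steps_per_temp):
            candidate = neighbor(current)
            delta = f(candidate) - f_current
            # Always accept improvements; accept worse moves with
            # Boltzmann probability exp(-delta / t)
            if delta <= 0 or random.random() < math.exp(-delta / t):
                current, f_current = candidate, f_current + delta
                if f_current < f_best:
                    best, f_best = current, f_current
        t *= alpha  # geometric decrement rule
    return best, f_best

# Example: minimize a multimodal 1-D function with Gaussian perturbations
if __name__ == "__main__":
    f = lambda x: (x - 2.0) ** 2 + 3.0 * math.sin(5.0 * x)
    x_best, f_x = simulated_annealing(
        f, x0=random.uniform(-10.0, 10.0),
        neighbor=lambda x: x + random.gauss(0.0, 0.5))
    print(x_best, f_x)

The restart strategy mentioned above could be added by wrapping the call in a loop that resets the current solution to a fresh random one whenever a given number of temperature levels pass without improvement.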

As the SA method is one of the earliest computational methods for solving optimization problems, a large number of related works exists in the literature. More specifically, in the field of industrial and manufacturing engineering, several works exist on subjects such as dynamic facility layout [8–10], cell formation [3], hybrid vehicle routing [5], integrated process planning and scheduling [11], integrated lot sizing and scheduling [12], and truck scheduling [13].

4.3 Tabu Search

Tabu Search (TS) constitutes one of the earliest metaheuristic methods and was proposed by Glover [14]. The TS method is based on local search, but its rules improve the performance of local search methods by preventing trapping in a local minimum point. The simplest form of the TS method, as described by its inventor, contains the basic element of the method, which is a short-term memory list of forbidden moves. In the work of Glover [14], several applications of TS methods are reported, and the TS method is considered rather beneficial for combinatorial optimization problems such as scheduling [15].
TS involves a local search procedure, in which an initial feasible solution is improved until the optimum point is reached. A basic feature of the TS method is the use of several memory structures, which guide the search to avoid premature convergence and to explore a satisfactory portion of the total search space. The simplest list is the short-term list, in which the last n visited solutions are stored so that they are not revisited until they are eventually deleted from the list [16]. The size of a tabu list often depends on the size of the problem, and there exists no single rule providing the exact size for every problem [17, 18]. This list can essentially prevent
the algorithm from returning to recently searched solutions and reaching a local optimum [15]. Apart from the search procedure, intensification and diversification schemes can also be applied during the search, aided by two special types of tabu lists [19]. The intermediate-term list is related to the intensification process and enables thorough exploration of promising areas of the search space. It records the number of consecutive iterations for which various solution components have been present in the current solution [17, 18]. The third list type is the long-term list, which serves diversification and helps the search procedure move to new regions when a local optimum is reached. This memory is a frequency memory, storing how often moves have been performed or solutions visited, and it can be used either to restart the search from a different initial solution (restart diversification) or simply to penalize frequently performed moves (continuous diversification) [17, 18]. The three different memory lists can occasionally overlap.
It is important to note that although the elements in a short-term tabu list indicate restricted moves, exceptions can be allowed when no current solution is better than a restricted one. The rule that enables the use of a restricted move is called an aspiration criterion, and the most popular criterion is, in fact, allowing a tabu move that is better than the current best solution [15]. The use of all types of lists is recommended for hard optimization problems; otherwise, the short-term list can be sufficient.
At first, an initial solution is generated and set as the best solution. The tabu list is also initialized, and the search proceeds with moves in the search space until the termination criteria, e.g., a maximum number of iterations, a threshold fitness value, or a maximum number of iterations with no improvement in fitness, are met. These moves are essentially transitions between various points in the search space [17]. At every iteration, a list of candidate solutions, namely the neighborhood solutions, is generated, and each of these solutions is checked for whether it is tabu and whether it improves the best solution. This neighborhood search applies a small perturbation to the previous solution, in order to obtain a slightly modified one [15]. In order not to evaluate a large number of candidates, sometimes the first non-tabu move that is better than the current solution, or a solution that satisfies the aspiration criteria, is chosen (best feasible one) [15, 19]. When the tabu list is full, the first element is pushed out so that a new one can enter the list.
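
A minimal Python sketch of this loop, using only the short-term list and the aspiration criterion described above, is given below; the neighborhood generator, the objective function, and the parameter values (tabu_size, max_iters) are illustrative assumptions.

import random
from collections import deque

def tabu_search(f, x0, neighbors, tabu_size=10, max_iters=500):
    """Minimize f using a short-term tabu list of recently visited solutions."""
    current, best, f_best = x0, x0, f(x0)
    tabu = deque(maxlen=tabu_size)  # when full, the oldest element is pushed out
    for _ in range(max_iters):
        candidates = neighbors(current)
        # Keep non-tabu candidates, plus tabu ones satisfying the aspiration
        # criterion (better than the best solution found so far)
        admissible = [x for x in candidates if x not in tabu or f(x) < f_best]
        if not admissible:
            break
        current = min(admissible, key=f)  # move to the best admissible neighbor
        tabu.append(current)
        if f(current) < f_best:
            best, f_best = current, f(current)
    return best, f_best

# Example: minimize an integer quadratic; neighbors are nearby integers
if __name__ == "__main__":
    f = lambda x: (x - 7) ** 2
    neighbors = lambda x: [x - 2, x - 1, x + 1, x + 2]
    print(tabu_search(f, x0=random.randint(-50, 50), neighbors=neighbors))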
The TS method is often used in conjunction with other methods in order to enhance their capabilities. It is rather simple to implement in its original formulation, as it involves only three parameters, namely the tabu size (or tenure), the maximum number of iterations, and the number of neighbors [16]. As TS is one of the earliest metaheuristics, many modifications have been proposed to improve it. In the work of Fiechter [20], the earliest effort to efficiently parallelize the TS method is presented, in conjunction with the various types of memory lists, with the intention of applying it to large optimization problems. Furthermore, in the work of Hung et al. [21], an enhanced parallel TS is presented in order to address two serious issues of previous TS variants, namely the optimality and the quality of the final solution. For that reason, a hash table is employed to save all local optimum points. The algorithm starts by assigning a random initial solution
to each processor, and the search continues until a local optimum is found. If the local optimum is already stored in the hash table, the search is terminated and restarted from a new random solution; otherwise, the solution is added to the hash table and the process continues [21]. Thus, the hash table acts as a long-term memory containing all local minima encountered, which can effectively end a search when a local optimum is reached. Furthermore, in this approach, the subspace associated with each local optimum is determined as the set of all points that lead to the same local optimum. This definition is also useful for indirectly determining the percentage of the explored area.
In the work of Dhingra and Bennage [22], a comparison of three different strategies for TS, regarding the use of the aspiration criterion and a penalty function for infeasible moves, is conducted. Demir et al. [17] employed a dynamic tabu tenure (tabu list size) and a long-term memory function, after investigating various parameters of the algorithm, in order to improve the performance of TS. In the work of Lei and Wu [23], the creation of a multi-objective TS algorithm is described.
In the work of Glover and Laguna [24], a more detailed treatment of the TS method is provided for the interested reader to consult. Moreover, a brief summary of advances concerning the TS method can be found in the work of Gendreau [25]. Various applications of the TS method to industry-related problems are reported, such as the robotic cell scheduling problem [15], the bin-packing problem [19], the quadratic assignment and generalized assignment problems [16, 18], and buffer allocation in production lines [17].

4.4 Electromagnetism-Like Mechanism Algorithm

The Electromagnetism-like Mechanism (EM) algorithm is a physics-inspired method, which is based on the attraction-repulsion mechanism of electromagnetic fields and was proposed by Birbil and Fang [26]. The candidate solutions are represented by electrically charged particles, which are located at random positions inside the search space. Electrically charged particles move according to the forces exerted on them by other particles and according to their own charge. This behavior is utilized in order to direct the candidate solutions towards the optimum one. In the standard version of the heuristic, four stages can be observed [27], namely initialization, local (neighborhood) search, calculation of the total force vector, and movement of the particles along the direction of the force. The basic principle of the method is that superior solutions attract other solutions to their position, whereas inferior solutions repel other solutions from their position [28].
During the initialization stage, the population generation and objective function evaluation take place [26, 29]. A number of points (m), representing electrically charged particles, are randomly generated inside the search space and are afterwards evaluated according to the objective function of the problem [27]. These points are uniformly distributed in the search domain according to its bounds. The best feasible solution is stored before the next steps are carried out. The next step of the algorithm involves a local search around the points for a given step length, in order
to find potentially better solutions in the proximity of the initial points. In order for the candidate points to be closer to the optimum, the selection of a neighboring point is also governed by a random variable [26, 29]. If the selected new position is better than the previous one, it replaces the previous one and the search ends; if one of these positions is better than the global best, the global best is also updated [27]. The local search procedure provides EM with a good balance between exploration and exploitation, and it can be applied either to all points, as aforementioned, or only to the current best point [30].
During the third stage, a force is exerted on each charged particle-candidate solution, according to the contributions of the other particles, by using the superposition principle. The calculation of the force is the “core” of the EM method, as it determines the direction and step of movement [26]. In a real electromagnetic field, the force is proportional to the charges of the particles and inversely proportional to the distance between them. Thus, at first, a “charge” value is calculated for each particle according to its fitness and the fitness of the other particles, in such a way that particles with higher fitness have higher charge values; however, these charges are not characterized as positive or negative [27]. This calculation is performed as follows:
q^i = \exp\left( -n \, \frac{ f\left(x^i\right) - f\left(x^{best}\right) }{ \sum_{k=1}^{m} \left[ f\left(x^k\right) - f\left(x^{best}\right) \right] } \right) \qquad (4.1)

In Eq. 4.1, q^i represents the charge of particle x^i and n is the number of dimensions of the problem. Once the charges are calculated, the force exerted on each of the m particles can be computed as follows:

F^i = \sum_{j \ne i}^{m}
\begin{cases}
\left(x^j - x^i\right) \dfrac{q^i q^j}{\left\| x^j - x^i \right\|^2} & \text{if } f\left(x^j\right) < f\left(x^i\right) \\
\left(x^i - x^j\right) \dfrac{q^i q^j}{\left\| x^j - x^i \right\|^2} & \text{if } f\left(x^j\right) \ge f\left(x^i\right)
\end{cases} \qquad (4.2)

According to Eq. 4.2, if particle j is superior to particle i, the interaction is positive (attraction, pulling particle i towards particle j); otherwise, it is negative (repulsion). Finally, during the fourth stage of the algorithm, the particles move in the direction of the exerted force by a specific step length [27]. The move depends on a random value (λ), on the normalized total force, which maintains feasibility, and on a random vector from a Random Number Generator (RNG), whose components correspond to the lower and upper bounds of allowed movement in each direction [29], as follows:

x^i = x^i + \lambda \, \dfrac{F^i}{\left\| F^i \right\|} \, (\mathrm{RNG}) \qquad (4.3)

According to this calculation, there is a chance of moving to unexplored regions of the search space [30]. The algorithm stops when a maximum number of iterations is reached or when a given number of successive iterations pass without change in the best value.
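
A minimal Python sketch of the four EM stages described above (Eqs. 4.1–4.3) follows; the test function, the bounds, and the parameter values are illustrative assumptions, and the local search stage is reduced to a single random perturbation per point for brevity.

import math
import random

def em_optimize(f, bounds, m=20, iters=200, local_step=0.01):
    """Minimize f over the box 'bounds' using the EM mechanism (Eqs. 4.1-4.3)."""
    dim = len(bounds)
    pts = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(m)]
    fit = [f(x) for x in pts]
    for _ in range(iters):
        # Local search stage, reduced here to one random perturbation per point
        for i in range(m):
            cand = [xi + local_step * (hi - lo) * random.uniform(-1, 1)
                    for xi, (lo, hi) in zip(pts[i], bounds)]
            fc = f(cand)
            if fc < fit[i]:
                pts[i], fit[i] = cand, fc
        best = min(range(m), key=lambda i: fit[i])
        # Charge of each particle (Eq. 4.1): lower objective -> higher charge
        denom = sum(fk - fit[best] for fk in fit) or 1e-12
        q = [math.exp(-dim * (fi - fit[best]) / denom) for fi in fit]
        # Total force (Eq. 4.2) and movement (Eq. 4.3); the best point is kept
        for i in range(m):
            if i == best:
                continue
            force = [0.0] * dim
            for j in range(m):
                if j == i:
                    continue
                d2 = sum((a - b) ** 2 for a, b in zip(pts[j], pts[i])) or 1e-12
                sign = 1.0 if fit[j] < fit[i] else -1.0  # attraction / repulsion
                for k in range(dim):
                    force[k] += sign * (pts[j][k] - pts[i][k]) * q[i] * q[j] / d2
            norm = math.sqrt(sum(c * c for c in force)) or 1e-12
            lam = random.random()
            pts[i] = [min(max(xi + lam * (fc / norm) * (hi - lo), lo), hi)
                      for xi, fc, (lo, hi) in zip(pts[i], force, bounds)]
            fit[i] = f(pts[i])
    best = min(range(m), key=lambda i: fit[i])
    return pts[best], fit[best]

# Example: 2-D sphere function over [-5, 5]^2
if __name__ == "__main__":
    print(em_optimize(lambda x: x[0] ** 2 + x[1] ** 2, [(-5, 5), (-5, 5)]))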
It has been shown that the EM algorithm has a satisfactory diversification ability, owing to the attraction-repulsion mechanism involved in the calculation of particle movement, but its intensification ability is weaker, which the local search feature attempts to improve [26]. In the work of Zhang et al. [27], some modifications were proposed to improve its performance and to add the ability to handle constraints. First, the calculation of the total force was simplified by using only a percentage of the component forces to obtain the resultant force, instead of using all m − 1 components, and the Euclidean norm term used in the total force calculation was eliminated, as it increased the computational cost and could lead to local optima. Furthermore, the particle move formula was improved by adding a new variable, called the move probability, according to which an improved formula, adding some components of the total force to the previous position, is applied; otherwise, the original update formula is preferred. If neither of these moves produces a better position, an elitism strategy is adopted and the previous position is retained.
In the work of Tseng and Chen [28], a modified charge calculation formula was used and a new distance measure between particles was presented. Details about other enhanced versions of the EM algorithm can be found in the work of Xing and Gao [29]. Applications of the EM algorithm to industry-related problems include the single machine tardiness problem [28], the response time variability problem [30], optimized tool path planning [31], and the layout design of a reconfigurable manufacturing system [32].

4.5 Response Surface Methodology

Response Surface Methodology (RSM) is a widely used method for the determination of the correlation between several explanatory variables and response variables of a specific problem. The method was first proposed by Box and Wilson in 1951 [33] and consists of several steps, including the Design of Experiments (DoE) for a given problem, the analysis of the results, and the determination of optimum parameters by fitting the results to a second-degree polynomial model. As the RSM method involves such a variety of processes, it can also be defined as a collection of mathematical and statistical techniques [34].
It is common to conduct experiments using a factorial or fractional factorial design before analyzing the results and using the RSM method to determine the optimum parameters [35, 36]; usually a Central Composite Design (CCD) or a Box–Behnken design is preferred. More specifically, if a first-order model is to be selected next, a 2^k factorial design or a Plackett–Burman design is more suitable, whereas if a second-order model is to be selected, a 3^k factorial design, a CCD, or a Box–Behnken design is preferred [37]. After the results are obtained, the first step involves the selection of the most appropriate function that can adequately model the correlation between the input and output variables. Usually, functions containing first- or second-order terms, namely linear, squared, or cross-product terms, are employed to model the majority of problems [38]. It should be noted that the failure of a lower-order
model does not directly imply that a higher-order model is more suitable, as higher-order models have deficiencies of their own [35]. For the determination of the model coefficients, the least squares method is usually used, and statistical tests such as the Analysis of Variance (ANOVA) can be performed to compute the goodness of fit of the model and the statistical significance of its terms [35, 36]; if it is shown that some of the terms can be excluded from the model, this indicates that the effect of a variable, or of its combination with other variables, does not contribute to the explanation of the correlation between the input and output variables. The use of an appropriate DoE method in the previous step is advantageous not only because it efficiently reduces the number of necessary experiments, and thus time and cost, but also because it serves the establishment of a more accurate model, as it can provide minimal errors [34].
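
The full second-order model mentioned above has the standard form

y = \beta_0 + \sum_{i=1}^{k} \beta_i x_i + \sum_{i=1}^{k} \beta_{ii} x_i^2 + \sum_{i<j} \beta_{ij} x_i x_j + \varepsilon

where y is the response, x_1, ..., x_k are the input variables, the \beta coefficients are estimated by the least squares method, and \varepsilon is the error term.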
The next step of the RSM procedure is the actual optimization process, in order to find the optimum parameters that minimize the response. After a starting point is selected, the optimization process is conducted in two steps. The first step involves the use of a first-order model and the steepest descent method in order to obtain rapid convergence from the initial point to the vicinity of the optimum point. The second step involves the use of a second-order model in order to obtain higher accuracy when searching for the optimum point in the smaller area indicated by the first step. After a candidate point is detected (stationary point), it is examined in order to determine whether it is in fact a point of minimum response and not a saddle point [36].
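
The second of the two steps above can be sketched in Python as follows: a second-order model is fitted by least squares, and its stationary point is obtained by solving the gradient equation; the signs of the eigenvalues of the quadratic-term matrix then distinguish a minimum from a saddle point. The data, function names, and coefficients are illustrative assumptions (NumPy is assumed to be available).

import numpy as np

def fit_second_order(X, y):
    """Least-squares fit of y = b0 + sum(bi xi) + sum(bii xi^2) + sum(bij xi xj)."""
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]                       # linear terms
    cols += [X[:, i] ** 2 for i in range(k)]                  # squared terms
    cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i + 1, k)]
    A = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

def stationary_point(beta, k):
    """Solve grad = 0 for the fitted quadratic: x_s = -0.5 * B^-1 * b."""
    b = beta[1:k + 1]
    B = np.diag(beta[k + 1:2 * k + 1])      # squared-term coefficients
    idx = 2 * k + 1
    for i in range(k):
        for j in range(i + 1, k):
            B[i, j] = B[j, i] = beta[idx] / 2.0  # half the cross-product terms
            idx += 1
    xs = -0.5 * np.linalg.solve(B, b)
    return xs, np.linalg.eigvalsh(B)  # all eigenvalues > 0 -> minimum response

# Example with synthetic data from a known two-factor quadratic
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-1.0, 1.0, size=(30, 2))
    y = (5 + 2 * X[:, 0] - X[:, 1] + 3 * X[:, 0] ** 2 + 2 * X[:, 1] ** 2
         + X[:, 0] * X[:, 1] + rng.normal(0.0, 0.05, 30))
    beta = fit_second_order(X, y)
    xs, eig = stationary_point(beta, k=2)
    print("stationary point:", xs, "eigenvalues:", eig)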
Some of the fundamental RSM properties include orthogonality, rotatability, and uniformity; they are described hereafter:

• Orthogonality is related to the estimation of the individual effects of the factors independently and with minimal variance, i.e., minimal-variance estimates of the model coefficients. It has been shown that orthogonality facilitates the statistical tests for the significance of the model parameters [37].
• Rotatability is related to the distribution of the points about the center of the factor space and ensures constant prediction variance: the prediction variance is the same at all points lying at equal distance from the center point of the design.
• Uniformity is related to the control of the number of center points to achieve uniform precision. Uniform precision is obtained if the variance at the origin is equal to its value at unit distance from the origin.

RSM has been applied to several industry-related problems, such as no-wait two-machine flow shop scheduling [39], the truck scheduling problem [40, 41], and maintenance planning policy and preventive maintenance [42, 43].
4.6 General Conclusions

This book presents the main metaheuristics relevant to industrial and manufacturing engineering problems. As it attempts to cover a significant area of the relevant literature, the work is divided into three main chapters, each containing a category of computational methods for optimization, namely Evolutionary-Based methods, Swarm Intelligence-Based methods, and other important methods (SA, TS, EM, RSM), along with their variants. The fundamental features of both well-established and newly proposed, yet promising, methods are clearly described, and selected literature references pertinent to Industry 4.0 applications are provided in each subsection. The application of these methods to the time- and cost-efficient solution of important and hard industrial problems renders them appropriate for inclusion within the framework of Industry 4.0, e.g., towards the creation of more sophisticated decision-making systems. A statistical analysis of the popularity of the presented methods is conducted in the present work, revealing the frequent use of these methods in industry-related problems during the last decade and emphasizing the necessity of presenting the significant amount of gathered knowledge. Thus, the presentation of computational and statistical methods conducted in this book is considered helpful towards the selection of appropriate methods for the solution of hard industry-related problems, as well as towards the implementation of these methods.

References

1. Kirkpatrick S, Gelatt CD, Vecchi MP (1983) Optimization by simulated annealing. Science 220:671–680
2. Suppapitnarm A, Seffen KA, Parks GT, Clarkson PJ (2000) A simulated annealing algorithm
for multiobjective optimization. Eng Optim 33:59–85
3. Lin S-W, Ying K-C, Lee Z-J (2010) Part-machine cell formation in group technology using a
simulated annealing-based meta-heuristic. Int J Prod Res 48:3579–3591
4. Hasani K, Kravchenko S, Werner F (2014) A hybridization of harmony search and simulated
annealing to minimize mean flow time for the two-machine scheduling problem with a single
server. IJORN 3:9–26
5. Yu VF, Redi AANP, Hidayat YA, Wibowo OJ (2017) A simulated annealing heuristic for the
hybrid vehicle routing problem. Appl Soft Comput 53:119–132
6. Ku M-Y, Hu MH, Wang M-J (2011) Simulated annealing based parallel genetic algorithm for
facility layout problem. Int J Prod Res 49:1801–1812
7. Tubaileh A, Siam J (2017) Single and multi-row layout design for flexible manufacturing
systems. Int J Comput Integr Manuf 30:1316–1330
8. Şahin R, Türkbey O (2009) A new hybrid tabu-simulated annealing heuristic for the dynamic
facility layout problem. Int J Prod Res 47:6855–6873
9. Kheirkhah A, Bidgoli MM (2016) Dynamic facility layout problem under competitive envi-
ronment: a new formulation and some meta-heuristic solution methods. Prod Eng 10:615–632
10. Palubeckis G (2017) Single row facility layout using multi-start simulated annealing. Comput
Ind Eng 103:1–16
11. Li WD, McMahon CA (2007) A simulated annealing-based optimization approach for inte-
grated process planning and scheduling. Int J Comput Integr Manuf 20:80–95
12. Ramezanian R, Saidi-Mehrabad M, Fattahi P (2013) Integrated lot-sizing and scheduling with overlapping for multi-level capacitated production system. Int J Comput Integr Manuf 26:681–695
13. Assadi MT, Bagheri M (2016) Differential evolution and population-based simulated anneal-
ing for truck scheduling problem in multiple door cross-docking systems. Comput Ind Eng
96:149–161
14. Glover F (1989) Tabu search—part I. ORSA J Comput 1(3):190–206
15. Yan P, Che A, Yang N, Chu C (2012) A tabu search algorithm with solution space partition and
repairing procedure for cyclic robotic cell scheduling problem. Int J Prod Res 50:6403–6418
16. McKendall A, Li C (2017) A tabu search heuristic for a generalized quadratic assignment
problem. J Ind Prod Eng 34:221–231
17. Demir L, Tunali S, Løkketangen A (2011) A Tabu search approach for buffer allocation in
production lines with unreliable machines. Eng Optim 43:213–231
18. Wu T-H, Yeh J-Y, Syau Y-R (2004) A Tabu search approach to the generalized assignment
problem. J Chin Inst Ind Eng 21:301–311
19. Li K, Liu H, Wu Y, Xu X (2014) A two-dimensional bin-packing problem with conflict penalties.
Int J Prod Res 52:7223–7238
20. Fiechter C-N (1994) A parallel tabu search algorithm for large traveling salesman problems.
Discret Appl Math 51:243–267
21. Hung Y-F, Lin J-Y, Chen W-C (2009) Enhanced parallel Tabu search with the memory of local
optima. J Chin Inst Ind Eng 26:115–125
22. Dhingra AK, Bennage WA (1995) Discrete and continuous variable structural optimization
using Tabu search. Eng Optim 24:177–196
23. Lei D, Wu Z (2005) Tabu search based approach to multiobjective machine part cell formation.
Int J Prod Res 43:5241–5252
24. Glover F, Laguna M (1997) Tabu search. Springer Science + Business Media, New York
25. Gendreau M (2002) Recent advances in Tabu search. In: Ribeiro CC, Hansen P (eds) Essays
and surveys in metaheuristics. Springer US, Boston, pp 369–377
26. Birbil Şİ, Fang S-C (2003) An electromagnetism-like mechanism for global optimization. J
Glob Optim 25:263–282
27. Zhang C, Li X, Gao L, Wu Q (2013) An improved electromagnetism-like mechanism algorithm
for constrained optimization. Expert Syst Appl 40:5621–5634
28. Tseng C-T, Chen K-H (2013) An electromagnetism-like mechanism for the single machine
total stepwise tardiness problem with release dates. Eng Optim 45:1431–1448
29. Xing B, Gao W-J (2016) Innovative computational intelligence: a rough guide to 134 clever
algorithms. Springer International Publishing, Switzerland
30. García-Villoria A, Moreno RP (2010) Solving the response time variability problem by means
of the electromagnetism-like mechanism. Int J Prod Res 48:6701–6714
31. Kuo C-L, Chu C-H, Li Y, Li X, Gao L (2015) Electromagnetism-like algorithms for optimized
tool path planning in 5-axis flank machining. Comput Ind Eng 84:70–78
32. Guan X, Dai X, Qiu B, Li J (2012) A revised electromagnetism-like mechanism for layout
design of reconfigurable manufacturing system. Comput Ind Eng 63:98–108
33. Box GEP, Wilson KB (1951) On the experimental attainment of optimum conditions. J R Stat Soc Ser B 13:1–45
34. Peasura P (2015) Application of response surface methodology for modeling of postweld heat
treatment process in a pressure vessel steel ASTM A516 Grade 70. Sci World J 2015:318475
35. Hill WJ, Hunter WG (1966) A review of response surface methodology: a literature survey.
Technometrics 8:571–590
36. Markopoulos AP, Habrat W, Galanis NI, Karkalos NE (2016) Modelling and optimization of
machining with the use of statistical methods and soft computing. In: Davim JP (ed) Design
of experiments in production engineering. Springer International Publishing, Cham, pp 39–88
37. Khuri AI, Mukhopadhyay S (2010) Response surface methodology. Wiley Interdiscip Rev
Comput Stat 2:128–149
38. Onwubolu GC (2006) Selection of drilling operations parameters for optimal tool loading using
integrated response surface methodology: a tribes approach. Int J Prod Res 44:959–980
39. Rabiee M, Zandieh M, Jafarian A (2012) Scheduling of a no-wait two-machine flow shop with
sequence-dependent setup times and probable rework using robust meta-heuristics. Int J Prod
Res 50:7428–7446
40. Amini A, Tavakkoli-Moghaddam R (2016) A bi-objective truck scheduling problem in a cross-
docking center with probability of breakdown for trucks. Comput Ind Eng 96:180–191
41. Vahdani B, Zandieh M (2010) Scheduling trucks in cross-docking systems: robust meta-
heuristics. Comput Ind Eng 58:12–24
42. Rivera-Gómez H, Gharbi A, Kenné JP (2013) Joint production and major maintenance planning
policy of a manufacturing system with deteriorating quality. Int J Prod Econ 146:575–587
43. Berthaut F, Gharbi A, Kenné J-P, Boulet J-F (2010) Improved joint preventive maintenance
and hedging point policy. Int J Prod Econ 127:60–72
