
CO-4

Evolutionary Computing
Prepared By
Shaik Aslam
Outlines
 Overview of Evolutionary computing
 Genetic Programming and Genetic Algorithms
 Genetic algorithms - optimization
Evolutionary Computing-Outlines
 Introduction
 Overview of evolutionary computing
1. Evolutionary programming
2. Evolutionary strategies
3. Genetic programming
4. Genetic algorithms
 Genetic algorithms and optimization
1. Genotype
2. Fitness function
 The schema theorem: the fundamental theorem of genetic algorithms
 Genetic algorithm operators
1. Selection
2. Crossover
3. Mutation
4. Mode of operation of GAs
5. Steps for implementing GAs
6. Search process in GAs
 Integration of genetic algorithms with neural networks
1. Use of GAs for ANN input selection
2. Using GAs for NN learning
 Integration of genetic algorithms with fuzzy logic
 Known issues in GAs
1. Local minima and premature convergence
2. Mutation interference
3. Deception
4. Epistasis
 Population-based incremental learning
1. Basics of PBIL
2. Generating the population
3. PBIL algorithm
4. PBIL and learning rate
 Evolutionary strategies
 ES applications
1. Parameter estimation
2. Image processing and computer vision systems
3. Task scheduling by ES
4. Mobile manipulator path planning by ES
5. Car automation using ES
1. Overview of Evolutionary computing
 Generally speaking, evolution is the process by which life adapts to changing
environments.
 The offspring of an organism must inherit enough of its parents’
characteristics to remain viable while introducing some differences which can
cope with new problems presented by its surroundings. Naturally, some
succeed and others fail. Those surviving have the chance to pass
characteristics on to the next generation.
 A creature’s survival depends, to a large extent, on its fitness within its
environment, which is in turn determined by its genetic makeup.
The pursuit of artificial evolution using computers has led to the development of
an area commonly known as evolutionary computation or evolutionary
algorithms.
Loosely stated, the problem addressed by natural evolution is,
“what combinations of genetic traits result in an organism that can survive in its
environment long enough to reproduce?”
 Evolutionary computation or evolutionary computing is a broad term that
covers a family of adaptive search population-based techniques that can
be applied to the optimization of both discrete and continuous mappings.
Techniques:
 Standard single-point-based optimization techniques operate on a point-by-point basis: the algorithm starts from a single point (the current point), and a new solution is then created from it.
 The calculation of the new solution is based on a number of steps of the corresponding algorithm.
 Single-point-based optimization algorithms suffer from inherent drawbacks, notably a tendency to become trapped in local optima.
 Moreover, they are only efficient when the problem at hand is well defined and has a relatively simple objective function.
 Figure 8.1 outlines the major search techniques used today and positions
evolutionary algorithms among them.
 Evolutionary computing, being based on evolutionary algorithms, which are population-based optimization techniques, represents an excellent alternative to the single-point methods.
 Evolutionary algorithms efficiently tackle problems that are ill-modeled or have multi-objective functions.
• Evolutionary algorithms are inspired by the natural selection and evolution processes. They operate on a population of potential solutions to produce better solutions. In this process, natural principles, such as the survival of the fittest, are applied.
 The basic idea is to represent every individual potential solution as a string of symbols called a chromosome.
 Each element of a chromosome is called a gene, and the position of a gene is called its locus.
 The values that a gene may take are called its alleles.
 The initial population of potential solutions is created randomly; it then evolves through processes based on natural evolution, such as selection, recombination (crossover), and mutation. These processes are collectively called evolutionary operations.
CROSSOVER:
• According to their fitness values, the most successful chromosomes are selected for the crossover process to produce new offspring that might have better fitness values.
MUTATION: The mutation process is applied to add diversity to the potential solutions. It is worth mentioning here that the population of potential solutions is represented using an encoding mechanism that suits the problem being tackled.
The most popular encoding schemes are
 binary encoding,
 floating-point encoding, and
 Gray encoding.
 As such, an evolutionary algorithm is characterized by the following five
components:
(1) Encoding: a mechanism to represent the population of potential solutions.
(2) Initialization: a mechanism to create the initial population of potential solutions.
(3) Fitness function: an objective function or evaluation function that is used to
assign the fitness values to the chromosomes.
(4) Evolutionary operators, such as crossover and mutation.
(5) Working parameters: a set of values of the different parameters such as
population size and chromosome length.
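The five components above can be sketched in a minimal GA. The OneMax problem (maximize the number of ones in the chromosome), truncation selection, and all parameter values are illustrative assumptions chosen for this sketch, not part of any particular GA formulation.

```python
import random

CHROM_LEN = 10        # (5) working parameter: chromosome length
POP_SIZE = 20         # (5) working parameter: population size

def init_population():
    # (1) encoding: binary strings; (2) initialization: random
    return [[random.randint(0, 1) for _ in range(CHROM_LEN)]
            for _ in range(POP_SIZE)]

def fitness(chrom):
    # (3) fitness function: here, simply count the ones ("OneMax")
    return sum(chrom)

def crossover(a, b):
    # (4) evolutionary operator: one-point crossover
    p = random.randint(1, CHROM_LEN - 1)
    return a[:p] + b[p:]

def mutate(chrom, rate=0.05):
    # (4) evolutionary operator: bit-flip mutation
    return [1 - g if random.random() < rate else g for g in chrom]

def evolve(generations=50):
    pop = init_population()
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:POP_SIZE // 2]          # truncation selection
        pop = parents + [mutate(crossover(random.choice(parents),
                                          random.choice(parents)))
                         for _ in range(POP_SIZE - len(parents))]
    return max(pop, key=fitness)

best = evolve()
```

Because the fittest half of the population is carried over each generation, the best fitness never decreases; on OneMax the sketch typically converges to the all-ones string.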
Evolutionary programming
 Evolutionary programming (EP) techniques, developed by Fogel [10] in the late 1980s, are stochastic optimization methods that aim at generating machine intelligence with the ability to predict changes in the environment.
 The basic EP method involves three steps that are repeated until a
termination condition is attained.
The three steps are summarized as follows:
 An initial population of potential solutions is created randomly.
 Each solution is replicated into a new population and the new
chromosomes are mutated according to the mutation distribution.
 The new chromosomes are evaluated based on their fitness-function values. A selection process is then applied to retain the surviving solutions for the new generation.
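The three steps above can be sketched as follows. For simplicity this sketch uses real-valued chromosomes with Gaussian mutation rather than the finite state machines of classical EP; the sphere objective and all parameter values are illustrative assumptions.

```python
import random

def fitness(x):
    # sphere function: lower is better (minimization)
    return sum(v * v for v in x)

def ep_step(population, sigma=0.3):
    # step 2: each solution is replicated and the copy is mutated
    offspring = [[v + random.gauss(0, sigma) for v in x] for x in population]
    # step 3: evaluate parents and offspring; retain the best as survivors
    pool = population + offspring
    pool.sort(key=fitness)
    return pool[:len(population)]

# step 1: an initial population of potential solutions is created randomly
pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(10)]
for _ in range(100):
    pop = ep_step(pop)
best = min(pop, key=fitness)
```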
 In evolutionary programming the chromosomes are represented as finite state
machines (FSMs).
 The objective is to optimize these FSMs in order to provide a meaningful representation of behavior based on the interpretation of the symbols [5].
Typically, EP does not use crossover operators.
 The main process here is the mutation operation, which is used to randomly create new offspring from the parent population [3].
There exist five possible mutation operators:
 Modify an output symbol.
 Modify a state transition.
 Add a state.
 Delete a state.
 Change the initial state.
One of these mutation operators is selected based on a mutation distribution
(usually a probability distribution).
 Furthermore, more than one mutation operator can be applied to one parent.
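The mutation step might look as follows in code. The FSM encoding (a dict mapping state → {input symbol: (next state, output symbol)}), the choice to implement only two of the five operators, and the probabilities in the mutation distribution are all illustrative assumptions.

```python
import random
import copy

def modify_output(fsm):
    # operator 1: modify an output symbol of a random transition
    state = random.choice(list(fsm))
    sym = random.choice(list(fsm[state]))
    nxt, _ = fsm[state][sym]
    fsm[state][sym] = (nxt, random.choice("ab"))

def modify_transition(fsm):
    # operator 2: modify a state transition (pick a new next state)
    state = random.choice(list(fsm))
    sym = random.choice(list(fsm[state]))
    _, out = fsm[state][sym]
    fsm[state][sym] = (random.choice(list(fsm)), out)

# mutation distribution over the implemented operators (assumed weights)
OPS, WEIGHTS = [modify_output, modify_transition], [0.5, 0.5]

def mutate(parent, n_ops=1):
    # more than one operator can be applied to one parent (n_ops > 1)
    child = copy.deepcopy(parent)
    for op in random.choices(OPS, weights=WEIGHTS, k=n_ops):
        op(child)
    return child

parent = {"S0": {"0": ("S1", "a"), "1": ("S0", "b")},
          "S1": {"0": ("S0", "b"), "1": ("S1", "a")}}
child = mutate(parent, n_ops=2)
```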
Evolutionary strategies
 Evolutionary strategies (ES) were originally developed by Rechenberg [11] as a method to solve continuous optimization problems.
 They make use of a population of size one with a single genetic operation, i.e., mutation. Schwefel [12] utilized the real-valued form to represent the chromosomes and compared the performance of ES with more traditional optimization techniques.
 Moreover, he extended the population size to more than one and introduced other operators such as selection and recombination. For a survey of evolutionary strategies and their applications, one may consult [13].
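The original scheme can be sketched as a (1 + 1)-ES: a population of size one, with mutation as the single genetic operator. The sphere objective and the fixed step size are illustrative assumptions; practical ES variants also adapt sigma (e.g. via Rechenberg's 1/5 success rule) and, following Schwefel, add selection and recombination.

```python
import random

def f(x):
    # continuous objective to minimize (assumed sphere function)
    return sum(v * v for v in x)

def one_plus_one_es(x, sigma=0.5, iterations=500):
    for _ in range(iterations):
        child = [v + random.gauss(0, sigma) for v in x]   # mutation only
        if f(child) <= f(x):                              # keep the better of the two
            x = child
    return x

best = one_plus_one_es([random.uniform(-5, 5) for _ in range(3)])
```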
Genetic programming
 In genetic programming (GP), the potential solutions of the population are parse trees of computer programs, and fitness values are measured by running them [14].
 GP may be considered a subset of genetic algorithms, with the difference lying in the representation of the solutions.
GP is usually implemented using the following four steps:
 Initialize the population of potential solutions (parse trees of computer programs) built from functions and terminals. These functions and terminals are the alphabet of the computer programs.
 Execute each program and assign a fitness value to it according to how well the program does in solving the problem.
 Generate new offspring of the computer programs as follows:
(a) Copy the computer program that has the best fitness value.
(b) Create a new computer program using the mutation operator.
(c) Apply a crossover operator to create new computer programs.
 The program with the highest fitness value is considered to be the result
of the genetic programming.
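The four steps above can be sketched with a toy GP over an assumed function set {+, *} and terminal set {x, 1}, with fitness measured by running each program against a target. The target program (x*x + 1), the whole-tree mutation, and the simplified root-level crossover are illustrative assumptions, much cruder than the subtree operators of full GP.

```python
import random

FUNCS = ["+", "*"]
TERMS = ["x", 1]

def random_tree(depth=3):
    # a parse tree is either a terminal or (function, left subtree, right subtree)
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMS)
    return (random.choice(FUNCS), random_tree(depth - 1), random_tree(depth - 1))

def run(tree, x):
    # fitness is measured by executing the program
    if tree == "x":
        return x
    if not isinstance(tree, tuple):
        return tree
    op, a, b = tree
    va, vb = run(a, x), run(b, x)
    return va + vb if op == "+" else va * vb

def fitness(tree):
    # lower is better: summed error against the target program x*x + 1
    return sum(abs(run(tree, x) - (x * x + 1)) for x in range(-5, 6))

def mutate(tree):
    return random_tree(2)          # whole-tree replacement (simplest form)

def crossover(a, b):
    # toy crossover: combine the root of a with a subtree of b
    if isinstance(a, tuple) and isinstance(b, tuple):
        return (a[0], a[1], b[2])
    return a

pop = [random_tree() for _ in range(30)]
for _ in range(30):
    pop.sort(key=fitness)
    best, parents = pop[0], pop[:10]
    pop = ([best]                                                  # (a) copy the best
           + [mutate(random.choice(parents)) for _ in range(9)]    # (b) mutate
           + [crossover(random.choice(parents), random.choice(parents))
              for _ in range(20)])                                 # (c) crossover
result = min(pop, key=fitness)
```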
Genetic algorithms
 Genetic algorithms (GAs) represent another important class of evolutionary computing techniques. GAs are non-exhaustive search techniques used to determine, among other things, the global optima of a given function (or process) that may or may not be subject to constraints.
 The origin of GAs dates back to the early 1950s when a group of computer
scientists and biologists teamed up to simulate the behavior of a class of
biological processes.
 But it was only later in the early 1970s that Holland and his associates
introduced the methodology in a more formal and tractable way.
 Holland was able to show using the schema theory [16, 17] that genetic
algorithms have sound theoretical roots and they are able to accurately
solve a wide range of optimization problems. This is done through a
procedure inspired from the biological process of evolution and the survival
of the fittest concept.
 In recent years, GAs have seen wide interest from researchers in the fields of engineering mathematics, connectionist modeling, and approximate reasoning, to name a few.
 The search procedure of GAs is stochastic in nature and doesn’t usually
provide the exact location of the optima as some other gradient-based
optimization techniques do.
 However, GA-based techniques possess two attractive features putting
them at an advantage with respect to their derivative-based counterparts.
 In fact, given their discrete search nature, they could be easily applied to
continuous as well as to discontinuous functions.
 Moreover, while GAs may not provide for the mathematically exact
solution of a given optimization problem, they usually outperform
gradient-based techniques in getting close to the global optima and hence
avoid being trapped in local ones.
 Given their growing use in an ever-increasing broad set of applications,
genetic algorithms and evolutionary strategies have become major tools
of evolutionary computing.
2. Genetic algorithms and optimization
 A large number of techniques have been proposed to solve optimization problems, including path-following methods and integration-based methods.
 But while alleviating some of the issues, most of these techniques still rely on the gradient of the function.
 Techniques termed genetic algorithms were first formulated by Holland. Based on biological evolution and on the survival-of-the-fittest principle, these techniques have been proposed as effective tools for dealing with global optimization problems.
 They are derivative-free techniques, and as such can be applied as easily to smooth functions as to merely continuous or even discontinuous functions.
• This feature makes them well suited to solving global optimization problems.
 The solution is obtained on the basis of an iterative search procedure, which mimics to a certain extent the evolution process of biological entities.
 The end result of this process, which starts with a randomly selected population of individuals, is a population with a strong survivability index, something better known as the survival-of-the-fittest principle.
 This translates into finding the location of the point(s) at which the function is maximal. In a more formal way, let us suppose the goal is to maximize a function f of m variables, i.e., to find (x*1, x*2, . . . , x*m) such that:
f(x*1, x*2, . . . , x*m) ≥ f(x1, x2, . . . , xm) for all (x1, x2, . . . , xm)
• where (x*1, x*2, . . . , x*m) represents the vector solution belonging to the search space, taken here as ℜm = (ℜ × ℜ × . . . × ℜ).
• It is presumed that the function f takes positive values. If this is not the case, then one may add a bias term to the function to make it positive over all the search space.
• In the case of minimization, one may pose the optimization problem as the maximization of a function h, which is the negation of f over the search space.
• In other words, maximize the function h such that:
min f(x1, x2, . . . , xm) = max h(x1, x2, . . . , xm) = max(−f(x1, x2, . . . , xm))  (8.3)
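The identity in (8.3) can be checked numerically: minimizing f is equivalent to maximizing h = −f. The example function f(x, y) = (x − 1)² + (y + 2)² and the search grid are illustrative assumptions.

```python
# numerical check of min f = -max h = -max(-f) over a finite grid

def f(x, y):
    return (x - 1) ** 2 + (y + 2) ** 2

def h(x, y):
    return -f(x, y)              # h is the negation of f over the search space

grid = [(x / 10, y / 10) for x in range(-50, 51) for y in range(-50, 51)]
min_f = min(f(x, y) for x, y in grid)
max_h = max(h(x, y) for x, y in grid)
assert min_f == -max_h           # the minimizer of f is the maximizer of h
```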
• The basic idea of GAs is to first choose a random population in the range of optimization, with a fixed size n (n usually depends on the search range, the accuracy required, and the nature of the function itself).
• Using the so-called binary encoding procedure, each variable is represented as a string of q binary digits.
• This leads to a population of elements represented by a matrix of n rows and q·m columns.
• A set of “genetic” operators is then applied to this matrix to create a new population at which the function f attains increasingly larger values.
The most common operators used to achieve this task are
 Selection
 Crossover
 Mutation
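The encoding step above can be sketched as follows: each of the m variables is encoded on q bits, so the population of n individuals forms a matrix of n rows and q·m columns. The sizes and the search range are illustrative assumptions.

```python
import random

n, m, q = 8, 2, 10                 # population size, no. of variables, bits per variable
lo, hi = -1.0, 1.0                 # assumed search range for every variable

# the population: an n x (q*m) matrix of bits
population = [[random.randint(0, 1) for _ in range(q * m)] for _ in range(n)]

def decode(chrom):
    # map each q-bit slice of the chromosome back to a real value in [lo, hi]
    xs = []
    for i in range(m):
        bits = chrom[i * q:(i + 1) * q]
        value = int("".join(map(str, bits)), 2)
        xs.append(lo + (hi - lo) * value / (2 ** q - 1))
    return xs

x = decode(population[0])          # the candidate point this individual encodes
```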
Genotype
 When attempting to map natural evolution into the framework of
artificial evolution, we must first consider the “data” for the system.
 In nature, these data consist of living creatures.
 Each individual represents a potential solution to the problem of survival.
Similarly, in genetic algorithms, we consider a set of potential solutions,
which are referred to collectively as a population.
 Each single solution is called an individual.
 Each individual in nature has a form determined by its DNA. Its collection
of genetic traits is commonly known as a genotype.
 In genetic algorithms, the term genotype is used to describe the
encoding of a problem solution represented by an individual.
 Thus, each individual has a genotype, which encodes a solution.
• In the GA literature, an individual’s genotype is often referred to as its
chromosome.
• Genotypes in genetic algorithms are typically represented by strings of values, often bits or characters. Each element of the string represents a gene, which is a single unit of genetic information.
• In nature, genes control, directly or indirectly, various traits in the individual.
• For example, in humans there are genes that determine eye and hair colour, and genes determining other characteristics.
• It is important to note that in nature, several genes often collectively determine a physical trait, and that they are not necessarily independent.
• This is true in genetic algorithms as well, where a solution encoding may make use of several interacting genes. Each gene has one or more possible values, known as alleles. One can imagine that humans have a hair colour gene with an allele for each colour: brown, blond, black, red, etc. The reality in humans is far more complicated than this, but it serves as an illustration of the idea. The number of alleles for a specific gene is essentially fixed in nature, and in artificial evolution it is determined by the encoding of solutions. The simplest genes are binary, having only two alleles.

• This means they can be represented by a single bit. However, some genes may have several alleles and are represented using characters. In genetic algorithms, the number of genes in the genotypes for a particular problem is usually fixed. There have been some applications which employ variable-length genotypes, but these will not be addressed here. Since the genotype is intended to express a solution to a specific problem, the genes and alleles must be designed for that problem and must express the various components of a potential solution.
• The design of the genotype structure for a particular application is one of the two most difficult and important parts of using genetic algorithms. Unfortunately, there are no straightforward, general-purpose approaches. The task is highly dependent on the problem and, in many cases, on the particular kind of genetic algorithm one wishes to employ.
• In a number of cases, the choice of objective function is quite obvious. In the equation minimization problem described earlier, the genotype is a set of parameters for the function, and the objective function is simply the value of the equation being minimized, given the genotype.
• In this case, a lower result from the objective function represents a better solution to the problem. In many cases, however, a good objective function is more difficult to construct and heuristic approaches must be used.
Fitness function
• To use genetic algorithms, it is necessary to provide a means for evaluating the value or goodness of a particular solution. In nature, this is frequently thought of as the fitness of a creature (as in the popular concept of survival of the fittest), referring to its relative ability to survive in its environment.
• A fit creature must be able to find food and shelter, must be able to endure the local temperature and weather, and must detect and evade predators. If one creature is able to do this better than another, we say that it is “fitter.”
• This leads us to the concept of a fitness function, which measures the fitness of a particular solution.
• In genetic algorithms, fitness functions are also called objective functions. An objective function takes a genotype as its parameter and typically gives a real-valued result that represents the fitness or goodness of the solution. Generally speaking, this fitness is comparative rather than absolute, and serves to differentiate among many solutions rather than to discover the ideal solution. The ability to compare solutions is, in most cases, essential to the operation of a genetic algorithm.
• If one considers a manufacturing process, for instance, a genotype could consist of a set of values for various control parameters of the process, but the objective function is more difficult to quantify. One possibility is to measure the yield from the process (the more
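A comparative objective function of the kind described above can be sketched as follows: it takes a genotype and returns a real value used only to rank solutions against each other. The genotype layout (six bits encoding an integer) and the target value 42 are illustrative assumptions.

```python
def objective(genotype):
    value = int("".join(str(b) for b in genotype), 2)   # decode the bit string
    return -abs(value - 42)        # comparative fitness: closer to 42 is better

# ranking three candidate genotypes by fitness
better = max([[0, 1, 0, 1, 0, 0],   # decodes to 20
              [1, 0, 1, 0, 1, 0],   # decodes to 42
              [1, 1, 1, 1, 1, 1]],  # decodes to 63
             key=objective)
```

Note that the returned value says nothing absolute about solution quality; it only lets the GA decide which of two genotypes is fitter.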
3. The schema theorem: the fundamental theorem of genetic algorithms
4. Genetic algorithm operators
5. Integration of genetic algorithms with neural networks
6. Integration of genetic algorithms with fuzzy logic
7. Known issues in GAs
8. Population-based incremental learning
9. Evolutionary strategies
10. ES applications

The evolutionary strategies applications are as follows:
 Parameter estimation
 Image processing and computer vision systems
 Task scheduling by ES
 Mobile manipulator path planning by ES
 Car automation using ES
1. ES – as a stochastic search – can be applied to system parameter estimation.
 For system parameter estimation, adaptability and robustness are important factors: adaptability refers to adapting to the system dynamics, while robustness pertains to robustness against outliers.
 Hatanaka et al. applied ES to parameter estimation of the autoregressive (AR) model, using (μ + λ)-ES selection.
 They showed that ES outperforms the recursive least squares and recursive weighted least squares methods, and emphasized the adaptability and robustness of ES over these methods.
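The (μ + λ)-ES selection mentioned above can be sketched as follows: μ parents generate λ offspring by Gaussian mutation, and the best μ of the combined pool survive. The quadratic error function stands in for the AR-model prediction error and is an illustrative assumption, as are all parameter values.

```python
import random

MU, LAM = 5, 20

def error(theta):
    # placeholder for the model-fitting error (assumed optimum at [0.5, -0.3])
    return (theta[0] - 0.5) ** 2 + (theta[1] + 0.3) ** 2

def mu_plus_lambda_step(parents, sigma=0.1):
    offspring = [[p + random.gauss(0, sigma) for p in random.choice(parents)]
                 for _ in range(LAM)]
    pool = parents + offspring     # "+" selection: parents compete with offspring
    pool.sort(key=error)
    return pool[:MU]

parents = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(MU)]
for _ in range(100):
    parents = mu_plus_lambda_step(parents)
best = parents[0]
```

Because parents compete with their offspring in "+" selection, the best solution can never get worse between generations, which is one source of the robustness noted above.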
11. Additional Applications of Genetic Fuzzy, Genetic Neuro Systems