
Optimization Methods

Rafał Zdunek
Metaheuristics (2h.)
Introduction
Complexity theory,
Stochastic programming
Evolutionary strategies,
Genetic algorithms,
Simulated annealing,
Other metaheuristics
Bibliography
T. Bäck, Evolutionary Algorithms in Theory and Practice, New York, Oxford University Press, 1996,
T. Bäck, D. B. Fogel, Z. Michalewicz (Eds.), The Handbook of Evolutionary Computation, New York, Oxford University Press, 1996,
T. Weise, Global Optimization Algorithms: Theory and Application, e-book, 2009, http://www.it-weise.de/
A. R. Mehrabian, C. Lucas, A Novel Numerical Optimization Algorithm Inspired from Weed Colonization, Ecological Informatics, Vol. 1, No. 4, 2006, pp. 355-366,
J. Arabas, Wykłady z algorytmów ewolucyjnych (Lectures on Evolutionary Algorithms), WNT, 2004
Complexity theory
A problem that belongs to the NP class can be solved in polynomial time by
a nondeterministic Turing machine (a parallel machine that can take many
computational paths simultaneously, with the restriction that parallel
machines cannot communicate). NP stands for Nondeterministic Polynomial
and it means that it is possible to "guess" the solution (by some
nondeterministic algorithm) and then check it, both in polynomial time.

P is the set of decision problems that can be solved in polynomial time, i.e. problems that can be solved relatively quickly.

NP-hard is a class of problems that are, informally, "at least as hard as the
hardest problems in NP".
Complexity theory
A problem X is NP-complete if it has two properties:

It belongs to the NP class: any given solution to X can be verified quickly (in polynomial time).

It also belongs to the class of NP-hard problems: any NP problem can be converted into X by a transformation of the inputs in polynomial time.

If P ≠ NP, an NP-hard problem has no solution in polynomial time; if P = NP, there is a polynomial-time algorithm for any NP-hard problem.
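The "guess and check" structure of NP can be illustrated on SUBSET-SUM, a classic NP-complete problem: finding a suitable subset may take exponential time, but verifying a proposed one is a single polynomial-time pass. A minimal sketch (the instance and the function name are illustrative, not from the slides):

```python
def verify_subset_sum(numbers, target, candidate):
    """Polynomial-time verifier: check that the candidate indices are
    distinct, in range, and select numbers summing to the target."""
    return (len(set(candidate)) == len(candidate)
            and all(0 <= i < len(numbers) for i in candidate)
            and sum(numbers[i] for i in candidate) == target)

numbers = [3, 34, 4, 12, 5, 2]
ok = verify_subset_sum(numbers, 9, [2, 4])    # 4 + 5 = 9 -> True
bad = verify_subset_sum(numbers, 9, [0, 1])   # 3 + 34 != 9 -> False
```

The verifier runs in time linear in the candidate length; the hardness lies entirely in producing a good candidate.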
Stochastic programming
Stochastic optimization (SO) incorporates a probabilistic (random) framework, either in the problem data (the objective function, constraints, etc.), in the algorithm itself (through random parameter values, random choices, etc.), or in both. Metaheuristics are concerned with the latter, where the solution space is searched with pseudo-random strategies.
Metaheuristic
In computer science, a metaheuristic designates a computational method that optimizes a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. Metaheuristics make few or no assumptions about the problem being optimized and can search very large spaces of candidate solutions. However, metaheuristics do not guarantee that an optimal solution is ever found. Other terms with a similar meaning to metaheuristic are: derivative-free, direct search, black-box, or just heuristic optimizer. Many metaheuristics implement some form of stochastic optimization.

Metaheuristics are suitable for NP-hard problems.


Metaheuristics
Examples of metaheuristics:
Evolutionary algorithms (ES, GA, EP, GP),
Tabu search (TS),
Simulated annealing (SA),
Particle swarm optimization (PSO),
Ant colony optimization (ACO),
Invasive weed optimization (IWO).
Optimization Algorithms
[Figure: example of a multi-modal cost function]
[Figure: an unstable optimization problem]
Simulated Annealing
ΔE = E_new − E_old — the energy variation. (This may be modeled by the difference in the objective function.)

The temperature T should be scheduled, e.g. according to the exponential rule, slowly decreasing to zero.

A slower decrease in T gives a higher probability of attaining a global minimum, but at a higher computational cost.
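The acceptance rule and exponential cooling above can be sketched as follows (a minimal illustration; the test function, parameter values, and names are my own, not from the slides):

```python
import math
import random

def simulated_annealing(f, x0, t0=1.0, alpha=0.97, sigma=0.5, steps=3000, seed=0):
    """Minimize f over the reals with an exponential cooling schedule
    T_k = t0 * alpha**k and the Metropolis acceptance rule:
    accept a worse state with probability exp(-dE / T)."""
    rng = random.Random(seed)
    x, e = x0, f(x0)
    best_x, best_e = x, e
    t = t0
    for _ in range(steps):
        y = x + rng.gauss(0.0, sigma)      # random neighbour of x
        e_new = f(y)
        de = e_new - e                     # dE = E_new - E_old
        if de < 0 or rng.random() < math.exp(-de / t):
            x, e = y, e_new
            if e < best_e:
                best_x, best_e = x, e
        t = max(t * alpha, 1e-12)          # slowly decrease T toward zero
    return best_x, best_e

# Multi-modal test function: global minimum f(0) = -1 at x = 0.
x, e = simulated_annealing(lambda x: x * x - math.cos(4.0 * x), x0=4.0)
```

Accepting uphill moves with probability exp(−ΔE/T) lets the search escape local minima while T is high; as T → 0 the algorithm degenerates into greedy descent, which is why slower cooling improves the chance of reaching the global minimum at a higher computational cost.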
Definitions
Problem space: The problem space X (phenome) of an optimization problem is the set of all elements x that may be feasible solutions.
Solution space: The union of all solutions of an optimization problem is referred to as the solution space S, where X* ⊆ S ⊆ X, and X* is the global optimization set.
Search space: The search space G of an optimization problem is the set of all elements g that can be processed by the search operations.
Genotype: Elements g ∈ G of the search space G of a given optimization problem are called genotypes.
Gene: Distinguishable units of information in a genotype that encode phenotypical properties are called genes.
Allele: An allele is a value of a specific gene.
Locus: A locus is the position where a specific gene can be found in a genotype.
Optimization spaces and sets
Search space → problem space
∀g ∈ G, ∃x ∈ X : gpm(g) = x
The genotype-phenotype mapping should be at least surjective; sometimes it is also injective.
Encoding
Phenotype-genotype mapping: ∀x ∈ X, ∃g ∈ G : gpm⁻¹(x) = g

Binary encoding (Gray encoding):
Chromosome A: 011001110101101011110101
Chromosome B: 110011100111001111100000

Permutation encoding:
Chromosome A: 3 5 1 2 8 4 7 9 6
Chromosome B: 8 1 6 7 2 3 5 4 9

Value encoding:
Chromosome A: 3.1234 7.3455 0.6443 6.4543 7.0056
Chromosome B: NNGJEIFJDVSIERJFDLGSVYFEGT
Chromosome C: (N), (N), (W), (E), (S)
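Gray encoding is attractive for binary chromosomes because adjacent integers differ in exactly one bit, so a single-bit mutation moves to a neighbouring value. A small sketch of the standard conversions (the function names are mine):

```python
def binary_to_gray(b: int) -> int:
    """Standard binary -> Gray conversion."""
    return b ^ (b >> 1)

def gray_to_binary(g: int) -> int:
    """Invert the Gray code by folding in the shifted prefixes."""
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b

codes = [binary_to_gray(n) for n in range(8)]   # 0, 1, 3, 2, 6, 7, 5, 4
```

Note that consecutive entries of `codes` differ in exactly one bit, unlike plain binary where e.g. 3 → 4 flips three bits.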
Evolutionary algorithms
Fundamental evolutionary algorithms:
Evolutionary strategies (ES),
Genetic algorithms (GA),
Evolutionary programming (EP),
Genetic programming (GP).

Related evolutionary algorithms:
Particle swarm optimization (PSO),
Ant colony optimization (ACO),
Invasive weed optimization (IWO).
Evolutionary Algorithm
Evolutionary strategy (1 + 1)

procedure Evolutionary strategy (1 + 1)
begin
  t := 0
  initialization Xt
  fitness evaluation Φ(Xt)
  while (not stop condition) do
  begin
    Yt := mutation(Xt)
    fitness evaluation Φ(Yt)
    if ( Φ(Yt) > Φ(Xt) ) then
      Xt+1 := Yt
    else
      Xt+1 := Xt
    t := t + 1
  end
end

Mutation: Yt = Xt + σ·N(0, 1), where σ is the range of mutation.

1/5 success rule: if over k consecutive generations the number of successful mutations (those with better fitness) is greater than 1/5 of the total number of mutations, increase the range of mutation, i.e. σ := ci·σ (ci > 1); if it is smaller, decrease the range of mutation, i.e. σ := cd·σ (cd < 1); if equal, do nothing.
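The (1 + 1) strategy with the 1/5 success rule can be sketched as below (an illustration; the adaptation constant c, the test function, and all names are my own choices, not prescribed by the slides):

```python
import random

def es_one_plus_one(f, x0, sigma=1.0, k=10, c=0.85, iters=800, seed=1):
    """(1 + 1)-ES maximizing fitness f with the 1/5 success rule:
    every k generations, expand the mutation range if more than 1/5
    of the last k mutations succeeded, shrink it if fewer."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    successes = 0
    for t in range(1, iters + 1):
        y = x + sigma * rng.gauss(0.0, 1.0)   # Yt = Xt + sigma*N(0, 1)
        fy = f(y)
        if fy > fx:                           # successful mutation
            x, fx = y, fy
            successes += 1
        if t % k == 0:                        # adapt the mutation range
            rate = successes / k
            if rate > 1 / 5:
                sigma /= c                    # increase the range (c < 1)
            elif rate < 1 / 5:
                sigma *= c                    # decrease the range
            successes = 0
    return x, fx

# Maximize a concave fitness with its optimum at x = 2.
x, fx = es_one_plus_one(lambda x: -(x - 2.0) ** 2, x0=10.0)
```

The rule keeps the success rate near 1/5: a higher rate means the steps are too timid, a lower rate means they overshoot.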
Evolutionary strategy (μ + λ)
procedure Evolutionary strategy (μ + λ)
begin
  t := 0
  initialization Pt
  fitness evaluation Φ(Pt)
  while (not stop condition) do
  begin
    Tt := reproduction of individuals from Pt
    Ot := crossover and mutation (Tt)
    fitness evaluation Φ(Ot)
    Pt+1 := μ best fitted individuals from Pt ∪ Ot
    t := t + 1
  end
end
Evolutionary strategy (μ, λ)
procedure Evolutionary strategy (μ, λ)
begin
  t := 0
  initialization Pt
  fitness evaluation Φ(Pt)
  while (not stop condition) do
  begin
    Tt := reproduction of individuals from Pt
    Ot := crossover and mutation (Tt)
    fitness evaluation Φ(Ot)
    Pt+1 := μ best fitted individuals from Ot (λ ≥ μ)
    t := t + 1
  end
end
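The only difference between the (μ + λ) and (μ, λ) strategies is the pool from which the μ survivors are drawn, which a single flag can capture (an illustrative sketch; the function and parameter names are mine):

```python
import random

def evolve(pop, f, lam, sigma, plus, rng):
    """One generation of a (mu + lambda) or (mu, lambda) strategy:
    create lam mutants, then keep the mu best from parents + offspring
    (plus=True) or from the offspring only (plus=False)."""
    mu = len(pop)
    offspring = [p + sigma * rng.gauss(0.0, 1.0)
                 for p in rng.choices(pop, k=lam)]
    pool = pop + offspring if plus else offspring
    return sorted(pool, key=f, reverse=True)[:mu]

rng = random.Random(0)
pop = [rng.uniform(-5.0, 5.0) for _ in range(4)]
f = lambda x: -x * x                  # fitness maximum at x = 0
for _ in range(200):
    pop = evolve(pop, f, lam=20, sigma=0.3, plus=True, rng=rng)
```

With plus=True the best individual can never be lost (elitism); with plus=False the parents are always discarded, which requires λ ≥ μ but helps the population abandon outdated optima.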
Genetic algorithm
procedure Genetic Algorithm
begin
t:= 0
initialization: P0 - random population of n chromosomes,
while (not stop condition) do
begin
fitness evaluation Φ(Pt)
Tt:= selection(Pt)
Ot:= crossover and mutation (Tt)
Pt+1:= replace(Ot,Pt)
t:= t + 1
end
end
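The loop above can be fleshed out into a runnable toy GA on the OneMax problem (an illustrative sketch: tournament selection, one-point crossover, and bit-flip mutation with pm = 1/n are my concrete choices of operators, not mandated by the slides):

```python
import random

def genetic_algorithm(n=20, pop_size=30, pm=None, generations=80, seed=0):
    """Toy GA maximizing OneMax (the number of ones in a bit string):
    2-way tournament selection, one-point crossover, bit-flip mutation
    with pm = 1/n (one mutated gene per chromosome on average)."""
    rng = random.Random(seed)
    pm = 1.0 / n if pm is None else pm
    fitness = sum                                  # OneMax fitness
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        def select():
            a, b = rng.sample(pop, 2)              # tournament of size 2
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = select(), select()
            cut = rng.randrange(1, n)              # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [g ^ (rng.random() < pm) for g in child]  # mutation
            nxt.append(child)
        pop = nxt                                  # replace(Ot, Pt)
    return max(pop, key=fitness)

best = genetic_algorithm()
```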
Selection
Parents can be selected for reproduction using the following basic selection criteria:
Roulette Wheel Selection,
Rank Selection,
Stochastic Universal Sampling,
Tournament Selection.
Rank functions
Linear:
F(Poz) = 2 − η + 2(η − 1)(Poz − 1)/(N − 1)
Poz = 1 (worst fitted individual), Poz = N (best fitted individual)
η — selective pressure

Non-linear:
F(Poz) = N·X^(Poz−1) / Σ_{i=1}^{N} X^(i−1)
where X is the root of the polynomial
(η − N)·X^(N−1) + η·X^(N−2) + ⋯ + η·X + η = 0
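The linear ranking formula can be checked numerically; with η = 2 and N = 11 the rank fitness runs from 0 for the worst individual to 2 for the best, and sums to N (a small sketch; the function name is mine):

```python
def linear_rank_fitness(n, eta=2.0):
    """Linear ranking: F(Poz) = 2 - eta + 2*(eta - 1)*(Poz - 1)/(N - 1),
    where Poz = 1 is the worst individual, Poz = N the best, and
    eta in [1, 2] is the selective pressure."""
    return [2 - eta + 2 * (eta - 1) * (poz - 1) / (n - 1)
            for poz in range(1, n + 1)]

f = linear_rank_fitness(11, eta=2.0)
# With eta = 2 the ranks run from 0.0 (worst) to 2.0 (best), sum = N.
```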
Roulette Wheel Selection
(example)
For η = 2 and N = 11:
[Table: Poz, F(Poz), selection probability F(Poz)/N, and cumulative probability]
Roulette Wheel Selection
[Figure: wheel spun with 11 uniformly distributed random numbers]

Problem: only the best fitted individuals have a big chance for reproduction.
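Roulette wheel selection draws each parent with probability proportional to its fitness, which reproduces the problem noted above: highly fit individuals dominate the wheel. A minimal sketch (names and fitness values are illustrative):

```python
import random

def roulette_select(fitness, rng):
    """Spin the wheel once: individual i is chosen with probability
    fitness[i] / sum(fitness) (fitness values must be non-negative)."""
    total = sum(fitness)
    r = rng.random() * total
    acc = 0.0
    for i, f in enumerate(fitness):
        acc += f
        if r <= acc:
            return i
    return len(fitness) - 1          # guard against rounding at the end

rng = random.Random(0)
fitness = [0.5, 1.0, 3.5]            # the fittest holds 70% of the wheel
picks = [roulette_select(fitness, rng) for _ in range(10000)]
```

Individual 2 owns 70% of the wheel, so its empirical selection frequency is close to 0.7.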
Rank Selection
Stochastic Universal Sampling
Vector of selection probabilities:
F(Poz(i))/C, i = 1, …, N
where C = Σ_{i=1}^{N} F(Poz(i)) (the length of the circle).

Assuming Noff = 11, the angle between consecutive points is (1/Noff)·360° ≈ 32.7°.
The initial angle for the first point is given by l ∈ [0, 1/Noff]; e.g. l = 0.03 gives about 11°.
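Stochastic universal sampling replaces N independent spins with Noff equally spaced pointers and a single random offset, so each individual is selected close to its expected count. A sketch under the definitions above (the function name is mine):

```python
import random

def stochastic_universal_sampling(fitness, n_off, rng):
    """Place n_off equally spaced pointers (spacing C / n_off, where
    C = sum of fitness) with one random offset in [0, C / n_off), and
    return the index each pointer lands on."""
    c = sum(fitness)
    step = c / n_off
    start = rng.random() * step          # single random offset
    selected, acc, i = [], fitness[0], 0
    for j in range(n_off):
        p = start + j * step             # j-th pointer on the circle
        while p > acc:
            i += 1
            acc += fitness[i]
        selected.append(i)
    return selected

rng = random.Random(0)
sel = stochastic_universal_sampling([1.0, 2.0, 3.0, 4.0], n_off=10, rng=rng)
```

With C = 10 and 10 pointers, the expected counts 1, 2, 3, 4 are hit exactly, since one spin determines all selections jointly.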
Crossover
[Figures: two parent chromosomes recombined into two offspring]
One-point crossover, two-point crossover, and uniform crossover (driven by a binary mask).
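The three recombination schemes take only a few lines each; together the two offspring always carry exactly the parents' genes (an illustrative sketch; the names are mine):

```python
import random

def one_point(p1, p2, rng):
    """Swap the tails of the parents after one random cut point."""
    cut = rng.randrange(1, len(p1))
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

def two_point(p1, p2, rng):
    """Swap the segment between two random cut points."""
    a, b = sorted(rng.sample(range(1, len(p1)), 2))
    return (p1[:a] + p2[a:b] + p1[b:],
            p2[:a] + p1[a:b] + p2[b:])

def uniform(p1, p2, mask):
    """Mask bit 1 takes the gene from p1, bit 0 from p2 (and vice versa)."""
    o1 = [a if m else b for a, b, m in zip(p1, p2, mask)]
    o2 = [b if m else a for a, b, m in zip(p1, p2, mask)]
    return o1, o2

rng = random.Random(0)
a, b = [1] * 8, [0] * 8
c1, c2 = one_point(a, b, rng)
```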
Mutation

Real-value mutation:
z_i' = z_i ± σ·r_i
σ = 2^(−u·k), u ∈ U[0, 1], k — iterative step
For k → ∞ (close to a global optimum), σ → 0 (weak mutation).

Binary mutation:
Mutation probability pm ∈ [0, 1/2], usually pm = 0.05.
To obtain one mutation per chromosome on average: pm = 1/n,
where n is the number of genes in the chromosome.
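Both mutation variants can be sketched directly (an illustration; the function names and the choice of r_i ~ U[0, 1] are mine):

```python
import random

def binary_mutation(chrom, pm, rng):
    """Flip each bit independently with probability pm; pm = 1/n gives
    one flipped gene per chromosome on average."""
    return [g ^ (rng.random() < pm) for g in chrom]

def real_mutation(z, k, rng):
    """Perturb each coordinate by +/- sigma * r_i with sigma = 2**(-u*k),
    u ~ U[0, 1]: as the iteration k grows, sigma -> 0 (weak mutation)."""
    sigma = 2.0 ** (-rng.random() * k)
    return [zi + rng.choice((-1, 1)) * sigma * rng.random() for zi in z]

rng = random.Random(0)
n = 10
child = binary_mutation([0] * n, pm=1.0 / n, rng=rng)
```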
Offspring replacement
Invasive Weed Optimization
Invasive Weed Optimization (IWO)
(A. R. Mehrabian, C. Lucas, 2006)

Initialization of a population: a finite number of seeds are dispersed over the search area.

Reproduction: every seed grows into a flowering plant and produces seeds depending on its fitness.

Spatial dispersal: the produced seeds are randomly dispersed around their parents with a decreasing variance and grow into new plants.

Competitive exclusion: this process continues until the maximum number of plants is reached; then only the plants with the best fitness can survive and produce seeds, while the others are eliminated. The process continues until the maximum number of iterations is reached and, hopefully, the plant with the best fitness is the closest to the optimal solution.
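The four IWO phases map onto a compact loop (a sketch after Mehrabian & Lucas, 2006; the parameter values, the cubic decay exponent n_mod, and all names are my choices for illustration):

```python
import random

def iwo(f, dim=2, pop_max=15, seeds=(1, 5), sigma=(0.01, 2.0),
        iters=100, n_mod=3, seed=0):
    """Invasive Weed Optimization sketch (minimization): each plant
    produces between seeds[0] and seeds[1] seeds linearly in its
    relative fitness; seeds are scattered with a standard deviation
    decaying from sigma[1] to sigma[0]; once the colony exceeds
    pop_max, only the fittest plants survive."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(5)]
    for it in range(iters):
        # Spatial dispersal variance shrinks over the iterations.
        sd = sigma[0] + (sigma[1] - sigma[0]) * ((iters - it) / iters) ** n_mod
        fits = [f(p) for p in pop]
        lo, hi = min(fits), max(fits)
        offspring = []
        for p, fp in zip(pop, fits):
            rel = (hi - fp) / (hi - lo) if hi > lo else 1.0
            n_seeds = round(seeds[0] + rel * (seeds[1] - seeds[0]))
            for _ in range(n_seeds):
                offspring.append([x + rng.gauss(0.0, sd) for x in p])
        # Competitive exclusion: keep only the pop_max fittest plants.
        pop = sorted(pop + offspring, key=f)[:pop_max]
    return pop[0], f(pop[0])

# Sphere function: global minimum 0 at the origin.
best, val = iwo(lambda p: sum(x * x for x in p))
```

The decaying dispersal variance plays the same role as the cooling schedule in simulated annealing: wide exploration early, fine local search late.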
