Evolutionary Algorithms
Optimization Tasks
Adaptive Walk in a “fitness landscape”

[Figure: an adaptive walk over a fitness surface spanned by the variables X1, X2, …, Xn; the walk ends in a local optimum.]
Types of fitness landscapes

[Figure: fitness landscape with a start point, peaks A, B, C, and a local optimum reached via a neighborhood N around the current point x.]

Neighborhood N:
• fixed N
• variable (“adaptive”) N
Local Neighborhood Strategies
Tabu search
Elements of artificial adaptive systems (Koza)
• Genetic algorithms
• Evolution strategies
• Particle Swarm Optimization (PSO)
• Ant Colony Optimization (ACO)
Which search strategy should be applied?
$$f(x) = \sum_{i=1}^{n} x_i^2$$

global minimum at the origin
$$f(x, y) = 0.5 + \frac{\sin^2\!\sqrt{x^2 + y^2} - 0.5}{\left(1 + 0.001\,(x^2 + y^2)\right)^2}$$

$$f(x) = \sum_{i=1}^{n-1} \left[ 100\left(x_{i+1} - x_i^2\right)^2 + \left(1 - x_i\right)^2 \right]$$
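The three benchmark functions above (sphere, Schaffer F6, and Rosenbrock, in slide order) can be sketched directly in Python; the function names are descriptive labels, not names given on the slides:

```python
import math

def sphere(x):
    """Sphere function: sum of squares; global minimum 0 at the origin."""
    return sum(xi ** 2 for xi in x)

def schaffer_f6(x, y):
    """Schaffer F6: global minimum 0 at (0, 0), surrounded by ripples."""
    r2 = x ** 2 + y ** 2
    return 0.5 + (math.sin(math.sqrt(r2)) ** 2 - 0.5) / (1.0 + 0.001 * r2) ** 2

def rosenbrock(x):
    """Rosenbrock function: global minimum 0 at (1, ..., 1), inside a narrow curved valley."""
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1.0 - x[i]) ** 2
               for i in range(len(x) - 1))
```

The sphere function is unimodal and easy; Schaffer F6 is highly multimodal; Rosenbrock is unimodal but hard for local search because the valley floor is curved and nearly flat.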
Operators of Evolutionary Optimization
• Selection
Genetic Algorithms: Chromosome Representation
[Figure: two example chromosomes, 100011100001010 and 110010110001011; the individual bits are the genes, the whole bit string is the chromosome.]
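A minimal sketch of this representation: the chromosome is a bit string, and each gene (a slice of the string) decodes to a real parameter. The gene width of 5 bits and the interval [-5, 5] are assumptions for illustration, not values from the slides:

```python
# A chromosome is a string of bits; each gene is a slice of the string.
chromosome = "100011100001010"

def decode(bits, lo, hi):
    """Map a bit string onto a real value in the (assumed) interval [lo, hi]."""
    value = int(bits, 2)
    return lo + value * (hi - lo) / (2 ** len(bits) - 1)

# Split the 15-bit chromosome into three 5-bit genes and decode each.
genes = [chromosome[i:i + 5] for i in range(0, len(chromosome), 5)]
params = [decode(g, -5.0, 5.0) for g in genes]
```

Crossover and mutation then operate on the bit string, while fitness is evaluated on the decoded parameters.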
Principle of Evolutionary Searching

[Figure: search points converging toward the optimum; the best value improves from generation to generation.]

Initialize parent (ξP, σP, QP);
For each generation:
    Generate λ variants (ξV, σV, QV) around the parent:
        σV = σP · G;
        ξV = ξP + σV · G;      (G: Gaussian random number)
        calculate fitness QV;
    Select best variant by QV;
    Set (ξP, σP, QP) ≡ (ξV, σV, QV)best;
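The generation loop above can be sketched as a (1, λ) evolution strategy in Python. A log-normal factor is used to mutate the step size σ so it stays positive; that choice, the value τ = 0.3, and λ = 10 are assumptions, not values fixed on the slide:

```python
import math
import random

def evolution_step(fitness, xi_p, sigma_p, lam=10, tau=0.3):
    """One generation of a (1, lambda) evolution strategy (minimization).

    Generates lam variants around the parent, mutating the step size
    sigma as well, and returns the best variant (xi, sigma, Q).
    """
    best = None
    for _ in range(lam):
        # sigma_v = sigma_p * G, with a log-normal G keeping sigma positive
        sigma_v = sigma_p * math.exp(tau * random.gauss(0.0, 1.0))
        # xi_v = xi_p + sigma_v * G, per coordinate
        xi_v = [x + sigma_v * random.gauss(0.0, 1.0) for x in xi_p]
        q_v = fitness(xi_v)
        if best is None or q_v < best[2]:
            best = (xi_v, sigma_v, q_v)
    return best

# Usage: minimize the sphere function from a fixed start.
random.seed(1)
xi, sigma = [3.0, -2.0], 1.0
for _ in range(200):
    xi, sigma, q = evolution_step(lambda v: sum(c * c for c in v), xi, sigma)
```

Because the parent is always replaced by the best variant, this is a comma strategy: a good parent can be lost, but the self-adapted step size lets the search both explore early and converge late.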
Disadvantages:
[Figure: objective space (f1, f2); each solution is labeled with the number of times it is dominated. Solutions with count 0 are non-dominated (Pareto solutions); the rest are dominated solutions.]
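The domination count shown in the figure can be computed directly. This sketch assumes both objectives are minimized; the example points are illustrative, not taken from the slide:

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (minimization)."""
    return (all(ai <= bi for ai, bi in zip(a, b))
            and any(ai < bi for ai, bi in zip(a, b)))

def domination_counts(points):
    """For each point (f1, f2, ...), count how many other points dominate it."""
    return [sum(dominates(q, p) for q in points if q is not p) for p in points]

points = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0)]
counts = domination_counts(points)   # (3, 3) is dominated by (2, 2)
pareto = [p for p, c in zip(points, counts) if c == 0]
```

Solutions with a count of zero form the current Pareto front; ranking by domination count is the basis of many multi-objective selection schemes.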
Particle Swarm Optimization (PSO)
[Figure: fitness over x on the interval [-4, 4] for the function]

$$y = f(x) = x^2 - 10 \cdot \cos(2 \cdot \pi \cdot x)$$

Rastrigin function
PSO Algorithm
begin
    initialize particle positions and velocities
    evaluate fitness
    initialize memories
    while (termination condition ≠ true)
        compute new velocities
        recompute positions
        evaluate fitness
        refresh memories
    end
end
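The loop above can be sketched as a minimal global-best PSO, here minimizing the 1-D Rastrigin-style function from the previous slide. The coefficient values (w = 0.7298, n1 = n2 = 1.49618, Clerc's commonly used constants), the swarm size, and the epoch count are assumptions, not values fixed on the slides:

```python
import math
import random

def f(x):
    """Objective from the Rastrigin slide: f(x) = x^2 - 10*cos(2*pi*x)."""
    return x * x - 10.0 * math.cos(2.0 * math.pi * x)

def pso(n_particles=20, epochs=100, w=0.7298, n1=1.49618, n2=1.49618,
        lo=-5.0, hi=5.0):
    """Minimal global-best PSO following the loop above (minimization)."""
    # initialize particle positions and velocities
    x = [random.uniform(lo, hi) for _ in range(n_particles)]
    v = [0.0] * n_particles
    # initialize memories: individual best positions and the social (global) best
    p_ind = x[:]
    q_ind = [f(xi) for xi in x]
    best = min(range(n_particles), key=lambda k: q_ind[k])
    p_soc, q_soc = p_ind[best], q_ind[best]
    # while termination condition not met (here: a fixed number of epochs)
    for _ in range(epochs):
        for k in range(n_particles):
            r1, r2 = random.random(), random.random()
            # compute new velocity, then new position
            v[k] = (w * v[k] + n1 * r1 * (p_ind[k] - x[k])
                    + n2 * r2 * (p_soc - x[k]))
            x[k] += v[k]
            # evaluate fitness and refresh memories
            q = f(x[k])
            if q < q_ind[k]:
                p_ind[k], q_ind[k] = x[k], q
                if q < q_soc:
                    p_soc, q_soc = x[k], q
    return p_soc, q_soc

random.seed(3)
best_x, best_q = pso()
```

Each particle is pulled toward its own best position and the swarm's best position; the global minimum of this objective lies at x = 0 with f(0) = -10.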
PSO: Update Rule 1 (Standard Type)

$$v_i(t+1) = v_i(t) + 2 \cdot r_1 \cdot (p_i - x_i(t)) + 2 \cdot r_2 \cdot (p_b - x_i(t))$$

The second term pulls the particle toward its individual memory, the third toward the social memory. With an inertia weight:

$$v_i(t+1) = w \cdot v_i(t) + n_1 \cdot r_1 \cdot (p_i - x_i(t)) + n_2 \cdot r_2 \cdot (p_b - x_i(t))$$

v: velocity vector of the particle
x: position vector of the particle
w: inertia weight, decreased linearly: $w = w_{start} - \frac{w_{start} - w_{end}}{MaxEpochs} \cdot Epochs$
i: dimension
t: epoch number
n1 and n2: individual and social constants
r1 and r2: random numbers between 0 and 1
pi and pb: the best positions from the individual and social memory
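The update rule with inertia weight and the linear weight schedule can be sketched as follows; the defaults n1 = n2 = 2 come from update rule 1, while w_start = 0.9 and w_end = 0.4 are typical values assumed here, not taken from the slide:

```python
import random

def inertia_weight(epoch, max_epochs, w_start=0.9, w_end=0.4):
    """Linearly decreasing inertia weight:
    w = w_start - (w_start - w_end) / MaxEpochs * Epochs."""
    return w_start - (w_start - w_end) * epoch / max_epochs

def update_velocity(v, x, p_i, p_b, w, n1=2.0, n2=2.0):
    """Apply the update rule per dimension i:
    v_i(t+1) = w*v_i(t) + n1*r1*(p_i - x_i(t)) + n2*r2*(p_b - x_i(t))."""
    new_v = []
    for vi, xi, pi, pb in zip(v, x, p_i, p_b):
        r1, r2 = random.random(), random.random()  # fresh r1, r2 per dimension
        new_v.append(w * vi + n1 * r1 * (pi - xi) + n2 * r2 * (pb - xi))
    return new_v
```

A large w early in the run favors exploration; as w shrinks, the memory terms dominate and the swarm contracts around the best positions found.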
PSO: Individual and Social Memory

[Figure: particle trajectories toward the global minimum for n1 = 1, n2 = 2 versus n1 = 2, n2 = 1.]
$$v_i(t+1) = K \cdot \left( v_i(t) + n_1 \cdot r_1 \cdot (p_i - x_i(t)) + n_2 \cdot r_2 \cdot (p_b - x_i(t)) \right)$$
[Figure: two runs converging toward the global minimum of the Rosenbrock function.]