
GENETIC ALGORITHM

GA Quick Overview

 Developed: USA in the 1970’s


 Early names: J. Holland, K. DeJong, D. Goldberg
 Typically applied to:
 discrete optimization
 Attributed features:
 not too fast
 good heuristic for combinatorial problems
 Special Features:
 Traditionally emphasizes combining information from good
parents (crossover)
 many variants, e.g., reproduction models, operators
Genetic Algorithm (GA)
 Genetic Algorithm (GA) was invented by John
Holland in the 1960s and was developed by
Holland and his students and colleagues at the
University of Michigan in the 1960s and the
1970s.
 In contrast with evolution strategies and
evolutionary programming, Holland's original
goal was not to design algorithms to solve
specific problems, but rather to formally study
the phenomenon of adaptation as it occurs in
nature and to develop ways in which the
mechanisms of natural adaptation might be
imported into computer systems.
Genetic Algorithm (GA)
 The first mention of the words “genetic algorithm” and
the first published application of a genetic algorithm both
came in Bagley’s pioneering 1967 dissertation. At the
time there was a great deal of interest in game-playing
programs; in that spirit, Bagley devised a controllable
test bed of game-playing tasks modeled after the game
of hexapawn. Bagley constructed GAs to search for
parameter sets in game evaluation functions and compared them
to correlation algorithms, learning procedures modeled
after the weight-changing algorithms of that period.

GA in Structural design
 Goldberg began applying GAs to the engineering field;
he studied “Mass-spring-dashpot system identification
with a simple GA” in 1981 and continued with other
research such as “Steady-state and transient optimization
of a gas pipeline using GA” and “Structural optimization
(plane truss) via GA” (Goldberg, 1989).

Engineering application of GA

Year | Author | Description
1982 | Etter, Hick, and Cho | Recursive adaptive filter design using a simple GA
1985 | Davis | Outline of a job shop scheduling procedure via GA
1985 | Goldberg and Kuo | On-off, steady-state optimization of an oil pump pipeline system with GA
1986 | Glover | Keyboard configuration design using a GA
1986 | Minga | Aircraft landing strut weight optimization with GA
1997 | Coello C.C., Christiansen A.D. and Santos F.H. | A simple genetic algorithm for the design of reinforced concrete beams
1998 | Camp C., Pezeshk S. and Guozhong C. | Optimized design of two-dimensional structures using a genetic algorithm
Engineering application of GA

Year | Author | Description
1999 | Chen S.Y. and Rajan S.D. | Using genetic algorithm as an automatic structural design tool
1999 | Camp C., Li J. and Pezeshk S. | Composite frame design using a genetic algorithm
1999 | Al Tabtabai H., Alex A.P. and James R. | Slab formwork design using genetic algorithm
2003 | Charles V.C., Shahram P. and Hakan H. | Flexural design of reinforced concrete frames using a genetic algorithm
2003 | Lu W. | Optimum design of cold-formed steel purlins using genetic algorithms
2005 | Sousa L.C. and Catro C.F. | Optimal design of V and U bending processes using genetic algorithms
2005 | Yeniay O. | Penalty function methods for constrained optimization with genetic algorithms
Engineering application of GA

Year | Author | Description
2005 | Alexandre A.D., Sebastião A.L., Pedro C.G., Vellasco and Martha L.F. | Genetic algorithm optimization of semi-rigid steel structures
2007 | Sesok D. and Belevicius R. | Use of genetic algorithm in topology of truss structures
2008 | Tsoulus G.I. | Solving constrained optimization problems using a novel genetic algorithm
2008 | Sesok D. and Belevicius R. | Global optimization of trusses with a modified genetic algorithm
2009 | Corriveau G., Guilbault R. and Tahan A. | Genetic algorithm and finite element coupling for mechanical optimization
Ch 2 METHOD OF OPTIMIZATION

Optimal design of a civil engineering structure
usually means finding a set of discrete design variables
(e.g. member cross-section properties) that minimizes an
objective function (e.g. the weight or cost of the
structure) subject to one or more constraints, for
a predetermined structural layout and loading.
Many different optimization techniques are available
for carrying out this calculation.

Optimization algorithms

Optimization algorithms can be divided into two classes:

 Deterministic methods: these methods use function and/or
gradient information to construct mathematical approximations of
the functions, and then find an optimum point employing hill-
climbing methods. They normally work with continuous
design variables and need a small number of function evaluations,
but they may not find a global optimum point.
 Nondeterministic methods: the most common methods in this
class are random search, genetic algorithms (GA), evolutionary
programming (EP), evolution strategies (ES), simulated annealing
(SA), and particle swarm optimization (PSO). These methods
work entirely using only function values. They can work
with discrete variables and (with infinite time) find a global
optimum in the presence of several local optima. However, the
number of function evaluations can be high even when a global
optimum is not found.
Types of programming problems

 Linear programming problems (LP): in these problems


the variables are continuous and both the objective
function and constraints are linear;
 Nonlinear programming problems (NLP): in these
problems the variables are continuous and the objective
function or the constraints can be either linear or
nonlinear;
 Mixed-integer programming problems (MIP): in these
problems the objective function and constraints are
functions of integer and continuous variables;
 Integer programming problems (IP): in these problems
there are no continuous variables involved;
 Binary programming problems (BP): in these problems
the variables have either a value of 0 or 1.
Nonlinear Programming Problem
Constraints
 The majority of engineering problems involve constrained
minimization. The task is to minimize a function subject
to constraints. A very common instance of a constrained
optimization problem arises in finding the minimum-weight
design of a structure subject to constraints on
stress and deflection. Constrained problems may be
expressed in the following general nonlinear
programming form:

Minimize f(x)
subject to g_i(x) ≤ 0,  i = 1, ..., m
and h_i(x) = 0,  i = 1, ..., l

Direct search method
 Cyclic Coordinate search
 Simulated Annealing (SA)
 Genetic Algorithm (GA)

Integer and Discrete Programming

 Branch and Bound Method


 Farkas’ Method for Discrete Nonlinear
Monotone Structural Problems
 Genetic Algorithm for Discrete Programming

Example Problem

Center and end section of the pressure vessel (Coello, 2000):

F(X) = 0.6224 x_1 x_3 x_4 + 1.778 x_2 x_3^2 + 3.1661 x_1^2 x_4 + 19.84 x_1^2 x_3

g_1(X) = −x_1 + 0.0193 x_3 ≤ 0
g_2(X) = −x_2 + 0.00954 x_3 ≤ 0
g_3(X) = −π x_3^2 x_4 − (4/3) π x_3^3 + 1296000 ≤ 0
g_4(X) = x_4 − 240 ≤ 0          (3.6)
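A minimal Python sketch of how this benchmark could be coded for a GA; the static-penalty treatment and the penalty weight are illustrative assumptions, not part of the original formulation:

import math

def objective(x):
    """Pressure vessel cost function F(X) from Coello (2000)."""
    x1, x2, x3, x4 = x
    return (0.6224 * x1 * x3 * x4 + 1.778 * x2 * x3**2
            + 3.1661 * x1**2 * x4 + 19.84 * x1**2 * x3)

def constraints(x):
    """Return the four constraint values g_i(X); feasible when all are <= 0."""
    x1, x2, x3, x4 = x
    g1 = -x1 + 0.0193 * x3
    g2 = -x2 + 0.00954 * x3
    g3 = -math.pi * x3**2 * x4 - (4.0 / 3.0) * math.pi * x3**3 + 1296000.0
    g4 = x4 - 240.0
    return [g1, g2, g3, g4]

def penalized_fitness(x, penalty=1e6):   # penalty weight is an assumed value
    """Objective plus a static penalty on constraint violations."""
    violation = sum(max(0.0, g) for g in constraints(x))
    return objective(x) + penalty * violation

# Example: the GA solution from the comparison table below (feasible, so no penalty)
print(penalized_fitness([0.76, 0.3717647, 38.58823, 226.2745]))   # about 5970.5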
Comparison of all methods

Best solution found:

Design Variable | Cyclic | SA | GA | Branch and Bound | Farkas'
x1 | 0.727590981 | 0.856745591 | 0.76 | 1.125 | 1.95625
x2 | 0.359699953 | 0.423955333 | 0.3717647 | 0.625 | 1.3375
x3 | 37.69901301 | 44.29203313 | 38.58823 | 48.97 | 38.125
x4 | 240 | 151.2285683 | 226.2745 | 106.72 | 240
g1 | -3.03143E-08 | -0.001909351 | -0.015247161 | -0.179879 | -1.2204375
g2 | -5.13686E-05 | -0.001409337 | -0.003632986 | -0.1578262 | -0.9737875
g3 | -0.083980743 | -10.73604938 | -3197.88573 | 97.90317616 | -32047.958
g4 | -8.56099E-09 | -88.77143173 | -13.7255 | -133.28 | 0
F(X) | 5804.506651 | 6047.076189 | 5970.541429 | 7981.569061 | 20400.1411
GENETIC ALGORITHM

Holland’s GA is a method for moving from one


population of “chromosomes” to a new
population by using a kind of “natural selection”
together with the genetics-inspired operators of
crossover (recombination), mutation and
inversion (Mitchell, 1996).

Genetic Algorithm
 Based on the Darwinian paradigm: reproduction, competition, survival and selection

 Intrinsically a robust search and optimization mechanism


The structure of the GA

Start → generate initial population → evaluate objective function → if the optimization criteria are met, stop and report the best individuals (result); otherwise: selection → crossover (recombination) → mutation → generate new population → evaluate again.
Start
Generation = 1
  Initial Population
  FindFitness
  Statistics
  Scaling
For Generation = 2 to MaxGeneration
  For NewIndividual = 1 to PopulationSize step 2
    Selection (mate1)
    Selection (mate2)
    If Rnd <= CrossoverProbability: Crossover; else: NoCrossover
    New Individual
    Replace
    FindUnknowns
    FindFitness
    Statistics
Next Generation
End
GA reproduction cycle

1. Select parents for the mating pool


(size of mating pool = population size)
2. Shuffle the mating pool
3. For each consecutive pair apply crossover with
probability pc , otherwise copy parents
4. For each offspring apply mutation (bit-flip with
probability pm independently for each bit)
5. Replace the whole population with the resulting
offspring
Algorithm
BEGIN
Generate initial population;
Compute fitness of each individual;
REPEAT /* New generation */
FOR population_size / 2 DO
Select two parents from old
generation;
/* biased to the fitter ones */
Recombine parents for two offspring;
Compute fitness of offspring;
Insert offspring in new generation
END FOR
UNTIL population has converged
END
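A minimal, runnable Python sketch of this loop; the one-max fitness function, chromosome length and parameter values are placeholder assumptions, and a fixed number of generations stands in for the convergence test:

import random

CHROM_LEN, POP_SIZE, PC, PM, MAX_GEN = 16, 20, 0.8, 1.0 / 16, 100   # assumed parameters

def fitness(chrom):                       # placeholder fitness: count of 1-bits
    return sum(chrom)

def select(pop):                          # roulette-wheel selection, biased to fitter individuals
    total = sum(fitness(c) for c in pop)
    r, acc = random.uniform(0, total), 0.0
    for c in pop:
        acc += fitness(c)
        if acc >= r:
            return c
    return pop[-1]

def recombine(p1, p2):                    # one-point crossover applied with probability PC
    if random.random() < PC:
        cut = random.randint(1, CHROM_LEN - 1)
        return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
    return p1[:], p2[:]

def mutate(chrom):                        # bit-flip mutation with probability PM per gene
    return [1 - g if random.random() < PM else g for g in chrom]

pop = [[random.randint(0, 1) for _ in range(CHROM_LEN)] for _ in range(POP_SIZE)]
for gen in range(MAX_GEN):                # REPEAT ... (fixed generation count instead of a convergence test)
    new_pop = []
    for _ in range(POP_SIZE // 2):        # FOR population_size / 2 DO
        c1, c2 = recombine(select(pop), select(pop))
        new_pop += [mutate(c1), mutate(c2)]
    pop = new_pop
print(max(pop, key=fitness))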
Example of convergence
Basic principles 1
 Coding or Representation
 String with all parameters
 Fitness function
 Parent selection
 Reproduction
 Crossover
 Mutation

 Convergence
 When to stop
Basic principles 2
 An individual is characterized by a set of parameters:
Genes
 The genes are joined into a string: Chromosome

 The chromosome forms the genotype


 The genotype contains all information to construct an
organism: the phenotype

 Reproduction is a “dumb” process on the


chromosome of the genotype
 Fitness is measured in the real world (‘struggle for
life’) of the phenotype
Coding
 Parameters of the solution (genes) are concatenated
to form a string (chromosome)
 All kinds of alphabets can be used for a chromosome
(numbers, characters), but generally a binary alphabet
is used
 Order of genes on chromosome can be important
 Generally many different codings for the parameters
of a solution are possible
 Good coding is probably the most important factor for
the performance of a GA
 In many cases many possible chromosomes do not
code for feasible solutions
Genetic Algorithm
 Encoding
 Fitness Evaluation

 Reproduction

 Survivor Selection
Genetic algorithm operator
A genetic algorithm is initialised with a population
of guesses. These are usually random and will
be spread throughout the search space.
Typically a genetic algorithm uses three operators,
selection, crossover and mutation, to
direct the population towards convergence at the
global optimum (Coley, 1999).

Encoding and decoding
A GA starts with a population represented by
chromosomes. The encoding of a chromosome
is problem dependent. Since any possible data
structure can be used as an encoding for the
creation of a search space to represent a given
problem, there are several ways to encode the
design variables, such as binary encoding,
Gray codes, direct encoding and tree-based
encoding.

Representation

[Figure: encoding (representation) maps the phenotype space to the genotype space = {0,1}^L, and decoding (inverse representation) maps genotypes such as 10010001, 10010010, 010001001, 011101001 back to phenotypes.]

Genome or chromosome: 1 1 0 1 0 0 1 0 1 1 0 1 1 1 0 0

The GA cycle then applies evaluation, selection, crossover and mutation to such chromosomes.
GA operators: 1-point
crossover
 Choose a random point on the two parents
 Split parents at this crossover point
 Create children by exchanging tails
 Pc typically in range (0.6, 0.9)
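A short sketch of the operator described above, assuming list-encoded binary chromosomes (the function name and default pc are illustrative):

import random

def one_point_crossover(parent1, parent2, pc=0.8):   # pc assumed in the typical (0.6, 0.9) range
    """Split both parents at one random point and exchange tails."""
    if random.random() >= pc:
        return parent1[:], parent2[:]                 # no crossover: copy the parents
    point = random.randint(1, len(parent1) - 1)       # random crossover point
    child1 = parent1[:point] + parent2[point:]
    child2 = parent2[:point] + parent1[point:]
    return child1, child2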
Crossover

Parent 1
1 1 0 1 0 0 1 0 1 1 0 1 1 1 0 0

Child
1 1 0 1 1 0 1 1 0 1 0 1 1 1 0 0

Parent 2
0 0 1 1 1 0 1 1 0 1 0 0 1 0 1 1

Alternative Crossover
Operators
 Performance with 1 Point Crossover depends on the
order that variables occur in the representation
 more likely to keep together genes that are near
each other
 Can never keep together genes from opposite
ends of string
 This is known as Positional Bias
 Can be exploited if we know about the structure
of our problem, but this is not usually the case
n-point crossover
 Choose n random crossover points
 Split along those points
 Glue parts, alternating between parents
 Generalisation of 1 point (still some positional bias)
Uniform crossover
 Assign 'heads' to one parent, 'tails' to the other
 Flip a coin for each gene of the first child
 Make an inverse copy of the gene for the second child
 Inheritance is independent of position
Reproduction Operators comparison

 Single point crossover (one cross point)
• Two point crossover (multi point crossover)

One-point crossover - Nature
[Figure: two chromosomes (labelled 1 and 2) split at a single point and exchange segments.]
Two-point crossover
 Randomly two positions in the chromosomes are
chosen
 Avoids that genes at the head and genes at the tail
of a chromosome are always split when
recombined
Uniform crossover
 A random mask is generated
 The mask determines which bits are copied from one parent and
which from the other parent
 Bit density in mask determines how much material is taken from the
other parent (takeover parameter)

Mask: 0110011000 (Randomly generated)


Parents: 1010001110 0011010010

Offspring: 0011001010 1010010110
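A sketch of this mask-based operator; passing the mask above reproduces the offspring shown (by default a random mask is generated):

import random

def uniform_crossover(parent1, parent2, mask=None):
    """Copy a gene from parent1 where the mask bit is 1 and from parent2 where it is 0;
    the second offspring uses the inverse of the mask."""
    if mask is None:
        mask = [random.randint(0, 1) for _ in parent1]
    child1 = [p1 if m else p2 for m, p1, p2 in zip(mask, parent1, parent2)]
    child2 = [p2 if m else p1 for m, p1, p2 in zip(mask, parent1, parent2)]
    return child1, child2

# Reproduces the example above (mask 0110011000)
p1 = [1, 0, 1, 0, 0, 0, 1, 1, 1, 0]
p2 = [0, 0, 1, 1, 0, 1, 0, 0, 1, 0]
print(uniform_crossover(p1, p2, mask=[0, 1, 1, 0, 0, 1, 1, 0, 0, 0]))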


Reproduction Operators

 Uniform crossover

• Is uniform crossover better than single-point crossover?
– Trade-off between
• Exploration: introduction of new combination of features
• Exploitation: keep the good features in the existing solution
Problems with crossover
 Depending on coding, simple crossovers can
have high chance to produce illegal offspring
 E.g. in TSP with simple binary or path coding, most
offspring will be illegal because not all cities will be in
the offspring and some cities will be there more than
once
 Uniform crossover can often be modified to avoid
this problem
 E.g. in TSP with simple path coding:
 Where mask is 1, copy cities from one parent
 Where mask is 0, choose the remaining cities in the order
of the other parent
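A sketch of this repair idea for path-coded tours (the helper below is illustrative, not from the slides):

import random

def masked_path_crossover(parent1, parent2, mask=None):
    """Where the mask is 1, keep the city from parent1 in place;
    where it is 0, fill in the remaining cities in the order they appear in parent2."""
    n = len(parent1)
    if mask is None:
        mask = [random.randint(0, 1) for _ in range(n)]
    child = [parent1[i] if mask[i] else None for i in range(n)]
    kept = set(c for c in child if c is not None)
    remaining = (c for c in parent2 if c not in kept)
    return [c if c is not None else next(remaining) for c in child]

# Example with 6 cities: always produces a legal tour
print(masked_path_crossover([1, 2, 3, 4, 5, 6], [3, 6, 2, 1, 5, 4], mask=[1, 0, 1, 0, 0, 1]))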
GA operators: mutation

 Alter each gene independently with a probability pm


 pm is called the mutation rate
 Typically between 1/pop_size and 1/ chromosome_length
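A sketch of bit-flip mutation with the default rate set to 1/chromosome_length, as suggested above:

import random

def bit_flip_mutation(chromosome, pm=None):
    """Flip each bit independently with probability pm (default 1/length)."""
    if pm is None:
        pm = 1.0 / len(chromosome)
    return [1 - gene if random.random() < pm else gene for gene in chromosome]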
Mutation

Before
1 1 0 1 1 0 1 1 0 1 0 1 1 1 0 0

After
1 1 0 1 1 0 1 0 0 1 0 1 1 1 0 0

Crossover OR mutation?

 Decade long debate: which one is better / necessary /


main-background

 Answer (at least, rather wide agreement):


 it depends on the problem, but
 in general, it is good to have both
 both have another role
 mutation-only-EA is possible, xover-only-EA would not work
Crossover OR mutation? (cont’d)

Exploration: Discovering promising areas in the search


space, i.e. gaining information on the problem
Exploitation: Optimising within a promising area, i.e. using
information
There is co-operation AND competition between them
 Crossover is explorative, it makes a big jump to an area
somewhere “in between” two (parent) areas
 Mutation is exploitative, it creates random small
diversions, thereby staying near (in the area of ) the parent
Crossover OR mutation?
(cont’d)

 Only crossover can combine information from two


parents
 Only mutation can introduce new information (alleles)
 Crossover does not change the allele frequencies of
the population (thought experiment: 50% 0’s on first
bit in the population, ?% after performing n
crossovers)
 To hit the optimum you often need a ‘lucky’ mutation
Selection Rule

 GA is predicated on the concept of
“survival of the fittest”
 The selection method determines how
individuals are chosen for mating.
 The ranking selection scheme outperforms
the commonly used roulette wheel selection
scheme

GA operators: Selection
 Main idea: better individuals get higher chance
 Chances proportional to fitness
 Implementation: roulette wheel technique
 Assign to each individual a part of the
roulette wheel
 Spin the wheel n times to select n
individuals

Example: fitness(A) = 3, fitness(B) = 1, fitness(C) = 2, so the wheel shares are A: 3/6 = 50%, B: 1/6 = 17%, C: 2/6 = 33%.
Parent/Survivor Selection
 Strategies
 Survivor selection
 Always keep the best one
 Elitist: deletion of the K worst
 Probability selection : inverse to their fitness
 Etc.
Parent/Survivor Selection

 Too strong a fitness selection bias can lead to a sub-optimal solution
 Too little fitness selection bias results in an unfocused and meandering search
Parent selection
Chance to be selected as parent is proportional to fitness:
 Roulette wheel
To avoid problems with the fitness function:
 Tournament
Not a very important parameter.

Parent/Survivor Selection
 Strategies
 Parent selection
 Uniformly random selection
 Probability selection: proportional to their fitness
 Tournament selection (multiple objectives)
 Build a small comparison set
 Randomly select a pair; the one with the higher rank beats the lower one
 A non-dominated one beats a dominated one
 Niche count: the number of points in the population within a certain distance; the higher the niche count, the lower the rank.
 Etc.
Real Value GA

 Real-valued GA means using the values of the
input variables directly in the calculation.
 The real-valued representation achieves faster
performance by eliminating further decoding
stages

Crossover Real Value
Crossover for a real-valued GA uses arithmetic crossover. Arithmetic
crossover performs a linear recombination of two genitors as defined by
two chromosomes:

C_1 = (k_1^1, k_2^1, ..., k_n^1) and C_2 = (k_1^2, k_2^2, ..., k_n^2)

The linear recombination between C_1 and C_2 produces the new
generation, C_1* and C_2*,

C_1* = (k_1^1*, k_2^1*, ..., k_n^1*) and C_2* = (k_1^2*, k_2^2*, ..., k_n^2*)

that are defined by

k_m^1* = α k_m^1 + (1 − α) k_m^2   and   k_m^2* = α k_m^2 + (1 − α) k_m^1

where m varies from 1 up to n and α is a random value in the 0 to 1 interval (Del Savio et al., 2005).
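A sketch of arithmetic crossover as defined above; here a single random weight α is drawn per crossover (drawing one weight per gene is a common variant):

import random

def arithmetic_crossover(c1, c2):
    """Linear recombination of two real-valued chromosomes with a random weight alpha in [0, 1]."""
    alpha = random.random()
    child1 = [alpha * k1 + (1 - alpha) * k2 for k1, k2 in zip(c1, c2)]
    child2 = [alpha * k2 + (1 - alpha) * k1 for k1, k2 in zip(c1, c2)]
    return child1, child2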
Mutation Real value
The self-adjusted mutation searches for a close
solution based on random changes in both
directions:

C_1 = (k_1^i, k_2^i, ..., k_n^i) and C_2 = (k_1^i*, k_2^i*, ..., k_n^i*)

where k_m^i* is defined by:

k_m^i* = k_m^i + α (max − k_m^i)   if the chosen bit is equal to zero
k_m^i* = k_m^i − α (k_m^i − min)   if the chosen bit is equal to one

where max and min are user-defined boundaries for the domain of the k_m^i
variable, m varies from 1 to n and α is a random number from 0 to 1.
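A sketch of this self-adjusted mutation, assuming per-gene lower and upper bounds are supplied as lists; here every gene is perturbed, though in practice a per-gene mutation probability may be applied:

import random

def self_adjusted_mutation(chromosome, lower, upper):
    """Move each gene towards its upper or lower bound, chosen at random,
    by a random fraction of the remaining distance (keeps values inside the bounds)."""
    mutated = []
    for k, lo, hi in zip(chromosome, lower, upper):
        alpha = random.random()
        if random.randint(0, 1) == 0:          # 'chosen bit' equal to zero: move towards max
            mutated.append(k + alpha * (hi - k))
        else:                                  # 'chosen bit' equal to one: move towards min
            mutated.append(k - alpha * (k - lo))
    return mutated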

Generation gap

The generation gap G is a parameter which
controls the percentage of the population to be
replaced during each generation. The value of G
for the traditional GA is 1.0, which means that the
entire population is replaced during each
generation.

Constraints
Most optimization problems have constraints. The solution or set of
solutions which are obtained as the final result of an evolutionary
search must necessarily be feasible, that is, satisfy all constraints.
Taxonomy of constraints can be considered and composed of
(a) Number

(b) Metric

(c) Criticality

(d) Difficulty.

 As originally conceived, genetic algorithm methods produce new
solutions by recombining and perturbing existing solutions. In certain
instances, an encoding, a reproduction (e.g. crossover) operator and
a perturbation (e.g. mutation) operator can be devised such that
feasible existing solutions (parents) will always produce feasible new
solutions (children).

An example after Goldberg ‘89 (1)

 Simple problem: max x2 over {0,1,…,31}


 GA approach:
 Representation: binary code, e.g. 01101 → 13
 Population size: 4

 1-point xover, bitwise mutation

 Roulette wheel selection

 Random initialisation

 We show one generational cycle done by hand


x2 example: selection
X2 example: crossover
X2 example: mutation
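A compact Python version of this example (max x² over {0,…,31}, population of 4, roulette-wheel selection, 1-point crossover, bit-wise mutation); the seed and parameter values are assumptions for illustration:

import random
random.seed(1)

L, POP, PC, PM, GENS = 5, 4, 0.9, 0.02, 10   # 5-bit strings, population of 4

def decode(bits):
    return int("".join(map(str, bits)), 2)   # e.g. 01101 -> 13

def fitness(bits):
    return decode(bits) ** 2                 # f(x) = x^2

def roulette(pop):
    total = sum(fitness(c) for c in pop)
    r, acc = random.uniform(0, total), 0.0
    for c in pop:
        acc += fitness(c)
        if acc >= r:
            return c
    return pop[-1]

pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(POP)]
for _ in range(GENS):
    nxt = []
    while len(nxt) < POP:
        p1, p2 = roulette(pop), roulette(pop)
        if random.random() < PC:                              # 1-point crossover
            cut = random.randint(1, L - 1)
            p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
        nxt += [[1 - b if random.random() < PM else b for b in c] for c in (p1, p2)]
    pop = nxt[:POP]

best = max(pop, key=fitness)
print(decode(best), fitness(best))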
Example of coding for TSP
Travelling Salesman Problem
 Binary
 Cities are binary coded; chromosome is string of bits
 Most chromosomes code for illegal tour
 Several chromosomes code for the same tour
 Path
 Cities are numbered; chromosome is string of integers
 Most chromosomes code for illegal tour
 Several chromosomes code for the same tour
 Ordinal
 Cities are numbered, but code is complex
 All possible chromosomes are legal and only one chromosome
for each tour
 Several others
Roulette wheel
 Sum the fitness of all chromosomes, call it T
 Generate a random number N between 1 and T
 Return chromosome whose fitness added to the running total
is equal to or larger than N
 Chance to be selected is exactly proportional to fitness

Chromosome : 1 2 3 4 5 6
Fitness: 8 2 17 7 4 11
Running total: 8 10 27 34 38 49
N (1 ≤ N ≤ 49): 23
Selected: 3
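A sketch of this running-total procedure; the optional n argument lets the slide's example be reproduced exactly:

import random

def roulette_select(fitnesses, n=None):
    """Return the 1-based index of the chromosome whose running fitness total
    first reaches a number N drawn between 1 and the total T."""
    T = sum(fitnesses)
    N = n if n is not None else random.uniform(1, T)
    running = 0
    for i, f in enumerate(fitnesses, start=1):
        running += f
        if running >= N:
            return i
    return len(fitnesses)

# The example above: fitnesses 8, 2, 17, 7, 4, 11 and N = 23 select chromosome 3
print(roulette_select([8, 2, 17, 7, 4, 11], n=23))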
Tournament
 Binary tournament
 Two individuals are randomly chosen; the fitter of the two is
selected as a parent
 Probabilistic binary tournament
 Two individuals are randomly chosen; with a chance p,
0.5<p<1, the fitter of the two is selected as a parent
 Larger tournaments
 n individuals are randomly chosen; the fittest one is
selected as a parent

 By changing n and/or p, the GA can be adjusted


dynamically
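A sketch covering the binary, probabilistic and larger tournaments described above (with p < 1 a less fit contender can still win):

import random

def tournament_select(population, fitness, size=2, p=1.0):
    """Pick `size` individuals at random; with probability p return the fittest,
    otherwise return a random one of the remaining contenders."""
    contenders = random.sample(population, size)
    contenders.sort(key=fitness, reverse=True)       # best first
    if random.random() < p or len(contenders) == 1:
        return contenders[0]
    return random.choice(contenders[1:])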
Problems with fitness range
 Premature convergence
 Fitness too large
 Relatively superfit individuals dominate population
 Population converges to a local maximum
 Too much exploitation; too little exploration
 Slow finishing
 Fitness too small
 No selection pressure
 After many generations, average fitness has
converged, but no global maximum is found; not
sufficient difference between best and average fitness
 Too little exploitation; too much exploration
Solutions for these problems
 Use tournament selection
 Implicit fitness remapping
 Adjust fitness function for roulette wheel
 Explicit fitness remapping
 Fitness scaling
 Fitness windowing
 Fitness ranking

Will be explained below


Fitness Function
Purpose
 Parent selection

 Measure for convergence

 For Steady state: Selection of individuals to


die

 Should reflect the value of the chromosome in


some “real” way
 Next to coding the most critical part of a GA
Fitness scaling
 Fitness values are scaled by subtraction and
division so that worst value is close to 0 and
the best value is close to a certain value,
typically 2
 Chance for the most fit individual is 2 times the
average
 Chance for the least fit individual is close to 0

 Problems when the original maximum is very


extreme (super-fit) or when the original
minimum is very extreme (super-unfit)
 Can be solved by defining a minimum and/or a
maximum value for the fitness
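One simple reading of this scaling as a min-max transformation to [0, 2]; Goldberg's linear scaling, which also preserves the population average, is another common variant:

def scale_fitness(raw, target_max=2.0):
    """Linearly scale raw fitness values so the worst maps to ~0 and the best to ~target_max."""
    lo, hi = min(raw), max(raw)
    if hi == lo:                       # all values equal: no selection pressure to create
        return [1.0] * len(raw)
    return [target_max * (f - lo) / (hi - lo) for f in raw]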
Example of Fitness Scaling
Fitness windowing

 Same as window scaling, except the


amount subtracted is the minimum
observed in the n previous
generations, with n e.g. 10
 Same problems as with scaling
Fitness ranking

 Individuals are numbered in order of increasing


fitness
 The rank in this order is the adjusted fitness
 Starting number and increment can be chosen in
several ways and influence the results

 No problems with super-fit or super-unfit


 Often superior to scaling and windowing
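A sketch of rank-based fitness with a configurable starting number and increment, as described above:

def rank_fitness(raw, start=1, increment=1):
    """Replace raw fitness by rank: the worst individual gets `start`,
    each better one gets `increment` more."""
    order = sorted(range(len(raw)), key=lambda i: raw[i])   # indices from worst to best
    ranked = [0] * len(raw)
    for rank, idx in enumerate(order):
        ranked[idx] = start + rank * increment
    return ranked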
Fitness Evaluation
 A key component in GA
 Time/quality trade off

 Multi-criterion fitness
Multi-Criterion Fitness
 Dominance and indifference
 For an optimization problem with more than one objective
function (fi, i=1,2,…n)
 given any two solution X1 and X2, then
 X1 dominates X2 (X1 ≻ X2), if
 fi(X1) >= fi(X2), for all i = 1,…,n
 X1 is indifferent with X2 (X1 ~ X2), if X1 does not dominate X2, and X2 does not dominate X1
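A sketch of these definitions for a maximization problem; the strict-inequality clause in dominates() is the usual refinement of the ">= for all i" condition above:

def dominates(f_x1, f_x2):
    """True if X1 dominates X2: f_i(X1) >= f_i(X2) for every objective,
    and strictly better in at least one (usual strict-dominance refinement)."""
    return (all(a >= b for a, b in zip(f_x1, f_x2))
            and any(a > b for a, b in zip(f_x1, f_x2)))

def indifferent(f_x1, f_x2):
    """X1 ~ X2 when neither solution dominates the other."""
    return not dominates(f_x1, f_x2) and not dominates(f_x2, f_x1)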
Multi-Criterion Fitness
 Pareto Optimal Set
 If there exists no solution in the search space
which dominates any member in the set P,
then the solutions belonging to the set P
constitute a global Pareto-optimal set.
 Pareto optimal front

 Dominance Check
Multi-Criterion Fitness
 Weighted sum
 F(x)= w1f1(x1) + w2f2(x2) +…+wnfn(xn)
 Problems?
 Convex and non-convex Pareto optimal front
 Sensitive to the shape of the Pareto-optimal
front
 Selection of weights?
 Needs some pre-knowledge
 Not reliable for problems involving uncertainties
Multi-Criterion Fitness
 Optimizing single objective
 Maximize: fk(X)
Subject to:
fj(X) <= Kj, for all j ≠ k
X in F, where F is the solution space.
Multi-Criterion Fitness
 Preference based weighted sum
(ISMAUT Imprecisely Specific Multiple Attribute Utility Theory)
 F(x) = w1f1(x1) + w2f2(x2) +…+wnfn(xn)
 Preference
 Given two known individuals X and Y, if we prefer X to Y,
then
F(X) > F(Y),
that is
w1(f1(x1)-f1(y1)) +…+wn(fn(xn)-fn(yn)) > 0
Multi-Criterion Fitness
 All the preferences constitute a linear space Wn = {w1, w2, …, wn}
 w1(f1(x1)-f1(y1)) +…+wn(fn(xn)-fn(yn)) > 0
 w1(f1(z1)-f1(p1)) +…+wn(fn(zn)-fn(pn)) > 0, etc

 For any two new individuals Y’ and Y’’, how to


determine which one is more preferable?

Multi-Criterion Fitness
Min: ε = Σ_k w_k [f_k(Y') − f_k(Y'')]
s.t.: w ∈ W_n

Min: ε' = Σ_k w_k [f_k(Y'') − f_k(Y')]
s.t.: w ∈ W_n
Multi-Criterion Fitness
Then,
ε ≥ 0 ⇒ Y' ≻ Y''
ε' ≥ 0 ⇒ Y'' ≻ Y'
Otherwise, Y' ~ Y''

Construct the dominance relationship among the indifferent ones according to the preferences.
Other parameters of GA 1
 Initialization:
 Population size
 Random
 Dedicated greedy algorithm

 Reproduction:
 Generational: as described before (insects)
 Generational with elitism: fixed number of most fit
individuals are copied unmodified into new generation
 Steady state: two parents are selected to reproduce
and two parents are selected to die; two offspring are
immediately inserted in the pool (mammals)
Other parameters of GA 2
 Stop criterion:
 Number of new chromosomes
 Number of new and unique chromosomes
 Number of generations
 Measure:
 Best of population
 Average of population
 Duplicates
 Accept all duplicates
 Avoid too many duplicates, because that degenerates the
population (inbreeding)
 No duplicates at all
Example run

[Figure: maxima and averages of fitness over 20 generations for steady-state (St_max, St_av.) and generational (Ge_max, Ge_av.) replacement.]
Example

Encode

No | Binary | B (cm) | H (cm)
0 | 000 | 10 | 30
1 | 001 | 20 | 40
2 | 010 | 30 | 50
3 | 011 | 40 | 60
4 | 100 | 50 | 70
5 | 101 | 60 | 80
6 | 110 | 70 | 90
7 | 111 | 80 | 100
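A small sketch of this 3-bit encoding with the table values hard-coded from above:

B_TABLE = [10, 20, 30, 40, 50, 60, 70, 80]      # B (cm) for codes 000..111
H_TABLE = [30, 40, 50, 60, 70, 80, 90, 100]     # H (cm) for codes 000..111

def decode(chromosome):
    """Chromosome = 6 bits: the first 3 encode B, the last 3 encode H."""
    b_index = int("".join(map(str, chromosome[:3])), 2)
    h_index = int("".join(map(str, chromosome[3:])), 2)
    return B_TABLE[b_index], H_TABLE[h_index]

print(decode([1, 0, 1, 1, 1, 0]))   # -> (60, 90), matching the initial individual below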
Initial Value (Random)

B = 60, H = 90: 1 0 1 1 1 0    and    B = 60, H = 90: 1 0 1 1 1 0
B = 20, H = 70: 0 0 1 1 0 0    and    B = 20, H = 70: 0 0 1 1 0 0
Function

Objective: smallest weight W
Constraint: stress values must be OK


Mutation (One Point Mutation)

Before: B = 20, H = 70: 0 0 1 1 0 0    and    B = 60, H = 90: 1 0 1 1 1 0
After:  B = 10, H = 70: 0 0 0 1 0 0    and    B = 60, H = 70: 1 0 1 1 0 0
Example of GA
Result
Bridge Structure
Bridge Structure
3D Bridge Structure
Final Configuration
Application to Office Building
Steps
Steps
Final Result
