
OPTIMIZATION PROBLEM FORMULATION AND SOLUTION TECHNIQUES






5.1 Introduction
There are many cases in practical applications where the variables of optimization are
not continuous. Some or all of the variables must be selected from a list of integer or
discrete values. For example, structural members may have to be designed using
sections available in standard sizes; member cross-sectional dimensions may have to be
selected from the commercially available ones. Considerable interest has therefore been
shown in discrete variable engineering optimization problems since the late 1960s and
early 1970s. However, at that time even optimization methods for simpler continuous
nonlinear programming (NLP) problems were still in the process of development. In the
1970s and 80s, a major effort was put into the development and evaluation of such
algorithms. Although research in this area continues to develop better methods,
especially for large-scale problems, several reliable algorithms are now available for
NLP problems, including sequential quadratic programming (SQP) and
augmented Lagrangian methods. In recent years, the focus has shifted back to
applications to practical problems that naturally use discrete or mixed discrete-continuous
variables in their formulation. Among the methods for discrete variable
nonlinear optimization problems, the following techniques have been most commonly
discussed: the branch and bound method (BBM), zero-one variable techniques and
rounding-off techniques. The penalty function approach, sequential linear programming
(SLP) and random search techniques have also been applied to discrete optimization problems.
The purpose of this chapter is to define the basic formulation of the design
optimization problem and to survey the techniques most relevant to structural design
optimization. The basic simple genetic algorithm is also described in detail.
5.2 Minimum weight via optimization techniques
It is not just in civil engineering that the search for minimum weight is a main goal;
the quantity of material is an important factor in most design fields. Everyone naturally tries
to achieve as much as possible using as little as possible. The ability of engineers to
produce better designs has, however, been severely limited by the techniques available for design
optimization. Typically, much of the development effort has focused on simulation
programs to evaluate design parameters. The question now arises of why minimum weight
design matters for steel frameworks built for domestic and residential use. This question
will be answered in the next section.
5.3 Why minimum weight design for steel structures?
5.3.1 Client brief
Clients specify their requirements through a brief. It is essential for effective design to
understand the intentions of the client: the brief is the way in which the client expresses
and communicates these intentions. As far as the designer is concerned, the factors
which are most important are intended use, budget cost limits, time to completion and
quality. Once these are understood, a realistic basis for producing the design will be
established.
5.3.2 Cost considerations
The time taken to realise a steel building from concept to completion is generally less
than that for a reinforced concrete alternative (Owens et al., 1992). This reduces time-related
building costs, enables the building to be used earlier and produces an earlier
return on the capital invested. To gain full benefit from the manufacturing process, and
particularly the advantages of speed of construction, accuracy and lightness, the
cladding and finishes of the building must have similar attributes. In addition, because a
steel framework is made up of prefabricated components produced in a factory,
repetition of dimensions, shapes and details streamlines the manufacturing process
and is a major factor in economic design.
The cost of steel frameworks is governed to a great extent by the degree of
simplicity and repetition embodied in the framework components and connections.
Typical cost breakdowns have been investigated by many authors, e.g. Owens et al. (1992),
and can be summarised in three major stages as follows:
1. Fabrication: this includes the piled foundation, steel framework, brickwork, external
and internal cladding, sunscreens, etc. This stage may account for about 52% of the total
cost.
2. Finishes: this involves ceilings, floors, etc. This stage is estimated at 20% of the
total cost of the building.
3. Services: this entails electrical facilities, lifts, plumbing and sprinklers, etc. It is
estimated at 28% of the total cost.
The designer of a steel framework should aim to achieve minimum overall cost. This
is a balance between the capital cost of the framework and the improved revenue from
early occupation of the building through fast design, fabrication and erection. For a
domestic and residential building, the cost of welding and connections is much the
same for different designs, whatever the method of design.
The question now arises of what the structural designer can do to provide the client
with an economic design, given that the intended use of the building is domestic and
residential. Assuming the location of the building is fixed, the cost of
foundation piling can be determined from the structural system and the bearing
capacity of the soil. In addition, the cost of finishes and services depends on the
intended use of the building and can be easily determined. From this discussion, it
becomes clear that for domestic and residential construction, achieving the minimum weight of
construction becomes a major task for the structural designer. From this point of view,
many researchers, among them Grierson and Pak (1993), Adeli and Kumar (1995),
Huang and Arora (1997), Jenkins (1997), Saka (1998) and Camp et al. (1998), have
investigated methods seeking minimum weight designs under different constraints.
The formulation of the optimization problem is addressed below, and the concept of
genetic algorithms in structural optimization is discussed.
5.4 Optimization problem formulation
The constrained optimization problem, which is the more practical form of the problem,
is formulated in terms of parameters and restrictions. The parameters
chosen to describe the design of a structure are known as design variables, while the
restrictions are known as constraint conditions. The optimization problem is
formulated mathematically in terms of a standard objective function F(x), as described in
the following sections.
5.4.1 Design variables
Implicitly, the notion of optimizing a structure presupposes some flexibility to change
the design elements. The potential for change is expressed in terms of ranges of
permissible changes of certain design variables, denoted by a vector
$\mathbf{x} = \{x_1, x_2, \ldots, x_n\}$.
The design variables in a structural optimization problem might be the cross-sectional
area, the node position, the second moment of area, etc. In other words, they are the
parameters that control the geometry of the optimized structure. A design variable can
be either continuous or discrete. A continuous variable is one that takes any
value in its range of variation. A discrete variable is one that takes only
isolated values, typically from a list of permissible values or a catalogue. These
design variables can therefore be expressed as
$$\mathbf{x} = (\mathbf{x}_1^T, \mathbf{x}_2^T, \ldots, \mathbf{x}_j^T, \ldots, \mathbf{x}_J^T)^T, \quad j = 1, 2, \ldots, J, \quad x_{i,j} \in D_j \qquad (5.1)$$

and $D_j = (d_{1,j}, d_{2,j}, \ldots)$.
The vector of design variables $\mathbf{x}$ is divided into $J$ subvectors $\mathbf{x}_j$. The
components $x_{i,j}$ of these subvectors take values from a corresponding catalogue $D_j$,
where $i$ indexes the design variables in each subvector and the number of entries in
$D_j$ is the number of sections in the corresponding catalogue.
In structural steelwork design problems, the material design variables and the sectional
properties taken from catalogues are often discrete. In the present study, the standard sections
of universal beams, universal columns and circular hollow sections given in BS 4
and BS 4848 are used.
Although the discrete variable problem appears easier to solve than the
continuous one (since fewer possible solutions exist), it is in general more difficult to
solve except in some trivial cases. This is because the discrete design space is
disjoint and non-convex (Arora et al., 1994).
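To make the catalogue idea concrete, a discrete design variable can be held as an index into a list of section records. The following Python sketch illustrates this under simple assumptions; the entries echo the first rows of Table 5.1, and the helper names are hypothetical, not part of any standard library.

```python
# Minimal sketch: discrete design variables drawn from a section catalogue.
# The entries mirror the first rows of Table 5.1; names are illustrative.

UB_CATALOGUE = [
    {"name": "914 x 419 x 388 UB", "area_cm2": 494.0},
    {"name": "914 x 419 x 343 UB", "area_cm2": 437.0},
    {"name": "914 x 305 x 289 UB", "area_cm2": 369.0},
]

def section_properties(catalogue, position):
    """Return the section record at a given catalogue position (1-based)."""
    return catalogue[position - 1]

# A design is then simply a vector of catalogue positions, one per variable.
design = [1, 3]
print([section_properties(UB_CATALOGUE, p)["name"] for p in design])
```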
5.4.2 Objective function
The notion of optimization also implies that there is some merit function or functions
that can be improved and used as a measure of the effectiveness of the design.
Objective function, merit function and cost function are all names for the function F(x)
being optimized, and this function measures the effectiveness of the design. It
might be formulated as a single objective $f_1(\mathbf{x})$ or as multiple objectives as
follows:

$$F(\mathbf{x}) = \{f_1(\mathbf{x}), f_2(\mathbf{x}), \ldots, f_p(\mathbf{x})\}. \qquad (5.2)$$
Optimization with more than one objective is generally referred to as multicriteria
optimization. For structural optimization problems, weight, displacements, stresses,
buckling loads, vibration frequency and cost, or any combination of these, can be used as
the objective function. Several ways are commonly used to reduce a multicriteria
function to a single one. The first is simply to generate a composite
objective function that replaces all the objectives. The second, most common in the
formulation of design optimization problems, is to select the most important objective
function, for instance the total weight of the structure, and to consider this function to be
the goal of the optimization task. Imposed limits, such as the stresses in each member,
nodal displacements and the critical buckling load, are then prescribed. The third way is
Pareto optimization, in which a range of potential product designs is created to meet
conflicting objectives, thus allowing the requirements to be refined in the light of
further information (see Brandt, 1992).
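As an illustration of the first approach, a composite objective is often formed as a weighted sum of the individual objectives. The sketch below is a generic illustration only; the two objectives and the weights are hypothetical placeholders, not quantities taken from this study.

```python
# Sketch of a weighted-sum composite objective for multicriteria optimization.
# The objectives f1, f2 and the weights are hypothetical examples.

def f1(x):  # e.g. a total-weight measure
    return sum(x)

def f2(x):  # e.g. a displacement measure
    return max(x)

def composite_objective(x, weights=(0.7, 0.3)):
    """Replace several objectives by a single weighted combination."""
    w1, w2 = weights
    return w1 * f1(x) + w2 * f2(x)

print(composite_objective([1.0, 2.0, 3.0]))  # a single scalar to be minimized
```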
5.4.3 Constraints
The limits placed on the values of the design variables are known as side constraints.
The constraints are divided into two types. The first type, commonly used in the
design problem, is the inequality constraint:

$$\frac{G_s(\mathbf{x})}{\tilde{G}_s(\mathbf{x})} \le 1, \quad s = 1, 2, \ldots, s_s, \qquad (5.3)$$

where $G_s(\mathbf{x})$ and $\tilde{G}_s(\mathbf{x})$ are the calculated and limiting values of the
constraint and $s_s$ is the number of inequality constraint functions.
In the design optimization problem, not all constraints are functions of a single term;
some are functions of several terms. This can be expressed by

$$\frac{G_{s,1}(\mathbf{x})}{\tilde{G}_{s,1}(\mathbf{x})} + \frac{G_{s,2}(\mathbf{x})}{\tilde{G}_{s,2}(\mathbf{x})} + \cdots + \frac{G_{s,ss}(\mathbf{x})}{\tilde{G}_{s,ss}(\mathbf{x})} \le 1, \qquad (5.4)$$

where $ss$ is the number of terms in the constraint function.
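In code, constraints normalised in this way reduce to simple ratio checks. A minimal sketch, with hypothetical calculated and limiting values:

```python
# Sketch of evaluating normalised inequality constraints of the form
# G_s(x) / G~_s(x) <= 1, including the multi-term form of Eq. (5.4).

def constraint_ratio(calculated, limit):
    """Single-term constraint of Eq. (5.3): feasible when the ratio <= 1."""
    return calculated / limit

def combined_ratio(terms):
    """Multi-term constraint of Eq. (5.4): terms is a list of (calculated, limit)."""
    return sum(c / g for c, g in terms)

# Hypothetical member check: e.g. axial and bending terms combined.
print(combined_ratio([(120.0, 200.0), (45.0, 150.0)]) <= 1.0)
```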
5.4.4 Standard formulation
From the above sections, the final formulation of the optimization problem can be
mathematically represented by
Minimize $F(\mathbf{x})$

subject to: $\dfrac{G_s(\mathbf{x})}{\tilde{G}_s(\mathbf{x})} \le 1, \quad s = 1, 2, \ldots, s_s$,

$$\mathbf{x} = (\mathbf{x}_1^T, \mathbf{x}_2^T, \ldots, \mathbf{x}_j^T, \ldots, \mathbf{x}_J^T)^T, \quad j = 1, 2, \ldots, J, \qquad (5.5)$$

$x_{i,j} \in D_j$ and $D_j = (d_{1,j}, d_{2,j}, \ldots)$.
The feasible region of a nonlinear problem can be represented graphically. For
example, a nonlinear function $F(\mathbf{x})$ of two design variables $x_1$ and $x_2$ with three
nonlinear constraints $G_1(\mathbf{x})$, $G_2(\mathbf{x})$ and $G_3(\mathbf{x})$ can be depicted as shown in Figure 5.1.

Figure 5.1. Feasible region in a nonlinear problem


5.5 Features of a design optimization problem
It is important to highlight some of the features of the discrete nonlinear problem (5.5).
First, the inequality constraints may not be exactly active at the optimum point because
the constraint surface may not pass through any of the discrete points, i.e. in numerical
calculations only a point closest to the constraint boundary may be found. Second, there
is no simple criterion, such as the Kuhn-Tucker conditions, to terminate the iterative search
process. Thus, local optimality of the solution point cannot be assured unless an
exhaustive search is performed. Third, the size of the discreteness and the nature of the discrete
values may govern the behaviour of some of the numerical algorithms as well as the
final solution of the problem. Fourth, the design problem is highly nonlinear owing
to the nature of the design variables and the relationships between the constraint
functions and the design variables. Fifth, the constraints take different forms for different
members of the structure. For example, a structure has beams, columns and a bracing
system; the constraints that control the design of beams are different from those of
bracing systems or columns, and the set of catalogue sections for beams is
different from those for bracing systems or columns. Sixth, the computational effort
needed to reach satisfactory results increases with the complexity of the design
problem treated. It is therefore important to review optimization techniques that deal with
discrete design variables; this is summarised in the following section.
5.6 Review of discrete optimization techniques
A review of the methods for discrete variable optimization was presented by
Bremicker et al. (1990), Vanderplaats and Thanedar (1991) and Arora et al. (1994).
Several algorithms for discrete optimization problems have been developed, among them
the branch and bound method, the penalty function approach, rounding-off, cutting plane,
simulated annealing, genetic algorithms, neural networks, and Lagrangian relaxation
methods. Some of the methods for discrete variable optimization use
the structure of the problem to speed up the search for the discrete solution; this class of
methods is not suitable for implementation in a general purpose application (Arora et
al., 1994). The branch and bound method, simulated annealing, and genetic algorithms
are the most widely used methods, and the literature review in the following sections
focuses on them.
5.6.1 Branch and bound method
The branch and bound method (BBM) is perhaps the most widely known method for
mixed-discrete optimization problems. The method was originally developed for LP;
however, it is quite general and can be applied to nonlinear discrete and mixed variable
problems. It is basically an enumeration method in which one first obtains a minimum
point for the problem assuming all variables to be continuous. Then, each variable is
assigned a discrete value in sequence and the problem is solved again in the remaining
variables. The process of assigning discrete values to variables need not start from a
continuous optimum point, although this approach may reduce the number of times the
problem needs to be re-solved to obtain a feasible discrete point and subsequently the
optimum solution. It can be seen that the number of times the problem needs to be
re-solved increases exponentially with the number of variables, and several procedures have
been devised to reduce this number. The first use of the branch and bound method is
attributed to Land and Doig (1960) for linear problems. Other attempts to use the BBM to
solve integer LP problems, related to the plastic design of frames, were made by Reinschmidt
(1971).
The BBM has been combined with exterior penalty functions and SQP methods to treat the
mixed-discrete NLP problem. John et al. (1988) combined the BBM with sequential
linearization for the discrete optimal design of trusses. Hajela and Shih (1990) used the BBM to
solve multiobjective optimization problems with discrete and integer variables.
Salajegheh and Vanderplaats (1993) used the BBM for optimizing trusses with discrete
sizing and shape variables. Large storage space was needed, and an exponential growth
in computational effort limits the applicability of the BBM to higher dimensional
problems.
The usual BBM approach is to systematically search continuous solutions in which
the discrete variables are forced to take discrete values from the specified set. The
logical structure for the set of solutions is that of a tree for each variable. Initially an
optimum point is obtained by treating all design variables as continuous. If this solution
is discrete, the process is terminated. If one of the variables required to be discrete is not,
then its value lies between two discrete values:

$$d_{k,j} < x_{i,j} < d_{k+1,j}. \qquad (5.6)$$

Two subproblems are now defined, one with the constraint $x_{i,j} \le d_{k,j}$ and the
other with $x_{i,j} \ge d_{k+1,j}$. This process is called branching. It basically eliminates some
portion of the continuous feasible region, which is not feasible for the discrete problem.
It does not, however, eliminate any of the discrete feasible solutions. The two
subproblems are solved, and the optimum solutions are stored as nodes of the tree
containing the optimum values of the variables, the objective function and the appropriate
bounds on the variables. This process of branching and solving continuous problems is
continued until a feasible discrete solution is obtained. The cost function value corresponding
to this solution becomes an upper bound on the optimum solution. From this point, all the
nodes of the tree that have cost function values higher than the established upper bound
are eliminated from further consideration. Such nodes are known as fathomed nodes:
their lowest point has been reached and no further branching from them is necessary. This
process is known as bounding. The process of branching and bounding is repeated from
each of the unfathomed nodes; from each node, at most two new nodes may originate.
A better upper bound for the optimum objective function is established when a feasible
discrete solution is obtained with an objective function value less than the current
upper bound. A node may be fathomed in any of the following three ways: (1) a
feasible discrete solution is obtained from the continuous solution with a cost function
value higher than the current upper bound, (2) the continuous problem is infeasible, or
(3) the optimal value of the objective function for the continuous problem is higher than
the upper bound. The search is terminated when all the nodes have been fathomed.
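The tree search described above can be sketched in a few lines of Python. Note two simplifications relative to the full method: the continuous relaxation at each node is replaced here by a cheap optimistic lower bound (the lightest admissible catalogue entry for every unassigned variable), and branching enumerates the catalogue entries of the next variable directly rather than forming the two subproblems of Eq. (5.6). The feasibility test and data are hypothetical.

```python
# Illustrative branch and bound over discrete catalogue choices.
# The lower bound is optimistic (lightest remaining entry per variable),
# so fathoming a node can never discard the true optimum.

def branch_and_bound(catalogues, feasible, weight):
    best, best_cost = None, float("inf")
    stack = [()]  # each node is a tuple of catalogue indices assigned so far

    def lower_bound(partial):
        fixed = sum(weight(j, k) for j, k in enumerate(partial))
        rest = sum(min(weight(j, k) for k in range(len(catalogues[j])))
                   for j in range(len(partial), len(catalogues)))
        return fixed + rest

    while stack:
        node = stack.pop()
        if lower_bound(node) >= best_cost:   # bounding: fathom this node
            continue
        if len(node) == len(catalogues):     # complete discrete design
            if feasible(node):
                best, best_cost = node, lower_bound(node)
            continue
        j = len(node)                        # branch on the next variable
        for k in range(len(catalogues[j])):
            stack.append(node + (k,))
    return best, best_cost

# Hypothetical 2-variable example with a toy feasibility requirement.
cats = [[10.0, 12.0, 15.0], [20.0, 25.0]]
print(branch_and_bound(cats,
                       feasible=lambda n: sum(cats[j][k] for j, k in enumerate(n)) >= 33,
                       weight=lambda j, k: cats[j][k]))
```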
This method has been used successfully; however, for problems with a large
number of discrete design variables, the number of subproblem nodes becomes large,
making the method inefficient (Arora et al., 1994). This drawback is particularly severe for
nonlinear optimization problems. Further detailed strategies and enhancements of the
BBM can be found in Mesquita and Kamat (1987), Ringertz (1988),
Sandgren (1990) and Haftka and Gurdal (1993).
5.6.2 Simulated annealing
Simulated annealing (SA) is one of the techniques that do not require derivatives of
the problem functions: it uses no gradient or Hessian information. The idea of SA
originated in statistical mechanics with Metropolis et al. (1953). The approach
is suited to solving combinatorial optimization problems, such as discrete-integer
programming problems.
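At the heart of the approach is the Metropolis acceptance rule: a candidate that worsens the objective may still be accepted with probability exp(-delta/t), which shrinks as the temperature t is lowered. The sketch below shows one plausible arrangement; the neighbourhood move, cooling schedule and objective are illustrative assumptions, not the method of any particular study cited here.

```python
import math
import random

# Minimal simulated annealing sketch for a discrete design vector.
# Objective, neighbourhood move and cooling schedule are illustrative choices.

def simulated_annealing(x, objective, neighbour, t0=100.0, cooling=0.95, steps=1000):
    current, f_current = x, objective(x)
    t = t0
    for _ in range(steps):
        candidate = neighbour(current)
        delta = objective(candidate) - f_current
        # Metropolis rule: always accept improvements; accept worsenings
        # with probability exp(-delta / t).
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current, f_current = candidate, f_current + delta
        t *= cooling  # geometric cooling schedule
    return current, f_current

# Hypothetical usage: nudge catalogue indices to minimise a toy objective.
random.seed(1)
neigh = lambda x: [max(0, min(9, xi + random.choice((-1, 1)))) for xi in x]
obj = lambda x: sum((xi - 3) ** 2 for xi in x)
print(simulated_annealing([9, 0, 5], obj, neigh))
```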
The use of simulated annealing for structural optimization is relatively recent.
Elperin (1988) applied SA to the design of a ten-bar truss in which member
cross-sectional areas were to be selected from a set of discrete values. Kincaid and
Padula (1990) used SA for minimizing the distortion and internal forces in a truss
structure. Balling (1991) obtained the optimum design of three-dimensional steel
structures using SA. One year later, the same framework was studied using the filtered
simulated annealing algorithm by May and Balling (1992). More recently, Leite and Topping
(1996) proposed a parallel simulated annealing model for structural optimization.
5.6.3 Genetic algorithms
5.6.3.1 Background
The famous naturalist Charles Darwin defined natural selection, or survival of the fittest,
in his book (Darwin, 1929) as the preservation of favourable individual differences and
variations, and the destruction of those that are injurious. In nature, individuals have to
adapt to their environment in order to survive, in a process called evolution, in which
those features that make an individual more suited to compete are preserved when it
reproduces, and those features that make it weaker are eliminated. Such features are
controlled by units called genes, which form sets known as chromosomes. Over
subsequent generations not only do the fittest individuals survive, but so do their genes,
which are transmitted to their descendants during the sexual recombination process
called crossover.
In the late 1960s, John H. Holland became interested in the application of natural
selection to machine learning. He developed a technique known as reproductive plans
that allowed computer programs to mimic the process of evolution. This technique
became popular after the publication of his book (Holland, 1975), and he renamed it
the genetic algorithm (GA). The main goals of the research of Holland and
his students were:
- to abstract and rigorously explain the adaptive processes of natural systems, and
- to design artificial systems software that retained the important mechanisms of
natural systems.
Basically, the central thrust of research on genetic algorithms (GAs) has been their
robustness and the balance between efficiency and efficacy necessary for
survival in many different environments. GAs are search algorithms based on
the mechanics of natural selection and survival of the fittest, and unlike many
mathematical programming algorithms they do not require the evaluation of gradients of
the objective function and constraints.
Koza (1992) provides the following definition of a GA:
The genetic algorithm is a highly parallel mathematical algorithm that
transforms a set (population) of individual mathematical objects (typically
fixed-length character strings patterned after chromosome strings), each
with an associated fitness value, into a new population (i.e., the next
generation) using operations patterned after the Darwinian principle of
reproduction and survival of the fittest and after naturally occurring genetic
operations (notably sexual recombination).
GA-based techniques accept discrete and/or continuous design variables and
are therefore very versatile. GAs differ from most optimization techniques in
several ways:
- GAs work on a coding of the design variables (a binary bit-string representation is one
such coding) rather than on the design variables themselves. This characteristic
allows genetic algorithms to be extended to a design space consisting of a mix of
continuous, discrete and integer variables.
- GAs proceed from several points in the design space to another set of design points.
Consequently, GA techniques have a better chance of locating the global minimum than
schemes that proceed from one point to another.
- GAs work on function evaluations alone and do not require any function
derivatives. Although derivative-based techniques contribute to faster convergence
toward the optimum, the derivative also directs the search process towards a local
optimum.
- GAs use probabilistic transition rules, which is an important advantage in guiding a
highly exploitative search. To this extent, GAs should not be considered a variant
of the random walk approach.
The general features of the theory of the GA are widely accepted and applied,
and result in good solutions for different types of problems in different disciplines.
The following is a description of the features of natural evolution as observed by
Holland (1975).
1. Evolution is a process that operates on chromosomes rather than on the living beings
they encode.
2. Natural selection is the link between chromosomes and the performance of their
decoded structures. Processes of natural selection cause those chromosomes that
encode a successful structure to reproduce more often than those that do not.
3. The process of reproduction is the point at which evolution takes place. Mutations
may cause the chromosomes of biological children to be different from those of their
biological parents and recombination processes may create quite different
chromosomes in the children by combining material from the chromosomes of two
parents.
4. Biological evolution has no memory. Whatever it knows about producing
individuals who will function well in their environment is contained in the gene
pool, the set of chromosomes carried by the current individuals, and in the structure
of the chromosome decoders.
5.6.3.2 Survival of the fittest
GAs are implicit enumeration procedures. A set of randomly created design alternatives,
or individuals, representing the population in a given generation is allowed to reproduce
and cross among themselves, with bias allocated to the fittest individuals. A
combination of the most desirable characteristics of mating members of the population
results in progenies that are fitter than their parents. Therefore, if a measure which
indicates the fitness of a generation is also the desired goal of a design process,
successive generations produce better values of the objective function.
5.6.3.3 Encoding the design variables
The technique for encoding solutions may vary from problem to problem and from
genetic algorithm to genetic algorithm. In Holland's work (Holland, 1975), encoding is
carried out using bit strings of 0s and 1s. A major task is the encoding of the different design
sets into chromosomes so that the GA can use them.
In structural design optimization, the section properties of one member form a design
variable. Each member section is represented by a bit string, and the bit strings are
merged to form a chromosome, which represents a design set. In the present study, the
possible cross-sections of each design variable are represented as binary strings; each bit
string is associated with a position in the section table and hence with the corresponding
sectional properties. To make things clearer, consider the single-bay single-storey
framework shown in Figure 5.2. This frame has two design variables, $x_1$ and $x_2$,
which represent the columns and the beam girder respectively. The variable $x_1$ takes a
position out of the 32 UCs from BS 4 while $x_2$ takes a position out of the 64 UBs.


The bit string of each design variable implies a position in the corresponding
table, from which the properties of the section can be selected. The cross-section, selected
from the catalogue, is represented in binary code according to the number of
available cross-sections. The string length $n_v$ of each design variable is related to the
number of available sections $n$ by

$$n = 2^{n_v}. \qquad (5.7)$$

For instance, there are 64 types of UBs, so the number of bits required to distinguish the
range is 6. A part of the encoded variables for UBs is listed in Table 5.1; similar
encodings for the UCs and CHS used can be drawn up. Hence, for the framework
shown in Figure 5.2, the chromosome given in Figure 5.3 represents an individual in the
population. It can be read as a design in which the design variable $x_1$ takes
position number 8 in the table of UCs while $x_2$ takes position number 6 in the
table of UBs.

Figure 5.2. Single-bay single-storey framework

Figure 5.3. Chromosomes of a design set using binary representation:
$x_1$ = 0 0 1 1 1, $x_2$ = 0 0 0 1 0 1
Table 5.1. Part of the encoded variables for UBs

Catalogue   Encoded    Cross section         Area    Second moment of area (cm⁴)
position    variable                         (cm²)   major axis      minor axis
 1          000000     914 x 419 x 388 UB     494      719000          45400
 2          000001     914 x 419 x 343 UB     437      625000          39200
 3          000010     914 x 305 x 289 UB     369      505000          15600
 4          000011     914 x 305 x 253 UB     323      437000          13300
 5          000100     914 x 305 x 224 UB     285      376000          11200
 6          000101     914 x 305 x 201 UB     256      326000           9430
 7          000110     838 x 292 x 226 UB     289      340000          11400
 8          000111     838 x 292 x 194 UB     247      279000           9070
 9          001000     838 x 292 x 176 UB     224      246000           7790
10          001001     762 x 267 x 197 UB     251      240000           8170
11          001010     762 x 267 x 173 UB     220      205000           6850
12          001011     762 x 267 x 147 UB     188      169000           5470
13          001100     686 x 254 x 170 UB     217      170000           6620
14          001101     686 x 254 x 152 UB     194      150000           5780

(The rest of the cross-sectional properties follow from the catalogue.)
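The decoding step implied by Table 5.1 can be sketched as follows; the 5-bit/6-bit split mirrors the 32 UC and 64 UB catalogues of the worked example, and the function name is illustrative.

```python
# Sketch of decoding a binary chromosome into catalogue positions (1-based),
# mirroring Figures 5.2 and 5.3: 5 bits for x1 (32 UCs), 6 bits for x2 (64 UBs).

def decode(chromosome, bit_lengths):
    positions, start = [], 0
    for n_v in bit_lengths:
        gene = chromosome[start:start + n_v]
        positions.append(int(gene, 2) + 1)  # binary string -> catalogue position
        start += n_v
    return positions

print(decode("00111" + "000101", [5, 6]))  # -> [8, 6], as in the worked example
```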


Different ways of encoding the variables have been implemented. For example,
Goldberg (1990) presented a theory of convergence for real-coded (floating-point)
GAs, and real numbers and other alphabets have also been proposed by Wright (1991).
The term "floating" may seem misleading, since the position of the implied decimal
point is fixed, so that the term "fixed-point representation" might seem more
appropriate. The reason for the name, however, is that each variable, representing a
parameter to be optimized, may have its point at any position along the string. This means
that even when the point is fixed for each gene, it is not necessarily fixed along the
chromosome. Therefore, some variables could have a precision of 32 decimal places,
while others are integers. As Eshelman and Schaffer (1993) point out, many researchers in the GA
community have agreed to use real-coded GAs for numerical optimization, despite
theoretical arguments that seem to show that small alphabets should be more effective
than large alphabets. Mühlenbein and Schlierkamp-Voosen (1993) also used real numbers
directly for continuous function optimization.
5.6.3.4 Why bit string encoding?
Bit strings have several advantages over other encodings, which can be summarised as
follows:
1. They are simple to create and manipulate.
2. They are theoretically tractable, in that their simplicity makes it easy to prove
theorems.
3. Performance theorems have been proved for bit-string chromosomes that
demonstrate the power of natural selection on bit-string encodings.
4. Just about anything can be encoded in bit strings, so one-point crossover and
mutation operators can be applied to a wide range of problems.
5.6.3.5 The anatomy of a simple GA
In a simple GA, one starts with a randomly created set of designs. From this set, new
and better designs are reproduced using the fittest members of the set. The entire process
is similar to a natural population of biological creatures, in which successive generations
are conceived, born and raised until they are ready to reproduce. A simple GA is
composed of three operations: reproduction, crossover and mutation.
Reproduction is an operation in which an old string is copied into the new population
according to the string's fitness. Here, fitness is defined according to the objective
function value. More fit strings, i.e. those with smaller objective function values, receive
higher numbers of offspring. The reproduction operator may be implemented
algorithmically in a number of ways. Perhaps the easiest is to create a biased
roulette wheel (see DeJong, 1975) in which each string in the current population has a
roulette wheel slot sized in proportion to its fitness. Consequently, each time offspring
are required, a simple spin of the weighted wheel yields the reproduction candidate. In this
way, more highly fit strings have a higher expected number of offspring, based on the
probability of selection $P_i^{sel}$ in the succeeding generation, given by

$$P_i^{sel} = \frac{F_i}{\sum_{j=1}^{N_p} F_j}, \qquad (5.8)$$

where $F_i$ is the value of the objective function of the $i$th individual and $N_p$ is the
number of individuals in the population, known as the population size.
Once a string has been selected for reproduction, an exact replica of the string is
made. This string is then entered into a mating pool, a tentative new population, for
further genetic operator action.
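The biased roulette wheel of Eq. (5.8) can be sketched as follows; the population and fitness values in the usage lines are hypothetical.

```python
import random

# Sketch of roulette-wheel (fitness-proportionate) selection, Eq. (5.8):
# each individual occupies a wheel slot proportional to its fitness.

def roulette_select(population, fitnesses):
    total = sum(fitnesses)
    spin = random.uniform(0.0, total)
    cumulative = 0.0
    for individual, f in zip(population, fitnesses):
        cumulative += f
        if spin <= cumulative:
            return individual
    return population[-1]  # numerical safety for spin == total

random.seed(0)
pool = [roulette_select(["A", "B", "C"], [5.0, 3.0, 2.0]) for _ in range(10)]
print(pool)  # "A" should appear most often on average
```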
Other selection schemes can be used, among them the stochastic remainder selection
suggested by Brindle (1981) and the stochastic universal selection proposed by Baker (1987).
Ranking selection was presented by Baker (1985), in which the population is sorted from
best to worst, each individual is assigned copies according to a non-increasing
assignment function, and proportionate selection is then performed according to that
assignment. Goldberg and Deb (1991) implemented tournament selection, in which the
population is shuffled and then divided into groups of $n_g$ elements, from which the best
individual, i.e. the fittest, is chosen. The number of parents $N$ selected can be evaluated as

$$N = \frac{N_p}{n_g}. \qquad (5.9)$$
After selection, crossover takes place. Crossover corresponds to allowing
selected members of the population to exchange characteristics of the design among
themselves (Arora et al., 1994). Simple crossover proceeds in two steps. First,
members of the newly reproduced strings in the mating pool are mated at random.
Second, each pair of strings undergoes crossover. Several ways of performing crossover
are used in the literature, the simplest being one-point crossover (Goldberg, 1989).
This can be illustrated as follows: an integer position $b$ along the string, as
indicated in Figure 5.4(a), is selected uniformly at random between 1 and the string length
less one, i.e. in $[1, n_v - 1]$. Two new strings, termed children, are created by swapping all
characters between positions $b+1$ and $n_v$ of the parents inclusive. Figure 5.4(b) shows
a similar way of performing the crossover, termed two-point crossover.
In order to achieve a more effective search, Eshelman et al. (1989) and Syswerda (1989)
use other types of crossover such as segment crossover, uniform crossover and shuffle
crossover. Multi-point crossover is also presented by DeJong (1975) as a
natural extension of two-point crossover; he treats the chromosome in multi-point
crossover as a ring, which the crossover points cut into segments.
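One-point crossover on bit strings can be sketched as below; the parent strings are arbitrary examples.

```python
import random

# Sketch of one-point crossover on bit-string chromosomes: a cut point b is
# drawn uniformly from [1, n_v - 1] and the tails beyond b are swapped.

def one_point_crossover(parent1, parent2):
    assert len(parent1) == len(parent2)
    b = random.randint(1, len(parent1) - 1)   # cut position
    child1 = parent1[:b] + parent2[b:]
    child2 = parent2[:b] + parent1[b:]
    return child1, child2

random.seed(3)
print(one_point_crossover("00010111", "10101110"))
```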
Mutation is the third step in a simple GA; it safeguards the process from
complete premature loss of valuable genetic material during reproduction and
crossover. In terms of binary strings, this step corresponds to selecting a few members
of the population, determining a location on their strings at random, and switching the 0
or 1 at that location. To illustrate mutation with an example, assume the two crossed-over
children given in Figure 5.5. The 2nd bit was randomly selected in child 1
and the 6th bit in child 2; the procedure is then to change a 1 to a 0 and
vice versa, as shown in Figure 5.5.



Figure 5.4. Most used crossover operators on binary strings: (a) single-point
crossover; (b) two-point crossover

Figure 5.5. Assembling the mutation stage
It is observed that mutation is a random walk through the string space. When
used sparingly with reproduction and crossover, it is an insurance policy against
premature loss of important notions (Goldberg, 1989). Consequently, it plays a
secondary role in GAs.
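Bit-flip mutation applied with a small probability per bit can be sketched as follows; the mutation rate in the usage line is deliberately exaggerated to make the flips visible.

```python
import random

# Sketch of bit-flip mutation: each bit is flipped with a small probability,
# guarding against premature loss of genetic material.

def mutate(chromosome, p_mut=0.01):
    bits = []
    for bit in chromosome:
        if random.random() < p_mut:
            bits.append("1" if bit == "0" else "0")  # flip 0 <-> 1
        else:
            bits.append(bit)
    return "".join(bits)

random.seed(7)
print(mutate("10110111", p_mut=0.2))  # high rate here only for illustration
```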
The foregoing three steps are repeated for successive generations of the
population until no further improvement in the fitness is attainable. The member of this
generation with the highest fitness is taken as the optimum design.
Figure 5.6 illustrates the flowchart for a simple GA linked to a structural design
problem. At the beginning, all the necessary data (GA parameters and structural
geometry) are read and the GA process starts with the first generation.
The initial population is generated randomly. Then the objective function,
taken as the weight of the structure, is computed, together with the constraint functions,
which reflect the design criteria required by BS 5950. At this stage, the
average, maximum and fittest designs are obtained. The convergence criteria, described
later, are also checked: the GA process is terminated if convergence is achieved;
otherwise, the GA process resumes. By creating the mating pool and applying the GA
operators, the next population is created. The GA process proceeds until either
convergence is achieved or the maximum number of generations is reached.

Figure 5.6. Flowchart for the genetic algorithm linked to the structural design problem
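The flow of Figure 5.6 can be summarised in code roughly as follows, reusing the roulette_select, one_point_crossover and mutate sketches given above. The structural analysis and penalised fitness are placeholders for the BS 5950 design checks described in the text, so the usage line maximises a toy bit-counting fitness instead.

```python
import random

# Skeleton of the simple GA of Figure 5.6. The fitness argument stands in for
# the structural analysis, penalised weight and BS 5950 design checks.

def run_ga(n_pop, n_bits, fitness, n_gen=200, p_cross=0.8, p_mut=0.01):
    rand_bits = lambda: "".join(random.choice("01") for _ in range(n_bits))
    population = [rand_bits() for _ in range(n_pop)]      # generation 1
    best = max(population, key=fitness)
    for _ in range(n_gen):
        scores = [fitness(ind) for ind in population]
        best = max(population + [best], key=fitness)      # track the fittest
        mating_pool = [roulette_select(population, scores) for _ in range(n_pop)]
        next_pop = []
        for i in range(0, n_pop - 1, 2):
            a, b = mating_pool[i], mating_pool[i + 1]
            if random.random() < p_cross:
                a, b = one_point_crossover(a, b)
            next_pop += [mutate(a, p_mut), mutate(b, p_mut)]
        # Elitism: impose the best individual onto the next generation.
        population = next_pop[:n_pop - 1] + [best]
    return max(population + [best], key=fitness)

# Hypothetical usage: maximise the number of 1-bits in an 11-bit chromosome.
random.seed(0)
print(run_ga(n_pop=20, n_bits=11, fitness=lambda c: c.count("1"), n_gen=50))
```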
5.6.3.6 Constraints management
GAs have traditionally been applied to unconstrained problems, as they have no built-in
method of handling constraints. Constraints can be classified into two types: explicit and
implicit. Explicit constraints are those that can be checked without a system simulation;
cost is often an example of an explicit constraint. Implicit constraints require a system
simulation, i.e. analysis and design checks. For example, cross-sections must satisfy design
criteria required by the code of practice, so a system simulation must be run
before this information can be ascertained. Several approaches have been used to handle
constraints, including:
1. using specialised operators that maintain feasibility,
2. allowing only feasible solutions in the population, and
3. applying a penalty to those solutions that violate one or more constraints.
Specialised operators work only for explicit constraints, and are useful for problems
such as the travelling salesman problem. The second and third approaches can
be used with explicit or implicit constraints, or a combination of both. The second
approach, eliminating from the population those designs that violate one or more
constraints, can be very ineffective for large problems that have few viable solutions
compared with the number of infeasible ones. The most prevalent technique for coping
with constraint violations is to penalise a population member for one or more violations.
The main difficulty in applying penalty functions is that they are generally problem
dependent. Different techniques employing penalty functions are used in the literature,
among them Moe (1973), Fletcher (1975), Haftka and Starnes (1976), Shin et al. (1990),
Hajela and Yoo (1995), Huang and Arora (1997) and Camp et al. (1998). Generally, the
problems attempted using GAs are all of the constrained optimization type, and
consequently the optimization problem must be converted into an unconstrained one.
This can be done using a penalty-based transformation method (Hajela and Yoo,
1995), resulting in the following problem:

Minimize $\bar{F}(\mathbf{x}, r) = F(\mathbf{x}) + r\,P(G(\mathbf{x}), H(\mathbf{x})), \qquad (5.10)$

where $\bar{F}$ is the modified objective function containing the penalty term $P$,
which brings the constraint functions into the problem, and $r$ is called a penalty
multiplier. The way in which the penalty parameters and the constraint functions are
combined, and the rules for updating the penalty parameters, specify the particular
method.
In the present work, the design optimization problem has been attacked differently:
careful consideration must be given to the selection of the penalty function, and
in the present context the "exact" penalty function is used. This results in the following
definition of the fitness function combined with the simple "exact" penalty function:

Maximize
$$F^*(\mathbf{x}) = \begin{cases} C - F(\mathbf{x}), & \text{all constraints satisfied} \\ 0, & \text{any constraint violated} \end{cases} \qquad (5.11)$$

where $C$ is a constant evaluated at each generation. The technique used for the penalty
function is described in Chapter 6.
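The fitness of Eq. (5.11) can be sketched directly; the choice of C and the feasibility check below are placeholders (the actual technique is described in Chapter 6).

```python
# Sketch of the "exact" penalty fitness of Eq. (5.11): a design violating any
# constraint scores zero; otherwise fitness is C minus the objective (weight),
# turning the minimisation into a maximisation.

def fitness(x, objective, constraints_ok, C):
    return C - objective(x) if constraints_ok(x) else 0.0

# Hypothetical usage: C chosen just above the largest weight in the generation.
designs = [[1, 2], [3, 4], [0, 1]]
weight = lambda x: sum(x)
ok = lambda x: x[0] >= 1                 # placeholder feasibility check
C = 1 + max(weight(d) for d in designs)
print([fitness(d, weight, ok, C) for d in designs])
```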
5.6.3.7 Convergence criteria and termination conditions
Convergence criteria have to be defined to decide when to terminate the process of
optimization. In the present study, three criteria are used; if any of them is satisfied,
the process terminates. These criteria are:
1. The fittest design has not changed for 30 successive generations, i.e. the
difference between the fittest design $F_{cu}$ of the current generation and that of 30
generations before is a very small value $C_{cu}$. This can be expressed in the form

$$\frac{F_{cu} - F_{cu-30}}{F_{cu}} \le C_{cu}. \qquad (5.12)$$

2. As more generations proceed, the population becomes filled with fitter
individuals, with perhaps only a very small deviation from the fitness of the best
individual, so the average fitness comes very close to the fitness of the
best design. This yields another convergence criterion: the
percentage difference between the average fitness $F_{av}$ of the current population and
the fitness $F_{cu}$ of the current best design reaches a very small value $C_{av}$. This can
be expressed by

$$\frac{F_{cu} - F_{av}}{F_{cu}} \le C_{av}. \qquad (5.13)$$

3. The simplest criterion is reached when the total allocated number of generations
($gen_{max} = 200$) has been completed.
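The three tests can be gathered into a single termination check, sketched below with hypothetical tolerance values standing in for the document's $C_{cu}$ and $C_{av}$.

```python
# Sketch of the three termination tests: stagnation of the best fitness over
# 30 generations (Eq. 5.12), best-versus-average closeness (Eq. 5.13), and a
# cap on the total number of generations.

def converged(history, f_avg, gen, c_cu=1e-4, c_av=1e-3, gen_max=200, lag=30):
    f_cu = history[-1]                          # fittest design, current generation
    if len(history) > lag and (f_cu - history[-1 - lag]) / f_cu <= c_cu:
        return True                             # Eq. (5.12): stagnant best
    if (f_cu - f_avg) / f_cu <= c_av:
        return True                             # Eq. (5.13): average near best
    return gen >= gen_max                       # generation limit

print(converged(history=[10.0] * 40, f_avg=9.2, gen=40))  # stagnant -> True
```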
