
A Second Review Report

On
“Understanding of Genetic Algorithm
& Welded Beam Design”
By
Jhaveri Ronak Kirtikumar
(ID NO-2018H1430036H)
Study Oriented Project

Under Prof. A Vasan


SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS OF
CE G514 - STRUCTURAL OPTIMIZATION COURSE

BITS PILANI, HYDERABAD CAMPUS


February 2018
LIST OF CONTENTS

1 - Various Search Techniques used for Optimization
2.1 - Metaheuristic Methods – Hill Climbing
2.2 - Metaheuristic Methods – Simulated Annealing
2.3 - Metaheuristic Methods – Tabu Search
2.4 - Metaheuristic Methods – Evolutionary Algorithms
2.5 - Metaheuristic Methods – Genetic Algorithms
3.1 - Nature Inspired – Genetic Algorithm
3.2 - Darwin's Biological Theory of Evolution
3.3 - Basics of Genetic Algorithm
3.4 - Genetic Algorithm – Evolutionary Process
4.1 - Chromosome Encoding – Binary, Real, Permutation, Tree
4.2 - Initialization
4.3 - Selection – Roulette Wheel, Remainder Stochastic Sampling, Elitism, Tournament Selection, etc.
4.4 - Crossover and Mutation
4.5 - Generating Offspring
5 - Summary and Flowchart of Genetic Algorithm
6.1 - Basics of Welding Process
6.2 - Problem Statement
6.3 - Parameters Involved while Modelling
6.4 - Generating Cost Function
6.5 - Various Engineering Relationships used while Coding
6.6 - Design Constraints to be adopted
6.7 - Bounds on Variables + Procedure for Coding
6.8 - Results by Genetic Algorithm Tool-Box
6.9 - Results by Genetic Algorithm MATLAB Coding
6.10 - Graphical Results
7.1 - 72-Bar 3D Truss Problem + Nodes + Elements + Properties + Objective Function
7.2 - Co-ordinates Assigned + Forces + Supports + Variables
7.3 - MATLAB Coding
7.4 - Genetic Algorithm Coding
7.5 - Results
CHAPTER 1 – SEARCH TECHNIQUES FOR OPTIMIZATION

Fig 1
Calculus-based techniques find the best functional value by moving along an increasing gradient for a maximum and a decreasing gradient for a minimum. They work by finding the first-order derivative, equating it to zero to obtain the stationary points, and then evaluating each stationary point through the second-order derivative: if it is less than zero, the function has a maximum at that point; if greater than zero, a minimum; and if it equals zero, a saddle point, which can be resolved further by examining higher-order derivatives in an intelligent manner to find the best optimal solution among the candidates. In non-linear programming, calculus-based methods such as Newton-Raphson and Steepest Descent are used to find the best value of the objective function.
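The report's own code is in MATLAB; as an illustrative Python sketch only, the calculus-based recipe above (solve f'(x) = 0, then check the sign of f''(x)) can be shown on the made-up function f(x) = x^3 - 3x, whose stationary points are x = ±1:

```python
# Classify a stationary point of f(x) = x**3 - 3*x, following the
# calculus-based recipe: solve f'(x) = 0 by Newton-Raphson, then inspect
# the sign of f''(x). The function choice is an illustrative assumption.

def fp(x):   # first derivative of f(x) = x**3 - 3*x
    return 3 * x**2 - 3

def fpp(x):  # second derivative
    return 6 * x

def stationary_point(x, tol=1e-10):
    """Newton-Raphson on f'(x) = 0: x <- x - f'(x)/f''(x)."""
    for _ in range(100):
        step = fp(x) / fpp(x)
        x -= step
        if abs(step) < tol:
            break
    return x

x_star = stationary_point(2.0)            # converges to the stationary point x = 1
kind = "minimum" if fpp(x_star) > 0 else "maximum"
print(round(x_star, 6), kind)             # 1.0 minimum
```

Starting instead near x = -1 would converge to the other stationary point, where f''(-1) = -6 < 0 indicates a maximum.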

Enumerative techniques may be depth-first or breadth-first, and also include dynamic programming methods, another class of search techniques that is enumerative in nature. In all of these methods, computations are carried out in stages by breaking the problem into interdependent sub-problems whose solutions together generate a feasible solution for the entire problem. They have wide applications, such as shortest-route problems, linear and non-linear programming, integer linear and non-linear programming, cargo loading, reliability problems and capital budgeting.

Metaheuristic methods are nature-inspired, iterative techniques. The solutions they find are somewhat inexact, resulting in near-optimal solutions. They combine randomization with heuristic methods, working iteratively alongside a subordinate heuristic, and are therefore guided random search techniques. Rather than starting from a single point in the solution space, they can start from multiple points, performing a random search that tries to explore and exploit the entire search space.

For example, suppose we have a function with multiple peaks and troughs, so that both local and global maxima exist in the solution space. A plain heuristic method finds the first peak it encounters and gets trapped there, so the value obtained may only be a local maximum. This limitation is overcome by metaheuristic methods, which look for the best solution using a guided random search: they start from multiple points, try to explore and exploit the search space, and bring out the best solution found. There are many such methods, which we discuss one after another.
CHAPTER 2 – METAHEURISTIC METHODS

2.1 - HILL CLIMBING

Fig 2

The hill climbing method always moves towards a point with a better functional value (going uphill, if we are searching for the maximum). From a given search space, we start from a random point, identify all of its neighbouring points, and move to the neighbour with the best functional value. This is repeated until every neighbouring point has a lower functional value, at which stage a peak has been reached.

As shown in the figure, if we start climbing from random point A, this technique reaches the peak that is the global maximum. But if we start from random point B, it reaches another peak that is only a local maximum; the global maximum cannot be reached and we do not get the best optimal solution. This limitation is overcome by the next technique.
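As a Python sketch only (the report's implementations are in MATLAB), the behaviour described above can be reproduced on a made-up two-peak function standing in for Fig 2: starting near the global peak reaches it, while starting near the smaller peak gets trapped there.

```python
import math

# Hill climbing on a one-dimensional function with two peaks. The starting
# point decides which peak is reached, mirroring points A and B in Fig 2.
# The function and the step size are illustrative assumptions.

def f(x):
    # global maximum near x = 6 (height ~2), local maximum near x = 2 (height ~1)
    return 2 * math.exp(-(x - 6)**2) + math.exp(-(x - 2)**2)

def hill_climb(x, step=0.1, iters=1000):
    for _ in range(iters):
        best = max((x - step, x, x + step), key=f)
        if best == x:            # no neighbour is better: a peak is reached
            break
        x = best
    return x

print(round(hill_climb(5.0), 1))  # climbs to the global peak near x = 6
print(round(hill_climb(1.0), 1))  # gets trapped on the local peak near x = 2
```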

2.2 - SIMULATED ANNEALING TECHNIQUE

Simulated annealing is a variant of hill climbing that also allows the current point to move downhill. A probabilistic test decides whether a selected random point moves uphill or downhill, in such a way that the search tends towards the global maximum of the function. Additionally, it explores the search space in such a manner that the starting point does not influence the final solution much.

Coming back to Figure 2: if we start from random point B, the probabilistic acceptance of downhill moves lets the search escape the local peak and reach the global maximum, which finally leads to the optimal solution. That is the main advantage of this technique over hill climbing.
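A minimal Python sketch of this idea (again on a made-up two-peak function; the cooling schedule and neighbourhood are illustrative assumptions) accepts a downhill move with probability exp(delta / T), so a start at the local peak can still escape:

```python
import math
import random

# Simulated annealing for maximization: uphill moves are always accepted,
# downhill moves with probability exp(delta / T), where delta < 0 and T is
# a temperature that is gradually cooled.

def f(x):
    # global maximum near x = 6, local maximum near x = 2 (as in Fig 2)
    return 2 * math.exp(-(x - 6)**2) + math.exp(-(x - 2)**2)

def simulated_annealing(x, seed=0):
    rng = random.Random(seed)
    T = 1.0
    best = x
    while T > 1e-3:
        cand = x + rng.uniform(-1.0, 1.0)        # random neighbour
        delta = f(cand) - f(x)
        if delta > 0 or rng.random() < math.exp(delta / T):
            x = cand                             # downhill moves possible
        if f(x) > f(best):
            best = x
        T *= 0.99                                # cooling schedule
    return best

# Even starting from the local peak at x = 2, the search can reach the
# global peak near x = 6.
print(round(simulated_annealing(2.0), 2))
```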
2.3 - TABU SEARCH TECHNIQUE

Tabu search is basically a guided local search method that explores the solution space beyond local optimality using memory-based strategies. An initial solution is obtained using initial memory structures, and the neighbourhood is explored subject to certain aspiration criteria that keep the search moving in the right direction.

Based on previous results held in memory, moves that would take the search in an unprofitable direction are blocked by tabu restrictions. The best neighbour is therefore selected not only on merit but also taking into account the tabu restrictions as well as the aspiration criteria.

Specialized procedures such as strategic oscillation and path relinking are adopted to update the best solution and the memory structures, using both short-term and long-term memory. The technique runs through successive iterations as long as improvements continue.
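The short-term memory idea can be sketched in a few lines of Python on a made-up discrete landscape (tabu tenure, neighbourhood and fitness values are all illustrative assumptions): recently visited points may not be revisited, which forces the search past the dip after a local optimum.

```python
from collections import deque

# Minimal tabu search sketch: points are integers, the neighbours of i are
# i - 1 and i + 1, and a short tabu list (recency memory) blocks revisits.

def tabu_search(values, start, tenure=3, iters=20):
    """values: dict mapping point -> fitness."""
    current = best = start
    tabu = deque(maxlen=tenure)                 # short-term memory
    for _ in range(iters):
        tabu.append(current)
        neighbours = [n for n in (current - 1, current + 1)
                      if n in values and n not in tabu]
        if not neighbours:
            break
        current = max(neighbours, key=values.get)   # best admissible neighbour
        if values[current] > values[best]:
            best = current
    return best

# Landscape with a local optimum at point 2 and the global optimum at point 6;
# plain hill climbing from 0 would stop at 2.
landscape = {0: 0, 1: 5, 2: 8, 3: 4, 4: 3, 5: 7, 6: 9, 7: 1}
print(tabu_search(landscape, start=0))  # 6
```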

2.4 - EVOLUTIONARY ALGORITHM

Fig 3

Evolutionary algorithms are nature-inspired algorithms that mimic Darwin's theory of evolution by natural selection for problem solving. Darwin's theory is one of survival of the fittest: the basic law of nature is that those who survive grow further and the others die. The same idea applies when finding the optimal solution in a given search space: the fittest solutions participate further in the next iterations and the others are discarded.

As shown in Figure 3, a cell has a nucleus at its centre, and the nucleus contains chromosomes, 23 pairs (46 chromosomes) in each human nucleus. Chromosomes carry DNA, which is made up of genes, and genes code for the proteins that determine the characteristics of an individual, such as eye colour, hair colour, body structure and skin colour. All of this information is carried in strings; in our case, we will represent the strings as numbers.

Now consider a population. Each chromosome represents a solution, and a fitness function evaluates how good the chromosome is, that is, how fit the population is. An initial population is generated, and from it, offspring are obtained using specific methods such as crossover and mutation. The reproduction process starts by selecting parent chromosomes through various techniques: parents with higher fitness values are selected, and offspring are generated from them through a process that largely preserves fitness. From generation to generation, the population therefore becomes fitter; the fitter individuals participate in the next round and the others are discarded. In this way we move towards the best solution.

2.5 - GENETIC ALGORITHMS

Genetic algorithms are the most important family member of the evolutionary computing techniques. They are intelligent search techniques that maintain a population of candidate solutions to a given problem and search the solution space by applying various variation operators.

The essential idea is to balance selection pressure against population diversity. Selection pressure means favouring the population members with higher fitness values. Population diversity, on the other hand, comes from having points spread all around the solution space, so that the whole space is explored rather than a small number of points; this also gives the advantage that the search does not get stuck in a local optimum. Genetic algorithms use probabilistic rules for selecting, evaluating and discarding members of the population in the given solution space.
CHAPTER 3 – BASICS OF GENETIC ALGORITHM

3.1 - GENETIC ALGORITHM – NATURE INSPIRED


Evolutionary computing methods are metaheuristics, and genetic algorithms, which come from the essential idea of genetics, are the best known among them; they are all nature-inspired algorithms. The following table compares nature with genetic algorithms:

Sr No.  Nature                                  Genetic Algorithm
1)      Environment                             Optimization Problem
2)      Individual                              Feasible Solution
3)      Individual's Adaptation                 Solution Quality
4)      Population of Species                   Set of Feasible Solutions
5)      Selection and Reproduction Processes    Recombination and Variation Operators

As we move from generation to generation, individuals improve their fitness to the environment through crossover and mutation. The same idea is adapted to optimization problems: we create a population of feasible solutions, the fitness function plays the role of the objective function, and solutions that are fitter than others have a greater chance of being selected. From the selected solutions, crossover and mutation create the new population for the next generation, and finally we obtain a set of feasible solutions with a higher average fitness value.

3.2 - HISTORY OF GENETIC ALGORITHMS – DARWIN'S THEORY

Genetic algorithms are nature-inspired algorithms that mimic Darwin's theory of evolution by natural selection for problem solving. Darwin's theory is one of survival of the fittest: the basic law of nature is that those who survive grow further and the others die.

Consider a hilly area covered with a tremendous number of trees of varying sizes. The sunlight reaching the trees in this hilly area is not uniformly distributed: the tall trees receive plenty of sunlight compared to all the others. Following the biological evolution process over many generations, we find that the tall trees survive while the smaller trees die out, because natural selection does not favour them.

These are natural situations that we observe in daily life. Consider another example, the reproduction process. The essential process of nature is that parents are selected by survival of the fittest: the fittest individuals are allowed to participate in reproduction, and their chromosomes then combine through processes like crossover and mutation to produce offspring. This is the essential biological process, and the genetic algorithm mimics it completely; we can then apply it in the best possible manner to find the optimal solution.

3.3 - BASICS OF GENETIC ALGORITHM


Genetic algorithms are inspired by natural evolution. They involve direct manipulation of the coding, achieved through crossover and mutation, and they begin the search from a set of points rather than a single point; the algorithm therefore maintains a population of feasible solutions to the problem.

Example: suppose the solution space comprises many points. The GA starts by randomly selecting points across the solution space so that the whole surrounding region is covered. Say we select a population of 8 members to define the first generation. Based on fitness value, 4 of the 8 are selected for the reproduction process, and crossover and mutation are applied to produce offspring. Suppose the second generation consists of 6 offspring plus 2 parents. Comparing the fitness values of the first generation with the second, we conclude that the second generation is fitter than the first. In this way, successive generations are produced with increasing values of the fitness (objective) function. Throughout this procedure, a balance has to be maintained between population diversity and selection pressure.

The genetic algorithm explores the solution space by moving from generation to generation, continuously evaluating the fitness function in order to reach the optimum value. Example: consider the task of carrying out a city survey to find the most knowledgeable people residing in the city. It would be foolish to go to each house and ask the questions related to our assignment. Instead, call a conference of 50 people related to this task and ask each of them to hold a conference of another 50 people, so that the population involved in the next generation becomes 50 x 50 = 2500. Likewise, through crossover and mutation, the solution space is continuously evaluated against the fitness function in order to reach the optimum value.

3.4 - GENETIC ALGORITHM EVOLUTIONARY PROCESS

The genetic algorithm process is as follows: after encoding, we select the initial population and evaluate it on the basis of fitness, then repeatedly employ recombination operators such as crossover and mutation to preserve the good portions of the strings, where in our case a string is a chromosome. The good portions of the chromosomes usually lead to an optimal or near-optimal solution. The method is applied over a desired number of generations and, if it is well designed, the population converges quickly.

Fig 4

Let us consider an example to illustrate the above cycle. Take a population of integers from 1 to 100, with a function whose maximum value we want to find. We randomly generate 4 candidates among them, say 29, 49, 87, 92. Evaluating each of them, suppose we get f(29) = 0.10, f(49) = 0.87, f(87) = 0.98, f(92) = 0.36. Based on these functional values, we conclude that 49 and 87 are the best possible parents. Through crossover and mutation, we get 4 different numbers, say 66, 13, 56, 98. These are again evaluated; the fitter ones participate in the next generation and the others are discarded.
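The parent-selection step of this worked example can be written out as a short Python sketch (the fitness numbers are the ones quoted in the text; in a real GA they would come from evaluating the objective function):

```python
# Evaluate the four random candidates and keep the two fittest as parents
# for crossover and mutation; the rest are discarded.

fitness = {29: 0.10, 49: 0.87, 87: 0.98, 92: 0.36}

ranked = sorted(fitness, key=fitness.get, reverse=True)  # best first
parents = ranked[:2]         # survive into the reproduction step
discarded = ranked[2:]       # do not participate further

print(sorted(parents))       # [49, 87]
print(sorted(discarded))     # [29, 92]
```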
Fig 5

As shown in the figure from Goldberg's reference book on genetic algorithms, we first initialize the population, then select the best few based on fitness value, generate the next population through crossover and mutation, evaluate it again, and so the cycle continues. Nature follows the same pattern: as we move from generation to generation, fitness improves through crossover and mutation. Adapted to optimization problems, we create a population of feasible solutions, use the objective function as the fitness function so that fitter solutions have a greater chance of being selected, and apply crossover and mutation to create the new population for the next generation, finally obtaining a set of feasible solutions with a higher average fitness value. The best one will be close to the optimal solution, and the entire process is called a genetic algorithm.
CHAPTER 4 – GENETIC ALGORITHM PROCESS

STEPS FOR GENETIC ALGORITHM

4.1) STEP 1 – CHROMOSOME ENCODING

The first step of a genetic algorithm is encoding. There are different methods of encoding, such as binary, real and permutation encoding. Every chromosome should represent a feasible solution; however, taking into account the constraints and bounds given for the variables, some chromosomes may decode to infeasible solutions, which are assigned a very low fitness value.

Each bit constitutes a gene, and a chromosome comprises genes. The individual value taken by a gene is called an allele. There are various types of encoding, as follows:

1) Binary Encoding –

1 0 1 0 1 1 0

Suppose we have the number 4, which is 2^2. In binary coding we can write it as 1x2^2 + 0x2^1 + 0x2^0, so the number 4 is represented as 1-0-0. Likewise, as shown in the table above, any number can be represented through a chromosome of zeroes and ones; this is binary encoding.

Binary encoding is not a natural encoding and can be difficult to apply directly, but it plays a vital role in knapsack problems.
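The positional rule above (e.g. 1x2^2 + 0x2^1 + 0x2^0 = 4) can be sketched in Python as a pair of encode/decode helpers for bit-list chromosomes:

```python
# Binary encoding sketch: a chromosome is a list of bits (genes), decoded
# with the positional rule, most significant bit first.

def decode(bits):
    """Decode a binary chromosome to an integer."""
    value = 0
    for b in bits:
        value = value * 2 + b   # shift left and add the next bit
    return value

def encode(n, length):
    """Encode integer n as a fixed-length binary chromosome."""
    return [(n >> i) & 1 for i in reversed(range(length))]

print(decode([1, 0, 0]))               # 4
print(encode(4, 3))                    # [1, 0, 0]
print(decode([1, 0, 1, 0, 1, 1, 0]))   # the 7-bit chromosome shown above
```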

2) Value or Real Encoding –

2.336 5.336 2.336 5.778 0.369 1.589 2.693

Real-number encoding plays a vital role in continuous optimization problems. Each gene is represented by a real number, a chromosome comprises such genes, and the individual value of a gene (the allele) is likewise a real number.

3) Permutation Encoding –

1 2 3 4 5 6 7 8
Permutation encoding plays a crucial role in travelling salesman, aircrew scheduling, assignment and quadratic assignment problems. Each number represents a gene. Consider a travelling salesman problem with 8 cities, in which a person has to visit every city exactly once before coming back to the city of origin; no city may be visited more than once. One such permutation encoding is 1-2-3-4-5-6-7-8: the travelling salesman starts his journey from city 1, travels through all the cities exactly once, and returns to his origin from the last city.
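A Python sketch of permutation encoding for the travelling salesman example (the distance matrix is a made-up 4-city illustration; an 8-city case works identically) shows both how a tour is scored and how validity of a permutation chromosome is checked:

```python
# Permutation encoding sketch: a chromosome is an ordering of cities, its
# cost is the length of the closed tour, and a valid chromosome must visit
# each city exactly once.

def tour_length(tour, dist):
    """Total length of a closed tour, given symmetric pairwise distances."""
    total = 0
    for a, b in zip(tour, tour[1:] + tour[:1]):   # wrap back to the origin
        total += dist[min(a, b), max(a, b)]
    return total

def is_valid(tour, n):
    """Each of the n cities must appear exactly once."""
    return sorted(tour) == list(range(1, n + 1))

# Made-up distances for a 4-city illustration
dist = {(1, 2): 10, (1, 3): 15, (1, 4): 20, (2, 3): 35, (2, 4): 25, (3, 4): 30}

print(is_valid([1, 2, 3, 4], 4))        # True  - feasible chromosome
print(is_valid([1, 2, 2, 3], 4))        # False - city 2 repeated
print(tour_length([1, 2, 3, 4], dist))  # 10 + 35 + 30 + 20 = 95
```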

4) Tree Encoding –
Whenever we want to discover a new pattern or a new formula, tree encoding plays a vital role.

All of these are different types of encoding, applicable to different kinds of problems, and the given problem must be compatible with the chosen chromosome encoding.

Fig 6
Consider the same travelling salesman problem with 4 cities. As shown in Fig 6, the solution space consists of the tours the travelling salesman can make while moving from one city to another, and the coding space consists of the chromosomes we define for the given problem statement.

A chromosome in the coding space may correspond to a feasible solution, an infeasible one, or an illegal one. For the 4-city problem, the coding 1-2-3-4 gives a feasible solution, but if any number is repeated, say 1-2-2-3, we get an infeasible or illegal solution.
At the time of encoding, one must ensure that all chromosomes come from the feasible region. If under some circumstances infeasible ones must be accommodated, they are handled by assigning them low fitness values.

There should be a one-to-one mapping between the coding space and the solution space: one chromosome must not represent more than one solution (a one-to-n mapping), and conversely no solution should be represented by several chromosomes (an n-to-one mapping).

Some important points to keep in mind while doing chromosome encoding:

1) The chromosome must decode to a solution that lies in the feasible region of the problem.

2) While encoding, one must ensure that each chromosome represents a solution to the problem.

3) There should be uniqueness: a one-to-one mapping between chromosomes and solutions to the problem.

4.2) STEP 2 – INITIALIZATION

The initialization step creates a population of solutions, either by randomization or through a local search procedure, and only feasible solutions are retained, in keeping with Darwin's nature-inspired theory of evolution.

Let us consider an example: a problem in which we are to minimize Z = f(x1, x2, x3), where each variable ranges between 0 and 15. With binary coding:

1) The maximum value, 15, is evaluated as 1x2^3 + 1x2^2 + 1x2^1 + 1x2^0 = 15 and represented in binary form as 1-1-1-1.

2) The minimum value, 0, is evaluated as 0x2^3 + 0x2^2 + 0x2^1 + 0x2^0 = 0 and represented in binary form as 0-0-0-0.

So each chromosome is a 12-bit string, with 4 bits per variable: the first 4 bits for x1, the next 4 for x2 and the last 4 for x3. By randomization we generate population members in the range 0 to 15, setting each binary digit by tossing a coin: heads is coded as 1 and tails as 0. For one chromosome, say, we get this result:

1 0 0 1 0 1 1 0 1 0 0 0

Assume that in this way we have a total of 10 chromosomes, so the coin has to be tossed 12 x 10 = 120 times. The chromosomes, along with their fitness values, are shown in the table below:
Chromosomes Fitness value
1 0 0 0 1 0 1 1 1 0 0 1 0.11
0 1 0 0 1 1 0 0 0 0 1 0 0.26
0 1 1 0 1 0 1 0 1 1 1 0 0.09
0 1 0 0 1 0 0 0 1 0 1 0 0.87
0 1 1 0 1 0 1 0 1 1 1 0 0.32
1 1 0 0 1 0 0 0 1 0 1 0 0.23
1 1 1 0 0 1 1 0 1 1 1 1 0.98
1 1 0 0 0 1 0 0 1 0 1 1 0.08
1 1 1 0 0 1 1 0 1 1 1 1 0.36
1 1 0 0 0 1 0 0 1 0 1 1 0.14
Chromosome and their Fitness Value Table

As shown in the table, the various chromosomes have been assigned fitness values. With 10 chromosomes in the initial population, we can also compute the average fitness. The chromosomes with the highest fitness values are carried forward for the next-generation evaluation and the others are discarded. In the table, chromosomes 4 and 7 have the highest values, 0.87 and 0.98, so they are considered at the time of evaluation while the rest do not participate in further iteration rounds.
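The coin-toss initialization described above can be sketched in Python (the seed is fixed only so the sketch is reproducible; a real run would not fix it):

```python
import random

# Initialization sketch: three variables in [0, 15], 4 bits each, so a
# 12-bit chromosome; each bit is a simulated coin toss (head = 1, tail = 0).
# A population of 10 chromosomes means 12 x 10 = 120 tosses in total.

def random_chromosome(bits=12, rng=random):
    return [1 if rng.random() < 0.5 else 0 for _ in range(bits)]  # coin toss per bit

def decode_variables(chrom):
    """Split a 12-bit chromosome into x1, x2, x3 (4 bits each, MSB first)."""
    return [int("".join(map(str, chrom[i:i + 4])), 2) for i in (0, 4, 8)]

rng = random.Random(42)
population = [random_chromosome(rng=rng) for _ in range(10)]

for chrom in population[:2]:
    print(chrom, "->", decode_variables(chrom))
```

For instance, the chromosome 1-0-0-1-0-1-1-0-1-0-0-0 shown earlier decodes to x1 = 9, x2 = 6, x3 = 8.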

4.3) STEP 3 – SELECTION (SURVIVAL OF FITTEST)

Various methods are available for selecting from the population whose fitness values were assigned earlier by the evaluation process.

1) Roulette Wheel Selection with or without Scaling

Fig 7 – Roulette wheel divided into regions of 20%, 10%, 15% and 55%

Roulette wheel selection depends on the fitness value assigned to each chromosome: each region of the wheel is directly proportional to a fitness value. The wheel is spun, and when it stops, a pointer indicates the region, and hence the chromosome, that is selected, so chromosomes with higher fitness values have a higher probability of being selected.

The pie chart in Fig 7 comprises 4 regions, each with its own probability of being selected according to its fitness value. With 4 chromosomes whose chances of selection are 20%, 10%, 15% and 55% respectively, whenever the roulette wheel is spun, the 55% chromosome is the most likely to be selected, while the low-fitness chromosomes are rarely carried into the further evaluation process.

A modified version of roulette wheel selection uses scaling, which often performs better than selection without scaling: a particular scale is applied to the chromosomes' fitness values before the wheel is spun and the pointer selects a value.

2) Remainder Stochastic Sampling with or without Replacement

In remainder stochastic sampling, chromosomes are selected from the population, as in roulette wheel selection, with a survival probability proportional to their fitness value. It comprises deterministic as well as probabilistic sampling, and sometimes a combination of the two is used when selecting from various sample sizes.

Deterministic sampling selects on the basis of the average fitness value. Coming back to the earlier example: in the table, chromosomes 4 and 7 had the highest values, 0.87 and 0.98, so they are considered at the time of evaluation and the rest do not participate in further iteration rounds.

Probabilistic sampling selects through a randomization process.

Fig 8
Here, the chance of a particular population member being selected is the ratio of its fitness value to the sum of the fitness values of all members taking part in the evaluation. The selection probability of the k-th individual is

P_k = f_k / Σf

where P_k is the selection probability of the k-th individual, f_k is the fitness value of the k-th individual, and Σf is the sum of the fitness values over the whole population of size n.

As shown in Figure 8, based on the roulette wheel we select the population for recombination, which feeds the crossover and mutation steps that come next. The chance of selection is the fitness value of the individual divided by the sum of all fitness values in the population; those with higher fitness values are selected into the next iteration more often, and the rest are discarded.
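The rule P_k = f_k / Σf can be sketched directly in Python; using the four fitness shares from Fig 7 (20%, 10%, 15%, 55% are illustrative values), repeated spins select each chromosome roughly in proportion to its slice:

```python
import random

# Roulette-wheel selection implementing P_k = f_k / sum(f): each chromosome
# occupies a slice of the wheel proportional to its fitness.

def roulette_select(fitnesses, rng=random):
    total = sum(fitnesses)
    spin = rng.random() * total          # where the pointer stops
    running = 0.0
    for k, f in enumerate(fitnesses):
        running += f
        if spin <= running:
            return k
    return len(fitnesses) - 1            # guard against float rounding

# Four chromosomes with selection shares 20%, 10%, 15% and 55% (as in Fig 7)
fitnesses = [20, 10, 15, 55]
rng = random.Random(0)
counts = [0] * 4
for _ in range(10000):
    counts[roulette_select(fitnesses, rng)] += 1
print(counts)   # roughly proportional to [2000, 1000, 1500, 5500]
```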

Example –


Chromosome No.   Fitness Value   F / Favg
C1               20              20/35 = 4/7  = 0.571
C2               30              30/35 = 6/7  = 0.857
C3               40              40/35 = 8/7  = 1.142
C4               50              50/35 = 10/7 = 1.428
Total            140             -
Average          35              -

As shown in the example, consider 4 chromosomes with fitness values 20, 30, 40 and 50. Dividing each fitness value by the average (35) gives expected counts of 0.571, 0.857, 1.142 and 1.428, so the chance of selection of C3 and C4 is higher. The integer parts give C3 and C4 one guaranteed copy each (1.142 = 1 + 0.142 and 1.428 = 1 + 0.428), and the remainders 0.571, 0.857, 0.142 and 0.428 for C1, C2, C3 and C4 are then used probabilistically to fill the remaining slots in the reproduction process that produces the offspring.
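A Python sketch of remainder stochastic sampling on these numbers (the Bernoulli-trial filling of the remaining slots is one common variant, assumed here for illustration) shows the guaranteed copies coming from the integer parts:

```python
import random

# Remainder stochastic sampling: the expected count f / f_avg is split into
# a deterministic integer part (guaranteed copies) and a fractional
# remainder used as a selection probability for the remaining slots.

def remainder_stochastic_sampling(fitnesses, rng=random):
    n = len(fitnesses)
    favg = sum(fitnesses) / n
    expected = [f / favg for f in fitnesses]       # 0.571, 0.857, 1.142, 1.428
    selected = []
    for k, e in enumerate(expected):
        selected += [k] * int(e)                   # deterministic copies
    remainders = [e - int(e) for e in expected]
    while len(selected) < n:                       # fill the rest stochastically
        k = rng.randrange(n)
        if rng.random() < remainders[k]:
            selected.append(k)
    return selected

rng = random.Random(1)
picks = remainder_stochastic_sampling([20, 30, 40, 50], rng)
print(sorted(picks))   # C3 and C4 (indices 2 and 3) are always included
```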

3) Stochastic Tournament Selection with a Size of Two

We have seen the selection methods above, and scaling is where the fitness values are modified based on certain criteria. In tournament selection, we pick two chromosomes and hold a tournament between them, the better one winning. In the example above, this would mean a tournament between C3 and C4.

4) Elitism

This method works on the principle of sending chromosomes directly to the next generation: the parents with the highest fitness values are copied to the next generation unchanged, without undergoing crossover or mutation, so the best solutions found so far are never lost.

In all of these selection procedures, a balance has to be maintained between population diversity and selection pressure in order to reach a near-optimal solution.

4.4) STEP 4 – REPRODUCTION (CROSS – OVER AND MUTATION)

A) Crossover operation: There are various reproduction possibilities. Crossover is applied based on a crossover probability: we select parents from the population and, for each selected pair, decide according to the crossover probability whether to perform crossover; if so, points along the strings are chosen at random and the crossover operation is performed on the selected strings.
Assume a crossover probability of 70%: each selected pair undergoes crossover with a 70% chance, and otherwise the parents pass through unchanged. As shown in Figure 10, offspring 1 and 2 are produced by the crossover operation from parents 1 and 2. The first 3 characteristics and last 2 characteristics of offspring 1 are inherited from parent 1, and those in between from parent 2; offspring 2 is produced likewise from the complementary characteristics inherited from parents 1 and 2. In this way crossover produces a new generation that is as fit as possible.

Fig 10

There are Various types of Crossover Operations which are likely to be seen-

1) Single-Point Crossover

Chromosome 1 - 11011 | 10110001
Chromosome 2 - 10110 | 01111110
Offspring 1  - 10110 | 10110001
Offspring 2  - 11011 | 01111110

2) Two-Point Crossover

Chromosome 1 - 11011 | 1011 | 0001
Chromosome 2 - 10110 | 0111 | 1110
Offspring 1  - 10110 | 1011 | 1110
Offspring 2  - 11011 | 0111 | 0001

3) Arithmetic Crossover

Chromosome 1 - 11011 | 10110001
Chromosome 2 - 10110 | 01111110
Offspring (AND) - 10010 | 00110000

The crossover operation is a critical feature of genetic algorithms: it greatly accelerates the search early in the evolution of a population and leads to effective combination of sub-solutions found on different chromosomes.
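The three crossover variants above can be sketched in Python on the same bit strings; the cut points are fixed here to reproduce the listed examples (a real GA draws them at random), and each function returns both offspring, possibly in the opposite order to the listing:

```python
# Single-point, two-point and arithmetic (bitwise AND) crossover on the
# 13-bit parent strings from the examples above.

def single_point(p1, p2, cut):
    """Swap the tails after one cut point."""
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

def two_point(p1, p2, c1, c2):
    """Swap the middle segment between two cut points."""
    return (p1[:c1] + p2[c1:c2] + p1[c2:],
            p2[:c1] + p1[c1:c2] + p2[c2:])

def arithmetic_and(p1, p2):
    """Arithmetic crossover: bitwise AND of the two parents."""
    return "".join("1" if a == b == "1" else "0" for a, b in zip(p1, p2))

parent1 = "1101110110001"   # 11011 | 10110001
parent2 = "1011001111110"   # 10110 | 01111110

print(single_point(parent1, parent2, 5))   # ('1101101111110', '1011010110001')
print(two_point(parent1, parent2, 5, 9))   # ('1101101110001', '1011010111110')
print(arithmetic_and(parent1, parent2))    # '1001000110000'
```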

Fig 11
Fig 12

Fig 13

As shown in Figure 11, a carrier mother and a carrier father in the first generation each carry a symptom. Through the process of crossover, the results are one child fully affected with the symptom, inherited partially from both parents, one healthy child with no symptoms, and carrier children partially affected with the symptom inherited from both parents. Figure 12 shows an example of crossover of parental characteristics, especially body colour, from one generation to another, and Figure 13 shows the same in terms of body colour, skills, intelligence, behaviour and so on.
B) Mutation Operation – Mutation introduces an abrupt change in the solution space by altering a particular chromosome to produce an offspring. A substantial change can therefore appear in the solution space, and as a result the population diversity suddenly increases to a large extent.

The probability of applying a mutation operation is kept very low [0.001, ....., 0.01].

If mutation is applied too aggressively, the selection pressure breaks down and a solution that was approaching convergence may never converge, because the search is disrupted by the sudden increase in population diversity.

The main difference between the cross-over and mutation operators is this: in cross-over, if two parents have fitness values 10 and 20 respectively, the offspring they produce will typically have fitness values ranging between 10 and 20. In mutation, by contrast, consider the example:

Parent 1 – binary encoding 1-0-1-0 = 1×2³ + 0×2² + 1×2¹ + 0×2⁰ = 10

1 0 1 0

Offspring 1 – binary encoding 0-0-1-0 = 0×2³ + 0×2² + 1×2¹ + 0×2⁰ = 2

0 0 1 0

Here only the first bit of Parent 1 has changed (1 to 0), yet the decoded value changes drastically from 10 to 2 through mutation.

Let us consider another example of mutation –

Parent 1:
1 0 0 1 0 1 0 1 1 0 1 0

Probability of mutation:
0.58 0.67 0.58 0.59 0.01 0.58 0.98 0.76 0.02 0.76 0.58 0.39

Offspring 1:
1 0 0 1 1 1 0 1 0 0 1 0

As the table above shows, the string has 12 bits in total, so the per-bit mutation probability is taken as 1/12 = 0.0833. Bits whose random value is less than or equal to 0.0833 participate in mutation. Here the fifth and ninth genes, with values 0.01 and 0.02, fall below 0.0833, so these two bits are mutated and the rest are kept as they are. There are various methods of mutation; the one applied here is flipping a bit from 0 to 1 or from 1 to 0.
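The per-bit scheme above can be sketched as follows (a minimal illustration; the random values per bit are supplied externally so the document's example is reproduced exactly):

```python
def mutate(parent, rand_vals, pm):
    """Flip each bit whose pre-drawn random value is <= the mutation probability pm."""
    return [1 - b if r <= pm else b for b, r in zip(parent, rand_vals)]

parent = [1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0]
rand_vals = [0.58, 0.67, 0.58, 0.59, 0.01, 0.58, 0.98, 0.76, 0.02, 0.76, 0.58, 0.39]
offspring = mutate(parent, rand_vals, 1 / 12)  # flips only the 5th and 9th bits
```

This reproduces the table above: only the fifth and ninth genes (random values 0.01 and 0.02) fall at or below 1/12 ≈ 0.0833, giving the offspring 1 0 0 1 1 1 0 1 0 0 1 0.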
Fig 14 – Positive Mutation Fig 15 – Negative Mutation

Figures 14 and 15 illustrate positive and negative mutation. Einstein's parents were not nearly as clever as he was, so there is a drastic jump in IQ level between parents and offspring. Figure 15 illustrates negative symptoms that are transferred from parents to offspring through mutation.

4.5) STEP 5 – GENERATING OFFSPRING

Population Chromosomes → Cross-Over and Mutation → Offspring Chromosomes
Fig 16 – Generating Offspring

The figure above represents how the offspring is generated by applying cross-over and mutation. The same steps are followed to create an offspring: initialize the population, select some chromosomes by the techniques mentioned above, generate new chromosomes by cross-over and mutation, and finally produce the offspring.
CHAPTER 5 – SUMMARY OF GENETIC ALGORITHM

SUMMARY OF GENETIC ALGORITHM

Begin
{
Initialize Population;
Evaluate Population;
While {Termination Criteria Not Satisfied}
{
Select Parents for Reproduction;
Perform Cross-Over and Mutation;
Evaluate Population;
}
}

The flowchart above summarizes the genetic algorithm. First, initialize the population and evaluate it; then, while the termination criterion is not satisfied, select parents for reproduction, perform cross-over and mutation, and evaluate the population again. In this manner, ever fitter generations are produced, bringing the solution close to the optimal value.
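The summary pseudocode can be mirrored by a minimal loop (a toy sketch, not the report's MATLAB code: the bit-count fitness, tournament size and parameter values are illustrative assumptions):

```python
import random

def evolve(pop_size=30, n_bits=20, pc=0.7, pm=0.01, max_gen=50, seed=1):
    random.seed(seed)
    fitness = lambda c: sum(c)  # toy fitness: number of 1-bits (maximize)
    # initialize population
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)  # evaluate population
    for _ in range(max_gen):      # while termination criterion not satisfied
        nxt = []
        while len(nxt) < pop_size:
            # select parents for reproduction (tournament of 3)
            p1 = max(random.sample(pop, 3), key=fitness)
            p2 = max(random.sample(pop, 3), key=fitness)
            # perform cross-over
            if random.random() < pc:
                cut = random.randrange(1, n_bits)
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            else:
                c1, c2 = p1[:], p2[:]
            # perform mutation
            for c in (c1, c2):
                for i in range(n_bits):
                    if random.random() < pm:
                        c[i] = 1 - c[i]
            nxt += [c1, c2]
        pop = nxt[:pop_size]
        best = max(pop + [best], key=fitness)  # evaluate, keep best-so-far
    return best
```

Tracking the best-so-far chromosome makes the reported best fitness monotonically non-decreasing across generations, mirroring the "more and more fit generations" behaviour described above.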

There are some important points to be kept in mind while performing Genetic
Algorithm. They are as follows:

1) The cross-over rate should generally be kept high, about 70-80%.
2) The mutation rate should be kept as low as possible, about 0.5-1%.
3) The cross-over and mutation operators depend upon the encoding chosen for the problem statement.
4) The population size should not be kept very high, otherwise the algorithm slows down. The best population size depends upon the length of the encoded strings.
5) The selection criterion may be any of the methods such as roulette wheel, elitism, steady-state selection etc., depending upon the type of problem.
6) The encoding can likewise be binary, real, permutation etc., depending upon the problem.
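Point 5 can be illustrated with roulette-wheel selection, where each chromosome is picked with probability proportional to its fitness (a generic sketch, not code from this report):

```python
import random

def roulette(fitnesses, rng=random):
    """Return the index of one individual, chosen with probability fitness_i / total."""
    total = sum(fitnesses)
    r = rng.uniform(0, total)  # spin the wheel
    acc = 0.0
    for i, f in enumerate(fitnesses):
        acc += f
        if r <= acc:
            return i
    return len(fitnesses) - 1  # guard against floating-point round-off
```

Over many spins with fitnesses [1, 2, 3, 4], index 3 is returned about 40% of the time, i.e. in proportion to its share of the total fitness.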
CHAPTER 6 – WELDED BEAM DESIGN
6.1 - BASICS OF WELDING

Welding can be defined as a process of joining metallic parts by heating them to a suitable temperature, with or without the application of pressure. Welding is well known to be an economical as well as an efficient method of obtaining a permanent joint between metallic parts, compared to bolted connections.

Welding processes are broadly classified into the following two well-known groups:

1) Processes that use heat alone to join the two parts, such as a beam attached to a rigid member by welding.
2) Processes that use a combination of heat and pressure to join the two parts, for example a cantilever beam loaded at its end and attached to a rigid wall by welding.

The welding process that uses heat alone is technically termed a fusion welding process. In this method the parts to be joined are held in position and molten metal is supplied to the joint. The molten metal can come either from the parts themselves (the parent metal) or, if required, from an external filler metal supplied to the joint. The joining surfaces of the two parts become plastic or even molten under the action of heat, and when the joint solidifies the two parts fuse into a single unit.
6.2 - PROBLEM STATEMENT

Fig 17

Figure 17 shows a beam welded onto a rigid member. The beam supports a load P (6000 lb) at its end, at a distance L from the rigid member as shown. The beam is welded onto the rigid member with upper and lower welds, each of length l and thickness h. The beam has a rectangular cross-section of width b and height t. The beam material used here is high yield strength deformed bar, Fe415.

Coming now to the optimization problem, we are to minimize the overall cost, including set-up, material, fabrication, labour and operational costs, by varying the size of the weld and the member dimensions. In optimization terms these are our decision variables, explained below.

The design variables are:

1) X = X1 = h, thickness of the welds on the rigid member
2) Y = X2 = l, length of the welds on the rigid member
3) Z = X3 = t, height of the beam welded onto the rigid member
4) W = X4 = b, width of the beam welded onto the rigid member

The design constraints include limits on the shear stress, bending stress, buckling load and end deflection, which are explained in the upcoming pages of this project. These limits are as per IS 800:2007.
6.3 - PARAMETERS INVOLVED IN PROBLEM
1) Young's modulus (E) used in the welding design = 30 × 10⁶ psi
2) Shear modulus (G) of the beam material used in the design = 12 × 10⁶ psi
3) Overhang length of the member (L) = 14 inch
4) Design shear stress of the weld (τmax) = 13600 psi
5) Design normal stress of the beam material (σmax) = 30000 psi
6) Maximum deflection (δmax) which the beam may undergo = 0.25 inch
7) Load (P) applied at the end of the member = 6000 lb

6.4 - COST FUNCTION

The main optimization of this problem concerns the design of the welded beam and is driven by the cost of the weld assembly. The major cost components of such an assembly are:

(1) A1 = set-up labour cost, including fixtures for the set-up and holding of the bar during welding
(2) A2 = welding labour cost, including operation and maintenance expense
(3) A3 = material cost, for both the beam and the weld.

The optimization function is therefore: Minimize cost f(x) = A1 + A2 + A3.

1) Set-up cost – It is assumed that this component can be treated as a weldment, because a welding assembly line exists, and that fixtures for the set-up and holding of the bar during welding are readily available. The set-up cost can therefore be ignored in this total-cost model of the welded beam.

2) Welding labour cost – It is assumed that the welding will be done by machine at a total cost of $10 per hr (including operating and maintenance expense), and that the machine can lay down one cubic inch of weld in 6 min. With V_w the weld volume, the labour cost is then

A2 = (10 $/hr) × (6 min/in³) × (1 hr / 60 min) × V_w = 1 ($/in³) × V_w

3) Material cost – The material cost for both the beam and the weld is expressed as:

A3 = C3·V_W + C4·V_B

Here, C3 = cost per unit volume of weld material, $/in³ = (0.37)(0.283) = 0.10471
C4 = cost per unit volume of bar stock, $/in³ = (0.17)(0.283) = 0.04811
V_B = volume of the bar, in³

Now, V_W = h²l = X²Y and V_B = tb(L + l) = ZW(L + Y) in terms of our decision variables.

Substituting these values into the material cost equation gives

A3 = (0.10471)(X²Y) + (0.04811)(ZW(L + Y))

4) Total cost – The total cost is obtained by adding the set-up labour cost, the welding labour cost and the material cost:

Total cost = (X²Y) + (0.10471)(X²Y) + (0.04811)(ZW(L + Y))
           = (1.10471)(X²Y) + (0.04811)(ZW(14 + Y))
           = (1.10471)(X1²X2) + (0.04811)(X3X4(14 + X2))
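The final cost expression can be checked numerically with a small helper (an illustrative sketch of the formula above, with the decision variables as plain arguments):

```python
def weld_cost(x1, x2, x3, x4):
    """Total fabrication cost: welding labour + material, per the derivation above."""
    return 1.10471 * x1**2 * x2 + 0.04811 * x3 * x4 * (14.0 + x2)
```

For example, `weld_cost(1, 1, 1, 1)` gives 1.10471 + 0.04811 × 15 = 1.82636.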

6.5 - ENGINEERING RELATIONSHIP

To complete the model, it is necessary to define the weld stress, bar bending stress, bar deflection and the buckling load.

1) The primary stress acting over the weld throat area is represented by

T1 = 1 / (√2 · X1 · X2)

2) The secondary torsional stress is represented by

T2 = [(L + X2/2) · R] / [√2 · X1 · X3 · (X2²/3 + (X1 + X3)²)]

Here, R = √(X2² + (X1 + X3)²)

3) The weld stress is obtained by combining the primary stress acting over the weld throat area with the secondary torsional stress:

T(x) = P · √(T1² + T2² + 2·T1·T2·X2 / R)

With the help of these equations we can calculate the shear stress governing the weld design.

The other two important properties, which will be useful in calculating the various constraints, are:

1) Bar bending stress – the maximum bending stress equals

6PL / (X4·X3²) = (6 × 6000 × 14) / (X4·X3²) = 504000 / (X4·X3²)

2) Buckling load – the buckling load is calculated by

P_c = [4.013 · √(E·G·X3²·X4⁶ / 36) / L²] · [1 − (X3 / (2L)) · √(E / (4G))]

Substituting the values of E, L and G into the buckling load equation, we finally get

P_c = 64746.022 (1 − 0.028234·X3) · X3·X4³
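The two numerical coefficients in the last expression follow directly from the given parameter values, as this quick check shows (noting that √(X3²·X4⁶/36) = X3·X4³/6):

```python
import math

E, G, L = 30e6, 12e6, 14.0
# leading coefficient: 4.013 * sqrt(E*G) / (6 * L^2)
coef = 4.013 * math.sqrt(E * G) / (6 * L**2)
# slope of the bracketed term: (1/(2L)) * sqrt(E/(4G))
slope = math.sqrt(E / (4 * G)) / (2 * L)
print(coef, slope)  # approximately 64746.02 and 0.0282346
```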

6.6 - DESIGN CONSTRAINTS

1) The shear stress at the beam support location cannot exceed the maximum allowable for the material:

τ(x) ≤ 13600

2) The maximum bending stress at the beam support location cannot exceed the maximum yield strength of the material:

504000 / (X4·X3²) ≤ 30000

3) The applied load acting at the end of the beam must be less than the buckling load:

6000 ≤ 64746.022 (1 − 0.028234·X3) · X3·X4³

6.7 - BOUNDS ON VARIABLES

0.125 ≤ X(1) ≤ 5
0.1 ≤ X(2) ≤ 10
0.1 ≤ X(3) ≤ 10
0.125 ≤ X(4) ≤ 5
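Putting the constraints and bounds together, a feasibility check can be sketched as follows (using the report's formulas; the function names are illustrative):

```python
import math

def weld_constraints(x1, x2, x3, x4, P=6000.0, L=14.0):
    """Return the three inequality constraints, each satisfied when g <= 0."""
    R = math.sqrt(x2**2 + (x1 + x3)**2)
    t1 = 1.0 / (math.sqrt(2) * x1 * x2)                        # primary stress term
    t2 = (L + x2 / 2) * R / (math.sqrt(2) * x1 * x3 * (x2**2 / 3 + (x1 + x3)**2))
    tau = P * math.sqrt(t1**2 + t2**2 + 2 * t1 * t2 * x2 / R)  # weld shear stress
    sigma = 504000.0 / (x4 * x3**2)                            # bending stress
    p_c = 64746.022 * (1 - 0.028234 * x3) * x3 * x4**3         # buckling load
    return [tau - 13600.0, sigma - 30000.0, P - p_c]

def in_bounds(x1, x2, x3, x4):
    return 0.125 <= x1 <= 5 and 0.1 <= x2 <= 10 and 0.1 <= x3 <= 10 and 0.125 <= x4 <= 5
```

At the tool-box solution reported later, (0.338, 1.041, 9.99, 0.235), all three values come out negative, i.e. the design is feasible, with the buckling constraint nearly active.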

Thus, knowing the objective function, the constraints and the bounds on the variables, we can design the optimal weld size, keeping all these invariants in mind, solve the problem by genetic algorithm in MATLAB, and try to obtain the best possible output through the various GA processes discussed above in this report.

The basic steps involved while coding the algorithm are as follows:

1) Set the population size, cross-over and mutation probabilities, maximum number of generations, initial generation and initial fitness value. (These are all explained in Chapters 3 & 4.)
2) Generate the initial population randomly, as discussed in Topics 4.2 & 4.3.
3) Calculate each chromosome's fitness value and set its genes using any encoding, as explained in 4.1 & 4.3.
4) If the current generation number is less than the maximum assigned earlier, go to the next step. If they are equal, output the best solution found so far and terminate.
5) Reproduce new chromosomes using the cross-over and mutation processes, and perform selection based on any of the techniques discussed in Chapter 4.
6) Select a chromosome from the population and calculate its fitness.
7) Determine the best chromosome, keeping in mind the constraints, bounds on variables, engineering relationships and parameters; calculate its reliability interval and fitness, register them, and return to Step 3.

6.8 – Results by Genetic Algorithm Tool-Box:

A) Fitness Function :

function y = ronak2(x)
y = (1.10471)*(x(1)^2)*x(2) + (0.04811)*(x(3)*x(4)*(14+x(2)));
end

B) Constraints Equation:
function [c,c_eq] = pooja2(x)
% Shear stress tau(x), built from the primary and secondary stress terms
t1 = 1/(sqrt(2)*x(1)*x(2));
R  = sqrt(x(2)^2 + (x(1)+x(3))^2);
t2 = ((14 + x(2)/2)*R) / (sqrt(2)*x(1)*x(3)*(x(2)^2/3 + (x(1)+x(3))^2));
tau = 6000*sqrt(t1^2 + t2^2 + 2*t1*t2*x(2)/R);
% Inequality constraints, each expressed as c <= 0
c = [tau - 13600; ...                                    % shear stress limit
     504000/(x(4)*x(3)^2) - 30000; ...                   % bending stress limit
     6000 - 64746.022*(1 - 0.028234*x(3))*x(3)*x(4)^3];  % applied load <= buckling load
c_eq = [];
end

C) Genetic Algorithm Tool-Box Results:

Thus, the results came out as follows:


1) x1 = 0.338
2) x2 = 1.041
3) x3 = 9.99
4) x4 = 0.235
5) Cost = 1.81341059281867

6.9 – Results by Genetic Algorithm MATLAB Coding:


A-1) MATLAB Coding for Objective Function & Constraints:

function [Cineq,Ceq] = nonlcon(x)
sigma = 5.04e5 ./ (x(:,3).^2 .* x(:,4));                 % bending stress
P_c = 64746.022*(1 - 0.028236*x(:,3)).*x(:,3).*x(:,4).^3; % buckling load
tp = 6e3./sqrt(2)./(x(:,1).*x(:,2));                      % primary shear stress
tpp = 6e3./sqrt(2) .* (14+0.5*x(:,2)).*sqrt(0.25*(x(:,2).^2 + (x(:,1) + x(:,3)).^2)) ./ ...
    (x(:,1).*x(:,2).*(x(:,2).^2 / 12 + 0.25*(x(:,1) + x(:,3)).^2)); % secondary shear stress
tau = sqrt(tp.^2 + tpp.^2 + ...
    (x(:,2).*tp.*tpp)./sqrt(0.25*(x(:,2).^2 + (x(:,1) + x(:,3)).^2))); % combined shear stress
Cineq = [tau - 13600, sigma - 3e4, 6e3 - P_c];
Ceq = [];
end

function F = objval(x)
% Objective: fabrication cost of the welded beam
F = 1.10471*x(:,1).^2.*x(:,2) + 0.04811*x(:,3).*x(:,4).*(14.0+x(:,2));
end

function z = pickindex(x,k)
z = objval(x); % evaluate objectives
z = z(k); % return objective k
end

A-2) GENETIC ALGORITHM CODING:

% GENETIC ALGORITHM (GA) WELDED BEAM DESIGN PROBLEM


clear;clc;close;
%% GA PARAMETERS WITH THEIR USUAL RANGE USED IN LITERATURE
S = 200; % POPULATION SIZE
D = 4; % NUMBER OF DESIGN VARIABLES
C1F = 0.5; % FINAL STRONGEST PARAMETER [0-4]
C1I = 2.5; % INITIAL STRONGEST PARAMETER [0-4]
C2F = 2.5; % FINAL SIZE-STRONGEST PARAMETER [0-4]
C2I = 0.5; % INITIAL SIZE-STRONGEST PARAMETER [0-4]
WF = 0.4; % FINAL INERTIA WEIGHT [0-1]
WI = 0.9; % INITIAL INERTIA WEIGHT [0-1]
XMAX = 3; % MAXIMUM POSITION
ITERMAX = 300; % MAXIMUM NO OF ITERATIONS
ITER = 0; % CURRENT ITERATION
PB = cell(1,S); % BEST POSITION CELL
PBV = ones(1,S)*1E50; % BEST POSITION OBJECTIVE VALUE
GBV = 1E50; % GLOBAL BEST POSITION OBJECTIVE VALUE
GB = zeros(1,D); % GLOBAL BEST POSITION
X = cell(1,S); % CELL POSITION
OBJX = zeros(1,S); % OBJECTIVE FUNCTION VALUE OF PARTICLE
V = cell(1,S); % GENERATION
DUMMY = zeros(1,length(D)); % DUMMY LIST
DT = 1; % TIME STEP
%% WELDED BEAM DESIGN PROBLEM DEFINITION (all units are in the British system)
P = 6000; % APPLIED TIP LOAD
E = 30e6; % YOUNGS MODULUS OF BEAM
G = 12e6; % SHEAR MODULUS OF BEAM
L = 14; % LENGTH OF CANTILEVER PART OF BEAM
PCONST = 1000000; % PENALTY FUNCTION CONSTANT
TAUMAX = 13600; % MAXIMUM ALLOWED SHEAR STRESS
SIGMAX = 30000; % MAXIMUM ALLOWED BENDING STRESS
DELTMAX = 0.25; % MAXIMUM ALLOWED TIP DEFLECTION
M = @(x) P*(L+x(2)/2); % BENDING MOMENT AT WELD POINT
R = @(x) sqrt((x(2)^2)/4+((x(1)+x(3))/2)^2); % TORSIONAL STRESS DISTANCE TERM
J = @(x) 2*(sqrt(2)*x(1)*x(2)*((x(2)^2)/12+((x(1)+x(3))/2)^2)); % POLAR MOMENT OF INERTIA
OBJ = @(x) 1.10471*x(1)^2*x(2)+0.04811*x(3)*x(4)*(14+x(2)); % OBJECTIVE FUNCTION
SIGMA = @(x) (6*P*L)/(x(4)*x(3)^2); % BENDING STRESS
DELTA = @(x) (4*P*L^3)/(E*x(4)*x(3)^3); % TIP DEFLECTION
PC = @(x) 4.013*E*sqrt((x(3)^2*x(4)^6)/36)*(1-x(3)*sqrt(E/(4*G))/(2*L))/(L^2); % BUCKLING LOAD
TAUP = @(x) P/(sqrt(2)*x(1)*x(2)); % PRIMARY SHEAR STRESS
TAUPP = @(x) (M(x)*R(x))/J(x); % SECONDARY (TORSIONAL) SHEAR STRESS
TAU = @(x) sqrt(TAUP(x)^2+2*TAUP(x)*TAUPP(x)*x(2)/(2*R(x))+TAUPP(x)^2); % SHEAR STRESS
G1 = @(x) TAU(x)-TAUMAX; % MAX SHEAR STRESS CONSTRAINT
G2 = @(x) SIGMA(x)-SIGMAX; % MAX BENDING STRESS CONSTRAINT
G3 = @(x) x(1)-x(4); % WELD COVERAGE CONSTRAINT
G4 = @(x) 0.10471*x(1)^2+0.04811*x(3)*x(4)*(14+x(2))-5; % MAX COST CONSTRAINT
G5 = @(x) 0.125-x(1); % MIN WELD THICKNESS CONSTRAINT
G6 = @(x) DELTA(x)-DELTMAX; % MAX TIP DEFLECTION CONSTRAINT
G7 = @(x) P-PC(x); % BUCKLING LOAD CONSTRAINT
PHI = @(x) OBJ(x) + PCONST*(max(0,G1(x))^2+max(0,G2(x))^2+max(0,G3(x))^2+ ...
    max(0,G4(x))^2+max(0,G5(x))^2+max(0,G6(x))^2+max(0,G7(x))^2); % PENALTY FUNCTION
%% GA INITIALIZE CELL POSITION
for I = 1:S
    for JJ = 1:D % index JJ avoids clobbering the function handle J defined above
        DUMMY(JJ) = rand()*XMAX;
    end
    X{I} = DUMMY;
    V{I} = X{I}/DT;
end
%% GA MAIN LOOP
while ITER < ITERMAX
    C1 = (C1F-C1I)*(ITER/ITERMAX)+C1I; % CHANGING POSITION PARAMETER
    C2 = (C2F-C2I)*(ITER/ITERMAX)+C2I; % CHANGING SIZE PARAMETER
    W = (WF-WI)*(ITER/ITERMAX)+WI; % CHANGING INERTIA WEIGHT
ITER = ITER + 1;
    for I = 1:S
        OBJX(I) = PHI(X{I});
        if OBJX(I) < PBV(I) && OBJX(I) >= 0 % FINDING BEST LOCAL SOLUTION
            PB{I} = X{I};
            PBV(I) = OBJX(I);
        end
    end
    if min(OBJX) < GBV && min(OBJX) >= 0 % FINDING BEST GLOBAL SOLUTION
        GB = X{find(OBJX == min(OBJX), 1)};
        GBV = min(OBJX);
    end
    for I = 1:S % CELLS POSITION UPDATE
        R1 = rand(); % random number [0-1]
        R2 = rand(); % random number [0-1]
        V{I} = W*V{I}+C1*R1*(PB{I}-X{I})/DT+C2*R2*(GB-X{I})/DT; % CELLS UPDATE
        X{I} = X{I}+V{I}*DT; % CELLS POSITION UPDATE
    end
disp(['Best Fitness Value >> ' num2str(GBV)]);
end
disp(['Decision Variables Best Value >> ' num2str(GB)]);

B) MATLAB OUTPUT’S

Best Fitness Value >> 1734326.4551
Best Fitness Value >> 78987.4656
Best Fitness Value >> 10526.463
Best Fitness Value >> 8.364
Best Fitness Value >> 6.0479
Best Fitness Value >> 3.12934
Best Fitness Value >> 2.589361
(each value is printed once per iteration and repeats until a better solution is found)
Best Fitness Value >> 1.833729
Decision Variables Best Value >> 0.2067 5.0491 8.0306 0.2092

C) CONCLUSIONS

X1 = 0.2067
X2 = 5.0491
X3 = 8.0306
X4 = 0.2092
Cost = 1.8337 (best fitness value)

1) Thickness of the Weld = X1 = 0.2067


2) Length of the Weld = X2 = 5.0491
3) Height of the Beam = X3 = 8.0306
4) Width of the Beam = X4 = 0.2092

6.10 – GRAPHICAL RESULTS

Thus, the graphs are obtained in MATLAB by following the same seven steps listed in Section 6.7: set the GA parameters, generate the initial population, evaluate fitness, check the termination criterion, reproduce by cross-over and mutation with selection, evaluate the chromosomes, and register the best chromosome found.

Chapter 7 – 72 Bar Truss 3D Problem

7.1+2 – Problem Statement + Variables + Elements+ Co-ordinates + Objective Fun.:

The material density is 0.1 lb/in³ and the modulus of elasticity is 10,000 ksi. The members are subjected to stress limitations of ±25 ksi. The uppermost nodes are subjected to displacement limitations of ±0.25 in, in both the x and y directions. There are 72 members, which are divided into 16 groups, as follows:

(1) A1–A4, (2) A5–A12, (3) A13–A16, (4) A17–A18, (5) A19–A22, (6) A23–A30, (7) A31–A34, (8) A35–A36, (9) A37–A40, (10) A41–A48, (11) A49–A52, (12) A53–A54, (13) A55–A58, (14) A59–A66, (15) A67–A70, (16) A71–A72.

The discrete variables are selected from the following set:
D = {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9, 3.0, 3.1, 3.2} (in²)
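Since the GA works on integer genes while the member areas are discrete, each gene can simply index into the set D (an illustrative decoding step, not the report's exact code):

```python
# discrete candidate areas, in square inches
D = [round(0.1 * k, 1) for k in range(1, 33)]  # 0.1, 0.2, ..., 3.2

def decode(gene):
    """Map an integer gene in 1..32 to its cross-sectional area, clamping out-of-range genes."""
    idx = min(max(int(gene), 1), len(D))
    return D[idx - 1]
```

Here `decode(1)` gives 0.1 in² and `decode(32)` gives 3.2 in²; a 16-gene chromosome then fixes the areas of all 16 member groups.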

Number of Nodes – 20
Number of Elements – 72

The following table shows the node numbers and the coordinates associated with them.

Node No. X Y Z
1 0 0 275.59
2 273.26 0 196.8
3 236.65 136.63 196.8
4 136.65 236.65 196.8
5 0 273.26 196.8
6 492.12 0 118.11
7 475.35 127.37 118.11
8 426.188 246.06 118.11
9 347.198 347.981 118.11
10 246.06 426.188 118.11
11 127.37 475.35 118.11
12 -475.35 127.37 118.11
13 426.188 -246.06 118.11
14 475.35 -127.37 118.11
15 625.59 0 0
16 541.777 312.798 0
17 312.795 541.777 0
18 0 625.59 0
19 -312.8 541.777 0
20 312.795 -541.78 0

The following table shows the modulus of elasticity, the cross-sectional area, and the start and end nodes of each element.

E = Modulus of
Elasticity
A=Area of Cross-
Section

Element E A Start End


1 2E+11 0.7854 1 2
2 2E+11 0.7854 1 3
3 2E+11 0.7854 1 4
4 2E+11 0.7854 1 5
5 2E+11 0.7854 1 6
6 2E+11 0.7854 1 7
7 2E+11 0.7854 1 8
8 2E+11 0.7854 1 9
9 2E+11 0.7854 1 10
10 2E+11 0.7854 1 11
11 2E+11 0.7854 1 12
12 2E+11 0.7854 1 13
13 2E+11 0.7854 5 20
14 2E+11 0.7854 5 4
15 2E+11 0.7854 5 19
16 2E+11 0.7854 19 20
17 2E+11 0.7854 19 4
18 2E+11 0.7854 19 18
19 2E+11 0.7854 18 4
20 2E+11 0.7854 4 3
21 2E+11 0.7854 4 17
22 2E+11 0.7854 17 18
23 2E+11 0.7854 17 3
24 2E+11 0.7854 17 16
25 2E+11 0.7854 3 16
26 2E+11 0.7854 3 15
27 2E+11 0.7854 16 15
28 2E+11 0.7854 3 2
29 2E+11 0.7854 2 15
30 2E+11 0.7854 2 14
31 2E+11 0.7854 14 15
32 2E+11 0.7854 2 6
33 2E+11 0.7854 14 7
34 2E+11 0.7854 2 13
35 2E+11 0.7854 13 11
36 2E+11 0.7854 10 11
37 2E+11 0.7854 13 11
38 2E+11 0.7854 9 12
39 2E+11 0.7854 13 12
40 2E+11 0.7854 12 12
41 2E+11 0.7854 12 13
42 2E+11 0.7854 12 13
43 2E+11 0.7854 8 13
44 2E+11 0.7854 12 14
45 2E+11 0.7854 12 12
46 2E+11 0.7854 11 15
47 2E+11 0.7854 11 15
48 2E+11 0.7854 11 16
49 2E+11 0.7854 11 10
50 2E+11 0.7854 7 17
51 2E+11 0.7854 10 18
52 2E+11 0.7854 6 18
53 2E+11 0.7854 10 19
54 2E+11 0.7854 5 19
55 2E+11 0.7854 4 10
56 2E+11 0.7854 9 10
57 2E+11 0.7854 3 9
58 2E+11 0.7854 2 13
59 2E+11 0.7854 2 9
60 2E+11 0.7854 9 9
61 2E+11 0.7854 1 10
62 2E+11 0.7854 8 11
63 2E+11 0.7854 8 9
64 2E+11 0.7854 20 12
65 2E+11 0.7854 8 13
66 2E+11 0.7854 19 13
67 2E+11 0.7854 8 14
68 2E+11 0.7854 7 8
69 2E+11 0.7854 19 14
70 2E+11 0.7854 7 15
71 2E+11 0.7854 18 15
72 2E+11 0.7854 7 16

The following table shows the applied loads:

ID X-Force Y-Force Z-force


1 0 0 -13.49
2 0 0 -6.744
3 0 0 -6.744
4 0 0 -6.744
5 0 0 -6.744
6 0 0 -6.744
7 0 0 -6.744
8 0 0 -6.744
9 0 0 -6.744
10 0 0 -6.744
11 0 0 -6.744
12 0 0 -6.744
13 0 0 -6.744
14 0 0 -6.744
15 0 0 -2.247
16 0 0 -2.247
17 0 0 -2.247
18 0 0 -2.247
19 0 0 -2.247
20 0 0 -2.247
21 0 0 -2.247
22 0 0 -2.247
23 0 0 -2.247
24 0 0 -2.247
25 0 0 -2.247
26 0 0 -2.247
27 0 0 -2.247
28 0 0 -2.247
29 0 0 -2.247
30 0 0 -2.247
31 0 0 -2.247
32 0 0 -2.247
33 0 0 -2.247
34 0 0 -2.247
35 0 0 -2.247
36 0 0 -2.247
37 0 0 -2.247

7.3 – MATLAB Coding :

1) MATLAB Code for Displacement + Force Vector + Stiffness Matrix + Element Forces + Weight:
% Project Title: Implementation of Genetic Algorithm (GA)

function [TW,disp_vec,axial_vec,force_vec]= Sphere(Q)


n=20; % no of nodes
ne=72; % no of elements
ndof=3; % no of DOF's
nen=2; % no of nodes for each element
nee=nen*ndof;

% area of the elements in sq.m

ar(1,1:4)=Q(1);
ar(1,5:12)=Q(2);
ar(1,13:16)=Q(3);
ar(1,17:18)=Q(4);
ar(1,19:22)=Q(5);
ar(1,23:30)=Q(6);
ar(1,31:34)=Q(7);
ar(1,35:36)=Q(8);
ar(1,37:40)=Q(9);
ar(1,41:48)=Q(10);
ar(1,49:52)=Q(11);
ar(1,53:54)=Q(12);
ar(1,55:58)=Q(13);
ar(1,59:66)=Q(14);
ar(1,67:70)=Q(15);
ar(1,71:72)=Q(16);
fixed_dof=[49,50,51,52,53,54,55,56,57,58,59,60]; % constrained DOF's
L=[1 2 3 4 5 1 2 3 3 4 1 4 8 6 5 5 5 8 5 6 7 8 9 5 6 7 7 8 5 8 12 10 9 9 9
12 9 10 11 12 13 9 10 11 11 12 9 12 16 14 13 13 13 16 13 14 15 16 17 13 14
15 15 16 13 16 20 18 17 17 17 20;5 6 7 8 2 6 7 6 8 7 8 5 7 7 6 8 7 6 9 10
11 12 6 10 11 10 12 11 12 9 11 11 10 12 11 10 13 14 15 16 10 14 15 14 16 15
16 13 15 15 14 16 15 14 17 18 19 20 14 18 19 18 20 19 20 17 19 19 18 20 19
18]; % element connecting matrix
coord=[0 120 120 0 0 120 120 0 0 120 120 0 0 120 120 0 0 120 120 0;0 0 0 0
60 60 60 60 120 120 120 120 180 180 180 180 240 240 240 240;0 0 120 120 0 0
120 120 0 0 120 120 0 0 120 120 0 0 120 120]; % coordinate matrix for the 20 nodes, in inches
load=[0;0;0;5;5;-
5;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0
;0;0;0;0;0;0;0]; % load vector at the free DOFs, in kips
E=1e4; % Young's modulus, in ksi
den=0.1; % density of material, in lb/in^3

%% Calculation
%getting the ID matrix from the information given
ID=zeros(ndof,n);
r=size(ID);
c=r(1,2);
r=r(1,1);
number=1;
for c1=1:c
for r1=1:r
ID(r1,c1)=number;
number=number+1;
end
end

fixed_dof=sort(fixed_dof);
fixed1=fixed_dof;
node=1:1:n*ndof;
free_dof=node;
for i=1:length(fixed_dof)
free_dof(fixed1(i))=[];
fixed1=fixed1-1;
end
clear fixed1;
free_dof=sort(free_dof);
free_dof=free_dof';
for e=1:ne
for i=1:nen
for a=1:ndof
p=ndof*(i-1)+a;
LM(p,e)=ID(a,L(i,e));
end
end
end
K=zeros(n*ndof);
lambda_vec=[];
h_vec=[];
%assembly
for e=1:ne
    localcoord=[coord(:,L(1,e)) coord(:,L(2,e))]; % coordinates of each element's two nodes
    h=0;
    for count=1:ndof % accumulate squared length of the member
        temp=(localcoord(count,2)-localcoord(count,1))^2;
        h=h+temp;
    end
    h=sqrt(h);
    h_vec=[h_vec;h];
    lambda=[];
    for count=1:ndof % direction cosines: first entry is lambda x, then lambda y, and so on
        lambda=[lambda;(localcoord(count,2)-localcoord(count,1))/h];
    end
lambda_vec=[lambda_vec lambda];
A=lambda*lambda';
k=(E*ar(e))/h*[A -A;-A A];
for p=1:nee
P=LM(p,e);
for q=1:nee
Q=LM(q,e);
K(P,Q)=K(P,Q)+k(p,q); % GLobal Stiffness matrix
end
end
end
lambda_vec=lambda_vec';
K1=K;
%applying boundary conditions
for counter=1:length(fixed_dof)
K1(fixed_dof(counter),:)=[];
K1(:,fixed_dof(counter))=[];
fixed_dof=fixed_dof-1;
end
d=K1\load;
disp_vec=zeros(n*ndof,1);
for count=1:length(free_dof)
disp_vec(free_dof(count))=d(count);
end

%% Weight
TW=0;
for i=1:ne
W=den*ar(i)*h_vec(i);
TW=TW+W; % weight in Kg
end

%% post processing
D_big=[];
count=1;
for i=1:n
D=[];
for j=1:ndof
t=disp_vec(count);
count=count+1;
D=[D;t];
end
D_big=[D_big D];
end
axial_vec=[];
force_vec=[];
for i=1:ne
d1=lambda_vec(i,:)*D_big(:,L(1,i));
d2=lambda_vec(i,:)*D_big(:,L(2,i));
axial=(E/h_vec(i))*(d2-d1);
axial_vec=[axial_vec;axial];
force=ar(i)*axial_vec(i);
force_vec=[force_vec;force];
end
axial_vec;
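In this post-processing step, each member's end displacements are projected onto the member axis through the direction cosines; the axial stress is then (E/h)(u2 − u1), and the member force is area times stress, matching the loop above. A small Python sketch of the same recovery (illustrative helper, not the report's code):

```python
def axial_force(E, area, h, lam, d1, d2):
    # Project the nodal displacement vectors d1, d2 onto the member axis,
    # then stress = (E/h)*(u2 - u1) and force = area * stress
    u1 = sum(l * d for l, d in zip(lam, d1))
    u2 = sum(l * d for l, d in zip(lam, d2))
    stress = (E / h) * (u2 - u1)
    return area * stress
```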

% R=sqrt((x(2)^2)/4+((x(1)+x(3))/2)^2);
% P=6000;
% L=14;
% E=30e6;
% G=12e6;
% Px=((4.013*E*sqrt((x(3)^2*x(4)^6)/36))/(L^2))*(1-(x(3)/(2*L))*sqrt(E/(4*G)));
% tap=P/(sqrt(2)*x(1)*x(2));
% Q=P*(L+(x(2)/2));
% J=2*(sqrt(2)*x(1)*x(2)*((x(2)^2/12)+((x(1)+x(3))/2)^2));
% tapp=(Q*R)/J;
% ta=sqrt(tap^2+((2*tap*tapp*x(2))/(2*R))+tapp^2);
% sigma=(6*P*L)/(x(4)*(x(3)^2));
% delta=(4*P*(L^3))/(E*(x(3)^3)*x(4));
% c=[ta-13600;
% sigma-30000;
% x(1)-x(4);
% 0.10471*x(1)^2+0.04811*x(3)*x(4)*(14+x(2))-5;
% 0.125-x(1);
% delta-0.25;
% ];
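The commented-out block above is the welded beam constraint set carried over from Chapter 6 (design vector x = [x1, x2, x3, x4]; note the buckling load Px is computed in the comments but not included in the constraint vector c). A Python transcription for reference, assuming the constants shown in the listing:

```python
import math

def welded_beam_constraints(x):
    # Transcription of the commented-out MATLAB constraint block; x is 0-indexed,
    # so x[0]..x[3] correspond to x(1)..x(4). Px (buckling) is omitted, as in c.
    P, L, E = 6000.0, 14.0, 30e6
    R = math.sqrt(x[1]**2 / 4 + ((x[0] + x[2]) / 2)**2)
    tap = P / (math.sqrt(2) * x[0] * x[1])                     # primary shear
    Q = P * (L + x[1] / 2)
    J = 2 * (math.sqrt(2) * x[0] * x[1] * (x[1]**2 / 12 + ((x[0] + x[2]) / 2)**2))
    tapp = Q * R / J                                           # torsional shear
    ta = math.sqrt(tap**2 + 2 * tap * tapp * x[1] / (2 * R) + tapp**2)
    sigma = 6 * P * L / (x[3] * x[2]**2)                       # bending stress
    delta = 4 * P * L**3 / (E * x[2]**3 * x[3])                # tip deflection
    return [ta - 13600,
            sigma - 30000,
            x[0] - x[3],
            0.10471 * x[0]**2 + 0.04811 * x[2] * x[3] * (14 + x[1]) - 5,
            0.125 - x[0],
            delta - 0.25]
```

A design is feasible when every entry of the returned vector is non-positive.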

2) MATLAB Code for Population + Sampling Size + Reproduction:


function [pop, SortOrder] = SortPopulation(pop)

% Get Costs
Costs = [pop.Cost];

% Sort the Costs Vector


[~, SortOrder]=sort(Costs);

% Apply the Sort Order to Population


pop = pop(SortOrder);

end

function L = RandSample(P, q, replacement)

if ~exist('replacement','var')
replacement = false;
end

L = zeros(q,1);
for i=1:q
L(i) = randsample(numel(P), 1, true, P);
if ~replacement
P(L(i)) = 0;
end
end

end
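RandSample above relies on `randsample` from the Statistics and Machine Learning Toolbox; without replacement, a drawn index's weight is zeroed so it cannot be picked again. A toolbox-free Python sketch of the same weighted draw, using a cumulative-sum roulette (helper name is hypothetical):

```python
import random

def rand_sample(P, q, replacement=False):
    # Draw q indices with probability proportional to the weights in P;
    # without replacement, the drawn index's weight is zeroed afterwards,
    # mirroring the MATLAB helper above.
    P = list(P)
    out = []
    for _ in range(q):
        total = sum(P)
        r = random.random() * total
        acc = 0.0
        for i, w in enumerate(P):
            acc += w
            if r < acc:
                out.append(i)
                break
        else:
            out.append(len(P) - 1)  # guard against float round-off
        if not replacement:
            P[out[-1]] = 0.0
    return out
```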

function b = IsInRange(x, VarMin, VarMax)

b = all(x>=VarMin) && all(x<=VarMax);

end

3) MATLAB Code for Cost Function involving all Variables:


clc;
clear;
close all;

%% Problem Definition

% Objective Function
CostFunction = @(Q)Sphere(Q);

nVar = 16; % Number of Unknown Variables


VarSize = [1 nVar]; % Unknown Variables Matrix Size

VarMin = 1; % Lower Bound of Unknown Variables


VarMax = 60; % Upper Bound of Unknown Variables

DS=[0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 ...
1.9 2 2.1 2.2 2.3 2.4 2.5 2.6 2.8 3 3.2 3.4 3.5 3.6 3.7 3.8 3.9 4 4.1 4.2 ...
4.3 4.4 4.5 4.6 4.7 4.8 4.9 5 ...
5.1 5.2 5.3 5.4 5.5 5.6 5.7 5.8 5.9 6 6.1 6.2 6.3 6.4]; % in sq.in

L=[1 2 3 4 5 1 2 3 3 4 1 4 8 6 5 5 5 8 5 6 7 8 9 5 6 7 7 8 5 8 12 10 9 9 9 ...
12 9 10 11 12 13 9 10 11 11 12 9 12 16 14 13 13 13 16 13 14 15 16 17 13 14 ...
15 15 16 13 16 20 18 17 17 17 20; ...
5 6 7 8 2 6 7 6 8 7 8 5 7 7 6 8 7 6 9 10 ...
11 12 6 10 11 10 12 11 12 9 11 11 10 12 11 10 13 14 15 16 10 14 15 14 16 15 ...
16 13 15 15 14 16 15 14 17 18 19 20 14 18 19 18 20 19 20 17 19 19 18 20 19 18]; % element connecting matrix

%% GA Parameters

MaxIt = 500; % Maximum Number of Iterations

nPopMemeplex = 5; % Memeplex Size


nPopMemeplex = max(nPopMemeplex, nVar+1); % Nelder-Mead Standard

nMemeplex = 5; % Number of Memeplexes


nPop = nMemeplex*nPopMemeplex; % Population Size

I = reshape(1:nPop, nMemeplex, []);
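Because MATLAB's `reshape` fills column-major, row j of I holds ranks j, j+nMemeplex, j+2*nMemeplex, and so on; after the population is sorted by cost, each memeplex therefore receives an even spread of good and poor members. A Python sketch of that partitioning (1-based ranks to match the listing):

```python
def memeplex_indices(n_pop, n_memeplex):
    # Column-major reshape of 1..n_pop into n_memeplex rows: memeplex j
    # gets every n_memeplex-th ranked member starting at rank j, so each
    # memeplex mixes strong and weak solutions.
    return [list(range(j + 1, n_pop + 1, n_memeplex)) for j in range(n_memeplex)]
```

For example, 6 members split over 2 memeplexes gives rows [1, 3, 5] and [2, 4, 6], exactly the rows of `reshape(1:6, 2, [])`.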

% GA Parameters
ga_params.q = max(round(0.3*nPopMemeplex),2); % Number of Parents
ga_params.alpha = 3; % Number of Offsprings
ga_params.beta = 5; % Maximum Number of Iterations
ga_params.sigma = 2; % Step Size
ga_params.CostFunction = CostFunction;
ga_params.VarMin = VarMin;
ga_params.VarMax = VarMax;

%% Initialization

% Empty Individual Template


empty_individual.Position = [];
empty_individual.Cost = [];

% Initialize Population Array


pop = repmat(empty_individual, nPop, 1);

% Initialize Population Members


for i=1:nPop
ar=ceil(60*rand(1,nVar));
pop(i).Position=ar;
% pop(i).Position = unifrnd(VarMin, VarMax, VarSize);
pop(i).Cost = CostFunction(((pop(i).Position)));
end

% Sort Population
pop = SortPopulation(pop);

% Update Best Solution Ever Found


BestSol = pop(1);

% Initialize Best Costs Record Array


BestCosts = nan(MaxIt, 1);

%% GA Main Loop

for it = 1:MaxIt

ga_params.BestSol = BestSol;

% Initialize Memeplexes Array


Memeplex = cell(nMemeplex, 1);

% Form Memeplexes and Run GA


for j = 1:nMemeplex
% Memeplex Formation
Memeplex{j} = pop(I(j,:));

% Run GA
Memeplex{j} = RunGA(Memeplex{j}, ga_params);

% Insert Updated Memeplex into Population


pop(I(j,:)) = Memeplex{j};
end

% Sort Population
pop = SortPopulation(pop);

% Update Best Solution Ever Found


BestSol = pop(1);

% Store Best Cost Ever Found


BestCosts(it) = BestSol.Cost;

% Show Iteration Information


disp(['Iteration ' num2str(it) ': Best Cost = ' num2str(BestCosts(it))]);

end

%% Results

figure;
plot(BestCosts, 'LineWidth', 2);
% semilogy(BestCosts, 'LineWidth', 2);
xlabel('Iteration');
ylabel('Best Cost');
grid on;
pop(1)

4) MATLAB Coding for Genetic Algorithm:


% Project Title: Implementation of Genetic Algorithm (GA) in the 72-Bar 3D Truss

function pop = RunGA(pop, params)

%% Genetic Algorithm Parameters


q = params.q; % Number of Parents
alpha = params.alpha; % Number of Offsprings
beta = params.beta; % Maximum Number of Iterations
sigma = params.sigma;
CostFunction = params.CostFunction;
VarMin = params.VarMin;
VarMax = params.VarMax;
VarSize = size(pop(1).Position);
BestSol = params.BestSol;

nPop = numel(pop); % Population Size


P = 2*(nPop+1-(1:nPop))/(nPop*(nPop+1)); % Selection Probabilities
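This is linear ranking selection: after sorting, rank i (with 1 the best) gets probability 2(nPop+1−i)/(nPop(nPop+1)), which decreases linearly and sums to 1 over the population. A short Python sketch for checking the weights (illustrative helper):

```python
def rank_probabilities(n_pop):
    # Linear ranking: P_i = 2*(n+1-i) / (n*(n+1)) for rank i = 1..n (1 = best);
    # the weights form a decreasing sequence that sums to 1.
    return [2 * (n_pop + 1 - i) / (n_pop * (n_pop + 1))
            for i in range(1, n_pop + 1)]
```

With n = 4 the weights are 0.4, 0.3, 0.2, 0.1, so the best-ranked member is four times as likely to be selected as the worst.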

% Calculate Population Range (Smallest Hypercube)


LowerBound = pop(1).Position;
UpperBound = pop(1).Position;
for i = 2:nPop
LowerBound = min(LowerBound, pop(i).Position);
UpperBound = max(UpperBound, pop(i).Position);
end

%% GA Main Loop

for it = 1:beta

% Select Parents
L = RandSample(P,q);
B = pop(L);
% Generate Offsprings
for k=1:alpha

% Sort Population
[B, SortOrder] = SortPopulation(B);
L = L(SortOrder);

% Weakest Discard & Strongest Consider


ImprovementStep2 = false;
Censorship = false;

% Improvement Step 1
NewSol1 = B(end);
Step = sigma*rand(VarSize).*(B(1).Position-B(end).Position);
NewSol1.Position = B(end).Position + Step;
if IsInRange(NewSol1.Position, VarMin, VarMax)
NewSol1.Cost = CostFunction(NewSol1.Position);
if NewSol1.Cost<B(end).Cost
B(end) = NewSol1;
else
ImprovementStep2 = true;
end
else
ImprovementStep2 = true;
end

% Improvement Step 2
if ImprovementStep2
NewSol2 = B(end);
Step = sigma*rand(VarSize).*(BestSol.Position-B(end).Position);
NewSol2.Position = B(end).Position + Step;
if IsInRange(NewSol2.Position, VarMin, VarMax)
NewSol2.Cost = CostFunction(NewSol2.Position);
if NewSol2.Cost<B(end).Cost
B(end) = NewSol2;
else
Censorship = true;
end
else
Censorship = true;
end
end

% Censorship
if Censorship
B(end).Position = unifrnd(LowerBound, UpperBound);
B(end).Cost = CostFunction(B(end).Position);
end

end

% Return Back Subcomplex to Main Complex


pop(L) = B;

end

end
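The core update inside RunGA moves the worst member of the subcomplex toward the best one, x_new = x_worst + σ·r·(x_best − x_worst) with r uniform in [0, 1) per coordinate; if that fails, the same step is retried toward the global best, and finally the member is re-randomized (the censorship branch). A Python sketch of the basic step (hypothetical helper, illustrative only):

```python
import random

def improvement_step(worst, best, sigma):
    # Move the worst member toward the best: x_new = x_w + sigma*r*(x_b - x_w),
    # with a fresh random factor r in [0, 1) for each coordinate.
    return [w + sigma * random.random() * (b - w) for w, b in zip(worst, best)]
```

With σ = 2, as set in ga_params above, the step can overshoot the best member, which helps the search escape local minima.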
7.5 – RESULTS:

Thus, the minimum weight came out to be 399.87 lb.
