
3720821- Optimization Techniques

By

UPADHYAY DISHANK SURESHKUMAR


Enrolment No: 180280708014
M.E. (CAD-CAM) 2018-20

Under Guidance of

Mr. A.V. Patel, Associate Professor, Mechanical Engineering Department,
L.D. College of Engineering, Ahmedabad, Gujarat

Dr. U.A. Patel, Assistant Professor, Mechanical Engineering Department,
L.D. College of Engineering, Ahmedabad, Gujarat

Gujarat Technological University


In Partial Fulfilment of the Requirements for the
Degree of Master of Engineering
in Mechanical (CAD-CAM)

Mechanical Engineering Department


L. D. College of Engineering, Ahmedabad – 380 001

Page 1 of 8
CERTIFICATE

This is to certify that report entitled “Optimization Techniques” submitted by


Mr. Upadhyay Dishank Sureshkumar (Enrolment No. 180280708014)
towards the fulfilment of the requirement for the subject Optimization
Techniques (3720821), 2nd Semester, in the field of Master of Engineering in
Mechanical Engineering (CAD-CAM) of Gujarat Technological University.
This work has been carried out under my guidance and supervision and is up
to my satisfaction.

Date:
Signature:

Guide:
Mr. A.V. Patel
Associate Professor
Mechanical Engineering Department,
L. D. College of Engineering, Ahmedabad

Head of Mechanical Engineering Department,
L. D. College of Engineering, Ahmedabad

Lab Manual
3720821 – Optimization Techniques

List of Experiments

Sr No   Name of Topic                                                      Date of Completion   Signature

1.      Optimization of conventional design problem.
2.      Single variable optimization.
3.      Optimization using the simplex method.
4.      Nonlinear programming using the unrestricted search method.
5.      Nonlinear programming using the golden section method.
6.      Nonlinear programming using the Fibonacci method.
7.      Nonlinear programming using Newton, quasi-Newton and secant methods.
8.      Nonlinear programming using the univariate method.
9.      Nonlinear programming using the indirect search method.
10.     Optimization using evolutionary algorithms.

Experiment No. 1 Date: _______________
Optimization of conventional design problem.

Basic Steps to solve the optimization problem using MATLAB 2018.

Step 1: Defining the objective function. Create a separate m file for defining the function.

Step 2: Defining the constraint function. Create a separate m file for defining the constraint function with its equality and inequality constraints.

Step 3: Main file. A program using the fundamentals of calling the objective function and constraint function files from a main m file.

Figure 1 MATLAB Programs or Functions for Solving Optimization Problems

Problem Statement:

Design a uniform column of tubular section, with hinge joints at
both ends (Figure 2), to carry a compressive load P = 2500 kgf
for minimum cost. The column is made up of a material that has
a yield stress (σy) of 500 kgf/cm2, modulus of elasticity (E) of
0.85 × 10^6 kgf/cm2, and weight density (ρ) of 0.0025
kgf/cm3. The length of the column is 250 cm. The stress induced
in the column should be less than the buckling stress as well as
the yield stress. The mean diameter of the column is restricted to
lie between 2 and 14 cm, and columns with thicknesses outside
the range 0.2 to 0.8 cm are not available in the market. The cost
of the column includes material and construction costs and can
be taken as 5W + 2d, where W is the weight in kilograms force
and d is the mean diameter of the column in centimeters.
Figure 2 Problem Statement

Solution using MATLAB:

Objective Function: (For Minimization)

f(X) = 5W + 2d
     = 5ρlπ dt + 2d
     = 9.82 x1 x2 + 2 x1     (x1 = mean diameter d, x2 = thickness t)

Constraint Functions:
g1(X) = 2500/(π x1 x2) − 500 ≤ 0                      (induced stress ≤ yield stress)
g2(X) = 2500/(π x1 x2) − π²(x1² + x2²)/0.5882 ≤ 0     (induced stress ≤ buckling stress)
g3(X) = −x1 + 2.0 ≤ 0
g4(X) = x1 − 14.0 ≤ 0
g5(X) = −x2 + 0.2 ≤ 0
g6(X) = x2 − 0.8 ≤ 0

MATLAB Program

Step 1: m file for Objective function

function f = probofminobj(X)
f = 9.82*X(1)*X(2)+2*X(1);

Step 2: m file for Constraint function

function [c, ceq] = conprobformin(X)


c = [2500/(pi*X(1)*X(2)) - 500; 2500/(pi*X(1)*X(2)) -
(pi^2*(X(1)^2+X(2)^2))/0.5882; -X(1)+2; X(1)-14; -X(2)+0.2; X(2)-0.8];
ceq = [];

Step 3: Main file
clc;
clear all;
warning off
X0 = [7 0.4];
fprintf ('the values of function value and constrains at starting
point\n');
f = probofminobj (X0)
options = optimset ('LargeScale','off');
[X,fval] = fmincon (@probofminobj, X0, [], [], [], [], [], [],
@conprobformin, options)
fprintf ('The values of constrains at optimum solutions\n');
[c,ceq]= conprobformin (X)

Output:

the values of function value and constrains at starting point

f =

41.4960

X =

5.4510 0.2920

fval =

26.5310

The values of constrains at optimum solutions

c =

-0.0001
-0.0002
-3.4510
-8.5490
-0.0920
-0.5080

ceq =

[]
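The reported optimum can be sanity-checked by substituting X = (5.4510, 0.2920) back into the objective and constraint expressions; a small Python sketch (the expressions are transcribed from the m files above):

```python
import math

def f(x1, x2):
    # objective transcribed from probofminobj.m: f(X) = 9.82*x1*x2 + 2*x1
    return 9.82 * x1 * x2 + 2 * x1

def constraints(x1, x2):
    # inequality constraints g(X) <= 0, transcribed from conprobformin.m
    return [
        2500 / (math.pi * x1 * x2) - 500,                           # yield stress
        2500 / (math.pi * x1 * x2)
        - (math.pi**2 * (x1**2 + x2**2)) / 0.5882,                  # buckling stress
        -x1 + 2, x1 - 14,                                           # diameter bounds
        -x2 + 0.2, x2 - 0.8,                                        # thickness bounds
    ]

x1, x2 = 5.4510, 0.2920                  # optimum reported by fmincon
print(round(f(x1, x2), 4))               # ~26.53, matching fval
print(all(g <= 1e-3 for g in constraints(x1, x2)))   # True: feasible
```

The two active constraints (yield and buckling stress) sit essentially at zero, which is why the first two entries of c above are ≈ −0.0001 and −0.0002.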

Method 2: By using optimization tool box

Steps:

Click on Optimization;
MATLAB > Applications > Optimization

Problem Setup & Results


1 Select the solver as per requirement (min, max, etc.)

2 Select the algorithm as per requirement

3 Select the objective function (call the objective function m file by "@<name of file>")

4 Select derivatives as per requirement

5 Select the start point for optimization

6 Select the constraint function (call the constraint function m file by "@<name of file>")

7 If you need more information from plots, select the plots that you require
for your functions.
8 Start the solver (it will take some time to iterate your values)

9 Results will be shown here; for plots, a pop-up window will appear showing
all the plots that you selected.

Result for the current problem statement using the optimization toolbox
(copied from the Optimization Tool Result block):


-----------------------------
Optimization running.
Objective function value: 26.5309989238107
Local minimum found that satisfies the constraints.

Optimization completed because the objective function is non-decreasing in


feasible directions, to within the default value of the function tolerance,
and constraints are satisfied to within the default value of the constraint tolerance.

Plot

Experiment No. 2 Date: _______________

Optimization using classical optimization techniques

Basic Steps to solve the optimization problem using MATLAB 2018.

Step 1: Defining the objective function. Create a separate m file for defining the function.

Step 2: Defining the local optima of the function. Create a separate m file for finding the local minimum and maximum of the given function.

Step 3: Main file. A program using the fundamentals of calling the objective function and optimum-finding files from a main m file.

1. Single variable optimization

I. Local optimum of a function

Local maximum

Consider a function f(x). The point x∗ is called a local maximum if and only if there is I,
an open interval containing x∗, such that f(x∗) ≥ f(x) for all x in I.

Local minimum

Given a function f(x). The point x∗ is called a local minimum if and only if there is I, an
open interval containing x∗ , such that f (x∗) ≤ f (x) for all x in I.
Example

The following graph presents a function that has a local maximum as well as a local
minimum.

A local maximum is found at the point x = −1 since there is an open interval I1
containing x = −1 on which f(−1) ≥ f(x) for all x in I1.

A local minimum is found at the point x = 2 since there is an open interval I2


containing x = 2 on which f (2) ≤ f (x) for all x in I2.

Stationary points and critical points

In economics, production strategies of a company are determined according to objectives


sought out by the company. Sometimes we want to minimize the costs, but usually, we
want to maximize profits. Whatever the situation, we are particularly interested in
optimal values (maximum or minimum)…

Let us consider the graphs of the following functions, f(x) and g(x). Each has
a local minimum. What can we see at these points?

We must first notice that both functions cease to decrease and begin to increase at the
minimum point (x = 0). However, this transition is not made in the same manner for both.
The function f(x) goes from decreasing to increasing progressively and at the minimum
point, the slope is zero. For g(x), the passage from decreasing to increasing is abrupt,
such that the slope is not defined at the minimum. These two types of optima are treated
below as stationary and critical points.
Definition : Stationary point

Consider a continuous function f(x), differentiable at x = x∗ . The point x = x ∗ is


called a stationary point if the derivative of f is zero at that point, i.e. if f ′(x∗) = 0.

Example

f(x) = x² + 2x − 1 → f′(x) = 2x + 2

The derivative of f is zero when

2x + 2 = 0 → 2x = −2 → x = −1

x = −1 is therefore a stationary point of the function f.

Definition : Critical point

Given a function f(x), well defined at x = x∗. The point x = x∗ is called a critical
point if the derivative of f does not exist at that point.

Example

f(x) = x^(2/3)  →  f′(x) = (2/3) x^(−1/3) = 2/(3 x^(1/3))
The derivative of f does not exist when x = 0 since the denominator then takes the
value 0. x = 0 is therefore a critical point of f.

Search for a local optimum

Theorem

Given a function of x, f defined on an open interval I. A local optimum of f is


either a stationary point or a critical point.

What the previous theorem claims is that it is useless to search anywhere else but at the
stationary and critical points when looking for a local optimum. Pay attention though!
The theorem does not state that all stationary or critical points are local optima... Once the
stationary and critical points are found, you must determine the nature of these points to
know if they are minima, maxima or neither.

Study of a stationary or critical point using the first derivative

Let us revisit the graphical example that we presented above. The functions f(x) and
g(x) both have a local minimum at x = 0.

The point x = 0 is a stationary point of function f(x). However, x = 0 is a critical


point of function g(x). Nonetheless, a common phenomenon occurs in both cases: the
derivative changes sign at the minimum point. The study of the first derivative allows us
to determine the nature of a stationary or critical point.

The first derivative rule

Given the function f(x) and x = x∗ a stationary or critical point of the function,

f(x∗) is:

 a local minimum if the derivative goes from negative to positive at x∗.

 a local maximum if the derivative goes from positive to negative at x∗.
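The first derivative rule can be checked numerically by sampling the derivative on either side of the candidate point; a minimal Python sketch (using the earlier example f(x) = x² + 2x − 1, whose stationary point is x = −1):

```python
def classify(df, x_star, h=1e-4):
    # first derivative rule: the sign change of f' around x* decides its type
    left, right = df(x_star - h), df(x_star + h)
    if left < 0 < right:
        return "local minimum"
    if left > 0 > right:
        return "local maximum"
    return "neither"

df = lambda x: 2 * x + 2      # derivative of f(x) = x**2 + 2*x - 1
print(classify(df, -1.0))     # -> local minimum
```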
Given a mathematical function of a single variable, you can use the fminbnd function to find a local
minimizer of the function in a given interval. For example, consider the humps.m function, which is
provided with MATLAB®. The following figure shows the graph of humps.

x = -1:.01:2;
y = humps(x);
plot(x,y)
xlabel('x')
ylabel('humps(x)')
grid on

To find the minimum of the humps function in the range (0.3,1), use

x = fminbnd(@humps,0.3,1)
x = 0.6370

You can ask for a tabular display of output by passing a fourth argument created by the optimset
command to fminbnd:

opts = optimset('Display','iter');
x = fminbnd(@humps,0.3,1,opts)

Result

 Func-count     x           f(x)        Procedure
    1        0.567376     12.9098       initial
    2        0.732624     13.7746       golden
    3        0.465248     25.1714       golden
    4        0.644416     11.2693       parabolic
    5        0.6413       11.2583       parabolic
    6        0.637618     11.2529       parabolic
    7        0.636985     11.2528       parabolic
    8        0.637019     11.2528       parabolic
    9        0.637052     11.2528       parabolic

Optimization terminated:
the current x satisfies the termination criteria using OPTIONS.TolX of
1.000000e-04
x = 0.6370

The iterative display shows the current value of x and the function value at f(x) each time a
function evaluation occurs. For fminbnd, one function evaluation corresponds to one iteration of
the algorithm. The last column shows what procedure is being used at each iteration, either a
golden section search or a parabolic interpolation. For more information, see Iterative Display.
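fminbnd's golden-section half is easy to sketch on its own; a minimal Python version (with humps transcribed from MATLAB's humps.m) recovers the same minimizer on (0.3, 1):

```python
import math

def humps(x):
    # transcription of MATLAB's humps.m: two peaks near x = 0.3 and x = 0.9
    return 1 / ((x - 0.3)**2 + 0.01) + 1 / ((x - 0.9)**2 + 0.04) - 6

def golden_min(f, a, b, tol=1e-6):
    # pure golden section search (fminbnd adds parabolic interpolation on top)
    r = (math.sqrt(5) - 1) / 2
    x1, x2 = b - r * (b - a), a + r * (b - a)   # interior points, x1 < x2
    f1, f2 = f(x1), f(x2)
    while b - a > tol:
        if f1 < f2:                  # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = b - r * (b - a)
            f1 = f(x1)
        else:                        # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + r * (b - a)
            f2 = f(x2)
    return (a + b) / 2

print(round(golden_min(humps, 0.3, 1.0), 4))   # -> 0.637, matching fminbnd
```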

Minimizing Functions of Several Variables

The fminsearch function is similar to fminbnd except that it handles functions of many variables.
Specify a starting vector x0 rather than a starting interval. fminsearch attempts to return a vector x
that is a local minimizer of the mathematical function near this starting vector.

To try fminsearch, create a function three_var of three variables, x, y, and z.

function b = three_var(v)
x = v(1);
y = v(2);
z = v(3);
b = x.^2 + 2.5*sin(y) - z^2*x^2*y^2;

Now find a minimum for this function using x = -0.6, y = -1.2, and z = 0.135 as the starting
values.

v = [-0.6,-1.2,0.135];
a = fminsearch(@three_var,v)

a =
0.0000 -1.5708 0.1803

Maximizing Functions

The fminbnd and fminsearch solvers attempt to minimize an objective function. If you have a
maximization problem, that is, a problem of the form max f(x),

then define g(x) = –f(x), and minimize g.

For example, to find the maximum of tan(cos(x)) near x = 5, evaluate:

[x fval] = fminbnd(@(x)-tan(cos(x)),3,8)

x =
6.2832

fval =
-1.5574

The maximum is 1.5574 (the negative of the reported fval), and occurs at x = 6.2832. This answer
is correct since, to five digits, the maximum is tan(1) = 1.5574, which occurs at x = 2π = 6.2832.
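The negation trick works with any minimizer, not just fminbnd; a Python sketch that minimizes g(x) = −tan(cos(x)) over [3, 8] with a golden-section search recovers the same values:

```python
import math

def golden_min(f, a, b, tol=1e-9):
    # golden section search for the minimum of f on [a, b]
    r = (math.sqrt(5) - 1) / 2
    x1, x2 = b - r * (b - a), a + r * (b - a)
    f1, f2 = f(x1), f(x2)
    while b - a > tol:
        if f1 < f2:
            b, x2, f2 = x2, x1, f1
            x1 = b - r * (b - a)
            f1 = f(x1)
        else:
            a, x1, f1 = x1, x2, f2
            x2 = a + r * (b - a)
            f2 = f(x2)
    return (a + b) / 2

g = lambda x: -math.tan(math.cos(x))   # negate f to turn the max into a min
x = golden_min(g, 3, 8)
print(round(x, 4), round(-g(x), 4))    # -> 6.2832 1.5574
```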

Experiment No. 4 Date: _______________

Nonlinear programming using Unrestricted search method.

Basic Steps to solve the optimization problem using MATLAB 2018.

Step 1: Defining the problem, including the step size.

Step 2: Defining the step size and going for iteration.

Step 3: Main file. By using an accelerated step size we can find the optimum iteration in the m file.

Search with fixed step size

1. Start with an initial guess point, say, x1.
2. Find f1 = f(x1).
3. Assuming a step size s, find x2 = x1 + s.
4. Find f2 = f(x2).
5. If f2 < f1, and if the problem is one of minimization, the assumption of unimodality indicates that the
desired minimum cannot lie at x < x1. Hence the search can be continued further along points x3,
x4, ... using the unimodality assumption while testing each pair of experiments. This procedure is
continued until a point, xi = x1 + (i−1)s, shows an increase in the function value.
6. The search is terminated at xi, and either xi or xi−1 can be taken as the optimum point.
7. Originally, if f1 < f2, the search should be carried out in the reverse direction at points x−2, x−3, ..., where
x−j = x1 − (j−1)s.
8. If f2 = f1, the desired minimum lies between x1 and x2, and the minimum point can be taken as
either x1 or x2.
9. If it happens that both f2 and f−2 are greater than f1, it implies that the desired minimum will lie in the
double interval
x−2 < x < x2
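The steps above translate directly into code; a minimal Python sketch of the fixed-step search (tried here on the example function f(x) = x(x − 1.5) used later in this experiment):

```python
def fixed_step_search(f, x1, s):
    # steps 1-6: march with a fixed step until the function value increases
    x_prev, f_prev = x1, f(x1)
    x, fx = x1 + s, f(x1 + s)
    if fx > f_prev:                  # step 7: reverse the search direction
        s = -s
        x, fx = x1 + s, f(x1 + s)
    while fx < f_prev:
        x_prev, f_prev = x, fx
        x, fx = x + s, f(x + s)
    return x_prev                    # step 6: x_{i-1} taken as the optimum

f = lambda x: x * (x - 1.5)
print(fixed_step_search(f, 0.0, 0.05))   # lands on the grid point nearest 0.75
```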

Search with accelerated step size
• Although the search with a fixed step size appears to be very simple, its major limitation comes
because of the unrestricted nature of the region in which the minimum can lie.
• For example, if the minimum point for a particular function happens to be xopt=50,000 and in the
absence of knowledge about the location of the minimum, if x1 and s are chosen as 0.0 and 0.1,
respectively, we have to evaluate the function 5,000,001 times to find the minimum point. This
involves a large amount of computational work.
• An obvious improvement can be achieved by increasing the step size gradually until the minimum
point is bracketed.
• A simple method consists of doubling the step size as long as the move results in an improvement
of the objective function.
• One possibility is to reduce the step length after bracketing the optimum in ( xi-1, xi). By starting
either from xi-1 or xi, the basic procedure can be applied with a reduced step size. This procedure can
be repeated until the bracketed interval becomes sufficiently small.
Example: Find the minimum of f(x) = x(x − 1.5) by starting from 0.0 with an initial step size of 0.05.

Solution:
The function value at x1 is f1=0.0. If we try to start moving in the negative x direction, we find
that x-2=-0.05 and f-2=0.0775. Since f-2>f1, the assumption of unimodality indicates that the
minimum can not lie toward the left of x-2. Thus, we start moving in the positive x direction and
obtain the following results:

i    Value of s    xi = x1 + s    fi = f(xi)    Is fi > fi-1?

1    -             0.0            0.0           -
2    0.05          0.05           -0.0725       No
3    0.10          0.10           -0.140        No
4    0.20          0.20           -0.260        No
5    0.40          0.40           -0.440        No
6    0.80          0.80           -0.560        No
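The table can be reproduced by doubling the step after each successful move; a Python sketch of the accelerated search (the terminating step s = 1.6, where the function value finally increases, brackets the minimum in (0.4, 1.6)):

```python
def accelerated_search(f, x1, s):
    # x_i = x1 + s with s doubling each move, until f increases;
    # the last three points then bracket the minimum
    history = [(x1, f(x1))]
    while True:
        x = x1 + s
        fx = f(x)
        history.append((x, fx))
        if fx > history[-2][1]:      # "Is fi > fi-1" turned Yes
            return history
        s *= 2

f = lambda x: x * (x - 1.5)
hist = accelerated_search(f, 0.0, 0.05)
for x, fx in hist:
    print(round(x, 2), round(fx, 4))
```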

Experiment No. 5 Date: _______________

Nonlinear programming using Golden section method.

Basic Steps to solve the optimization problem using MATLAB 2018.

Step 1: Defining the problem, to find the maximum of a unimodal function.

Step 2: Defining the total number of experiments to be performed using the golden search.

Step 3: Main file. By using the golden section method we can find the optimum iteration in the m file.

Golden Section method


The golden section method is the same as the Fibonacci method except that in the Fibonacci method the total
number of experiments to be conducted has to be specified before beginning the calculation, whereas this is
not required in the golden section method.
The golden section search is a method to minimize or maximize a unimodal function of one variable. The
algorithm maintains the function values for triples of points whose distances form a golden ratio. This
method does not require information about the derivatives. If the minimum is known to exist inside a
region of the variable, this method successively narrows the range of function values inside which the
minimum exists. In mathematics, two quantities are in the golden ratio if the ratio of the sum of the
quantities to the larger quantity is equal to the ratio of the larger quantity to the smaller quantity. The figure
illustrates the geometric relationship that defines the golden ratio. The total length is l_AC, the larger
segment is l_AB, and the smaller segment is l_BC. From the figure, the golden ratio can be expressed as
l_AC / l_AB = l_AB / l_BC.

• In the Fibonacci method, the location of the first two experiments is determined by the total number
of experiments, n.
• In the golden section method, we start with the assumption that we are going to conduct a large
number of experiments.
• Of course, the total number of experiments can be decided during the computation.
• The intervals of uncertainty remaining at the end of different number of experiments can be
computed as follows:
FN 1
L2  lim L0
N  F
N

FN  2 F F
L3  lim L0  lim N  2 N 1 L0 Page 18 of
N  F N  F
N N 1 FN
8
2
F 
 lim  N 1  L0
N 
 FN 
• This result can be generalized to obtain.
k 1
F 
Lk  lim  N 1  L0
N 
 FN 

Using the relation:

F_N = F_{N−1} + F_{N−2}

we obtain, after dividing both sides by F_{N−1}:

F_N / F_{N−1} = 1 + F_{N−2} / F_{N−1}

By defining a ratio γ as:

γ = lim_{N→∞} F_N / F_{N−1}

the equation above can be expressed as:

γ = 1 + 1/γ

that is:

γ² − γ − 1 = 0

Example of golden section:
Problem Statement: Find the maximum value of the function
fx = 0.4/sqrt(1+x^2) - sqrt(1+x^2)*(1-.4/(1+x^2)) + x using the golden section method.
Program:
clear all
clc
format short e
syms x
%%Input
fx = 0.4/sqrt(1+x^2)-sqrt(1+x^2)*(1-.4/(1+x^2))+x;
maxit = 50;
es = 10^-5;
R = (5^.5-1)/2;
%%Determine the Interval for the Initial Guess
x=[-10:10];
f = subs(fx,x);
plot(x,f);
xlow = 0.5;
xhigh = 1.5;
%%Perform Golden Search
xl = xlow;
xu = xhigh;
iter = 1;
d = R*(xu-xl);
x1 = xl+d;
x2 = xu-d;
f1 = subs(fx,x1);
f2 = subs(fx,x2);
if f1>f2
    xopt = x1;
    fopt = f1;   % store the best value in a separate variable so that
else             % the symbolic objective fx is not overwritten
    xopt = x2;
    fopt = f2;
end

while(1)
    d = R*d;
    if f1>f2
        xl = x2;
        x2 = x1;
        x1 = xl+d;
        f2 = f1;
        f1 = subs(fx,x1);
    else
        xu = x1;
        x1 = x2;
        x2 = xu-d;
        f1 = f2;
        f2 = subs(fx,x2);
    end
    iter = iter+1;
    if f1>f2
        xopt = x1;
        fopt = f1;
    else
        xopt = x2;
        fopt = f2;
    end
    if xopt~=0
        ea = (1-R)*abs((xu-xl)/xopt)*100;
    end
    if ea<=es||iter>=maxit,break
    end
end
Gold = xopt

Output:

Gold =
1.0519e+000
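As an independent cross-check, the same maximization can be sketched in Python with a golden-section search (a minimal sketch, not MATLAB); on the interval [0.5, 1.5] the maximizer of this function is x ≈ 1.0519, which is worth comparing against the script's printed value:

```python
import math

def f(x):
    # objective from the problem statement above
    s = math.sqrt(1 + x * x)
    return 0.4 / s - s * (1 - 0.4 / (1 + x * x)) + x

def golden_max(f, a, b, tol=1e-9):
    # golden section search for the maximum of a unimodal f on [a, b]
    r = (math.sqrt(5) - 1) / 2
    x1, x2 = b - r * (b - a), a + r * (b - a)   # interior points, x1 < x2
    f1, f2 = f(x1), f(x2)
    while b - a > tol:
        if f1 > f2:                  # maximum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = b - r * (b - a)
            f1 = f(x1)
        else:                        # maximum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + r * (b - a)
            f2 = f(x2)
    return (a + b) / 2

print(round(golden_max(f, 0.5, 1.5), 4))   # -> 1.0519
```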

Experiment No. 6 Date: _______________

Nonlinear programming using Fibonacci method.

Basic Steps to solve the optimization problem using MATLAB 2018.

Step 1: The initial interval of uncertainty, in which the optimum lies, has to be known.

Step 2: The function being optimized has to be unimodal in the initial interval of uncertainty.

Step 3: Main file. The Fibonacci method can be used to find the minimum of a function of one variable even if the function is not continuous.

Fibonacci method
The limitations of the method
• The exact optimum cannot be located in this method. Only an interval known as the final
interval of uncertainty will be known. The final interval of uncertainty can be made as
small as desired by using more computations.
• The number of function evaluations to be used in the search or the resolution required has
to be specified before hand.
• This method makes use of the sequence of Fibonacci numbers, {Fn}, for placing the
experiments. These numbers are defined as:

F0  F1  1
Fn  Fn 1  Fn 2 , n  2,3,4,

• which yield the sequence 1,1,2,3,5,8,13,21,34,55,89,...

Procedure:
Let L0 be the initial interval of uncertainty defined by a ≤ x ≤ b and n be the total number of
experiments to be conducted. Define:

L2* = (Fn−2 / Fn) L0

and place the first two experiments at points x1 and x2, which are located at a distance of L2* from each
end of L0.

This gives:

x1 = a + L2* = a + (Fn−2 / Fn) L0
x2 = b − L2* = a + (Fn−1 / Fn) L0

Discard part of the interval by using the unimodality assumption. Then there remains a smaller
interval of uncertainty L2 given by:

L2 = L0 − L2* = L0 (1 − Fn−2 / Fn) = (Fn−1 / Fn) L0

The only experiment left in L2 will be at a distance of

L2* = (Fn−2 / Fn) L0 = (Fn−2 / Fn−1) L2

from one end and

L2 − L2* = (Fn−3 / Fn) L0 = (Fn−3 / Fn−1) L2

from the other end. Now place the third experiment in the interval L2 so that the current two
experiments are located at a distance of:

L3* = (Fn−3 / Fn) L0 = (Fn−3 / Fn−1) L2

from each end of L2. This process of discarding a certain interval and placing a new experiment in the
remaining interval can be continued, so that the location of the jth experiment and the interval of
uncertainty at the end of j experiments are, respectively, given by:

Lj* = (Fn−j / Fn−(j−2)) Lj−1
Lj = (Fn−(j−1) / Fn) L0
The ratio of the interval of uncertainty remaining after conducting j of the n predetermined
experiments to the initial interval of uncertainty becomes:

Lj Fn ( j 1)

L0 Fn

and for j = n, we obtain,

Ln F1 1
 
L0 Fn Fn

The ratio Ln/L0 will permit us to determine n, the required number of experiments, to achieve any
desired accuracy in locating the optimum point. The table gives the reduction ratio in the interval of
uncertainty obtainable for different numbers of experiments.
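The relation Ln/L0 = 1/Fn makes choosing n mechanical: pick the smallest n for which 1/Fn is below the required resolution. A small Python sketch:

```python
def fib(n):
    # F_0 = F_1 = 1, F_n = F_{n-1} + F_{n-2}
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return b if n >= 1 else a

def experiments_needed(resolution):
    # smallest n with L_n / L_0 = 1 / F_n <= resolution
    n = 1
    while 1 / fib(n) > resolution:
        n += 1
    return n

print(1 / fib(6))                # 1/13 = 0.076923..., used in the example
print(experiments_needed(0.01))  # -> 11, since F_11 = 144 is the first F_n >= 100
```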

Example:
Minimize
f(x) = 0.65 − [0.75/(1 + x²)] − 0.65 x tan⁻¹(1/x) in the interval [0, 3] by the Fibonacci method using
n = 6.
Solution: Here n = 6 and L0 = 3.0, which yield:

L2* = (Fn−2 / Fn) L0 = (5/13)(3.0) = 1.153846

Thus, the positions of the first two experiments are given by x1 = 1.153846 and x2 = 3.0 −
1.153846 = 1.846154 with f1 = f(x1) = −0.207270 and f2 = f(x2) = −0.115843. Since f1 is less than f2, we
can delete the interval [x2, 3] by using the unimodality assumption.

The third experiment is placed at x3 = 0 + (x2 − x1) = 1.846154 − 1.153846 = 0.692308, with the corresponding
function value of f3 = −0.291364. Since f1 is greater than f3, we can delete the interval [x1, x2].
The next experiment is located at x4 = 0 + (x1 − x3) = 1.153846 − 0.692308 = 0.461538, with f4 = −0.309811. Noting
that f4 is less than f3, we can delete the interval [x3, x1].


The location of the next experiment can be obtained as x5 = 0 + (x3 − x4) = 0.692308 −
0.461538 = 0.230770, with the corresponding objective function value of f5 = −0.263678. Since f5 is
greater than f4, we can delete the interval [0, x5].

The ratio of the final to the initial interval of uncertainty is

L6 / L0 = (0.461540 − 0.230770) / 3.0 = 0.076923

This value can be compared with

Ln / L0 = F1 / Fn = 1 / Fn

which states that if n experiments (n = 6) are planned, a resolution no finer than 1/Fn =
1/F6 = 1/13 = 0.076923 can be expected from the method.
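The whole n = 6 example can be replayed in code; a Python sketch of the Fibonacci search (placements mirrored inside the shrinking interval, as in the steps above):

```python
import math

def f(x):
    # objective from the example
    return 0.65 - 0.75 / (1 + x * x) - 0.65 * x * math.atan(1 / x)

def fibonacci_search(f, a, b, n):
    # Fibonacci search (minimization) with n experiments; F_0 = F_1 = 1
    F = [1, 1]
    while len(F) <= n:
        F.append(F[-1] + F[-2])
    L0 = b - a
    x1 = a + F[n - 2] / F[n] * L0    # first two placements
    x2 = a + F[n - 1] / F[n] * L0
    f1, f2 = f(x1), f(x2)
    for _ in range(n - 2):           # remaining n - 2 experiments
        if f1 < f2:                  # minimum cannot lie in [x2, b]
            b, x2, f2 = x2, x1, f1
            x1 = a + b - x2          # mirror the surviving point
            f1 = f(x1)
        else:                        # minimum cannot lie in [a, x1]
            a, x1, f1 = x1, x2, f2
            x2 = a + b - x1
            f2 = f(x2)
    if f1 < f2:                      # final reduction of the interval
        b = x2
    else:
        a = x1
    return a, b

a, b = fibonacci_search(f, 0.0, 3.0, 6)
print(round(a, 6), round(b, 6), round((b - a) / 3.0, 6))   # width ratio = 1/F_6
```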

Program:
Problem Statement : fibonacci search for maximum of unknown unimodal function in one variable

function x=fibosearch(fhandle,a,b,npoints)
% fibonacci search for maximum of unknown unimodal function in one variable
% x = fibosearch(fhandle,a,b,npoints)
% a,b define the search interval with resolution 1/npoints
%create fibonacci sequence of length nfibo
nfibo=22;
fibo=[1,1,zeros(1,nfibo-2)];
for k=1:nfibo-2
fibo(k+2)=fibo(k+1)+fibo(k);
end
%find number of required iterations
fiboindex=3;
while fibo(fiboindex)<npoints
fiboindex=fiboindex+1;
end
for k=1:fiboindex-2
if k==1
x1 = a+fibo(fiboindex-k-1)/fibo(fiboindex-k+1)*(b-a);
x2 = b-fibo(fiboindex-k-1)/fibo(fiboindex-k+1)*(b-a);
fx1 = fhandle(x1);
fx2 = fhandle(x2);
end
if fx1<fx2
a=x1;
x1=x2; fx1=fx2;
x2=b-fibo(fiboindex-k-1)/fibo(fiboindex-k+1)*(b-a);
fx2=fhandle(x2);

else
b=x2;
x2=x1; fx2=fx1;
x1=a+fibo(fiboindex-k-1)/fibo(fiboindex-k+1)*(b-a);
fx1=fhandle(x1);
end
end
if fx1<fx2
x=x2;
else
x=x1;
end
disp(fiboindex-2)

Experiment No. 7 Date: _______________

Nonlinear programming using Newton, Quasi-Newton and Secant methods.

Basic Steps to solve the optimization problem using MATLAB 2018.

Step 1: Find the necessary condition for a minimum.

Step 2: Finding the iterations at various sections of the step size.

Step 3: Main file. Direct root methods can be used to find the minimum of a function of one variable by locating the root of f′(λ) = 0.

Direct root methods

The necessary condition for f(λ) to have a minimum at λ* is that

f′(λ*) = 0

Three root finding methods will be considered here:
• Newton method
• Quasi-Newton method
• Secant method

1. Newton Method

Consider the quadratic approximation of the function f(λ) at λ = λi using the Taylor's series
expansion:

f(λ) ≈ f(λi) + f′(λi)(λ − λi) + (1/2) f″(λi)(λ − λi)²

By setting the derivative of this equation equal to zero for the minimum of f(λ), we obtain:

f′(λ) = f′(λi) + f″(λi)(λ − λi) = 0

If i denotes an approximation to the minimum of f (), the above equation can be rearranged to
obtain an improved approximation as:
f (i )
i 1  i 
f (i )
Thus, the Newton method is equivalent to using a quadratic approximation for the function f ()
and applying the necessary conditions.
The iterative process given by the above equation can be assumed to have converged when the
derivative, f’(i+1) is close to zero:

f (i 1 )  
where  is a small quantity

• If the starting point for the iterative process is not close to the true solution *, the Newton
iterative process may diverge as illustrated:

Remarks:
• The Newton method was originally developed by Newton for solving nonlinear equations
and later refined by Raphson, and hence the method is also known as the Newton-Raphson
method in the literature of numerical analysis.
• The method requires both the first- and second-order derivatives of f(λ).

Example: Find the minimum of the function

f(λ) = 0.65 − 0.75/(1 + λ²) − 0.65 λ tan⁻¹(1/λ)

using the Newton-Raphson method with the starting point λ1 = 0.1. Use ε = 0.01 in the
convergence criterion |f′(λi+1)| ≤ ε, where ε is a small quantity.

Solution: The first and second derivatives of the function f(λ) are given by:

f′(λ) = 1.5λ/(1 + λ²)² + 0.65λ/(1 + λ²) − 0.65 tan⁻¹(1/λ)

f″(λ) = 1.5(1 − 3λ²)/(1 + λ²)³ + 0.65(1 − λ²)/(1 + λ²)² + 0.65/(1 + λ²) = (2.8 − 3.2λ²)/(1 + λ²)³
Iteration 1
λ1 = 0.1, f(λ1) = −0.188197, f′(λ1) = −0.744832, f″(λ1) = 2.68659

λ2 = λ1 − f′(λ1)/f″(λ1) = 0.377241

Convergence check: |f′(λ2)| = 0.138230 > ε

Iteration 2
f(λ2) = −0.303279, f′(λ2) = −0.138230, f″(λ2) = 1.57296

λ3 = λ2 − f′(λ2)/f″(λ2) = 0.465119

Convergence check: |f′(λ3)| = 0.0179078 > ε

Iteration 3
f(λ3) = −0.309881, f′(λ3) = −0.0179078, f″(λ3) = 1.17126

λ4 = λ3 − f′(λ3)/f″(λ3) = 0.480409

Convergence check: |f′(λ4)| = 0.0005033 < ε

Since the process has converged, the optimum solution is taken as λ* ≈ λ4 = 0.480409.
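The iterations above are easily reproduced; a minimal Python sketch of the Newton iteration λi+1 = λi − f′(λi)/f″(λi) using the derivatives derived in the solution:

```python
import math

def df(x):
    # f'(lambda) for f = 0.65 - 0.75/(1+x^2) - 0.65*x*atan(1/x)
    return 1.5 * x / (1 + x * x)**2 + 0.65 * x / (1 + x * x) - 0.65 * math.atan(1 / x)

def ddf(x):
    # f''(lambda), simplified to (2.8 - 3.2*x^2) / (1+x^2)^3
    return (2.8 - 3.2 * x * x) / (1 + x * x)**3

x, eps = 0.1, 0.01
while abs(df(x)) > eps:
    x = x - df(x) / ddf(x)     # Newton update applied to f'
print(round(x, 5))             # close to the tabulated 0.480409
```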
The Newton-Raphson method can be seen as an improved version of the fixed point iteration method,
increasing the speed of convergence to the root of the equation. Since the update uses a division,
x = x0 − f(x0)/f′(x0), the iteration fails with an undefined result when the denominator f′(x0) is zero.
After reporting the number of iterations and the root itself, the code below plots a graph with the
root highlighted.

Program:

function [] = metodo_nr()
clear all %Line to clear previous values and clear the screen
clc
% Creation of variable X and input data to the function
syms x;
fun = input('Type a function with a variable x (Ex: x^2-3*x+2) > ');
f = inline(fun);
% Derivative of the function
z= diff(f(x));
flinha= inline(z);
% Input of the initial approximation (x0)
x0 = input('Initial Aprox. > ');
x=x0;
% Counter and Error admitted
cont=1; %
erro = 5*10^-3;
y=0;
% Loop Logic
while (abs(f(x))>erro)||(abs(x-y)>erro)
y=x;
x=x-(f(x)/flinha(x));
cont=cont+1;
end
% Logic to evaluate the results
if flinha(x0)== 0
fprintf('Derivative of the function in the point is zero\n')
else
fprintf('Number of iterations ')
cont %Results shown
x
%Logic to plot the graph of the function and highlight the root
a=x-2;
b=x+2;
plot(x,f(x),'r*');
hold on;
x=a:0.1:b;
plot(x,f(x));
grid on;
xlabel('x');
ylabel('y');
end

2. Quasi-Newton Method
If the function being minimized f(λ) is not available in closed form or is difficult to differentiate, the
derivatives f′(λ) and f″(λ) in the Newton iteration

λi+1 = λi − f′(λi) / f″(λi)

can be approximated by the finite difference formulas:

f′(λi) ≈ [f(λi + Δλ) − f(λi − Δλ)] / (2Δλ)

f″(λi) ≈ [f(λi + Δλ) − 2f(λi) + f(λi − Δλ)] / Δλ²

where Δλ is a small step size.

Substitution of these approximations into the Newton iteration leads to:

λi+1 = λi − Δλ [f(λi + Δλ) − f(λi − Δλ)] / (2[f(λi + Δλ) − 2f(λi) + f(λi − Δλ)])

This iterative process is known as the quasi-Newton method. To test the convergence of the
iterative process, the following criterion can be used:

|f′(λi+1)| = |[f(λi+1 + Δλ) − f(λi+1 − Δλ)] / (2Δλ)| ≤ ε

where a central difference formula has been used for evaluating the derivative of f and ε is a
small quantity.
Remarks:
The iteration formula requires the evaluation of the function at the points λi + Δλ and λi − Δλ in
addition to λi in each iteration.

Example
Find the minimum of the function

f(λ) = 0.65 − 0.75/(1 + λ²) − 0.65 λ tan⁻¹(1/λ)

using the quasi-Newton method with the starting point λ1 = 0.1 and the step size Δλ = 0.01 in the
central difference formulas. Use ε = 0.01 for checking the convergence.

Iteration 1
λ1 = 0.1, Δλ = 0.01, f1 = f(λ1) = −0.188197
f1+ = f(λ1 + Δλ) = −0.195512, f1− = f(λ1 − Δλ) = −0.180615

λ2 = λ1 − Δλ (f1+ − f1−) / (2(f1+ − 2f1 + f1−)) = 0.377882

Convergence check: |f′(λ2)| = |(f2+ − f2−)/(2Δλ)| = 0.137300 > ε

Iteration 2
f2 = f(λ2) = −0.303368, f2+ = f(λ2 + Δλ) = −0.304662, f2− = f(λ2 − Δλ) = −0.301916

λ3 = λ2 − Δλ (f2+ − f2−) / (2(f2+ − 2f2 + f2−)) = 0.465390

Convergence check: |f′(λ3)| = |(f3+ − f3−)/(2Δλ)| = 0.017700 > ε
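The same two iterations can be replayed using function values only; a minimal Python sketch of the quasi-Newton step (it agrees with the tabulated iterates to about three decimal places):

```python
import math

def f(x):
    return 0.65 - 0.75 / (1 + x * x) - 0.65 * x * math.atan(1 / x)

def quasi_newton_step(f, x, dx):
    # lambda_{i+1} = lambda_i - dx*(f+ - f-) / (2*(f+ - 2*f0 + f-))
    fp, fm, f0 = f(x + dx), f(x - dx), f(x)
    return x - dx * (fp - fm) / (2 * (fp - 2 * f0 + fm))

x1, dx = 0.1, 0.01
x2 = quasi_newton_step(f, x1, dx)
x3 = quasi_newton_step(f, x2, dx)
print(round(x2, 4), round(x3, 4))   # close to the tabulated 0.377882 and 0.465390
```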
Program:

Problem Statement: The script quasi_newton optimizes a general multi-variable real-valued function
using the DFP quasi-Newton method.

disp('Would you like to clear your workspace memory and command screen?');
z=input('To do so enter 1, any other number to continue:');
if z==1
clc;clear
end
choice=input('To do a maximization enter 1, for minimization enter 2:');a=2;
while a==2
if choice~=1&&choice~=2
disp('Wrong input!');a=2;
choice=input('To do a maximization enter 1, for minimization enter 2:');
else
a=1;
end
end
a=2;mark=0;flag=0;count=0;syms b;u=[];warning off all
try
while a==2
n=floor(input('Enter the dimension of function:'));
if n<1
disp('Kindly enter a positive integer');a=2;
else
a=1;
end
end
catch
disp('Kindly enter a positive integer');
end
a=2;
for i=1:n
syms (['x' num2str(i)]);
end
while a==2
try
f=input('Enter the function in terms of variables x_i(i.e. x1,x2,etc.):');a=1;
catch
disp('Kindly recheck the index of variables and format of expression!');a=2;
end
end
if choice==1
f=-f;
end
x0=(input('Enter the initial approximation row vector of variables:'))';a=2;
while a==2
if length(x0)~=n
disp('The dimension of initial approximation is incorrect!');a=2;
x0=(input('Enter the initial approximation as row vector of variables:'))';
else
a=1;
end
end
try
eps=abs(input('Enter the error tolerance:'));
catch
disp('Kindly enter a real number!');
end
g=f;
try
for i=1:n
h(i)=diff(g,['x' num2str(i)]);
for j=1:n
k(i,j)=diff(h(i),['x' num2str(j)]);
end
end
catch
disp('Your optimization problem cannot be solved');
return
end
disp('Hessian matrix of the function=')
if choice==1
disp(-k)
elseif choice==2
disp(k)
end
for i=1:n
for j=1:n
if i==j
d=det(-k(1:i,1:j));
if choice==1
d=-d;
end
SOL=solve(d);
if str2num(char(d))<=0
mark=mark+1;u=[u i];
elseif isempty(SOL)==0
for m=1:length(SOL)
if isreal(SOL(m))==1||isa(SOL(m),'sym')
mark=mark+1;
if (length(find(u==i))==0)
u=[u i];
end
end
end
end
end
end
end
if mark>0
if choice==1
fprintf('\nThe %gth principal minor of Hessian is not negative at all real x!\n',u);
fprintf('So the function is not concave globally and hence the global maximization is not guaranteed!\n');
elseif choice==2
fprintf('\nThe %gth principal minor of Hessian is not positive at all real x!\n',u);
fprintf('So the function is not convex globally and hence the global minimization is not guaranteed!\n');
end
else
if choice==1
fprintf('\nAll the principal minors of Hessian are negative so the function is concave globally!\n');
disp('Hence a global maximization is possible!');
elseif choice==2
fprintf('\nAll the principal minors of Hessian are positive so the function is convex globally!\n');
disp('Hence a global minimization is possible!');
end
end
X=x0;
if mark>0
S=eye(n);
else
S=k;
for i=1:n
S=(subs(S,['x' num2str(i)],X(i)));
end
%S=input('Enter a positive definite matrix ');
end
if n==2
A=[];B=[];C=f;
end
while flag~=1
count=count+1;steplength=0;
grad=h';fprintf('\n-------------------------%gth Iteration------------------\n\n\n',count)
if n==2
A=[A X(1)];B=[B X(2)];
end
for i=1:n
grad=subs(grad,['x' num2str(i)],X(i));
end
disp('Present point:');disp(X);
disp('Gradient=');
if choice==1
disp(-grad)
elseif choice==2
disp(grad)

end
if max(abs(grad))>eps
fprintf('\nThe error tolerance you provided has not been achieved yet\n');
flag=input('To terminate enter 1, any other number to continue:');
end
if flag==1
hes=k;
for i=1:n
hes=subs(hes,['x' num2str(i)],X(i));
end
fprintf('\n\n..........Result...........\n\n')
if length(find(eig(hes)>0))==length(eig(hes))
disp('At the present point the function is convex so it may be a local minimum!');
elseif length(find(eig(hes)<0))==length(eig(hes))
disp('At the present point the function is concave so it may be a local maximum!');
else
disp('The present point is not a local extremum!');
end
disp('Presently the hessian matrix is:');
if choice==1
disp(-hes)
elseif choice==2
disp(hes)
end
fprintf('The optimum point at %g error is:\n',max(abs(grad)));
disp(X);
for i=1:n
f=subs(f,['x' num2str(i)],X(i));
end
fprintf('\nAnd the function value here is:\n')
if choice==1
disp(-f)
elseif choice==2
disp(f)
end
fprintf('The total number of iterations performed=%g\n',count);
end
if max(abs(grad))<=eps
fprintf('\n\nThe error tolerance you provided has been achieved.\n');
hes=k;
for i=1:n
hes=subs(hes,['x' num2str(i)],X(i));
end
fprintf('\n\n..........Result...........\n\n')
if length(find(eig(hes)>0))==length(eig(hes))
disp('Also at the present point the function is convex so it may be a local minimum!');
elseif length(find(eig(hes)<0))==length(eig(hes))

disp('Also at the present point the function is concave so it may be a local maximum!');
else
disp('However the present point is not a local extremum point!');
end
disp('Presently the hessian matrix is:');
if choice==1
disp(-hes)
elseif choice==2
disp(hes)
end
fprintf('The optimum point at %g error is:\n',max(abs(grad)));
disp(X);
for i=1:n
f=subs(f,['x' num2str(i)],X(i));
end
fprintf('\nAnd the function value here is:\n')
if choice==1
disp(-f)
elseif choice==2
disp(f)
end
fprintf('The total number of iterations performed=%g\n',count);flag=1;
elseif max(abs(grad))>eps && flag~=1
fun=f;dir=-S*grad;
for i=1:n
fun=subs(fun,['x' num2str(i)],X(i)+b*dir(i));
end
diff(fun,b);
d=solve(diff(fun,b));
if isempty(d)==1
steplength=1;
else
t=double(d);
dd=diff(diff(fun,b));
for i=1:length(t)
if isreal(t(i))==1
if subs(dd,'b',t(i))>0
steplength=t(i);break
end
end
end
if steplength==0
for i=1:length(t)
if isreal(t(i))==1
if t(i)>0
steplength=t(i);break
end
end
end
end
if steplength==0
steplength=1;
end
end
funct=f;
for i=1:n
funct=subs(funct,['x' num2str(i)],X(i));
end
disp('Functional value at present=');
if choice==1
disp(-funct)
elseif choice==2
disp(funct)
end
disp('Step size taken=');disp(steplength);
X=X+steplength*dir;p=steplength*dir;
grad_new=h';
for i=1:n
grad_new=subs(grad_new,['x' num2str(i)],X(i));
end
q=grad_new-grad;
S=S-(S*q*q'*S)/(q'*S*q)+(p*p')/(p'*q);
end
if count>100
disp('You already have performed 100 iterations and it seems that no extremum of the function exists!');
flag=input('It is recommended that you terminate the procedure, to do so enter 1, any other number to continue:');
end
if n==2&&length(A)>=2
try
[a,b]=meshgrid(A,B);C=subs(C,{x1,x2},{a,b});
view([-50,30]);axis tight; hold on
surfc(a,b,C,'facecolor','green','edgecolor','b','facelighting','gouraud')
view([-50,30]);axis tight;shading interp;
plot(A,B,'-mo',...
'LineWidth',2,...
'MarkerEdgeColor','k',...
'MarkerFaceColor',[.49 1 .63],...
'MarkerSize',12);
catch
disp('Unable to generate plot of function!');
end
end
end
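The script's key step is the DFP update S = S - (S*q*q'*S)/(q'*S*q) + (p*p')/(p'*q). As a hedged cross-check, here is a compact Python sketch of that update on an assumed quadratic test function f(x, y) = x - y + 2x^2 + 2xy + y^2 (not the symbolic input the MATLAB script reads from the user), using an exact line search that is valid only for quadratics:

```python
import numpy as np

# constant Hessian of the assumed quadratic f(x, y) = x - y + 2x^2 + 2xy + y^2
H = np.array([[4.0, 2.0], [2.0, 2.0]])

def grad(p):
    x, y = p
    return np.array([1 + 4 * x + 2 * y, -1 + 2 * x + 2 * y])

def dfp(p, tol=1e-6, max_iter=100):
    S = np.eye(len(p))                   # initial inverse-Hessian estimate
    g = grad(p)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        d = -S @ g                       # search direction
        h = -(g @ d) / (d @ H @ d)       # exact step length for a quadratic
        step = h * d
        p = p + step
        g_new = grad(p)
        q = g_new - g
        # DFP update: S = S - (S q q' S)/(q' S q) + (p p')/(p' q)
        S = (S - np.outer(S @ q, S @ q) / (q @ S @ q)
               + np.outer(step, step) / (step @ q))
        g = g_new
    return p

print(dfp(np.array([1.0, -5.0])))   # approaches the minimizer (-1, 1.5)
```

For a two-variable quadratic, DFP with exact line searches terminates in at most two updates, which makes this a convenient sanity check for the matrix update above.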

Output:

3. Secant Method:

The secant method uses an equation similar to the Newton condition

f′(λ) ≈ f′(λᵢ) + f″(λᵢ)(λ − λᵢ) = 0

written as

f′(λ) ≈ f′(λᵢ) + s(λ − λᵢ) = 0

where s is the slope of the line connecting the two points (A, f′(A)) and (B, f′(B)), where A and
B denote two different approximations to the correct solution, λ*. The slope s can be expressed
as

s = [f′(B) − f′(A)] / (B − A)

The iterative process given by the above equation is known as the secant method. Since the
secant approaches the second derivative of f(λ) at A as B approaches A, the secant method can
also be considered a quasi-Newton method. The iteration becomes

λᵢ₊₁ = λᵢ − f′(λᵢ)/s = A − f′(A)(B − A) / [f′(B) − f′(A)]
1. Set λ₁ = A = 0 and evaluate f′(A). The value of f′(A) will be negative. Assume an initial trial
step length t₀.
2. Evaluate f′(t₀).
3. If f′(t₀) < 0, set A = λᵢ = t₀, f′(A) = f′(t₀), new t₀ = 2t₀, and go to step 2.
4. If f′(t₀) ≥ 0, set B = t₀, f′(B) = f′(t₀), and go to step 5.
5. Find the new approximate solution of the problem as

λᵢ₊₁ = A − f′(A)(B − A) / [f′(B) − f′(A)]

6. Test for convergence: |f′(λᵢ₊₁)| ≤ ε, where ε is a small quantity. If this is satisfied, take
λ* ≈ λᵢ₊₁ and stop the procedure. Otherwise, go to step 7.
7. If f′(λᵢ₊₁) ≥ 0, set new B = λᵢ₊₁, f′(B) = f′(λᵢ₊₁), i = i + 1, and go to step 5.
8. If f′(λᵢ₊₁) < 0, set new A = λᵢ₊₁, f′(A) = f′(λᵢ₊₁), i = i + 1, and go to step 5.
Program:

function y = secant(f,xn_2,xn_1,maxerr)
maxiter = 100;   % maximum number of iterations
xn = (xn_2*f(xn_1) - xn_1*f(xn_2))/(f(xn_1) - f(xn_2));
disp('xn-2 f(xn-2) xn-1 f(xn-1) xn f(xn)');
disp(num2str([xn_2 f(xn_2) xn_1 f(xn_1) xn f(xn)],'%20.7f'));
flag = 1;
while abs(f(xn)) > maxerr
xn_2 = xn_1;
xn_1 = xn;
xn = (xn_2*f(xn_1) - xn_1*f(xn_2))/(f(xn_1) - f(xn_2));
disp(num2str([xn_2 f(xn_2) xn_1 f(xn_1) xn f(xn)],'%20.7f'));
flag = flag + 1;
if flag == maxiter
break;
end
end
if flag < maxiter
disp(['Root is x = ' num2str(xn)]);
else
disp('Root does not exist');
end
y = xn;
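The eight steps above can also be sketched in Python (a hedged illustration, not the MATLAB routine above). Here they are applied to the derivative f′(λ) of the earlier quasi-Newton example function, written out by hand below:

```python
import math

def fprime(lam):
    # derivative of f(lam) = 0.65 - 0.75/(1+lam^2) - 0.65*lam*atan(1/lam);
    # atan2(1, lam) equals atan(1/lam) for lam > 0 and handles lam = 0
    return (1.5 * lam / (1 + lam**2)**2
            - 0.65 * math.atan2(1, lam)
            + 0.65 * lam / (1 + lam**2))

def secant_min(fprime, t0=0.1, eps=1e-3, max_iter=100):
    A, fA = 0.0, fprime(0.0)        # step 1: f'(A) < 0 at A = 0
    while fprime(t0) < 0:           # steps 2-3: double t0 while f' < 0
        A, fA = t0, fprime(t0)
        t0 *= 2
    B, fB = t0, fprime(t0)          # step 4: f'(B) >= 0
    lam = A
    for _ in range(max_iter):
        lam = A - fA * (B - A) / (fB - fA)   # step 5: secant update
        fl = fprime(lam)
        if abs(fl) <= eps:                   # step 6: convergence test
            break
        if fl >= 0:                          # step 7: move B inward
            B, fB = lam, fl
        else:                                # step 8: move A inward
            A, fA = lam, fl
    return lam

print(secant_min(fprime))   # root of f' near 0.48, the minimizer of f
```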

Output

Experiment No. 8 Date: _______________

Nonlinear programming using Univariate method

Basic Steps to solve the optimization problem using MATLAB 2018.

• Step 1: Find the necessary condition for the minimum.

• Step 2: Find the iteration at various sections of the step size.

• Step 3: Main file. The elimination method can be used to find the minimum of a function of one
variable even if the function is not continuous.

Univariate Method

Recall from calculus that when you are given the problem of determining a minimum of a continuously
differentiable function f on a closed interval [a, b], you first determine the derivative function f′. Then
you find all the critical values x in [a, b] where f′(x) = 0, say x₁, x₂, …, xₚ. Finally, you determine
the global minimum

min over x ∈ [a, b] of f(x) = min{f(a), f(x₁), f(x₂), …, f(xₚ), f(b)}.

There is no need to check whether the critical values x₁, x₂, …, xₚ correspond to local minima, as we
determine the global minimum by enumeration. This approach suggests that we may find the extrema of f(x) by
computing the zeros of f′(x).

Univariate optimization means optimization of a scalar function of a single variable:


y = P(x)
These optimization methods are important for a variety of reasons:
1) there are many instances in engineering when we want to find the optimum of functions such as these
(e.g. optimum reactor temperature, etc.),
2) almost all multivariable optimization methods in commercial use today contain a line search step in
their algorithm,
3) they are easy to illustrate, and many of the fundamental ideas carry over directly to
multivariable optimization. Of course, these discussions will be limited to nonlinear functions.
Program:
Problem Statement: The script starts from the initial point (1, −5) in the example and achieves the minimum
by optimizing the function in the x and y directions alternately until the overall optimum is achieved.

clc
clear
format long
% Function Definition (Enter your Function here):
syms X Y;
f = X - Y + 2*X^2 + 2*X*Y + Y^2;
% Initial Guess (Choose Initial Values):
x(1) = 1;
y(1) = -5;
S = [1 0]';
I = [x(1),y(1)]';
% Tolerance and Step-size:
e = 0.01;
i = 1;
% Convergence Parameters:
F = subs(f, [X,Y], [x(1),y(1)]);
f_plus = I + e*S;
F_plus = subs(f, [X,Y], [f_plus(1), f_plus(2)]);
f_minus = I - e*S;
F_minus = subs(f, [X,Y], [f_minus(1), f_minus(2)]);
% Search Direction:
if F_minus < F
S = -S;
else
S = S;
end
% Optimization Algorithm:
while ((double(F) > double(F_minus))||(double(F) > double(F_plus)))
syms h; % Step size
g = subs(f, [X,Y], [x(i)+S(1)*h,y(i)+h*S(2)]);
dg_dh = diff(g,h);
h = solve(dg_dh, h); % Optimal Step Size
x(i+1) = I(1)+h*S(1); % New x value
y(i+1) = I(2)+h*S(2); % New y value
i = i+1;
I = [x(i),y(i)]'; % Updated Point
if rem(i,2) == 0
S = [0 1]';
else
S = [1 0]';
end
F = subs(f, [X,Y], [x(i),y(i)]);
f_plus = I + e*S;
F_plus = subs(f, [X,Y], [f_plus(1), f_plus(2)]);
f_minus = I - e*S;
F_minus = subs(f, [X,Y], [f_minus(1), f_minus(2)]);
if double(F_minus) < double(F)
S = -S;
else
S = S;
end
end
% Result Table:
Iter = 1:i;
X_coordinate = x';
Y_coordinate = y';
Iterations = Iter';
T = table(Iterations,X_coordinate,Y_coordinate);
% Plots:
fcontour(f,'Fill','On');
hold on;
plot(x,y,'*-r');
% Output:
fprintf('Initial Objective Function Value: %d\n\n',subs(f,[X,Y], [x(1),y(1)]));
if ((double(F)>=double(F_minus))||(double(F)>=double(F_plus)))
fprintf('Minimum successfully obtained...\n\n');
end
fprintf('Number of Iterations for Convergence: %d\n\n', i);
fprintf('Point of Minima: [%d,%d]\n\n', x(i), y(i));
fprintf('Objective Function Minimum Value after Optimization: %f\n\n', subs(f,[X,Y], [x(i),y(i)]));
disp(T)
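The alternating x/y line searches performed by the script above can be sketched compactly in Python. This is a hedged illustration: the closed-form step length uses the example's constant Hessian, an assumption valid only for this quadratic, not part of the MATLAB listing:

```python
import numpy as np

# f(x, y) = x - y + 2x^2 + 2xy + y^2, minimized from (1, -5) as in the script
H = np.array([[4.0, 2.0], [2.0, 2.0]])   # constant Hessian of the quadratic

def grad(p):
    x, y = p
    return np.array([1 + 4 * x + 2 * y, -1 + 2 * x + 2 * y])

def univariate(p, tol=1e-8, max_iter=200):
    dirs = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]  # x then y, alternating
    for i in range(max_iter):
        S = dirs[i % 2]
        g = grad(p)
        if np.linalg.norm(g) < tol:
            break
        h = -(g @ S) / (S @ H @ S)   # exact line search along direction S
        p = p + h * S
    return p

print(univariate(np.array([1.0, -5.0])))   # approaches the minimizer (-1, 1.5)
```

Each coordinate error roughly halves per x/y sweep for this problem, so the iterates settle at (−1, 1.5) with objective value −1.25, matching the script's result.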

Output

Experiment No. 9 Date: _______________

Nonlinear programming using Indirect search method.

Basic Steps to solve the optimization problem using MATLAB 2018.

Indirect Search Algorithm

The indirect search algorithms are based on the derivatives or gradients of the objective function.
A function in N-dimensional space can be approximated near a point Xᵢ by the second-order Taylor
series expansion

f(X) ≈ f(Xᵢ) + ∇fᵢᵀ(X − Xᵢ) + ½ (X − Xᵢ)ᵀ[Jᵢ](X − Xᵢ)

where ∇fᵢ is the gradient and [Jᵢ] is the Hessian matrix of f evaluated at Xᵢ.
Indirect search algorithms include:
A) Steepest Descent (Cauchy) Method: In this method, the search starts from an initial trial point X1,
and iteratively moves along the steepest descent directions until the optimum point is found.
Although, the method is straightforward, it is not applicable to the problems having multiple local
optima. In such cases the solution may get stuck at local optimum points.
B) Conjugate Gradient (Fletcher-Reeves) Method: The convergence technique of the steepest descent
method can be greatly improved by using the concept of conjugate gradient with the use of the
property of quadratic convergence.
C) Newton’s Method: Newton’s method is a very popular method based on the Taylor series
expansion. Writing the second-order Taylor expansion of f(X) at X = Xᵢ (as above) and setting its
gradient to zero gives the iteration

Xᵢ₊₁ = Xᵢ − [Jᵢ]⁻¹∇fᵢ

Penalty Function:
• Search algorithms cannot be directly applied to constrained optimization.
• A constrained optimization problem can be converted into an unconstrained one using a penalty
function that is added when there is a constraint violation:

φ(X) = f(X) + α M (constraint violation)²

where α = 1 (for a minimization problem) or −1 (for a maximization problem), and M is a dummy variable with a very high
value.
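As a small illustration of method (A), here is a hedged Python sketch of the steepest descent (Cauchy) iteration applied to an assumed quadratic test function (the one used in the univariate experiment); the closed-form optimal step length holds only for quadratics:

```python
import numpy as np

# assumed test objective f(x, y) = x - y + 2x^2 + 2xy + y^2
H = np.array([[4.0, 2.0], [2.0, 2.0]])   # its constant Hessian

def grad(p):
    x, y = p
    return np.array([1 + 4 * x + 2 * y, -1 + 2 * x + 2 * y])

def steepest_descent(p, tol=1e-6, max_iter=1000):
    for _ in range(max_iter):
        g = grad(p)
        if np.linalg.norm(g) < tol:
            break
        h = (g @ g) / (g @ H @ g)   # optimal step along -g for a quadratic
        p = p - h * g               # move along the steepest descent direction
    return p

print(steepest_descent(np.array([0.0, 0.0])))   # approaches the minimizer (-1, 1.5)
```

The zig-zag of successive orthogonal gradients is visible if the iterates are printed, which is exactly the behavior the conjugate gradient method (B) is designed to improve.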

Program:

Define the objective function psobj (this definition is the one used in the MATLAB patternsearch
documentation example; save it as psobj.m):

function y = psobj(x)
y = exp(-x(1)^2 - x(2)^2)*(1 + 5*x(1) + 6*x(2) + 12*x(1)*cos(x(2)));

Find the minimum, starting at the point [0,0]:

fun = @psobj;
x0 = [0,0];
x = patternsearch(fun,x0)

Optimization terminated: mesh size less than options.MeshTolerance.

Output
x =

-0.7037 -0.1860

Experiment No. 10 Date: _______________

Optimization using Evolutionary Algorithm.

Introduction

Suppose that a data scientist has an image dataset divided into a number of classes and an image
classifier is to be created. After the data scientist investigated the dataset, the K-nearest neighbor
(KNN) seems to be a good option. To use the KNN algorithm, there is an important parameter to
tune, which is K. Suppose that an initial value of 3 is selected. The scientist starts the learning
process of the KNN algorithm with the selected K = 3. The trained model reached a
classification accuracy of 85%. Is that percentage acceptable? In other words, can we get a better
classification accuracy than what we currently reached? We cannot say that 85% is the best
accuracy to reach until we conduct different experiments. But to do another experiment, we
definitely must change something in the experiment, such as changing the K value used in the KNN
algorithm. We cannot definitively say 3 is the best value to use unless we try
different values for K and notice how the classification accuracy varies. The question is
“how to find the best value for K that maximizes the classification performance?” This is what is
called optimization.

In optimization, we start with some kind of initial values for the variables used in the experiment.
Because these values may not be the best ones to use, we should change them until getting the best
ones. In some cases, these values are generated by complex functions that we cannot easily solve
manually. But it is very important to do optimization, because a classifier may produce bad
classification accuracy not because, for example, the data is noisy or the learning algorithm used is
weak, but due to a bad selection of the initial values of the learning parameters. As a result, there are
different optimization techniques suggested by operations research (OR) researchers to do such
optimization work. According to [1], optimization techniques are categorized into four main
categories:

1. Constrained Optimization
2. Multimodal Optimization
3. Multiobjective Optimization
4. Combinatorial Optimization

Looking at various natural species, we can note how they evolve and adapt to their environments.
We can benefit from such already existing natural systems and their natural evolution to create our
artificial systems that do the same job. This is called bionics. For example, the plane is based on
how birds fly, radar comes from bats, the submarine was invented based on fish, and so on. As a result,
the principles of some optimization algorithms come from nature. For example, the Genetic Algorithm
(GA) has its core idea from Charles Darwin’s theory of natural evolution, “survival of the fittest”.
Before getting into the details of how GA works, we can get an overall idea about evolutionary
algorithms (EAs).

Evolutionary Algorithms (EAs)

We can say that optimization is performed using evolutionary algorithms (EAs). The difference
between traditional algorithms and EAs is that EAs are not static but dynamic as they can evolve
over time.

Evolutionary algorithms have three main characteristics:

1. Population-Based: Evolutionary algorithms optimize a process in which current
solutions, even if bad, are used to generate new, better solutions. The set of current solutions from which
new solutions are to be generated is called the population.
2. Fitness-Oriented: If there are several solutions, how do we say that one solution is better
than another? There is a fitness value associated with each individual solution, calculated
from a fitness function. Such a fitness value reflects how good the solution is.
3. Variation-Driven: If there is no acceptable solution in the current population, according to
the fitness function calculated for each individual, we should do something to generate
new, better solutions. As a result, individual solutions will undergo a number of variations
to generate new solutions.

Genetic Algorithm (GA)

The genetic algorithm is a random-based classical evolutionary algorithm. By random here we
mean that, in order to find a solution using the GA, random changes are applied to the current solutions
to generate new ones. Note that GA may be called Simple GA (SGA) due to its simplicity
compared to other EAs.

GA is based on Darwin’s theory of evolution. It is a slow, gradual process that works by making
slight and slow changes to its solutions until getting the best solution.

Here is the description of how the GA works:

GA works on a population consisting of some solutions, where the population size (popsize) is the
number of solutions. Each solution is called an individual. Each individual solution has a
chromosome. The chromosome is represented as a set of parameters (features) that defines the
individual. Each chromosome has a set of genes. Each gene can be represented in some encoding,
for example as a string of 0s and 1s, as in the next diagram.

Also, each individual has a fitness value. To select the best individuals, a fitness function is used.
The result of the fitness function is the fitness value representing the quality of the solution. The
higher the fitness value, the higher the quality of the solution. Selection of the best individuals based
on their quality is applied to generate what is called a mating pool, where the higher-quality
individual has a higher probability of being selected into the mating pool.
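The loop described above (population, fitness evaluation, mating-pool selection, then variation) can be sketched in Python. The tournament selection, single-point crossover, bit-flip mutation, and the toy count-the-ones fitness below are standard illustrative assumptions, not details given in the text:

```python
import random

random.seed(1)

GENES = 16        # chromosome length: a string of 0s and 1s, as described above
POP_SIZE = 30     # popsize: number of individual solutions in the population
GENERATIONS = 100
MUT_RATE = 0.02   # per-gene probability of a random bit flip

def fitness(chrom):
    # toy fitness function: count of 1-bits; stands in for any quality measure
    return sum(chrom)

def select(pop):
    # mating-pool selection: the fitter of two random individuals wins
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    cut = random.randrange(1, GENES)         # single-point crossover
    return p1[:cut] + p2[cut:]

def mutate(chrom):
    return [g ^ 1 if random.random() < MUT_RATE else g for g in chrom]

pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    elite = max(pop, key=fitness)            # keep the best individual as-is
    pop = [elite] + [mutate(crossover(select(pop), select(pop)))
                     for _ in range(POP_SIZE - 1)]

best = max(pop, key=fitness)
print(fitness(best))   # best fitness approaches GENES (all 1s) over generations
```

Keeping the elite individual unchanged each generation (elitism) guarantees the best fitness never decreases, which makes the "slight and slow changes" of GA easy to observe.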
