
Process Simulation Lab
Lecture – 5 (Section – 1)

 Introducing MATLAB
 Basic MATLAB Functions for Linear and Non-Linear Optimization
 Optimization by Numerical Methods: Solving Equations
 Optimization Techniques Via The Optimization Toolbox
 The optimization problem and the relevant MATLAB commands
Process Simulation Lab, ChED, MNNIT. Sunday, March 10, 2019
INTRODUCING MATLAB
 MATLAB is a platform for scientific computing and high-level programming whose interactive environment lets you carry out complex computational tasks more efficiently than with traditional languages such as C, C++ and FORTRAN.

 MATLAB is an interactive high-level technical computing environment for algorithm development, data visualization, data analysis and numerical analysis.

 It is possible to use MATLAB for a wide range of applications, including calculus, algebra, statistics,
econometrics, quality control, time series, signal and image processing, communications, control
system design, testing and measuring systems, financial modelling, computational biology, etc.

Keys that can be used in MATLAB
 Up arrow (Ctrl-P) Retrieves the previous entry.
 Down arrow (Ctrl-N) Retrieves the following entry.
 Left arrow (Ctrl-B) Moves the cursor one character to the left.
 Right arrow (Ctrl-F) Moves the cursor one character to the right.
 CTRL-left arrow Moves the cursor one word to the left.
 CTRL-right arrow Moves the cursor one word to the right.
 Home (Ctrl-A) Moves the cursor to the beginning of the line.
 End (Ctrl-E) Moves the cursor to the end of the current line.
 Escape Clears the command line.
 Delete (Ctrl-D) Deletes the character indicated by the cursor.
 Backspace Deletes the character to the left of the cursor.
 CTRL-K Deletes (kills) the current line.

Basic MATLAB Functions for Linear and Non-Linear Optimization
Solutions of Equations and Systems of Equations
 solve('equation','x') Solves the equation in the variable x.
 syms x; solve(equ(x), x) Solves the equation equ(x) in the variable x.
 solve('eq1,eq2,…,eqn', 'x1,x2,…,xn') Solves the n simultaneous equations eq1, …, eqn in the variables x1, …, xn.
 X = linsolve(A,B) Solves A*X = B for a square matrix A, where B and X are matrices.
 x = nnls(A,b) Solves A*x = b in the least-squares sense with x ≥ 0 (non-negative least squares).
 x = lscov(A,b,V) Solves A*x = b in the least-squares sense with covariance matrix proportional to V, i.e. x minimizes (b - A*x)'*inv(V)*(b - A*x).
 roots(V) Returns the roots of the polynomial whose coefficients are given by the vector V (from highest to lowest order).
 X = A\B Solves the system A*X = B.
 X = B/A Solves the system X*A = B.
 poly(V) Returns the coefficients of the polynomial whose roots are given by the vector V.
 x = bicg(A,b) Tries to solve the system A*x = b by the method of biconjugate gradients.
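The roots/poly pair above are mutual inverses; as a cross-check outside MATLAB, here is a NumPy sketch (NumPy's `np.roots` and `np.poly` mirror the MATLAB pair; the polynomial x^4 - 1 is an illustrative choice):

```python
import numpy as np

# MATLAB: roots(V) returns the roots of the polynomial with coefficients V
# (highest order first); poly(r) does the inverse.
V = [1, 0, 0, 0, -1]           # coefficients of x^4 - 1
r = np.roots(V)                # the four 4th roots of unity: 1, -1, i, -i
V_back = np.real(np.poly(r))   # recovers [1, 0, 0, 0, -1] up to round-off

print(np.round(r, 6))
print(np.round(V_back, 6))
```

Every returned root satisfies r^4 = 1, and `poly` rebuilds the original coefficient vector.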
 Solve the following equations:
1. x^(3/2)*log(x) = x*log(x)^(3/2)
2. sqrt(1-x) + sqrt(1+x) = a
3. x^4 - 1 = 0
4. sin(z) = 2
 Solutions:
1. s1 = solve('x^(3/2)*log(x) = x*log(x)^(3/2)')
2. s2 = solve('sqrt(1-x)+sqrt(1+x) = a','x')
3. s3 = solve('x^4-1=0')
4. s4 = solve('sin(z)=2')
 Solve the following system of two equations:
cos(x/12)/exp(x^2/16) = y
-5/4 + y = sin(x^(3/2))
 Solution:
[x, y] = solve('cos(x/12)/exp(x^2/16) = y','-5/4 + y = sin(x^(3/2))')
 Solve the system of equations:
x + 2y + 3z = 6
x + 3y + 8z = 19
2x + 3y + z = -1
5x + 6y + 4z = 5
 Solution:
A = [1,2,3;1,3,8;2,3,1;5,6,4]
B = [1,2,3,6;1,3,8,19;2,3,1,-1;5,6,4,5]
[rank(A), rank(B)] % equal ranks: the system is consistent
b = [6, 19, -1, 5]
X = linsolve(A,b')
A\b'
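The same rank test and least-squares solve can be reproduced in NumPy (a sketch, not part of the original MATLAB session; the exact solution of this consistent overdetermined system is x = (1, -2, 3)):

```python
import numpy as np

# Overdetermined system from the slide: 4 equations, 3 unknowns.
A = np.array([[1, 2, 3],
              [1, 3, 8],
              [2, 3, 1],
              [5, 6, 4]], dtype=float)
b = np.array([6, 19, -1, 5], dtype=float)

# Consistency test, as with [rank(A), rank(B)] in MATLAB:
rank_A = np.linalg.matrix_rank(A)
rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))
print(rank_A, rank_Ab)        # equal ranks -> the system is consistent

# Least-squares solve (exact here, because the system is consistent).
x, residuals, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(x, 6))         # the exact solution is x = (1, -2, 3)
```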

Optimization by Numerical methods: Solving Equations
 Non-Linear Equations
 The Fixed Point Method for Solving x = g(x)
• The fixed point method solves the equation x = g(x), under certain conditions on the function g, using an iterative
scheme that begins with an initial value p0 and defines p(k+1) = g(p(k)).
• The fixed point theorem ensures that, in certain circumstances, this sequence converges to a solution of the equation
x = g(x).
• In practice the iterative process stops when the absolute or relative error between two consecutive iterations
is less than a preset value (tolerance).
This simple iterative method can be implemented in a short M-file.
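Since the M-file itself is not reproduced in the slides, here is a minimal Python sketch of the same iteration (the function g, starting point, and tolerance are illustrative choices, not from the deck):

```python
import math

def fixed_point(g, p0, tol=1e-10, max_iter=200):
    """Iterate p_{k+1} = g(p_k) until two consecutive iterates differ by < tol."""
    p = p0
    for _ in range(max_iter):
        p_next = g(p)
        if abs(p_next - p) < tol:   # stopping rule: absolute error between iterates
            return p_next
        p = p_next
    raise RuntimeError("fixed-point iteration did not converge")

# Example: x = cos(x) has a unique fixed point near 0.739085 (|g'| < 1 there).
root = fixed_point(math.cos, 1.0)
print(round(root, 6))   # -> 0.739085
```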
 Newton’s Method for Solving the Equation f(x) = 0
• Newton’s method (also called the Newton–Raphson method) for solving the equation f(x) = 0,
under certain conditions on f, uses the iteration
x(r+1) = x(r) − f(x(r))/f′(x(r))

Fig: MATLAB code for solving equations by Newton’s method to a given precision.
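The figure's code is not reproduced in the deck; a Python sketch of the same Newton iteration follows (the example function and tolerance are illustrative):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson: x(r+1) = x(r) - f(x(r))/f'(x(r)), to a given precision."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton's method did not converge")

# Example: solve x^2 - 2 = 0 starting from x0 = 1; converges quadratically to sqrt(2).
root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
print(root)   # sqrt(2) to machine precision
```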

Optimization Techniques Via The Optimization Toolbox

 The Optimization Toolbox provides algorithms for solving a wide range of optimization problems.
It contains routines that put into practice the most widely used methods for minimization and
maximization.

 The toolbox includes state-of-the-art algorithms for constrained and unconstrained non-linear
minimization, minimax optimization, goal attainment, semi-infinitely constrained
minimization, quadratic and linear programming, non-linear least-squares optimization, the
solution of non-linear equations and constrained linear least-squares problems.

 Minimization Algorithms
fgoalattain Solves multiobjective goal attainment problems
fminbnd Finds the minimum of a single-variable function on a fixed interval
fmincon Finds the minimum of a constrained non-linear multivariable function
fminimax Solves minimax constraint problems
fminsearch Finds the minimum of an unconstrained multivariable function using a derivative-free method
fminunc Finds the minimum of an unconstrained multivariable function
fseminf Finds the minimum of a semi-infinitely constrained multivariable non-linear function
linprog Solves linear programming problems
quadprog Solves quadratic programming problems
bintprog Solves binary integer programming problems
rcga A real-coded genetic algorithm for process intensification
 Multiobjective Problems
A general multiobjective problem may be defined as follows:

minimize γ over (x, γ)

subject to the following restrictions:
F(x) − weight·γ ≤ goal
c(x) ≤ 0
ceq(x) = 0
A*x ≤ b
Aeq*x = beq
lb ≤ x ≤ ub
where x, weight, goal, b, beq, lb, and ub are vectors, A and Aeq are matrices, and c(x), ceq(x),
and F(x) are functions that return vectors. F(x), c(x) and ceq(x) may be non-linear functions.
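This formulation can be handed to any general NLP solver by treating γ as an extra decision variable. The SciPy sketch below is not fgoalattain itself, just the same mathematical program on toy data (the two objectives, goal and weight vectors are illustrative, not from the slides):

```python
import numpy as np
from scipy.optimize import minimize

# Toy data (illustrative): two competing objectives of a single variable x.
goal = np.array([1.5, 1.5])
weight = np.array([1.0, 1.0])

def F(x):
    return np.array([x**2, (x - 2.0)**2])

# Decision vector z = [x, gamma]; minimize gamma s.t. F(x) - weight*gamma <= goal.
res = minimize(lambda z: z[1], x0=[0.0, 10.0], method="SLSQP",
               constraints=[{"type": "ineq",
                             "fun": lambda z: goal + weight * z[1] - F(z[0])}])
x_opt, gamma = res.x
print(round(x_opt, 4), round(gamma, 4))  # x -> 1.0; negative gamma = goals overattained
```

Here both objectives balance at x = 1 with F = (1, 1), so the attainment factor is γ = -0.5 (both goals overachieved by 0.5).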

Contd…
 The function fgoalattain solves such problems with the following syntax:
x = fgoalattain(fun,x0,goal,weight,A,b,Aeq,beq,lb,ub,nonlcon)
[x,fval,attainfactor,exitflag,output,lambda] = fgoalattain(...)
 The solution of the problem is x, and fval is the value of the objective function at x. The amount of over- or
underachievement of the goals is indicated by attainfactor, exitflag describes the exit condition, output provides
information about the optimization process, and lambda contains information concerning the Lagrange multipliers.
 Non-Linear Scalar Minimization With Boundary Conditions
 A general problem of this type can be defined as follows:

min f(x) over x

subject to the restriction:
x1 < x < x2
where x, x1 and x2 are scalars and f(x) is a function that returns a scalar.
This problem is solved using the function fminbnd, whose syntax is as follows:
x = fminbnd(fun,x1,x2,options)
[x,fval] = fminbnd(...)

Non-Linear Minimization with Restrictions
 A general problem of this type can be defined as follows:

min_x f(x)

subject to the constraints:
c(x) ≤ 0
ceq(x) = 0
A*x ≤ b
Aeq*x = beq
lb ≤ x ≤ ub
where x, b, beq, lb and ub are vectors, A and Aeq are matrices and c(x), ceq(x) and F(x) are functions
that return vectors. F(x), c(x) and ceq(x) can be non-linear functions.

Contd…
 This problem is solved using the function fmincon, whose syntax is as follows:
x = fmincon(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon,options)
[x,fval,exitflag,output,lambda,grad,hessian] = fmincon(...)

 Minimax Optimization: fminimax and fminunc

A general problem of this type can be defined as follows:

min_x max_i {Fi(x)}

subject to the constraints:
c(x) ≤ 0
ceq(x) = 0
A*x ≤ b
Aeq*x = beq
lb ≤ x ≤ ub
where x, b, beq, lb and ub are vectors, A and Aeq are matrices and c(x), ceq(x) and F(x) are functions that
return vectors. F(x), c(x) and ceq(x) can be non-linear functions.

Contd…
 This problem is solved using the function fminimax, whose syntax is as follows:
x = fminimax(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon,options)
[x,fval] = fminimax(...)
This minimizes the functions, starting from the initial value x0, subject to the constraints A*x ≤ b,
Aeq*x = beq, and lb ≤ x ≤ ub.
 The function fminunc finds the minimum of a multivariate function without restrictions:

min_x f(x)

where x is a vector and f(x) is a function that returns a scalar.
x = fminunc(fun,x0,options)
[x,fval] = fminunc(...)
 fminsearch
The function fminsearch finds the minimum of a multivariate function without restrictions:

min_x f(x)

where x is a vector and f(x) is a function that returns a scalar.
The syntax is as follows:
x = fminsearch(fun,x0,options)
[x,fval] = fminsearch(...)
 Linear Programming
A general problem of this type can be defined as follows:

min_x f'*x

subject to the constraints:
A*x ≤ b
Aeq*x = beq
lb ≤ x ≤ ub
where f, x, b, beq, lb and ub are vectors and A and Aeq are matrices.
This problem is solved using the function linprog, whose syntax is as follows:
x = linprog(f,A,b,Aeq,beq,lb,ub,x0,options)
[x,fval] = linprog(...)

The optimization problem and the relevant MATLAB commands
 Optimization problems of a single decision variable

min f(x) over a ≤ x ≤ b

[x, fval] = fminbnd('fun', a, b, options)

Output argument Description
x The obtained optimal solution in the feasible range [a, b]
fval The objective function value corresponding to the obtained optimal point x

Input argument Description
fun The name of the function file provided by the user; the format of the function is as follows:
function f=fun(x)
f= ... % objective function
Besides, if the objective function is not complicated, it can be given directly by the inline command.
a Lower limit of the decision variable
b Upper limit of the decision variable
options Optional solution parameters provided by the user, such as the maximal iteration number, the accuracy level, the gradient information, and so on. Refer to the command 'optimset' for its usage.

 Solve the following optimization problem:

min x^3 + x^2 − 2x − 5 over 0 ≤ x ≤ 2

f=inline('x.^3+x.^2-2*x-5'); % use inline to provide the objective function
>> x=fminbnd(f, 0, 2)
x = 0.5486
>> y=f(x)
y = -5.6311
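The same bounded one-variable problem can be cross-checked with SciPy's bounded scalar minimizer (a sketch; the interior minimizer solves 3x^2 + 2x - 2 = 0, i.e. x = (√7 − 1)/3 ≈ 0.5486):

```python
from scipy.optimize import minimize_scalar

# Same objective as the inline('x.^3+x.^2-2*x-5') above.
f = lambda x: x**3 + x**2 - 2*x - 5

# Bounded minimization on [0, 2], the SciPy analogue of fminbnd(f, 0, 2).
res = minimize_scalar(f, bounds=(0, 2), method="bounded")
print(round(res.x, 4), round(res.fun, 4))   # -> 0.5486 -5.6311
```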
 Multivariate optimization problems without constraints

min_x f(x)

[x, fval] = fminunc('fun', x0, options)

 Solve the following unconstrained optimization problem of two variables:

min over (x1, x2): 3x1^2 + 2x1x2 + 5x2^2

Contd…
 Step 1: Provide the objective function file
function f=fun(x)
x1=x(1);
x2=x(2);
f=3*x1^2+2*x1*x2+5*x2^2; % objective function
 Step 2: Provide the initialization and call the solver
x0=[1 1]; % initial guess value
[x, fval]=fminunc('fun', x0)

x = 1.0e-006 *
-0.2047 0.2144
fval = 2.6778e-013

 Note: GradObj = gradient for the objective function defined by the user. The default 'off' causes fminunc to estimate
gradients using finite differences. You must provide the gradient and set GradObj to 'on' to use the trust-region
algorithm; this is not required for the quasi-Newton algorithm.

 Solution:
function [f, G]=fun(x)
x1=x(1);
x2=x(2);
f=3*x1^2+2*x1*x2+5*x2^2; % objective function
if nargout > 1 % the number of output arguments is more than 1
G=[6*x1+2*x2 % partial derivative with respect to x1
2*x1+10*x2]; % partial derivative with respect to x2
end
options=optimset('GradObj', 'on'); % declare that a gradient has been provided
x0=[1 1]; % initial guess
[x, fval]=fminunc('fun', x0, options)
x = 1.0e-015 *
-0.2220 0.1110
fval = 1.6024e-031
[x, fval]=fminsearch('fun', x0, options)
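The gradient-supplied variant maps directly to SciPy (a sketch: `jac=True` plays the role of GradObj 'on', telling the solver the function returns both value and gradient). The true minimum of this positive-definite quadratic is x = (0, 0):

```python
import numpy as np
from scipy.optimize import minimize

def fun(x):
    """Objective and its gradient, like the MATLAB [f, G] = fun(x) above."""
    x1, x2 = x
    f = 3*x1**2 + 2*x1*x2 + 5*x2**2
    G = np.array([6*x1 + 2*x2,      # partial derivative with respect to x1
                  2*x1 + 10*x2])    # partial derivative with respect to x2
    return f, G

res = minimize(fun, x0=[1.0, 1.0], jac=True, method="BFGS")
print(np.round(res.x, 6), res.fun)  # essentially the origin, objective ~ 0
```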

 Linear programming problems

min_x f'*x
subject to
A*x ≤ b
Aeq*x = beq
xL ≤ x ≤ xU

[x, fval]=linprog(f, A, b, Aeq, beq, xL, xU, x0, options)

 Solve the following linear programming problem:

min_x −5x1 − 4x2 − 3x3
subject to
x1 − x2 + 2x3 ≤ 30
3x1 + 2x2 + x3 ≤ 40
2x1 + 3x2 ≤ 20
x1 ≥ 0, x2 ≥ 0, x3 ≥ 0

 Solution:
f=[-5 -4 -3]';
A=[1 -1 2; 3 2 1; 2 3 0];
b=[30 40 20]';
Aeq=[ ];
beq=[ ];
xL=[0 0 0]';
xU=[ ];
[x, fval]=linprog(f, A, b, Aeq, beq, xL, xU)

x = 0.0000
6.6667
18.3333
fval = -81.6667
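SciPy's linprog solves the same LP and reproduces fval = −81.6667 (a cross-check sketch; note SciPy expresses bounds per variable rather than via xL/xU vectors):

```python
from scipy.optimize import linprog

c = [-5, -4, -3]
A_ub = [[1, -1, 2],
        [3,  2, 1],
        [2,  3, 0]]
b_ub = [30, 40, 20]

# x >= 0 for all three variables, the analogue of xL = [0 0 0]'.
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
print([round(v, 4) for v in res.x], round(res.fun, 4))
# -> [0.0, 6.6667, 18.3333] -81.6667
```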

 Quadratic programming problem

min_x (1/2)x'Hx + f'x
subject to
A*x ≤ b
Aeq*x = beq
xL ≤ x ≤ xU

[x, fval]=quadprog(H, f, A, b, Aeq, beq, xL, xU, x0, options)

 Example

min_x (1/2)x1^2 + x2^2 − x1x2 − 6x1 − 2x2
subject to
x1 − x2 ≤ 1
−x1 + 2x2 ≤ 2
2x1 + x2 ≤ 3
x1 ≥ 0, x2 ≥ 0

 Solution
H=[1 -1; -1 2];
f=[-6 -2]';
A=[1 -1; -1 2; 2 1]; % inequality coefficient matrix
b=[1 2 3]'; % inequality vector
xL=[0 0]'; % lower limit of the variables
[x, fval]=quadprog(H, f, A, b, [ ], [ ], xL, [ ])

 Note: the H matrix of the quadratic objective function can be obtained easily by symbolic math operations:
H=jacobian(jacobian(f))
H = [ 1, -1]
[ -1, 2]
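SciPy has no direct quadprog equivalent; the same QP can be sketched with a general NLP solver using the H, f data above (SLSQP handles the linear inequality constraints and bounds):

```python
import numpy as np
from scipy.optimize import minimize

H = np.array([[1.0, -1.0], [-1.0, 2.0]])
f = np.array([-6.0, -2.0])
A = np.array([[1.0, -1.0], [-1.0, 2.0], [2.0, 1.0]])
b = np.array([1.0, 2.0, 3.0])

# Quadratic objective (1/2) x'Hx + f'x, as passed to quadprog.
obj = lambda x: 0.5 * x @ H @ x + f @ x

res = minimize(obj, x0=[0.0, 0.0], method="SLSQP",
               constraints=[{"type": "ineq", "fun": lambda x: b - A @ x}],
               bounds=[(0, None), (0, None)])
print(np.round(res.x, 4), round(res.fun, 4))
```

With these data the minimizer lands on the face 2x1 + x2 = 3, at x = (17/13, 5/13) ≈ (1.3077, 0.3846) with objective −211/26 ≈ −8.1154.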

 The constrained nonlinear optimization problem

min_x f(x)
subject to
c(x) ≤ 0
ceq(x) = 0
A*x ≤ b
Aeq*x = beq
xL ≤ x ≤ xU

[x, fval]= fmincon('fun', x0, A, b, Aeq, beq, xL, xU, 'nonlcon', options)

 Example

min_x f(x) = −x1·x2^2·x3
subject to
−x1 − 2x2 − 2x3 ≤ 0
x1 + 2x2 + 2x3 ≤ 72
x2·x3 = 144
x1·x3 − (1/2)x2^2 ≤ 380

 Solution
x0=[10 10 10]';
A=[-1 -2 -2; 1 2 2];
b=[0 72]';
[x, fval]=fmincon(@objfun, x0, A, b, [ ], [ ], [ ], [ ], @nonlcon);

objfun.m:
function f=objfun(x)
x1=x(1);
x2=x(2);
x3=x(3);
f=-x1*x2^2*x3;

nonlcon.m:
function [c,ceq]=nonlcon(x)
x1=x(1);
x2=x(2);
x3=x(3);
c=x1*x3-0.5*x2^2-380;
ceq=x2*x3-144;
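A SciPy cross-check of this fmincon example (a sketch; SLSQP takes the nonlinear constraints as callables). Eliminating x3 = 144/x2 on the active face x1 + 2x2 + 2x3 = 72 gives the analytic optimum x = (20, 18, 8), f = −51840:

```python
import numpy as np
from scipy.optimize import minimize

objfun = lambda x: -x[0] * x[1]**2 * x[2]

# SciPy "ineq" constraints mean fun(x) >= 0, so each slide inequality is rearranged.
cons = [
    {"type": "ineq", "fun": lambda x: x[0] + 2*x[1] + 2*x[2]},           # -x1-2x2-2x3 <= 0
    {"type": "ineq", "fun": lambda x: 72 - (x[0] + 2*x[1] + 2*x[2])},    # x1+2x2+2x3 <= 72
    {"type": "ineq", "fun": lambda x: 380 - (x[0]*x[2] - 0.5*x[1]**2)},  # x1*x3 - x2^2/2 <= 380
    {"type": "eq",   "fun": lambda x: x[1]*x[2] - 144},                  # x2*x3 = 144
]

res = minimize(objfun, x0=[10.0, 10.0, 10.0], method="SLSQP", constraints=cons)
print(np.round(res.x, 4), round(res.fun, 2))   # expected x ~ (20, 18, 8), f ~ -51840
```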

 Minimax problem

min_x max_i { f1(x), f2(x), …, fn(x) }
subject to
c(x) ≤ 0
ceq(x) = 0
A*x ≤ b
Aeq*x = beq
xL ≤ x ≤ xU

[x, fval]= fminimax('fun', x0, A, b, Aeq, beq, xL, xU, 'nonlcon', options)

 Example

min_x max_i fi(x)
where
f1 = x1^2 + x2^4
f2 = (2 − x1)^2 + (2 − x2)^2
f3 = 2*exp(x2 − x1)

 Solution
x0=[2 2]';
[x, fval, fmax]=fminimax('objfunc', x0);

objfunc.m:
function f = objfunc(x)
x1=x(1);
x2=x(2);
f(1) = x1^2+x2^4;
f(2) = (2-x1)^2+(2-x2)^2;
f(3) = 2*exp(x2-x1);
f = [f(1) f(2) f(3)]';
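fminimax can be emulated with a general solver via the standard epigraph trick, min γ subject to fi(x) ≤ γ. This SciPy sketch runs the same three-function example (the starting γ is an illustrative feasible value; at x = (1, 1) all three functions equal 2, and the minimax value lies slightly below that):

```python
import numpy as np
from scipy.optimize import minimize

def F(x):
    x1, x2 = x
    return np.array([x1**2 + x2**4,
                     (2 - x1)**2 + (2 - x2)**2,
                     2 * np.exp(x2 - x1)])

# Epigraph form: variables z = [x1, x2, gamma]; minimize gamma s.t. F(x) <= gamma.
res = minimize(lambda z: z[2], x0=[2.0, 2.0, 25.0], method="SLSQP",
               constraints=[{"type": "ineq", "fun": lambda z: z[2] - F(z[:2])}])
x_opt, gamma = res.x[:2], res.x[2]
print(np.round(x_opt, 4), round(gamma, 4))
```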

 Application of the real-coded genetic algorithm to solve optimization problems

min_x f(x)
subject to:
c(x) ≤ 0
ceq(x) = 0
A*x ≤ b
Aeq*x = beq
xL ≤ x ≤ xU

[x, fval]= rcga('fun', A, b, Aeq, beq, xL, xU, 'nonlcon', param)

Param Description
param(1) Number of chromosomes in the population, N
param(2) Reproduction probability, pr
param(3) Crossover probability threshold, pc
param(4) Mutation probability, pm
param(5) Maximum number of generations, G


 Fundamental principles of the real-coded genetic algorithm
 The genetic algorithm (GA) is a population-based stochastic approach that mimics natural selection
and survival of the fittest in the biological world.
 Let θ = [θ1, θ2, …, θn] be a set of possible solutions.
 The feasible solution set Ωθ is expressed as follows:
Ωθ = {θ ∈ R^n | θi,min ≤ θi ≤ θi,max, i = 1, 2, …, n}
 Reproduction
Reproduction is an evolutionary process that determines which chromosomes should be eliminated or
reproduced in the population.
Using this scheme, the pr × N chromosomes having the worst objective function values are discarded and,
at the same time, pr × N chromosomes having better objective function values are reproduced.
 Crossover
The crossover operation, which acts as a scheme to blend the parents’ information when generating the
offspring chromosomes, is well recognized as the most important and effective search mechanism in
RCGAs.

Contd…
 if obj(θ1) < obj(θ2)
θ1 ← θ1 + r·(θ1 − θ2)
θ2 ← θ2 + r·(θ1 − θ2)
else
θ1 ← θ1 + r·(θ2 − θ1)
θ2 ← θ2 + r·(θ2 − θ1)
end
 Mutation
 Mutation serves as a process that randomly changes a gene of a chromosome (solution) in order to increase the
population diversity and thus prevent the population from premature convergence to a suboptimal solution.
θ ← θ + s × Φ
 The operational procedure of the RCGA
Step 1: Set the number of chromosomes in the population, N, and the relevant probability values pr, pc, and pm.
Meanwhile, the maximum number of generations G is also assigned.
Step 2: Randomly produce N chromosomes from the feasible domain Ωθ.
Step 3: Evaluate the corresponding objective function value for each chromosome in the population.
Step 4: The genetic evolution terminates once the computation process has reached the specified maximum number
of generations or the objective function has attained an optimal value.
Contd…
Step 5: Implement the operations of reproduction, crossover, and mutation in the genetic evolution. Note that if the
chromosomes of the offspring produced exceed the feasible range Ωθ, the chromosomes before the evolution are
retained.
Step 6: Go back to Step 3.
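The steps above can be sketched as a compact RCGA in Python (parameter values, the test function, and the mutation scale are illustrative; offspring leaving the feasible box are handled per Step 5 by keeping the parent, while mutation is simply clipped to the box as a simplification):

```python
import numpy as np

def rcga(fun, xL, xU, N=40, pr=0.2, pc=0.6, pm=0.1, G=200, seed=0):
    """Minimal real-coded GA following the slide's Steps 1-6."""
    rng = np.random.default_rng(seed)
    xL, xU = np.asarray(xL, float), np.asarray(xU, float)
    pop = rng.uniform(xL, xU, size=(N, xL.size))       # Step 2: random chromosomes
    for _ in range(G):                                 # Step 4: generation limit
        obj = np.array([fun(p) for p in pop])          # Step 3: evaluate
        order = np.argsort(obj)
        k = int(pr * N)                                # reproduction: replace the
        pop[order[-k:]] = pop[order[:k]]               # worst pr*N by the best pr*N
        for i in range(0, N - 1, 2):                   # crossover: move both parents
            a, b = pop[i].copy(), pop[i + 1].copy()    # toward the better one
            if rng.random() < pc:
                r = rng.random()
                d = (a - b) if fun(a) < fun(b) else (b - a)
                ca, cb = a + r * d, b + r * d
                if np.all((ca >= xL) & (ca <= xU)): pop[i] = ca      # Step 5: keep
                if np.all((cb >= xL) & (cb <= xU)): pop[i + 1] = cb  # parent if outside
        mask = rng.random(pop.shape) < pm              # mutation: random gene change
        pop = np.clip(pop + mask * rng.normal(0, 0.1, pop.shape), xL, xU)
    best = min(pop, key=fun)
    return best, fun(best)

# Illustrative run: minimize the sphere function on [-5, 5]^2 (true minimum 0 at the origin).
x, f = rcga(lambda x: float(np.sum(x**2)), [-5, -5], [5, 5])
print(np.round(x, 3), round(f, 4))
```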
 Example

min_x f(x) = 5.3578547·x3^2 + 0.8356891·x1·x5 + 37.293239·x1 − 40792.141
subject to:
0 ≤ 85.334407 + 0.0056858·x2·x5 + 0.00026·x1·x4 − 0.0022053·x3·x5 ≤ 92
90 ≤ 80.51249 + 0.0071317·x2·x5 + 0.0029955·x1·x2 + 0.0021813·x3^2 ≤ 110
20 ≤ 9.300961 + 0.0047026·x3·x5 + 0.0012547·x1·x3 + 0.0019085·x3·x4 ≤ 25
78 ≤ x1 ≤ 102
33 ≤ x2 ≤ 45
27 ≤ x3, x4, x5 ≤ 45

Contd…
 Solution:
N=200; % number of chromosomes in the population
Pr=0.2; % reproduction probability
Pc=0.3; % crossover probability threshold
Pm=0.7; % mutation probability
G=500; % maximum number of generations
A=[ ];
b=[ ];
Aeq=[ ];
beq=[ ];
xL=[78 33 27 27 27]; % lower bound
xU=[102 45 45 45 45]; % upper bound
param=[N Pr Pc Pm G]; % RCGA parameter vector
[x, f]=rcga('fun', A, b, Aeq, beq, xL, xU, 'nonlcon', param);

Contd…
 fun.m
function f=fun(x)
x1=x(1);
x2=x(2);
x3=x(3);
x4=x(4);
x5=x(5);
f=5.3578547*x3^2+0.8356891*x1*x5+37.293239*x1-40792.141;
 nonlcon.m
function [c, ceq]=nonlcon(x)
x1=x(1);
x2=x(2);
x3=x(3);
x4=x(4);
x5=x(5);
f1=85.334407+0.0056858*x2*x5+0.00026*x1*x4-0.0022053*x3*x5;
f2=80.51249+0.0071317*x2*x5+0.0029955*x1*x2+0.0021813*x3^2;
f3=9.300961+0.0047026*x3*x5+0.0012547*x1*x3+0.0019085*x3*x4;
c=[-f1; f1-92; 90-f2; f2-110; 20-f3; f3-25];
ceq=[ ];