
Exercise 1: Convex Optimal Control and Sequential Convex Programming

Moritz Diehl, Rien Quirynen, Mario Zanon

The aim of this exercise is to formulate and solve optimal control problems using YALMIP together
with the convex solver SDPT3. We start with a standard linear quadratic optimal control problem
as it arises in MPC, and then add an elliptical terminal constraint. Then we add a nonconvex
equality constraint and treat the problem with Sequential Convex Programming (SCP). The
appendices contain background information on YALMIP and on SCP.

1 Exercise Tasks
1. Install the MATLAB-based optimization environment YALMIP and the solver SDPT3, which
   is interfaced to YALMIP, on your computer.

2. Solve the simple LP described in the appendix, first using the MATLAB solver linprog,
   afterwards with the solver SDPT3.

3. Now we want to formulate and solve a simple convex optimal control problem. To this end,
   consider the linear system
   \[ x_{k+1} = A x_k + B u_k \]
   with matrices
   \[ A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} \quad \text{and} \quad B = \begin{pmatrix} 0 \\ 1 \end{pmatrix}. \]
   Formulate and solve with YALMIP (in combination with e.g. quadprog) the problem
   \[
   \begin{aligned}
   \underset{x,u}{\operatorname{minimize}} \quad & \sum_{k=0}^{N-1} \|x_k\|_2^2 + \|u_k\|_2^2 \\
   \text{subject to} \quad & x_{k+1} = A x_k + B u_k, \quad k = 0, \ldots, N-1, \\
   & -1 \le u_k \le 1, \quad k = 0, \ldots, N-1, \\
   & x_0 = [10, 0]^\top, \\
   & x_N = 0.
   \end{aligned}
   \]
   Choose N = 20.
   TIP: With the following two commands you can make an appropriate cell array of sdpvar
   objects for the states as well as the controls:

   u = sdpvar(ones(1,N), ones(1,N));
   x = sdpvar(2*ones(1,N+1), ones(1,N+1));

   This way it is easy to access, for example, the state at a specific node via x{i}.
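   A possible YALMIP formulation of this problem is sketched below; this is only one way to
   set it up, not part of the original exercise statement, and the solver choice quadprog just
   follows the hint above:

   % Sketch for task 3: build and solve the convex optimal control problem.
   A = [1 1; 0 1];  B = [0; 1];  N = 20;
   u = sdpvar(ones(1,N), ones(1,N));        % cell array of 1x1 control variables
   x = sdpvar(2*ones(1,N+1), ones(1,N+1));  % cell array of 2x1 state variables

   J = 0;
   C = [x{1} == [10; 0]];                   % initial condition x_0 = [10, 0]^T
   for k = 1:N
       J = J + x{k}'*x{k} + u{k}'*u{k};     % stage cost ||x_k||_2^2 + ||u_k||_2^2
       C = [C; x{k+1} == A*x{k} + B*u{k}];  % linear dynamics
       C = [C; -1 <= u{k} <= 1];            % control bounds
   end
   C = [C; x{N+1} == [0; 0]];               % terminal constraint x_N = 0

   options = sdpsettings('solver', 'quadprog');
   solvesdp(C, J, options);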

4. Visualize the solution trajectories of states and controls.
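   One way to visualize the trajectories, assuming the variables from the sketch in task 3 are
   in the workspace (the plotting style is of course arbitrary):

   % Extract the numerical solution from the sdpvar cell arrays and plot it.
   Xopt = zeros(2, N+1);  Uopt = zeros(1, N);
   for k = 1:N+1, Xopt(:,k) = double(x{k}); end
   for k = 1:N,   Uopt(k)   = double(u{k}); end
   subplot(2,1,1); plot(0:N, Xopt');    xlabel('k'); ylabel('x_k'); legend('x_1', 'x_2');
   subplot(2,1,2); stairs(0:N-1, Uopt); xlabel('k'); ylabel('u_k');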

5. Now replace the terminal constraint in the above problem by a convex quadratic constraint,
   namely by
   \[ x_N^\top P x_N \le 0.1. \]
   Here, P is chosen to be the solution of the discrete time algebraic Riccati equation
   \[ P = I + A^\top P A - A^\top P B (1 + B^\top P B)^{-1} B^\top P A, \]
   that we can e.g. solve by a Picard iteration, as follows:
   \[ P_0 = I, \qquad P_{k+1} = I + A^\top P_k A - A^\top P_k B (1 + B^\top P_k B)^{-1} B^\top P_k A. \]
   To check your results or to save time, here is the solution P for our problem:
   \[ P = \begin{pmatrix} 2.947122966707013 & 2.369205407092466 \\ 2.369205407092466 & 4.613134260996180 \end{pmatrix}. \]

Note that you should use a different solver (e.g. SDPT3 instead of quadprog) now because
of the convex quadratic constraint.
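   A minimal sketch of the Picard iteration and of the modified problem, continuing the sketch
   from task 3 (the number of iterations and the solver are example choices):

   % Sketch for task 5: compute P by Picard iteration.
   P = eye(2);
   for i = 1:200                            % simple fixed-point iteration
       P = eye(2) + A'*P*A - A'*P*B*(B'*P*A)/(1 + B'*P*B);
   end
   % Rebuild J and C as in task 3 but WITHOUT the terminal equality x_N = 0,
   % then add the convex quadratic terminal constraint and solve with SDPT3:
   C = [C; x{N+1}'*P*x{N+1} <= 0.1];
   options = sdpsettings('solver', 'sdpt3');
   solvesdp(C, J, options);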

6. Now we want to apply the concept of Sequential Convex Programming (SCP) to a nonconvex
   optimal control problem. The idea of the SCP method is explained in the appendix. We will
   regard nonlinear dynamics in the next exercise sheet, but for learning how the SCP method
   works, let us for simplicity now just add one nonlinear equality constraint to the convex
   optimization problem, namely an equality constraint on the tenth state:
   \[ \|x_{10}\|_2^2 = 4. \]
   Note that the linearization of this constraint at some point $\bar{x}_{10}$ is given by
   \[ \|\bar{x}_{10}\|_2^2 + 2 \bar{x}_{10}^\top (x_{10} - \bar{x}_{10}) = 4. \]
   Adding this constraint to the above problem results in a convex problem that can reliably
   be solved with YALMIP/SDPT3. Iteratively updating the value of $\bar{x}_{10}$ with the most recent
   solution implements the SCP method. In order to enforce convergence of the SCP method
   to a local minimum, it might be necessary to add a regularization term of the form
   \[ \frac{\lambda}{2} \|x_{10} - \bar{x}_{10}\|_2^2 \]
   to the objective in each subproblem, with a sufficiently large positive constant $\lambda$. Note that
   the convex subproblem depends only on the value of $\bar{x}_{10}$, and that all other quantities
   are uniquely determined by solving the convex subproblem.
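   One possible realization of this SCP loop, continuing the earlier sketches; the initial guess
   for xbar10, the value of lambda, and the fixed iteration count are arbitrary choices. Note
   that x{11} corresponds to x_10, since x{1} holds x_0:

   % Sketch for task 6: full-step SCP with Levenberg-Marquardt-type regularization.
   lambda  = 10;                            % regularization weight (tuning parameter)
   xbar10  = [1; 1];                        % initial linearization point
   xhist   = xbar10;                        % store the iterates (used in task 7)
   options = sdpsettings('solver', 'sdpt3');
   for it = 1:20
       % Linearized constraint ||xbar_10||^2 + 2 xbar_10'(x_10 - xbar_10) = 4:
       Clin = [C; xbar10'*xbar10 + 2*xbar10'*(x{11} - xbar10) == 4];
       Jreg = J + lambda/2*(x{11} - xbar10)'*(x{11} - xbar10);
       solvesdp(Clin, Jreg, options);
       xbar10 = double(x{11});              % update the linearization point
       xhist  = [xhist, xbar10];
   end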

7. Run the SCP algorithm with different values of $\lambda$, and plot one component of $x_{10}$ versus
   the iteration number for each of them. What speed of convergence do you observe, and
   how does it depend on $\lambda$?
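   If the iterates are recorded as in the sketch above (one column of xhist per SCP iteration),
   the requested plot could be produced as follows:

   % Sketch for task 7: first component of x_10 over the SCP iterations.
   plot(0:size(xhist,2)-1, xhist(1,:), 'o-');
   xlabel('SCP iteration'); ylabel('first component of x_{10}');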

2 Appendix: YALMIP

YALMIP (Yet Another LMI (linear matrix inequality) Parser) is a modeling language for
advanced modeling and solution of convex and nonconvex optimization problems. It is
implemented as a free (as in no charge) toolbox for MATLAB. Further information at:

http://users.isy.liu.se/johanl/yalmip/

Note: YALMIP is only the modeling language; to solve the problems, it needs some underlying
solvers, e.g., quadprog, CPLEX, SeDuMi, SDPT3, etc. Make sure that you have one of those
solvers on your computer.
YALMIP can solve many types of convex problems, such as linear programming, convex quadratic
programming, quadratically constrained quadratic programming, linear second-order cone
programming, and linear semidefinite programming (of course, depending on the solvers that
you have).

2.1 How to download and install YALMIP?


(a) Create a directory on your computer (e.g. ex1/).

(b) Download YALMIP from

    http://www.control.isy.liu.se/~johanl/YALMIP.zip

    to the directory that you created in step (a) and unzip it.

(c) Start MATLAB and go to the directory that you created in step (a).

(d) Add the path of YALMIP to the MATLAB path by entering the command

    >> addpath(genpath('yalmip'))

(e) Test your YALMIP installation by invoking

    >> yalmiptest

2.2 Getting familiar with YALMIP

After installing YALMIP, solve the optimization problem

\[
\begin{aligned}
\min_{x,y,z} \quad & 4x + 7y - 3z \\
\text{s.t.} \quad & x \ge 7, \\
& y \ge -4, \\
& z = 10
\end{aligned}
\]

by the following MATLAB code

% Declaring variables
x = sdpvar(1,1);
y = sdpvar(1,1);
z = sdpvar(1,1);

% We could also use
% x = sdpvar(3,1);

% Defining the objective function
f = 0;
f = f + 4 * x;
f = f + 7 * y;
f = f - 3 * z;

% Defining the constraints
C = [];
C = [C; x >= 7];
C = [C; y >= -4];
C = [C; z == 10];

options = sdpsettings('solver', 'linprog');

diagn = solvesdp(C, f, options);

xopt(1) = double(x);
xopt(2) = double(y);
xopt(3) = double(z);
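For the second part of task 2 (solving the same LP with SDPT3), only the solver option needs
to change; a minimal variant, assuming SDPT3 is installed and found by YALMIP:

% Solve the same LP with SDPT3 instead of linprog.
options = sdpsettings('solver', 'sdpt3');
diagn = solvesdp(C, f, options);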

3 Appendix: Sequential Convex Programming

Generally speaking, SCP is an easily implementable method to solve a nonlinear optimization
problem with convex objective f(x) and convex constraints h(x) \le 0, which also has
some nonlinear equality constraints g(x) = 0 that render the problem nonconvex:
\[
\begin{aligned}
\underset{x}{\operatorname{minimize}} \quad & f(x) \\
\text{subject to} \quad & h(x) \le 0, \\
& g(x) = 0.
\end{aligned}
\tag{5}
\]
The full-step SCP method solves in each iteration a convex approximation that is based on
a linearization of g at the latest iterate, say $\bar{x}$. Based on $\bar{x}$, it generates the next iterate as
the solution of a regularized convex problem, with $H \succeq 0$:

\[
\begin{aligned}
\underset{x}{\operatorname{minimize}} \quad & f(x) + \frac{1}{2} (x - \bar{x})^\top H (x - \bar{x}) \\
\text{subject to} \quad & h(x) \le 0, \\
& g(\bar{x}) + \frac{\partial g}{\partial x}(\bar{x}) (x - \bar{x}) = 0.
\end{aligned}
\tag{6}
\]
It can be shown that the SCP method, if it converges, converges to a local minimum of the
original nonconvex problem. Its convergence speed is linear. By choosing the matrix H
sufficiently large (in the positive definite sense) for a given local minimizer, one can ensure
that the SCP method is attracted by this local minimizer (to be specific, H is ideally a
positive semidefinite matrix that satisfies $H \succeq \sum_{i=1}^{n_g} \lambda_i^* \nabla^2 g_i(x^*)$). On the other hand, by
choosing H too big, one will create a very slow linear convergence rate. In practice, we can
regard H as a tuning parameter, and might call the quadratic term a Levenberg-Marquardt
regularization term. A detailed convergence analysis and some variants of the SCP method
are given in the paper [1].
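To relate (6) to task 6 above: there the only nonconvex constraint is $g(x_{10}) = \|x_{10}\|_2^2 - 4 = 0$,
whose Jacobian is $\frac{\partial g}{\partial x_{10}}(\bar{x}_{10}) = 2 \bar{x}_{10}^\top$, so the linearized equality constraint in (6) becomes
\[ \|\bar{x}_{10}\|_2^2 - 4 + 2 \bar{x}_{10}^\top (x_{10} - \bar{x}_{10}) = 0, \]
which is exactly the linearized constraint used in task 6.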

References
[1] Q. Tran-Dinh, C. Savorgnan, and M. Diehl. Adjoint-based Predictor-Corrector Sequen-
tial Convex Programming for Parametric Nonlinear Optimization. SIAM J. Optimization,
22(4):1258–1284, 2012.
