
ECE 2240: Numerical Methods for Engineers

Winter 2015

Lecture 25: Hyperbolic PDEs and an Intro to Optimization


Ian Jeffrey

April 7, 2015

This lecture is based on and uses material from Numerical Methods for Engineers, 6th edition by S. Chapra and R. Canale and covers therein:
- A brief discussion of hyperbolic PDEs (not in textbook).
- An introduction to optimization (Part 4 - PT4).
The purpose of this lecture is to:
- Briefly discuss approaches for solving hyperbolic PDEs using finite differences.
- Introduce optimization from a conceptual perspective.

Important: These lecture notes are a work in progress and may contain errors and/or typos.
Please do not distribute without the author's consent.

25.1 Hyperbolic PDEs
Consider the wave equation:


\frac{\partial^2 V}{\partial t^2} - c^2 \frac{\partial^2 V}{\partial x^2} = 0

The solution to this hyperbolic PDE is V(t, x) that 1) satisfies the partial differential equation and 2) satisfies auxiliary conditions required to specify the constants of integration.
Our interpretation of the independent variables is that t represents time and x represents
one spatial dimension. Given that we wish to solve this PDE for all t > 0 and x \in [a, b], one
set of possible auxiliary conditions is:
- V(0, x) is given (initial conditions).
- V(t, a) and V(t, b) are given (Dirichlet boundary conditions).
As we require both initial and boundary conditions to solve this PDE we refer to it as an
initial boundary value problem (IBVP).
Example (Transmission Line): For an ideal lossless transmission line we have per-unit-length (PUL) parameters L (PUL inductance) and C (PUL capacitance). The voltage
along the transmission line satisfies:

\frac{\partial^2 V}{\partial t^2} - \frac{1}{LC} \frac{\partial^2 V}{\partial x^2} = 0
One solution method is to reduce the second-order PDE into a system of two first-order
PDEs:

\frac{\partial V}{\partial t} + \frac{1}{C} \frac{\partial I}{\partial x} = 0 \quad (1)

\frac{\partial I}{\partial t} + \frac{1}{L} \frac{\partial V}{\partial x} = 0 \quad (2)

We can check that these two equations represent the original PDE by taking the derivative
of (1) with respect to time:

\frac{\partial^2 V}{\partial t^2} + \frac{1}{C} \frac{\partial^2 I}{\partial t \partial x} = 0 \quad (3)

and the derivative of (2) with respect to space:

\frac{\partial^2 I}{\partial x \partial t} + \frac{1}{L} \frac{\partial^2 V}{\partial x^2} = 0 \quad (4)


Substituting for the second derivative of I with respect to space and time from (4) into (3)
gives:
\frac{\partial^2 V}{\partial t^2} - \frac{1}{LC} \frac{\partial^2 V}{\partial x^2} = 0
as required. Therefore solving (1) and (2) simultaneously is equivalent to solving the original
wave equation problem. The value

c = \frac{1}{\sqrt{LC}}

is the velocity of propagation along the line and I(t, x) represents the current.
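As a quick sanity check of this reduction, here is a small symbolic sketch using SymPy (not part of the original notes): it differentiates the left-hand sides of (1) and (2), as in (3) and (4), and combines them to recover the wave equation.

```python
import sympy as sp

t, x = sp.symbols('t x')
L, C = sp.symbols('L C', positive=True)
V = sp.Function('V')(t, x)
I = sp.Function('I')(t, x)

# Left-hand sides of (1) and (2)
lhs1 = sp.diff(V, t) + sp.diff(I, x) / C
lhs2 = sp.diff(I, t) + sp.diff(V, x) / L

# Differentiate (1) w.r.t. time and (2) w.r.t. space, as in (3) and (4)
eq3 = sp.diff(lhs1, t)   # V_tt + (1/C) I_tx
eq4 = sp.diff(lhs2, x)   # I_tx + (1/L) V_xx

# Since eq4 = 0, subtracting eq4/C eliminates the mixed derivative of I
wave = sp.simplify(eq3 - eq4 / C)
print(wave)   # -> V_tt - V_xx/(L*C), the wave equation, as required
```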
In order to solve the coupled PDEs using a finite difference approach we can follow this procedure:
1. Discretize V and I by introducing points x_i within the domain [a, b].
2. At each point, approximate the spatial derivatives using some finite difference approximation. This results in a system of linear equations that represents the spatial derivative operators.
3. Approximate the temporal (time) derivatives, i.e., numerically integrate the resulting system using an RK (Runge-Kutta) method.
The important thing to realize is that you end up having to carry a linear system of equations
representing the spatial derivatives at each step of the RK method. And this is for a single
spatial dimension! You can imagine the size of the system of equations when you are trying
to solve a three-dimensional problem.
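A minimal method-of-lines sketch of this procedure in Python follows. All values here (unit line length, L = C = 1, the Gaussian initial pulse, the time step) are illustrative assumptions, not from the notes, and np.gradient stands in for the linear system of spatial-derivative equations from step 2.

```python
import numpy as np

Lp, Cp = 1.0, 1.0            # assumed PUL inductance and capacitance (so c = 1)
a, b, n = 0.0, 1.0, 101      # domain [a, b] with n grid points
x = np.linspace(a, b, n)
dx = x[1] - x[0]

def rhs(y):
    """Spatial part of (1)-(2). y stacks [V; I]; np.gradient applies
    central differences inside the domain, one-sided at the ends."""
    V, I = y[:n], y[n:]
    dVdt = -np.gradient(I, dx) / Cp     # from (1): V_t = -(1/C) I_x
    dIdt = -np.gradient(V, dx) / Lp     # from (2): I_t = -(1/L) V_x
    return np.concatenate([dVdt, dIdt])

def rk4_step(y, dt):
    """Classical fourth-order Runge-Kutta step (step 3 of the procedure)."""
    k1 = rhs(y)
    k2 = rhs(y + 0.5 * dt * k1)
    k3 = rhs(y + 0.5 * dt * k2)
    k4 = rhs(y + dt * k3)
    return y + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

# Initial conditions: an assumed Gaussian voltage pulse, zero current
y = np.concatenate([np.exp(-((x - 0.5) / 0.05) ** 2), np.zeros(n)])
dt = 0.5 * dx                # time step kept below dx/c (see Key Concept below)
for _ in range(400):
    y = rk4_step(y, dt)
    y[0] = y[n - 1] = 0.0    # Dirichlet boundary conditions V(t, a) = V(t, b) = 0
```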
Key Concept: Just so you are aware, you must be very careful how you set up your
spatial derivative approximations and your time step when dealing with hyperbolic PDEs.
Information flows in a particular direction depending on the current state of the solution and
this must be taken into account to ensure accuracy and stability. This was not mentioned
in class and is beyond the scope of this course.

25.2 Optimization

Many design problems in engineering (and other disciplines) boil down to getting the best
value out of a system. There are a variety of meanings behind the word best:
1. Sometimes best is very specific:
- Example: Design a filter that only passes values in a specific frequency range.
- Example: Design an MRI magnet that has a specific static magnetic field inside the magnet bore.
2. Sometimes best is just the best we can do:
- Example: Design an engine to give the maximum fuel economy possible.
- Example: Design a circuit that minimizes power consumption.
- Example: Design a widget (some device) for doing a certain task for the least cost.
Most of the time, as illustrated by the previous examples, we must have constraints
on the design. For example, an engine with maximum fuel economy should probably be able
to move a car (not just some toy model) and should probably not cost an infinite amount of
money to build. Another example is designing a particular circuit, in which case
resistor values, for example, must be positive. Still, in some cases there are no constraints:
Example: Solving a matrix equation Ax = b can be cast as an optimization problem:

\text{minimize } \|b - Ax\|_2 \quad \text{(or in some other norm)}

In this case the values of x can be anything, although on occasion it is beneficial to constrain
them.
As an aside, many advanced methods for solving matrix equations are based on this type of
optimization. Notice that this optimization problem is essentially asking us to find the x that
solves the matrix equation. It does not, however, ask us to invert the matrix or even systematically
solve the system using, say, LU decomposition.
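As a hedged sketch (not from the notes), here is how this least-squares reformulation might look when handed to a general-purpose minimizer; the matrix and right-hand side are arbitrary illustrative values.

```python
import numpy as np
from scipy.optimize import minimize

# Arbitrary illustrative system Ax = b
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

def f(x):
    """Objective: squared 2-norm of the residual, f(x) = ||b - Ax||^2."""
    r = b - A @ x
    return r @ r

res = minimize(f, x0=np.zeros(2))   # unconstrained: x can be anything
print(res.x)                        # matches np.linalg.solve(A, b)
```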

25.2.1 A General Optimization Problem

We can state a general optimization problem mathematically as follows:


A general optimization problem:
Find x that minimizes or maximizes f(x) subject to the following constraints:

d_i(x) \le a_i, \quad i = 1, 2, \ldots, m
e_i(x) = b_i, \quad i = 1, 2, \ldots, p
Some comments on the notation follow:
- x is an array (vector) that contains the design variables. It contains as many design variables as are necessary to define the optimization problem.
- f(x) is referred to as the objective function (sometimes the cost functional). It maps a vector of design variables to a single (real) number.
- The equations d_i(x) \le a_i are referred to as inequality constraints. They say that some combination of the design variables is less than or equal to a_i. The simplest combination could be that some single design variable (element in x) is less than or equal to some value.
- The equations e_i(x) = b_i are referred to as equality constraints and specify that some combination of the design variables is equal to b_i. Notice that if a constraint were as simple as one design variable being equal to b_i then we would remove this design variable from the optimization problem, as the equality constraint tells us that its value is known.
Given the nature of the function f(x) and the constraints we have different types of optimization problems:
- If the objective function and constraints are linear then we have a linear programming problem.
- If the objective function is quadratic and the constraints are linear then we have a quadratic programming problem.
- If the objective function and/or the constraints are nonlinear then we have a nonlinear programming problem (or a general nonlinear optimization problem).
It is important to realize that x just holds all of the design variables you want to vary in
order to produce an optimal design/solution. We illustrate by example:

Figure 25.1: Abstract circuit used to motivate an optimization problem. The goal is to
maximize the power delivered to the load by changing the design variables R_1, R_2, R_3 and
V_s. R_L = 50 Ω is assumed to be a known quantity and is not included in the optimization
variables.
Example (Optimization Problem): Consider the abstract circuit shown in Figure
25.1. The circuit may be a function of a number of variables and parameters but we will
limit ourselves to being able to change three resistor values R_1, R_2 and R_3 and one voltage
source V_s. Our goal is to maximize the power delivered to the load R_L = 50 Ω.
Computing the power delivered to the load for a given circuit design requires choosing values for R_1, R_2, R_3 and V_s, solving the circuit, extracting the load voltage V_L and
then computing P_L = V_L^2 / R_L. We will let f(x) represent this whole operation, where

x = [R_1, R_2, R_3, V_s]^T

where T denotes transposition. So our optimization problem is:

\text{maximize } P_L = f(x)
If we want the resistor values to be positive we need the constraints:

R_1 \ge 0
R_2 \ge 0
R_3 \ge 0

Notice that these are not in the form specified by our general optimization problem with
inequality constraints. We can rewrite these constraints as:

-R_1 \le 0
-R_2 \le 0
-R_3 \le 0

where they are now in the correct form.

It may seem strange to impose that inequality constraints contain \le. However, this makes
it easier for us to handle only one type of condition. Any inequality constraint can be
cast in this form, so it is not a limitation. Now, from a practical perspective we may also
have constraints on the voltage:

V_s \le 100
-V_s \le 0

which tells us that the voltage V_s is both less than or equal to 100 and greater than or
equal to zero. Of course, the specifics of the constraints depend on the limits you
face as a design engineer. We are just picking these at random to illustrate.
Now suppose that we had some condition that imposed that R_1 = R_2 + R_3. We
won't ask why; we will simply show how an equality constraint can enforce this condition.
We have:

R_1 - R_2 - R_3 = 0

We have now completely specified the optimization problem. The objective function is
properly defined, the design variables are known and both the inequality and equality
constraints have been written down.
We will not have time to illustrate how to perform optimization with constraints (inequality
or equality) in our brief foray into optimization. The hope is that the exposure will help you
the next time you pick up this subject.
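For the curious, here is a hedged sketch (beyond the scope of this course, and not from the notes) of how the example above could be handed to a library solver. The series-loop formula for V_L is an invented stand-in for an actual circuit solver, and the starting point is arbitrary.

```python
import numpy as np
from scipy.optimize import minimize

RL = 50.0                                  # load resistance in ohms

def load_power(x):
    """f(x) for x = [R1, R2, R3, Vs]. A hypothetical series-loop circuit
    stands in for a real circuit solver: VL = Vs * RL / (R1+R2+R3+RL)."""
    R1, R2, R3, Vs = x
    VL = Vs * RL / (R1 + R2 + R3 + RL)
    return VL ** 2 / RL

# -R_i <= 0 and 0 <= Vs <= 100 are expressed as simple bounds here;
# the equality constraint R1 - R2 - R3 = 0 uses SciPy's dict form.
bounds = [(0, None), (0, None), (0, None), (0, 100)]
constraints = [{"type": "eq", "fun": lambda x: x[0] - x[1] - x[2]}]

# Maximize P_L by minimizing -f(x) (see the Key Concept below)
res = minimize(lambda x: -load_power(x), x0=[100.0, 50.0, 50.0, 10.0],
               bounds=bounds, constraints=constraints)
print(res.x, -res.fun)
```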
Key Concept: Most optimization textbooks will only discuss minimizing (or maximizing)
an objective function (and not the other). The reason for this is that minimizing -f(x) is
equivalent to maximizing f(x). You can convince yourself that this is true. So if we are
only able to minimize and we want to maximize, just multiply your objective function by
-1.
The solution to an optimization problem:
- The solution to an optimization problem is often denoted x^*. Our goal is to find x^* that minimizes f(x), that is, f(x^*) \le f(x) for all feasible solutions x. A feasible solution is one that satisfies the constraints (if we have constraints).
- It is possible that there is some x that produces a smaller value of f(x) than f(x^*), but if this x does not satisfy the constraints it is not a (feasible) solution to our design problem and should be ignored.
25.3 One-Dimensional Unconstrained Optimization

While optimization with a bunch of design variables is really interesting, one-dimensional
optimization problems are also important. By a one-dimensional optimization problem we
mean an optimization problem where there is only a single design variable, that is, x = x
is a scalar. If the problem is unconstrained then there are no constraints. We will see a bit
more detail about one-dimensional unconstrained optimization next lecture. However, for
now it suffices to say that we want to:
minimize (or maximize) f(x)
where there are no constraints. One reason for looking at this problem first is that we already
know how to solve it!

25.3.1 Newton-Raphson

It shouldn't really be a surprise that the Newton-Raphson method can be used to find the
minimum/maximum of a function. Recall from root finding that:

x_{i+1} = x_i - \frac{f(x_i)}{f'(x_i)}

can be used to solve for the roots of the equation f(x) = 0. How can we use this for
optimization?
Key Concept: At an extremum (minimum/maximum) the derivative f'(x) is zero. There
is an exception: if the domain of optimization is constrained then it is possible that the
extreme value occurs at the boundary of the domain and that the derivative is non-zero.
As we are not considering constrained optimization we do not care about this case.

One-dimensional unconstrained optimization using the Newton-Raphson method:
Given that we want to minimize/maximize f(x) we set g(x) = f'(x) = 0. Roots of
g(x) can be found by the Newton-Raphson method:

x_{i+1} = x_i - \frac{g(x_i)}{g'(x_i)} = x_i - \frac{f'(x_i)}{f''(x_i)}

That's all there is to it! All of the cautions and pitfalls of the Newton-Raphson method
still hold true.
In addition we must be careful because the Newton-Raphson method will not discriminate between a maximum and a minimum (the derivative is zero in both cases). We
can use the second derivative to check if we have reached a minimum or maximum of f(x)
at x = x^*:

f''(x^*) < 0 \implies f(x^*) \text{ is a maximum}, \qquad f''(x^*) > 0 \implies f(x^*) \text{ is a minimum}

Example: Consider maximizing the function

f(x) = 2\sin(x) - \frac{x^2}{10}

starting from an initial guess of x_0 = 2.5. We can solve this problem using the Newton-Raphson method. We have:

f'(x) = 2\cos(x) - \frac{x}{5}

f''(x) = -2\sin(x) - \frac{1}{5}

such that:

x_{i+1} = x_i - \frac{2\cos(x_i) - x_i/5}{-2\sin(x_i) - 1/5}
Starting the iterations gives:

x_1 = 0.99508, \quad x_2 = 1.46901, \quad \text{etc.}

The true solution is x^* = 1.42755 where f(x^*) = 1.77573.
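A short Python sketch of this example (the implementation details are mine, not from the notes) reproduces the iterates and checks the second-derivative condition:

```python
import numpy as np

def newton_opt(df, d2f, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson on f'(x) = 0: x_{i+1} = x_i - f'(x_i)/f''(x_i)."""
    x = x0
    for _ in range(max_iter):
        step = df(x) / d2f(x)
        x -= step
        if abs(step) < tol:
            break
    return x

f   = lambda x: 2 * np.sin(x) - x**2 / 10
df  = lambda x: 2 * np.cos(x) - x / 5       # f'(x)
d2f = lambda x: -2 * np.sin(x) - 1 / 5      # f''(x)

xstar = newton_opt(df, d2f, x0=2.5)
print(xstar, f(xstar))      # ~1.42755  ~1.77573
print(d2f(xstar) < 0)       # True: negative second derivative -> maximum
```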


There are no recommended problems for hyperbolic PDEs.
Recommended Problems on Optimization: will be assigned next lecture.
