Lecture 25 of a numerical analysis course


Winter 2015

Ian Jeffrey

April 7, 2015

This lecture is based on and uses material from Numerical Methods for Engineers, 6th edition by S. Chapra and R. Canale, and covers:

A brief discussion of hyperbolic PDEs (not in the textbook).

An introduction to optimization (Part 4 - PT4).

The purpose of this lecture is to:

Briefly discuss approaches for solving hyperbolic PDEs using finite differences.

Introduce optimization from a conceptual perspective.

Important: These lecture notes are a work in progress and may contain errors and/or typos. Please do not distribute without the author's consent.

25.1 Hyperbolic PDEs

The model problem we consider is the one-dimensional wave equation:

∂²V/∂t² − c² ∂²V/∂x² = 0

The solution to this hyperbolic PDE is a function V(t, x) that 1) satisfies the partial differential equation and 2) satisfies auxiliary conditions required to specify the constants of integration.

Our interpretation of the independent variables is that t represents time and x represents one spatial dimension. Given that we wish to solve this PDE for all t > 0 and x ∈ [a, b], one set of possible auxiliary conditions is:

V(0, x) is given (initial conditions).

V(t, a) and V(t, b) are given (Dirichlet boundary conditions).

As we require both initial and boundary conditions to solve this PDE we refer to it as an initial boundary value problem (IBVP).

Example (Transmission Line): For an ideal lossless transmission line we have per-unit-length (PUL) parameters L (PUL inductance) and C (PUL capacitance). The voltage along the transmission line satisfies:

∂²V/∂t² − (1/LC) ∂²V/∂x² = 0

One solution method is to reduce the second-order PDE into a system of two first-order PDEs:

∂V/∂t + (1/C) ∂I/∂x = 0    (1)

∂I/∂t + (1/L) ∂V/∂x = 0    (2)

We can check that these two equations represent the original PDE by taking the derivative of (1) with respect to time:

∂²V/∂t² + (1/C) ∂²I/∂t∂x = 0    (3)

and the derivative of (2) with respect to space:

∂²I/∂x∂t + (1/L) ∂²V/∂x² = 0    (4)

Substituting for the second derivative of I with respect to space and time from (4) into (3) gives:

∂²V/∂t² − (1/LC) ∂²V/∂x² = 0

as required. Therefore solving (1) and (2) simultaneously is equivalent to solving the original wave equation problem. The value

c = 1/√(LC)

is the velocity of propagation along the line and I(t, x) represents the current.

In order to solve the coupled PDEs using a finite difference approach we could follow this procedure:

1. Discretize V and I by introducing points x_i within the domain [a, b].

2. At each point, approximate the spatial derivatives using some finite difference approximation. This will result in a system of linear equations that represents the spatial derivative operators.

3. Approximate the temporal (time) derivatives, i.e., numerically integrate the resulting system using a Runge-Kutta (RK) method.

The important thing to realize is that you end up having to carry a linear system of equations representing the spatial derivatives at each step of the RK method. And this is for a single spatial dimension! You can imagine the size of the system of equations when you are trying to solve a three-dimensional problem.
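The three-step procedure above can be sketched with the method of lines: central differences for the spatial derivatives in the telegrapher's equations (1) and (2), integrated in time with classical RK4. Every value here (L, C, the grid, the Gaussian pulse, the time step) is an illustrative assumption, not something specified in the notes.

```python
import numpy as np

# Method-of-lines sketch for the lossless telegrapher's equations
#   dV/dt = -(1/C) dI/dx,   dI/dt = -(1/L) dV/dx   on [a, b].
# All parameter values are made up for illustration.
Lp, Cp = 1.0, 1.0                  # per-unit-length inductance and capacitance
a, b, n = 0.0, 1.0, 201
x = np.linspace(a, b, n)
dx = x[1] - x[0]

def rhs(u):
    """Step 2: approximate the spatial derivatives with central differences."""
    V, I = u
    dV = -(1.0 / Cp) * np.gradient(I, dx)
    dI = -(1.0 / Lp) * np.gradient(V, dx)
    dV[0] = dV[-1] = 0.0           # Dirichlet BCs: hold V fixed at both ends
    return np.array([dV, dI])

def rk4_step(u, dt):
    """Step 3: one step of the classical 4th-order Runge-Kutta method."""
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    return u + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Initial conditions: a Gaussian voltage pulse, zero current.
u = np.array([np.exp(-200.0 * (x - 0.5) ** 2), np.zeros(n)])
dt = 0.5 * dx                      # time step tied to dx for stability
for _ in range(200):
    u = rk4_step(u, dt)
```

Note that every RK stage applies the full spatial-derivative operator to the whole state vector, which is exactly the cost discussed above.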

Key Concept: Be aware that you must be very careful how you set up your spatial derivative approximations and your time step when dealing with hyperbolic PDEs. Information flows in a particular direction depending on the current state of the solution, and this must be taken into account to ensure accuracy and stability. This was not mentioned in class and is beyond the scope of this course.

25.2 Optimization

Many design problems in engineering (and other disciplines) boil down to getting the best value out of a system. There are a variety of meanings behind the word best:

1. Sometimes best is very specific:

Example: Design a filter that only passes values in a specific frequency range.

Example: Design an MRI magnet that has a specific static magnetic field inside the magnet bore.

2. Sometimes best is just the best we can do:

Example: Design an engine to give the maximum fuel economy possible.

Example: Design a circuit that minimizes power consumption.

Example: Design a widget (some device) for doing a certain task for the least cost.

Most of the time, as illustrated by the previous examples, we must have constraints on the design. For example, an engine with maximum fuel economy should probably be able to move a car (not just some toy model) and should probably not cost an infinite amount of money to build. Another example is the design of a particular circuit, in which case resistor values, for example, must be positive. Still, in some cases there are no constraints:

Example: Solving a matrix equation Ax = b can be cast as an optimization problem:

minimize ||b − Ax|| (in some norm)

In this case the values of x can be anything, although on occasion it is beneficial to constrain them.

As an aside, many advanced methods for solving matrix equations are based on this type of optimization. Notice that this optimization problem is essentially asking us to find x that solves the matrix equation. It does not, however, ask us to invert or even systematically solve the system using, say, LU decomposition.
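A minimal sketch of this idea, assuming NumPy is available: np.linalg.lstsq minimizes ||b − Ax||₂ directly, without forming an inverse or running an LU factorization. The matrix and right-hand side are made up for illustration.

```python
import numpy as np

# Cast Ax = b as the optimization problem: minimize ||b - Ax||_2.
# A and b are made-up data for illustration.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true                      # consistent system, so the minimum is 0

# lstsq finds the minimizer without inverting A or using LU decomposition.
x_star, *_ = np.linalg.lstsq(A, b, rcond=None)
```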

25.2.1 A general optimization problem:

Find x that minimizes or maximizes f(x) subject to the following constraints:

d_i(x) ≤ a_i,  i = 1, 2, ..., m

e_i(x) = b_i,  i = 1, 2, ..., p

Some comments on the notation follow:

x is an array (vector) that contains the design variables. It contains as many design variables as are necessary to define the optimization problem.

f(x) is referred to as the objective function (sometimes the cost functional). It maps a vector of design variables to a single (real) number.

The equations d_i(x) ≤ a_i are referred to as inequality constraints. They say that some combination of the design variables is less than or equal to a_i. The simplest combination could be that some single design variable (element in x) is less than or equal to some value.

The equations e_i(x) = b_i are referred to as equality constraints and specify that some combination of the design variables is equal to b_i. Notice that if the constraint was as simple as one design variable being equal to b_i then we would remove this design variable from the optimization problem, as the equality constraint tells us that its value is known.

Given the nature of the function f(x) and the constraints we have different types of optimization problems:

If the objective function and the constraints are linear then we have a linear programming problem.

If the objective function is quadratic and the constraints are linear then we have a quadratic programming problem.

If the objective function and/or the constraints are nonlinear then we have a nonlinear programming problem (or a general nonlinear optimization problem).
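As a concrete (toy) instance of a linear programming problem in the general form above, assuming SciPy is available; the objective and constraint data are invented purely for illustration:

```python
from scipy.optimize import linprog

# Toy linear program:
#   minimize  f(x) = -x1 - 2*x2
#   subject to  x1 + x2 <= 4,  x1 <= 3,  x1 >= 0,  x2 >= 0.
c = [-1.0, -2.0]                       # linear objective coefficients
A_ub = [[1.0, 1.0], [1.0, 0.0]]        # rows of the d_i(x) <= a_i constraints
b_ub = [4.0, 3.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
# The optimum puts everything into x2: x* = (0, 4), f(x*) = -8.
```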

It is important to realize that x just holds all of the design variables you want to vary in order to produce an optimal design/solution. We illustrate by example:

[Circuit diagram omitted.]

Figure 25.1: Abstract circuit used to motivate an optimization problem. The goal is to maximize the power delivered to the load by changing the design variables R1, R2, R3 and Vs. RL = 50 is assumed to be a known quantity and is not included in the optimization variables.

Example (Optimization Problem): Consider the abstract circuit shown in Figure 25.1. The circuit may be a function of a number of variables and parameters, but we will limit ourselves to being able to change three resistor values R1, R2 and R3 and one voltage source Vs. Our goal is to maximize the power delivered to the load RL = 50.

Computing the power delivered to the load for a given circuit design requires choosing values for R1, R2, R3 and Vs, solving the circuit, extracting the load voltage VL and then computing PL = VL²/RL. We will let f(x) represent this whole operation, where

x = [R1, R2, R3, Vs]^T

where T denotes transposition. So our optimization problem is:

maximize PL = f(x)

If we want the resistor values to be positive we need the constraints:

R1 ≥ 0

R2 ≥ 0

R3 ≥ 0

Notice that these are not in the form specified by our general optimization problem with inequality constraints. We can rewrite these constraints as:

−R1 ≤ 0

−R2 ≤ 0

−R3 ≤ 0

where they are now in the correct form.


It may seem strange to impose that inequality constraints contain ≤. However, this makes it easier for us to handle only one type of condition, and any inequality constraint can be cast in this form, so it is not a limitation. Now, from a practical perspective we may also have constraints on the voltage:

Vs ≤ 100

−Vs ≤ 0

which tell us that the voltage Vs is both less than or equal to 100 and greater than or equal to zero. Of course, the specifics of the constraints are dependent on the limits you face as a design engineer. We are just picking these at random to illustrate.

Now suppose that we had some condition that imposed that R1 = R2 + R3. We won't ask why; we will simply show how an equality constraint can enforce this condition. We have:

R1 − R2 − R3 = 0

We have now completely specified the optimization problem. The objective function is properly defined, the design variables are known, and both the inequality and equality constraints have been written down.
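A sketch of how this fully specified problem could be handed to a numerical optimizer, assuming SciPy. The notes do not give the actual circuit equations, so load_power below uses a made-up voltage-divider-like relationship purely so the example runs; only the constraint setup mirrors the problem above, and the maximization is done by minimizing −f(x).

```python
import numpy as np
from scipy.optimize import minimize

RL = 50.0

def load_power(x):
    """Stand-in for f(x): 'solve the circuit' and return P_L = V_L^2 / R_L.
    The real f(x) needs a circuit solver; this voltage-divider-like formula
    is a made-up placeholder so the sketch is runnable."""
    R1, R2, R3, Vs = x
    VL = Vs * RL / (R1 + R2 + R3 + RL)
    return VL ** 2 / RL

res = minimize(
    lambda x: -load_power(x),                 # maximize P_L by minimizing -P_L
    x0=np.array([10.0, 5.0, 5.0, 50.0]),      # initial guess
    method="SLSQP",
    bounds=[(0, None)] * 3 + [(0.0, 100.0)],  # -R_i <= 0 and 0 <= Vs <= 100
    constraints=[{"type": "eq", "fun": lambda x: x[0] - x[1] - x[2]}],  # R1 = R2 + R3
)
```

For this placeholder objective the optimizer drives the resistors to zero and Vs to its 100-volt bound, while keeping R1 = R2 + R3 satisfied.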

We will not have time to illustrate how to perform optimization with constraints (inequality or equality) in our brief foray into optimization. The hope is that the exposure will help you the next time you pick up this subject.

Key Concept: Most optimization textbooks will only discuss minimizing (or maximizing) an objective function (and not the other). The reason for this is that minimizing f(x) is equivalent to maximizing −f(x). You can convince yourself that this is true. So if we are only able to minimize and we want to maximize, just multiply your objective function by −1.

The solution to an optimization problem:

The solution to an optimization problem is often denoted as x*. Our goal is to find x* that minimizes f(x), that is, f(x*) ≤ f(x) for all feasible solutions x. A feasible solution is one that satisfies the constraints (if we have constraints).

It is possible that there is some x that produces a smaller value of f(x) than f(x*), but if this x does not satisfy the constraints it is not a (feasible) solution to our design problem and should be ignored.

25.3 One-Dimensional Unconstrained Optimization

One-dimensional unconstrained optimization problems are also important. By a one-dimensional optimization problem we mean an optimization problem where there is only a single design variable, that is, x = x. If the problem is unconstrained then there are no constraints. We will see a bit more detail about one-dimensional unconstrained optimization next lecture. However, for now it suffices to say that we want to:

minimize (or maximize) f(x)

where there are no constraints. One reason for looking at this problem first is that we already know how to solve it!

25.3.1 Newton-Raphson

It shouldn't really be a surprise that the Newton-Raphson method can be used to find the minimum/maximum of a function. Recall from root finding that:

x_{i+1} = x_i − f(x_i)/f'(x_i)

can be used to solve for the roots of the equation f(x) = 0. How can we use this for optimization?

Key Concept: At an extremum (minimum/maximum) the derivative f'(x) is zero. There is an exception: if the domain of optimization is constrained then it is possible that the extreme value occurs at the boundary of the domain and that the derivative is non-zero. As we are not considering constrained optimization we do not care about this case.

This suggests the following method:

Given that we want to minimize/maximize f(x) we set g(x) = f'(x) = 0. Roots of g(x) can be found by the Newton-Raphson method:

x_{i+1} = x_i − g(x_i)/g'(x_i) = x_i − f'(x_i)/f''(x_i)

That's all there is to it! All of the cautions and pitfalls of the Newton-Raphson method still hold true.

In addition we must be careful because the Newton-Raphson method will not discriminate between a maximum and a minimum (the derivative is zero in both cases). We can use the second derivative to check whether we have reached a minimum or maximum of f(x) at x = x*:

f''(x*) < 0 ⇒ f(x*) is a maximum,

f''(x*) > 0 ⇒ f(x*) is a minimum

Example: Find the maximum of

f(x) = 2 sin(x) − x²/10

starting from an initial guess of x0 = 2.5. We can solve this problem using the Newton-Raphson method. We have:

f'(x) = 2 cos(x) − x/5

f''(x) = −2 sin(x) − 1/5

such that:

x_{i+1} = x_i − (2 cos(x_i) − x_i/5) / (−2 sin(x_i) − 1/5)

Starting the iterations gives:

x1 = 0.99508,  x2 = 1.46901,  etc.
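The iteration above is easy to reproduce in a few lines; the convergence check at the end is an addition, not part of the notes:

```python
import math

# Newton-Raphson on g(x) = f'(x) for f(x) = 2*sin(x) - x**2/10, with x0 = 2.5.
def fp(x):
    """f'(x) = 2*cos(x) - x/5"""
    return 2.0 * math.cos(x) - x / 5.0

def fpp(x):
    """f''(x) = -2*sin(x) - 1/5"""
    return -2.0 * math.sin(x) - 1.0 / 5.0

x = 2.5
iterates = []
for _ in range(8):
    x = x - fp(x) / fpp(x)
    iterates.append(x)

# iterates[0] ≈ 0.99508 and iterates[1] ≈ 1.46901, matching the notes.
# The sequence settles near x* ≈ 1.4276, where f''(x*) < 0, i.e. a maximum.
```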

There are no recommended problems for hyperbolic PDEs.

Recommended Problems on Optimization: Will be assigned next lecture.

