
2010

Numerical Methods
Presented To Dr. Essam Soliman

Presented By : Kareem Ahmed Badawi. No. : 32005100.


Higher Technological Institute
4/27/2010

Theoretical Introduction

Throughout this course we have repeatedly made use of the numerical differential equation
solver packages built into our computer algebra system. Back when we first made use of this
feature I promised that we would eventually discuss how these algorithms are actually
implemented by a computer. The current laboratory is where I make good on that promise.

Until relatively recently, solving differential equations numerically meant coding the method
into the computer yourself. Today there are numerous solvers available that can handle the
majority of classes of initial value problems with little user intervention other than entering the
actual problem. However, occasionally it still becomes necessary to do some customized
coding in order to attack a problem that the prewritten solvers can't quite handle.

This laboratory is intended to introduce you to the basic thinking processes underlying
numerical methods for solving initial value problems. Sadly, it probably won't turn you into an
expert programmer of numerical solver packages (unless some miracle occurs.)

The Nature of Numerical Solutions

From the point of view of a mathematician, the ideal form of the solution to an initial value
problem would be a formula for the solution function. After all, if this formula is known, it is
usually relatively easy to produce any other form of the solution you may desire, such as a
graphical solution, or a numerical solution in the form of a table of values. You might say that a
formulaic solution contains the recipes for these other types of solution within it.
Unfortunately, as we have seen in our studies already, obtaining a formulaic solution is not
always easy, and in many cases is absolutely impossible.

So we often have to "make do" with a numerical solution, i.e. a table of values consisting of
points which lie along the solution's curve. This can be a perfectly usable form of the answer in
many applied problems, but before we go too much further, let's make sure that we are aware
of the shortcomings of this form of solution.

By its very nature, a numerical solution to an initial value problem consists of a table of values
which is finite in length. On the other hand, the true solution of the initial value problem is
most likely a whole continuum of values, i.e. it consists of an infinite number of points.
Obviously, the numerical solution is actually leaving out an infinite number of points. The
question might arise, "With this many holes in it, is the solution good for anything at all?" To
make this comparison a little clearer, let's look at a very simple specific example.


Example

Say we were to solve the initial value problem:

y′ = 2x

y(0) = 0

It's so simple, you could find a formulaic solution in your head, namely y = x^2. On the other
hand, say we were to use a numerical technique. (Yes, I know we don't know how to do this
yet, but go with me on this for a second!) The resulting numerical solution would simply be a
table of values. To get a better feel for the nature of these two types of solution, let's compare
them side by side, along with the graphs we would get based on what we know about each
one:

Formulaic Solution:  y = x^2

Numerical Solution:

x   0.0    0.2    0.4    0.6    0.8    1.0    1.2    1.4
y   0.00   0.04   0.16   0.36   0.64   1.00   1.44   1.96

Notice that the graph derived from the formulaic solution is smoothly continuous, consisting of
an infinite number of points on the interval shown. On the other hand, the graph based on the
numerical solution consists of just a bare eight points, since the numerical method used
apparently only found the value of the solution for x-increments of size 0.2.

Using Numerical Solutions

So what good is the numerical solution if it leaves out so much of the real answer? Well, we can
respond to that question in several ways:

• The numerical solution still looks like it is capturing the general trend of the "real"
solution, as we can see when we look at the side-by-side graphs. This means that if we
are seeking a qualitative view of the solution, we can still get it from the numerical
solution, to some extent.


• The numerical solution could even be "improved" by playing "join-the-dots" with the set
of points it produces. In fact this is exactly what some solver packages, such as
Mathematica, do with these solutions. (Mathematica produces a join-the-dots
function that it calls InterpolatingFunction.)
• When actually using the solutions to differential equations, we often aren't so much
concerned about the nature of the solution at all possible points. Think about it! Even
when we are able to get formulaic solutions, a typical use we make of the formula is to
substitute values of the independent variable into the formula in order to find the
values of the solution at specific points. Did you hear that? Let me say it again: to find
the values of the solution at specific points. This is exactly what we can still do with a
numerical solution.

The Pitfalls of Numerical Solutions

One last word of warning, however, before we move on to actually finding numerical solutions.
In a problem where a numerical solution would really be necessary, i.e. one which we can't
solve by any other method, there is no formulaic solution for us to compare our answers with.
This means that there is always an element of doubt about the data we produce by using these
numerical techniques.

Say, for example, you obtained a set of numerical data as a solution, which led to the following
graph:

Any reasonable observer of this picture would, in the absence of any other evidence, assume
that the underlying solution had a graph that looks something like this:


In other words, you'd play join-the-dots visually. But how do you know that the actual
underlying solution doesn't look like this:

Notice that this graph fits the data points just as well as the first attempt we made at joining
the dots.

So how can you tell whether or not your data is leading you to the wrong conclusion? There are
ways, both qualitative and quantitative, that can be used to help in making this kind of decision.
A whole field of study called numerical analysis is dedicated to answering this sort of question.
Suffice it to say that, in reality, the kind of error I've just illustrated with the above pictures
never occurs. I deliberately used an "off-the-wall" example to get your attention.

In reality, the kind of errors we need to be careful of are much more subtle. What tends to
happen with numerical solutions is that the calculated points drift further and further away
from the actual solution as you move further and further from the point defined by the initial
condition. This can lead you to make assumptions about the actual solution which aren't true.

This difficulty can be overcome to some degree by calculating these points closer together. The
penalty for this, of course, is that more points must be found, so the computer has to spend
more time finding the solution. Most computer solvers try to strike a compromise between the
accuracy inherent in using more points, and the extra time required to calculate the additional
points. (A third issue involves machine round-off errors, but we won't even start to talk about
that here.)

Euler’s Method

This laboratory introduces you to a very simple technique for handling first order initial value
problems. Like so many other concepts in mathematics, it is named after Leonhard Euler (1707-
1783), perhaps the most prolific mathematician of all time. Before we can begin to describe
Euler's Method, we must first make sure that we understand the nature of these approximate
numerical solutions that his idea makes it possible for us to find.

Developing Euler's Method Graphically

In order to develop a technique for solving first order initial value problems numerically, we
should first agree upon some notation. We will assume that the problem in question can be
algebraically manipulated into the form:

y′ = f(x, y)

y(xo) = yo

Our goal is to find a numerical solution, i.e. we must find a set of points which lie along the
initial value problem's solution. If we look at this problem, as stated above, we should realize
that we actually already know one point on the solution, namely the one defined by the initial
condition, (xo, yo). A picture of this tiny piece of information might look something like
this:


Now, remember we don't really know the true solution of the problem, or we wouldn't be
going through this method at all. But let's act as if we do know this elusive solution for a
moment. Let's pretend that its "ghostly graph" could be superimposed onto our previous
picture to get this:

Again, the blue graph of the true solution, shown above, is actually unknown. We've drawn a
picture of what it might look like just to help us think.

Since we're after a set of points which lie along the true solution, as stated above, we must now
derive a way of generating more solution points in addition to the solitary initial condition point
shown in red in the picture. How could we get more points?

Well, look back at the original initial value problem at the top of the page! So far we have only
used the initial condition, which gave us our single point. Maybe we should consider the
possibility of utilizing the other part of the initial value problem—the differential equation
itself:

y′ = f(x, y)

Remember that one interpretation of the quantity y′ appearing in this expression is as the slope
of the tangent line to the function y. But, the function y is exactly what we are seeking as a
solution to the problem. This means that we not only know a point which lies on our elusive
solution, but we also know a formula for its slope:


slope of the solution = f(x, y)

All we have to do now is think of a way of using this slope to get those "other points" that we've
been after! Well, look at the right hand side of the last formula. It looks like you can get the
slope by substituting values for x and y into the function f. These values should, of course, be
the coordinates of a point lying on the solution's graph—they can't just be the coordinates of
any point anywhere in the plane. Do we know of any such points—points lying on the solution
curve? Of course we do! The initial condition point that we already sketched is exactly such a
point! We could use it to find the slope of the solution at the initial condition. We would get:

slope of the solution at (xo, yo) = f(xo, yo)

Remembering that this gives us the slope of the function's tangent line at the initial point we
could put this together with the initial point itself to build the tangent line at the initial point,
like this:

Once again, let's remind ourselves of our goal of finding more points which lie on the true
solution's curve. Using what we know about the initial condition, we've built a tangent line at
the initial condition's point. Look again at the picture of this line in comparison with the graph
of the true solution in the picture above. If we want other points along the path of the
true solution, and yet we don't actually have the true solution, then it looks like using the
tangent line as an approximation might be our best bet! After all, at least on this picture, it
looks like the line stays pretty close to the curve if you don't move too far away from the initial
point.

Let's say we move a short distance away, to a new x-coordinate of x1. Then we could locate the
corresponding point lying on our tangent line. (We can't do this for the curve—it's a ghost,
remember!) It might look something like this:


Notice that our new point, which I've called (x1, y1), isn't too terribly far away from the true
value of the solution at this x-coordinate, up on the curve.

So we now have two points as part of our numerical solution:

• (xo, yo): an exact value, known to lie on the solution curve.


• (x1, y1): an approximate value, known to lie on the solution curve's tangent line through
(xo, yo).

We must now attempt to continue our quest for points on the solution curve (though we're
starting to see the word "on" as a little optimistic—perhaps "near" would be a more realistic
word here.) Still glowing from our former success, we'll dive right in and repeat our last trick,
constructing a tangent line at our new point, like this:

You should immediately recognize that there's a problem, and I've cheated to overcome it!
Since our new point didn't actually lie on the true solution, we can't actually produce a tangent
line to the solution at this point. (We don't even know where the true solution actually is
anymore—the blue curve in the picture is just a thinking aid.) But we can still substitute our
new point, (x1,y1), into the formula:

slope of the solution = f(x,y)


to get the slope of a pseudo-tangent line to the curve at (x1,y1). We hope that our
approximate point, (x1,y1), is close enough to the real solution that the pseudo-tangent line is
pretty close to the unknown real tangent line.

We now attempt to use this new pseudo-tangent line to get yet another point in the
approximate solution. As before, we move a short distance away from our last point, to a new
x-coordinate of x2. Then we locate the corresponding point lying on our pseudo-tangent line.
The result might look something like this:

We now have three points in our approximate solution:

• (xo, yo): an exact value, known to lie on the solution curve.


• (x1, y1): an approximate value, known to lie on the solution curve's tangent line through
(xo, yo).
• (x2, y2): an approximate value, known to lie on the solution curve's pseudo-tangent line
through (x1, y1).

As you can see, we're beginning to establish a pattern in the way we are generating new points.
We could continue making new points like this for as long as we liked, but for the sake of this
illustration let's find just one more value in the approximate solution.

We make another pseudo-tangent line, this time through (x2, y2), like this:


and we make another short jump to an x-coordinate of x3, and locate the corresponding point
on our latest pseudo-tangent line, like this:

So the list of points in our approximate numerical solution now has four members:

• (xo, yo): an exact value, known to lie on the solution curve.


• (x1, y1): an approximate value, known to lie on the solution curve's tangent line through
(xo, yo).
• (x2, y2): an approximate value, known to lie on the solution curve's pseudo-tangent line
through (x1, y1).
• (x3, y3): an approximate value, known to lie on the solution curve's pseudo-tangent line
through (x2, y2).

Looking over the picture one last time, we see an example of how the numerical solution (the
red dots) might compare with the actual solution. As we stated in the introduction to this
laboratory, a weakness of numerical solutions is their tendency to drift away from the true
solution as the points get further away from the initial condition point. One way of minimizing
(but not eliminating) this problem is to make sure that the jump-size between consecutive
points is relatively small.

Deriving the Euler's Method Formulas

Reminder: We're solving the initial value problem:

y′ = f(x, y)

y(xo) = yo

As we just saw in the graphical description of the method, the basic idea is to use a known point
as a "starter," and then use the tangent line through this known point to jump to a new point.
Rather than focus on a particular point in the sequence of points we're going to generate, let's
be generic. Let's use the names:

• (xn, yn) for the known point


• (xn+1, yn+1) for the new point

Our picture, based on previous experience, should look something like this:

(Though the proximity of the true solution to the point (xn, yn) is, perhaps, a little optimistic.)

Our task here is to find formulas for the coordinates of the new point, the one on the right.
Clearly it lies on the tangent line, and this tangent line has a known slope, namely f(xn, yn). Let's
mark on our picture names for the sizes of the x-jump, and the y-jump as we move from the
known point, (xn, yn), to the new point. Let's also write in the slope of the tangent line that we
just mentioned. Doing so, we get:

The formula relating xn and xn+1 is obvious:

xn+1 = xn + h

Also, we know from basic algebra that slope = rise / run, so applying this idea to the triangle in
our picture, the formula becomes:

f(xn, yn) = Δy / h

which can be rearranged to solve for Δy giving us:

Δy = h f(xn, yn)


But, we're really after a formula for yn+1. Looking at the picture, it's obvious that:

yn+1 = yn + Δy

And, replacing Δy by our new formula, this becomes:

yn+1 = yn + h f(xn, yn)

Summary of Euler's Method

In order to use Euler's Method to generate a numerical solution to an initial value problem of
the form:

y′ = f(x, y)

y(xo) = yo

we decide upon what interval, starting at the initial condition, we desire to find the solution.
We chop this interval into small subdivisions of length h. Then, using the initial condition as our
starting point, we generate the rest of the solution by using the iterative formulas:

xn+1 = xn + h

yn+1 = yn + h f(xn, yn)

to find the coordinates of the points in our numerical solution. We terminate this process when
we have reached the right end of the desired interval.
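
To make this iteration concrete, here is a minimal Python sketch of the summary above; the function name euler and its arguments are illustrative choices, not part of any prewritten solver package:

def euler(f, x0, y0, h, n_steps):
    """Euler's method for y' = f(x, y), y(x0) = y0; returns the solution points."""
    xs, ys = [x0], [y0]
    x, y = x0, y0
    for _ in range(n_steps):
        y = y + h * f(x, y)   # y_{n+1} = y_n + h f(x_n, y_n)
        x = x + h             # x_{n+1} = x_n + h
        xs.append(x)
        ys.append(y)
    return xs, ys

Called as euler(lambda x, y: 2*x, 0.0, 0.0, 0.2, 7), for example, it produces an eight-point approximate table for the y' = 2x problem from the first example (approximations, not the exact x^2 values).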

A Preliminary Example

Just to get a feel for the method in action, let's work a preliminary example completely by hand.
Say you were asked to solve the initial value problem:

y′ = x + 2y

y(0) = 0

numerically, finding a value for the solution at x = 1, and using steps of size h = 0.25.


Applying the Method

Clearly, the description of the problem implies that the interval we'll be finding a solution on is
[0,1]. The differential equation given tells us the formula for f(x, y) required by the Euler
Method, namely:

f(x, y) = x + 2y

and the initial condition tells us the values of the coordinates of our starting point:

• xo = 0
• yo = 0

We now use the Euler method formulas to generate values for x1 and y1.

The x-iteration formula, with n = 0 gives us:

x1 = xo + h

or:

x1 = 0 + 0.25

So:

x1 = 0.25

And the y-iteration formula, with n = 0 gives us:

y1 = yo + h f(xo, yo)

or:

y1 = yo + h (xo + 2yo)

or:

y1 = 0 + 0.25 (0 + 2*0)

So:

y1 = 0


Summarizing, the second point in our numerical solution is:

• x1 = 0.25
• y1 = 0

We now move on to get the next point in the solution, (x2, y2).

The x-iteration formula, with n=1 gives us:

x2 = x1 + h

or:

x2 = 0.25 + 0.25

So:

x2 = 0.5

And the y-iteration formula, with n = 1 gives us:

y2 = y1 + h f(x1, y1)

or:

y2 = y1 + h (x1 + 2y1)

or:

y2 = 0 + 0.25 (0.25 + 2*0)

So:

y2 = 0.0625

Summarizing, the third point in our numerical solution is:

• x2 = 0.5
• y2 = 0.0625

We now move on to get the fourth point in the solution, (x3, y3).

The x-iteration formula, with n = 2 gives us:

x3 = x2 + h

or:

x3 = 0.5 + 0.25

So:

x3 = 0.75

And the y-iteration formula, with n = 2 gives us:

y3 = y2 + h f(x2, y2)

or:

y3 = y2 + h (x2 + 2y2)

or:

y3 = 0.0625 + 0.25 (0.5 + 2*0.0625)

So:

y3 = 0.21875

Summarizing, the fourth point in our numerical solution is:

• x3 = 0.75
• y3 = 0.21875

We now move on to get the fifth point in the solution, (x4, y4).

The x-iteration formula, with n = 3 gives us:

x4 = x3 + h

or:

x4 = 0.75 + 0.25


So:

x4 = 1

And the y-iteration formula, with n = 3 gives us:

y4 = y3 + h f(x3, y3)

or:

y4 = y3 + h (x3 + 2y3)

or:

y4 = 0.21875 + 0.25 (0.75 + 2*0.21875)

So:

y4 = 0.515625

Summarizing, the fifth point in our numerical solution is:

• x4 = 1
• y4 = 0.515625

We could summarize the results of all of our calculations in a tabular form, as follows:

n xn yn
0 0.00 0.000000
1 0.25 0.000000
2 0.50 0.062500
3 0.75 0.218750
4 1.00 0.515625

A question you should always ask yourself at this point of using a numerical method to solve a
problem, is "How accurate is my solution?" Sadly, the answer is "Not very!" This problem can
actually be solved without resorting to numerical methods (it's linear). The true solution turns
out to be:


y = 0.25 e^(2x) - 0.5x - 0.25

If we use this formula to generate a table similar to the one above, we can see just how poorly
our numerical solution did:

x      y
0.00   0.000000
0.25   0.037180
0.50   0.179570
0.75   0.495422
1.00   1.097264
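
The hand calculation and this comparison are easy to reproduce with a short Python sketch; the function names f and exact below are illustrative:

import math

def f(x, y):
    return x + 2 * y

def exact(x):
    return 0.25 * math.exp(2 * x) - 0.5 * x - 0.25

h, x, y = 0.25, 0.0, 0.0
print("   x      Euler      exact")
for n in range(5):
    print(f"{x:5.2f} {y:10.6f} {exact(x):10.6f}")
    y = y + h * f(x, y)   # Euler update
    x = x + h

Running this prints the five Euler values from the first table next to the corresponding exact values.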

We can get an even better feel for the inaccuracy we have incurred if we compare the graphs of
the numerical and true solutions, as shown here:

The numerical solution gets worse and worse as we move further to the right. We might even
be prompted to ask the question "What good is a solution that is this bad?" The answer is "Very
little good at all!" So should we quit using this method? No! The reason our numerical solution
is so inaccurate is because our step-size is so large. To improve the solution, shrink the step-
size!

By the way, the reason I used such a large step size when we went through this problem is
because we were working it by hand. When we move on to using the computer to do the work,
we needn't be so afraid of using tiny step-sizes.

To illustrate that Euler's Method isn't always this terribly bad, look at the following picture,
made for exactly the same problem, only using a step size of h = 0.02:


As you can see, the accuracy of this numerical solution is much higher than before, but so is the
amount of work needed! Look at all those red points! Can you imagine calculating the
coordinates of each of them by hand?

Runge-Kutta Method

What is the Runge-Kutta 2nd order method?


The Runge-Kutta 2nd order method is a numerical technique used to solve an ordinary
differential equation of the form

dy/dx = f(x, y),    y(0) = y0

Only first order ordinary differential equations can be solved by using the Runge-Kutta 2nd
order method. In other sections, we will discuss how the Euler and Runge-Kutta methods are
used to solve higher order ordinary differential equations or coupled (simultaneous) differential
equations.

How does one write a first order differential equation in the above form?


Example

Rewrite

dy/dx + 2y = 1.3 e^(-x),    y(0) = 5

in the form dy/dx = f(x, y), y(0) = y0.

Solution

dy/dx + 2y = 1.3 e^(-x),    y(0) = 5

dy/dx = 1.3 e^(-x) - 2y,    y(0) = 5

In this case

f(x, y) = 1.3 e^(-x) - 2y

Example
Rewrite

e^y dy/dx + x^2 y^2 = 2 sin(3x),    y(0) = 5

in the form dy/dx = f(x, y), y(0) = y0.

Solution

e^y dy/dx + x^2 y^2 = 2 sin(3x),    y(0) = 5

dy/dx = [2 sin(3x) - x^2 y^2] / e^y,    y(0) = 5

In this case

f(x, y) = [2 sin(3x) - x^2 y^2] / e^y

Runge-Kutta 2nd order method


Euler's method is given by

yi+1 = yi + f(xi, yi) h    (1)

where

x0 = 0

y0 = y(x0)

h = xi+1 - xi

To understand the Runge-Kutta 2nd order method, we need to derive Euler’s method from the
Taylor series.

yi+1 = yi + dy/dx|(xi, yi) (xi+1 - xi) + (1/2!) d2y/dx2|(xi, yi) (xi+1 - xi)^2 + (1/3!) d3y/dx3|(xi, yi) (xi+1 - xi)^3 + ...

     = yi + f(xi, yi)(xi+1 - xi) + (1/2!) f'(xi, yi)(xi+1 - xi)^2 + (1/3!) f''(xi, yi)(xi+1 - xi)^3 + ...    (2)

As you can see, the first two terms of the Taylor series

yi+1 = yi + f(xi, yi) h

are Euler's method and hence can be considered to be the Runge-Kutta 1st order method.

The true error in the approximation is given by

Et = [f'(xi, yi)/2!] h^2 + [f''(xi, yi)/3!] h^3 + ...    (3)


So what would a 2nd order method formula look like? It would include one more term of the
Taylor series as follows.

yi+1 = yi + f(xi, yi) h + (1/2!) f'(xi, yi) h^2    (4)

Let us take a generic example of a first order ordinary differential equation

dy/dx = e^(-2x) - 3y,    y(0) = 5

f(x, y) = e^(-2x) - 3y

Now since y is a function of x,

f'(x, y) = ∂f/∂x + (∂f/∂y)(dy/dx)    (5)

        = ∂/∂x [e^(-2x) - 3y] + ∂/∂y [e^(-2x) - 3y] (e^(-2x) - 3y)

        = -2e^(-2x) + (-3)(e^(-2x) - 3y)

        = -5e^(-2x) + 9y

The 2nd order formula for the above example would be

yi+1 = yi + f(xi, yi) h + (1/2!) f'(xi, yi) h^2

     = yi + [e^(-2xi) - 3yi] h + (1/2!) [-5e^(-2xi) + 9yi] h^2
However, we already see the difficulty of having to find f'(x, y) in the above method. What
Runge and Kutta did was write the 2nd order method as

yi+1 = yi + (a1 k1 + a2 k2) h    (6)

where

k1 = f(xi, yi)

k2 = f(xi + p1 h, yi + q11 k1 h)    (7)


This form allows one to take advantage of the 2nd order method without having to calculate
f'(x, y).

So how do we find the unknowns a1, a2, p1 and q11? Without proof (see Appendix for
proof), equating Equations (4) and (6) gives three equations:

a1 + a2 = 1

a2 p1 = 1/2

a2 q11 = 1/2

Since we have 3 equations and 4 unknowns, we can assume the value of one of the unknowns.
The other three will then be determined from the three equations. Generally the value of a2 is
chosen to evaluate the other three constants. The three values generally used for a2 are 1/2, 1
and 2/3, and are known as Heun's method, the midpoint method and Ralston's method,
respectively.
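
Before looking at each choice separately, here is a minimal Python sketch of one step of this general 2nd order family; the function name rk2_step and the default a2 = 0.5 are illustrative choices, not part of the original derivation:

def rk2_step(f, x, y, h, a2=0.5):
    """One step of the generic Runge-Kutta 2nd order method.
    a2 = 1/2, 1 and 2/3 give Heun's, the midpoint and Ralston's method."""
    a1 = 1.0 - a2
    p1 = q11 = 1.0 / (2.0 * a2)
    k1 = f(x, y)
    k2 = f(x + p1 * h, y + q11 * k1 * h)
    return y + (a1 * k1 + a2 * k2) * h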

Heun's Method

Here a2 = 1/2 is chosen, giving

a1 = 1/2

p1 = 1

q11 = 1

resulting in

yi+1 = yi + (1/2 k1 + 1/2 k2) h    (8)

where

k1 = f(xi, yi)    (9a)

k2 = f(xi + h, yi + k1 h)    (9b)


This method is graphically explained in Figure 1.

Figure 1: Runge-Kutta 2nd order method (Heun's method). The slope at the current point,
f(xi, yi), and the slope at the predicted end point, f(xi + h, yi + k1 h), are averaged; the average
slope (1/2)[f(xi, yi) + f(xi + h, yi + k1 h)] is used to step from yi to yi+1.

Midpoint Method

Here a2 = 1 is chosen, giving

a1 = 0

p1 = 1/2

q11 = 1/2

resulting in

yi+1 = yi + k2 h    (10)

where

k1 = f(xi, yi)    (11a)

k2 = f(xi + (1/2) h, yi + (1/2) k1 h)    (11b)

Example

A ball at 1200 K is allowed to cool down in air at an ambient temperature of 300 K. Assuming
heat is lost only due to radiation, the differential equation for the temperature of the ball is
given by

dθ/dt = -2.2067 × 10^-12 (θ^4 - 81 × 10^8)

where θ is in K and t in seconds. Find the temperature at t = 480 seconds using the Runge-Kutta
2nd order method. Assume a step size of h = 240 seconds.

Solution

dθ/dt = -2.2067 × 10^-12 (θ^4 - 81 × 10^8)

f(t, θ) = -2.2067 × 10^-12 (θ^4 - 81 × 10^8)

Per Heun's method given by Equations (8) and (9),

θi+1 = θi + (1/2 k1 + 1/2 k2) h

k1 = f(ti, θi)

k2 = f(ti + h, θi + k1 h)

For i = 0:  t0 = 0, θ0 = θ(0) = 1200

k1 = f(t0, θ0)
   = f(0, 1200)
   = -2.2067 × 10^-12 (1200^4 - 81 × 10^8)
   = -4.5579

k2 = f(t0 + h, θ0 + k1 h)
   = f(0 + 240, 1200 + (-4.5579)(240))
   = f(240, 106.09)
   = -2.2067 × 10^-12 (106.09^4 - 81 × 10^8)
   = 0.017595

1 1 
 1   0   k1  k 2  h
2 2 

1 
 1200    4.5579  0.017595240
1
2 2 

 1200   2.2702240

 655.16 K

For i = 1:  t1 = t0 + h = 0 + 240 = 240, θ1 = 655.16 K

k1 = f(t1, θ1)
   = f(240, 655.16)
   = -2.2067 × 10^-12 (655.16^4 - 81 × 10^8)
   = -0.38869

k2 = f(t1 + h, θ1 + k1 h)
   = f(240 + 240, 655.16 + (-0.38869)(240))
   = f(480, 561.87)
   = -2.2067 × 10^-12 (561.87^4 - 81 × 10^8)
   = -0.20206

θ2 = θ1 + (1/2 k1 + 1/2 k2) h
   = 655.16 + [1/2 (-0.38869) + 1/2 (-0.20206)] (240)
   = 655.16 + (-0.29538)(240)
   = 584.27 K

θ2 = θ(480) ≈ 584.27 K
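
The two steps above can be checked with a short Python sketch of Heun's method for this problem; the variable names are illustrative:

def f(t, theta):
    # dθ/dt = -2.2067e-12 (θ^4 - 81e8)
    return -2.2067e-12 * (theta**4 - 81.0e8)

h, t, theta = 240.0, 0.0, 1200.0
for step in range(2):                 # two steps of 240 s reach t = 480 s
    k1 = f(t, theta)
    k2 = f(t + h, theta + k1 * h)
    theta = theta + 0.5 * (k1 + k2) * h
    t = t + h
    print(t, theta)                   # roughly 655.16 K at 240 s, 584.27 K at 480 s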

The results from Heun’s method are compared with exact results in Figure 2.

The exact solution of the ordinary differential equation is given by the solution of a non-linear
equation as

0.92593 ln[(θ - 300)/(θ + 300)] - 1.8519 tan^-1(0.0033333 θ) = -0.22067 × 10^-3 t - 2.9282

The solution to this nonlinear equation at t = 480 s is

θ(480) = 647.57 K

Figure 2: Heun's method results for different step sizes (h = 120, 240 and 480 s) compared with
the exact solution; temperature θ (K) versus time t (s).

Using a smaller step size would increase the accuracy of the result as given in Table 1 and Figure
3 below.

Table 1: Effect of step size for Heun's method

Step size, h    θ(480)     Et         |εt| %
480             -393.87    1041.4     160.82
240             584.27     63.304     9.7756
120             651.35     -3.7762    0.58313
60              649.91     -2.3406    0.36145
30              648.21     -0.63219   0.097625

Figure 3: Effect of step size in Heun's method; θ(480) versus step size h.

In Table 2, Euler's method and the Runge-Kutta 2nd order method results are shown as a
function of step size.

Table 2: Comparison of Euler and the Runge-Kutta 2nd order methods, θ(480)

Step size, h    Euler      Heun       Midpoint    Ralston
480             -987.84    -393.87    1208.4      449.78
240             110.32     584.27     976.87      690.01
120             546.77     651.35     690.20      667.71
60              614.97     649.91     654.85      652.25
30              632.77     648.21     649.02      648.61

In the figure below, the comparison is shown over the range of time.

Figure: Comparison of Euler and the Runge-Kutta 2nd order methods (Heun, Midpoint, Ralston)
with the analytical solution over time; temperature θ (K) versus time t (s).

How do these three methods compare with results obtained if we found f'(x, y) directly?

Of course, we know that since we are including the first three terms in the series, if the solution
is a polynomial of order two or less (that is, quadratic, linear or constant), any of the three
methods is exact. But for any other case the results will be different.


Let us take the example of

dy/dx = e^(-2x) - 3y,    y(0) = 5.

If we directly find f'(x, y), the first three terms of the Taylor series give

yi+1 = yi + f(xi, yi) h + (1/2!) f'(xi, yi) h^2

where

f(x, y) = e^(-2x) - 3y

f'(x, y) = -5e^(-2x) + 9y

For a step size of h = 0.2, this direct 2nd order Taylor method gives

y(0.6) ≈ 1.0930

The exact solution

y(x) = e^(-2x) + 4e^(-3x)

gives

y(0.6) = e^(-2(0.6)) + 4e^(-3(0.6))
       = 0.96239

Then the absolute relative true error is

|εt| = |(0.96239 - 1.0930) / 0.96239| × 100
     = 13.571%

For the same problem, the results from Euler's method and the three Runge-Kutta methods are
given in the table below.

Comparison of Euler's and Runge-Kutta 2nd order methods, y(0.6)

          Exact      Euler      Direct 2nd    Heun       Midpoint    Ralston
Value     0.96239    0.4955     1.0930        1.1012     1.0974      1.0994
|εt| %    -          48.514     13.571        14.423     14.029      14.236


Runge-Kutta 4th Order Method

Theoretical Introduction

In the last lab you learned to use Heun's Method to generate a numerical solution to an initial
value problem of the form:

y′ = f(x, y)
y(xo) = yo

We discussed the fact that Heun's Method was an improvement over the rather simple Euler
Method, and that though it uses Euler's method as a basis, it goes beyond it, attempting to
compensate for the Euler Method's failure to take the curvature of the solution curve into
account. Heun's Method is one of the simplest of a class of methods called predictor-corrector
algorithms. In this lab we will address one of the most powerful predictor-corrector algorithms
of all—one which is so accurate, that most computer packages designed to find numerical
solutions for differential equations will use it by default—the fourth order Runge-Kutta
Method. (For simplicity of language we will refer to the method as simply the Runge-Kutta
Method in this lab, but you should be aware that Runge-Kutta methods are actually a general
class of algorithms, the fourth order method being the most popular.)

The Runge-Kutta algorithm may be very crudely described as "Heun's Method on steroids." It
takes to extremes the idea of correcting the predicted value of the next solution point in the
numerical solution. (It should be noted here that the actual, formal derivation of the Runge-
Kutta Method will not be covered in this course. The calculations involved are complicated, and
rightly belong in a more advanced course in differential equations, or numerical methods.)
Without further ado, using the same notation as in the previous two labs, here is a summary of
the method:

xn+1 = xn + h

yn+1 = yn + (1/6)(k1 + 2k2 + 2k3 + k4)

where

k1 = h f(xn, yn)

k2 = h f(xn + h/2, yn + k1/2)

k3 = h f(xn + h/2, yn + k2/2)

k4 = h f(xn + h, yn + k3)
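
As a minimal sketch, one step of this algorithm can be written in Python as follows; the function name rk4_step is an illustrative choice:

def rk4_step(f, x, y, h):
    """One step of the classical 4th order Runge-Kutta method."""
    k1 = h * f(x, y)
    k2 = h * f(x + h / 2, y + k1 / 2)
    k3 = h * f(x + h / 2, y + k2 / 2)
    k4 = h * f(x + h, y + k3)
    return x + h, y + (k1 + 2 * k2 + 2 * k3 + k4) / 6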


Even though we do not plan on deriving these formulas formally, it is valuable to understand
the geometric reasoning that supports them. Let's briefly discuss the components in the
algorithm above.

First we note that, just as with the previous two methods, the Runge-Kutta method iterates the
x-values by simply adding a fixed step-size of h at each iteration.

The y-iteration formula is far more interesting. It is a weighted average of four values—k1, k2,
k3, and k4. Visualize distributing the factor of 1/6 from the front of the sum. Doing this we see
that k1 and k4 are given a weight of 1/6 in the weighted average, whereas k2 and k3 are
weighted 1/3, or twice as heavily as k1 and k4. (As usual with a weighted average, the sum of
the weights 1/6, 1/3, 1/3 and 1/6 is 1.) So what are these ki values that are being used in the
weighted average?

k1 you may recognize, as we've used this same quantity on both of the previous labs. This
quantity, h f(xn, yn), is simply Euler's prediction for what we've previously called Δy—the vertical
jump from the current point to the next Euler-predicted point along the numerical solution.

k2 we have never seen before. Notice the x-value at which it is evaluating the function f.
xn + h/2 lies halfway across the prediction interval. What about the y-value that is coupled with
it? yn + k1/2 is the current y-value plus half of the Euler-predicted Δy that we just discussed as
being the meaning of k1. So this too is a halfway value, this time vertically halfway up from the
current point to the Euler-predicted next point. To summarize, then, the function f is being
evaluated at a point that lies halfway between the current point and the Euler-predicted next
point. Recalling that the function f gives us the slope of the solution curve, we can see that
evaluating it at the halfway point just described, i.e. f(xn + h/2, yn + k1/2), gives us an estimate of
the slope of the solution curve at this halfway point. Multiplying this slope by h, just as with the
Euler Method before, produces a prediction of the y-jump made by the actual solution across
the whole width of the interval, only this time the predicted jump is not based on the slope of
the solution at the left end of the interval, but on the estimated slope halfway to the Euler-
predicted next point. Whew! Maybe that could use a second reading for it to sink in!

k3 has a formula which is quite similar to that of k2, except that where k1 used to be, there is
now a k2. Essentially, the f-value here is yet another estimate of the slope of the solution at the
"midpoint" of the prediction interval. This time, however, the y-value of the midpoint is not
based on Euler's prediction, but on the y-jump predicted already with k2. Once again, this slope-
estimate is multiplied by h, giving us yet another estimate of the y-jump made by the actual
solution across the whole width of the interval.

k4 evaluates f at xn + h, which is at the extreme right of the prediction interval. The y-value
coupled with this, yn + k3, is an estimate of the y-value at the right end of the interval, based on
the y-jump just predicted by k3. The f-value thus found is once again multiplied by h, just as
with the three previous ki, giving us a final estimate of the y-jump made by the actual solution
across the whole width of the interval.

In summary, then, each of the ki gives us an estimate of the size of the y-jump made by the
actual solution across the whole width of the interval. The first one uses Euler's Method, the
next two use estimates of the slope of the solution at the midpoint, and the last one uses an
estimate of the slope at the right end-point. Each ki uses the earlier ki as a basis for its
prediction of the y-jump.

This means that the Runge-Kutta formula for yn+1, namely:

yn+1 = yn + (1/6)(k1 + 2k2 + 2k3 + k4)

is simply the y-value of the current point plus a weighted average of four different y-jump
estimates for the interval, with the estimates based on the slope at the midpoint being
weighted twice as heavily as those using the slope at the end-points.

As we have just seen, the Runge-Kutta algorithm is a little hard to follow even when one only
considers it from a geometric point of view. In reality the formula was not originally derived in
this fashion, but with a purely analytical approach. After all, among other things, our geometric
"explanation" doesn't even account for the weights that were used. If you're feeling ambitious,
a little research through a decent mathematics library should yield a detailed analysis of the
derivation of the method.

The Multistep Method

Linear multistep methods are used for the numerical solution of ordinary differential
equations. Conceptually, a numerical method starts from an initial point and then takes a short
step forward in time to find the next solution point. The process continues with subsequent
steps to map out the solution. Single-step methods (such as Euler's method) refer to only one
previous point and its derivative to determine the current value. Methods such as Runge-Kutta
take some intermediate steps (for example, a half-step) to obtain a higher order method, but
then discard all previous information before taking a second step. Multistep methods attempt
to gain efficiency by keeping and using the information from previous steps rather than
discarding it. Consequently, multistep methods refer to several previous points and derivative
values.

Definitions

Numerical methods for ordinary differential equations approximate solutions to initial value
problems of the form

y' = f(t, y),    y(t0) = y0

The result is approximations for the value of y(t) at discrete times ti:


ti = t0 + ih

yi ≈ y(ti) = y(t0 + ih)

fi = f(ti, yi)

where h is the time step (sometimes referred to as Δt).

A linear multistep method uses a linear combination of yi and yi' to calculate the value of y for
the desired current step.

A multistep method uses the previous s steps to calculate the next value. Consequently, the
desired value at the current processing stage is yn+s.

A linear multistep method is a method of the form

yn+s + as-1 yn+s-1 + ... + a0 yn = h (bs fn+s + bs-1 fn+s-1 + ... + b0 fn)

where h denotes the step size and f the right-hand side of the differential equation. The
coefficients a0, ..., as-1 and b0, ..., bs determine the method. The designer of the method
chooses the coefficients; often, many coefficients are zero. Typically, the designer chooses the
coefficients so they will exactly interpolate y(t) when it is an nth order polynomial.

If the value of bs is nonzero, then the value of yn+s depends on the value of f(tn+s, yn+s).
Consequently, the method is explicit if bs = 0. In that case, the formula can directly compute
yn+s. If bs ≠ 0, then the method is implicit and the equation for yn+s must be solved. Iterative
methods such as Newton's method are often used to solve the implicit formula.

Sometimes an explicit multistep method is used to "predict" the value of yn + s. That value is
then used in an implicit formula to "correct" the value. The result is a Predictor-corrector
method.
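
As an illustration of that idea, here is a minimal Python sketch of one possible pairing, a two-step Adams-Bashforth predictor followed by a single trapezoidal (Adams-Moulton) correction; the function name pc_step and this particular pairing are illustrative assumptions, not the only choice:

def pc_step(f, t, y_prev, y_curr, h):
    """Advance from y_curr at time t, using y_prev at time t - h as history."""
    f_prev = f(t - h, y_prev)
    f_curr = f(t, y_curr)
    # Predictor: explicit two-step Adams-Bashforth
    y_pred = y_curr + h * (1.5 * f_curr - 0.5 * f_prev)
    # Corrector: trapezoidal rule, with the implicit f evaluated at the
    # predicted value (a single correction, not a full implicit solve)
    return y_curr + 0.5 * h * (f_curr + f(t + h, y_pred))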

Examples

Consider for an example the problem

y' = y,    y(0) = 1.

The exact solution is y(t) = e^t.


One-Step Euler

A simple numerical method is Euler's method:

yn+1 = yn + h f(tn, yn)

Euler's method can be viewed as an explicit multistep method for the degenerate case of one
step.

This method, applied with step size h to the problem y' = y, gives yn = (1 + h)^n as its
approximation to y(tn) = e^(tn).

Two-Step Adams Bashforth

Euler's method is a one-step method. A simple multistep method is the two-step Adams–
Bashforth method

This method needs two values, yn + 1 and yn, to compute the next value, yn + 2. However, the
initial value problem provides only one value, y0 = 1. One possibility to resolve this issue is to
use the y1 computed by Euler's method as the second value. With this choice, the Adams–
Bashforth method yields (rounded to four digits):

The exact solution at t = t4 = 2 is , so the two-step Adams–Bashforth method


is more accurate than Euler's method. This is always the case if the step size is small enough.
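
A short Python sketch makes the bootstrap explicit (the names are illustrative; the step size h = 0.5 follows from taking four steps to reach t = 2):

import math

def f(t, y):
    return y            # the test problem y' = y

h, t_end = 0.5, 2.0
n = int(round(t_end / h))
t, y = [0.0], [1.0]
y.append(y[0] + h * f(t[0], y[0]))   # y1 from one Euler step
t.append(h)
for i in range(1, n):
    # two-step Adams-Bashforth: y_{n+2} = y_{n+1} + h(3/2 f_{n+1} - 1/2 f_n)
    y.append(y[i] + h * (1.5 * f(t[i], y[i]) - 0.5 * f(t[i - 1], y[i - 1])))
    t.append(t[i] + h)
print(y[-1], math.exp(t[-1]))        # Adams-Bashforth estimate versus the exact e^2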


Multistep Method Families

Three families of linear multistep methods are commonly used: Adams–Bashforth methods,
Adams–Moulton methods, and the backward differentiation formulas (BDFs).

Adams–Bashforth methods

The Adams–Bashforth methods are explicit methods. The coefficients are as-1 = -1 and
as-2 = ... = a0 = 0, while the bj are chosen such that the method has order s (this
determines the method uniquely).

The Adams–Bashforth methods with s = 1, 2, 3, 4, 5 are (Hairer, Nørsett & Wanner 1993, §III.1;
Butcher 2003, p. 103):

• yn+1 = yn + h fn — this is simply the Euler method;
• yn+2 = yn+1 + h [ (3/2) fn+1 - (1/2) fn ]
• yn+3 = yn+2 + h [ (23/12) fn+2 - (16/12) fn+1 + (5/12) fn ]
• yn+4 = yn+3 + h [ (55/24) fn+3 - (59/24) fn+2 + (37/24) fn+1 - (9/24) fn ]
• yn+5 = yn+4 + h [ (1901/720) fn+4 - (2774/720) fn+3 + (2616/720) fn+2 - (1274/720) fn+1 + (251/720) fn ]
The coefficients bj can be determined as follows. Use polynomial interpolation to find the
polynomial p of degree s - 1 such that

p(tn+i) = f(tn+i, yn+i)   for i = 0, ..., s - 1.

The Lagrange formula for polynomial interpolation gives this polynomial p explicitly.

The polynomial p is locally a good approximation of the right-hand side of the differential
equation y' = f(t, y) that is to be solved, so consider the equation y' = p(t) instead. This equation
can be solved exactly; the solution is simply the integral of p. This suggests taking

yn+s = yn+s-1 + ∫ p(t) dt,   with the integral taken from tn+s-1 to tn+s.

The Adams–Bashforth method arises when the formula for p is substituted; the coefficients bj
turn out to be determined by integrating the corresponding Lagrange basis polynomials.


Replacing f(t, y) by its interpolant p incurs an error of order h^s, and it follows that the s-step
Adams–Bashforth method has indeed order s (Iserles 1996, §2.1).

The Adams–Bashforth methods were designed by John Couch Adams to solve a differential
equation modelling capillary action due to Francis Bashforth. Bashforth (1883) published his
theory and Adams' numerical method (Goldstine 1977).

Adams–Moulton methods

The Adams–Moulton methods are similar to the Adams–Bashforth methods in that they also
have as-1 = -1 and as-2 = ... = a0 = 0. Again the b coefficients are chosen to obtain the
highest order possible. However, the Adams–Moulton methods are implicit methods. By
removing the restriction that bs = 0, an s-step Adams–Moulton method can reach order s + 1,
while an s-step Adams–Bashforth method has only order s.

The Adams–Moulton methods with s = 0, 1, 2, 3, 4 are (Hairer, Nørsett & Wanner 1993, §III.1;
Quarteroni, Sacco & Saleri 2000):

• yn = yn-1 + h fn — this is the backward Euler method;
• yn+1 = yn + (h/2)(fn+1 + fn) — this is the trapezoidal rule;

The derivation of the Adams–Moulton methods is similar to that of the Adams–Bashforth


method; however, the interpolating polynomial uses not only the points tn−1, … tn−s, as above,
but also tn. The coefficients are again obtained by integrating the interpolating polynomial.

The Adams–Moulton methods are solely due to John Couch Adams, like the Adams–Bashforth
methods. The name of Forest Ray Moulton became associated with these methods because he
realized that they could be used in tandem with the Adams–Bashforth methods as a predictor-
corrector pair (Moulton 1926); Milne (1926) had the same idea. Adams used Newton's method
to solve the implicit equation (Hairer, Nørsett & Wanner 1993, §III.1).

Analysis

The central concepts in the analysis of linear multistep methods, and indeed any numerical
method for differential equations, are convergence, order, and stability.

The first question is whether the method is consistent: is the difference equation defining the
method a good approximation of the differential equation y' = f(t, y)? More precisely, a multistep
method is consistent if the local error goes to zero as the step size h goes to zero, where the
local error is defined to be the difference between the result yn+s of the method, assuming that
all the previous values are exact, and the exact solution of the equation at
time tn+s. A computation using Taylor series shows that a linear multistep method is
consistent if and only if a simple condition on its coefficients holds (stated below in terms of
the characteristic polynomials).

All the methods mentioned above are consistent (Hairer, Nørsett & Wanner 1993, §III.2).

If the method is consistent, then the next question is how well the difference equation defining
the numerical method approximates the differential equation. A multistep method is said to
have order p if the local error is of order O(h^(p+1)) as h goes to zero. This is equivalent to a
corresponding condition on the coefficients of the method.

The s-step Adams–Bashforth method has order s, while the s-step Adams–Moulton method has
order s + 1 (Hairer, Nørsett & Wanner 1993, §III.2).

These conditions are often formulated using the characteristic polynomials

ρ(z) = z^s + as-1 z^(s-1) + ... + a0    and    σ(z) = bs z^s + bs-1 z^(s-1) + ... + b0

In terms of these polynomials, the above condition for the method to have order p can be
stated compactly. In particular, the method is consistent if it has order at least one, which is the
case if ρ(1) = 0 and ρ'(1) = σ(1).

If the roots of the characteristic polynomial ρ all have modulus less than or equal to 1 and the
roots of modulus 1 are of multiplicity 1, we say that the root condition is satisfied. The method
is convergent if and only if it is consistent and the root condition is satisfied. Consequently, a
consistent method is stable if and only if this condition is satisfied, and thus the method is
convergent if and only if it is stable.

Furthermore, if the method is stable, the method is said to be strongly stable if z = 1 is the only
root of modulus 1. If it is stable and all roots of modulus 1 are not repeated, but there is more
than one such root, it is said to be relatively stable. Note that 1 must be a root; thus stable
methods are always one of these two.

Example

Consider the Adams–Bashforth three-step method

yn+3 = yn+2 + h [ (23/12) fn+2 - (16/12) fn+1 + (5/12) fn ]

The characteristic polynomial is thus

ρ(z) = z^3 - z^2 = z^2 (z - 1)

which has roots z = 0, 1, and the conditions above are satisfied. As z = 1 is the only root of
modulus 1, the method is strongly stable.


Systems of Differential Equations

In the introduction to this section we briefly discussed how a system of differential equations
can arise from a population problem in which we keep track of the population of both the prey
and the predator. It makes sense that the number of prey present will affect the number of the
predator present. Likewise, the number of predator present will affect the number of prey
present. Therefore the differential equation that governs the population of either the prey or
the predator should in some way depend on the population of the other. This will lead to two
differential equations that must be solved simultaneously in order to determine the population
of the prey and the predator.

The whole point of this is to notice that systems of differential equations can arise quite easily
from naturally occurring situations. Developing an effective predator-prey system of
differential equations is not the subject of this chapter. However, systems can arise from nth
order linear differential equations as well. Before we get into this however, let’s write down a
system and get some terminology out of the way.

Example

Write the following 2nd order differential equations as a system of first order, linear differential
equations.

Solution
We can write higher order differential equations as a system with a very simple change of
variable. We'll start by defining the following two new functions:

x1(t) = y(t)    and    x2(t) = y'(t)

Now notice that if we differentiate both sides of these we get

x1' = y' = x2    and    x2' = y''

Note the use of the differential equation in the second equation: y'' is replaced by the right-hand
side of the original equation, written in terms of x1 and x2. We can also convert the initial
conditions over to the new functions, since x1(t0) = y(t0) and x2(t0) = y'(t0).

Putting all of this together gives a system of two first order differential equations in x1 and x2,
together with initial conditions for x1 and x2.
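
As a sketch of this change of variables in code, assume a hypothetical equation y'' = f(t, y, y'), with f(t, y, y') = -y standing in as an illustrative right-hand side (the names f, system and the initial values below are all illustrative choices):

def f(t, y, yp):
    return -y                        # hypothetical right-hand side, y'' = -y

def system(t, x1, x2):
    # x1 = y, x2 = y'  =>  x1' = x2, x2' = f(t, x1, x2)
    return x2, f(t, x1, x2)

# Advance the system with Euler's method as a quick check
t, x1, x2, h = 0.0, 1.0, 0.0, 0.1    # y(0) = 1, y'(0) = 0
for _ in range(10):
    dx1, dx2 = system(t, x1, x2)
    x1, x2, t = x1 + h * dx1, x2 + h * dx2, t + h
print(t, x1, x2)                      # x1 approximates y(t), x2 approximates y'(t)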

Linear shooting method

To this point, we have only considered the solutions of differential equations for which the
initial conditions are known. However, many physical applications do not have specified initial
conditions, but rather some given boundary (constraint) conditions. A simple example of such a
problem is the second-order boundary value problem

y'' = f(t, y, y')

on t ∈ [a, b] with the general boundary conditions

y(a) = α,    y(b) = β

Thus the solution is defined over a specific interval and must satisfy the relations (5.3.2) at the
end points of the interval. Figure 4 gives a graphical representation of a generic boundary value
problem solution. We discuss the algorithm necessary to make use of the time-stepping
schemes in order to solve such a problem.


The Shooting Method

The boundary value problems constructed here require information at the present time (t = a)
and a future time (t = b). However, the time-stepping schemes developed previously only
require information about the starting time t = a. Some effort is then needed to reconcile the
time-stepping schemes with the boundary value problems presented here.

We begin by reconsidering the generic boundary value problem

y'' = f(t, y, y')

on t ∈ [a, b] with the boundary conditions

y(a) = α,    y(b) = β

The stepping schemes considered thus far for second order differential equations involve a
choice of the initial conditions y(a) and y'(a). We can still approach the boundary value problem
from this framework by choosing the "initial" conditions

y(a) = α,    y'(a) = A

where the constant A is chosen so that as we advance the solution to t = b we find y(b) = β. The
shooting method gives an iterative procedure with which we can determine this constant A.
Figure 5 illustrates the solution of the boundary value problem given two distinct values of A. In
this case, the value of A = A1 gives a value for the initial slope which is too low to satisfy the
boundary conditions (5.3.4), whereas the value of A = A2 is too large to satisfy (5.3.4).


Computational Algorithm

The above example demonstrates that adjusting the value of A in (5.3.5b) can lead to a
solution which satisfies (5.3.4b). We can solve this using a self-consistent algorithm to search
for the appropriate value of A which satisfies the original problem. The basic algorithm is as
follows; a code sketch is given after the list.

1. Solve the differential equation using a time-stepping scheme with the initial conditions
y(a) = α and y'(a) = A.
2. Evaluate the solution y(b) at t = b and compare this value with the target value
y(b) = β.
3. Adjust the value of A (either bigger or smaller) until a desired level of tolerance and
accuracy is achieved. A bisection method for determining values of A, for instance, may
be appropriate.
4. Once the specified accuracy has been achieved, the numerical solution is complete and
is accurate to the level of the tolerance chosen and the discretization scheme used in
the time-stepping.
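
A minimal Python sketch of this algorithm, assuming Heun's method as the time-stepper and bisection for adjusting A, might look as follows (all names are illustrative, and the bisection assumes y(b) - β changes sign between the two starting guesses):

def shoot(f, a, b, alpha, A, n=1000):
    """Integrate y'' = f(t, y, y') from t = a with y(a) = alpha, y'(a) = A; return y(b)."""
    h = (b - a) / n
    t, y, yp = a, alpha, A
    for _ in range(n):
        k1y, k1p = yp, f(t, y, yp)
        k2y, k2p = yp + h * k1p, f(t + h, y + h * k1y, yp + h * k1p)
        y, yp = y + 0.5 * h * (k1y + k2y), yp + 0.5 * h * (k1p + k2p)
        t += h
    return y

def solve_bvp_shooting(f, a, b, alpha, beta, A_lo, A_hi, tol=1e-8):
    """Bisect on the initial slope A until y(b) is within tol of beta."""
    for _ in range(200):
        A_mid = 0.5 * (A_lo + A_hi)
        r_mid = shoot(f, a, b, alpha, A_mid) - beta
        if abs(r_mid) < tol:
            break
        if (shoot(f, a, b, alpha, A_lo) - beta) * r_mid <= 0:
            A_hi = A_mid
        else:
            A_lo = A_mid
    return A_mid

Once solve_bvp_shooting has returned a value of A, one final call to the time-stepper with y(a) = α and y'(a) = A produces the numerical solution of the boundary value problem.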

What is the finite difference method?


The finite difference method is used to solve ordinary differential equations that have
conditions imposed on the boundary rather than at the initial point. These problems are called
boundary-value problems. In this chapter, we solve second-order ordinary differential
equations of the form

d2y/dx2 = f(x, y, y'),    a ≤ x ≤ b    (1)

with boundary conditions

y(a) = ya  and  y(b) = yb    (2)

Many academics refer to boundary value problems as position-dependent and initial value
problems as time-dependent. That is not necessarily the case as illustrated by the following
examples.

The differential equation that governs the deflection y of a simply supported beam under
uniformly distributed load (Figure 1) is given by

d2y/dx2 = q x (L - x) / (2EI)    (3)


where

x = location along the beam (in)

E = Young's modulus of elasticity of the beam (psi)

I = second moment of area (in^4)

q = uniform loading intensity (lb/in)

L = length of beam (in)

The conditions imposed to solve the differential equation are

y(x = 0) = 0  and  y(x = L) = 0    (4)

Clearly, these are boundary values and hence the problem is considered a boundary-value
problem.

Figure: Simply supported beam with a uniformly distributed load.

Now consider the case of a cantilevered beam with a uniformly distributed load (Figure 2). The
differential equation that governs the deflection y of the beam is given by

d2y/dx2 = q (L - x)^2 / (2EI)    (5)


where

x = location along the beam (in)

E = Young's modulus of elasticity of the beam (psi)

I = second moment of area (in^4)

q = uniform loading intensity (lb/in)

L = length of beam (in)

The conditions imposed to solve the differential equation are

y(x = 0) = 0  and  dy/dx (x = 0) = 0    (6)

Clearly, these are initial values and hence the problem needs to be considered as an initial value
problem.

Figure: Cantilevered beam with a uniformly distributed load.


Example

The deflection y in a simply supported beam with a uniform load q and a tensile axial load T is
given by

d2y/dx2 - T y / (EI) = q x (L - x) / (2EI)    (E1.1)

where

x = location along the beam (in)

T = tension applied (lbs)

E = Young's modulus of elasticity of the beam (psi)

I = second moment of area (in^4)

q = uniform loading intensity (lb/in)

L = length of beam (in)

Figure: Simply supported beam for Example 1, with the tensile axial load T applied at both ends.


Given

T = 7200 lbs, q = 5400 lbs/in, L = 75 in, E = 30 Msi, and I = 120 in^4,

a) Find the deflection of the beam at x = 50 in. Use a step size of Δx = 25 in and approximate the
derivatives by the central divided difference approximation.


b) Find the relative true error in the calculation of y(50).

Solution
a) Substituting the given values,

d²y/dx² − (7200 y)/((30×10⁶)(120)) = (5400) x (75 − x)/(2 (30×10⁶)(120))

d²y/dx² − 2×10⁻⁶ y = 7.5×10⁻⁷ x(75 − x)        (E1.2)

Approximating the derivative d²y/dx² at node i by the central divided difference approximation,

i 1 i i 1

Figure Illustration of finite difference nodes using


central divided difference method.

d²y/dx² ≈ (y_{i+1} − 2y_i + y_{i−1}) / (Δx)²        (E1.3)

We can rewrite the equation as

yi 1  2 yi  yi 1
 2  10 6 yi  7.5  10 7 xi (75  xi ) (E1.4)
(x) 2

Since x  25 , we have 4 nodes as given in Figure 3

i 1 i2 i 3 i4

x  25 x  50 x  75
x0
Figure 5 Finite difference method from x  0 to x  75 with x  25 .

The location of the 4 nodes then is

x1 = 0

x2 = x1 + Δx = 0 + 25 = 25

x3 = x2 + Δx = 25 + 25 = 50

x4 = x3 + Δx = 50 + 25 = 75

Writing the equation at each node, we get

Node 1: From the simply supported boundary condition at x = 0, we obtain

y1 = 0        (E1.5)

Node 2: Rewriting equation (E1.4) for node 2 gives

(y3 − 2y2 + y1)/(25)² − 2×10⁻⁶ y2 = 7.5×10⁻⁷ x2(75 − x2)

0.0016 y1 − 0.003202 y2 + 0.0016 y3 = 7.5×10⁻⁷ (25)(75 − 25)

0.0016 y1 − 0.003202 y2 + 0.0016 y3 = 9.375×10⁻⁴        (E1.6)

Node 3: Rewriting equation (E1.4) for node 3 gives

(y4 − 2y3 + y2)/(25)² − 2×10⁻⁶ y3 = 7.5×10⁻⁷ x3(75 − x3)

0.0016 y2 − 0.003202 y3 + 0.0016 y4 = 7.5×10⁻⁷ (50)(75 − 50)

0.0016 y2 − 0.003202 y3 + 0.0016 y4 = 9.375×10⁻⁴        (E1.7)

Node 4: From the simply supported boundary condition at x = 75, we obtain

y4 = 0        (E1.8)

Equations (E1.5)–(E1.8) are 4 simultaneous equations with 4 unknowns and can be written in
matrix form as

[ 1        0           0          0      ] [ y1 ]   [ 0          ]
[ 0.0016  −0.003202    0.0016     0      ] [ y2 ] = [ 9.375×10⁻⁴ ]
[ 0        0.0016     −0.003202   0.0016 ] [ y3 ]   [ 9.375×10⁻⁴ ]
[ 0        0           0          1      ] [ y4 ]   [ 0          ]

The above equations have a coefficient matrix that is tridiagonal (we can use the Thomas algorithm
to solve the equations) and is also strictly diagonally dominant (convergence is guaranteed if we
use iterative methods such as the Gauss-Seidel method). Solving the equations, we get

[ y1 ]   [  0      ]
[ y2 ] = [ −0.5852 ]
[ y3 ]   [ −0.5852 ]
[ y4 ]   [  0      ]

y(50) = y(x3) = y3 = −0.5852″
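
As a cross-check, the short Python sketch below assembles the same tridiagonal system for equation (E1.2) with Δx = 25 and solves it with the Thomas algorithm mentioned above. It is an illustrative sketch rather than part of the original worked example; the node numbering and coefficients follow equations (E1.5)–(E1.8).

import numpy as np

L_beam, dx = 75.0, 25.0
x = np.arange(0.0, L_beam + dx, dx)        # nodes x1..x4 at 0, 25, 50, 75
n = len(x)

# Tridiagonal system: sub-diagonal a, diagonal d, super-diagonal c, right side b
a = np.zeros(n); d = np.zeros(n); c = np.zeros(n); b = np.zeros(n)
d[0] = d[-1] = 1.0                         # boundary conditions y(0) = y(75) = 0
for i in range(1, n - 1):                  # interior nodes use equation (E1.4)
    a[i] = 1.0 / dx**2
    d[i] = -2.0 / dx**2 - 2e-6
    c[i] = 1.0 / dx**2
    b[i] = 7.5e-7 * x[i] * (75.0 - x[i])

# Thomas algorithm: forward elimination followed by back substitution
for i in range(1, n):
    w = a[i] / d[i - 1]
    d[i] -= w * c[i - 1]
    b[i] -= w * b[i - 1]
y = np.zeros(n)
y[-1] = b[-1] / d[-1]
for i in range(n - 2, -1, -1):
    y[i] = (b[i] - c[i] * y[i + 1]) / d[i]

print(y)                                   # approximately [0, -0.5852, -0.5852, 0]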

The exact solution of the ordinary differential equation is derived as follows. The homogeneous
part of the solution is given by solving the characteristic equation

m² − 2×10⁻⁶ = 0

m = ±0.0014142

Therefore,

y_h = K1 e^(0.0014142x) + K2 e^(−0.0014142x)

The particular part of the solution is given by

y_p = Ax² + Bx + C

Substituting y_p into the differential equation (E1.2) gives

d²y_p/dx² − 2×10⁻⁶ y_p = 7.5×10⁻⁷ x(75 − x)

d²/dx² (Ax² + Bx + C) − 2×10⁻⁶ (Ax² + Bx + C) = 7.5×10⁻⁷ x(75 − x)
dx

2A − 2×10⁻⁶ (Ax² + Bx + C) = 7.5×10⁻⁷ x(75 − x)

−2×10⁻⁶ Ax² − 2×10⁻⁶ Bx + (2A − 2×10⁻⁶ C) = 5.625×10⁻⁵ x − 7.5×10⁻⁷ x²

Equating terms gives

−2×10⁻⁶ A = −7.5×10⁻⁷

−2×10⁻⁶ B = 5.625×10⁻⁵

2A − 2×10⁻⁶ C = 0

Solving the above equations gives

A = 0.375

B = −28.125

C = 3.75×10⁵

The particular solution then is

y_p = 0.375x² − 28.125x + 3.75×10⁵

The complete solution is then given by

y = 0.375x² − 28.125x + 3.75×10⁵ + K1 e^(0.0014142x) + K2 e^(−0.0014142x)

Applying the following boundary conditions

y(x = 0) = 0

y(x = 75) = 0

we obtain the following system of equations

K1 + K2 = −3.75×10⁵

1.1119 K1 + 0.89937 K2 = −3.75×10⁵

These equations are represented in matrix form by

[ 1        1       ] [ K1 ]   [ −3.75×10⁵ ]
[ 1.1119   0.89937 ] [ K2 ] = [ −3.75×10⁵ ]

A number of different numerical methods may be utilized to solve this system of equations,
such as Gaussian elimination. Using any of these methods yields

[ K1 ]   [ −1.775656226×10⁵ ]
[ K2 ] = [ −1.974343774×10⁵ ]

Substituting these values back into the equation gives

y = 0.375x² − 28.125x + 3.75×10⁵ − 1.775656226×10⁵ e^(0.0014142x) − 1.974343774×10⁵ e^(−0.0014142x)

Unlike other examples in this chapter and in the book, the above expression for the deflection
of the beam is displayed with a larger number of significant digits. This is done to minimize the
round-off error because the above expression involves subtraction of large numbers that are
close to each other.

b) To calculate the relative true error, we must first calculate the value of the exact solution at
x = 50.

y(50) = 0.375(50)² − 28.125(50) + 3.75×10⁵ − 1.775656226×10⁵ e^(0.0014142×50) − 1.974343774×10⁵ e^(−0.0014142×50)

y(50) = −0.5320″

The true error is given by

Et = Exact Value − Approximate Value

Et = −0.5320 − (−0.5852)

Et = 0.05320

The relative true error is given by

εt = (True Error / True Value) × 100%

εt = (0.05320 / (−0.5320)) × 100%

εt = −10%

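The arithmetic above can also be scripted. The following Python sketch is not part of the original solution: it recomputes K1 and K2 from the boundary conditions (keeping full precision in the exponential coefficients rather than the rounded 1.1119 and 0.89937 shown above, which matters because of the near-cancellation of large terms), evaluates the exact deflection at x = 50, and forms the true and relative true errors against the finite difference value.

import numpy as np

m = np.sqrt(2e-6)                          # root of the characteristic equation
Lb = 75.0

# Boundary conditions y(0) = 0 and y(75) = 0 give a 2x2 system for K1 and K2
A = np.array([[1.0,            1.0],
              [np.exp(m * Lb), np.exp(-m * Lb)]])
rhs = np.array([-3.75e5, -3.75e5])
K1, K2 = np.linalg.solve(A, rhs)

def y_exact(x):
    # Complete solution: particular part plus the two homogeneous terms
    return (0.375 * x**2 - 28.125 * x + 3.75e5
            + K1 * np.exp(m * x) + K2 * np.exp(-m * x))

y_fd = -0.5852                             # finite difference result from part (a)
Et = y_exact(50.0) - y_fd                  # true error
rel = Et / y_exact(50.0) * 100.0           # relative true error in percent
print(y_exact(50.0), Et, rel)              # roughly -0.53, 0.05, -10 %
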
Numerical solution to partial differential equations

Like ordinary differential equations, partial differential equations are equations to be solved in
which the unknown element is a function, but in PDEs the function is one of several variables,
and so of course the known information relates the function and its partial derivatives with
respect to the several variables. Again, one generally looks for qualitative statements about the
solution. For example, in many cases, solutions exist only if some of the parameters lie in a
specific set (say, the set of integers). Various broad families of PDEs admit general statements
about the behaviour of their solutions. This area has a long-standing close relationship with the
physical sciences, especially physics, thermodynamics, and quantum mechanics: for many of
the topics in the field, the origins of the problem and the qualitative nature of the solutions are
best understood by describing the corresponding result in physics, as we shall do below.

Roughly corresponding to the initial values in an ODE problem, PDEs are usually solved in the
presence of boundary conditions. For example, the Dirichlet problem (actually introduced by
Riemann) asks for the solution of the Laplace equation on an open subset D of the plane, with
the added condition that the value of u on the boundary of D is to be some prescribed
function f. (Physically this corresponds to asking, for example, for the steady-state electric
potential within D when prescribed voltages are applied around the boundary.) It is a
nontrivial task to determine how much boundary information is appropriate for a given PDE!

Linear differential equations occur perhaps most frequently in applications (in settings in which
a superposition principle is appropriate). When these differential equations are first-order, they
share many features with ordinary differential equations. (More precisely, they correspond to
families of ODEs, in which considerable attention must be focused on the dependence of the
solutions on the parameters.)

Historically, three equations were of fundamental interest and exhibited distinctive behaviour.
They led to the classification of second-order linear PDEs into three types of great interest.
The Laplace equation

∂²u/∂x² + ∂²u/∂y² = 0

applies to potential energy functions u=u(x,y) for a conservative force field in the plane. PDEs of
this type are called elliptic. The Heat Equation

∂²u/∂x² + ∂²u/∂y² = ∂u/∂t

applies to the temperature distribution u(x,y,t) in the plane when heat is allowed to flow from
warm areas to cool ones. PDEs of this type are parabolic. The Wave Equation

∂²u/∂x² + ∂²u/∂y² = ∂²u/∂t²

applies to the heights u(x,y,t) of vibrating membranes and other wave functions. PDEs of this
type are called hyperbolic. The analyses of these three types of equations are quite distinct in
character. Allowing non-constant coefficients, we see that a general second-order linear PDE may
change type from point to point. These behaviours generalize to nonlinear PDEs as well.
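
These classical equations are also the usual starting point for numerical PDE methods. As an illustration only (the grid size, boundary data, and tolerance below are arbitrary choices, not taken from the text), the following Python sketch solves a discrete Dirichlet problem for the Laplace equation on a square by Jacobi iteration, replacing ∂²u/∂x² + ∂²u/∂y² = 0 with the five-point average of neighbouring grid values.

import numpy as np

n = 21                                      # grid points per side of the square
u = np.zeros((n, n))
u[0, :] = 100.0                             # prescribed boundary value on one edge;
                                            # the other three edges are held at 0

for _ in range(5000):
    u_new = u.copy()
    # Five-point stencil: each interior value becomes the average of its four
    # neighbours, which is the discrete form of u_xx + u_yy = 0
    u_new[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1]
                                + u[1:-1, 2:] + u[1:-1, :-2])
    if np.max(np.abs(u_new - u)) < 1e-6:    # stop once the update is negligible
        u = u_new
        break
    u = u_new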

A general linear PDE may be viewed as seeking the kernel of a linear map defined between
appropriate function spaces. (Determining which function space is best suited to the problem is
itself a nontrivial problem and requires careful functional analysis as well as a consideration of
the origin of the equation. Indeed, it is the analysis of PDEs which tends to drive the
development of classes of topological vector spaces.) The perspective of differential operators
allows the use of general tools from linear algebra, including eigenspace decomposition
(spectral theory) and index theory.

Modern approaches seek methods applicable to non-linear PDEs as well as linear ones. In this
context existence and uniqueness results, and theorems concerning the regularity of solutions,
are more difficult. Since it is unlikely that explicit solutions can be obtained for any but the most
special of problems, methods of "solving" the PDEs involve analysis within the appropriate
function space -- for example, seeking convergence of a sequence of functions which can be
shown to approximately solve the PDE, or describing the sought-for function as a fixed point
under a self-map on the function space, or as the point at which some real-valued function is
minimized. Some of these approaches may be modified to give algorithms for estimating
numerical solutions to a PDE.

Generalizations of results about partial differential equations often lead to statements about
function spaces and other topological vector spaces. For example, integral techniques (solving a
differential equation by computing a convolution, say) lead to integral operators (transforms on
function spaces); these and differential operators lead in turn to general pseudodifferential
operators on function spaces.
