
Introduction to Numerical Solutions

of Ordinary Differential Equations


Larry Caretto
Mechanical Engineering 501AB
Seminar in Engineering Analysis

October 29, 2003
2
Outline
Review last class and homework
Midterm Exam November 4 covers
material up to and including last week's
lecture
Overview of numerical solutions
Initial value problems in first-order
equations
Systems of first order equations and initial
value problems in higher order equations
Boundary value problems
Stiff systems and eigenvalues

3
Review Last Class
Systems of equations
Combine to higher order equation
Matrix approaches
Reduction of order
Laplace transforms for ODEs
Transform ODE from y(t) to Y(s)
Solve for Y(s)
Rearrange solution
Transform back to y(t)
4
Simultaneous Solution
Differentiate the first (y_1) equation to obtain
the derivative(s) of y_2 that are present in
the second equation
Solve the first equation for y_2
Substitute the equations for y_2 and its
derivatives into the result of the first step
Result is a higher order equation that
can be solved (if possible) by usual
methods
5
Matrix Differential Equations II
Matrix components in the equation
$$\frac{d\mathbf{y}}{dt} + \mathbf{A}\mathbf{y} = \mathbf{r}$$
$$\mathbf{y} = \begin{bmatrix} y_1 \\ y_2 \\ y_3 \\ \vdots \\ y_n \end{bmatrix} \qquad
\frac{d\mathbf{y}}{dt} = \begin{bmatrix} dy_1/dt \\ dy_2/dt \\ dy_3/dt \\ \vdots \\ dy_n/dt \end{bmatrix}$$
$$\mathbf{A} = \begin{bmatrix}
a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\
a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\
a_{31} & a_{32} & a_{33} & \cdots & a_{3n} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & a_{n3} & \cdots & a_{nn}
\end{bmatrix} \qquad
\mathbf{r} = \begin{bmatrix} r_1 & r_2 & r_3 & \cdots & r_n \end{bmatrix}^T$$
6
Solving
$$\frac{d\mathbf{y}}{dt} + \mathbf{A}\mathbf{y} = \mathbf{r}$$
Assume that A (n x n) has n linearly
independent eigenvectors
Eigenvectors are the columns of a matrix X
Define a new vector s = X^{-1}y (y = Xs)
Substitute y = Xs into the matrix
differential equation
Obtain an equation for s using Λ = X^{-1}AX
that gives independent equations
7
Terms in Solution of
$$\frac{d\mathbf{s}}{dt} + \Lambda\mathbf{s} = \mathbf{p}$$
With these definitions, s_i = C_i e^{-λ_i t} + p_i/λ_i
becomes s = EC + Λ^{-1}p (E(0) = I)
$$\mathbf{E}(t) = \begin{bmatrix}
e^{-\lambda_1 t} & 0 & 0 & \cdots & 0 \\
0 & e^{-\lambda_2 t} & 0 & \cdots & 0 \\
0 & 0 & e^{-\lambda_3 t} & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & e^{-\lambda_n t}
\end{bmatrix}$$
$$\Lambda^{-1}\mathbf{p} = \begin{bmatrix} p_1/\lambda_1 \\ p_2/\lambda_2 \\ p_3/\lambda_3 \\ \vdots \\ p_n/\lambda_n \end{bmatrix} \qquad
\mathbf{C} = \begin{bmatrix} C_1 \\ C_2 \\ C_3 \\ \vdots \\ C_n \end{bmatrix}$$
8
Apply Initial Conditions on y(0)
Get y = Xs = XEC + XΛ^{-1}p
At t = 0, E = I, and y = y_0 = initial y
components, giving y_0 = XIC + XΛ^{-1}p
Premultiply by X^{-1} to obtain
X^{-1}y_0 = X^{-1}XC + X^{-1}XΛ^{-1}p = C + Λ^{-1}p
Constant vector: C = X^{-1}y_0 − Λ^{-1}p
Result: y = XE[X^{-1}y_0 − Λ^{-1}p] + XΛ^{-1}p
Homogeneous (p = 0): y = XEX^{-1}y_0


9
Reduction of Order
An n-th order equation can be written as
a system of n first order equations
We can write a general nonlinear n-th
order equation as shown below
$$\frac{d^n y}{dx^n} = f\left(x,\, y,\, \frac{dy}{dx},\, \ldots,\, \frac{d^{n-1} y}{dx^{n-1}}\right)$$
Redefine y as z_0 and define the
following derivatives
$$\frac{dy}{dx} = \frac{dz_0}{dx} = z_1 \qquad
\frac{d^2 y}{dx^2} = \frac{d^2 z_0}{dx^2} = \frac{dz_1}{dx} = z_2$$
10
Reduction of Order II
z_0 to z_{n-1} are n variables that satisfy
simultaneous, first-order ODEs
$$\frac{dz_{n-1}}{dx} = \frac{d^n y}{dx^n} = f\left(x,\, z_0,\, z_1,\, \ldots,\, z_{n-1}\right)$$
Continue this definition up to z_{n-1}
$$\frac{d^n y}{dx^n} = \frac{d^n z_0}{dx^n} = \frac{d^{n-1} z_1}{dx^{n-1}} = \cdots = \frac{d^2 z_{n-2}}{dx^2} = \frac{dz_{n-1}}{dx}$$
$$\frac{dz_k}{dx} = z_{k+1} \qquad k = 0, 1, \ldots, n-2$$
11
Reduction of Order III
The main application of reduction of
order is to numerical methods
Numerical methods for ODEs are
developed to solve a system of first order
equations
Higher order equations are solved by
converting them to a system of first
order equations
12
Simple Laplace Transforms
f(t)              F(s)
c t^n             c n!/s^{n+1}
c t^x             c Γ(x+1)/s^{x+1}
c e^{at}          c/(s − a)
c sin ωt          cω/(s² + ω²)
c cos ωt          cs/(s² + ω²)
c sinh ωt         cω/(s² − ω²)
c cosh ωt         cs/(s² − ω²)
c e^{at} sin ωt   cω/[(s − a)² + ω²]
c e^{at} cos ωt   c(s − a)/[(s − a)² + ω²]
Additional transforms in Table 5.9, pp. 297-299 of Kreyszig
13
Transforms of Derivatives
The main formulas for differential
equations are the Laplace transforms of
derivatives
These have the transform of the desired
function, L[f(t)] = F(s), and the initial
conditions f(0), f′(0), etc.
L[f′(t)] = sF(s) − f(0)
L[f″(t)] = s²F(s) − s f(0) − f′(0)
L[f⁽ⁿ⁾(t)] = sⁿF(s) − sⁿ⁻¹f(0) − sⁿ⁻²f′(0) − ⋯ − s f⁽ⁿ⁻²⁾(0) − f⁽ⁿ⁻¹⁾(0)
14
Solving Differential Equations
Transform all terms in the differential
equation to get an algebraic equation
For a differential equation in y(t) we get the
transforms Y(s) = L [y(t)]
Similar notation for other transformed
functions in the equation R(s) = L [r(t)]
Solve the algebraic equation for Y(s)
Obtain the inverse transform for Y(s)
from tables to get y(t)
15
Partial Fractions
Method to convert fraction with several
factors in denominator into sum of
individual factors (in denominator)
Necessary to modify complicated
solution for Y(s) into simpler functions
whose transforms are known
Special rules for repeated factors and
for complex conjugates
16
Shifting Theorems
Sometimes required for transform table use
First theorem is defined for a function f(t)
with transform F(s)
F(s − a) has inverse e^{at} f(t)
Example: Y(s) = [2(s+1) + 4]/[(s + 1)² + 1]
For F(s) = s/(s² + ω²), f(t) = cos ωt, and
f(t) = sin ωt for F(s) = ω/(s² + ω²)
Here Y(s) = 2(s+1)/[(s + 1)² + 1] +
4/[(s + 1)² + 1] is F(s + 1) with a = −1, so
y(t) is e^{-t} times the inverse of
2s/(s² + 1) + 4/(s² + 1)
y(t) = e^{-t}[2 cos t + 4 sin t]
17
Shifting Theorems II
Second shifting theorem
Applies to e^{-as} F(s), where F(s) is the known
transform of a function f(t)
Inverse transform is f(t − a) u(t − a), where
u(t − a) is the unit step function
If we have e^{-as} s/(s² + ω²), we recognize
s/(s² + ω²) as F(s) for f(t) = cos ωt
Thus e^{-as} s/(s² + ω²) is the Laplace
transform of cos[ω(t − a)] u(t − a)
18
Numerical Analysis Problems
Numerical solution of algebraic
equations and eigenvalue problems
Solution of one or more nonlinear
algebraic equations f(x) = 0
Linear and nonlinear optimization
Constructing interpolating polynomials
Numerical quadrature
Numerical differentiation
Numerical solution of differential equations
19
Interpolation
Start with N data pairs (x_i, y_i)
Find a function (polynomial) that can be
used for interpolation
Basic rule: the interpolation polynomial
must fit all points exactly
Denote the polynomial as p(x)
The basic rule is that p(x_i) = y_i

Many different forms
20
Newton Polynomials
p(x) = a_0 + a_1(x − x_0) + a_2(x − x_0)(x − x_1)
+ a_3(x − x_0)(x − x_1)(x − x_2) + ⋯ +
a_{n-1}(x − x_0)(x − x_1)(x − x_2)⋯(x − x_{n-2})
Terms with factors of x − x_i are zero
when x = x_i
Use this and the rule that p(x_i) = y_i to find the a_i
a_0 = y_0, a_1 = (y_1 − y_0)/(x_1 − x_0)
y_2 = a_0 + a_1(x_2 − x_0) + a_2(x_2 − x_0)(x_2 − x_1)
Solve for a_2 using the results for a_0 and a_1

21
Newton Polynomials II
y_2 = a_0 + a_1(x_2 − x_0) + a_2(x_2 − x_0)(x_2 − x_1)
$$a_2 = \frac{y_2 - a_0 - a_1 (x_2 - x_0)}{(x_2 - x_0)(x_2 - x_1)}
= \frac{(y_2 - y_0) - \dfrac{y_1 - y_0}{x_1 - x_0}(x_2 - x_0)}{(x_2 - x_0)(x_2 - x_1)}$$
Data determine the coefficients
Develop a scheme known as the divided
difference table to compute the a_k

22
Divided Difference Table
Table columns: x_i, y_i, first (F), second (S), and third (T) divided differences for x_0 through x_3
First divided differences:
F_0 = (y_1 − y_0)/(x_1 − x_0),  F_1 = (y_2 − y_1)/(x_2 − x_1),  F_2 = (y_3 − y_2)/(x_3 − x_2)
Second divided differences:
S_0 = (F_1 − F_0)/(x_2 − x_0),  S_1 = (F_2 − F_1)/(x_3 − x_1)
Third divided difference:
T_0 = (S_1 − S_0)/(x_3 − x_0)
Coefficients: a_0 = y_0, a_1 = F_0, a_2 = S_0, a_3 = T_0
23
Divided Difference Example
Data: x = 0, 10, 20, 30 with y = 0, 10, 40, 100
F_0 = (10 − 0)/(10 − 0) = 1
F_1 = (40 − 10)/(20 − 10) = 3
F_2 = (100 − 40)/(30 − 20) = 6
S_0 = (3 − 1)/(20 − 0) = 0.1
S_1 = (6 − 3)/(30 − 10) = 0.15
T_0 = (0.15 − 0.1)/(30 − 0) = 1/600
Coefficients: a_0 = 0, a_1 = F_0, a_2 = S_0, a_3 = T_0
24
Divided Difference Example II
Divided difference table gives a_0 = 0,
a_1 = 1, a_2 = 0.1, and a_3 = 1/600
Polynomial p(x) = a_0 + a_1(x − x_0) +
a_2(x − x_0)(x − x_1) + a_3(x − x_0)(x − x_1)(x − x_2) =
0 + 1(x − 0) + 0.1(x − 0)(x − 10) +
(1/600)(x − 0)(x − 10)(x − 20) = x +
0.1x(x − 10) + (1/600)x(x − 10)(x − 20)
Check: p(30) = 30 + 0.1(30)(20) + (1/600)
(30)(20)(10) = 30 + 60 + 10 = 100 (correct)
25
Constant Step Size
Divided differences work for equal or
unequal step size in x
If Δx = h is a constant we have simpler
results:
F_k = Δy_k/h = (y_{k+1} − y_k)/h
S_k = Δ²y_k/(2h²) = (y_{k+2} − 2y_{k+1} + y_k)/(2h²)
T_k = Δ³y_k/(3!h³) = (y_{k+3} − 3y_{k+2} + 3y_{k+1} − y_k)/(6h³)
Δⁿy_k is called the n-th forward difference
Can also define backward and central
differences


26
Interpolation Approaches
When we have N data points how do we
interpolate among them?
Order N-1 polynomial not good choice
Use piecewise polynomials of lower order
(linear or quadratic)
Can match first and/or higher derivatives
where piecewise polynomials join
Cubic splines are piecewise cubic
polynomials that match first and second
derivatives (as well as values)
27
Cubic Spline Interpolation
[Figure: cubic spline interpolation of the data for 0 ≤ x ≤ 6, comparing known-f′, natural, and no-knot end conditions with the data points]
28
Newton Interpolating Polynomial
[Figure: Newton interpolating polynomial through the data for 0 ≤ x ≤ 6, showing the polynomial and the data points]
29
Polynomial Applications
Data interpolation
Approximation functions in numerical
quadrature and solution of ODEs
Basis functions for finite element
methods
Can obtain equations for numerical
differentiation
Statistical curve fitting (not discussed
here) usually used in practice
30
Derivative Expressions
Obtain from differentiating interpolation
polynomials or from Taylor series
Series expansion for f(x) about x = a:
$$f(x) = f(a) + \left.\frac{df}{dx}\right|_{x=a}(x-a) + \frac{1}{2!}\left.\frac{d^2 f}{dx^2}\right|_{x=a}(x-a)^2 + \frac{1}{3!}\left.\frac{d^3 f}{dx^3}\right|_{x=a}(x-a)^3 + \cdots$$
$$f(x) = \sum_{n=0}^{\infty} \frac{1}{n!}\left.\frac{d^n f}{dx^n}\right|_{x=a}(x-a)^n$$
Note: d⁰f/dx⁰ = f and 0! = 1
What is the error from truncating the series?
31
Truncation Error
If we truncate the series after m terms
$$f(x) = \underbrace{\sum_{n=0}^{m} \frac{1}{n!}\left.\frac{d^n f}{dx^n}\right|_{x=a}(x-a)^n}_{\text{Terms used}} + \underbrace{\sum_{n=m+1}^{\infty} \frac{1}{n!}\left.\frac{d^n f}{dx^n}\right|_{x=a}(x-a)^n}_{\text{Truncation error, } \varepsilon_m}$$
Can write the truncation error as a single term
at an unknown location (derivation based
on the theorem of the mean)
$$\varepsilon_m = \sum_{n=m+1}^{\infty} \frac{1}{n!}\left.\frac{d^n f}{dx^n}\right|_{x=a}(x-a)^n
= \frac{1}{(m+1)!}\left.\frac{d^{m+1} f}{dx^{m+1}}\right|_{x=\xi}(x-a)^{m+1}$$
32
Derivative Expressions
Look at a finite-difference grid with equal
spacing: h = Δx so x_i = x_0 + ih
Taylor series about x = x_i gives f(x_i + kh)
= f[x_0 + (i+k)h] = f_{i+k} in terms of f(x_i) = f_i
$$f(x_i + kh) = f(x_i) + \left.\frac{df}{dx}\right|_{x=x_i}(kh) + \frac{1}{2!}\left.\frac{d^2 f}{dx^2}\right|_{x=x_i}(kh)^2 + \frac{1}{3!}\left.\frac{d^3 f}{dx^3}\right|_{x=x_i}(kh)^3 + \cdots$$
Compact derivative notation:
$$f_i' = \left.\frac{df}{dx}\right|_{x=x_i} \qquad f_i'' = \left.\frac{d^2 f}{dx^2}\right|_{x=x_i} \qquad \ldots \qquad f_i^{(n)} = \left.\frac{d^n f}{dx^n}\right|_{x=x_i}$$
33
Derivative Expressions II
Combine all definitions for compact
series notation:
$$f(x_i + kh) = f(x_i) + \left.\frac{df}{dx}\right|_{x=x_i}(kh) + \frac{1}{2!}\left.\frac{d^2 f}{dx^2}\right|_{x=x_i}(kh)^2 + \cdots$$
becomes
$$f_{i+k} = f_i + f_i'(kh) + \frac{f_i''(kh)^2}{2!} + \frac{f_i'''(kh)^3}{3!} + \cdots$$
Use this formula to get expansions for
various grid locations about x = x_i and
use the results to get derivative expressions
34
Derivative Expressions III
Apply the general equation
$$f_{i+k} = f_i + f_i'(kh) + \frac{f_i''(kh)^2}{2!} + \frac{f_i'''(kh)^3}{3!} + \cdots$$
for k = 1 and k = −1
Forward (k = 1):
$$f_{i+1} = f_i + f_i' h + \frac{f_i'' h^2}{2!} + \frac{f_i''' h^3}{3!} + \cdots
\quad\Rightarrow\quad
f_i' = \frac{f_{i+1} - f_i}{h} - \frac{f_i'' h}{2!} - \frac{f_i''' h^2}{3!} - \cdots = \frac{f_{i+1} - f_i}{h} + O(h)$$
Backward (k = −1):
$$f_{i-1} = f_i - f_i' h + \frac{f_i'' h^2}{2!} - \frac{f_i''' h^3}{3!} + \cdots
\quad\Rightarrow\quad
f_i' = \frac{f_i - f_{i-1}}{h} + \frac{f_i'' h}{2!} - \frac{f_i''' h^2}{3!} + \cdots = \frac{f_i - f_{i-1}}{h} + O(h)$$
35
Derivative Expressions IV
Subtract the f_{i-1} expression from the f_{i+1} expression
$$f_{i+1} = f_i + f_i' h + \frac{f_i'' h^2}{2!} + \frac{f_i''' h^3}{3!} + \cdots \qquad
f_{i-1} = f_i - f_i' h + \frac{f_i'' h^2}{2!} - \frac{f_i''' h^3}{3!} + \cdots$$
$$f_{i+1} - f_{i-1} = 2 f_i' h + \frac{2 f_i''' h^3}{3!} + \frac{2 f_i^{(5)} h^5}{5!} + \cdots$$
$$f_i' = \frac{f_{i+1} - f_{i-1}}{2h} - \frac{f_i''' h^2}{3!} - \frac{f_i^{(5)} h^4}{5!} - \cdots = \frac{f_{i+1} - f_{i-1}}{2h} + O(h^2)$$
The result is called the central difference expression
36
Order of the Error
Forward and backward derivatives have
an error term that is proportional to h
Central difference error is proportional
to h²
An error proportional to hⁿ is called n-th order
Reducing the step size by a factor of a
reduces the n-th order error by aⁿ
$$\frac{\varepsilon_2}{\varepsilon_1} \approx \left(\frac{h_2}{h_1}\right)^n$$
37
Order of the Error Notation
Write the error term for the n-th order
error as O(hⁿ)
Big oh notation, O, denotes order
Recognizes that the factor multiplying hⁿ may
change slightly with h
$$f_i' = \frac{f_{i+1} - f_i}{h} + O(h) \quad \text{(first order forward)}$$
$$f_i' = \frac{f_i - f_{i-1}}{h} + O(h) \quad \text{(first order backward)}$$
$$f_i' = \frac{f_{i+1} - f_{i-1}}{2h} + O(h^2) \quad \text{(second order central)}$$
38
Higher Order Derivatives
Add the f_{i+1} and f_{i-1} expressions
$$f_{i+1} = f_i + f_i' h + \frac{f_i'' h^2}{2!} + \frac{f_i''' h^3}{3!} + \cdots \qquad
f_{i-1} = f_i - f_i' h + \frac{f_i'' h^2}{2!} - \frac{f_i''' h^3}{3!} + \cdots$$
$$f_{i+1} + f_{i-1} = 2 f_i + \frac{2 f_i'' h^2}{2!} + \frac{2 f_i^{(4)} h^4}{4!} + \frac{2 f_i^{(6)} h^6}{6!} + \cdots$$
$$f_i'' = \frac{f_{i+1} - 2 f_i + f_{i-1}}{h^2} - \frac{2 f_i^{(4)} h^2}{4!} - \cdots = \frac{f_{i+1} - 2 f_i + f_{i-1}}{h^2} + O(h^2)$$
This f″ is a second-order, central difference expression
39
Higher Order Directional
We can get higher order derivative
expressions at the expense of more
computations
Get second order forward and backward
derivative expressions from f_{i+2} and f_{i-2},
respectively
Combine with the previous expressions for
f_{i+1} and f_{i-1} to eliminate the first order error
term
40
Specific Taylor Series
General
equation:
$$f_{i+k} = f_i + f_i'(kh) + \frac{f_i''(kh)^2}{2!} + \frac{f_i'''(kh)^3}{3!} + \cdots$$
k = 2:
$$f_{i+2} = f_i + 2 f_i' h + \frac{4 f_i'' h^2}{2!} + \frac{8 f_i''' h^3}{3!} + \cdots$$
k = −2:
$$f_{i-2} = f_i - 2 f_i' h + \frac{4 f_i'' h^2}{2!} - \frac{8 f_i''' h^3}{3!} + \cdots$$
k = 3:
$$f_{i+3} = f_i + 3 f_i' h + \frac{9 f_i'' h^2}{2!} + \frac{27 f_i''' h^3}{3!} + \cdots$$
k = −3:
$$f_{i-3} = f_i - 3 f_i' h + \frac{9 f_i'' h^2}{2!} - \frac{27 f_i''' h^3}{3!} + \cdots$$
41
Second Order Forward
Subtract 4f_{i+1} from f_{i+2} to eliminate the h² term
$$f_{i+2} - 4 f_{i+1} = \left[f_i + 2 f_i' h + 2 f_i'' h^2 + \frac{8 f_i''' h^3}{6} + \cdots\right] - 4\left[f_i + f_i' h + \frac{f_i'' h^2}{2} + \frac{f_i''' h^3}{6} + \cdots\right]$$
$$f_{i+2} - 4 f_{i+1} = -3 f_i - 2 f_i' h + \frac{2 f_i''' h^3}{3} + \cdots$$
$$f_i' = \frac{-3 f_i + 4 f_{i+1} - f_{i+2}}{2h} + \frac{f_i''' h^2}{3} + \cdots \qquad \text{(second order error)}$$
42
Second Order Backwards
Subtract 4f_{i-1} from f_{i-2} to eliminate the h² term
$$f_{i-2} - 4 f_{i-1} = \left[f_i - 2 f_i' h + 2 f_i'' h^2 - \frac{8 f_i''' h^3}{6} + \cdots\right] - 4\left[f_i - f_i' h + \frac{f_i'' h^2}{2} - \frac{f_i''' h^3}{6} + \cdots\right]$$
$$f_{i-2} - 4 f_{i-1} = -3 f_i + 2 f_i' h - \frac{2 f_i''' h^3}{3} + \cdots$$
$$f_i' = \frac{3 f_i - 4 f_{i-1} + f_{i-2}}{2h} + \frac{f_i''' h^2}{3} + \cdots \qquad \text{(second order error)}$$
43
Other Derivative Expressions
Can continue in this fashion
Write Taylor series for f_{i+1}, f_{i-1}, f_{i+2}, f_{i-2}, f_{i+3},
f_{i-3}, etc.
Create linear combinations with factors that
eliminate desired terms
Eliminate the f_i term to obtain central differences
Keep only terms in f_k with k ≥ i for forward
difference expressions
Keep only terms in f_k with k ≤ i for backward
difference expressions
See results on page 271 of Hoffman
44
Order of Error Examples
Table 2-1 in the notes shows the error in the first
derivative of eˣ around x = 1
Using first- and second-order forward and
second-order central differences
Step h = 0.4, 0.2, and 0.1
Error ratio for doubling the step size:
4.01 to 4.02 for central differences
2.07 to 2.15 for first-order forward differences
4.32 to 4.69 for second-order forward differences
$$n \approx \frac{\log\left(\varepsilon_2/\varepsilon_1\right)}{\log\left(h_2/h_1\right)} = \frac{\log(\varepsilon_2) - \log(\varepsilon_1)}{\log(h_2) - \log(h_1)}$$
45
Roundoff Error
Possible in derivative expressions from
subtracting close differences
Example f(x) = eˣ: f′(x) ≈ (e^{x+h} − e^{x−h})/(2h)
and the error at x = 1 is (e^{1+h} − e^{1−h})/(2h) − e
$$E = \frac{3.004166 - 2.722815}{2(0.1)} - 2.718282 = 4.5 \times 10^{-3}$$
$$E = \frac{2.7185536702 - 2.7180100139}{2(0.0001)} - 2.718281828459 = 4.5 \times 10^{-9}$$
$$E = \frac{2.71828210028724 - 2.71828155660388}{2(0.0000001)} - 2.718281828 = 5.9 \times 10^{-9}$$
The first two errors show the second order truncation error; at the smallest step size roundoff keeps the error from decreasing further
46
Figure 2-1. Effect of Step Size on Error
[Log-log plot of error versus step size; the error falls with decreasing h until roundoff dominates at very small step sizes]
47
Numerical ODE Solutions
The initial value problem
Euler method as a prototype for the
general algorithm
Local and global errors
More accurate methods
Step-size control for error control
Applications to systems of equations
Higher-order equations
48
The Initial Value Problem
dy/dx = f(x,y) (known) with y(x_0) = y_0
Basic numerical approach:
Use a finite difference grid: x_{i+1} − x_i = h
Replace the derivative by a finite-difference
approximation: dy/dx ≈ (y_{i+1} − y_i)/(x_{i+1} − x_i)
= (y_{i+1} − y_i)/h
Derive a formula to compute f_avg, the
average value of f(x,y) between x_i and x_{i+1}
Replace dy/dx = f(x,y) by (y_{i+1} − y_i)/h = f_avg
Repeatedly compute y_{i+1} = y_i + h f_avg
49
Notation
x_i is the value of the independent
variable at point i on the grid
Determined from the user-selected value of
the step size (or the series of h_i values)
Can always specify exactly the
independent variable's value, x_i
y_i is the value of the numerical solution
at the point where x = x_i
f_i is the derivative value found from x_i and
the numerical value, y_i; i.e., f_i = f(x_i, y_i)
50
More Notation
y(x_i) is the exact value of y at x = x_i
Usually not known, but the notation is used in
error analysis of algorithms
f(x_i, y(x_i)) is the exact value of the
derivative at x = x_i
e_1 = y(x_1) − y_1 is the local truncation
error
This is the error for one step of the algorithm
starting from the known initial condition
51
Local versus Global Error
At the initial condition we know the
solution, y, exactly
First step introduces some error
Remaining steps have single step error
plus previous accumulated error
E_j = y(x_j) − y_j is the global truncation error
Difference between numerical and exact
solution after several steps
This is the error we want to control

52
Euler's Method
Simplest algorithm; example used for
error analysis, not for practical use
Define f_avg = f(x_i, y_i)
Euler's method algorithm is y_{i+1} = y_i + h_i f_i
= y_i + h_i f(x_i, y_i)
Example: dy/dx = x + y, y = 0 at x = 0
Choose h = 0.1
We have x_0 = 0, y_0 = 0, f_0 = x_0 + y_0 = 0
x_1 = x_0 + h = 0.1, y_1 = y_0 + h f_0 = 0
53
Euler Example Continued
Next step is from x_1 = 0.1 to x_2 = 0.2
f_1 = x_1 + y_1 = 0.1 + 0 = 0.1
y_2 = y_1 + h f_1 = 0 + (0.1)(0.1) = 0.01
For dy/dx = x + y, we know the exact
solution is y = (x_0 + y_0 + 1)e^{x−x_0} − x − 1
For x_0 = y_0 = 0, y = eˣ − x − 1
Look at the application of the Euler algorithm
for a few steps and compute the error
54
Euler Example
x_i    y_i      f_i      y(x_i)     E(x_i)
0      0        0        0          0
0.1    0        0.1      0.0052     0.0052
0.2    0.01     0.21     0.0214     0.0114
0.3    0.031    0.331    0.04986    0.01886
0.4    0.0641   0.4641   0.091825   0.027725
55
Euler Example Plot
[Plot comparing the exact solution y = eˣ − x − 1 with the Euler numerical solution for 0 ≤ x ≤ 1]
56
Error Propagation
Behavior of Euler algorithm is typical of
all algorithms for numerical solutions
Error grows at each step
We usually do not know this global
error, but we would like to control it
Look at local error for Euler algorithm
Then discuss general relationship
between local and global error
57
Taylor Series to Get Error
Expand y(x) in a Taylor series about x = a
$$y(x) = y(a) + \left.\frac{dy}{dx}\right|_{x=a}(x-a) + \frac{1}{2!}\left.\frac{d^2 y}{dx^2}\right|_{x=a}(x-a)^2 + \frac{1}{3!}\left.\frac{d^3 y}{dx^3}\right|_{x=a}(x-a)^3 + \cdots$$
Look at one step from a known initial
condition, a = x_0, to x_0 + h, so x − a = h
$$y(x_0 + h) = y(x_0) + \left.\frac{dy}{dx}\right|_0 h + \frac{1}{2!}\left.\frac{d^2 y}{dx^2}\right|_0 h^2 + \frac{1}{3!}\left.\frac{d^3 y}{dx^3}\right|_0 h^3 + \cdots$$
In ODE notation, dy/dx|₀ = f(x_0, y(x_0))
58
Local Euler Error
Result of the Taylor series on the last chart:
$$y(x_0 + h) = \underbrace{y(x_0) + h f(x_0, y(x_0))}_{\text{Euler algorithm}} + \underbrace{\frac{1}{2!}\left.\frac{d^2 y}{dx^2}\right|_0 h^2 + \frac{1}{3!}\left.\frac{d^3 y}{dx^3}\right|_0 h^3 + \cdots}_{\text{truncation error}}$$
The first two terms are just the Euler algorithm for the
first step, where we know f(x_0, y(x_0))
The remaining terms give the local truncation error
The local truncation error for the Euler algorithm
is second order
59
Global Error
We will show that a local error of order
n has a global error of order n−1
To show this, consider the global error at
x = x_0 + kh after k algorithm steps
It is approximately k times the local error
If the local error is O(hⁿ), the approximate global
error after k steps is k·O(hⁿ) ≈ kAhⁿ
A new step size, h/r, takes kr steps to get
to the same x value
60
Global Error Concluded
Compare the error for the same x = kh with
step sizes h and h/r
$$E_{x=kh}(h) \approx kAh^n \qquad E_{x=kh}(h/r) \approx krA\left(\frac{h}{r}\right)^n$$
$$\frac{E_{x=kh}(h/r)}{E_{x=kh}(h)} \approx \frac{krA(h/r)^n}{kAh^n} = \left(\frac{1}{r}\right)^{n-1}$$
When we reduce the step size by a
factor of 1/r we reduce the error by a
factor of 1/r^{n-1}; this is the behavior of
an algorithm whose error is order n−1
61
Euler Local and Global Error
Previously showed the Euler algorithm to
have second order local error
Should have first order global error
Results for the previous Euler example at
x = 1 with different step sizes:

Step size    First step error    Final error
h = 0.1      5.17x10^-3          1.25x10^-1
h = 0.01     5.02x10^-5          1.35x10^-2
h = 0.001    5.00x10^-7          1.36x10^-3
62
Better Algorithms
Seek high accuracy with low
computational work
Could improve Euler accuracy by
cutting step size, but this is not efficient
Use other algorithms that have higher
order errors
Runge-Kutta methods typically used
This is a class of methods that use several
function evaluations per step
63
Second-order Runge-Kutta
Heun's method:
$$y_{i+1}^0 = y_i + h f(x_i, y_i) \qquad x_{i+1} = x_i + h$$
$$y_{i+1} = y_i + \frac{h}{2}\left[f(x_i, y_i) + f(x_{i+1}, y_{i+1}^0)\right]$$
Modified Euler method:
$$y_{i+1/2} = y_i + \frac{h}{2} f(x_i, y_i) \qquad x_{i+1/2} = x_i + \frac{h}{2}$$
$$y_{i+1} = y_i + h f(x_{i+1/2}, y_{i+1/2})$$
64
Fourth-order Runge-Kutta
Uses four derivative evaluations per step
$$x_{i+1} = x_i + h$$
$$k_1 = h f(x_i, y_i) \qquad k_2 = h f\left(x_i + \frac{h}{2},\, y_i + \frac{k_1}{2}\right)$$
$$k_3 = h f\left(x_i + \frac{h}{2},\, y_i + \frac{k_2}{2}\right) \qquad k_4 = h f(x_i + h,\, y_i + k_3)$$
$$y_{i+1} = y_i + \frac{k_1 + 2k_2 + 2k_3 + k_4}{6}$$
65
Comparison of Methods
Look at Euler, Heun, modified Euler and
fourth-order Runge-Kutta
Solve dy/dx = e^{−y−x} with y(0) = 1
Compare numerical values to the exact
solution y = ln(e^{y_0} + e^{−x_0} − e^{−x})
Look at the errors in the methods at x = 1
as a function of step size
Compare error propagation (increase in
error as x increases)
66
Error versus Step Size for Simple ODE Solvers
[Log-log plot of error at x = 1 versus step size h for the Euler, Heun, modified Euler, and Runge-Kutta solutions of dy/dx = e^{−y−x} with y = y_0 = 1 at x = x_0 = 0]
67
Error Propagation in Solutions of ODEs
[Plot of error versus x for 0 ≤ x ≤ 1: Euler (N = 10, 100, 1,000), Heun (N = 10, 100, 1,000), and RK4 (N = 10, 100) solutions]
68
Some Basic Concepts
A numerical method is convergent with
the solution of the ODE if the numerical
solution approaches the actual solution
as h → 0 (with an increase in numerical
precision at smaller h)
Mainly a theoretical concept; algorithms in
use are convergent
Stability refers to the ability of a
numerical algorithm to damp any errors
introduced during the solution
69
More on Stability
Finite difference equations in numerical
algorithms, when iterated, may
increase numerically without bound
Stability usually is obtained by keeping
step size h small, sometimes smaller
than the h required for accuracy
For most ODEs stability is not a
problem, but it is for stiff systems of
ODEs and for partial differential
equations
70
Euler Solution of Decaying Exponential
[Plot: Euler solutions of dy/dx = −y for y(0) = 1 to 5. Euler algorithm errors move the numerical solution of dy/dx = −y to the solution for another initial condition; the error is bounded by the decreasing exponential]
71
Euler Solution for Increasing Exponential
[Plot: Euler solutions for y(0) = 1 to 5 of the growing exponential case. Euler algorithm errors move the numerical solution to the solution for another initial condition; the error becomes large for the increasing exponential]
72
Error Control
How do we choose h?
Want to obtain a result with some
desired small global error
Can just repeat calculations with smaller
h until two results are sufficiently close
Can use algorithms that estimate error
and adjust step size during the
calculation based on the error
73
Runge-Kutta-Fehlberg
Uses two equations to compute y_{n+1};
one has O(h⁵) error, the other O(h⁶)
Requires six derivative evaluations per
step (the same evaluations are used for both
equations)
The error estimate can be used for step
size control based on an overall 5th
order error
The Cash-Karp version uses different coefficients
74
Runge-Kutta-Fehlberg II
See page 377 in Hoffman for the algorithm
Typical formula components below
y_{n+1} = y_n + (16k_1/135 + 6656k_3/12825 + ⋯)
k_3 = h f(x_n + 3h/8, y_n + 3k_1/32 + 9k_2/32)
Error = k_1/360 − 128k_3/4275 − ⋯
h_new = h_old |E_Desired/Error|^{1/5}
E_Desired is set by the user
RKF45 code by Watts and Shampine
75
Other Error Controls
Do fourth-order Runge-Kutta two times,
with step sizes different by a factor of
two and compare results
Runge-Kutta-Verner is similar to Runge-
Kutta-Fehlberg, but uses higher order
expressions
76
Systems of Equations
Any problem with one or more ODEs of
any order can be reduced to a system
of first order ODEs
Last week we reduced a system of two
second order equations to a system of
four first order equations
In this process the sum of the orders is
constant
Example from last week
77
Reduction of Order Example
Define variables y_3 = dy_1/dt and y_4 = dy_2/dt
Then dy_3/dt = d²y_1/dt² and dy_4/dt = d²y_2/dt²
The original second order equations are
$$\frac{d^2 y_1}{dt^2} + \frac{k_1 + k_2}{m_1} y_1 - \frac{k_2}{m_1} y_2 = 0 \qquad
\frac{d^2 y_2}{dt^2} - \frac{k_2}{m_2} y_1 + \frac{k_2 + k_3}{m_2} y_2 = 0$$
Have four simultaneous first-order ODEs
$$\frac{dy_1}{dt} = y_3 = f_1(x, \mathbf{y}) \qquad \frac{dy_2}{dt} = y_4 = f_2(x, \mathbf{y})$$
$$\frac{dy_3}{dt} = -\frac{k_1 + k_2}{m_1} y_1 + \frac{k_2}{m_1} y_2 = f_3(x, \mathbf{y}) \qquad
\frac{dy_4}{dt} = \frac{k_2}{m_2} y_1 - \frac{k_2 + k_3}{m_2} y_2 = f_4(x, \mathbf{y})$$
78
Solving Simultaneous ODEs
Apply the same algorithms used for single
ODEs
Must apply each step and substep to all
equations in the system
Key is consistent determination of f_i(x, y)
All y_i values in y must be available at the
same x point when computing the f_i
E.g., in Runge-Kutta we must evaluate k^{(1)}
for all equations before finding k^{(2)}
79
Runge-Kutta for ODE System
y^{(i)} is the vector of dependent variables at x = x_i
k^{(1)}, k^{(2)}, k^{(3)}, and k^{(4)} are vectors containing
intermediate Runge-Kutta results
f is a vector containing the derivatives
k^{(1)} = h f(x_i, y^{(i)})
k^{(2)} = h f(x_i + h/2, y^{(i)} + k^{(1)}/2)
k^{(3)} = h f(x_i + h/2, y^{(i)} + k^{(2)}/2)
k^{(4)} = h f(x_i + h, y^{(i)} + k^{(3)})
y^{(i+1)} = y^{(i)} + (k^{(1)} + 2k^{(2)} + 2k^{(3)} + k^{(4)})/6
80
ODE System by RK4
dy/dx = −y + z and dz/dx = y − z with
y(0) = 1 and z(0) = −1, with h = 0.1
k^{(1)y} = h[−y + z] = 0.1[−1 + (−1)] = −0.2
k^{(1)z} = h[y − z] = 0.1[1 − (−1)] = 0.2
k^{(2)y} = h[−(y + k^{(1)y}/2) + z + k^{(1)z}/2] =
0.1[−(1 + (−0.2)/2) + (−1 + 0.2/2)] = −0.18
k^{(2)z} = h[(y + k^{(1)y}/2) − (z + k^{(1)z}/2)] =
0.1[(1 + (−0.2)/2) − (−1 + 0.2/2)] = 0.18
81
ODE System by RK4 II
k^{(3)y} = h[−(y + k^{(2)y}/2) + z + k^{(2)z}/2] =
0.1[−(1 + (−0.18)/2) + (−1 + 0.18/2)] = −0.182
k^{(3)z} = h[(y + k^{(2)y}/2) − (z + k^{(2)z}/2)] =
0.1[(1 + (−0.18)/2) − (−1 + 0.18/2)] = 0.182
k^{(4)y} = h[−(y + k^{(3)y}) + z + k^{(3)z}] =
0.1[−(1 + (−0.182)) + (−1 + 0.182)] = −0.1636
k^{(4)z} = h[(y + k^{(3)y}) − (z + k^{(3)z})] =
0.1[(1 + (−0.182)) − (−1 + 0.182)] = 0.1636
82
ODE System by RK4 III
y_{i+1} = y_i + (k^{(1)y} + 2k^{(2)y} + 2k^{(3)y} + k^{(4)y})/6
= 1 + [(−0.2) + 2(−0.18) + 2(−0.182) +
(−0.1636)]/6 = 0.8187
z_{i+1} = z_i + (k^{(1)z} + 2k^{(2)z} + 2k^{(3)z} + k^{(4)z})/6
= −1 + [(0.2) + 2(0.18) + 2(0.182) +
(0.1636)]/6 = −0.8187
Continue in this fashion until desired
final x value is reached
No x dependence for f in this example
83
Numerical Software for ODEs
Usually written to solve a system of N
equations, but will work for N = 1
User has to code a subroutine or
function to compute the f array
Input variables are x and y; f is output
Some codes have one dimensional
parameter array to pass additional
information from main program into the
function that computes derivatives
84
Derivative Subroutine Example
Visual Basic code for the system of
ODEs at right is shown below
$$\frac{dy_1}{dx} = -y_1 + \sqrt{y_2} + y_3 e^{2x} \qquad \frac{dy_2}{dx} = -2 y_1^2 \qquad \frac{dy_3}{dx} = -3 y_1 y_2$$
Sub fsub( x as Double, y() as Double, _
          f() as Double )
    f(1) = -y(1) + Sqr(y(2)) + y(3)*Exp(2*x)
    f(2) = -2 * y(1)^2
    f(3) = -3 * y(1) * y(2)
End Sub
85
Derivative Subroutine Example
Fortran code for the same system of
ODEs is shown below
$$\frac{dy_1}{dx} = -y_1 + \sqrt{y_2} + y_3 e^{2x} \qquad \frac{dy_2}{dx} = -2 y_1^2 \qquad \frac{dy_3}{dx} = -3 y_1 y_2$$
subroutine fsub( x, y, f )
    real(KIND=8) x, y(*), f(*)
    f(1) = -y(1) + sqrt(y(2)) + y(3)*exp(2*x)
    f(2) = -2 * y(1)**2
    f(3) = -3 * y(1) * y(2)
end subroutine fsub
86
Derivative Subroutine Example
C++ code for the same system of
ODEs is shown below
$$\frac{dy_1}{dx} = -y_1 + \sqrt{y_2} + y_3 e^{2x} \qquad \frac{dy_2}{dx} = -2 y_1^2 \qquad \frac{dy_3}{dx} = -3 y_1 y_2$$
void fsub(double x, double y[], double f[])
{
    f[1] = -y[1] + sqrt(y[2]) + y[3]*exp(2*x);
    f[2] = -2 * y[1] * y[1];
    f[3] = -3 * y[1] * y[2];
}