
Initial Value ODE

Consider a system of nonlinear ODEs with a prescribed initial value:

y' = f(x, y(x)),   y(a) = y_0  (initial conditions),   y ∈ R^n.   (1)
Note:
1. If f ∈ C^1 then (1) has a unique solution.
2. The behavior of errors in the numerical solution of (1) is related to the behavior of the linearized equation. Let ȳ(x) be some nominal solution and Δy a perturbation. Then

y(x) = ȳ(x) + Δy(x)
y' = (ȳ + Δy)' = f(x, ȳ + Δy)
ȳ' + Δy' = f(x, ȳ) + (∂f/∂y)(x, ȳ) Δy
Δy' = (∂f/∂y)(x, ȳ) Δy = A(x) Δy   (*)

where (∂f/∂y)(x, ȳ) is the Jacobian matrix of f(x, y).
3. The Model Problem:
Assume A(x) = A (a constant in time) and that A has N distinct eigenvalues λ_j and N independent eigenvectors. Then by making the change of variables

Δy = P z,   P = [v_1 | v_2 | ... | v_N],

we can rewrite (*) in the form

z' = D z,   where D = diag(λ_1, ..., λ_N).

So the equations for z are decoupled into the form

z_j' = λ_j z_j,   j = 1, ..., N.
Scalar Model Problem:
We consider the scalar model problem

y' = λy,   y(0) = y_0,

with the exact solution y = y_0 e^{λx}.
Note: If Re(λ) > 0 solutions grow exponentially; if Re(λ) < 0 solutions decay exponentially.
[Figure: the complex λ-plane; Re(λ) < 0 (left half-plane) gives decay, Re(λ) > 0 gives growth.]
Consequence for systems of ODEs:
If (∂f/∂y)(x, y(x)) = A(x) has eigenvalues all of whose real parts are negative, then errors will decay exponentially with time. If any one eigenvalue of A(x) has a positive real part, then errors will grow exponentially with time.
Schemes to solve the scalar initial value problem:
Consider

y' = f(x, y),   y ∈ R,   y(0) = y_0.

1. The Taylor Series Method:

y(x_{n+1}) = y(x_n) + h y'(x_n) + ... + (h^r/r!) y^{(r)}(x_n) + (h^{r+1}/(r+1)!) y^{(r+1)}(ξ)

Now y' = f(x, y(x)), so

y'' = f_x(x, y) + f_y y' = f_x + f f_y.

Eg: y' = λy, y(0) = y_0:

y'' = λy' = λ^2 y, ..., y^{(r)} = λ y^{(r-1)} = ... = λ^r y

y_{n+1} = y_n + λh y_n + ... + ((λh)^r/r!) y_n + ...
        = [1 + (λh) + ... + (λh)^r/r! + ...] y_n = e^{λh} y_n

[Figure: one step of size h from (x_n, y_n) to (x_{n+1}, y_{n+1}).]
Note:
(1) By truncating the Taylor series at the r-th term, we obtain an approximation of O(h^r), but the derivative evaluation is tedious.
(2) The accuracy of a numerical scheme is determined by the number of terms of agreement with the Taylor series when the exact solution of the ODE is substituted into the difference equation.
(3) Many numerical schemes can be interpreted as giving different approximations to e^{λh} when they are applied to the model problem.
2. The Forward Euler Method, a prototype ODE solver:
Idea: Truncate the Taylor series after the linear term and avoid having to take higher derivatives.

y_{n+1} = y_n + h y'_n + O(h^2) = y_n + h f(x_n, y_n) + O(h^2)

Euler's Method:

Y_{n+1} = Y_n + h f(x_n, Y_n),   Y_0 = y_0,

where Y_n ≈ y(x_n). This is a difference equation.
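The scheme above can be sketched in a few lines. This is a minimal Python sketch (the function name `forward_euler` is illustrative, not from the notes), applied to the model problem y' = -y.

```python
import math

def forward_euler(f, x0, y0, h, n_steps):
    """Advance Y_{n+1} = Y_n + h*f(x_n, Y_n) for n_steps steps from (x0, y0)."""
    x, y = x0, y0
    for _ in range(n_steps):
        y = y + h * f(x, y)
        x = x + h
    return y

# Model problem y' = -y, y(0) = 1, exact solution e^{-x}; integrate to x = 1.
approx = forward_euler(lambda x, y: -y, 0.0, 1.0, 0.01, 100)
```

With h = 0.01 the computed value is within a few parts in a thousand of e^{-1}, consistent with the O(h) global accuracy shown below.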
Alternative Derivation 1: Using the forward difference approximation to y':

(y_{i+1} - y_i)/h = y'_i + Mh   →   (Y_{i+1} - Y_i)/h = f(x_i, Y_i),

where M depends on y''.
[Figure: one Euler step of size h from (x_i, Y_i) along the tangent to (x_{i+1}, Y_{i+1}), compared with the exact curve y(x).]
Alternative Derivation 2: Using the integral form of y' = f(x, y(x)):

y(x_{n+1}) = y(x_n) + ∫_{x_n}^{x_{n+1}} f(s, y(s)) ds

Y_{n+1} = Y_n + h f(x_n, Y_n)   (left-hand approximate integration)

[Figure: f(x, y(x)) approximated by its left-endpoint value on [x_n, x_{n+1}].]
NOTE:
1. Forward Euler (FE) is explicit because all the information needed to proceed from the nth step to the (n+1)st step is known. Contrast this with Y_{n+1} = Y_n + h f(x_{n+1}, Y_{n+1}), which involves solving a nonlinear equation at each time step.
2. The Euler method involves a difference equation that can be thought of as a model for y' = f(x, y).
Truncation error:
The truncation error is the term that remains when you plug the exact solution of y' = f(x, y) into the difference scheme.
Eg: For the Euler method (Y_{n+1} - Y_n)/h = f(x_n, Y_n) the truncation error is:

T_n = (y_{n+1} - y_n)/h - f(x_n, y_n)
    = (y_n + h y'_n + (h^2/2) y''_n + ... - y_n)/h - f(x_n, y_n) = O(h)
Consistency:
A difference equation is consistent with a differential equation if the truncation error → 0 as h → 0.
Note: Consistency means that as h → 0 the difference equation tends to the differential equation you want to solve, and not some other differential equation.
Convergence:
A difference method converges with order p if, for h sufficiently small,

|Y_n(h) - y(nh)| ≤ C(b - a) h^p   for nh ≤ (b - a).
Theorem: The FE method converges with first order accuracy on [a, b].
Proof: Let e_n = Y_n - y_n and let L = max |∂f/∂y|, with f ∈ C^1 on [a, b].

Y_{n+1} = Y_n + h f(x_n, Y_n)
y_{n+1} = y_n + h f(x_n, y_n) + (h^2/2) y''(ξ)

Subtracting:

e_{n+1} = e_n + h{f(x_n, Y_n) - f(x_n, y_n)} - (h^2/2) y''(ξ)

Since f ∈ C^1, using the MVT,

|f(x_n, Y_n) - f(x_n, y_n)| = |f_y(x_n, η)| |Y_n - y_n| ≤ L|e_n|.

Also, since y'' = f_x + f f_y ∈ C^0[a, b], there exists M with |y''(ξ)|/2 ≤ M on [a, b]. Thus

|e_{n+1}| ≤ (1 + hL)|e_n| + M h^2,   e_0 = 0,

a difference inequality for the error (M h^2 = local truncation error).
Consider the difference equation:

E_{n+1} = (1 + hL) E_n + M h^2,   E_0 = 0.   (**)

Claim: |e_n| ≤ E_n.
PF: Induction. n = 0 is trivial since E_0 = 0 = e_0. Then

|e_{n+1}| ≤ (1 + hL)|e_n| + M h^2 ≤ (1 + hL)E_n + M h^2 = E_{n+1}.  □

Solution of (**):
Homogeneous: E_n = A(1 + hL)^n.
Particular: E_n = D with D = (1 + hL)D + M h^2, so D = -M h^2/(hL) = -(M/L)h.
Hence E_n = A(1 + hL)^n - Mh/L, and E_0 = 0 = A - Mh/L gives

E_n = (Mh/L){(1 + hL)^n - 1}.

Therefore, using 1 + x ≤ e^x (valid for all x, since g(x) = e^x - 1 - x has a minimum of 0 at x = 0),

|e_n| ≤ (Mh/L){(1 + hL)^n - 1} ≤ (Mh/L){e^{hLn} - 1},

and so

|e_n| ≤ (M/L){e^{L(b-a)} - 1} h = O(h),   the global truncation error.  □
Notes:
1. Local truncation error (LTE): O(h^2); global truncation error (GTE): O(h). Essentially N · O(h^2) = O(h), since N = O(1/h) steps are taken.
[Figure: one step's LTE (y_{n+1} vs. Y_{n+1} from the same starting point) versus the accumulated GTE.]
2. Since we are primarily interested in the GTE, many authors only define the truncation error as was done above and refer to it simply as the truncation error.
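The first-order convergence just proved can be checked numerically: halving h should roughly halve the global error. A minimal sketch (the helper `forward_euler` is illustrative):

```python
import math

def forward_euler(f, x0, y0, h, n):
    x, y = x0, y0
    for _ in range(n):
        y, x = y + h * f(x, y), x + h
    return y

# Global error at x = 1 for y' = -y, y(0) = 1, with successively halved h.
errs = []
for n in (100, 200, 400):
    h = 1.0 / n
    errs.append(abs(forward_euler(lambda x, y: -y, 0.0, 1.0, h, n) - math.exp(-1)))
ratios = [errs[i] / errs[i + 1] for i in range(2)]
```

The ratios come out close to 2, i.e. the GTE scales like h.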
How about a general convergence theory?
We don't want to have to repeat the above theorem for each new scheme. The key elements of a convergence theory, to prove convergence of the solution of a difference scheme to the solution of the initial value problem y' = f(x, y), y(0) = y_0, are:
Consistency - the truncation error is small enough that as h → 0 the difference equation tends to the differential equation you want to solve and not some other equation.
Stability - roundoff errors do not grow as the solution evolves.
Eg: Consider the IVP y' = -y, y(0) = 1, with the solution y = e^{-x}, and the difference scheme

Y_{n+2} - 2Y_{n+1} + Y_n = (h/2)[f(x_{n+2}, Y_{n+2}) - f(x_n, Y_n)].

For this problem (f = -y, or more generally f = λy)

Y_{n+2} - 2Y_{n+1} + Y_n = (λh/2)[Y_{n+2} - Y_n],   with IC Y_0 = 1, Y_1 = 1.

Plug the exact solution of y' = λy into the scheme:

T_n = (y_{n+2} - 2y_{n+1} + y_n)/h^2 - (λ/2)(y_{n+2} - y_n)/h
    = [y_n + 2hy'_n + ((2h)^2/2)y''_n + O(h^3) - 2(y_n + hy'_n + (h^2/2)y''_n + O(h^3)) + y_n]/h^2
      - (λ/2)[y_n + 2hy'_n + ((2h)^2/2)y''_n + ... - y_n]/h
    = (y''_n - λy'_n) + O(h) = O(h)   (since y'' = λy' for the exact solution),

so the method is consistent.
Let us see how small perturbations to the solution will propagate.
1) We look for a solution of the homogeneous difference equation (*): Y_n = ζ^n gives

(1 - λh/2)ζ^2 - 2ζ + (1 + λh/2) = 0

with roots

ζ = [1 ± (1/2)√(4 - 4(1 - (λh/2)^2))]/(1 - λh/2) = 1  or  (1 + λh/2)/(1 - λh/2).
2) Let us assume that the perturbations take the form Y_0 = ε, Y_1 = ε, and that

Y_{n+2} - 2Y_{n+1} + Y_n = (λh/2)[Y_{n+2} - Y_n] + εh.

Let us see how these perturbations will propagate.
Particular solution: assume Y_n = C, a constant:

C - 2C + C = (λh/2)[C - C] + εh.

We note that a constant C is a solution of the homogeneous equation, so we look for a particular solution of the form Y_n = nc:

(n + 2)c - 2(n + 1)c + nc = (λh/2)[(n + 2)c - nc] + εh   ⇒   λhc + εh = 0   ⇒   c = -ε/λ.

So Y_n = -nε/λ is a particular solution. General solution:

Y_n = A + B[(1 + λh/2)/(1 - λh/2)]^n - nε/λ

Y_0 = A + B = ε
Y_1 = A + B(1 + λh/2)/(1 - λh/2) - ε/λ = ε

Subtracting, B[(1 + λh/2)/(1 - λh/2) - 1] = ε/λ, i.e. B · λh/(1 - λh/2) = ε/λ, so

B ≈ ε/(λ^2 h),   A = ε - B ≈ ε(1 - 1/(λ^2 h)).

Now as h → 0 and n → ∞ in such a way that nh = x is constant, the coefficient B ~ ε/(λ^2 h) → ∞ and we observe that Y_n → ∞: an arbitrarily small perturbation destroys the numerical solution.
Eg: Stability: Consider the model problem

y' = λy,   y(0) = 1,   with y = e^{λx},

and consider the solution generated by the Euler scheme Y_{n+1} - Y_n = λhY_n. Consider perturbations of the form Y_0 = 1 + ε and look at

Y_{n+1} - Y_n = λhY_n + εh,   Y_0 = ε   (subtract out the exact solution of the difference equation).

Homogeneous equation: Y_n = ζ^n, ζ = (1 + λh).
Particular solution: Y_n = c: c - c = λhc + εh ⇒ c = -ε/λ.
General solution:

Y_n = A(1 + λh)^n - ε/λ;   Y_0 = A - ε/λ = ε ⇒ A = ε(1 + 1/λ).

If λ < 0 and h → 0, n → ∞ in such a way that hn = x, a constant, then

Y_n = A(1 - x|λ|/n)^n - ε/λ
    = A exp{ -x|λ| · [ln(1 - x|λ|/n)]/(-x|λ|/n) } - ε/λ
    → ε(1 + 1/λ) e^{-x|λ|} - ε/λ   as n → ∞,

which is bounded.
Zero Stability: A difference scheme for which perturbations remain bounded in the limit h → 0 is said to be 0-stable.
You can check 0-stability of an N-step method

Σ_{k=0}^{N} a_k y_{j+k} = h Σ_{k=0}^{N} b_k f_{j+k}   (*)

by determining the roots of the polynomial P_N(ζ) = Σ_{k=0}^{N} a_k ζ^k. If the roots of P_N(ζ) are such that |ζ| ≤ 1, and those for which |ζ| = 1 are simple, then (*) is 0-stable.
Examples:
The Euler scheme is 0-stable, since P_1(ζ) = ζ - 1 = 0 has only one root, on the unit circle.
The second order scheme Y_{n+2} - 2Y_{n+1} + Y_n = (λh/2)(Y_{n+2} - Y_n) is not 0-stable, since P_2(ζ) = (ζ - 1)^2 = 0 has ζ = 1 as a double root.
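The root condition can be checked mechanically. This is a minimal sketch (the function `is_zero_stable` and its tolerances are illustrative, not from the notes); it assumes NumPy is available for polynomial root finding.

```python
import numpy as np

def is_zero_stable(a, tol=1e-6):
    """a = [a_0, ..., a_N]: coefficients of P_N(zeta) = sum a_k zeta^k.
    Roots must satisfy |r| <= 1, and roots with |r| = 1 must be simple."""
    roots = np.roots(a[::-1])  # np.roots expects highest-degree coefficient first
    for r in roots:
        if abs(r) > 1 + tol:
            return False
        if abs(r) > 1 - tol:
            # root is (numerically) on the unit circle: must be simple
            if np.sum(np.abs(roots - r) < tol) > 1:
                return False
    return True

euler = [-1, 1]        # P(z) = z - 1: 0-stable
bad = [1, -2, 1]       # P(z) = (z - 1)^2: double root at 1, not 0-stable
```

For the two examples above the check confirms the hand analysis: Euler passes, the second order scheme fails.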
Theorem (Dahlquist): Consistency + 0-stability ⇒ convergence. In particular, a 0-stable consistent method converges with the order of its truncation error.
Problem with practical use of the convergence theorem:
Eg: Consider the model problem y' = λy, y(0) = 1, y(x) = e^{λx}.
Euler solution:

Y_n = Y_{n-1} + λhY_{n-1},   Y_0 = 1
    = (1 + λh)Y_{n-1}        (multiplication factor G = 1 + λh)
    = (1 + λh)^n Y_0         (*)

Now let n → ∞, h → 0 in such a way that nh = X, a constant; then

lim_{n→∞} (1 + λh)^n = lim_{n→∞} exp{ λX · ln(1 + λX/n)/(λX/n) } = e^{λX},

so the method converges as h → 0.
But in practice h ≠ 0. Say λ = -10 and h = 1:

Y_n = (-9)^n Y_0,

which blows up and oscillates!
From (*) we observe that the solution will decay provided |G(λh)| = |1 + λh| < 1.
If λ is real, then -1 < 1 + λh < 1 ⇒ -2 < λh < 0.
Absolute Stability:
Recall: If Re(λ) < 0 then exact solutions of y' = λy decay with time, i.e., y_EX(x) = y_0 e^{-|λ|x} for real λ < 0.
Requirement of a difference scheme - absolute stability:
If Re(λ) < 0 then ideally we would like |G(λh)| < 1 for all h. A method that satisfies this requirement is called A-stable.
Is FE A-stable? No.
Eg. λ = -10, h = 1: |Y_n| → ∞. But the news is not all bad: if Re(λ) < 0 then there exists a range of values of h for which |G(λh)| < 1.
Stability Regions:
The stability region is the set of points z = λh in the complex plane for which |G(z)| < 1.
Stability region for FE:
For what values of z ∈ C is |G(z)| = |1 + z| < 1?
Method 1. Write z = α + iβ: |1 + z|^2 = (1 + α)^2 + β^2 < 1, the interior of the disk of radius 1 centered at z = -1 (crossing the real axis at 0 and -2).
Method 2. Using a conformal map: G = 1 + z ⇒ z = G - 1, so the unit disk |G| < 1 maps to the disk |z + 1| < 1.
Usefully stable: A method is usefully stable for a particular problem with eigenvalues λ_i and choice of timestep h if hλ_i is in the stability region of the method for all λ_i.
Eg: Forward Euler, y' = λy, y_0 = 1, y = e^{λx}:
1. λ = -10: for stability we require -2 < λh < 0, i.e. -2 < -10h < 0 ⇒ h < 1/5.
2. λ = -2(1 + i) = 2√2 e^{i3π/4}: the point on the boundary of the disk along this ray is λh = -(1 + i), so we require h < 1/2.
3. λ = i: for no choice of h can λh = ih be brought into the stability region, so Euler's method will not be useful for oscillatory systems y' = iy, y = e^{ix}, which occur in models of wave phenomena.
Generalization of Euler's method:
FE extends in several directions:
- Higher order one-step difference methods (Improved Euler, Trapezoidal Scheme, Backward Euler, θ-method, Modified Euler): different approximations to y' = f(x, y(x)).
- Multistep methods (Leapfrog, Adams-Moulton, Adams-Bashforth, Backward Difference): higher order difference equations.
- Predictor-Corrector methods (e.g. iterated Improved Euler).
- Runge-Kutta (RK) methods: a different philosophy.
Schemes based on the Trapezoidal Rule:
Using the integral form of y' = f(x, y(x)):

y(x_{n+1}) = y(x_n) + ∫_{x_n}^{x_{n+1}} f(x, y(x)) dx
           = y(x_n) + (h/2)[f(x_n, y(x_n)) + f(x_{n+1}, y(x_{n+1}))] + O(h^3)   (*)

There are a number of different ways we can choose to exploit (*).
(1) The Improved Euler / Heun's method (RK2), explicit:

Predictor:  Ỹ_{n+1} = Y_n + hf(x_n, Y_n)
Corrector:  Y_{n+1} = Y_n + (h/2)[f(x_n, Y_n) + f(x_{n+1}, Ỹ_{n+1})]

A 2-stage scheme; second order accurate; explicit.
[Figure: predictor slope at (x_n, Y_n); the corrected step uses the average of the slopes at x_n and x_{n+1}.]
If we keep replacing Ỹ_{n+1} by Y_{n+1} until convergence, we obtain a predictor-corrector method.
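The two-stage step above can be sketched directly. A minimal Python sketch (the name `heun_step` is illustrative), applied to the model problem y' = -y:

```python
import math

def heun_step(f, x, y, h):
    """Improved Euler (RK2): predict with forward Euler, correct with the trapezoid."""
    y_pred = y + h * f(x, y)                            # predictor
    return y + 0.5 * h * (f(x, y) + f(x + h, y_pred))   # corrector

# y' = -y, y(0) = 1: integrate to x = 1 with h = 0.1.
y, x, h = 1.0, 0.0, 0.1
for _ in range(10):
    y = heun_step(lambda x, y: -y, x, y, h)
    x += h
```

The error at x = 1 is of order h^2, noticeably smaller than forward Euler's at the same step size.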
Truncation Error for the Improved Euler Scheme:
General problem: y' = f(x, y), y(0) = y_0.

Y_{k+1} = Y_k + (h/2)[f(x_k, Y_k) + f(x_{k+1}, Y_k + hf(x_k, Y_k))]

T_k(h) = (y_{k+1} - y_k)/h - (1/2)[f(x_k, y_k) + f(x_{k+1}, y_k + hf(x_k, y_k))]
       = [y_k + hy'_k + (h^2/2)y''_k + (h^3/3!)y'''_k + ... - y_k]/h
         - (1/2)[2f(x_k, y_k) + h(f_x + f_y f)|_{x_k} + O(h^2)]
       = (y'_k - f_k) + (h/2)[y''_k - (f_x + f_y f)|_{x_k}] + O(h^2)

T_k(h) = O(h^2), since y' = f and y'' = f_x + f f_y along the exact solution.
On the model problem it is simpler: y' = λy, y(0) = y_0.

T_k(h) = (y_{k+1} - y_k)/h - (1/2)[λy_k + λ(y_k + λhy_k)]
       = [y_k + hy'_k + (h^2/2)y''_k + (h^3/3!)y'''_k + ... - y_k]/h - [λy_k + (hλ^2/2)y_k]
       = (y'_k - λy_k) + (h/2)(y''_k - λ^2 y_k) + O(h^2)
       = O(h^2).
(2) The Trapezoidal (Crank-Nicolson) Scheme, implicit:

Y_{n+1} = Y_n + (h/2)[f(x_n, Y_n) + f(x_{n+1}, Y_{n+1})]

The value Y_{n+1} on the right-hand side is not known: the scheme is implicit. We can solve this nonlinear equation (at each timestep) using Newton's method. Let

g(Y_{n+1}) = Y_{n+1} - Y_n - (h/2)[f(x_n, Y_n) + f(x_{n+1}, Y_{n+1})]

and iterate

Y_{n+1}^{(k+1)} = Y_{n+1}^{(k)} - g(Y_{n+1}^{(k)}) / g'(Y_{n+1}^{(k)}).
The scheme is second order accurate:

f(x_{n+1}, y_{n+1}) = f(x_n + h, y_n + hy'_n + (h^2/2)y''_n + ...)

T_n(h) = (y_{n+1} - y_n)/h - (1/2)[f(x_n, y_n) + f(x_{n+1}, y_{n+1})]
       = [y_n + hy'_n + (h^2/2)y''_n + (h^3/3!)y'''_n + ... - y_n]/h
         - (1/2)[f_n + f_n + h(f_x + f_y f)|_{x_n} + O(h^2)]
       = (y'_n - f_n) + (h/2)[y''_n - (f_x + f f_y)|_{x_n}] + O(h^2) = O(h^2).
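The Newton iteration for the trapezoidal step can be sketched for a scalar problem. This is a minimal sketch (names `trapezoidal_step`, `dfdy` are illustrative; the user supplies ∂f/∂y):

```python
def trapezoidal_step(f, dfdy, x, y, h, tol=1e-12, max_iter=20):
    """One trapezoidal step: solve g(Y) = Y - y - (h/2)[f(x,y) + f(x+h,Y)] = 0 by Newton."""
    Y = y + h * f(x, y)  # forward Euler predictor as the initial guess
    for _ in range(max_iter):
        g = Y - y - 0.5 * h * (f(x, y) + f(x + h, Y))
        gp = 1.0 - 0.5 * h * dfdy(x + h, Y)  # g'(Y)
        step = g / gp
        Y -= step
        if abs(step) < tol:
            break
    return Y

# y' = -y: one step from y(0) = 1 with h = 0.1.
y1 = trapezoidal_step(lambda x, y: -y, lambda x, y: -1.0, 0.0, 1.0, 0.1)
```

For this linear f, Newton converges in a single iteration to (1 - h/2)/(1 + h/2)·... precisely, Y_1 = (1 - 0.05)/(1 + 0.05), matching the growth factor derived below.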
A quicker method to check the truncation error (useful for multistep methods but not very useful for multistage RK methods): use y'_n = f_n and y'_{n+1} = f_{n+1}.

T_n(h) = (y_{n+1} - y_n)/h - (1/2){f_n + f_{n+1}}
       = [y_n + hy'_n + (h^2/2)y''_n + (h^3/6)y'''_n + ... - y_n]/h - (1/2){y'_n + y'_{n+1}}
       = y'_n + (h/2)y''_n + (h^2/6)y'''_n + ... - (1/2){y'_n + y'_n + hy''_n + (h^2/2)y'''_n + ...}
       = -(h^2/12) y'''_n + ...

The coefficient -1/12 is the error constant.
Note: Improved Euler and the Trapezoidal Scheme are both second order accurate, but Improved Euler is explicit while the Trapezoidal Scheme is implicit. Then why would we bother using the Trapezoidal Scheme if we have to solve a nonlinear equation at each timestep?
Answer: Stability.
Stability Region of the Improved Euler (RK2):
Consider the model problem y' = λy, y(0) = 1, y = e^{λx}.

Y_{n+1} = Y_n + (λh/2)[Y_n + {Y_n + λhY_n}]
Y_{n+1} = [1 + λh + (λh)^2/2] Y_n ≈ e^{λh} Y_n

(This agrees with the Taylor series of e^{λh} through the (λh)^2 term. Be careful if you try to infer the error by looking at G(z) = 1 + z + z^2/2 alone, since we would be ignoring the time-stepping part.)
The growth factor is G(λh) = 1 + λh + (λh)^2/2. For stability we require |G(λh)| < 1.
Stability region by a conformal map:
Let G = 1 + z + z^2/2, z = λh. Solving the quadratic for z:

z = -1 ± √(2G - 1).

[Figure: image of the unit circle |G| = 1 under this map, i.e. the RK2 stability-region boundary; it passes through z = 0 and z = -2 (from G = 1) and through z = -1 ± i√3 (from G = -1).]
Stability Region of the Trapezoidal Scheme: Y_{k+1} = Y_k + (h/2)[f(x_k, Y_k) + f(x_{k+1}, Y_{k+1})].
Consider the model problem y' = λy, y(0) = y_0:

Y_{k+1} = Y_k + (λh/2)[Y_k + Y_{k+1}]
(1 - λh/2) Y_{k+1} = (1 + λh/2) Y_k
Y_{k+1} = [(1 + λh/2)/(1 - λh/2)] Y_k = G(λh) Y_k,   where G(λh) = (1 + λh/2)/(1 - λh/2).

Note: e^z ≈ G(z) = (1 + z/2)/(1 - z/2) is the (1, 1) Padé approximation of e^z.
In general,

e^z ≈ (1 + a_1 z + ... + a_m z^m)/(1 + b_1 z + ... + b_n z^n)

is the (m, n) Padé approximant.
Stability: For stability we require that |G(λh)| < 1.

G(z) = (1 + z/2)/(1 - z/2) = (2 + z)/(2 - z)
1 > |G(z)| = |z + 2|/|z - 2|   ⇔   |z - 2| > |z + 2|,

i.e. z is closer to -2 than to +2: exactly the left half-plane Re(z) < 0.
The Trapezoidal Rule is A-stable, but it is more expensive to compute with since it is implicit.
Alternatively, using a conformal map:

G = (2 + z)/(2 - z)   ⇒   (2 - z)G = 2 + z   ⇒   2(G - 1) = z(1 + G)   ⇒   z = 2(G - 1)/(G + 1).
[Figure: the map z = 2(G - 1)/(G + 1) takes the unit disk |G| < 1 to the left half-plane: G = 0 → z = -2; G = 1 → z = 0; G = i → z = 2(i - 1)/(i + 1) = 2i; G = -i → z = -2i; G → -1 sends z to infinity along the imaginary axis.]
General Implicit θ-Method:

Y_{n+1} = Y_n + h[(1 - θ)f(x_n, Y_n) + θ f(x_{n+1}, Y_{n+1})]

θ = 0:   Forward Euler / Explicit Euler,  G = 1 + z,               the (1, 0) Padé approx. to e^z
θ = 1/2: Trapezoidal Rule,                G = (1 + z/2)/(1 - z/2), the (1, 1) Padé approx. to e^z
θ = 1:   Backward Euler / Implicit Euler, G = 1/(1 - z),           the (0, 1) Padé approx. to e^z

[Figure: stability regions of the θ-method: for θ < 1/2 a bounded disk in the left half-plane (radius 1/(1 - 2θ), FE at θ = 0); for θ = 1/2 the left half-plane; for θ > 1/2 the exterior of a disk in the right half-plane (BE at θ = 1).]
Truncation Error: Let y'_n = f(x_n, y_n).

T_n(h) = (y_{n+1} - y_n)/h - [(1 - θ)f(x_n, y_n) + θ f(x_{n+1}, y_{n+1})]
       = [y_n + hy'_n + (h^2/2)y''_n + (h^3/3!)y'''_n + ... - y_n]/h
         - [(1 - θ)f_n + θ(f_n + h(f_x + f f_y)|_{x_n} + O(h^2))]
       = (y'_n - f_n) + (h/2)(y''_n - 2θ(f_x + f f_y)|_{x_n}) + O(h^2)
       = O(h)   if θ ≠ 1/2;   O(h^2)   if θ = 1/2.

Note:
- An explicit method cannot be A-stable.
- The order of an A-stable implicit method cannot exceed 2.
- The second order A-stable implicit method with the smallest error constant is the trapezoidal rule.
Looks like the TR is a winner, but there is an important class of problems for which TR gives poor results: stiff systems.
Stiff Systems:
Example:

(x, y)' = [ -1   0 ] (x, y)ᵀ      ⇒   x = e^{-t} x_0,   y = e^{-100t} y_0.
          [  0 -100 ]

[Figure: both components decay, but e^{-100t} decays far faster than e^{-t}.]
FE: If we were to use the FE method in the useful regime we would require -2 < hλ_k < 0:

λ_1 = -1    ⇒ h < 2
λ_2 = -100  ⇒ h < 1/50

We do not particularly care about y, since it decays to zero very rapidly, but we are more interested in x, which persists much longer. But to compute the system stably we would need very small time-steps: bad news; it will take forever.
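The constraint is easy to see numerically. A minimal sketch of forward Euler on this stiff pair (the helper names are illustrative): with h below 1/50 both components decay, while just beyond it the fast mode explodes even though its exact solution is negligible.

```python
# Forward Euler on the stiff pair x' = -x, y' = -100 y.
def fe_step(state, h):
    x, y = state
    return (x - h * x, y - 100 * h * y)

def run(h, n):
    s = (1.0, 1.0)
    for _ in range(n):
        s = fe_step(s, h)
    return s

stable = run(0.015, 100)    # h < 1/50: both components decay
unstable = run(0.05, 100)   # h > 1/50: fast mode factor (1 - 100h) = -4 blows up
```

The slow component is accurate in both runs at early times; only the stability of the fast mode dictates the step size.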
What about using the Trapezoidal Rule so we don't have to worry about the timestep?
Say Re(λ) < 0 and let us look at G(z) = (1 + z/2)/(1 - z/2) in the case Re(λh) → -∞. Let z = α + iβ and let β be fixed:

|G(z)| = |1 + z/2|/|1 - z/2| = √[(1 + α/2)^2 + (β/2)^2] / √[(1 - α/2)^2 + (β/2)^2] → 1   as α → -∞.

The solution will oscillate but will not decay! But e^z, which G(z) is supposed to approximate, is such that e^z → 0 as Re(z) → -∞.
L-Stability: A numerical method for which G(z) → 0 as Re(z) → -∞ is said to be L-stable, or to have strong decay.
Example of an L-stable method, the Backward Euler (BE) scheme:

Y_{n+1} = Y_n + hf(x_{n+1}, Y_{n+1})

For the model problem:

Y_{n+1} = Y_n + λhY_{n+1}   ⇒   Y_{n+1} = Y_n/(1 - λh) = G(λh)Y_n,   G(z) = 1/(1 - z),   z = 1 - 1/G.

The map z = 1 - 1/G takes the unit disk |G| < 1 to the exterior of the disk |z - 1| < 1, so the BE stability region is the whole plane outside that disk. Moreover G(z) → 0 as Re(z) → -∞, so BE is L-stable.
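Strong decay is visible immediately in computation. A minimal sketch (the function name is illustrative) of BE on y' = λy with a step far beyond forward Euler's limit:

```python
# Backward Euler on y' = lam*y: Y_{n+1} = Y_n / (1 - h*lam).
# Unconditionally stable for Re(lam) < 0; strong decay for large |lam|.
def backward_euler_scalar(lam, y0, h, n):
    y = y0
    for _ in range(n):
        y = y / (1 - h * lam)
    return y

big_step = backward_euler_scalar(-100.0, 1.0, 0.05, 100)  # h*lam = -5, far beyond FE's limit
```

The per-step factor is 1/6, so the numerical solution decays rapidly instead of oscillating (TR) or exploding (FE).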
16
G
1/G
z = 1 1/G
1
1
2
17
3.2 Runge-Kutta methods: multistage one-step methods
These can use more than one function evaluation (i.e., of f) per timestep.
Idea: use a weighted average of gradients over the interval [x_k, x_{k+1}] to achieve greater accuracy when stepping from x_k to x_{k+1}.
RK2: Runge-Kutta methods of order 2.
Assume

y_{k+1} = y_k + ahf(x_k, y_k) + bhf(x_k + αh, y_k + βhf(x_k, y_k))
        = y_k + (a + b)hf(x_k, y_k) + bh^2(αf_x + βf f_y)|_k
          + bh^3(α^2/2 f_xx + αβ f f_xy + β^2/2 f^2 f_yy)|_k + O(h^4)

Now the Taylor series:

y_{k+1} = y_k + hf_k + (h^2/2)(f_x + f f_y)|_k
          + (h^3/3!)(f_xx + 2f f_xy + f_x f_y + f f_y^2 + f^2 f_yy)|_k + ...

To agree with the TS up to order 2 we require

a + b = 1,   bα = 1/2 = bβ   ⇒   T_n(h) = O(h^2).

(I). a = 1/2, b = 1/2 ⇒ α = β = 1, which is just the improved Euler method.
Convenient form:

m_1 = f(x_k, y_k),   m_2 = f(x_{k+1}, y_k + hm_1)
y_{k+1} = y_k + (h/2)(m_1 + m_2).
(II). a = 0, b = 1, α = β = 1/2: the Modified Euler Method.

y_{k+1} = y_k + hf(x_k + h/2, y_k + (h/2) f(x_k, y_k))

[Figure: the slope is sampled at the midpoint x_k + h/2.]
Convenient form:

m_1 = f(x_k, y_k)
m_2 = f(x_k + h/2, y_k + (h/2) m_1)
y_{k+1} = y_k + hm_2
RK3:

Y_{k+1} = Y_k + (h/6)(m_1 + 4m_3 + m_2) + O(h^4)

m_1 = f(x_k, Y_k)
m_2 = f(x_k + h, Y_k + hm_1)
m_3 = f(x_k + h/2, Y_k + (h/4)(m_1 + m_2))
Demonstration that the method is O(h^3) using the model problem y' = λy:

y_{k+1} = y_k + (h/6){ λy_k + 4λ[y_k + (h/4)(λy_k + λ(y_k + λhy_k))] + λ(y_k + λhy_k) }
        = y_k + (h/6){ λ(y_k + 4y_k + y_k) + 3hλ^2 y_k + h^2 λ^3 y_k }
        = [1 + (λh) + (λh)^2/2 + (λh)^3/6] y_k.
RK4:

y_{k+1} = y_k + (h/6)[m_1 + 2m_2 + 2m_3 + m_4] + O(h^5)

where

m_1 = f(x_k, y_k)
m_2 = f(x_k + h/2, y_k + (m_1/2)h)
m_3 = f(x_k + h/2, y_k + (m_2/2)h)
m_4 = f(x_k + h, y_k + hm_3)

This agrees with the TS up to O(h^4): T_n(h) = O(h^4).
[Figure: the four slopes m_1, ..., m_4 sampled at x_k, at x_k + h/2 (twice), and at x_{k+1}.]
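The four-stage step translates directly to code. A minimal sketch (the name `rk4_step` is illustrative), again on y' = -y:

```python
import math

def rk4_step(f, x, y, h):
    """Classical RK4: weighted average of four slope samples."""
    m1 = f(x, y)
    m2 = f(x + h / 2, y + h / 2 * m1)
    m3 = f(x + h / 2, y + h / 2 * m2)
    m4 = f(x + h, y + h * m3)
    return y + h / 6 * (m1 + 2 * m2 + 2 * m3 + m4)

# y' = -y, y(0) = 1: integrate to x = 1 with h = 0.1.
y, x, h = 1.0, 0.0, 0.1
for _ in range(10):
    y = rk4_step(lambda x, y: -y, x, y, h)
    x += h
```

At h = 0.1 the error at x = 1 is already below 10^-6, reflecting the O(h^4) global accuracy.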
Stability regions for RK methods:
For an RK method of order P applied to y' = λy,

y_{k+1} = [1 + λh + (λh)^2/2 + ... + (λh)^P/P!] y_k,

so G = 1 + (λh) + ... + (λh)^P/P!. Writing λh = re^{iθ}, r = h|λ|, the boundary |G| = 1 is

|1 + re^{iθ} + (r^2/2)e^{i2θ} + ... + (r^P/P!)e^{iPθ}| = 1.

[Figure: stability regions of RK1-RK4 in the λh-plane; the negative real-axis intercepts are -2 (RK1, RK2), -2.51 (RK3), -2.78 (RK4), and the RK3/RK4 regions include a portion of the imaginary axis.]
Note: For RK methods the order is not simply equal to the number of function evaluations:

# of Function Evaluations | Order of Method
            2             |       2
            3             |       3
            4             |       4
            5             |       4
            6             |       5
            7             |       6
          n ≥ 8           |     ≤ n - 2
Multistep Methods
Idea: Use more previous timesteps to get a better approximation, rather than more gradient evaluations per timestep.
Because we are using more than one timestep, we end up with a higher order difference equation as our model of the first order ODE y' = f(x, y). We have to be careful that the additional (sometimes spurious) solutions do not end up corrupting the numerical solution.
General Multistep Method:

Σ_{k=0}^{N} a_k y_{j+k} = h Σ_{k=0}^{N} b_k f_{j+k}

Define the corresponding characteristic polynomials P_N(ζ) and Q_N(ζ) as follows:

P_N(ζ) = Σ_{k=0}^{N} a_k ζ^k   and   Q_N(ζ) = Σ_{k=0}^{N} b_k ζ^k.

Consistency: It can be shown (exercise) that the method is consistent if and only if P_N(1) = 0 and P'_N(1) = Q_N(1).
0-stability: The method is said to be 0-stable provided the roots ζ_k of P_N(ζ) = 0 are either such that |ζ_k| < 1, or |ζ_k| = 1 in which case the roots are simple.
Eg: The Leapfrog Method:
Idea: Use central differences to approximate the first derivative, rather than the forward/backward difference schemes used in Euler's methods and the multistage methods. We obtain the leapfrog scheme:

Y_{n+1} = Y_{n-1} + 2hf(x_n, Y_n),   Y_0 = y_0 (with Y_1 from a one-step starter).

As usual, a Taylor series expansion shows that the truncation error for this method is O(h^2).
Perturbation argument for leapfrog:

y_{n+1} = y_{n-1} + 2hf(x_n, y_n),   f = λy

Try y_n = G^n:

G^2 - 2(λh)G - 1 = 0.

Writing ε = λh, G^2 = 1 + 2εG, so G = ±(1 + 2εG)^{1/2} = ±[1 + εG - (1/2)(εG)^2 + ...]. Expanding G = G_0 + εG_1 + ... gives the two roots

G ≈ 1 + ε   and   G ≈ -1 + ε.

On the imaginary axis, setting y_n = e^{iθn}:

e^{iθ(n+1)} - 2λh e^{iθn} - e^{iθ(n-1)} = 0   ⇒   λh = (e^{iθ} - e^{-iθ})/2 = i sin θ.
[Figure: roots of the leapfrog characteristic polynomial: for Re(λ) > 0, unstable; for Re(λ) < 0, unstable; for Re(λ) = 0, could be stable provided h|λ| is small enough.]
In fact G_1 G_2 = -1, so for stability both G_1 and G_2 must lie on the unit circle: G_1 = e^{iθ}, G_2 = -e^{-iθ}.

G^2 - 2zG - 1 = 0,   z = λh   ⇒   z = (G^2 - 1)/(2G) = (1/2)(G - G^{-1})
z = (1/2)(e^{iθ} - e^{-iθ}) = i sin θ,

so the stability region of leapfrog is the segment of the imaginary axis z = iy with |y| < 1.
Explicit Multistep Methods: Adams-Bashforth
A-B2 ansatz:

y_n = y_{n-1} + βhf(x_{n-1}, y_{n-1}) + γhf(x_{n-2}, y_{n-2}) = y_{n-1} + βhf_{n-1} + γhf_{n-2}   (1)

Expand each of the terms in (1) about (x_n, y_n) in a Taylor series:

y_n = (y_n - hy'_n + (h^2/2)y''_n - ...) + βh(f_n - hf'_n + ...) + γh(f_n - 2hf'_n + ...)
    = y_n + h(-1 + β + γ) y'_n + h^2 y''_n (1/2 - β - 2γ) + O(h^3)

In order for the terms up to O(h^2) to vanish we require

-1 + β + γ = 0,   1/2 - β - 2γ = 0   ⇒   γ = -1/2,   β = 3/2.

We obtain the second order Adams-Bashforth method, AB2:

Y_{n+1} = Y_n + (3/2)hf(x_n, Y_n) - (1/2)hf(x_{n-1}, Y_{n-1})

Accuracy: O(h^2).
Need Y_0 and Y_1 to start the time-stepping: use, e.g., RK4 to find Y_1.
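A minimal sketch of AB2 on y' = -y (using one improved-Euler step for Y_1 rather than RK4, for brevity; all names illustrative):

```python
import math

def f(x, y):
    return -y

h, n = 0.05, 20  # integrate y' = -y from y(0) = 1 to x = 1
ys = [1.0]
# start-up value Y_1 from one improved-Euler (RK2) step
y_pred = ys[0] + h * f(0.0, ys[0])
ys.append(ys[0] + 0.5 * h * (f(0.0, ys[0]) + f(h, y_pred)))
# AB2: Y_{n+1} = Y_n + h*( (3/2) f_n - (1/2) f_{n-1} )
for k in range(1, n):
    x = k * h
    ys.append(ys[k] + h * (1.5 * f(x, ys[k]) - 0.5 * f(x - h, ys[k - 1])))
approx = ys[n]
```

Only one new f-evaluation is needed per step (f_{n-1} is reused), which is the main attraction of Adams methods.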
Stability: Consider y' = λy:

Y_{n+1} = (1 + 3λh/2)Y_n - (λh/2)Y_{n-1},

a second order difference equation. Look for solutions of the form Y_n = G^n:

G^2 - (1 + 3z/2)G + z/2 = 0,   z = λh.

As z → 0: G^2 - G = 0, the zero-stability polynomial, which has roots

G_1 = 1, a root shared by all consistent methods
G_2 = 0, the spurious root, in this case under control.

For z small,

G = [(1 + 3z/2) ± √((1 + 3z/2)^2 - 2z)]/2 = { 1 + z + O(z^2),  z/2 + O(z^2) }.
Stability Region:
[Figure: AB2 stability region in the z = λh plane; the principal root tracks e^z while the spurious root stays under control, and the region meets the negative real axis on -1 < z < 0.]
Illustration of a perturbation method that can be used to derive an expression for the roots of the characteristic equation in the limit z → 0:

G^2 - (1 + 3z/2)G + z/2 = 0   (*)

z = 0: G(G - 1) = 0 ⇒ G = 0, 1.
Assume that G has a power series expansion in powers of z:

G = G_0 + G_1 z + G_2 z^2 + ...

Plug into (*):

(G_0 + G_1 z + ...)^2 - (1 + 3z/2)(G_0 + G_1 z + ...) + z/2 = 0

Expand and collect powers of z:

z^0:  G_0^2 - G_0 = 0   ⇒   G_0 = 0, 1
z^1:  2G_0 G_1 - (3/2)G_0 - G_1 + 1/2 = 0
      G_1(2G_0 - 1) = -1/2 + (3/2)G_0   ⇒   G_1 = (-1/2 + (3/2)G_0)/(2G_0 - 1)

G_0 = 0  ⇒  G_1 = +1/2
G_0 = 1  ⇒  G_1 = 1/1 = 1

G = { 1 + z + O(z^2),  0 + z/2 + O(z^2) }

This method was not needed here because we could use the quadratic formula. However, for higher order methods this technique becomes extremely useful.
Note that the zeroth order term is the zero-stability polynomial.
To derive higher order AB methods we use the integral form of the ODE and interpolate f over previous timesteps.
[Figure: interpolate f at x_{n-m}, ..., x_n, then integrate over [x_n, x_{n+1}].]

y_{n+1} = y_n + ∫_{x_n}^{x_{n+1}} f(x, y(x)) dx   (1)

Using the shift operator E and the backward difference ∇ (E^{-1} = 1 - ∇):

f_{n+s} = E^s f_n = (1 - ∇)^{-s} f_n = Σ_{k=0}^{m} (-1)^k C(-s, k) ∇^k f_n   (2)

(∇f_k = f_k - f_{k-1}, and ∇^k f_n involves f_n, ..., f_{n-k}), where

C(y, k) = y(y - 1)...(y - k + 1)/k!  for k > 0,   C(y, 0) = 1.

Make the transformation of variables s = (x - x_n)/h, dx = h ds:

y_{n+1} = y_n + h ∫_0^1 Σ_{k=0}^{m} (-1)^k C(-s, k) ∇^k f_n ds
        = y_n + h{ γ_0 f_n + γ_1 ∇f_n + ... + γ_m ∇^m f_n }

where γ_k = (-1)^k ∫_0^1 C(-s, k) ds:

γ_0 = 1
γ_1 = (-1) ∫_0^1 (-s) ds = 1/2
γ_2 = (-1)^2 ∫_0^1 (-s)(-s - 1)/2 ds = 5/12
γ_3 = 3/8
γ_4 = 251/720
m = 1:  Y_{n+1} = Y_n + h{ f_n + (1/2)∇f_n } = Y_n + (h/2){3f_n - f_{n-1}}   LTE: O(h^3)
m = 2:  Y_{n+1} = Y_n + h{ f_n + (1/2)∇f_n + (5/12)∇^2 f_n }
               = Y_n + (h/12){23f_n - 16f_{n-1} + 5f_{n-2}}   LTE: O(h^4)
Note: For an n-step scheme we have n roots. If the method is consistent, 1 will be a root. For stability, a method has to control the behavior of the remaining n - 1 roots:
- If |G_j| > 1 for some j, the method is zero-unstable.
- If |G_j| = 1 for more than one root, then the method is only weakly zero-stable.
The useful stability region in this case is the set of all z such that |G_j(z)| < 1 for all j.
A family of implicit multistep methods: Adams-Moulton methods
By analogy with the trapezoidal scheme we derive a family of methods that use a polynomial to interpolate f(x, y(x)) at x_{n-m}, x_{n-m+1}, ..., x_n and x_{n+1}. Including x_{n+1} makes this family of methods implicit.
[Figure: interpolation points x_{n-m}, x_{n-m+1}, ..., x_n, x_{n+1}; the value at x_{n+1} is the unknown.]
Using the interpolation formula derived above,

f_{n+1+r} = E^r f_{n+1} = Σ_{k=0}^{m+1} (-1)^k C(-r, k) ∇^k f_{n+1-k}... with r = s - 1:

f_{n+s} = E^{s-1} f_{n+1} = Σ_{k=0}^{m+1} (-1)^k C(1 - s, k) ∇^k f_{n+1}.

Substituting into the integral form of y' = f(x, y),

y_{n+1} = y_n + ∫_{x_n}^{x_{n+1}} f(x, y(x)) dx,

we obtain

y_{n+1} = y_n + h{ γ*_0 f_{n+1} + γ*_1 ∇f_{n+1} + ... + γ*_{m+1} ∇^{m+1} f_{n+1} }

where

γ*_k = (-1)^k ∫_0^1 C(1 - s, k) ds,   k = 0, 1, ..., m + 1:

γ*_0 = 1,  γ*_1 = -1/2,  γ*_2 = -1/12,  γ*_3 = -1/24,  γ*_4 = -19/720.
Eg:
AM1 (m = -1): Y_{n+1} = Y_n + hf_{n+1}   (Backward Euler, LTE O(h^2))
AM2 (m = 0):  Y_{n+1} = Y_n + h[ f_{n+1} - (1/2)(f_{n+1} - f_n) ]
            = Y_n + (h/2)[f_{n+1} + f_n]   (TR, LTE O(h^3))
AM3 (m = 1):  Y_{n+1} = Y_n + h[ f_{n+1} - (1/2)(f_{n+1} - f_n) - (1/12)(f_{n+1} - 2f_n + f_{n-1}) ]
            = Y_n + (h/12)[5f_{n+1} + 8f_n - f_{n-1}]   (LTE O(h^4))
AM4 (m = 2):  Y_{n+1} = Y_n + (h/24)[9f_{n+1} + 19f_n - 5f_{n-1} + f_{n-2}]   (LTE O(h^5))

In each case f_{n+1} is the unknown term: we need to solve a nonlinear equation.
Stability properties of AM4 using the perturbation approach.
For y' = λy:

(1 - 9λh/24)Y_{n+1} - (1 + 19λh/24)Y_n + (5λh/24)Y_{n-1} - (λh/24)Y_{n-2} = 0

Let r = λh/24:

(1 - 9r)Y_{n+1} - (1 + 19r)Y_n + 5rY_{n-1} - rY_{n-2} = 0

Look for solutions of the form Y_n = G^n:

(1 - 9r)G^3 - (1 + 19r)G^2 + 5rG - r = 0

Now consider the limit h ≪ 1, so that r ≪ 1. The r → 0 limit yields the equation

G_0^3 - G_0^2 = 0,

which implies that G_0 = 1, 0, 0 are the leading terms in the asymptotic expansions for G. Separating the small from the large terms we have

G^2(G - 1) = r(1 - 5G + 19G^2 + 9G^3)

To generate the series expansion for the root with G_0 = 1 we use the recursion

G = 1 + r(1 - 5G + 19G^2 + 9G^3)/G^2   ⇒   G = 1 + 24r + ...

To generate the series expansions for the roots with G_0 = 0 we use the recursion

G^2 = r(1 - 5G + 19G^2 + 9G^3)/(G - 1)   ⇒   G = ±i r^{1/2} + O(r).

So

G = { 1 + 24r + O(r^2),   ±i r^{1/2} + O(r) }

are the appropriate expansions for G, and

Y_n ≈ C_1(1 + 24r + ...)^n + C_2(i r^{1/2} + ...)^n + C_3(-i r^{1/2} + ...)^n.

Since 24r = λh, the first root tracks e^{λx_n}; the parasitic solutions have modulus ≈ r^{1/2} ≪ 1 and decay.
[Figure: for Re(λ) > 0 the principal root tracks the exponential growth of the solution; for Re(λ) < 0 there is exponential decay and the parasitic roots stay inside the unit disk.]
BDF Methods, good for stiff problems:
- Extension of Backward Euler.
- Only evaluate f(x, y) once, at the end of the timestep.
- Use high order backward difference approximations to y'.
Recall:

y_{n+1} = Ey_n = (1 + hD + (hD)^2/2! + ...)y_n = e^{hD} y_n
hD = ln E,   E = (1 - ∇)^{-1}   ⇒   hD = -ln(1 - ∇) = Σ_{j=1}^{∞} ∇^j/j

y' ≈ (1/h) Σ_{k=1}^{P} (∇^k/k) y_n = f(x_n, y_n)

a P-th order method (LTE O(h^{P+1}), GTE O(h^P)), or

Σ_{i=0}^{P} α_i Y_{n-i} = h β_0 f_n.

BDF1:  ∇y_n = hf_n   ⇒   Y_n = Y_{n-1} + hf_n   (Backward Euler, O(h))
BDF2:

(1/2)∇^2 y_n + ∇y_n = (1/2)(y_n - 2y_{n-1} + y_{n-2}) + (y_n - y_{n-1}) = hf_n
(3/2)y_n - 2y_{n-1} + (1/2)y_{n-2} = hf_n

Y_n - (4/3)Y_{n-1} + (1/3)Y_{n-2} = (2/3)hf_n
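On the linear model problem the BDF2 update can be solved for Y_n in closed form, which makes a quick stiff-problem sketch possible (names illustrative; one backward-Euler step supplies the starting value):

```python
# BDF2 on y' = lam*y:
# Y_n - (4/3)Y_{n-1} + (1/3)Y_{n-2} = (2/3) h lam Y_n
# => Y_n = ((4/3)Y_{n-1} - (1/3)Y_{n-2}) / (1 - (2/3) h lam)
def bdf2(lam, y0, h, n):
    ys = [y0, y0 / (1 - h * lam)]  # start-up via one backward-Euler step
    for k in range(2, n + 1):
        ys.append((4 / 3 * ys[k - 1] - 1 / 3 * ys[k - 2]) / (1 - 2 / 3 * h * lam))
    return ys[n]

val = bdf2(-100.0, 1.0, 0.1, 50)  # h*lam = -10, far outside any explicit region
```

Even with hλ = -10 the numerical solution decays, as expected for a method suited to stiff problems.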
3.5 Predictor-Corrector Methods
Split the method into two steps:

(1) y^{(0)}_{i+1} = y_i + hf(x_i, y_i)   (predictor)
(2) y^{(k+1)}_{i+1} = y_i + (h/2)[ f(x_i, y_i) + f(x_{i+1}, y^{(k)}_{i+1}) ],   k = 0, 1, ...   (corrector loop)

Stop when

|y^{(k)}_{i+1} - y^{(k-1)}_{i+1}| / |y^{(k)}_{i+1}| < ε.

The corrector step is a functional iteration x = g(x), which converges provided |g'(x_0)| < 1.
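The loop above, in a minimal sketch (names and the safeguard `max_iter` are illustrative):

```python
def predictor_corrector_step(f, x, y, h, eps=1e-10, max_iter=50):
    """Euler predictor, then iterate the trapezoidal corrector to convergence."""
    Y = y + h * f(x, y)  # predictor
    for _ in range(max_iter):
        Y_new = y + 0.5 * h * (f(x, y) + f(x + h, Y))  # corrector
        if abs(Y_new - Y) < eps * max(abs(Y_new), 1e-30):
            Y = Y_new
            break
        Y = Y_new
    return Y

# y' = -y: one step from y(0) = 1 with h = 0.1.
y1 = predictor_corrector_step(lambda x, y: -y, 0.0, 1.0, 0.1)
```

Here g'(Y) = (h/2)·∂f/∂y = -h/2, so the iteration contracts with factor h/2 and converges to the trapezoidal value (1 - h/2)/(1 + h/2).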
The Milne-Simpson Method: more accurate is not always better.
Predictor (explicit):

y_{k+1} = y_{k-3} + ∫_{x_{k-3}}^{x_{k+1}} f(t, y(t)) dt
        = y_{k-3} + (4h/3)(2f_{k-2} - f_{k-1} + 2f_k) + O(h^5)

[Stencil: uses points k - 3, k - 2, k - 1, k to advance to k + 1.]
Corrector (implicit):

y_{k+1} = y_{k-1} + ∫_{x_{k-1}}^{x_{k+1}} f(t, y(t)) dt
        = y_{k-1} + (h/3)(f_{k-1} + 4f_k + f_{k+1}) + O(h^5)

[Stencil: uses points k - 1, k, k + 1.]
Stability of correctors: y' = λy, y(0) = 1.
MILNE:

y_{n+1} = y_{n-1} + (λh/3)(y_{n+1} + 4y_n + y_{n-1})
(1 - λh/3)y_{n+1} - (4λh/3)y_n - (1 + λh/3)y_{n-1} = 0

Let r = λh/3:

(1 - r)y_{n+1} - 4ry_n - (1 + r)y_{n-1} = 0

Looking for solutions of the form y_n = G^n gives a second order polynomial for G in terms of λh. Although it is easy to write down the roots G_{1,2} of this polynomial, it is sufficient to study stability for small h, which can be done using a Taylor series expansion.
Tables of multistep methods and their stencils

Adams-Bashforth: Explicit

Steps | Order |  b_s  |  b_{s-1}  |  b_{s-2}  |  b_{s-3}  |  b_{s-4}
  1   |   1   |   0   |  1 (Euler)|           |           |
  2   |   2   |   0   |    3/2    |   -1/2    |           |
  3   |   3   |   0   |   23/12   |  -16/12   |   5/12    |
  4   |   4   |   0   |   55/24   |  -59/24   |   37/24   |  -9/24

[Stencil: the a_j involve the unknown y_{n+s} and y_{n+s-1}; the b_j involve known values at x_{n+s-1}, ..., x_n. Open circle = yet to be determined; filled = known values.]
Adams-Moulton: Implicit

Steps | Order |   b_s   |  b_{s-1}  |  b_{s-2}  |  b_{s-3}  |  b_{s-4}
  1   |   1   |    1    |           |           |           |          (Backward Euler)
  1   |   2   |   1/2   |    1/2    |           |           |          (Trapezoidal Method)
  2   |   3   |   5/12  |    8/12   |   -1/12   |           |
  3   |   4   |   9/24  |   19/24   |   -5/24   |   1/24    |
  4   |   5   | 251/720 |  646/720  | -264/720  |  106/720  | -19/720

[Stencil: the b_j now include the unknown point x_{n+s}. Open circle = yet to be determined; filled = known values.]
Backward Difference Formulae

Steps | Order | a_s |  a_{s-1}  |  a_{s-2}  |  a_{s-3}  |  a_{s-4}  |  b_s
  1   |   1   |  1  |    -1     |           |           |           |   1    (Backward Euler)
  2   |   2   |  1  |   -4/3    |    1/3    |           |           |  2/3
  3   |   3   |  1  |  -18/11   |   9/11    |   -2/11   |           |  6/11
  4   |   4   |  1  |  -48/25   |   36/25   |  -16/25   |   3/25    | 12/25

[Stencil: the a_j involve y_{n+s}, ..., y_n; only b_s (at the unknown point) is nonzero.]
Folklore: RK vs. Adams methods
- Function evaluations expensive: RK poor; implicit Adams preferred.
- Function evaluations inexpensive & moderate accuracy: RK more efficient; Adams less efficient.
- If storage is at a premium: RK better; Adams worse.
- Accuracy over a wide range of tolerances: RK not suitable; Adams preferred.
- Problem stiff (widely varying time scales present in the problem; stability is more of a constraint than accuracy; explicit methods don't work): use BDF2.
A perspective on second order methods

Method | Explicit/Implicit | Function Evals/Step | Error Const | Storage | Stability
 RK2   |     Explicit      |          2          |     1/6     |   2N    | bounded region (see figure)
 TR    |     Implicit      |          1          |     1/12    |   2N    | A-stable, weak decay
 AB2   |     Explicit      |          1          |     5/12    |   3N    | bounded region (see figure)
 BDF2  |     Implicit      |          1          |     1/6     |   3N    | A-stable, L-stable

Notes:
- For PDE problems storage is a big concern. Since the spatial discretization introduces errors, there is no point using high order time stepping schemes, so second order is usually OK.
- If the problem is very stiff, BDF2 is recommended; otherwise AB2 (possibly with a predictor-corrector to control error).
- For simpler problems for which storage is not a problem and in which the system is not stiff, use RK4 (or RKF45 for error control) if minimizing computational time is not a priority, or AB4 (or APC4 with error control) otherwise.
