
Course: Numerical Solution of Ordinary Differential Equations

Module 2: Multi-step methods

Lecture   Content                                                              Hours

1         Single step and multi-step methods, predictor-corrector methods,      2
          Milne's method
2         Adams-Moulton method                                                  1
3         Adams-Bashforth method                                                1
Module 2

Lecture 1

Multi Step Methods

Predictor corrector Methods

Keywords: multi-step methods, predictor, corrector, Milne-Simpson method, integration formulae

Consider the IVP

y' = f(t, y);  t_0 <= t <= b,  with y(t_0) = y_0    (2.1)


One step methods for solving IVP (2.1) are those methods in which the solution y_{j+1} at
the (j+1)th grid point involves only one previous grid point where the solution is already
known. Accordingly, a general one step method may be written as

y_{j+1} = y_j + h φ(t_j, y_j, h)

The increment function φ depends on the solution y_j at the previous grid point t_j and the step
size h. If y_{j+1} can be determined simply by evaluating the right hand side, the method is
explicit. The methods developed in Module 1 are one step methods. These methods may use
additional functional evaluations at a number of points between t_j and t_{j+1}, but those
evaluations are not reused in computations at later grid points. In these methods the step size
can be changed according to the requirement.

It may be reasonable to develop methods that use more information about the solution
(functional values and derivatives) at previously known points while computing the solution
at the next grid point. Such methods, which use information at more than one previous grid
point, are known as multi-step methods and are expected to give better results than
one step methods.

To determine the solution y_{j+1}, a multi-step method or k-step method uses values of y(t) and
f(t, y(t)) at the k previous grid points t_{j-m}, m = 0, 1, ..., k-1. Here y_0 is the initial
point, while y_1, ..., y_{k-1} are the starting points. The starting points are computed using
some suitable one step method. Thus multi-step methods are not self starting methods.

Integrating (2.1) over an interval (tj-k, tj+1) yields

y_{j+1} = y_{j-k} + ∫_{t_{j-k}}^{t_{j+1}} f(t, y(t)) dt ≈ y_{j-k} + ∫_{t_{j-k}}^{t_{j+1}} Σ_{i=0}^{r} a_i t^i dt    (2.2)

The integrand on the right side is approximated by an interpolating polynomial of degree r
using equi-spaced points. The integration over the interval is shown in Fig 2.1.

The method may be explicit or implicit. An implicit method involves y_{j+1} on both sides, since
f(t_{j+1}, y_{j+1}) appears on the right hand side. First an explicit formula, known as the
predictor formula, is used to predict y_{j+1}.
Then another formula, known as corrector formula, is used to improve the predicted
value of yj+1. The predictor-corrector methods form a large class of general methods for
numerical integration of ordinary differential equations. A popular predictor-corrector
scheme is known as the Milne-Simpson method.

Milne-Simpson method

Its predictor is based on integration of f(t, y(t)) over the interval [t_{j-3}, t_{j+1}] with k=3 and
r=3. The interpolating polynomial is considered to match the function at the three points t_{j-2},
t_{j-1} and t_j, and the function is extrapolated at both ends, over [t_{j-3}, t_{j-2}] and [t_j,
t_{j+1}], as shown in Fig 2.2(a). Since the end points are not used, an open integration
formula is used for the integral in (2.2):

p_{j+1} = y_{j+1} = y_{j-3} + (4h/3)[2f(t_j, y_j) − f(t_{j-1}, y_{j-1}) + 2f(t_{j-2}, y_{j-2})] + (14/45) h^5 f^(4)(ξ);  ξ in (t_{j-3}, t_{j+1})    (2.3)

The explicit predictor formula is of O(h^4) and requires starting values, which should be of the
same order of accuracy. Accordingly, if the initial point is y_0, then the starting values y_1,
y_2 and y_3 are computed by the fourth order Runge-Kutta method. The predictor formula (2.3)
then predicts the approximate solution y_4 as p_4 at the next grid point. The predictor formula
(2.3) alone is found to be unstable (proof not included) and the solution so obtained may grow
exponentially.

The predicted value is then improved using a corrector formula. The corrector formula is
developed similarly. For this, a second polynomial for f(t, y(t)) is constructed, based on the
points (t_{j-1}, f_{j-1}), (t_j, f_j) and the predicted point (t_{j+1}, f_{j+1}). The closed
integration of the interpolating polynomial is carried out over the interval [t_{j-1}, t_{j+1}]
[see Fig 2.2(b)]. The result is the familiar Simpson's rule:

y_{j+1} = y_{j-1} + (h/3)[f(t_{j+1}, y_{j+1}) + 4f(t_j, y_j) + f(t_{j-1}, y_{j-1})] − (1/90) h^5 f^(4)(η);  η in (t_{j-1}, t_{j+1})    (2.4)

[Figure: values f_{j-k}, ..., f_{j-1}, f_j, f_{j+1} plotted over the grid points t_{j-k}, ..., t_j, t_{j+1}]

Fig 2.1 Scheme for multi-step integration

[Figure: two panels showing the integration ranges against the interpolation points t_{j-3}, ..., t_j, t_{j+1}]

Fig 2.2 (a) Open scheme for predictor (b) Closed integration for corrector

In the corrector formula, f_{j+1} is computed from the predicted value p_{j+1} obtained from
(2.3). Denoting f_j = f(t_j, y_j), equations (2.3) and (2.4) give the following predictor and
corrector formulae, respectively, for solving IVP (2.1) at equi-spaced discrete points t_4, t_5, ...:

p_{j+1} = y_{j+1} = y_{j-3} + (4h/3)[2f_j − f_{j-1} + 2f_{j-2}]
y_{j+1} = y_{j-1} + (h/3)[f_{j+1} + 4f_j + f_{j-1}]    (2.5)
The solution at the initial point t_0 is given by the initial condition, and t_1, t_2 and t_3 are
the starting points where the solution is computed using some other suitable method such as a
Runge-Kutta method. This is illustrated in Example 2.1.
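The procedure (RK4 starting values, then one predict-correct pass per new grid point) can be sketched in Python; the function names and structure are illustrative, not prescribed by the lecture:

```python
import math

def rk4_step(f, t, y, h):
    # One classical fourth order Runge-Kutta step (used for starting values).
    k1 = f(t, y)
    k2 = f(t + h/2, y + h/2*k1)
    k3 = f(t + h/2, y + h/2*k2)
    k4 = f(t + h, y + h*k3)
    return y + h*(k1 + 2*k2 + 2*k3 + k4)/6

def milne_pc(f, t0, y0, h, n):
    # Milne-Simpson predictor-corrector (2.5) with one corrector pass per step.
    t = [t0 + i*h for i in range(n + 1)]
    y = [y0]
    for i in range(3):                      # starting values y1, y2, y3
        y.append(rk4_step(f, t[i], y[i], h))
    fv = [f(t[i], y[i]) for i in range(4)]
    for j in range(3, n):
        p = y[j-3] + 4*h/3*(2*fv[j] - fv[j-1] + 2*fv[j-2])   # predictor
        c = y[j-1] + h/3*(f(t[j+1], p) + 4*fv[j] + fv[j-1])  # one corrector pass
        y.append(c)
        fv.append(f(t[j+1], c))
    return t, y

# Example 2.1: y' = y + 3t - t^2, y(0) = 1, h = 0.1
t, y = milne_pc(lambda t, y: y + 3*t - t*t, 0.0, 1.0, 0.1, 5)
print(y[4], y[5])   # approximations at t = 0.4 and t = 0.5
```

Note that only two new evaluations of f are made per Milne step (one for the predictor value, one when the corrected value is stored), against four per RK4 step.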

Example 2.1: Solve the IVP y' = y + 3t − t^2 with y(0) = 1, using Milne's predictor-corrector
method with h = 0.1.
Solution: Table 2.1 below computes the starting values using the fourth order Runge-Kutta
method.

k y k1= t+ y+ k2 y+ k3 y+ y+h(k1+2k
t f(t,y) h/2 h/2*k h/2*k2 t+h h*k3 k4 2+2k3+k4)
1 /6
0 0 1 1 0.05 1.05 1.197 1.05987 1.2074 0.1 1.1207 1.41074 1.1203415
5 5
1 0.1 1.1203 1.410 0.15 1.190 0.222 1.13147 1.559 0.2 1.2762 1.83624 1.2338409
34 859 7 7
2 0.2 1.2338 1.793 0.25 1.323 0.321 1.24993 1.9374 0.3 1.4276 2.23758 1.3763387
84 533 8 1
3 0.2 1.3763

Table 2.1: Starting values using RK4 in Example 2.1

Using the initial value and the starting values at t = 0, 0.1, 0.2 and 0.3, the predictor formula
predicts the solution at t = 0.4 as 1.7199359. This value is used in the corrector formula to
give the corrected value. The solution is continued at advanced grid points [see Table 2.2].
k   t     y            f(t, y)    corrector     f(t, corrector)
0   0     1            1
1   0.1   1.1203415    1.410341   (starting points)
2   0.2   1.2338409    1.793841
3   0.3   1.3763387    2.186339
4   0.4   1.7199359    2.759936   1.67714525    2.717145
5   0.5   2.0317593    3.281759   1.920894708   3.170895

Table 2.2: Example 2.1 using the predictor-corrector method with h = 0.1
The exact solution is available in this example, though it may not be for other equations.
Table 2.3 compares the computed solution with the exact solution of the given equation.
Clearly the accuracy of the predictor-corrector method is better than that of the Runge-Kutta
method.

t     RK4        Milne PC    exact

0.1   1.120341               1.120342
0.2   1.233841               1.282806
0.3   1.376339               1.489718
0.4   1.5452     1.6771453   1.743649
0.5   1.7369     1.9208947   2.047443

Table 2.3: Comparison of solutions of Example 2.1
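For reference, the exact solution column can be reproduced by solving the linear equation y' − y = 3t − t^2 directly; the particular solution is t^2 − t − 1, and the initial condition fixes the homogeneous part (this closed form is derived here, not stated in the lecture):

```python
import math

def exact(t):
    # y' = y + 3t - t^2, y(0) = 1  =>  y(t) = 2 e^t + t^2 - t - 1
    return 2*math.exp(t) + t*t - t - 1

print(round(exact(0.4), 6), round(exact(0.5), 6))   # 1.743649 2.047443
```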


k   t      y          f(t, y)    corrector   f(t, corrector)
0   0      1          1
1   0.05   1.055042   1.202542   (starting points)
2   0.1    1.120342   1.410342
3   0.15   1.196169   1.623669
4   0.2    1.282805   1.842805   1.282805    1.842805
5   0.25   1.380551   2.068051   1.380551    2.068051

Table 2.4: Example 2.1 using the predictor-corrector method with h = 0.05

Example 2.1 is repeated with h = 0.05 in Table 2.4. Table 2.5 clearly indicates that better
accuracy is achieved with h = 0.05.
t      computed   exact
0.05   1.055042   1.055042
0.1    1.120342   1.120342
0.15   1.196169   1.196168
0.2    1.282805   1.282806
0.25   1.380551   1.380551

Table 2.5: Improved accuracy with h = 0.05

Predictor-corrector methods are preferred over Runge-Kutta methods as they require only two
functional evaluations per integration step, while the corresponding fourth order Runge-Kutta
method requires four. The need for starting points is the weakness of predictor-corrector
methods. In Runge-Kutta methods the step size can be changed easily.
Module 2

Lecture 2

Multi Step Methods

Predictor corrector Methods

Contd

Keywords: iterative methods, stability

A predictor-corrector method refers to the use of the predictor equation with one subsequent
application of the corrector equation; the value so obtained is the final solution at the grid
point. This approach is used in Example 2.1.

The predicted and corrected values are compared to obtain an estimate of the
truncation error associated with the integration step. The corrected values are accepted
if this error estimate does not exceed a specified maximum value. Otherwise, the
corrected values are rejected and the interval of integration is reduced starting from the
last accepted point. Likewise, if the error estimate becomes unnecessarily small, the
interval of integration may be increased. The predictor formula is more influential in the
stability properties of the predictor-corrector algorithm.

In another, more commonly used approach, the predictor formula is used to get a first
estimate of the solution at the next grid point, and the corrector formula is then applied
iteratively until convergence is obtained. The number of derivative evaluations required is one
greater than the number of corrector iterations, and this number may in fact exceed the number
required by a Runge-Kutta algorithm. In this case, the stability properties of the algorithm are
determined by the corrector equation alone; the predictor equation only influences the number
of iterations required. The step size is chosen sufficiently small that the iteration converges
in one or two passes, and it can be estimated from the error term in (2.4).
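The iterative use of the corrector is a fixed-point iteration. A minimal sketch follows; the helper name, tolerance, and iteration cap are illustrative choices, not taken from the lecture:

```python
def iterate_corrector(f, t_next, y_jm1, f_jm1, f_j, p, h, tol=1e-7, max_iter=25):
    # Repeatedly apply Simpson's corrector (2.4), starting from the
    # predicted value p, until successive iterates agree to within tol.
    y = p
    for _ in range(max_iter):
        y_new = y_jm1 + h/3*(f(t_next, y) + 4*f_j + f_jm1)
        if abs(y_new - y) < tol:
            break
        y = y_new
    return y_new
```

For y' = Ay the iteration contracts when |hA/3| < 1, which is why a sufficiently small step size makes it converge in one or two passes; the limit is the fixed point of the corrector scheme, not the exact solution.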

Example 2.2: Apply the iterative method to solve the IVP y' = y + 3t − t^2 with y(0) = 1 and h = 0.1.

Solution: With h = 0.1 the computations are arranged in Table 2.6a.

Note that the corrector formula converges fast, but not to the exact solution of the equation;
it converges to the fixed point of the difference scheme given by the corrector formula. If
h = 0.05, the solution converges in just two iterations.

h k t y f(t,y)
Milne Predictor corrector for t=0.4 with h=0.1
0.1 0 0 1 1
0.1 1 0.1 1.120341 1.410341 starting point1
0.1 2 0.2 1.233841 1.793841 starting point2
0.1 3 0.3 1.376339 2.186339 starting point3
0.1 4 0.4 1.719936 2.759936 predictor
0.1 5 0.4 1.677145 2.717145 corrector1
0.1 6 0.4 1.675719 2.715719 corrector2
0.1 7 0.4 1.675671 2.715671 corrector3
0.1 8 0.4 1.67567 2.71567 corrector4
0.1 9 0.4 1.67567 2.71567 corrector5
Milne Predictor corrector at t=0.5 with h=0.1
0.1 1 0.1 1.120341 1.410341
0.1 2 0.2 1.233841 1.793841 Starting values
0.1 3 0.3 1.376339 2.186339
0.1 4 0.4 1.67567 2.71567
0.1 5 0.5 1.840277 3.090277 predictor
0.1 6 0.5 1.914315 3.164315 corrector1
0.1 7 0.5 1.916783 3.166783 corrector2
0.1 8 0.5 1.916865 3.166865 corrector3
0.1 9 0.5 1.916868 3.166868 corrector4
Table 2.6a: Iterative Milne's predictor-corrector method, Example 2.2 with h = 0.1

Several applications of the corrector formula are needed to obtain the desired accuracy.
Decreasing the value of h reduces the number of applications of the corrector formula, as is
evident from Table 2.6b.

Milne Predictor corrector at t=0.2 for h=0.05


h k t y f(t,y)
0.05 0 0 1 1
0.05 1 0.05 1.055042 1.202542 Starting values
0.05 2 0.1 1.120342 1.410342
0.05 3 0.15 1.196169 1.623669
0.05 4 0.2 1.282805 1.842805 predictor
0.05 5 0.2 1.282805 1.842805 corrector1
0.05 6 0.2 1.282805 1.842805 corrector2
Table 2.6b: Iterative Milne's predictor-corrector method, Example 2.2 with h = 0.05

A modified method, or modified predictor-corrector method, refers to the use of the predictor
equation and one subsequent application of the corrector equation with incorporation of the
error estimates, as discussed below.

Error estimates

The local truncation errors of the predictor and corrector formulae are given as

y(t_{j+1}) − p_{j+1} = (28/90) h^5 f^(4)(ξ);  ξ in (t_{j-3}, t_{j+1})

y(t_{j+1}) − y_{j+1} = −(1/90) h^5 f^(4)(η);  η in (t_{j-1}, t_{j+1})

It is assumed that the fourth derivative is constant over the interval [t_{j-3}, t_{j+1}]. Then
eliminating it between the two error terms yields an error estimate based on the predicted and
corrected values:

y(t_{j+1}) − p_{j+1} = (28/29)[y_{j+1} − p_{j+1}]    (2.6)

Further, assume that the difference between the predicted and corrected values changes slowly
from step to step. Accordingly, substituting p_j and y_j for p_{j+1} and y_{j+1} in (2.6)
gives a modifier q_{j+1}:

q_{j+1} = p_{j+1} + (28/29)[y_j − p_j]

This modified value is used in the functional evaluation f_{j+1} to be substituted in the
corrector formula. This scheme is known as the modified predictor-corrector formula and is
given as

p_{j+1} = y_{j+1} = y_{j-3} + (4h/3)[2f_j − f_{j-1} + 2f_{j-2}]

q_{j+1} = p_{j+1} + (28/29)[y_j − p_j];  f_{j+1} = f(t_{j+1}, q_{j+1})    (2.7)

y_{j+1} = y_{j-1} + (h/3)[f_{j+1} + 4f_j + f_{j-1}]
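One step of the modified scheme can be sketched as follows; `p_prev` carries the previous predictor value p_j forward, and the argument layout is an illustrative choice, not from the lecture:

```python
def modified_milne_step(f, t_next, y_hist, f_hist, p_prev, h):
    # y_hist = (y_{j-3}, y_{j-2}, y_{j-1}, y_j), f_hist = (f_{j-2}, f_{j-1}, f_j)
    y_jm3, _, y_jm1, y_j = y_hist
    f_jm2, f_jm1, f_j = f_hist
    p = y_jm3 + 4*h/3*(2*f_j - f_jm1 + 2*f_jm2)                # predictor
    q = p if p_prev is None else p + 28.0/29.0*(y_j - p_prev)  # modifier (2.7)
    c = y_jm1 + h/3*(f(t_next, q) + 4*f_j + f_jm1)             # corrector
    return p, c
```

On the very first step there is no previous predictor value, so the modifier is skipped (q = p); thereafter each step reuses p_j to sharpen the predicted value before the single corrector pass.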

Another problem associated with Milne's predictor-corrector method is instability in certain
cases. This means that the error does not tend to zero as h tends to zero. This is illustrated
analytically for the simple IVP

y' = Ay,  y(0) = y_0

Its solution at t = t_n is y_n = y_0 exp(A(t_n − t_0)). Substituting y' = Ay in the corrector
formula gives the difference equation

y_{j+1} = y_{j-1} + (h/3)[A y_{j+1} + 4A y_j + A y_{j-1}]

or  (1 − hA/3) y_{j+1} − (4hA/3) y_j − (1 + hA/3) y_{j-1} = 0    (2.8)

Let Z_1 and Z_2 be the roots of the quadratic equation

(1 − hA/3) Z^2 − (4hA/3) Z − (1 + hA/3) = 0

The solution of the difference equation (2.8) can be written as

y_j = C_1 Z_1^j + C_2 Z_2^j;  with Z_{1,2} = [2r ± sqrt(3r^2 + 1)]/(1 − r),  r = hA/3

For stability, the behavior of the solution is to be explored as h tends to zero. Consider

Z_1 = [2r + sqrt(3r^2 + 1)]/(1 − r) = 1 + 3r + O(r^2) = 1 + Ah + O(h^2)

Z_2 = [2r − sqrt(3r^2 + 1)]/(1 − r) = −1 + r + O(r^2) = −(1 − Ah/3) + O(h^2)

Also, exp(hA) = 1 + hA + O(h^2) and exp(−hA/3) = 1 − hA/3 + O(h^2), so Z_1 ≈ exp(Ah) and
Z_2 ≈ −exp(−Ah/3).


Hence the solution of the given IVP by the predictor-corrector method behaves as

y_j ≈ C_1 exp(A(t_j − t_0)) + (−1)^j C_2 exp(−A(t_j − t_0)/3)

When A > 0, the second term dies out while the first grows exponentially as j increases,
consistent with the true solution. However, when A < 0, the first term dies out and the second,
spurious, term grows exponentially irrespective of h. This establishes the instability of the
solution.
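The two roots can be checked numerically; a small sketch (the function name is illustrative) shows that for A < 0 the spurious root keeps magnitude greater than one however small h is:

```python
import math

def milne_roots(h, A):
    # Roots of (1 - hA/3) Z^2 - (4hA/3) Z - (1 + hA/3) = 0
    r = h*A/3
    s = math.sqrt(3*r*r + 1)
    return (2*r + s)/(1 - r), (2*r - s)/(1 - r)

for h in (0.1, 0.01, 0.001):
    z1, z2 = milne_roots(h, -5.0)
    print(h, abs(z1), abs(z2))   # |Z2| stays above 1 as h shrinks
```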
Module 2

Lecture 3

Multi Step Methods

Adams Bashforth method

Keywords: interpolating polynomial, open integration


A general k-step method for solving the IVP is given as

y_{j+1} = a_{k-1} y_j + a_{k-2} y_{j-1} + ... + a_0 y_{j+1-k}
        + h[b_k f(t_{j+1}, y_{j+1}) + b_{k-1} f(t_j, y_j) + ... + b_0 f(t_{j+1-k}, y_{j+1-k})]    (2.9)

When b_k = 0, the method is explicit and y_{j+1} is explicitly determined from the initial value
y_0 and the starting values y_i, i = 1, 2, ..., k-1. When b_k is nonzero, the method is implicit.

Milne's predictor and corrector formulae are special cases of (2.9):

Predictor formula: k = 4; a_0 = 1, a_1 = a_2 = a_3 = 0; b_4 = 0, b_3 = 8/3, b_2 = −4/3, b_1 = 8/3, b_0 = 0.

Corrector formula: k = 4; a_2 = 1, a_0 = a_1 = a_3 = 0; b_4 = 1/3, b_3 = 4/3, b_2 = 1/3, b_1 = b_0 = 0.

Another category of multi-step methods, known as Adams methods, is obtained from

y_{j+1} = y_j + ∫_{t_j}^{t_{j+1}} f(t, y(t)) dt ≈ y_j + ∫_{t_j}^{t_{j+1}} Σ_{i=0}^{r} a_i t^i dt

Here the integration is carried out only over the last panel, while function and derivative
values at several equi-spaced points are used for the interpolating polynomial. Both open and
closed integration are considered, giving two types of formulas. The integration scheme is
shown in Fig. 2.3.
[Figure: two panels showing the open and closed integration ranges over the points t_{j-3}, ..., t_j, t_{j+1}]

Fig. 2.3 Schematic diagram for open and closed Adams integration formulas
The open integration of the Adams formula gives the Adams-Bashforth formula, while closed
integration gives the Adams-Moulton formula. Different degrees of interpolating polynomial,
depending upon the number r of interpolating points, give rise to formulae of different order.
Although these formulae can be derived in many different ways, here a backward application of
the Taylor series expansion is used to derive the second order Adams-Bashforth open formula.

Second Order Adams Bashforth open formula

For the second order formula, the interpolating polynomial is based on the points t_j and
t_{j-1}, and is used in

y_{j+1} = y_j + ∫_{t_j}^{t_{j+1}} f(t, y(t)) dt

Expanding the left hand side in a Taylor series,

y_{j+1} = y_j + h[f_j + f'_j h/2 + f''_j h^2/6 + ...]    (2.10)

Also,  f'_j = (f_j − f_{j-1})/h + (h/2) f''_j + O(h^2)

Substitution and simplification yields the second order Adams-Bashforth formula

y_{j+1} = y_j + h[(3/2) f_j − (1/2) f_{j-1}] + (5/12) h^3 f''(ξ)    (2.11)

A fourth order Adams-Bashforth formula can be derived on similar lines:

y_{j+1} = y_j + h[(55/24) f_j − (59/24) f_{j-1} + (37/24) f_{j-2} − (9/24) f_{j-3}] + (251/720) h^5 f^(4)(ξ)    (2.12)
Example 2.3: Apply the Adams-Bashforth method to solve the IVP

y' = y + 3t − t^2  with y(0) = 1 and h = 0.05.

Solution: The computations are arranged in Table 2.7.

h      k   t      y           f(t, y)    exact solution
0.05   0   0      1           1
0.05   1   0.05   1.0550422   1.202542   (starting values)
0.05   2   0.1    1.1203418   1.410342
0.05   3   0.15   1.1961685   1.623669
0.05   4   0.2    1.2828053   1.842805   1.2828055
0.05   5   0.25   1.3805503   2.06805    1.3805508

Table 2.7: Adams-Bashforth method, Example 2.3
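The fourth order Adams-Bashforth scheme (2.12) used in Example 2.3 can be sketched as below; the starting values must come from another method and are passed in directly here (function and argument names are illustrative):

```python
def adams_bashforth4(f, t0, starts, h, n):
    # starts = (y0, y1, y2, y3); purely explicit, one new f evaluation per step.
    y = list(starts)
    t = [t0 + i*h for i in range(n + 1)]
    fv = [f(t[i], y[i]) for i in range(4)]
    for j in range(3, n):
        y_next = y[j] + h*(55*fv[j] - 59*fv[j-1] + 37*fv[j-2] - 9*fv[j-3])/24
        y.append(y_next)
        fv.append(f(t[j+1], y_next))
    return y
```

With the starting values of Table 2.7 this reproduces the tabulated entries at t = 0.2 and t = 0.25.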
Module 2

Lecture 4

Multi Step Methods

Adams Moulton method

Keywords: closed integration, local truncation error

Second Order Adams Moulton formula


A backward Taylor series is used for the integrand in the closed integration formula:

y_{j+1} = y_j + ∫_{t_j}^{t_{j+1}} f(t, y(t)) dt

y_j = y_{j+1} − h[f_{j+1} − f'_{j+1} h/2 + f''_{j+1} h^2/6 − ...]

i.e.  y_{j+1} = y_j + h[f_{j+1} − f'_{j+1} h/2 + f''_{j+1} h^2/6 − ...]    (2.13)

Also,  f'_{j+1} = (f_{j+1} − f_j)/h + (h/2) f''_{j+1} + O(h^2)

Substitution and simplification yields the second order Adams-Moulton formula

y_{j+1} = y_j + h[(1/2) f_{j+1} + (1/2) f_j] − (1/12) h^3 f''(η)    (2.14)

The fourth order Adams-Moulton formula can be obtained on similar lines:

y_{j+1} = y_j + h[(9/24) f_{j+1} + (19/24) f_j − (5/24) f_{j-1} + (1/24) f_{j-2}] − (19/720) h^5 f^(4)(η)    (2.15)

A predictor-corrector method based on the Adams integration formulas makes use of the
Adams-Bashforth formula (2.12) as predictor and the Adams-Moulton formula (2.15) as corrector.
Milne's method is considered better due to its smaller error terms; however, the Adams method
is often preferred because of the instability of the corrector formula in Milne's method in
some cases.

The predictor formula:

y_{j+1} = y_j + h[(55/24) f_j − (59/24) f_{j-1} + (37/24) f_{j-2} − (9/24) f_{j-3}] + (251/720) h^5 f^(4)(ξ)

The corrector formula:

y_{j+1} = y_j + h[(9/24) f_{j+1} + (19/24) f_j − (5/24) f_{j-1} + (1/24) f_{j-2}] − (19/720) h^5 f^(4)(η)
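One Adams-Bashforth-Moulton step, combining the predictor (2.12) with a single pass of the corrector (2.15), can be sketched as follows (names and argument layout are illustrative):

```python
def abm4_step(f, t_next, y_j, f_hist, h):
    # f_hist = (f_{j-3}, f_{j-2}, f_{j-1}, f_j)
    f_jm3, f_jm2, f_jm1, f_j = f_hist
    p = y_j + h*(55*f_j - 59*f_jm1 + 37*f_jm2 - 9*f_jm3)/24   # AB4 predictor (2.12)
    fp = f(t_next, p)
    c = y_j + h*(9*fp + 19*f_j - 5*f_jm1 + f_jm2)/24          # AM4 corrector (2.15)
    return p, c
```

Applied with the starting values of Example 2.3, this returns the predictor and first corrector entries shown for t = 0.2 in Table 2.8.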

Example 2.4: Solve the IVP of Example 2.3 by the Adams predictor-corrector method.

Solution: At t = 0.20:

h      k   t      y           f(t, y)
0.05   0   0      1           1
0.05   1   0.05   1.0550422   1.202542   (starting values)
0.05   2   0.1    1.1203418   1.410342
0.05   3   0.15   1.1961685   1.623669
0.05   4   0.2    1.2828053   1.842805   predictor
0.05   5   0.2    1.2828055   1.842806   corrector
0.05   6   0.2    1.2828056   1.842806   corrector

At t = 0.25:

h      k   t      y           f(t, y)
0.05   1   0.05   1.0550422   1.202542   (starting values)
0.05   2   0.1    1.1203418   1.410342
0.05   3   0.15   1.1961685   1.623669
0.05   4   0.2    1.2828056   1.842806
0.05   5   0.25   1.3805506   2.068051   predictor
0.05   6   0.25   1.3805509   2.068051   corrector
0.05   7   0.25   1.3805509   2.068051   corrector

Table 2.8: Solution of the IVP of Example 2.3 by the Adams predictor-corrector method

Both the predictor and corrector formulae have local truncation errors of order O(h^5):

y(t_{k+1}) − p_{k+1} = (251/720) h^5 y^(5)(ξ_1)

y(t_{k+1}) − y_{k+1} = −(19/720) h^5 y^(5)(ξ_2)

If the fifth derivative is nearly constant and h is small, an error estimate can be obtained by
eliminating the derivative and simplifying:

y(t_{k+1}) − y_{k+1} ≈ −(19/270)[y_{k+1} − p_{k+1}]

Exercise 2

2.1 Consider the IVP
    y' = e^(−t) − y,  y(0) = 1
    Compute the solution at t = 0.05, 0.1 and 0.15 using RK4 with h = 0.05. Use these values
    to compute the solution at t = 0.2, 0.25 and 0.3 using the Milne-Simpson method. Compare
    with the exact solution y = (t + 1) e^(−t).

2.2 Consider the IVP
    y' = y + t^2,  y(0) = 1
    Compute the solution at t = 0.2, 0.4 and 0.6 using RK4 with h = 0.2. Apply the
    Adams-Bashforth method to compute the solution at t = 0.8 and 1.0.

2.3 Solve the IVP of Exercise 2.2 by the fourth order Adams predictor-corrector method.
Course: Numerical Solution of Ordinary Differential Equations

Module 2: Multi-step methods

Lecture Content Hours


1 Single step and Multi-step methods, Predictor 2
corrector methods, Milnes method

2 Adams-Moulton method. 1
3 Adams Bashforth method 1
Module 2

Lecture 1

Multi Step Methods

Predictor corrector Methods

keywords: multi-step predictor, corrector, Milne-simpson method, integration formulae

Consider I VP

y f(t,y); t 0 t b with y(t 0 ) y 0 (2.1)


One step methods for solving IVP (2.1) are those methods in which the solution yj+1 at
the j+1th grid point involves only one previous grid point where the solution is already
known. Accordingly, a general one step method may be written as

y j+1=y j +h(t j ,y j ,h)

The increment function depends on solution yj at previous grid point tj and step size h.
If yj+1 can be determined simply by evaluating right hand side then the method is explicit
method. The methods developed in the module 1 are one step methods. These
methods might use additional functional evaluations at number of points between tj and
tj+1. These functional evaluations are not used in further computations at advanced grid
points. In these methods step size can be changed according to the requirement.

It may be reasonable to develop methods that use more information about the solution
(functional values and derivatives) at previously known values while computing solution
at the next grid point. Such methods using information at more than one previous grid
points are known as multi-step methods and are expected to give better results than
one step methods.

To determine solution yj+1, a multi-step method or k-step method uses values of y(t) and
f(t,y(t)) at k previous grid points tj-k, k=0,1,2,k-1,. yj is called the initial point while yj-k
are starting points. The starting points are computed using some suitable one step
method. Thus multi-step methods are not self starting methods.

Integrating (2.1) over an interval (tj-k, tj+1) yields

t j1 t j1
r
y j1 y j f(t,y(t))dt y j a t dt
j
j (2.2)
t jk t j k j 0

The integrand on the right side is approximated by interpolating polynomial of degree r


using equi-spaced points. The integration over the interval is shown in the Fig 2.1.

The method may be explicit or implicit. An implicit method involves computation of yj+1 in
terms of yj+1. First an explicit formula known as predictor formula is used to predict yj+1.
Then another formula, known as corrector formula, is used to improve the predicted
value of yj+1. The predictor-corrector methods form a large class of general methods for
numerical integration of ordinary differential equations. A popular predictor-corrector
scheme is known as the Milne-Simpson method.

Milne-Simpson method

Its predictor is based on integration of f (t, y(t)) over the interval [tj3, tj+1] with k=3 and
r=3. The interpolating polynomial is considered to match the function at three points tj2,
tj1, and tj and the function is extrapolated at both the ends in the interval [tj3, tj-2] and [tj,
tj+1] as shown in the Fig 2.2(a). Since the end points are not used, an open integration
formula is used for the integral in (2.2):

4h 14
p j1 y j1 y j
3
2f(t j ,y j ) f(t j1,y j1 ) 2f(t j2 ,y j2 ) h5 f (4) (); in(t j3 ,t j1 ) (2.3)
45

The explicit predictor formula is of O(h4) and requires starting values. These starting
values should also be of same order of accuracy. Accordingly, if the initial point is y0
then the starting values y1, y2 and y3 are computed by fourth order Runge kutta method.
Then predictor formula (2.3) predicts the approximate solution y4 as p4 at next grid point.
The predictor formula (2.3) is found to be unstable (proof not included) and the solution
so obtained may grow exponentially.

The predicted value is then improved using a corrector formula. The corrector formula is
developed similarly. For this, a second polynomial for f (t, y(t)) is constructed, which is
based on the points (tj1, fj1), (tj, fj) and the predicted point (tj+1, fj+1). The closed
integration of the interpolating polynomial over the interval [tj, tj+1] is carried out [See Fig
2.2 (b)]. The result is the familiar Simpsons rule:

h 1 5 ( 4)
y j1 y j
3
f(t j1,y j1 ) 4f(t j ,y j ) f(t j1,y j1 )
90
h f ( ); in(t j1,t j1 ) (2.4)

fj-1
fj-3 fj
fj-2
fj-k fj+1

tj-k tj-3 tj tj+1


tj-1

Fig 2.1 Scheme for multi-step integration

x
x
xj-1
x
x
x
x
x
x

t
t
tj-3 tj-2 tj-1 tj tj+1
tj-3 tj-1 tj-1 tj tj+1

(a) (b)

Fig 2.2 (a) Open Scheme for Predictor (b) Closed integration for Corrector

In the corrector formula fj+1 is computed from the predicted value pj+1 as obtained from
(2.3).
Denoting f j f(t j , y j ) , the equations (2.3) and (2.4) gives the following predictor corrector

formulae, respectively, for solving IVP (2.1) at equi-spaced discrete points t4,t5,
4h
p j1 y j1 y j3 2fj fj1 2fj2
3 (2.5)
h
y j1 y j1 fj1 4f j f j1
3
The solution at initial point t0 is given in the initial condition and t1, t2 and t3 are the
starting points where solution is to be computed using some other suitable method such
as Runge Kutta method. This is illustrated in the example 2.1

Example 2.1: Solve IVP y=y+3t-t2 with y(0)=1; using Milnes predictor corrector method
take h=0.1
Solution: The following table 2.1 computes Starting values using fourth order Runge
Kutta method.

k y k1= t+ y+ k2 y+ k3 y+ y+h(k1+2k
t f(t,y) h/2 h/2*k h/2*k2 t+h h*k3 k4 2+2k3+k4)
1 /6
0 0 1 1 0.05 1.05 1.197 1.05987 1.2074 0.1 1.1207 1.41074 1.1203415
5 5
1 0.1 1.1203 1.410 0.15 1.190 0.222 1.13147 1.559 0.2 1.2762 1.83624 1.2338409
34 859 7 7
2 0.2 1.2338 1.793 0.25 1.323 0.321 1.24993 1.9374 0.3 1.4276 2.23758 1.3763387
84 533 8 1
3 0.2 1.3763

Table 2.1: Starting values using RK4 in Example 2.1

Using initial value and starting values at t=0, 0.1, 0.2 and 0.3, the predictor formula
predicts the solution at t=0.4 as 1.7199359. It is used in corrector formula to give the
corrected value. The solution is continued at advanced grid points [see table 2.2].
MilnePredictorcorrector1 f(t,p)
k t y f(t,y) corrector
0 0 1 1
rk4 Milne pc exact
1 0.1 1.1203415 1.410341
2 0.2 1.2338409 1.793841 startingpoints
3 0.3 1.3763387 2.186339
4 0.4 1.7199359 2.759936 1.67714525 2.717145
5 0.5 2.0317593 3.281759 1.920894708 3.170895
Table 2.2: Example 2.1 using predictor corrector method with h=0.1
The exact solution is possible in this example; however it may not be possible for other
equations. Table 2.3 compares the solution with the exact solution of given equation.
Clearly the accuracy is better in predictor corrector method than the Runge-Kutta
method.

rk4 Milnepc exact


0.1 1.120341 1.120342
0.2 1.233841 1.282806
0.3 1.376339 1.489718
0.4 1.5452 1.6771453 1.743649
0.5 1.7369 1.9208947 2.047443

Table 2.3: Comparison of solution of example 2.1


Milne Predictor corrector1 f(t,p)
k t y f(t,y)
0 0 1 1
1 0.05 1.055042 1.202542 starting point1
2 0.1 1.120342 1.410342 starting point2
3 0.15 1.196169 1.623669 starting point3
4 0.2 1.282805 1.842805 1.282805 1.842805
5 0.25 1.380551 2.068051 1.380551 2.068051

Table 2.4: Example 2.1 using predictor corrector method with h=0.05

The exercise 2.1 is repeated with h=0.5 in table 2.4.The table 2.5 clearly indicates that
the better accuracy is achieved with h=0.05 [see table 2.5]
0.05 1.055042 1.055042
0.1 1.120342 1.120342
0.15 1.196169 1.196168
0.2 1.282805 1.282806
0.25 1.380551 1.380551

Table 2.5: improved accuracy with h=0.05

Predictor Corrector methods are preferred over Runge-Kutta as it requires only two
functional evaluations per integration step while the corresponding fourth order Runge-
Kutta requires four evaluations. The starting points are the weakness of predictor-
corrector methods. In Runge kutta methods the step size can be changed easily.
Module 2

Lecture 2

Multi Step Methods

Predictor corrector Methods

Contd

Keywords: iterative methods, stability

A predictor-corrector method refers to the use of the predictor equation with one
subsequent application of the corrector equation and the value so obtained is the final
solution at the grid point. This approach is used in example 2.1.

The predicted and corrected values are compared to obtain an estimate of the
truncation error associated with the integration step. The corrected values are accepted
if this error estimate does not exceed a specified maximum value. Otherwise, the
corrected values are rejected and the interval of integration is reduced starting from the
last accepted point. Likewise, if the error estimate becomes unnecessarily small, the
interval of integration may be increased. The predictor formula is more influential in the
stability properties of the predictor-corrector algorithm.

In another more commonly used approach, a predictor formula is used to get a first
estimate of the solution at next grid point and then the corrector formula is applied
iteratively until convergence is obtained. This is an iterative approach and corrector
formula is used iteratively. The number of derivative evaluations required is one greater
than the number of iterations of the corrector and it is clear that this number may in fact
exceed the number required by a Runge-Kutta algorithm. In this case, the stability
properties of the algorithm are completely determined by the corrector equation alone
and the predictor equation only influences the number of iterations required. The step
size is chosen sufficiently small to converge to the solution in one or two iterations. The
step size can be estimated from the error term in (2.4).

Example 2.2: Apply iterative method to solve IVP y=y+3t-t2 with y(0)=1 with h=0.1

Solution: With h=0.1 the computations are arranged in the table 2.6

Note that the corrector formula converges fast but is not converging to the solution of
the equation. It converges to the fixed point of difference scheme given by the corrector
formula. If h=0.05 then the solution converges to the exact solution in just two iterations.

h k t y f(t,y)
Milne Predictor corrector for t=0.4 with h=0.1
0.1 0 0 1 1
0.1 1 0.1 1.120341 1.410341 starting point1
0.1 2 0.2 1.233841 1.793841 starting point2
0.1 3 0.3 1.376339 2.186339 starting point3
0.1 4 0.4 1.719936 2.759936 predictor
0.1 5 0.4 1.677145 2.717145 corrector1
0.1 6 0.4 1.675719 2.715719 corrector2
0.1 7 0.4 1.675671 2.715671 corrector3
0.1 8 0.4 1.67567 2.71567 corrector4
0.1 9 0.4 1.67567 2.71567 corrector5
Milne Predictor corrector at t=0.5 with h=0.1
0.1 1 0.1 1.120341 1.410341
0.1 2 0.2 1.233841 1.793841 Starting values
0.1 3 0.3 1.376339 2.186339
0.1 4 0.4 1.67567 2.71567
0.1 5 0.5 1.840277 3.090277 predictor
0.1 6 0.5 1.914315 3.164315 corrector1
0.1 7 0.5 1.916783 3.166783 corrector2
0.1 8 0.5 1.916865 3.166865 corrector3
0.1 9 0.5 1.916868 3.166868 corrector4
Table 2.6a iterative Milnes predictor corrector method example 2.2 with h=0.1

Several applications of corrector formula is needed to obtain the desired accuracy.


Decreasing the value of h will reduce the number of applications of corrector formula.
This is evident from the next table 2,6b.

Milne Predictor corrector at t=0.2 for h=0.05


h k t y f(t,y)
0.05 0 0 1 1
0.05 1 0.05 1.055042 1.202542 Starting values
0.05 2 0.1 1.120342 1.410342
0.05 3 0.15 1.196169 1.623669
0.05 4 0.2 1.282805 1.842805 predictor
0.05 5 0.2 1.282805 1.842805 corrector1
0.05 6 0.2 1.282805 1.842805 corrector2
Table 2.6b iterative Milnes predictor corrector method example 2.2 with h=0.05

A modified method, or modified predictor-corrector method, refers to the use of the


predictor equation and one subsequent application of the corrector equation with
incorporation of the error estimates as discussed below

Error estimates

Local Truncation Error in predictor and corrector formulae are given as

28 5 ( 4)
y(t j1 ) p j1 h f (); in(t j3 ,t j1 )
90

1 5 ( 4)
y(t j1 ) y j1 h f ( ); in(t j1,t j1 )
90

It is assumed that the derivative is constant over the interval [tj-3, tj+1]. Then simplification
yields the error estimates based on predicted and corrected values.

28
y(t j1 ) p j1 [y j1 p j1 ] (2.6)
29

Further, assume that the difference between predicted and corrected values at each
step changes slowly. Accordingly, pj and yj can be substituted for pj+1 and yj+1 in (2.6)
gives a modifier qj+1 as

28
q j1 p j1 [y j p j ]
29

This modified value is used in the functional evaluation f̄_{j+1} to be substituted
in the corrector formula. The scheme, known as the modified predictor-corrector
formula, is given as

p_{j+1} = y_{j−3} + (4h/3) [2 f_j − f_{j−1} + 2 f_{j−2}]

q_{j+1} = p_{j+1} + (28/29) [y_j − p_j];  f̄_{j+1} = f(t_{j+1}, q_{j+1})         (2.7)

y_{j+1} = y_{j−1} + (h/3) [f̄_{j+1} + 4 f_j + f_{j−1}]
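A short Python sketch of the modified scheme, hedged: the function name and the handling of the first multistep step (where the previous predicted value p_j does not yet exist) are our own implementation choices, and the starting values are supplied by the caller from any one-step method.

```python
import math

def milne_modified(f, t0, y0, starts, h, n_steps):
    """Modified Milne predictor-corrector: one predictor, one modifier
    and one corrector application per step."""
    t = [t0 + i * h for i in range(n_steps + 1)]
    y = [y0] + list(starts)                  # y1..y3 from a one-step method
    fv = [f(ti, yi) for ti, yi in zip(t, y)]
    p_prev = None                            # p_j, unavailable at the first step
    for j in range(3, n_steps):
        p = y[j - 3] + 4 * h / 3 * (2 * fv[j] - fv[j - 1] + 2 * fv[j - 2])
        # modifier q_{j+1} = p_{j+1} + (28/29)(y_j - p_j); skipped when p_j unknown
        q = p if p_prev is None else p + 28.0 / 29.0 * (y[j] - p_prev)
        yn = y[j - 1] + h / 3 * (f(t[j + 1], q) + 4 * fv[j] + fv[j - 1])
        y.append(yn)
        fv.append(f(t[j + 1], yn))
        p_prev = p
    return t, y

# y' = y + 3t - t^2, y(0) = 1; starting values taken from the exact solution
ex = lambda t: 2 * math.exp(t) + t * t - t - 1
t, y = milne_modified(lambda t, y: y + 3 * t - t * t, 0.0, 1.0,
                      [ex(0.05), ex(0.1), ex(0.15)], 0.05, 5)
```

Note that, unlike the iterated corrector, this scheme costs exactly two evaluations of f per step.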

Another problem associated with Milne's predictor-corrector method is instability
in certain cases: the error does not tend to zero as h tends to zero. This is
illustrated analytically for the simple IVP

y′ = Ay,  y(0) = y_0

Its solution at t = t_n is y_n = y_0 exp(A(t_n − t_0)). Substituting f = Ay in the
corrector formula gives the difference equation

y_{j+1} = y_{j−1} + (h/3) [A y_{j+1} + 4A y_j + A y_{j−1}]

or  (1 − hA/3) y_{j+1} − (4hA/3) y_j − (1 + hA/3) y_{j−1} = 0                   (2.8)

Let Z_1 and Z_2 be the roots of the quadratic equation

(1 − hA/3) Z^2 − (4hA/3) Z − (1 + hA/3) = 0

The solution of the difference equation (2.8) can be written as

y_j = C_1 Z_1^j + C_2 Z_2^j;  with Z_{1,2} = (2r ± √(3r^2 + 1)) / (1 − r),  r = hA/3

For stability, the behaviour of the solution as h tends to zero is to be explored.
Consider

Z_1 = (2r + √(3r^2 + 1)) / (1 − r) = 1 + 3r + O(r^2) = 1 + Ah + O(h^2)

Z_2 = (2r − √(3r^2 + 1)) / (1 − r) = −1 + r + O(r^2) = −(1 − Ah/3) + O(h^2)

Also, exp(hA) = 1 + hA + O(h^2) and exp(−hA/3) = 1 − hA/3 + O(h^2).


Hence the solution of the given IVP by the predictor-corrector method is represented as

y_j ≈ C_1 exp(A(t_j − t_0)) + C_2 (−1)^j exp(−A(t_j − t_0)/3)

When A > 0, the second term dies out while the first grows exponentially as j
increases, in agreement with the true solution. However, when A < 0 the first term
dies out and the second, spurious term grows exponentially irrespective of h. This
establishes the instability of the solution.
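The analysis can be confirmed numerically. The sketch below is our own illustration (A, h and the integration length are arbitrary choices): it applies the Milne-Simpson corrector to y′ = Ay with A = −1. Because f is linear, the implicit corrector can be solved for y_{j+1} exactly, so no iteration error enters; even with an exact starting value y_1, the parasitic root of magnitude about exp(−Ah/3) > 1 eventually swamps the decaying true solution.

```python
import math

# Milne-Simpson corrector for y' = Ay, solved exactly for y_{j+1}:
#   (1 - hA/3) y_{j+1} = (4hA/3) y_j + (1 + hA/3) y_{j-1}
A, h, n = -1.0, 0.1, 300            # integrate y' = -y out to t = 30
r = h * A / 3
y = [1.0, math.exp(A * h)]          # y_0 and the exact value y_1
for j in range(1, n):
    y.append((4 * r * y[j] + (1 + r) * y[j - 1]) / (1 - r))
exact = math.exp(A * n * h)         # true solution ~ 9.4e-14 at t = 30
# |y[n]| ends up many orders of magnitude larger than |exact|, with
# alternating sign: the parasitic mode (-1)^j exp(-At/3) has taken over
```

Shrinking h slows the per-step growth of the parasitic root but does not remove it, which is exactly the instability described above.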
Module 2

Lecture 3

Multi Step Methods

Adams Bashforth method

Keywords: interpolating polynomial, open integration


A general k-step method for solving the IVP is given as

y_{j+1} = a_{k−1} y_j + a_{k−2} y_{j−1} + … + a_0 y_{j+1−k}
          + h [b_k f(t_{j+1}, y_{j+1}) + b_{k−1} f(t_j, y_j) + … + b_0 f(t_{j+1−k}, y_{j+1−k})]      (2.9)

When b_k = 0, the method is explicit and y_{j+1} is determined explicitly from the
initial value y_0 and the starting values y_i, i = 1, 2, …, k−1. When b_k is nonzero,
the method is implicit.

The Milne predictor and corrector formulae are special cases of (2.9):

Predictor formula: k = 4, a_3 = a_2 = a_1 = 0, a_0 = 1; b_4 = 0, b_3 = 8/3, b_2 = −4/3, b_1 = 8/3, b_0 = 0

Corrector formula: k = 4, a_3 = a_1 = a_0 = 0, a_2 = 1; b_4 = 1/3, b_3 = 4/3, b_2 = 1/3, b_1 = b_0 = 0

Another category of multistep methods, known as Adams methods, is obtained from

y_{j+1} = y_j + ∫_{t_j}^{t_{j+1}} f(t, y(t)) dt ≈ y_j + ∫_{t_j}^{t_{j+1}} Σ_{i=0}^{r} a_i t^i dt

Here the integration is carried out only over the last panel [t_j, t_{j+1}], while
values of f at a number of equi-spaced grid points are used to construct the
interpolating polynomial. Both open and closed integration are considered, giving
two types of formulae. The integration scheme is shown in fig. 2.3.
Fig. 2.3: Schematic diagram for open and closed Adams integration formulas. In the
open scheme the interpolation points t_{j−3}, …, t_j exclude t_{j+1}; in the closed
scheme t_{j+1} is included. In both cases the integration extends only over
[t_j, t_{j+1}].
The open integration of the Adams formula gives the Adams-Bashforth formula, while
the closed integration gives the Adams-Moulton formula. Different degrees of the
interpolating polynomial, depending upon the number of interpolation points, give
rise to formulae of different orders. Although these formulae can be derived in many
different ways, here a backward application of the Taylor series expansion is used
for the derivation of the second order Adams-Bashforth open formula.

Second Order Adams Bashforth open formula

For the second order formula, the interpolating polynomial through the points t_j
and t_{j−1} is used in

y_{j+1} = y_j + ∫_{t_j}^{t_{j+1}} f(t, y(t)) dt

Expanding the left hand side in a Taylor series about t_j gives

y_{j+1} = y_j + h [f_j + f′_j h/2 + f″_j h^2/6 + …]                             (2.10)

Also,  f′_j = (f_j − f_{j−1})/h + (h/2) f″_j + O(h^2)

Substitution and simplification yields the second order Adams-Bashforth formula

y_{j+1} = y_j + h [(3/2) f_j − (1/2) f_{j−1}] + (5/12) h^3 f″(ξ)                (2.11)

A fourth order Adams-Bashforth formula can be derived on similar lines and is
written as

y_{j+1} = y_j + h [(55/24) f_j − (59/24) f_{j−1} + (37/24) f_{j−2} − (9/24) f_{j−3}] + (251/720) h^5 f^(4)(ξ)    (2.12)
Example 2.3: Apply the Adams-Bashforth method to solve the IVP

y′ = y + 3t − t^2  with y(0) = 1 and h = 0.05

Solution: With h = 0.05 the computations are arranged in table 2.7.

h      k   t      y           f(t,y)                  Exact solution

0.05   0   0      1           1
0.05   1   0.05   1.0550422   1.202542   Starting
0.05   2   0.1    1.1203418   1.410342   values
0.05   3   0.15   1.1961685   1.623669
0.05   4   0.2    1.2828053   1.842805               1.2828055
0.05   5   0.25   1.3805503   2.06805                1.3805508

Table 2.7: Solution of example 2.3 by the Adams-Bashforth method
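A minimal Python sketch of formula (2.12); names are illustrative, and the starting values are passed in by the caller, here taken from the exact solution y = 2e^t + t^2 − t − 1 of example 2.3.

```python
import math

def adams_bashforth4(f, t0, y0, starts, h, n_steps):
    """Fourth order Adams-Bashforth (2.12); starts = [y1, y2, y3]."""
    t = [t0 + i * h for i in range(n_steps + 1)]
    y = [y0] + list(starts)
    fv = [f(ti, yi) for ti, yi in zip(t, y)]
    for j in range(3, n_steps):
        # y_{j+1} = y_j + (h/24)(55 f_j - 59 f_{j-1} + 37 f_{j-2} - 9 f_{j-3})
        y.append(y[j] + h / 24 * (55 * fv[j] - 59 * fv[j - 1]
                                  + 37 * fv[j - 2] - 9 * fv[j - 3]))
        fv.append(f(t[j + 1], y[j + 1]))
    return t, y

# Example 2.3: y' = y + 3t - t^2, y(0) = 1
ex = lambda t: 2 * math.exp(t) + t * t - t - 1
t, y = adams_bashforth4(lambda t, y: y + 3 * t - t * t, 0.0, 1.0,
                        [ex(0.05), ex(0.1), ex(0.15)], 0.05, 5)
# y[4] ≈ 1.2828053 at t = 0.2 and y[5] ≈ 1.3805503 at t = 0.25, as in table 2.7
```

Being explicit, the method needs only one new evaluation of f per step, at the cost of the larger error constant 251/720.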
Module 2

Lecture 4

Multi Step Methods

Adams Moulton method

Keywords: closed integration, local truncation error

Second Order Adams Moulton formula


A backward Taylor series is used for the integrand in the closed integration formula

y_{j+1} = y_j + ∫_{t_j}^{t_{j+1}} f(t, y(t)) dt

Expanding y_j about t_{j+1} gives

y_j = y_{j+1} − h [f_{j+1} − f′_{j+1} h/2 + f″_{j+1} h^2/6 − …]

so that

y_{j+1} = y_j + h [f_{j+1} − f′_{j+1} h/2 + f″_{j+1} h^2/6 − …]                 (2.13)

Also,  f′_{j+1} = (f_{j+1} − f_j)/h + (h/2) f″_{j+1} + O(h^2)

Substitution and simplification yields the second order Adams-Moulton formula

y_{j+1} = y_j + h [(1/2) f_{j+1} + (1/2) f_j] − (1/12) h^3 f″(ξ)                (2.14)
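Formula (2.14) is the trapezoidal rule; being implicit, each step has to solve for y_{j+1}. A minimal sketch (the Euler predictor and the fixed iteration count are our own implementation choices, not from the notes):

```python
import math

def adams_moulton2(f, t0, y0, h, n_steps, n_iter=3):
    """Second order Adams-Moulton (2.14): y_{j+1} = y_j + h/2 (f_{j+1} + f_j),
    solved by fixed-point iteration seeded with an Euler predictor."""
    t, y = t0, y0
    for _ in range(n_steps):
        fj = f(t, y)
        yn = y + h * fj                  # Euler predictor
        for _ in range(n_iter):          # corrector iterations
            yn = y + h / 2 * (f(t + h, yn) + fj)
        t, y = t + h, yn
    return y

# y' = y + 3t - t^2, y(0) = 1: four steps of h = 0.05 up to t = 0.2
y02 = adams_moulton2(lambda t, y: y + 3 * t - t * t, 0.0, 1.0, 0.05, 4)
```

Being only second order, the error at t = 0.2 is around 1e-4, much larger than the fourth order results of table 2.7.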

A fourth order Adams-Moulton formula can be obtained on similar lines:

y_{j+1} = y_j + h [(9/24) f_{j+1} + (19/24) f_j − (5/24) f_{j−1} + (1/24) f_{j−2}] − (19/720) h^5 f^(4)(ξ)       (2.15)

A predictor-corrector method based on the Adams integration formulas makes use of
the Adams-Bashforth formula (2.12) as predictor and the Adams-Moulton formula (2.15)
as corrector. Milne's method is considered better due to its smaller error terms;
however, the Adams method is preferred because the corrector formula in Milne's
method is unstable in some cases.

The predictor formula:

y_{j+1} = y_j + h [(55/24) f_j − (59/24) f_{j−1} + (37/24) f_{j−2} − (9/24) f_{j−3}] + (251/720) h^5 f^(4)(ξ)

The corrector formula:

y_{j+1} = y_j + h [(9/24) f_{j+1} + (19/24) f_j − (5/24) f_{j−1} + (1/24) f_{j−2}] − (19/720) h^5 f^(4)(ξ)

Example 2.4: Solve the IVP of example 2.3 by the Adams predictor-corrector method.

Solution at t = 0.20:

h      k   t      y           f(t,y)
0.05   0   0      1           1
0.05   1   0.05   1.0550422   1.202542   Starting
0.05   2   0.1    1.1203418   1.410342   values
0.05   3   0.15   1.1961685   1.623669
0.05   4   0.2    1.2828053   1.842805   predictor
0.05   5   0.2    1.2828055   1.842806   corrector
0.05   6   0.2    1.2828056   1.842806   corrector

Solution at t = 0.25:

h      k   t      y           f(t,y)
0.05   1   0.05   1.0550422   1.202542   Starting
0.05   2   0.1    1.1203418   1.410342   values
0.05   3   0.15   1.1961685   1.623669
0.05   4   0.2    1.2828056   1.842806
0.05   5   0.25   1.3805506   2.068051   predictor
0.05   6   0.25   1.3805509   2.068051   corrector
0.05   7   0.25   1.3805509   2.068051   corrector

Table 2.8: Solution of the IVP of example 2.3 by the Adams predictor-corrector method
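The computations of table 2.8 can be reproduced with a short sketch (the names and the fixed number of corrector applications are our own choices; the starting values again come from the exact solution):

```python
import math

def adams_pc4(f, t0, y0, starts, h, n_steps, n_corr=2):
    """Adams fourth order predictor-corrector: Adams-Bashforth (2.12)
    predicts, Adams-Moulton (2.15) corrects n_corr times per step."""
    t = [t0 + i * h for i in range(n_steps + 1)]
    y = [y0] + list(starts)
    fv = [f(ti, yi) for ti, yi in zip(t, y)]
    for j in range(3, n_steps):
        yn = y[j] + h / 24 * (55 * fv[j] - 59 * fv[j - 1]      # predictor
                              + 37 * fv[j - 2] - 9 * fv[j - 3])
        for _ in range(n_corr):                                # corrector
            yn = y[j] + h / 24 * (9 * f(t[j + 1], yn) + 19 * fv[j]
                                  - 5 * fv[j - 1] + fv[j - 2])
        y.append(yn)
        fv.append(f(t[j + 1], yn))
    return t, y

ex = lambda t: 2 * math.exp(t) + t * t - t - 1   # exact solution of example 2.3
t, y = adams_pc4(lambda t, y: y + 3 * t - t * t, 0.0, 1.0,
                 [ex(0.05), ex(0.1), ex(0.15)], 0.05, 5)
# y[4] ≈ 1.2828055 (t = 0.2) and y[5] ≈ 1.3805509 (t = 0.25), as in table 2.8
```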

Both the predictor and corrector formulae have local truncation errors of order O(h^5):

y(t_{k+1}) − p_{k+1} = (251/720) y^(5)(ξ_1) h^5

y(t_{k+1}) − y_{k+1} = −(19/720) y^(5)(ξ_2) h^5

If the fifth derivative is nearly constant and h is small, an error estimate can be
obtained by eliminating the derivative and simplifying:

y(t_{k+1}) − y_{k+1} ≈ −(19/270) [y_{k+1} − p_{k+1}]

Exercise 2
2.1 Consider the IVP
    y′ = e^(−t) − y,  y(0) = 1
    Compute the solution at times t = 0.05, 0.1 and 0.15 using RK4 with h = 0.05. Use
    these values to compute the solution at t = 0.2, 0.25 and 0.3 using the
    Milne-Simpson method. Compare with the exact solution y = (t + 1) e^(−t).
2.2 Consider the IVP
    y′ = y − t^2,  y(0) = 1
    Compute the solution at times t = 0.2, 0.4 and 0.6 using RK4 with h = 0.2. Apply
    the Adams-Bashforth method to compute the solution at t = 0.8 and 1.0.
2.3 Solve the IVP of exercise 2.2 by the fourth order Adams predictor-corrector method.