Numerical Methods


1. Single step and multi-step methods, predictor-corrector methods, Milne's method — 2 lectures
2. Adams-Moulton method — 1 lecture
3. Adams-Bashforth method — 1 lecture

Module 2

Lecture 1

Consider the IVP

y' = f(t, y),  y(t_0) = y_0   (2.1)

One step methods for solving the IVP (2.1) are those methods in which the solution y_{j+1} at the (j+1)th grid point involves only one previous grid point where the solution is already known. Accordingly, a general one step method may be written as

y_{j+1} = y_j + h φ(t_j, y_j, h)

The increment function φ depends on the solution y_j at the previous grid point t_j and on the step size h. If y_{j+1} can be determined simply by evaluating the right hand side, the method is explicit. The methods developed in module 1 are one step methods. These methods might use additional function evaluations at a number of points between t_j and t_{j+1}, but those evaluations are not used in further computations at advanced grid points. In these methods the step size can be changed according to the requirement.

It may be reasonable to develop methods that use more information about the solution (function values and derivatives) at previously computed points while computing the solution at the next grid point. Such methods, which use information at more than one previous grid point, are known as multi-step methods and are expected to give better results than one step methods.

To determine the solution y_{j+1}, a multi-step method or k-step method uses values of y(t) and f(t, y(t)) at the k previous grid points t_{j-i}, i = 0, 1, ..., k-1. y_0 is called the initial point, while y_1, ..., y_{k-1} are the starting points. The starting points are computed using some suitable one step method. Thus multi-step methods are not self starting.

Multi-step formulae are obtained by integrating the differential equation and replacing f(t, y(t)) by an interpolating polynomial of degree r through equi-spaced points:

y_{j+1} = y_{j-k} + ∫_{t_{j-k}}^{t_{j+1}} f(t, y(t)) dt ≈ y_{j-k} + ∫_{t_{j-k}}^{t_{j+1}} ( Σ_{i=0}^{r} a_i t^i ) dt   (2.2)

The integration over the interval is shown in Fig 2.1.

The method may be explicit or implicit. An implicit method involves y_{j+1} on both sides of the formula. First an explicit formula, known as the predictor formula, is used to predict y_{j+1}. Then another formula, known as the corrector formula, is used to improve the predicted value of y_{j+1}. The predictor-corrector methods form a large class of general methods for numerical integration of ordinary differential equations. A popular predictor-corrector scheme is the Milne-Simpson method.

Milne-Simpson method

Its predictor is based on integration of f(t, y(t)) over the interval [t_{j-3}, t_{j+1}] with k = 3 and r = 3. The interpolating polynomial is constructed to match the function at the three points t_{j-2}, t_{j-1} and t_j, and the function is extrapolated at both ends, over [t_{j-3}, t_{j-2}] and [t_j, t_{j+1}], as shown in Fig 2.2(a). Since the end points are not used, an open integration formula results for the integral in (2.2):

p_{j+1} = y_{j+1} ≈ y_{j-3} + (4h/3)[2f(t_j, y_j) − f(t_{j-1}, y_{j-1}) + 2f(t_{j-2}, y_{j-2})] + (14/45) h^5 f^{(4)}(ξ),  ξ in (t_{j-3}, t_{j+1})   (2.3)

The explicit predictor formula is of O(h^4) and requires starting values, which should be of the same order of accuracy. Accordingly, if the initial point is y_0, the starting values y_1, y_2 and y_3 are computed by the fourth order Runge-Kutta method. The predictor formula (2.3) then predicts the approximate solution y_4 as p_4 at the next grid point. The predictor formula (2.3) alone is found to be unstable (proof not included) and the solution so obtained may grow exponentially.

The predicted value is then improved using a corrector formula, which is developed similarly. For this, a second polynomial for f(t, y(t)) is constructed, based on the points (t_{j-1}, f_{j-1}), (t_j, f_j) and the predicted point (t_{j+1}, f_{j+1}). Closed integration of this interpolating polynomial over the interval [t_{j-1}, t_{j+1}] is carried out [see Fig 2.2(b)]. The result is the familiar Simpson's rule:

y_{j+1} = y_{j-1} + (h/3)[f(t_{j-1}, y_{j-1}) + 4f(t_j, y_j) + f(t_{j+1}, y_{j+1})] − (1/90) h^5 f^{(4)}(η),  η in (t_{j-1}, t_{j+1})   (2.4)


Fig 2.2 (a) Open Scheme for Predictor (b) Closed integration for Corrector

In the corrector formula, f_{j+1} is computed from the predicted value p_{j+1} obtained from (2.3).

Denoting f_j = f(t_j, y_j), equations (2.3) and (2.4) give the following predictor and corrector formulae, respectively, for solving the IVP (2.1) at equi-spaced discrete points t_4, t_5, ...

p_{j+1} = y_{j+1} ≈ y_{j-3} + (4h/3)(2f_j − f_{j-1} + 2f_{j-2})

y_{j+1} = y_{j-1} + (h/3)(f_{j+1} + 4f_j + f_{j-1})   (2.5)

The solution at the initial point t_0 is given by the initial condition, and t_1, t_2 and t_3 are the starting points where the solution is computed using some other suitable method such as the Runge-Kutta method. This is illustrated in example 2.1.
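As a sketch of how the scheme (2.5) can be organized in code — function and variable names here are illustrative, not from the text — the following uses RK4 for the three starting values and then applies one predictor and one corrector evaluation per step:

```python
def rk4_step(f, t, y, h):
    # One classical fourth-order Runge-Kutta step (used for starting values).
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def milne_simpson(f, t0, y0, h, n):
    # Solve y' = f(t, y), y(t0) = y0 over n equi-spaced steps using the
    # Milne-Simpson predictor-corrector (2.5), one corrector pass per step.
    t = [t0 + i * h for i in range(n + 1)]
    y = [y0]
    for j in range(3):                      # starting values y1, y2, y3 by RK4
        y.append(rk4_step(f, t[j], y[j], h))
    fv = [f(tj, yj) for tj, yj in zip(t, y)]
    for j in range(3, n):
        # Predictor: p = y_{j-3} + (4h/3)(2 f_j - f_{j-1} + 2 f_{j-2})
        p = y[j - 3] + 4 * h / 3 * (2 * fv[j] - fv[j - 1] + 2 * fv[j - 2])
        # Corrector: y_{j+1} = y_{j-1} + (h/3)(f(t_{j+1}, p) + 4 f_j + f_{j-1})
        yc = y[j - 1] + h / 3 * (f(t[j + 1], p) + 4 * fv[j] + fv[j - 1])
        y.append(yc)
        fv.append(f(t[j + 1], yc))
    return t, y
```

For the IVP of example 2.1 below (y' = y + 3t − t^2, y(0) = 1), this reproduces the RK4 starting value y_1 = 1.1203415 of Table 2.1.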

Example 2.1: Solve the IVP y' = y + 3t − t^2 with y(0) = 1 using Milne's predictor-corrector method; take h = 0.1.

Solution: Table 2.1 computes the starting values using the fourth order Runge-Kutta method.

k   t     y           k1 = f(t,y)   y_{k+1}
0   0     1           1             1.1203415
1   0.1   1.1203415   1.410341      1.2338409
2   0.2   1.2338409   1.793841      1.3763387
3   0.3   1.3763387

Table 2.1: Starting values computed by the fourth order Runge-Kutta method with h = 0.1

Using the initial value and the starting values at t = 0, 0.1, 0.2 and 0.3, the predictor formula predicts the solution at t = 0.4 as 1.7199359. This value is used in the corrector formula to give the corrected value. The solution is then continued at advanced grid points [see table 2.2].

Milne predictor-corrector, h = 0.1

k   t     y           f(t,y)     corrector     f(t, corrected y)
0   0     1           1
1   0.1   1.1203415   1.410341   (starting points, RK4)
2   0.2   1.2338409   1.793841
3   0.3   1.3763387   2.186339
4   0.4   1.7199359   2.759936   1.67714525    2.717145
5   0.5   2.0317593   3.281759   1.920894708   3.170895

Table 2.2: Example 2.1 using the predictor-corrector method with h = 0.1

The exact solution is available in this example, though it may not be for other equations. Table 2.3 compares the computed solutions with the exact solution of the given equation. Clearly the accuracy of the predictor-corrector method is better than that of the Runge-Kutta method.

t     RK4        Milne p-c    Exact
0.1   1.120341   —            1.120342
0.2   1.233841   —            1.282806
0.3   1.376339   —            1.489718
0.4   1.5452     1.6771453    1.743649
0.5   1.7369     1.9208947    2.047443

Table 2.3: Comparison with the exact solution, h = 0.1

Milne predictor-corrector, h = 0.05

k   t      y          f(t,y)     p          f(t,p)
0   0      1          1
1   0.05   1.055042   1.202542   (starting point 1)
2   0.1    1.120342   1.410342   (starting point 2)
3   0.15   1.196169   1.623669   (starting point 3)
4   0.2    1.282805   1.842805   1.282805   1.842805
5   0.25   1.380551   2.068051   1.380551   2.068051

Table 2.4: Example 2.1 using the predictor-corrector method with h = 0.05

Example 2.1 is repeated with h = 0.05 in table 2.4. Table 2.5 clearly indicates that better accuracy is achieved with h = 0.05.

t      Milne p-c   Exact
0.05   1.055042    1.055042
0.1    1.120342    1.120342
0.15   1.196169    1.196168
0.2    1.282805    1.282806
0.25   1.380551    1.380551

Table 2.5: Comparison with the exact solution, h = 0.05

Predictor-corrector methods are preferred over Runge-Kutta methods as they require only two function evaluations per integration step, while the corresponding fourth order Runge-Kutta method requires four. The need for starting points is the weakness of predictor-corrector methods. In Runge-Kutta methods the step size can be changed easily.

Module 2

Lecture 2

Contd

A predictor-corrector method refers to the use of the predictor equation with one subsequent application of the corrector equation; the value so obtained is the final solution at the grid point. This approach is used in example 2.1.

The predicted and corrected values are compared to obtain an estimate of the

truncation error associated with the integration step. The corrected values are accepted

if this error estimate does not exceed a specified maximum value. Otherwise, the

corrected values are rejected and the interval of integration is reduced starting from the

last accepted point. Likewise, if the error estimate becomes unnecessarily small, the

interval of integration may be increased. The predictor formula is more influential in the

stability properties of the predictor-corrector algorithm.

In another more commonly used approach, a predictor formula is used to get a first

estimate of the solution at next grid point and then the corrector formula is applied

iteratively until convergence is obtained. This is an iterative approach and corrector

formula is used iteratively. The number of derivative evaluations required is one greater

than the number of iterations of the corrector and it is clear that this number may in fact

exceed the number required by a Runge-Kutta algorithm. In this case, the stability

properties of the algorithm are completely determined by the corrector equation alone

and the predictor equation only influences the number of iterations required. The step

size is chosen sufficiently small to converge to the solution in one or two iterations. The

step size can be estimated from the error term in (2.4).

Example 2.2: Apply the iterative approach to solve the IVP y' = y + 3t − t^2 with y(0) = 1, taking h = 0.1.

Solution: With h = 0.1 the computations are arranged in table 2.6.

Note that the corrector formula converges quickly, but not to the exact solution of the differential equation: it converges to the fixed point of the difference scheme given by the corrector formula. If h = 0.05, the iteration converges in just two iterations.

Milne predictor-corrector for t = 0.4 with h = 0.1

h     k   t     y          f(t,y)

0.1 0 0 1 1

0.1 1 0.1 1.120341 1.410341 starting point1

0.1 2 0.2 1.233841 1.793841 starting point2

0.1 3 0.3 1.376339 2.186339 starting point3

0.1 4 0.4 1.719936 2.759936 predictor

0.1 5 0.4 1.677145 2.717145 corrector1

0.1 6 0.4 1.675719 2.715719 corrector2

0.1 7 0.4 1.675671 2.715671 corrector3

0.1 8 0.4 1.67567 2.71567 corrector4

0.1 9 0.4 1.67567 2.71567 corrector5

Milne Predictor corrector at t=0.5 with h=0.1

0.1 1 0.1 1.120341 1.410341

0.1 2 0.2 1.233841 1.793841 Starting values

0.1 3 0.3 1.376339 2.186339

0.1 4 0.4 1.67567 2.71567

0.1 5 0.5 1.840277 3.090277 predictor

0.1 6 0.5 1.914315 3.164315 corrector1

0.1 7 0.5 1.916783 3.166783 corrector2

0.1 8 0.5 1.916865 3.166865 corrector3

0.1 9 0.5 1.916868 3.166868 corrector4

Table 2.6a: Iterative Milne's predictor-corrector method, example 2.2 with h = 0.1

Decreasing the value of h reduces the number of applications of the corrector formula. This is evident from table 2.6b.

h k t y f(t,y)

0.05 0 0 1 1

0.05 1 0.05 1.055042 1.202542 Starting values

0.05 2 0.1 1.120342 1.410342

0.05 3 0.15 1.196169 1.623669

0.05 4 0.2 1.282805 1.842805 predictor

0.05 5 0.2 1.282805 1.842805 corrector1

0.05 6 0.2 1.282805 1.842805 corrector2

Table 2.6b: Iterative Milne's predictor-corrector method, example 2.2 with h = 0.05

A third approach uses the predictor equation and one subsequent application of the corrector equation with incorporation of the error estimates, as discussed below.

Error estimates

From (2.3) and (2.4), the local truncation errors of the predictor and corrector are

y(t_{j+1}) − p_{j+1} = (28/90) h^5 f^{(4)}(ξ_1),  ξ_1 in (t_{j-3}, t_{j+1})

y(t_{j+1}) − y_{j+1} = −(1/90) h^5 f^{(4)}(ξ_2),  ξ_2 in (t_{j-1}, t_{j+1})

It is assumed that the fourth derivative is constant over the interval [t_{j-3}, t_{j+1}]. Eliminating h^5 f^{(4)} between the two expressions then yields an error estimate based on the predicted and corrected values:

y(t_{j+1}) − p_{j+1} ≈ (28/29)[y_{j+1} − p_{j+1}]   (2.6)

Further, assume that the difference between predicted and corrected values changes slowly from step to step. Substituting p_j and y_j for p_{j+1} and y_{j+1} in (2.6) gives a modifier q_{j+1}:

q_{j+1} = p_{j+1} + (28/29)[y_j − p_j]

which is used in place of the predicted value in the corrector formula. This scheme is known as the modified predictor-corrector formula and is given as

p_{j+1} = y_{j+1} ≈ y_{j-3} + (4h/3)(2f_j − f_{j-1} + 2f_{j-2})

q_{j+1} = p_{j+1} + (28/29)[y_j − p_j];  f_{j+1} = f(t_{j+1}, q_{j+1})

y_{j+1} = y_{j-1} + (h/3)(f_{j+1} + 4f_j + f_{j-1})   (2.7)
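One step of (2.7) might be sketched as follows (a hypothetical helper; p_prev is the previous step's predicted value, or None on the first step, when no modifier is available and q = p):

```python
def modified_milne_step(f, t_next, y_hist, f_hist, p_prev, h):
    # One step of the modified predictor-corrector (2.7).
    # y_hist = [y_{j-3}, y_{j-2}, y_{j-1}, y_j]; f_hist holds the matching f values.
    y_jm3, _, y_jm1, y_j = y_hist
    f_jm2, f_jm1, f_j = f_hist[1], f_hist[2], f_hist[3]
    p = y_jm3 + 4 * h / 3 * (2 * f_j - f_jm1 + 2 * f_jm2)       # predictor
    q = p if p_prev is None else p + 28 / 29 * (y_j - p_prev)   # modifier
    y_next = y_jm1 + h / 3 * (f(t_next, q) + 4 * f_j + f_jm1)   # corrector
    return y_next, p
```

With the starting values of example 2.1 (h = 0.1), the first step to t = 0.4 returns the predicted 1.719936 and corrected 1.677145 seen in the tables.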

Another problem associated with Milne's predictor-corrector method is instability in certain cases: the error does not tend to zero as h tends to zero. This is illustrated analytically for the simple IVP

y' = Ay,  y(0) = y_0

Its solution at t = t_n is y_n = y_0 exp(A(t_n − t_0)). Substituting y' = Ay in the corrector formula gives the difference equation

y_{j+1} = y_{j-1} + (h/3)(A y_{j+1} + 4A y_j + A y_{j-1})

or

(1 − hA/3) y_{j+1} − (4hA/3) y_j − (1 + hA/3) y_{j-1} = 0   (2.8)

The corresponding characteristic equation is

(1 − hA/3) Z^2 − (4hA/3) Z − (1 + hA/3) = 0

so that

y_j = C_1 Z_1^j + C_2 Z_2^j,  with  Z_{1,2} = [2r ± sqrt(3r^2 + 1)] / (1 − r),  r = hA/3

Expanding the roots for small r,

Z_1 = [2r + sqrt(3r^2 + 1)] / (1 − r) = 1 + 3r + O(r^2) = 1 + Ah + O(h^2)

Z_2 = [2r − sqrt(3r^2 + 1)] / (1 − r) = −(1 − r) + O(r^2) = −(1 − Ah/3) + O(h^2)

Hence the solution of the given IVP by the predictor-corrector method is represented as

y_j ≈ C_1 e^{A t_j} + C_2 (−1)^j e^{−A t_j / 3}

When A > 0, the second term dies out but the first grows exponentially as j increases, irrespective of h; this reproduces the growth of the true solution. When A < 0, however, the first term dies out while the second, spurious term grows exponentially and swamps the decaying true solution. This establishes the instability of the method.
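The instability can also be observed numerically. The sketch below (illustrative, for y' = Ay with A = −1) iterates the corrector difference equation (2.8) from near-exact starting values; since |Z_2| > 1 for A < 0, the spurious root eventually dominates and the iterates oscillate in sign with growing amplitude, even though the true solution e^{−t} decays:

```python
def milne_corrector_recurrence(A, h, y0, y1, n):
    # Iterate the Milne corrector difference equation (2.8) for y' = A*y:
    #   (1 - r) y_{j+1} - 4 r y_j - (1 + r) y_{j-1} = 0,  with r = h*A/3.
    r = h * A / 3
    ys = [y0, y1]
    for _ in range(n - 1):
        ys.append((4 * r * ys[-1] + (1 + r) * ys[-2]) / (1 - r))
    return ys

# A = -1, h = 0.1, starting from y0 = 1 and y1 ~ exp(-0.1): the exact
# solution at t = 40 is e^{-40}, essentially zero, but the computed
# values oscillate in sign and grow as the spurious root takes over.
ys = milne_corrector_recurrence(-1.0, 0.1, 1.0, 0.9048, 400)
```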

Module 2

Lecture 3

A general k-step method for solving the IVP is given as

y_{j+1} = a_{k-1} y_j + a_{k-2} y_{j-1} + ... + a_0 y_{j+1-k}
          + h [ b_k f(t_{j+1}, y_{j+1}) + b_{k-1} f(t_j, y_j) + ... + b_0 f(t_{j+1-k}, y_{j+1-k}) ]   (2.9)

When b_k = 0, the method is explicit and y_{j+1} is explicitly determined from the initial value y_0 and the starting values y_i, i = 1, 2, ..., k−1. When b_k is nonzero, the method is implicit.

Milne's predictor-corrector formulae are special cases of (2.9). For the predictor: k = 4, a_3 = a_2 = a_1 = 0, a_0 = 1, b_4 = 0, b_3 = 8/3, b_2 = −4/3, b_1 = 8/3, b_0 = 0.

Another category of multistep methods, known as Adams methods, is obtained from

y_{j+1} = y_j + ∫_{t_j}^{t_{j+1}} f(t, y(t)) dt ≈ y_j + ∫_{t_j}^{t_{j+1}} ( Σ_{i=0}^{r} a_i t^i ) dt

Here the integration is carried out only over the last panel [t_j, t_{j+1}], while function values at several previous equi-spaced points are used for the interpolating polynomial. Both open and closed integration are considered, giving two types of formulas. The integration scheme is shown in fig. 2.3.

Fig. 2.3: Schematic diagram for open and closed Adams integration formulas

The open integration of the Adams formula gives the Adams-Bashforth formula, while closed integration gives the Adams-Moulton formula. Different degrees of interpolating polynomial, depending upon the number r of interpolation points, give rise to formulae of different order. Although these formulae can be derived in many ways, here a backward Taylor series expansion is used to derive the second order Adams-Bashforth (open) formula.

For the second order formula, the points t_j and t_{j-1} are used in the following.

y_{j+1} = y_j + ∫_{t_j}^{t_{j+1}} f(t, y(t)) dt

Expanding the integrand in a Taylor series about t_j and integrating term by term,

y_{j+1} = y_j + h [ f_j + f'_j h/2 + f''_j h^2/6 + ... ]   (2.10)

Also,

f'_j = (f_j − f_{j-1})/h + (h/2) f''_j + O(h^2)

Substituting for f'_j in (2.10) gives

y_{j+1} = y_j + h [ (3/2) f_j − (1/2) f_{j-1} ] + (5/12) h^3 f''(ξ)   (2.11)

A fourth order Adams-Bashforth formula can be derived along similar lines:

y_{j+1} = y_j + h [ (55/24) f_j − (59/24) f_{j-1} + (37/24) f_{j-2} − (9/24) f_{j-3} ] + (251/720) h^5 f^{(4)}(ξ)   (2.12)
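A sketch of (2.12) in code (illustrative names; RK4 supplies the three starting values, as in the text):

```python
def rk4_step(f, t, y, h):
    # One fourth-order Runge-Kutta step, used only for the starting values.
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def adams_bashforth4(f, t0, y0, h, n):
    # Fourth-order Adams-Bashforth (2.12): explicit, and needing only one
    # new f evaluation per step once the start-up values are in place.
    t = [t0 + i * h for i in range(n + 1)]
    y = [y0]
    for j in range(3):
        y.append(rk4_step(f, t[j], y[j], h))
    fv = [f(tj, yj) for tj, yj in zip(t, y)]
    for j in range(3, n):
        y_next = y[j] + h * (55 * fv[j] - 59 * fv[j - 1]
                             + 37 * fv[j - 2] - 9 * fv[j - 3]) / 24
        y.append(y_next)
        fv.append(f(t[j + 1], y_next))
    return t, y
```

For the IVP of example 2.1 with h = 0.05, this reproduces the value y ≈ 1.2828053 at t = 0.2 shown in Table 2.7.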

Example 2.3: Apply the Adams-Bashforth method to solve the IVP of example 2.1.

Solution: With h = 0.05 the computations are arranged in table 2.7.

h      k   t      y           f(t,y)     Exact solution
0.05   0   0      1           1
0.05   1   0.05   1.0550422   1.202542   (starting values)
0.05   2   0.1    1.1203418   1.410342
0.05   3   0.15   1.1961685   1.623669
0.05   4   0.2    1.2828053   1.842805   1.2828055

Table 2.7: Adams-Bashforth method, example 2.3

Module 2

Lecture 4

A backward Taylor series is used for the integrand in the closed integration formula:

y_{j+1} = y_j + ∫_{t_j}^{t_{j+1}} f(t, y(t)) dt = y_j + h [ f_{j+1} − f'_{j+1} h/2 + f''_{j+1} h^2/6 − ... ]   (2.13)

Also,

f'_{j+1} = (f_{j+1} − f_j)/h + (h/2) f''_{j+1} + O(h^2)

Substituting in (2.13) gives the second order Adams-Moulton formula

y_{j+1} = y_j + h [ (1/2) f_{j+1} + (1/2) f_j ] − (1/12) h^3 f''(ξ)   (2.14)

The fourth order Adams-Moulton formula is

y_{j+1} = y_j + h [ (9/24) f_{j+1} + (19/24) f_j − (5/24) f_{j-1} + (1/24) f_{j-2} ] − (19/720) h^5 f^{(4)}(ξ)   (2.15)

The Adams predictor-corrector method uses the Adams-Bashforth formula (2.12) as predictor and the Adams-Moulton formula (2.15) as corrector. Milne's method has smaller error terms; however, the Adams method is often preferred because of the instability of the corrector formula in Milne's method in some cases.

The predictor formula:

y_{j+1} = y_j + h [ (55/24) f_j − (59/24) f_{j-1} + (37/24) f_{j-2} − (9/24) f_{j-3} ] + (251/720) h^5 f^{(4)}(ξ)

The corrector formula:

y_{j+1} = y_j + h [ (9/24) f_{j+1} + (19/24) f_j − (5/24) f_{j-1} + (1/24) f_{j-2} ] − (19/720) h^5 f^{(4)}(ξ)
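One predictor-corrector step with these formulae can be sketched as follows (an illustrative helper; y and fv hold the last four solution and derivative values, most recent last):

```python
def adams_pc_step(f, t_next, y, fv, h):
    # Adams fourth-order predictor-corrector: AB4 predictor (2.12)
    # followed by one application of the AM4 corrector (2.15).
    p = y[-1] + h * (55 * fv[-1] - 59 * fv[-2] + 37 * fv[-3] - 9 * fv[-4]) / 24
    fp = f(t_next, p)
    yc = y[-1] + h * (9 * fp + 19 * fv[-1] - 5 * fv[-2] + fv[-3]) / 24
    return p, yc
```

Fed with the starting values of Table 2.8 at h = 0.05, it returns the predicted 1.2828053 and corrected 1.2828055 at t = 0.2.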

Example 2.4: Solve the IVP of example 2.3 by the Adams predictor-corrector method.

Solution:

At t = 0.20:

h      k   t      y           f(t,y)
0.05   0   0      1           1
0.05   1   0.05   1.0550422   1.202542   (starting values)
0.05   2   0.1    1.1203418   1.410342
0.05   3   0.15   1.1961685   1.623669
0.05   4   0.2    1.2828053   1.842805   predictor
0.05   5   0.2    1.2828055   1.842806   corrector
0.05   6   0.2    1.2828056   1.842806   corrector

At t = 0.25:

0.05   1   0.05   1.0550422   1.202542   (starting values)
0.05   2   0.1    1.1203418   1.410342
0.05   3   0.15   1.1961685   1.623669
0.05   4   0.2    1.2828056   1.842806
0.05   5   0.25   1.3805506   2.068051   predictor
0.05   6   0.25   1.3805509   2.068051   corrector
0.05   7   0.25   1.3805509   2.068051   corrector

Table 2.8: Solution of the IVP of example 2.3 by the Adams predictor-corrector method

Both the predictor and corrector formulae have local truncation errors of order O(h^5):

y(t_{k+1}) − p_{k+1} = (251/720) y^{(5)}(ξ_1) h^5

y(t_{k+1}) − y_{k+1} = −(19/720) y^{(5)}(ξ_2) h^5

If the fifth derivative is nearly constant and h is small, an error estimate is obtained by eliminating the derivative and simplifying:

y(t_{k+1}) − y_{k+1} ≈ −(19/270) [ y_{k+1} − p_{k+1} ]
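This estimate suggests a simple acceptance test for automatic step-size control (a sketch; the tolerance thresholds and the halving/doubling policy are illustrative, not from the text):

```python
def adams_error_estimate(p_next, y_next):
    # Error estimate for the Adams corrector: |y(t) - y_c| ~ (19/270)|y_c - p|.
    return 19.0 / 270.0 * abs(y_next - p_next)

def choose_action(p_next, y_next, tol):
    # Accept the step if the estimate is below tol; suggest halving h if not,
    # and doubling h when the estimate is far below tol.
    est = adams_error_estimate(p_next, y_next)
    if est > tol:
        return "halve"
    if est < tol / 100:
        return "double"
    return "accept"
```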

Exercise 2

2.1 Consider the IVP

y' = e^{−t} − y,  y(0) = 1

Compute the solution at t = 0.05, 0.1 and 0.15 using RK4 with h = 0.05. Use these values to compute the solution at t = 0.2, 0.25 and 0.3 using the Milne-Simpson method. Compare with the exact solution y = (t + 1) e^{−t}.

2.2 Consider the IVP

y' = y − t^2,  y(0) = 1

Compute the solution at t = 0.2, 0.4 and 0.6 using RK4 with h = 0.2. Apply the Adams-Bashforth method to compute the solution at t = 0.8 and 1.0.

2.3 Solve the IVP of exercise 2.2 by the fourth order Adams predictor-corrector method.

Course: Numerical Solution of Ordinary Differential Equations

1 Single step and Multi-step methods, Predictor 2

corrector methods, Milnes method

2 Adams-Moulton method. 1

3 Adams Bashforth method 1

Module 2

Lecture 1

Consider I VP

One step methods for solving IVP (2.1) are those methods in which the solution yj+1 at

the j+1th grid point involves only one previous grid point where the solution is already

known. Accordingly, a general one step method may be written as

The increment function depends on solution yj at previous grid point tj and step size h.

If yj+1 can be determined simply by evaluating right hand side then the method is explicit

method. The methods developed in the module 1 are one step methods. These

methods might use additional functional evaluations at number of points between tj and

tj+1. These functional evaluations are not used in further computations at advanced grid

points. In these methods step size can be changed according to the requirement.

It may be reasonable to develop methods that use more information about the solution

(functional values and derivatives) at previously known values while computing solution

at the next grid point. Such methods using information at more than one previous grid

points are known as multi-step methods and are expected to give better results than

one step methods.

To determine solution yj+1, a multi-step method or k-step method uses values of y(t) and

f(t,y(t)) at k previous grid points tj-k, k=0,1,2,k-1,. yj is called the initial point while yj-k

are starting points. The starting points are computed using some suitable one step

method. Thus multi-step methods are not self starting methods.

t j1 t j1

r

y j1 y j f(t,y(t))dt y j a t dt

j

j (2.2)

t jk t j k j 0

using equi-spaced points. The integration over the interval is shown in the Fig 2.1.

The method may be explicit or implicit. An implicit method involves computation of yj+1 in

terms of yj+1. First an explicit formula known as predictor formula is used to predict yj+1.

Then another formula, known as corrector formula, is used to improve the predicted

value of yj+1. The predictor-corrector methods form a large class of general methods for

numerical integration of ordinary differential equations. A popular predictor-corrector

scheme is known as the Milne-Simpson method.

Milne-Simpson method

Its predictor is based on integration of f (t, y(t)) over the interval [tj3, tj+1] with k=3 and

r=3. The interpolating polynomial is considered to match the function at three points tj2,

tj1, and tj and the function is extrapolated at both the ends in the interval [tj3, tj-2] and [tj,

tj+1] as shown in the Fig 2.2(a). Since the end points are not used, an open integration

formula is used for the integral in (2.2):

4h 14

p j1 y j1 y j

3

2f(t j ,y j ) f(t j1,y j1 ) 2f(t j2 ,y j2 ) h5 f (4) (); in(t j3 ,t j1 ) (2.3)

45

The explicit predictor formula is of O(h4) and requires starting values. These starting

values should also be of same order of accuracy. Accordingly, if the initial point is y0

then the starting values y1, y2 and y3 are computed by fourth order Runge kutta method.

Then predictor formula (2.3) predicts the approximate solution y4 as p4 at next grid point.

The predictor formula (2.3) is found to be unstable (proof not included) and the solution

so obtained may grow exponentially.

The predicted value is then improved using a corrector formula. The corrector formula is

developed similarly. For this, a second polynomial for f (t, y(t)) is constructed, which is

based on the points (tj1, fj1), (tj, fj) and the predicted point (tj+1, fj+1). The closed

integration of the interpolating polynomial over the interval [tj, tj+1] is carried out [See Fig

2.2 (b)]. The result is the familiar Simpsons rule:

h 1 5 ( 4)

y j1 y j

3

f(t j1,y j1 ) 4f(t j ,y j ) f(t j1,y j1 )

90

h f ( ); in(t j1,t j1 ) (2.4)

fj-1

fj-3 fj

fj-2

fj-k fj+1

tj-1

x

x

xj-1

x

x

x

x

x

x

t

t

tj-3 tj-2 tj-1 tj tj+1

tj-3 tj-1 tj-1 tj tj+1

(a) (b)

Fig 2.2 (a) Open Scheme for Predictor (b) Closed integration for Corrector

In the corrector formula fj+1 is computed from the predicted value pj+1 as obtained from

(2.3).

Denoting f j f(t j , y j ) , the equations (2.3) and (2.4) gives the following predictor corrector

formulae, respectively, for solving IVP (2.1) at equi-spaced discrete points t4,t5,

4h

p j1 y j1 y j3 2fj fj1 2fj2

3 (2.5)

h

y j1 y j1 fj1 4f j f j1

3

The solution at initial point t0 is given in the initial condition and t1, t2 and t3 are the

starting points where solution is to be computed using some other suitable method such

as Runge Kutta method. This is illustrated in the example 2.1

Example 2.1: Solve IVP y=y+3t-t2 with y(0)=1; using Milnes predictor corrector method

take h=0.1

Solution: The following table 2.1 computes Starting values using fourth order Runge

Kutta method.

k y k1= t+ y+ k2 y+ k3 y+ y+h(k1+2k

t f(t,y) h/2 h/2*k h/2*k2 t+h h*k3 k4 2+2k3+k4)

1 /6

0 0 1 1 0.05 1.05 1.197 1.05987 1.2074 0.1 1.1207 1.41074 1.1203415

5 5

1 0.1 1.1203 1.410 0.15 1.190 0.222 1.13147 1.559 0.2 1.2762 1.83624 1.2338409

34 859 7 7

2 0.2 1.2338 1.793 0.25 1.323 0.321 1.24993 1.9374 0.3 1.4276 2.23758 1.3763387

84 533 8 1

3 0.2 1.3763

Using initial value and starting values at t=0, 0.1, 0.2 and 0.3, the predictor formula

predicts the solution at t=0.4 as 1.7199359. It is used in corrector formula to give the

corrected value. The solution is continued at advanced grid points [see table 2.2].

MilnePredictorcorrector1 f(t,p)

k t y f(t,y) corrector

0 0 1 1

rk4 Milne pc exact

1 0.1 1.1203415 1.410341

2 0.2 1.2338409 1.793841 startingpoints

3 0.3 1.3763387 2.186339

4 0.4 1.7199359 2.759936 1.67714525 2.717145

5 0.5 2.0317593 3.281759 1.920894708 3.170895

Table 2.2: Example 2.1 using predictor corrector method with h=0.1

The exact solution is possible in this example; however it may not be possible for other

equations. Table 2.3 compares the solution with the exact solution of given equation.

Clearly the accuracy is better in predictor corrector method than the Runge-Kutta

method.

0.1 1.120341 1.120342

0.2 1.233841 1.282806

0.3 1.376339 1.489718

0.4 1.5452 1.6771453 1.743649

0.5 1.7369 1.9208947 2.047443

Milne Predictor corrector1 f(t,p)

k t y f(t,y)

0 0 1 1

1 0.05 1.055042 1.202542 starting point1

2 0.1 1.120342 1.410342 starting point2

3 0.15 1.196169 1.623669 starting point3

4 0.2 1.282805 1.842805 1.282805 1.842805

5 0.25 1.380551 2.068051 1.380551 2.068051

Table 2.4: Example 2.1 using predictor corrector method with h=0.05

The exercise 2.1 is repeated with h=0.5 in table 2.4.The table 2.5 clearly indicates that

the better accuracy is achieved with h=0.05 [see table 2.5]

0.05 1.055042 1.055042

0.1 1.120342 1.120342

0.15 1.196169 1.196168

0.2 1.282805 1.282806

0.25 1.380551 1.380551

Predictor Corrector methods are preferred over Runge-Kutta as it requires only two

functional evaluations per integration step while the corresponding fourth order Runge-

Kutta requires four evaluations. The starting points are the weakness of predictor-

corrector methods. In Runge kutta methods the step size can be changed easily.

Module 2

Lecture 2

Contd

A predictor-corrector method refers to the use of the predictor equation with one

subsequent application of the corrector equation and the value so obtained is the final

solution at the grid point. This approach is used in example 2.1.

The predicted and corrected values are compared to obtain an estimate of the

truncation error associated with the integration step. The corrected values are accepted

if this error estimate does not exceed a specified maximum value. Otherwise, the

corrected values are rejected and the interval of integration is reduced starting from the

last accepted point. Likewise, if the error estimate becomes unnecessarily small, the

interval of integration may be increased. The predictor formula is more influential in the

stability properties of the predictor-corrector algorithm.

In another more commonly used approach, a predictor formula is used to get a first

estimate of the solution at next grid point and then the corrector formula is applied

iteratively until convergence is obtained. This is an iterative approach and corrector

formula is used iteratively. The number of derivative evaluations required is one greater

than the number of iterations of the corrector and it is clear that this number may in fact

exceed the number required by a Runge-Kutta algorithm. In this case, the stability

properties of the algorithm are completely determined by the corrector equation alone

and the predictor equation only influences the number of iterations required. The step

size is chosen sufficiently small to converge to the solution in one or two iterations. The

step size can be estimated from the error term in (2.4).

Example 2.2: Apply iterative method to solve IVP y=y+3t-t2 with y(0)=1 with h=0.1

Solution: With h=0.1 the computations are arranged in the table 2.6

Note that the corrector formula converges fast but is not converging to the solution of

the equation. It converges to the fixed point of difference scheme given by the corrector

formula. If h=0.05 then the solution converges to the exact solution in just two iterations.

h k t y f(t,y)

Milne Predictor corrector for t=0.4 with h=0.1

0.1 0 0 1 1

0.1 1 0.1 1.120341 1.410341 starting point1

0.1 2 0.2 1.233841 1.793841 starting point2

0.1 3 0.3 1.376339 2.186339 starting point3

0.1 4 0.4 1.719936 2.759936 predictor

0.1 5 0.4 1.677145 2.717145 corrector1

0.1 6 0.4 1.675719 2.715719 corrector2

0.1 7 0.4 1.675671 2.715671 corrector3

0.1 8 0.4 1.67567 2.71567 corrector4

0.1 9 0.4 1.67567 2.71567 corrector5

Milne Predictor corrector at t=0.5 with h=0.1

0.1 1 0.1 1.120341 1.410341

0.1 2 0.2 1.233841 1.793841 Starting values

0.1 3 0.3 1.376339 2.186339

0.1 4 0.4 1.67567 2.71567

0.1 5 0.5 1.840277 3.090277 predictor

0.1 6 0.5 1.914315 3.164315 corrector1

0.1 7 0.5 1.916783 3.166783 corrector2

0.1 8 0.5 1.916865 3.166865 corrector3

0.1 9 0.5 1.916868 3.166868 corrector4

Table 2.6a iterative Milnes predictor corrector method example 2.2 with h=0.1

Decreasing the value of h will reduce the number of applications of corrector formula.

This is evident from the next table 2,6b.

h k t y f(t,y)

0.05 0 0 1 1

0.05 1 0.05 1.055042 1.202542 Starting values

0.05 2 0.1 1.120342 1.410342

0.05 3 0.15 1.196169 1.623669

0.05 4 0.2 1.282805 1.842805 predictor

0.05 5 0.2 1.282805 1.842805 corrector1

0.05 6 0.2 1.282805 1.842805 corrector2

Table 2.6b iterative Milnes predictor corrector method example 2.2 with h=0.05

predictor equation and one subsequent application of the corrector equation with

incorporation of the error estimates as discussed below

Error estimates

28 5 ( 4)

y(t j1 ) p j1 h f (); in(t j3 ,t j1 )

90

1 5 ( 4)

y(t j1 ) y j1 h f ( ); in(t j1,t j1 )

90

It is assumed that the derivative is constant over the interval [tj-3, tj+1]. Then simplification

yields the error estimates based on predicted and corrected values.

28

y(t j1 ) p j1 [y j1 p j1 ] (2.6)

29

Further, assume that the difference between predicted and corrected values at each

step changes slowly. Accordingly, pj and yj can be substituted for pj+1 and yj+1 in (2.6)

gives a modifier qj+1 as

28

q j1 p j1 [y j p j ]

29

formula. This scheme is known as Modified Predictor corrector formula and is given as

4h

p j1 y j1 y j 2fj fj1 2fj2

3

28

q j1 p j1 [y j p j ]; f j1 f(t j 1,q j1 )

29 (2.7)

h

y j1 y j f j1 4fj f j1

3

Another problem associated with Milnes predictor corrector method is the instability

problem in certain cases. This means that error does not tend to zero as h tends to

zero. This is illustrated analytically for a simple IVP

Y=Ay, y(0)=y0

Its solution at t=tn is yn=y0 exp(tn-t0). Substituting y=Ay in the corrector formula gives the

difference equation

y j1 y j1

h

3

Ay j1 4Ay j Ay j1

hA 4hA hA

Or (1 )y j1 y j (1 )y j1 0 (2.8)

3 3 3

hA 2 4hA hA

(1 )Z Z (1 )0

3 3 3

2r 3r 2 1 hA

y j C Z C2 Z ;with Z1,2

j j

,r

1 r

1 1 2

3

2r 3r 2 1

Z1 1 3r O(r 2 ) 1 Ah O(h2 )

1 r

2r 3r 2 1 Ah

Z2 1 r O(r 2 ) (1 ) O(h2 )

1 r 3

Hence the solution of the given IVP by predictor corrector method is represented as

When A>0, the second term will die out but the first grows exponentially as j increases

irrespective of h. However, first term will die out and second will grow exponentially

when A<0. This establishes the instability of the solution.

Module 2

Lecture 3

A general k step method for solving IVP is given as

y j1 ak 1y j ak 2 y j1 ... a0 y j1m

h[bk f(t j1,y j1 ) bk 1f(t j ,y j ) ... b0 f(t j1m ,y j1m )

(2.9)

When bk=0, the method is explicit and yj+1 is explicitly determined from the initial value y0

and starting values yi; i=1,2, , k-1. When bk is nonzero, the method is implicit.

The Milnes predictor corrector formulae are the special cases of (2.9):

Predictor formula : k=4,a1=a 2 =a3 =0,a 4 =1,b0 =0, b1=8/3, b 2 =-4/3, b3 =8/3, b 4 =0

Another category of multistep methods, known as Adams methods, are obtained from

t j1 t j1

r

y j1 y j f(t,y(t))dt y j a t dt

j

j

tj t j k j 0

Here the integration is carried out only on the last panel, while many function and

derivative values at equi-spaced points are considered for the interpolating polynomial.

Both open and closed integration are considered giving two types of formulas. The

integration scheme is shown in the fig. 2.3

x

x x

x x

x x

x x

t t

tj-1 tj-1 tj-3 tj-1 tj-1 tj tj+1

tj-3 tj tj+1

Fig. 2.3 Schematic diagram for open and closed Adams integration

formulas

The open integration of the Adams formula gives the Adams-Bashforth formula, while the closed integration gives the Adams-Moulton formula. Interpolating polynomials of different degrees, depending on the number r of interpolation points, give rise to formulae of different order. Although these formulae can be derived in many different ways, here a backward application of the Taylor series expansion is used for the derivation of the second-order Adams-Bashforth (open) formula.

For the second-order formula, the points t_j and t_{j-1} are used in the following.

y_{j+1} = y_j + ∫_{t_j}^{t_{j+1}} f(t, y(t)) dt

Expanding the integrand in a Taylor series about t_j and integrating term by term gives

y_{j+1} = y_j + h [ f_j + f_j' h/2 + f_j'' h^2/6 + ... ]        (2.10)

Also,

f_j' = (f_j - f_{j-1}) / h + (h/2) f_j'' + O(h^2)

Substituting this expression for f_j' in (2.10) and collecting terms,

y_{j+1} = y_j + h ( (3/2) f_j - (1/2) f_{j-1} ) + (5/12) h^3 f''(ξ)        (2.11)
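As an illustration, a minimal implementation of the second-order formula (2.11); the test problem y' = -y and the exact extra starting value are assumptions made here, not part of the lecture:

```python
# Sketch: second-order Adams-Bashforth (2.11).  The test problem y' = -y
# and the exact extra starting value are chosen here for illustration.
import math

def ab2(f, t0, y0, y1, h, steps):
    """Advance from the two starting values y0, y1 using AB2."""
    ys = [y0, y1]
    for j in range(1, steps):
        tj, tjm1 = t0 + j * h, t0 + (j - 1) * h
        # y_{j+1} = y_j + h*(3/2*f_j - 1/2*f_{j-1})
        ys.append(ys[j] + h * (1.5 * f(tj, ys[j]) - 0.5 * f(tjm1, ys[j - 1])))
    return ys

h = 0.1
ys = ab2(lambda t, y: -y, 0.0, 1.0, math.exp(-h), h, 10)
err = abs(ys[-1] - math.exp(-1.0))   # global error at t = 1, O(h^2)
print(err)
```

Halving h should reduce the final error by roughly a factor of four, consistent with the O(h^2) global accuracy of a second-order method.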

A fourth-order Adams-Bashforth formula can be derived along similar lines; it is written as

y_{j+1} = y_j + h ( (55/24) f_j - (59/24) f_{j-1} + (37/24) f_{j-2} - (9/24) f_{j-3} ) + (251/720) h^5 f^(iv)(ξ)        (2.12)

Example 2.3: Apply the Adams-Bashforth method to solve the IVP y' = y + 3t - t^2, y(0) = 1.

Solution: With h = 0.05 the computations are arranged in Table 2.7.

h      k   t      y           f(t,y)     Exact solution   Remark
0.05   0   0.00   1.0000000   1.000000                    starting value
0.05   1   0.05   1.0550422   1.202542                    starting value
0.05   2   0.10   1.1203418   1.410342                    starting value
0.05   3   0.15   1.1961685   1.623669                    starting value
0.05   4   0.20   1.2828053   1.842805   1.2828055

Table 2.7: Adams-Bashforth method, Example 2.3
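As a check, the last row of Table 2.7 can be reproduced with formula (2.12). In the sketch below the right-hand side f(t, y) = y + 3*t - t**2 is inferred from the tabulated f(t, y) values and is an assumption; the starting values are taken directly from the table:

```python
# Sketch reproducing the last row of Table 2.7 with formula (2.12).
# Assumption: the right-hand side f(t, y) = y + 3*t - t**2 is inferred
# from the tabulated f(t, y) values; starting values come from the table.
f = lambda t, y: y + 3.0 * t - t * t

h = 0.05
t = [0.0, 0.05, 0.10, 0.15]
y = [1.0, 1.0550422, 1.1203418, 1.1961685]
fv = [f(tk, yk) for tk, yk in zip(t, y)]

# fourth-order Adams-Bashforth step
y4 = y[3] + h * (55 * fv[3] - 59 * fv[2] + 37 * fv[1] - 9 * fv[0]) / 24
print(round(y4, 7))  # 1.2828053, matching the table
```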

Module 2

Lecture 4

A backward Taylor series expansion of the integrand about t_{j+1} is used in the closed integration formula:

y_{j+1} = y_j + ∫_{t_j}^{t_{j+1}} f(t, y(t)) dt = y_j + h [ f_{j+1} - f_{j+1}' h/2 + f_{j+1}'' h^2/6 - ... ]        (2.13)

Also,

f_{j+1}' = (f_{j+1} - f_j) / h + (h/2) f_{j+1}'' + O(h^2)

Substituting this in (2.13) gives the second-order Adams-Moulton formula

y_{j+1} = y_j + h ( (1/2) f_{j+1} + (1/2) f_j ) - (1/12) h^3 f''(ξ)        (2.14)

The fourth-order Adams-Moulton formula is

y_{j+1} = y_j + h ( (9/24) f_{j+1} + (19/24) f_j - (5/24) f_{j-1} + (1/24) f_{j-2} ) - (19/720) h^5 f^(iv)(ξ)        (2.15)

In the Adams predictor-corrector method, the Adams-Bashforth formula (2.12) is used as the predictor while the Adams-Moulton formula (2.15) is used as the corrector. Milne's method is considered better on account of its smaller error terms; however, the Adams method is often preferred because the corrector formula in Milne's method is unstable in some cases.

The predictor formula:

y_{j+1} = y_j + h ( (55/24) f_j - (59/24) f_{j-1} + (37/24) f_{j-2} - (9/24) f_{j-3} ) + (251/720) h^5 f^(iv)(ξ)

The corrector formula:

y_{j+1} = y_j + h ( (9/24) f_{j+1} + (19/24) f_j - (5/24) f_{j-1} + (1/24) f_{j-2} ) - (19/720) h^5 f^(iv)(ξ)
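One predictor-corrector step can be sketched as follows. Assumptions made here: the corrector is iterated until successive values agree to a tolerance, and the problem f(t, y) = y + 3*t - t**2 with the starting values of Table 2.8 is used only to exercise the code:

```python
# Sketch of one Adams predictor-corrector step.  Assumptions: the corrector
# is iterated until successive values agree, and f(t, y) = y + 3*t - t**2
# with the starting values of Table 2.8 is used here only for illustration.
def abm4_step(f, t, y, h, tol=1e-9, max_iter=10):
    """t, y hold the last four grid points/values; returns y at t[-1] + h."""
    fv = [f(tk, yk) for tk, yk in zip(t, y)]
    t_new = t[-1] + h
    # Adams-Bashforth predictor (2.12)
    p = y[-1] + h * (55 * fv[3] - 59 * fv[2] + 37 * fv[1] - 9 * fv[0]) / 24
    # Adams-Moulton corrector (2.15), iterated on the implicit value
    for _ in range(max_iter):
        c = y[-1] + h * (9 * f(t_new, p) + 19 * fv[3] - 5 * fv[2] + fv[1]) / 24
        if abs(c - p) < tol:
            break
        p = c
    return c

f = lambda t, y: y + 3.0 * t - t * t
t = [0.0, 0.05, 0.10, 0.15]
y = [1.0, 1.0550422, 1.1203418, 1.1961685]
y_new = abm4_step(f, t, y, 0.05)
print(y_new)  # approximately 1.2828056, cf. Table 2.8
```

Since the corrector is implicit in y_{j+1}, it is applied repeatedly with the latest value substituted on the right-hand side, exactly as the repeated corrector rows of Table 2.8 show.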

Example 2.4: Solve the IVP of Example 2.3 by the Adams predictor-corrector method.

Solution at t = 0.20:

h      k   t      y           f(t,y)     Remark
0.05   0   0.00   1.0000000   1.000000   starting value
0.05   1   0.05   1.0550422   1.202542   starting value
0.05   2   0.10   1.1203418   1.410342   starting value
0.05   3   0.15   1.1961685   1.623669   starting value
0.05   4   0.20   1.2828053   1.842805   predictor
0.05   5   0.20   1.2828055   1.842806   corrector
0.05   6   0.20   1.2828056   1.842806   corrector

Solution at t = 0.25:

h      k   t      y           f(t,y)     Remark
0.05   1   0.05   1.0550422   1.202542   starting value
0.05   2   0.10   1.1203418   1.410342   starting value
0.05   3   0.15   1.1961685   1.623669   starting value
0.05   4   0.20   1.2828056   1.842806   starting value
0.05   5   0.25   1.3805506   2.068051   predictor
0.05   6   0.25   1.3805509   2.068051   corrector
0.05   7   0.25   1.3805509   2.068051   corrector

Table 2.8: Solution of the IVP of Example 2.3 by the Adams predictor-corrector method

Both the predictor and the corrector formula have local truncation errors of order O(h^5):

y(t_{k+1}) - p_{k+1} = (251/720) y^(5)(ξ_1) h^5

y(t_{k+1}) - y_{k+1} = -(19/720) y^(5)(ξ_2) h^5

If the fifth derivative is nearly constant and h is small, an error estimate can be obtained by eliminating the derivative between these two relations and simplifying:

y(t_{k+1}) - y_{k+1} ≈ -(19/270) ( y_{k+1} - p_{k+1} )
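A quick numerical illustration, using the predictor and final corrector values at t = 0.2 from Table 2.8:

```python
# Sketch: the predictor/corrector difference gives a computable estimate of
# the error in the corrected value (values below are from Table 2.8, t = 0.2).
p = 1.2828053          # predicted value p_{k+1}
c = 1.2828056          # final corrected value y_{k+1}
est = -(19.0 / 270.0) * (c - p)
print(est)             # about -2e-8, the same order as the true error
```

This estimate costs nothing extra to compute and is commonly used to decide whether the step size h should be reduced or may be enlarged.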

Exercise 2

2.1 Consider the IVP

y' = e^t + y,  y(0) = 1

Compute the solution at t = 0.05, 0.1 and 0.15 using RK4 with h = 0.05. Use these values to compute the solution at t = 0.2, 0.25 and 0.3 using the Milne-Simpson method. Compare with the exact solution y = (t + 1) e^t.

2.2 Consider the IVP

y' = y + t^2,  y(0) = 1

Compute the solution at t = 0.2, 0.4 and 0.6 using RK4 with h = 0.2. Apply the Adams-Bashforth method to compute the solution at t = 0.8 and 1.0.

2.3 Solve the IVP of Exercise 2.2 by the fourth-order Adams predictor-corrector method.
