
Numerical Differential Equation Solvers

Robert Piché
Tampere University of Technology

After you’ve formed a model of a physical system, you’ll want to compute
the solution. Usually, the model is so large and complex that you’ll want to
use numerical solvers rather than attempt an “exact” solution. You can use
the numerical solvers available in your simulation software or use algorithms
available in general-purpose mathematical software packages. In either case,
you will be faced with a range of choices: which method to use and what
solution parameters to give it. This lesson provides some guidance to help you
make the right choices.
A model of the dynamics of a physical system can have many different kinds
of mathematical object, including ordinary differential equations, partial differ-
ential equations, algebraic equations, and difference equations (“discrete-time”
models). In this lesson, attention is restricted to models that are ordinary differ-
ential equation initial value problems. We start by giving a precise description
of this mathematical problem.
We then look at the general ideas behind numerical solution algorithms. We
explain the difference between algorithms that are constant or variable time
step, one-step or multistep, explicit or implicit, low-order or high-order.
The lesson closes with a discussion of the importance of choosing appropriate
error tolerances for the solution algorithm.

1 The Mathematical Problem


1.1 Standard form
The ordinary differential equation initial value problem is to find functions
y1 (t), . . . , yn (t) of time t that satisfy the n first-order ordinary differential equa-
tions

ẏ1 = f1(t, y1, . . . , yn)
ẏ2 = f2(t, y1, . . . , yn)
⋮
ẏn = fn(t, y1, . . . , yn)

for some interval t0 ≤ t ≤ T , where f1 , . . . , fn are given functions, and satisfying
the n initial conditions

y1(t0) = a1
y2(t0) = a2
⋮
yn(t0) = an

where the n values a1 , . . . , an are given. Here the expression “given” means
that they have been defined as a result of modeling the physical system, either
explicitly by you the modeler or automatically by your modeling software.
The matrix J of partial derivatives given by

Jij = ∂fi/∂yj

is called the jacobian matrix. If the jacobian matrix is constant, then the ordinary
differential equation is said to be linear time-invariant.
Introducing the vectors y = [y1 , . . . , yn ]T , f = [f1 , . . . , fn ]T , and a =
[a1 , . . . , an ]T , the system of first-order ordinary differential equations can be
written in the form

ẏ = f (t, y) (1)

and the initial condition can be written

y(t0 ) = a (2)

1.2 Converting to Standard Form


Some models have ordinary differential equations that contain derivatives of
higher order, but most of the available solvers are for ordinary differential equa-
tions in first-order form, so to use them you (or your simulation software) should
know how to reformulate the ordinary differential equations. This is done by
introducing new variables. For example, the second-order equation

mz̈ + cż + kz = sin(t)

can be reformulated as a set of two first-order equations by introducing the
variables y1 = z, y2 = ż, and writing

ẏ1 = y2
ẏ2 = (sin(t) − ky1 − cy2)/m
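The same reduction is easy to express in code. A minimal Python sketch, outside any solver package (the function name f is ours; the parameter values match the DYNAST example below):

```python
import math

# Parameters of the second-order equation m*z'' + c*z' + k*z = sin(t)
m, c, k = 10.0, 20.0, 10.0

def f(t, y):
    """Right-hand side of the equivalent first-order system.
    y[0] = z, y[1] = z-dot."""
    y1, y2 = y
    return [y2, (math.sin(t) - k * y1 - c * y2) / m]

# At t = 0 with z = 0, z' = 1: z' stays 1 and z'' = (0 - 0 - c)/m
print(f(0.0, [0.0, 1.0]))  # → [1.0, -2.0]
```

Any first-order solver can now be applied to f directly, with y[0] recovering the original unknown z.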

1.2.1 Example
The above example, written as a set of two first-order equations, can be sub-
mitted to the DYNAST simulation software as shown below. The reader is
encouraged to submit the example to DYNAST and try a few modifications to
see how the results change.

*:forced SHM with damping -second-order d.e.

*SYSTEM;

: Parameters

m = 10.0; :[kg] mass on spring


c = 20.0; :[N /(m/s)] damping constant
k = 10.0; :[N/ m] spring constant

: Equations of motion

SYSVAR y1, y2;


0 = - VD.y1 + y2;
0 = - VD.y2 + (1/m)*sin(time) - (k/m)*y1 - (c/m)*y2;
:
*TR; TR 0 100; : time ranges from t=0 to t=100
INIT y1=0.0, y2=1.0; : specifies the initial conditions
PRINT y1, y2; : prints columns of y1 and y2 values
:
RUN;
*END;

2 Algorithms
The simplest solution algorithm for ordinary differential equation initial value
problems is Euler’s method, given by

Y0 = a
Yj+1 = Yj + hj f (tj , Yj ), j = 0, 1, . . . , p − 1 (3)

where hj = tj+1 − tj is the time step. Euler’s method has the basic features
common to all solution algorithms. The algorithm starts with the given initial
value Y0 = y(t0 ) = a, and then marches forward in time, computing the sequence
of approximate solution values Y0 = y(t0 ), Y1 ≈ y(t1 ), . . . , Yp ≈ y(tp ) in order.
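Euler's method (3) is only a few lines in any language. A Python sketch for a scalar equation (the function name euler is ours, not part of any solver library):

```python
def euler(f, t0, a, h, p):
    """Euler's method (3) with constant step h:
    Y_0 = a, Y_{j+1} = Y_j + h*f(t_j, Y_j) for j = 0..p-1 (scalar y)."""
    t, Y = t0, a
    ts, Ys = [t], [Y]
    for _ in range(p):
        Y = Y + h * f(t, Y)
        t = t + h
        ts.append(t)
        Ys.append(Y)
    return ts, Ys

# Solve y' = -y, y(0) = 1 on [0, 1]; the exact answer is exp(-1) ≈ 0.3679
ts, Ys = euler(lambda t, y: -y, 0.0, 1.0, 0.1, 10)
print(round(Ys[-1], 4))  # → 0.3487, i.e. (1 - 0.1)**10
```

The gap between 0.3487 and 0.3679 is the truncation error discussed in Section 2.5.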
Euler’s method is a good pedagogical example, but is not recommended for
most practical use. There are many algorithms to solve ordinary differential
equation initial value problems that are much more efficient. Simulation soft-
ware and mathematical software libraries typically offer a selection of several
different algorithms. As a user, you are left with the task of choosing the algo-
rithm that is suitable for your problem.
The software documentation will try to guide your choice by telling some-
thing about the algorithms. You will typically be informed that a particular
algorithm is

• constant time step or variable time step,

• one-step or multistep,

• explicit or implicit,
• low-order or high-order
Thus, to make an informed choice of algorithm, you need to understand what
these terms mean. They are explained in this section, which describes the gen-
eral features of various algorithms for numerical solution of ordinary differential
equation initial value problems.

2.1 Constant time step vs. variable time step


The simplest algorithms use a fixed constant time step h in their computations,
computing the values at times t0 , t0 + h, t0 + 2h, . . . , t0 + ph. Solvers with such
algorithms are very common, mainly because they are relatively easy to code.
Modern solvers do not use constant time steps. Instead, they use algorithms
that continuously monitor the accuracy of the solution during the course of the
computation, and adaptively change the step size to maintain a consistent level
of accuracy. The step size may change many times during the course of the
computations, as larger time steps are used where the solution is varying slowly,
and smaller steps are used where the solution varies rapidly.
The varying time step algorithm is more complicated than its constant time
step alternative, because it includes code to estimate the accuracy and code to
control the step size. At each time step during the computation, the algorithm
goes through a sequence something like this:
1. a step size hj for the next step is chosen, based on information collected
in previous time steps;
2. the solution value at the next time point tj + hj is computed;
3. the solution at time tj+1 is computed a second time using a more accurate
formula;
4. the accuracy is estimated by comparing the two approximations of y(tj +
hj );
5. if the accuracy is acceptable, the solution moves forward to the next time
point tj+1 , otherwise a smaller time step hj is chosen and the algorithm
goes back to stage 2.
The user will often want the solution at specified points in time, for example
at equally spaced intervals, to produce tables or plots. A modern varying time
step algorithm provides these solution values by interpolating the computed
solution points. This is more efficient than forcing the solver to place a compu-
tation step at every requested output time.
A varying time step algorithm is usually much faster than its constant time
step version, because it concentrates its computational effort only on those time
intervals that need it most, taking large strides over intervals that don’t need
small time steps. Except in unusual cases, this easily compensates for the extra
computational work required for error estimation, step size adjustment, and
interpolation.
More importantly, varying time step algorithms are more reliable. They
can cope with sharp changes in the solution by reducing the time step, where

constant time step algorithms will plow right through and compute completely
erroneous results. Also, a varying time step algorithm ensures that numerical
instability does not occur, as discussed later in this lesson.

2.1.1 Example
The purpose of this example is to show a case where a variable time step method
is better than a constant time step method. We consider the following initial
value problem (IVP)

        
ẏ1 = −0.1 y1 − 199.9 y2,    y1(0) = 2
ẏ2 = −200 y2,               y2(0) = 1        (4)

This IVP has an analytic solution:

y1(t) = exp(−0.1 t) + exp(−200 t),    y2(t) = exp(−200 t)        (5)

The response of y1 is the sum of two parts, one of which is much faster than
the other. In other words, this IVP is stiff.
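Before handing (4) to any solver it is worth confirming (5) really satisfies it. A small Python check, independent of DYNAST, approximating the derivatives by centered finite differences (the helper names are ours):

```python
import math

def y1(t): return math.exp(-0.1 * t) + math.exp(-200 * t)
def y2(t): return math.exp(-200 * t)

def residuals(t, d=1e-6):
    """Residuals of system (4) at time t, with derivatives approximated by
    centered differences; both are near zero if (5) solves (4)."""
    dy1 = (y1(t + d) - y1(t - d)) / (2 * d)
    dy2 = (y2(t + d) - y2(t - d)) / (2 * d)
    r1 = dy1 - (-0.1 * y1(t) - 199.9 * y2(t))
    r2 = dy2 - (-200.0 * y2(t))
    return r1, r2

assert y1(0) == 2.0 and y2(0) == 1.0        # initial conditions match
for t in (0.01, 0.05, 0.1):
    r1, r2 = residuals(t)
    assert abs(r1) < 1e-3 and abs(r2) < 1e-3
print("analytic solution (5) satisfies system (4)")
```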
Exercise 1. Verify that equation (5) is a solution of (4) by differentiation.

Equation (4) can be solved with the DYNAST software using either a constant
time step or a variable time step method, as follows.
The DYNAST program for a constant time step is shown below.

*:example comparing constant vs. varying time step (t=0 to 0.1)


*SYSTEM;
SYSVAR y1, y2;
:
: the differential equation to be solved
0 = - VD.y1 -0.1*y1 -199.9*y2;
0 = - VD.y2 -200.0*y2;
:
:
*TR; TR 0 0.1; : time ranges from 0 to 0.1
INIT y1=2.0, y2=1.0; : initial values
PRINT y1, y2;
RUN MIN=10, MAX=10; : set MAX=MIN= (no. of steps - 1)
*END;

The penultimate line in this input, RUN MIN=10, MAX=10, forces DYNAST
to use a constant time step method rather than a varying time step method.
MIN specifies the minimum number of steps a variable time step method should
take, so the largest step length allowed is the time range interval divided by
the value of MIN. Likewise MAX specifies the maximum number of steps, so the
smallest step length allowed is the time range divided by MAX. Making MIN and
MAX equal restricts the method to a constant time step.

Figure 1: Plot of the numerical solutions of equation (4) using DYNAST with
a constant time step method.

Figure 2: Discrete plot of numerical solution y1 of equation (4) using DYNAST


with a constant time step method.

When you click Submit to DYNAST you get a new browser window with
text output. To see the plot, click on graphical form in the output window.
Only the first output variable, y1 , is shown in the plot. To display y2 as well
click on y2 under Dependent variables and then on the box Redraw. To obtain
a common y-axis click on Common Y. It is also instructive to choose the Show
point marks option to show the solution points computed by the solver. The
continuous graph is a linear interpolation between these points. Click on Redraw
after choosing these options. You should obtain the plot of y1 and y2 shown in
Figure 1. To obtain Figure 2, choose Discrete X instead of Show point marks
and display y1 only. Figure 2 shows only the solution points calculated by this
numerical method. Notice how they are evenly spaced (except at the end: a
minor bug?).

We can compare the numerical solution with the exact analytical solution
by adding some lines to the DYNAST input code, as follows.

Figure 3: Plot of exact and numerical solution of equation (4) using DYNAST
with a constant time step method.

*:example comparing constant vs. varying time step (t=0 to 0.1)


*SYSTEM;
SYSVAR y1, y2;
:
: differential equation to be solved
0 = - VD.y1 -0.1*y1 -199.9*y2;
0 = - VD.y2 -200.0*y2;
:
: have values of exact analytical solution for comparison
yexact1 = ( exp(-0.1*time) + exp(-200*time) );
:
: compute errors between exact and numerical solutions
error1 = (y1 - yexact1);
:
:
*TR; TR 0 0.1; : time ranges from 0 to 0.1
INIT y1=2.0, y2=1.0; : initial values
PRINT y1, yexact1, error1;
:
RUN MIN=10, MAX=10; : set MAX=MIN=( no. of steps - 1)
*END;

Figure 3 shows a plot of the numerical and exact solutions for y1 .


Figure 4 shows a plot of the errors in the numerical constant time step
approximation to y1 .
Some of the DYNAST output results are given in Table 1. This shows that
the maximum error is 0.198.
Exercise 2. Add some lines to the above DYNAST input code to obtain com-
parison of the exact and numerical solutions for y2 and the absolute error.

Table 1: Output of system (4) solved using DYNAST with a constant time step
method.

Number of nodes: 2
Number of equations: 2

# example comparing constant vs. varying time step (t=0 to 0.1)

X ... TIME
1 ... y1
2 ... yexact1
3 ... error1

X 1 2 3
0.000000E+000 2.000000E+000 2.000000E+000 0.000000E+000
1.000000E-002 1.332334E+000 1.134336E+000 1.979985E-001
2.000000E-002 1.109114E+000 1.016318E+000 9.279647E-002
3.000000E-002 1.034043E+000 9.994832E-001 3.455978E-002
4.000000E-002 1.008356E+000 9.963435E-001 1.201221E-002
5.000000E-002 9.991302E-001 9.950579E-001 4.072312E-003
6.000000E-002 9.953927E-001 9.940241E-001 1.368578E-003
7.000000E-002 9.934852E-001 9.930253E-001 4.598891E-004
8.000000E-002 9.921883E-001 9.920320E-001 1.562687E-004
9.000000E-002 9.910956E-001 9.910404E-001 5.524675E-005
9.500000E-002 9.905750E-001 9.905450E-001 2.997530E-005
1.000000E-001 9.900672E-001 9.900498E-001 1.739895E-005

MAX 2.000000E+000 2.000000E+000 1.979985E-001


MIN 9.900672E-001 9.900498E-001 0.000000E+000
NP 12

Statistics: 11 steps, 0 rejected steps, 18 iterations


Order: 1 2 3 4 5 6
Steps: 11 0 0 0 0 0

Errors: 0, Warnings: 0
Total seconds used up by DYNAST: 0.00
Program DYNAST exited on Friday, 3 March 2000, at 15:40:18

Figure 4: Absolute error in numerical solution y1 of equation (4) using DYNAST
with a constant time step method.

Figure 5: Discrete plot of the numerical solution y1 of equation (4) using DY-
NAST with a variable time step method.

You can now try the same problem using the DYNAST program with a
variable time step method. You should change the DYNAST input code from
RUN MIN=10, MAX=10;
to
RUN;
and run this.
Again you can plot the results graphically as discrete time steps by clicking
on the graphical form and then on the box next to Discrete X and then on the
box Redraw. The plot for y1 is given in Figure 5 as a discrete plot. Figure 5
should be compared with Figure 2. The variable time step differs from the
constant time step solution: it uses small steps at the beginning where the
function is changing rapidly, and large steps where the function is not changing
much.

Figure 6: Plot of the exact and numerical solution, y1 , of equation (4) using
DYNAST with variable step method.

Figure 7: Absolute and relative error for numerical solution, y1 , of equation (4)
using DYNAST with variable step method.

Exercise 3. Produce plots of y1 and y2 as continuous functions. These are
not obtained directly from the above code, so you need to modify the DYNAST
input by adding y2 after PRINT. This is good practice in using DYNAST.
Comparison with the exact solution, for y1 given by (5) is given in Figure 6.
The difference is almost undetectable by eye. The error is plotted in Figure 7.
Some of the output information is given in Table 2.
Comparing Figure 4 with Figure 7 shows that the variable time step method
is clearly more accurate: the maximum absolute error is 0.000474 compared to
0.198 for the fixed time step solution. It samples the functions more closely
where the function is changing rapidly, and less closely where the function is
not changing much.
Exercise 4. Find the number of steps required to make a constant time step
method with DYNAST be of comparable accuracy to the variable time step

Table 2: Output of system (4) solved using DYNAST with a variable time step
method.

# example comparing constant vs. varying time step (t=0 to 0.1)

X ... TIME
1 ... y1
2 ... yexact1
3 ... error1

MAX 2.000000E+000 2.000000E+000 4.739823E-004


MIN 9.900546E-001 9.900498E-001 -2.212571E-004
NP 36

Statistics: 35 steps, 0 rejected steps, 36 iterations


Order: 1 2 3 4 5 6
Steps: 10 5 5 10 5 0

Errors: 0, Warnings: 0
Total seconds used up by DYNAST: 0.01
Program DYNAST exited on Friday, 3 March 2000, at 16:37:19

method used in the example above. How much more time does it take to compute
using this method compared with the variable time step method?
HINT: This involves running DYNAST with different step lengths and ex-
amining the maximum error in the output. This requires some trial and error.
An examination of the spacing of the first few steps in the variable time step
method's output file should give a hint of how closely the steps need to be
spaced in the constant time step method. However, it turns out that this gives
an overestimate, so try something on the order of 1000 to 10000.

2.2 One-step vs. multistep


One-step algorithms are characterised by the fact that they have no “memory”.
To compute the solution point Yj+1 , a one-step algorithm uses the solution point
Yj as initial value, and does not use any previously computed solution points.
In other words, it treats each new time step computation as an initial value
problem.
The Euler method (3) is the simplest example of a one-step method. The
Runge-Kutta methods are a large class of one-step algorithms, of which the


Figure 8: In the classic Runge-Kutta method, the solution point Yj+1 is com-
puted using the previous solution value Yj and a weighted average of the slopes
at four points.

most well known variant, the classic Runge-Kutta method, has the form

k0 = f(tj, Yj)
k1 = f(tj + h/2, Yj + (h/2)k0)
k2 = f(tj + h/2, Yj + (h/2)k1)
k3 = f(tj + h, Yj + hk2)
Yj+1 = Yj + (h/6)(k0 + 2k1 + 2k2 + k3)        (6)
This method requires four evaluations of the function f per time step, compared
to one evaluation required by the Euler method. The additional computation
is more than compensated by the larger time steps that the algorithm can use
because of its higher accuracy.
The values k0 , k1 , k2 , k3 in the Runge-Kutta method are the derivatives at
various points close to the initial solution value Yj (Figure 8). In equation (6),
the next solution point Yj+1 is computed using a weighted average of these
derivatives.
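Formula (6) translates directly into code. A Python sketch for a scalar equation (the function name rk4_step is ours):

```python
import math

def rk4_step(f, t, Y, h):
    """One step of the classic Runge-Kutta method (6) (scalar y)."""
    k0 = f(t, Y)
    k1 = f(t + h / 2, Y + (h / 2) * k0)
    k2 = f(t + h / 2, Y + (h / 2) * k1)
    k3 = f(t + h, Y + h * k2)
    return Y + (h / 6) * (k0 + 2 * k1 + 2 * k2 + k3)

# y' = -y, y(0) = 1: with only ten steps of h = 0.1, the order-4 method
# is already accurate to roughly 1e-6 at t = 1 (exact value exp(-1))
Y, t, h = 1.0, 0.0, 0.1
for _ in range(10):
    Y = rk4_step(lambda t, y: -y, t, Y, h)
    t += h
print(abs(Y - math.exp(-1)))
```

Compare this with the Euler result for the same problem: the extra three function evaluations per step buy several orders of magnitude in accuracy.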
The following are some of the one-step algorithms that are widely available:
• Runge-Kutta methods with varying time step such as the Merson, Fehlberg,
England, and Prince-Dormand methods;
• Extrapolation methods such as the Bulirsch-Stoer method;
• Implicit Runge-Kutta methods and Rosenbrock methods, which are used
for stiff problems (discussed later).
Multistep algorithms, in contrast to one-step algorithms, have “memory”, in
the sense that they use k + 1 previously computed solution values Yj , Yj−1 , . . . , Yj−k
to compute the next solution value Yj+1 . The Adams methods are a widely used
class of multistep algorithms, of which an example with k = 2 is

Yj+1 = Yj + (h/12)(23f(tj, Yj) − 16f(tj−1, Yj−1) + 5f(tj−2, Yj−2))        (7)


Figure 9: In a multistep method, the solution point Yj+1 is computed using the
solution value Yj and a weighted average of the slopes at several time points.

This formula uses values at times tj , tj−1 , tj−2 to compute the solution at time
tj+1 (Figure 9).
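A sketch of marching formula (7) forward in Python (the function name adams3 is ours). It needs two starting values besides Y0, which we supply here from the exact solution for simplicity; a real code would generate them with a one-step method:

```python
import math

def adams3(f, ts, Ys, h, p):
    """Advance the k = 2 Adams formula (7): ts and Ys must already hold
    the three starting points. Past f-values are stored and reused, so
    only one new f evaluation is needed per step (scalar y)."""
    fs = [f(t, Y) for t, Y in zip(ts, Ys)]  # slopes at the start points
    for j in range(2, p):
        Y = Ys[j] + (h / 12) * (23 * fs[j] - 16 * fs[j - 1] + 5 * fs[j - 2])
        ts.append(ts[j] + h)
        Ys.append(Y)
        fs.append(f(ts[j + 1], Y))          # the single new evaluation
    return ts, Ys

# y' = -y with exact starting values; exact answer at t = 1 is exp(-1)
h = 0.05
ts = [0.0, h, 2 * h]
Ys = [math.exp(-t) for t in ts]
ts, Ys = adams3(lambda t, y: -y, ts, Ys, h, 20)
print(abs(Ys[-1] - math.exp(-1)))  # small: the formula is high order
```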
A modern implementation of a multistep method varies both the time step
size hj and the number of memory levels k in the course of the computation. At
the initial point t0 , there are no solution values available other than the initial
value, so the algorithm starts with k = 0 and a small step size.
In general, multistep algorithms need fewer evaluations of the differential
equation function f per time step than one-step algorithms. For example, the
Adams method (7) only needs to evaluate f once per time step, because values
can be stored and reused from previous time steps. By contrast, the classic
Runge-Kutta method (6) requires four evaluations of f per time step.
Nevertheless, one-step algorithms are often faster than multistep algorithms,
because of differences in accuracy and differences in the computational complex-
ity. Older textbooks recommend multistep algorithms for smooth problems that
require high accuracy and whose differential equation functions f are computa-
tionally expensive, and one-step algorithms for the rest. Nowadays there is not
such a clear a-priori distinction between newer one-step and multistep codes,
and it is worthwhile to try out both types of algorithm for your particular
problem.

2.3 Explicit vs. implicit


Implicit algorithms contain algebraic formulas that need to be solved. For ex-
ample, the backward Euler method

Yj+1 = Yj + hj f (tj+1 , Yj+1 ) (8)

is implicit, because it is an algebraic equation to be solved for Yj+1 . The Euler,
Runge-Kutta, and Adams methods described earlier are all explicit.
One way of solving the implicit formula in the backward Euler method is to
simply iterate it:
Yj+1^(m+1) = Yj + hj f(tj+1, Yj+1^(m)),    m = 0, 1, 2, . . .

with the initial value of the iteration, Yj+1^(0), computed using the explicit
Euler method. This idea of using an explicit method and then iterating an
implicit method is called a predictor-corrector approach, and is usually
associated with multistep algorithms.
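The predictor-corrector idea for the backward Euler formula (8) can be sketched in Python (the function name is ours; for simplicity the iteration count is fixed at a few sweeps, where a real code would test for convergence):

```python
def backward_euler_pc(f, t, Y, h, sweeps=3):
    """One backward Euler step (8) solved by fixed-point iteration:
    predict with explicit Euler, then apply the corrector a few times."""
    Ynext = Y + h * f(t, Y)              # predictor: explicit Euler
    for _ in range(sweeps):
        Ynext = Y + h * f(t + h, Ynext)  # corrector: iterate formula (8)
    return Ynext

# y' = -20*y: the exact backward Euler value is Y/(1 + 20*h)
Y = backward_euler_pc(lambda t, y: -20 * y, 0.0, 1.0, 0.01)
print(Y)  # converging toward 1/1.2 ≈ 0.8333
```

Note that this fixed-point iteration converges only when h|∂f/∂y| < 1, which is why stiff codes use the Newton iteration described next.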
Another way of solving implicit formulas is to use Newton iteration. In the
backward Euler formula the Newton iteration is
Yj+1^(m+1) = Yj+1^(m) − (I − hj J(tj+1, Yj+1^(m)))^(−1) [Yj+1^(m) − Yj − hj f(tj+1, Yj+1^(m))]

Each Newton iteration step requires the solution of a system of linear alge-
braic equations that involves the jacobian matrix. The jacobian matrix can
be computed using approximate formulas based on finite differences with small
perturbations of the arguments of f .
The formation of the jacobian matrix and the linear algebra of the Newton
iterations are computationally expensive operations. However, implicit algo-
rithms with Newton iteration are the most efficient alternative when dealing
with stiff problems, as explained in the next subsection.
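For a single equation the Newton update above reduces to scalar arithmetic. A Python sketch (names are ours; the jacobian J is supplied analytically here, where a real code would often approximate it by finite differences):

```python
def backward_euler_newton(f, J, t, Y, h, iters=3):
    """One backward Euler step solved by Newton iteration (scalar y):
    solve g(x) = x - Y - h*f(t+h, x) = 0, with g'(x) = 1 - h*J(t+h, x)."""
    x = Y + h * f(t, Y)                      # explicit Euler predictor
    for _ in range(iters):
        g = x - Y - h * f(t + h, x)
        x = x - g / (1.0 - h * J(t + h, x))  # scalar Newton update
    return x

# Stiff test y' = -20*y with a step far beyond explicit Euler's limit h_c = 0.1
Y = backward_euler_newton(lambda t, y: -20 * y, lambda t, y: -20.0,
                          0.0, 1.0, 0.5)
print(Y)  # → 1/(1 + 20*0.5) ≈ 0.0909, stable despite the large step
```

For this linear problem the Newton iteration converges in a single step; for nonlinear f a few iterations are typically needed.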

2.4 Stiff vs. nonstiff


Even if the ordinary differential equation is stable, the numerical solution may
be unstable. A simple example of this phenomenon is the initial value problem

ẏ = −20y, y(0) = 1

which has the solution

y(t) = e^(−20t)

The Euler method (3) with constant time step h = 0.01 gives a reasonably
good approximation, but with a constant step h = 0.11 the solution is unstable
(Figure 10). In fact, the Euler method gives the solution

Yj = (1 − 20h)^j

which is unstable for any h > 0.1.
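The instability is easy to reproduce from the closed form Yj = (1 − 20h)^j: the iterates decay for h = 0.01 but oscillate with growing magnitude for h = 0.11.

```python
def euler_values(h, n):
    """Euler iterates for y' = -20*y, y(0) = 1: Y_j = (1 - 20*h)**j."""
    return [(1 - 20 * h) ** j for j in range(n + 1)]

stable = euler_values(0.01, 100)   # 1 - 20*0.01 = 0.8: decays toward zero
unstable = euler_values(0.11, 9)   # 1 - 20*0.11 = -1.2: oscillates and grows
print(stable[-1], unstable[-1])
```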


The kind of numerical instability seen in the above example is typical of
explicit algorithms and most predictor-corrector algorithms. These algorithms
are only conditionally stable, that is, the solution is stable if the constant time
step value h is sufficiently small, but is unstable if the time step is larger than
some critical value hc . In the above example, hc = 0.1.
Fortunately, in varying time step algorithms the time step will stay below
hc , so that numerical instability doesn’t arise. This is a good reason to use
varying time step instead of constant time step algorithms.
Generally, the critical time step hc depends on the fastest settling rate (eigen-
value) of the dynamics of the system of ordinary differential equations. In prob-
lems that have both large and small eigenvalues, the solution may have a brief
initial transient as the fast modes settle, followed by a longer period where the
behaviour of the slow modes dominates. In this latter phase, the fast modes
are still present in the system even if they are not visible in the solution.
Nevertheless, the algorithm continues to use a small time step to ensure
numerical stability. A system with this behaviour is said to be stiff.


Figure 10: The Euler method applied to ẏ = −20y is stable for time step
h = 0.01 (dots) but is unstable for time step h = 0.11 (circles).

Whenever you notice that an explicit algorithm is using excessively small
time steps to compute a slowly varying solution, you should try an algorithm
especially intended for stiff problems, that is, an implicit algorithm with Newton
iteration. It does more computation per time step than an explicit algorithm,
but this is amply compensated by its ability to take large time steps without
being restricted by numerical instability effects.

2.4.1 Example
The stiff equation above can be submitted to the DYNAST program as follows.

*: Example of a stiff ode


*SYSTEM;
SYSVAR y;
0=VD.y +20*y;
*TR; TR 0 1;
INIT y=1.0;
PRINT y;
RUN; *END;

Here there is no instability problem. This is because DYNAST uses an


implicit algorithm.

2.5 Low-order vs. high-order
All algorithms compute their approximation of the solution of the differential
equation using a finite number of values of the function f . The error that
arises in this approximation is called truncation error. For constant time step
algorithms, the maximum truncation error when solving a differential equation
over a fixed time interval t0 ≤ t ≤ T is proportional to the time step h raised
to some exponent p. The value of p is called the order of the method.
For example, it can be shown that the truncation error of one time step of
the Euler method is, for small time step h, proportional to h², and that the
maximum error over an interval t0 ≤ t ≤ T is proportional to h. Thus, Euler's
method has order 1. The backward Euler method also has order 1, while the
classic Runge-Kutta method has order 4, and the Adams method that uses
k + 1 previous values has order k + 1 (so formula (7), with k = 2, has order 3).
The order of a method is determined using Taylor series analysis. For exam-
ple, the truncation error of one time step of the Euler method applied to a single
differential equation is found as follows. The Taylor series of the solution is

y(tj+1) = y(tj + h) = y(tj) + h ẏ(tj) + (h²/2) ÿ(ξ),    tj ≤ ξ ≤ tj + h
Assuming Yj = y(tj ) and invoking (3), the truncation error is found to be

y(tj+1) − Yj+1 = (h²/2) ÿ(ξ)

which is proportional to h².
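The order can also be estimated experimentally: halving h should divide the maximum error by about 2^p. A Python sketch for Euler's method, order 1, on a test problem of our choosing:

```python
import math

def euler_error(h):
    """Max error of Euler's method on y' = -y, y(0) = 1 over [0, 1]."""
    n = round(1.0 / h)
    Y, err = 1.0, 0.0
    for j in range(n):
        Y = Y + h * (-Y)
        err = max(err, abs(Y - math.exp(-(j + 1) * h)))
    return err

e1, e2 = euler_error(0.01), euler_error(0.005)
print(e1 / e2)  # ratio close to 2 = 2**1, confirming order 1
```

Repeating the experiment with the Runge-Kutta step from Section 2.2 would give ratios near 2**4 = 16.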
A high-order method is more efficient than a low-order method in most prac-
tical problems. For example, suppose the Euler method applied to a particular
problem has maximum error Ce h while the classic Runge-Kutta method has
error Crk h⁴, where Ce and Crk are constants. Then, to achieve an error smaller
than some specified level tol, the Euler method has to use a time step size no
larger than tol/Ce, while the classic Runge-Kutta method may use a time step
of (tol/Crk)^(1/4), which is much larger than the Euler method's time step size
when tol is sufficiently small. For less stringent accuracy requirements, however,
the advantage of high-order methods is not so great.
Also, if the solution is not sufficiently smooth, for example because of discon-
tinuous forcing functions, a high-order algorithm will not be as accurate as the
theoretical analysis promises. This is because the Taylor series analysis assumes
that high-order derivatives of the solution are continuous. Some algorithms vary
the number of memory stages k during the course of the computation, adap-
tively varying the order of the method as well as the time step. Software may
allow you to set an upper limit to the order to be used by the algorithm. This
is useful for problems that are not sufficiently smooth to justify the additional
computational expense of high-order methods.

3 Accuracy tolerance
The accuracy tolerance that you specify for the numerical solution of an ordinary
differential equation initial value problem affects the speed and accuracy of the

computation. This section presents some guidelines on how to select appropriate
values for the tolerance.
As was explained earlier, modern algorithms adaptively adjust the time step
(and sometimes the order of the method) to maintain the estimated truncation
error below a level that you specify. The specified tolerance is usually in terms
of relative error,

|Yp − y(tp)| / |y(tp)| ≤ rtol

It is useful to also specify an error tolerance in terms of absolute error,

|Yp − y(tp )| ≤ atol

For example, if some components of the solution are zero (at the initial value,
say), then the relative error is not defined. Absolute error tolerance is also useful
to avoid situations where the algorithm works too hard to achieve high relative
accuracy on solution components that are of little interest or influence when
they are small.
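Many solvers combine the two tolerances into a single per-component test of roughly the form |error_i| ≤ atol + rtol·|Y_i| (the details vary between codes). A sketch of such an acceptance test (the function name is ours):

```python
def step_acceptable(errors, Y, rtol, atol):
    """Accept a step when every component's error estimate is below the
    mixed tolerance atol + rtol*|Y_i| (a common form; codes differ in detail)."""
    return all(abs(e) <= atol + rtol * abs(y) for e, y in zip(errors, Y))

# A component near zero is judged by atol alone; a large one mostly by rtol
print(step_acceptable([1e-7, 1e-3], [0.0, 1e4], rtol=1e-3, atol=1e-6))  # → True
print(step_acceptable([1e-5, 1e-3], [0.0, 1e4], rtol=1e-3, atol=1e-6))  # → False
```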
You should avoid choosing error tolerances that are too large or too small.
If your error tolerance is too large, the algorithm’s error estimation formulas,
which are only valid for sufficiently small time step size, will give incorrect error
estimates. As a result, the adaptive time step size control will not work correctly
and the computed solution will not have the accuracy that you specified. As a
rule of thumb, do not specify a relative error tolerance looser than about 1%.
If your error tolerance is too small, the computation will take an excessively
long time. The error tolerance may even be impossible, in the sense that the
requested truncation error is smaller than the roundoff error of the computer.
For example, if you specify atol=10−3 and one of the solution components is
somewhere around 106 , then the corresponding relative error is 10−9 , which is
smaller than the unit roundoff error of 32-bit floating point computers. When
you ask for impossible accuracy, the computation takes a long time and the
accuracy of the results is worse than when less stringent error tolerances are
specified.
Your software may ask for a single tolerance for all solution components.
This is easier for you than specifying a tolerance for each component, but may
give poor results if the solution components are not scaled the same, because the
error tolerance may be too large or too small for some of the components. For
example, the scales may differ by many orders of magnitude when the solution
components include variables with different physical units.
When the problem is stiff or has highly oscillatory components, the algo-
rithm’s error estimate may be significantly inaccurate. Repeating the com-
putation with different error tolerances can be helpful in detecting erroneous
solutions.

4 Answers to exercises
These are the answers to the exercises set in the course units.

Answer to Exercise 1. We differentiate the proposed analytical solution (5)
and substitute it into the system (4),

ẏ1 = −0.1 y1 − 199.9 y2,    y1(0) = 2
ẏ2 = −200 y2,               y2(0) = 1

For y1 we have

(d/dt) y1(t) = (d/dt) exp(−0.1 t) + (d/dt) exp(−200 t)
            = −0.1 exp(−0.1 t) − 200 exp(−200 t)

Since exp(−200 t) = y2 and exp(−0.1 t) = y1 − y2, this gives

−0.1 exp(−0.1 t) − 200 exp(−200 t) = −0.1 (y1 − y2) − 200 y2 = −0.1 y1 − 199.9 y2

Furthermore

(d/dt) y2(t) = (d/dt) exp(−200 t) = −200 exp(−200 t) = −200 y2

as required. The initial conditions are also satisfied: y1(0) = 1 + 1 = 2 and
y2(0) = 1.

Answer to Exercise 2. DYNAST code should be as follows:

*:example comparing constant vs. varying time step (t=0 to 0.1)


*SYSTEM;
SYSVAR y1, y2;
:
: differential equation to be solved
0 = - VD.y1 -0.1*y1 -199.9*y2;
0 = - VD.y2 -200.0*y2;
:
: have values of exact analytical solution for comparison
yexact1 = ( exp(-0.1*time) + exp(-200*time) );
yexact2 = exp(-200*time);
:
: compute errors between exact and numerical solutions
error1 = (y1 - yexact1);
error2 = (y2 - yexact2);
:
:
*TR; TR 0 0.1; : time ranges from 0 to 0.1
INIT y1=2.0, y2=1.0; : initial values
PRINT y2, yexact2, error2;
:
RUN MIN=10, MAX=10; : set MAX=MIN=( no. of steps - 1)
*END;

Figure 11: Plot of the numerical solutions of equation (4) using DYNAST with
a variable time step method.

Answer to Exercise 3. Continuous plots of y1 and y2 appear in Figure 11.


Answer to Exercise 4. To obtain an estimate for the number of steps re-
quired, observe that in the output file for the variable step method the very
first step length is 1.0E-6. Since the time range interval is 0.1 seconds, a
constant time step method with that step length requires 0.1/1.0E-6 = 1.0E5
steps. However, this turns out to be an overestimate, giving a maximum error
of 3.678490E-005, which is much smaller than the required 4.739823E-004.
In fact, to obtain accuracy comparable to the variable step method calculation
one needs to use the constant time step method with about 7800 steps. The
following DYNAST input code uses a constant time step method with 7800 steps.
*:example comparing constant vs. varying time step (t=0 to 0.1)
*SYSTEM;
SYSVAR y1, y2;
:
: differential equation to be solved
0 = - VD.y1 -0.1*y1 -199.9*y2;
0 = - VD.y2 -200.0*y2;
:
: have values of exact analytical solution for comparison
yexact1 = ( exp(-0.1*time) + exp(-200*time) );
:
: compute errors between exact and numerical solutions
error1 = (y1 - yexact1);
:
:
*TR; TR 0 0.1; : time ranges from 0 to 0.1
INIT y1=2.0, y2=1.0; : initial values
PRINT y1, yexact1, error1;
:
RUN MIN=7800, MAX=7800; : set MAX=MIN=( no. of steps - 1)
*END;

The output is as follows

# example comparing constant vs. varying time step (t=0 to 0.1)

X ... TIME
1 ... y1
2 ... yexact1
3 ... error1

X 1 2 3
MAX 2.000000E+000 2.000000E+000 4.711374E-004
MIN 9.900498E-001 9.900498E-001 0.000000E+000
NP 7801

Statistics: 7800 steps, 0 rejected steps, 7801 iterations


Order: 1 2 3 4 5 6
Steps: 7800 0 0 0 0 0

Errors: 0, Warnings: 0
Total seconds used up by DYNAST: 1.58
Program DYNAST exited on Tuesday, 7 March 2000, at 10:22:43

Notice that the value of the maximum of error1 is comparable to the variable
step method but the total number of steps is very large (7800) and the compu-
tational time is larger (1.58 seconds) than the time for the variable step method
of similar accuracy (0.01 seconds).

End of answers.
