Heraclitus, a famous Greek philosopher, once said, "Everything changes, and nothing stands still."
Indeed, rates of change are essential to many principles and laws describing the behavior of the real
world. Differential equations, containing derivatives, encapsulate these changes in mathematical terms.
They are useful because, through human observation of the natural world, rates of change are easily
discovered. Their broad applicability in helping to model processes in almost every scientific field,
whether it be the motion of fluids or the growth of economies, motivates the study of their solutions as
an important field in applied mathematics. The investigation of differential equations is, for me, an
important endeavor as someone interested in physics because differential equations and their solutions
are cornerstones of the mathematical models that inform physics.
By solutions of differential equations I mean the set of functions that satisfy the differential
equation, often only on a specific interval. Because differential equations involve derivatives of functions,
integrating is required to solve these equations. It is not always feasible or possible to integrate and
resolve functions analytically into an expression involving only a finite number of operations. In fact, a
very small subset of differential equations can be solved explicitly. Therefore, for most differential equations
with real-world applications, one needs to approximate the solutions numerically. The famous
mathematician Leonhard Euler published one such method of approximation, which led to a
variety of other techniques being developed. Among them are Runge-Kutta methods and multistep
methods. Today, these methods are coupled with the power of computers to help elucidate and modify
mathematical models of a vast array of behaviors and processes that are important to advancements in
knowledge.
However, because these solutions are only approximated,1 error analysis must be an
integral part of solving these equations numerically. Mathematicians are concerned with the size and
analytic form of errors. Each numerical method, depending on its complexity, has a different error term,
and thus quantifying and analytically representing these errors are important problems in mathematics.
For this essay, I will only be investigating first-order differential equations, those involving no derivative
higher than the first. This leads to the research question:
How can we use Taylor's Theorem to analyze and quantify truncation errors when using Runge-Kutta methods to approximate first-order differential equations?
When one mentions error in mathematics, one means the difference between the approximate
solution and the exact solution.2 In this essay, I am primarily concerned with truncation errors, the errors
due to the inability to calculate exact formulas that involve an infinite number of terms. The
truncation error can be reduced by performing more calculations, at the expense of more complexity and
computation time. What I am essentially doing is studying how the remainder of Taylor polynomials
diminishes as I progressively use more terms of the infinite Taylor series in the quest for more accurate results.
Due to a very powerful theorem called Taylor's Theorem, it is possible to analyze truncation error
quantitatively; I can even provide an upper bound for the errors. In this essay I will show how Taylor's
Theorem provides a general framework for deriving an analytical formula for the truncation errors of
different methods of approximating the solutions to a differential equation, illustrated with specific
methods, of which there are two broad types to compare and contrast: the Runge-Kutta methods and
the multistep methods.
1 http://homepage.math.uiowa.edu/~atkinson/NA_Overview.pdf
2 http://www.math.unl.edu/~gledder1/Math447/EulerError
Definitions of Errors
I am interested in the local and global truncation errors of different methods of approximating
solutions of differential equations. Suppose I have the initial value problem

$$y'(t) = f(t, y(t)), \qquad y(t_0) = y_0,$$

where the exact solution $y(t)$ is unknown. I know the value $y_0$ at time $t_0$, and I want to find how
closely the approximation $y_n$ approaches the real value $y(t_n)$ at the times $t_n = t_0 + nh$, where $h$
is the step size. A one-step method advances the approximation by an increment function $\Phi$:

$$y_{n+1} = y_n + h\,\Phi(t_n, y_n, h).$$

The function $\Phi$ can be as simple as the first derivative, $\Phi = f(t_n, y_n)$ (which yields Euler's
method), or much more complicated, yielding different methods.

The increment function is only an approximation of the real change. The local truncation error is
defined to be the error committed in a single step, starting from the exact value:

$$\tau_{n+1} = y(t_{n+1}) - \bigl[\,y(t_n) + h\,\Phi(t_n, y(t_n), h)\,\bigr].$$

The global truncation error is the error accumulated over all steps. If $y_n$ is the computed
approximation, then the global truncation error at time $t_n$ is

$$e_n = y(t_n) - y_n.$$
Big O Notation
To properly analyze how quickly an error term shrinks, we use big O notation to compare the
growth of the error term, which is usually the tail of an infinite series, against a single reference function.

If we define functions $f(x)$ and $g(x)$, we write $f(x) = O(g(x))$ as $x \to 0$ if there exist constants
$C > 0$ and $\delta > 0$ such that

$$|f(x)| \le C\,|g(x)| \quad \text{whenever } |x| < \delta.$$

For example, $x^3 + x^2 = O(x^2)$ as $x \to 0$, because close to $0$, say for $|x| < 1$, we have

$$|x^3 + x^2| \le |x|^3 + |x|^2 \le |x|^2 + |x|^2 = 2|x|^2,$$

which means that3 we can force the required bound with $C = 2$ on that neighborhood.

3 http://www.math.fsu.edu/~pkirby/mad2104/SlideShow/s4_4.pdf
Good numerical methods for approximating the solution of a differential equation are consistent and
convergent. A numerical method is consistent if its local truncation error is in $O(h^2)$, so that the error
per step, divided by $h$, vanishes as $h \to 0$. If, as the step size approaches 0, the global truncation
error of a numerical method also approaches zero, then the numerical method is convergent. A method is
said to be of order $p$ if its global truncation error is in $O(h^p)$. Thus,
we can determine that the higher the order of a method, the more accurate it is, because the error rapidly
decays to 0 for small step sizes $h$.
Taylor's Theorem
In order to address the research question, we must define the terms: initial value problem,
first-order differential equation, and Taylor series. In this essay, we mean an initial value problem4 to be
an ordinary differential equation taken together with an initially specified value, the initial condition. A
first-order differential equation is a differential equation of the form $y'(t) = f(t, y(t))$. A Taylor series is
an infinite power series involving all of a function's derivatives at a single point that, for many functions,
closely approximates their behavior around that point, better than any other polynomial approximation.

The Taylor series of a function $f$ around $x = a$ is:

$$f(x) = f(a) + f'(a)(x-a) + \frac{f''(a)}{2!}(x-a)^2 + \frac{f'''(a)}{3!}(x-a)^3 + \cdots$$

Then it is important that we ask how good these approximations are. Because no computer can
calculate the exact value of this power series to infinity, there will be errors. Taylor's Theorem tells us
that when the series is truncated after the degree-$n$ term, the error will be:

$$R_n(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!}(x-a)^{n+1}$$

for some $\xi$ between $a$ and $x$.
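As a quick numerical illustration of the remainder bound (my own sketch; the function $\sin$, the point $x = 0.5$, and the degree $n = 5$ are illustrative choices, not taken from the essay), we can compare the actual truncation error of a Taylor polynomial with the Lagrange bound $|R_n(x)| \le \max|f^{(n+1)}|\,|x-a|^{n+1}/(n+1)!$:

```python
import math

def taylor_sin(x, n):
    """Degree-n Taylor polynomial of sin around a = 0."""
    total = 0.0
    for k in range(n + 1):
        if k % 2 == 1:  # sin has only odd-degree terms
            total += (-1) ** ((k - 1) // 2) * x ** k / math.factorial(k)
    return total

x, n = 0.5, 5
approx = taylor_sin(x, n)
actual_error = abs(math.sin(x) - approx)
# Lagrange bound: every derivative of sin is bounded by 1, so
# |R_n(x)| <= |x|^(n+1) / (n+1)!
bound = abs(x) ** (n + 1) / math.factorial(n + 1)
```

For sine, every derivative is bounded by 1, so the bound requires no further estimation of $f^{(n+1)}(\xi)$.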
Proof. We shall prove the theorem using mathematical induction on the degree of the polynomial.
What we have to prove is that for all natural numbers $k \le n$ we have:

$$f(t) = f(a) + f'(a)(t-a) + \frac{f''(a)}{2!}(t-a)^2 + \cdots + \frac{f^{(k)}(a)}{k!}(t-a)^k + \frac{1}{k!}\int_a^t f^{(k+1)}(s)(t-s)^k\,ds.$$

To prove this inductively, we first have to consider the basis case. Our basis case shall be $k = 0$,
where the claim reads

$$f(t) = f(a) + \int_a^t f'(s)\,ds.$$

This is exactly the second part of the fundamental theorem of calculus, which states that to find the
definite integral from $a$ to $t$ we subtract the values of an antiderivative at $t$, the upper bound, and
$a$, the lower bound.

Thus we have provided the basis case and the intuition for our proof. In order to prove the
inductive step, I am assuming that the hypothesis is true for $k$, and I integrate the remainder integral
by parts, taking $u = f^{(k+1)}(s)$ and $dv = \frac{(t-s)^k}{k!}\,ds$, so that $v = -\frac{(t-s)^{k+1}}{(k+1)!}$:

$$\frac{1}{k!}\int_a^t f^{(k+1)}(s)(t-s)^k\,ds = \frac{f^{(k+1)}(a)}{(k+1)!}(t-a)^{k+1} + \frac{1}{(k+1)!}\int_a^t f^{(k+2)}(s)(t-s)^{k+1}\,ds,$$

because the boundary term vanishes at $s = t$. This is precisely the statement for $k + 1$, which
completes the induction.

Finally, since $(t-s)^n$ does not change sign on the interval of integration, the mean value theorem
for integrals gives some $\xi$ between $a$ and $t$ where

$$\frac{1}{n!}\int_a^t f^{(n+1)}(s)(t-s)^n\,ds = \frac{f^{(n+1)}(\xi)}{n!}\int_a^t (t-s)^n\,ds = \frac{f^{(n+1)}(\xi)}{(n+1)!}(t-a)^{n+1},$$

which is the Lagrange form of the remainder stated above, with $f^{(n+1)}$ evaluated at $\xi$.
Equipped with Taylor's formula, we shall turn our attention to describing a variety of different
numerical methods that are used to approximate solutions to differential equations, and to expressing their
error analytically. First, however, let us look at the simplest special case of the theorem.
The most famous special case of Taylor's theorem is the Mean Value Theorem, which states that if a
function $f(x)$ is differentiable, then for every two points $a$ and $b$ there is a point $\xi$ between them
at which the derivative is equal to the slope of the secant line between the points $(a, f(a))$ and
$(b, f(b))$:

$$f(b) = f(a) + f'(\xi)(b-a) \quad \text{for some } \xi \in (a, b).$$

We can see that this fits Taylor's theorem with $n = 0$: the constant $f(a)$ is the degree-zero Taylor
polynomial for $f(x)$, and $f'(\xi)(b-a)$ is the remainder term in keeping with Taylor's Theorem.
This gives us a bound for the value of the error of this approximation to $f(b)$, because if we find
$M = \max |f'(x)|$ where $a \le x \le b$, then:

$$|f(b) - f(a)| \le M\,|b - a|.$$
Taylor's theorem, which gives us the analytic form of the error, along with big O notation, which
compares the rate of error growth to that of a reference function, allows us to analyze methods for their
errors.

While it will not be shown here, Taylor's formula also generalizes to functions of several variables,
involving their partial derivatives.
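The Mean Value Theorem bound can be checked numerically. This is a small sketch of my own (the function $\sin$ and the interval are illustrative choices, not from the essay): the change in the function over the interval never exceeds the maximum of $|f'|$ times the interval length.

```python
import math

# Numerical check of the bound |f(b) - f(a)| <= M * |b - a|,
# with M = max |f'| on [a, b], for f = sin (so f' = cos).
a, b = 0.2, 1.1
# Estimate M by sampling |cos| on a fine grid over [a, b].
M = max(abs(math.cos(a + (b - a) * k / 1000)) for k in range(1001))
lhs = abs(math.sin(b) - math.sin(a))
rhs = M * (b - a)
```
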
Euler's Formula
Euler's method uses information about the first derivative and the initial value, and assumes that
the function acts linearly locally in order to project the value of the function into the future. Suppose I
have a function $y(t)$ satisfying the differential equation $y'(t) = f(t, y(t))$ with the initial condition
$y(t_0) = y_0$. Euler's method advances the approximation by

$$y_{n+1} = y_n + h\,f(t_n, y_n).$$
Geometrically, the problem is to approximate the shape of a continuous curve using polygonal lines.
By taking short steps into the future, determining how the curve will change at each point using the
slope at that point, we can hopefully create a polygonal curve that does not deviate too far from the
curve we want to approximate.
To derive Euler's formula using Taylor's Theorem, I first assume that the true solution $y(t)$ has two
continuous derivatives, i.e. is twice differentiable, on the open interval on which I am approximating the
solution. Then, with Taylor's Theorem, I have

$$y(t_{n+1}) = y(t_n) + h\,y'(t_n) + \frac{h^2}{2}y''(\xi_n) \quad \text{for some } \xi_n \in (t_n, t_{n+1}),$$

where $y'(t_n) = f(t_n, y(t_n))$, and so on. Dropping the last term gives exactly Euler's step, so the
local truncation error is

$$\tau_{n+1} = \frac{h^2}{2}y''(\xi_n) = O(h^2).$$

This demonstrates that when $h$, the step size, approaches 0, the local truncation error is
proportional to $h^2$.

The global truncation error of Euler's method tells us the error accumulated by the method when I
approximate the solution at a final time $b$, starting from $a = t_0$: it combines the number of
steps and the local truncation error of one single step. If we set the step size to be $h$, then the
number of steps is $N = (b-a)/h$, so

$$e_N \approx N \cdot O(h^2) = \frac{b-a}{h}\,O(h^2) = O(h).$$

We can then expect the global truncation error to be proportional to $h$, one power of the step size
lower than the local error, meaning that the accumulated error shrinks more slowly for small $h$. This
shows that Euler's method is first order.
Figure 1. (figure not reproduced)
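A short sketch (my own illustration, not from the essay) makes the first-order behavior visible: on the test problem $y' = -y$, $y(0) = 1$, whose exact solution is $y(t) = e^{-t}$, halving the step size roughly halves the global error at $t = 1$.

```python
import math

def euler(f, t0, y0, h, n_steps):
    """Euler's method: y_{n+1} = y_n + h * f(t_n, y_n)."""
    t, y = t0, y0
    for _ in range(n_steps):
        y = y + h * f(t, y)
        t = t + h
    return y

# Test problem (my own illustrative choice): y' = -y, y(0) = 1.
f = lambda t, y: -y
exact = math.exp(-1.0)

# Global error at t = 1 for successively halved step sizes.
errors = [abs(euler(f, 0.0, 1.0, 1.0 / n, n) - exact) for n in (10, 20, 40)]
# First order: each halving of h should roughly halve the error.
ratios = [errors[i] / errors[i + 1] for i in range(2)]
```
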
Midpoint Method
If I know the derivative of the function at a point, then the function is locally linear, increasing or
decreasing on some interval around that point. Euler's method uses the slope at the left endpoint $t_n$
for the whole step, but the slope at the midpoint of the step is more representative of the average slope
over the interval. The midpoint method therefore first takes a half-step of Euler's method to approximate
the function at $t_n + h/2$, evaluates the slope there, and uses that slope instead in the projection to
find a more accurate value of $y_{n+1}$:

$$y_{n+1} = y_n + h\,f\!\left(t_n + \frac{h}{2},\; y_n + \frac{h}{2}f(t_n, y_n)\right).$$
Figure 2.7 (figure not reproduced)

I expect the error of the midpoint method to be less than that of Euler's method, because we are
sacrificing computational simplicity, two slope evaluations per step instead of one, for accuracy.
To find the error, I first find the Taylor series of the exact solution around the midpoint of the
interval, $t_{n+1/2} = t_n + h/2$. Expanding both endpoints around the midpoint gives

$$y(t_{n+1}) = y(t_{n+1/2}) + \frac{h}{2}y'(t_{n+1/2}) + \frac{h^2}{8}y''(t_{n+1/2}) + \frac{h^3}{48}y'''(\xi_1),$$

$$y(t_n) = y(t_{n+1/2}) - \frac{h}{2}y'(t_{n+1/2}) + \frac{h^2}{8}y''(t_{n+1/2}) - \frac{h^3}{48}y'''(\xi_2).$$

Subtracting the second equation from the first, the even-order terms cancel and I have

$$y(t_{n+1}) = y(t_n) + h\,y'(t_{n+1/2}) + O(h^3) = y(t_n) + h\,f\bigl(t_{n+1/2},\, y(t_{n+1/2})\bigr) + O(h^3).$$

We can see that the final expression is exactly the midpoint method, except that the method omits
the error term and replaces $y(t_{n+1/2})$ by an approximation. Since I approximate $y(t_{n+1/2})$ by
a half-step of Euler's method, I have to find the error of

$$y_{n+1/2} = y_n + \frac{h}{2}f(t_n, y_n)$$

relative to the exact value $y(t_{n+1/2})$, which I can expand from $y(t_n)$, known exactly due to the
given conditions of the initial value problem:

$$y(t_{n+1/2}) = y(t_n) + \frac{h}{2}y'(t_n) + O(h^2),$$

so the half-step approximation carries an error of only $O(h^2)$.

The several-variables version of Taylor's Theorem applies here. We want to see how this $O(h^2)$
error in the second argument of $f$ propagates through the slope evaluation. Expanding $f$ to first
order in its arguments,

$$f\bigl(t_{n+1/2},\, y(t_{n+1/2}) + O(h^2)\bigr) = f\bigl(t_{n+1/2},\, y(t_{n+1/2})\bigr) + \frac{\partial f}{\partial y}\cdot O(h^2) = f\bigl(t_{n+1/2},\, y(t_{n+1/2})\bigr) + O(h^2).$$

To find the local truncation error, I subtract the approximated value from the exact value of the
function:

$$\tau_{n+1} = y(t_{n+1}) - \left[y_n + h\,f\bigl(t_{n+1/2},\, y_{n+1/2}\bigr)\right] = h\cdot O(h^2) + O(h^3) = O(h^3).$$

I have established that the local truncation error of the midpoint method is in $O(h^3)$. The global
truncation error of the midpoint method is calculated in a similar manner to Euler's method: over
$N = (b-a)/h$ steps, the global truncation error will be

$$N \cdot O(h^3) = \frac{b-a}{h}\,O(h^3) = O(h^2).$$

Since the global truncation error is in $O(h^2)$, it is a second-order method.

7 http://web.mit.edu/10.10/www/Study_Guide/DiffEq.html
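The same experiment as before (again my own illustrative test problem $y' = -y$, $y(0) = 1$, not the essay's) shows the second-order behavior: halving $h$ divides the global error at $t = 1$ by roughly four.

```python
import math

def midpoint_step(f, t, y, h):
    """One step of the midpoint method: evaluate the slope at the
    interval's midpoint and use it for the full step."""
    y_half = y + (h / 2) * f(t, y)        # Euler half-step
    return y + h * f(t + h / 2, y_half)   # full step with midpoint slope

def midpoint(f, t0, y0, h, n_steps):
    t, y = t0, y0
    for _ in range(n_steps):
        y = midpoint_step(f, t, y, h)
        t += h
    return y

f = lambda t, y: -y
exact = math.exp(-1.0)
errors = [abs(midpoint(f, 0.0, 1.0, 1.0 / n, n) - exact) for n in (10, 20, 40)]
# Second order: each halving of h should roughly quarter the error.
ratios = [errors[i] / errors[i + 1] for i in range(2)]
```
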
We have found a way to provide the intuition for and quantify the errors of different methods of
approximating a numerical solution using Taylor series. Euler's method and the midpoint method are the
simplest members of the family of Runge-Kutta methods, which generalize them to any order. Because
they first predict a solution at a future time, then correct the slope by evaluating it at an interior
point, such methods are also called predictor-corrector methods. We are essentially projecting the
behavior of the function at a future time, then correcting that projection by exploiting the continuity of
the function, making sure that as the step size shrinks, we are getting closer to the true function.
Runge-Kutta methods use only the current value $y_n$ to advance to time $t_{n+1}$. Multistep
methods instead reuse several previously computed values. Integrating the differential equation
$y'(t) = f(t, y(t))$ over one step gives

$$y(t_{n+1}) = y(t_n) + \int_{t_n}^{t_{n+1}} f(t, y(t))\,dt.$$

The linear multistep method approximates the integral by interpolating the slope, or passing a
curve between the known values of $f$ at the times $t_n, t_{n-1}, \ldots$ A linear multistep method is a
linear sum of the approximated function values and the slope values at previous points. We can write the
general $s$-step linear multistep method, whose previous values I already know, as10:

$$\sum_{j=0}^{s} a_j\,y_{n+j} = h \sum_{j=0}^{s} b_j\,f(t_{n+j}, y_{n+j}),$$

where the coefficients $a_j$ and $b_j$ define the particular method.

http://www.academia.edu/200568/Linear_multistep_numerical_methods_for_ordinary_differential_equations
To find the error, I find the Taylor expansion of each term around $t_n$. Every term becomes a
series in $y(t_n)$ and its derivatives at that point, so I can combine the other numbers as constants.
Expanding $y(t_{n+j}) = y(t_n + jh)$ and its derivative:

$$y(t_n + jh) = y(t_n) + jh\,y'(t_n) + \frac{(jh)^2}{2!}y''(t_n) + \cdots,$$

$$y'(t_n + jh) = y'(t_n) + jh\,y''(t_n) + \frac{(jh)^2}{2!}y'''(t_n) + \cdots.$$

Therefore every single term in this sum consists of like terms, which means that when we combine them,
we have, according to Taylor's Theorem, a truncation error of the form

$$\sum_{j=0}^{s} a_j\,y(t_{n+j}) - h\sum_{j=0}^{s} b_j\,y'(t_{n+j}) = C_0\,y(t_n) + C_1\,h\,y'(t_n) + C_2\,h^2\,y''(t_n) + \cdots,$$

where

$$C_0 = \sum_{j=0}^{s} a_j \qquad \text{and} \qquad C_q = \sum_{j=0}^{s}\left(\frac{j^q}{q!}\,a_j - \frac{j^{q-1}}{(q-1)!}\,b_j\right) \quad (q \ge 1).$$

A method is of order $p$ when $C_0 = C_1 = \cdots = C_p = 0$, so that the leading surviving term,
$C_{p+1}h^{p+1}y^{(p+1)}(t_n)$, is the local truncation error.

10 ftp://130.149.13.120/pub/numerik/baerwolf/ode_nonstiff_I.pdf
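These order conditions can be checked mechanically. The sketch below is my own, using exact rational arithmetic and the coefficient convention assumed above; it evaluates $C_q$ for the coefficients of the two-step Adams-Bashforth method and confirms $C_0 = C_1 = C_2 = 0$ with leading error constant $C_3 = 5/12$.

```python
from fractions import Fraction
from math import factorial

def C(q, a, b):
    """Order-condition constant C_q for the linear multistep method
    sum_j a[j]*y_{n+j} = h * sum_j b[j]*f_{n+j}."""
    if q == 0:
        return sum(a)
    return sum(Fraction(j ** q, factorial(q)) * a[j]
               - Fraction(j ** (q - 1), factorial(q - 1)) * b[j]
               for j in range(len(a)))

# Two-step Adams-Bashforth: y_{n+2} - y_{n+1} = h*(3/2*f_{n+1} - 1/2*f_n)
a = [0, -1, 1]
b = [Fraction(-1, 2), Fraction(3, 2), 0]
constants = [C(q, a, b) for q in range(4)]
```
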
We can check that the interpolated slope gives the desired values $f(t_n, y_n)$ at time $t_n$ and
$f(t_{n+1}, y_{n+1})$ at time $t_{n+1}$. Thus, the two-step Adams-Bashforth method can be derived
using the general principle of linear multistep methods.
To derive it, I interpolate the slope with the line through the two known points $(t_n, f_n)$ and
$(t_{n+1}, f_{n+1})$, where $f_k = f(t_k, y_k)$:

$$p(t) = f_{n+1}\,\frac{t - t_n}{h} - f_n\,\frac{t - t_{n+1}}{h},$$

and then integrate this line in place of the true slope over the step from $t_{n+1}$ to $t_{n+2}$:

$$y_{n+2} = y_{n+1} + \int_{t_{n+1}}^{t_{n+2}} p(t)\,dt = y_{n+1} + h\left[\frac{3}{2}f_{n+1} - \frac{1}{2}f_n\right].$$

To verify the order, I substitute the coefficients $a_0 = 0$, $a_1 = -1$, $a_2 = 1$ and
$b_0 = -\tfrac{1}{2}$, $b_1 = \tfrac{3}{2}$, $b_2 = 0$ into the constants from the previous section:

$$C_0 = 0 - 1 + 1 = 0,$$

$$C_1 = \bigl(1\cdot(-1) + 2\cdot 1\bigr) - \left(-\tfrac{1}{2} + \tfrac{3}{2}\right) = 1 - 1 = 0,$$

$$C_2 = \left(\tfrac{1}{2}(-1) + \tfrac{4}{2}(1)\right) - \left(1\cdot\tfrac{3}{2}\right) = \tfrac{3}{2} - \tfrac{3}{2} = 0,$$

$$C_3 = \left(\tfrac{1}{6}(-1) + \tfrac{8}{6}(1)\right) - \left(\tfrac{1}{2}\cdot\tfrac{3}{2}\right) = \tfrac{7}{6} - \tfrac{3}{4} = \tfrac{5}{12}.$$

The local truncation error is therefore $\tfrac{5}{12}h^3 y'''(t_n) + O(h^4)$, which is in $O(h^3)$,
because the first three constants vanish. As before, accumulating $O(h^3)$ errors over $(b-a)/h$ steps
loses one power of $h$, so the global truncation error is in $O(h^2)$: the two-step Adams-Bashforth
method is, like the midpoint method, a second-order method.
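A minimal implementation of the two-step method (my own sketch; the test problem $y' = -y$ and the exact-value seeding of $y_1$ are illustrative assumptions, not from the essay) confirms the second-order behavior: halving $h$ roughly quarters the global error.

```python
import math

def adams_bashforth2(f, t0, y0, y1, h, n_steps):
    """Two-step Adams-Bashforth:
    y_{n+2} = y_{n+1} + h * (3/2 * f_{n+1} - 1/2 * f_n).
    Needs two starting values; y1 must come from a one-step method."""
    ys = [y0, y1]
    for n in range(n_steps - 1):
        t_n = t0 + n * h
        f_n = f(t_n, ys[-2])        # slope at t_n
        f_n1 = f(t_n + h, ys[-1])   # slope at t_{n+1}
        ys.append(ys[-1] + h * (1.5 * f_n1 - 0.5 * f_n))
    return ys

# Illustrative test problem: y' = -y, y(0) = 1, seeded with the
# exact value y(h) = exp(-h); compare against y(1) = exp(-1).
f = lambda t, y: -y
errs = []
for n in (10, 20):
    h = 1.0 / n
    ys = adams_bashforth2(f, 0.0, 1.0, math.exp(-h), h, n)
    errs.append(abs(ys[-1] - math.exp(-1.0)))
```
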
An Example
Consider the initial value problem

$$y' = -(t + y)^2 - (t + y) - 1, \qquad y(0) = -0.5,$$

whose exact solution is $y(t) = -t - \dfrac{1}{1 + e^t}$. Let the step size be $h = 0.5$, and approximate
the solution on $[0, 5]$ with Euler's method and the midpoint method:

t      Euler's method   Midpoint method   Real value
0.0    -0.5             -0.5              -0.5
0.5    -0.875           -0.876953         -0.877541
1.0    -1.25781         -1.26847          -1.26894
1.5    -1.66214         -1.68285          -1.68243
2.0    -2.09421         -2.12069          -2.1192
2.5    -2.55154         -2.57804          -2.57586
3.0    -3.0271          -3.04982          -3.04743
3.5    -3.51392         -3.53155          -3.52931
4.0    -4.00706         -4.01988          -4.01799
4.5    -4.50355         -4.51249          -4.51099
5.0    -5.00178         -5.00783          -5.00669

The two-step Adams-Bashforth method is started with the values $y_0 = -0.5$ and $y_1 = -0.876953$
taken from the midpoint column, and then advanced with
$y_{n+2} = y_{n+1} + h\left[\tfrac{3}{2}f_{n+1} - \tfrac{1}{2}f_n\right]$.
This illustrates the fact that the midpoint method is more accurate than Euler's method
and is roughly as accurate as the two-step Adams-Bashforth method.
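The table can be reproduced with a few lines of code. This sketch assumes the initial value problem and exact solution stated above, and checks the $t = 1.0$ row:

```python
import math

def f(t, y):
    # Right-hand side: y' = -(t + y)^2 - (t + y) - 1
    v = t + y
    return -v * v - v - 1.0

def exact(t):
    # Exact solution y(t) = -t - 1/(1 + e^t)
    return -t - 1.0 / (1.0 + math.exp(t))

h = 0.5
t, y_euler, y_mid = 0.0, -0.5, -0.5
for _ in range(2):  # two steps advance both methods to t = 1.0
    y_euler += h * f(t, y_euler)
    y_mid += h * f(t + h / 2, y_mid + (h / 2) * f(t, y_mid))
    t += h

# Table row t = 1.0: Euler -1.25781, midpoint -1.26847, real -1.26894
```
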
Conclusion
There is a vast range of methods one can use to approximate solutions to differential equations. The
more complex the calculations, i.e. the higher the order of the method, the more accurate a solution one
obtains. Runge-Kutta methods buy their accuracy with extra slope evaluations within each step, while
multistep methods reuse previously computed slopes; as the worked example shows, both can achieve
second order. To arrive at a suitable numerical method for approximating a differential equation, the
numerical analyst has to balance the need for accuracy against the computational power at their
disposal.