T. Romańczukiewicz
Jagiellonian University
2010/2011
Plan
1 Organization
2 Introduction
3 Numerical derivative
Outline
Software
Numerical routines should be efficient and accurate.
General-purpose software usually gives accuracy xor efficiency: one or the other, not both.
Mathematica and Maple are useful only for small numerical problems. They consume huge amounts of memory and perform lots of unnecessary calculations.
Matlab, and its free clone Octave, are much more efficient, but still run quite slowly.
C and Fortran libraries are a much better choice for large problems, but one needs to run a few tests first.
Differential equations:
Ordinary differential equations (ODEs):
Newton's law (usually an initial value problem):
\[ m\,\frac{d^2\vec{x}}{dt^2} = \vec{F}(\vec{x}, t) \]
static solutions to 1-d Schrödinger equations (usually two-point boundary conditions):
\[ -\frac{\hbar^2}{2m}\,\psi'' + V(r)\,\psi = E\psi \]
many other static 1-d problems
Partial differential equations (PDEs)
wave equations (+solitonic equations)
\[ u_{tt} - c^2 u_{xx} = 0 \]
diffusion equation (similar to Schrödinger eq)
\[ u_t = u_{xx} \]
Poisson’s equation
\[ \nabla^2 u(\vec{x}) = f(\vec{x}) \]
... and many others, including hydrodynamics, general relativity, solid state physics ...
Plan
1 Organization
2 Introduction
3 Numerical derivative
Simple finite difference
Differential matrices
Spectral methods
Automatic differentiation
Summary
In order to solve differential equations (both ODEs and PDEs) one has to calculate derivatives numerically.
The most straightforward method is to use the definition
\[ f'(x) = \lim_{h\to 0}\frac{f(x+h)-f(x)}{h} = \lim_{h\to 0}\frac{f(x)-f(x-h)}{h}. \tag{1} \]
But numerically we cannot take h as small as we want.
Numerical error:
\[ \epsilon = \epsilon_{\mathrm{trunc}} + \epsilon_{\mathrm{prec}}. \tag{2} \]
Truncation error. From the Taylor series we know:
\[ \frac{f(x+h)-f(x)}{h} = \frac{f(x) + hf'(x) + \frac{1}{2!}h^2 f''(x) + \cdots - f(x)}{h} = f'(x) + \frac{1}{2}hf''(x) + \cdots \tag{3} \]
so
\[ \epsilon_{\mathrm{trunc}} = \frac{1}{2}h\,|f''(x)|. \tag{4} \]
Adding the precision (rounding) error, with machine epsilon \(\epsilon_m\):
\[ \epsilon \sim \frac{1}{2}h\,|f''(x)| + \frac{\epsilon_m |f(x)|}{h}. \tag{5} \]
The smallest error is for h of order
\[ h_{\mathrm{opt}} \sim \sqrt{\frac{\epsilon_m f}{f''}}. \tag{6} \]
For \(f \sim f'' \sim 1\) we lose eight digits of precision.
For solving an equation we would need the derivative at each step, from one point to the next. That would give 10^8 points per unit interval. The same would happen with the backward scheme. Combining the two schemes, however, gives
\[ \tilde{f}'(x) \simeq \frac{f(x+h) - f(x-h)}{2h}. \tag{7} \]
Now \(\epsilon_{\mathrm{trunc}} \sim f''' h^2\), and the optimal choice is \(h = O(\epsilon_m^{1/3})\), giving an error of order \(\epsilon_m^{2/3}\). A smaller error with the same computational effort!
Of course we can go even further: instead of two points we can use 2N points x ± h, x ± 2h, ..., and seek the derivative in the following form:
\[ \tilde{f}'(x) = \sum_{n=1}^{N} a_n \bigl[ f(x+nh) - f(x-nh) \bigr]. \tag{8} \]
This would give accuracy of order \(\epsilon_N = O\bigl(\epsilon_m^{N/(N+1)}\bigr)\) and \(h_{\mathrm{opt}} = O\bigl(\epsilon_m^{1/(N+1)}\bigr)\).
The coefficients \(a_n\) can be found by expanding the scheme in a Taylor series and cancelling as many terms as possible, or by using the Lagrange approximation method.
However, only small N can be used, due to the Runge phenomenon (this will be explained later).
For N = 2 we have the so-called five-point stencil:
\[ \tilde{f}'(x) \simeq \frac{2}{3h}\bigl[f(x+h)-f(x-h)\bigr] - \frac{1}{12h}\bigl[f(x+2h)-f(x-2h)\bigr]. \tag{9} \]
These schemes can be used to construct stepping methods for solving initial-value problems (e.g. Euler's method).
Sometimes one needs to compute the derivative at every point of some grid. For a uniform grid \(x_n = nh\) we can construct a vector of function values \(y_n = f(x_n)\) and write a differential matrix \(\hat{D}\vec{y} = \vec{y}\,'\) for each of the schemes.
For the forward difference scheme:
\[ y_n' = \frac{y_{n+1} - y_n}{h} \tag{10} \]
\[ \hat{D}_f = \frac{1}{h}\begin{pmatrix} -1 & 1 & 0 & 0 & \cdots \\ 0 & -1 & 1 & 0 & \cdots \\ 0 & 0 & -1 & 1 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix} \tag{11} \]
For the symmetric central scheme:
\[ y_n' = \frac{y_{n+1} - y_{n-1}}{2h} \tag{12} \]
\[ \hat{D}_c = \frac{1}{2h}\begin{pmatrix} 0 & 1 & 0 & 0 & \cdots \\ -1 & 0 & 1 & 0 & \cdots \\ 0 & -1 & 0 & 1 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix} \tag{13} \]
Note that to achieve decent accuracy the matrix must be huge. An advantage is that it is sparse.
The product of the forward and backward difference matrices gives a second-derivative matrix:
\[ \hat{D}_f \hat{D}_b = \frac{1}{h^2}\begin{pmatrix} 1 & -2 & 1 & 0 & \cdots \\ 0 & 1 & -2 & 1 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix} \tag{14} \]
One can use these matrices to solve differential equations, especially two-point boundary-value problems.
Ridders’ method
If one needs to calculate a derivative at a single point with high precision, one can use an extrapolation method (similar to Romberg's method) based on Richardson's algorithm.
One seeks to extrapolate, to h → 0, the result of finite-difference calculations with smaller and smaller finite values of h. By the use of Neville's algorithm, each new finite-difference calculation produces both an extrapolation of higher order and extrapolations of the previous, lower orders but with smaller scales h.
\[ \begin{array}{lllll}
D_h = p_{00} \\
D_{h/a} = p_{10} & p_{11} \\
D_{h/a^2} = p_{20} & p_{21} & p_{22} \\
D_{h/a^3} = p_{30} & p_{31} & p_{32} & p_{33}
\end{array} \tag{15} \]
where
\[ p_{i0} = \frac{f(x + a^{-i}h) - f(x - a^{-i}h)}{2a^{-i}h}, \qquad
   p_{ij} = \frac{a^{j}\, p_{i,j-1} - p_{i-1,j-1}}{a^{j} - 1}, \quad 0 < j \le i. \tag{16} \]
\(p_{NN}\) is the sought derivative, calculated with high precision; a = 1.4 is a scaling factor.
This method is, however, useless for solving ODEs, but a similar extrapolation strategy is used by some high-precision algorithms.
T. Romańczukiewicz Num. Meth. 01
Example
Let us consider two functions
both are even, but only f1(x) is periodic with period π. Let's take a look at the expansion:
\[ f(x) = \sum_{n=0}^{N} c_n \cos(nx) \]
Example, cont.
\[ f_1(x) = \frac{1 - e^{-\pi}}{\pi} + \frac{2}{\pi}\sum_{n=1}^{\infty} \frac{1 - (-1)^n e^{-\pi}}{1 + n^2}\,\cos(nx) \]
We don't know the analytic form of the coefficients for f2, but we can perform a Fourier transform (later).
n cn (f1 ) cn (f2 )
0 3.05831387e-01 1.26606588e+00
1 1.67428418e-01 -5.65159104e-01
2 6.22007344e-02 1.35747670e-01
3 3.46308157e-02 -2.21684249e-02
4 1.92448433e-02 2.73712022e-03
5 1.42559066e-02 -2.71463156e-04
6 9.63230727e-03 2.24886615e-05
7 8.22303205e-03 -1.59921823e-06
8 6.19684873e-03 9.96062403e-08
9 5.77791126e-03 -5.51838584e-09
10 4.68982313e-03 2.75294805e-10
11 4.66541353e-03 -1.24897784e-11
12 4.01424819e-03 5.19545313e-13
13 4.21502377e-03 -1.99897733e-14
14 3.81796182e-03 1.39967403e-15
Example
Note that the coefficients \(c_n\) for f2(x) decay very fast, so using just 15 of them we obtain accuracy of order \(10^{-15}\). We can calculate the derivative using
\[ f'(x) = -\sum_{n=1}^{N} n\, c_n \sin(nx) \tag{18} \]
Automatic differentiation
One can make use of some properties of derivatives and advantages of some
object-oriented languages.
Let us create a vector consisting of a value of a function and derivative.
We can now overload functions and operators +,-,*,/ using the same rules as for
derivatives:
\[ x = \begin{pmatrix} x \\ 1 \end{pmatrix}, \qquad
   f = \begin{pmatrix} f(x) \\ f'(x) \end{pmatrix}, \qquad
   h(f) = \begin{pmatrix} h(f(x)) \\ h'(f(x))\, f'(x) \end{pmatrix} \tag{19} \]
\[ \sin\begin{pmatrix} f(x) \\ f'(x) \end{pmatrix} = \begin{pmatrix} \sin(f(x)) \\ \cos(f(x))\, f'(x) \end{pmatrix}, \qquad
   \begin{pmatrix} a_1 \\ b_1 \end{pmatrix} \cdot \begin{pmatrix} a_2 \\ b_2 \end{pmatrix} = \begin{pmatrix} a_1 a_2 \\ b_1 a_2 + a_1 b_2 \end{pmatrix} \tag{20} \]
class AD
{
public:
    // constructors
    AD() { u = du = 0; }
    AD(double a) { u = a; du = 0; }              // constant
    AD(double a, double da) { u = a; du = da; }

    //------------------------------------------------------------
    friend ostream& operator<<( ostream&, AD );
    friend AD operator*( double, AD );

    AD operator + ( AD );   // add
    AD operator - ( AD );   // subtract
    AD operator * ( AD );   // multiply
    AD operator / ( AD );   // divide

    //------------------------------------------------------------
    // some functions
    friend AD sin( AD );
    friend AD cos( AD );
    friend AD log( AD );
    friend AD exp( AD );

private:
    double u, du;
};
AD AD::operator + (AD a)
{
    AD result( a.u + u, a.du + du );
    return result;
}

AD exp(AD x)
{
    AD y;
    y.u  = exp(x.u);
    y.du = x.du * exp(x.u);
    return y;
}

int main()
{
    AD x(1, 1), b, c;    // the variable x at the point x = 1
    b = x;
    c = 2.0 * x * b;
    cout << x << b << c;
    cout << setprecision(15) << cos(2.0 * exp( -3.0*x )) * x;
    return 0;
}
Summary