
Numerical Methods

Aaron Naiman
Jerusalem College of Technology
naiman@jct.ac.il
http://jct.ac.il/naiman

based on: Numerical Mathematics and Computing
by Cheney & Kincaid, (c) 1994
Brooks/Cole Publishing Company
ISBN 0-534-20112-1

Copyright (c) 2011 by A. E. Naiman

Taylor Series

  Definitions and Theorems
  Examples
  Proximity of x to c
  Additional Notes

Motivation

- Sought: cos(0.1)
- Missing: calculator or lookup table
- Known: cos for another (nearby) value, i.e., at 0
- Also known: lots of (all) derivatives at 0
- Can we use them to approximate cos(0.1)?
- What will be the worst error of our approximation?
- These techniques are used by computers, calculators and tables.

Taylor Series

- Series definition: if $f^{(k)}(c)$ exists for $k = 0, 1, 2, \ldots$, then:

    $f(x) \approx f(c) + f'(c)(x-c) + \frac{f''(c)}{2!}(x-c)^2 + \cdots
          = \sum_{k=0}^{\infty} \frac{f^{(k)}(c)}{k!}(x-c)^k$

- $c$ is a constant and much is known about it (the $f^{(k)}(c)$)
- $x$ is a variable near $c$, and $f(x)$ is sought
- With $c = 0$: Maclaurin series
- What is the maximum error if we stop after $n$ terms?
- Real life: crowd estimation: $100\mathrm{K} \pm 10\mathrm{K}$ vs. $100\mathrm{K} \pm 1\mathrm{K}$
- Key NM questions: What is the estimate? What is its max error?

Taylor Series -- cos x

[Plot: cos x on [-4, 4] together with the Taylor polynomials 1, 1 - x^2/2,
1 - x^2/2 + x^4/4!, and 1 - x^2/2 + x^4/4! - x^6/6!; each added term hugs
cos x over a wider interval around c = 0.]

Better and better approximation, near c and away.

Taylor's Theorem

- Theorem: if $f \in C^{n+1}[a,b]$ then

    $f(x) = \sum_{k=0}^{n} \frac{f^{(k)}(c)}{k!}(x-c)^k
          + \frac{f^{(n+1)}(\xi(x))}{(n+1)!}(x-c)^{n+1}$

  where $x, c \in [a,b]$, and $\xi(x)$ lies in the open interval between $x$ and $c$
- Notes:
  - $f \in C(X)$ means $f$ is continuous on $X$
  - $f \in C^k(X)$ means $f, f', f'', f^{(3)}, \ldots, f^{(k)}$ are continuous on $X$
  - $\xi = \xi(x)$, i.e., a point whose position is a function of $x$
  - the error term is just like the other terms, with $k := n+1$
  - the $\xi$-term is the truncation error, due to series termination

Taylor Series -- Procedure

- Writing it out, step by step:
  - write the formula for $f^{(k)}(x)$
  - choose $c$ (if not already specified)
  - write out the summation and the error term
  - note: sometimes easier to write out a few terms
- Things to (possibly) prove by analyzing the worst case $\xi$:
  - letting $n \to \infty$:
    - LHS remains $f(x)$
    - summation becomes the infinite Taylor series
    - if the error term $\to 0$, the infinite Taylor series represents $f(x)$
  - for a given $n$, we can estimate the max of the error term

Taylor Series -- Examples

Taylor Series: e^x

- $f(x) = e^x$, $|x| < \infty$ $\Rightarrow$ $f^{(k)}(x) = e^x$, $\forall k$
- Choose $c := 0$
- We have

    $e^x = \sum_{k=0}^{n} \frac{x^k}{k!} + \frac{e^{\xi(x)}}{(n+1)!} x^{n+1}$

- As $n \to \infty$, take the worst case $\xi$ (just less than $x$):
  the error term $\to 0$ (why?)
- $\Rightarrow$

    $e^x = \sum_{k=0}^{\infty} \frac{x^k}{k!} = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots$

Taylor Series: sin x

- $f(x) = \sin x$, $|x| < \infty$ $\Rightarrow$
  $f^{(k)}(x) = \sin\left(x + \frac{k\pi}{2}\right)$, $\forall k$; $c := 0$
- We have

    $\sin x = \sum_{k=0}^{n} \frac{\sin\left(\frac{k\pi}{2}\right)}{k!} x^k
            + \frac{\sin\left(\xi(x) + \frac{(n+1)\pi}{2}\right)}{(n+1)!} x^{n+1}$

- Error term $\to 0$ as $n \to \infty$
- Even-$k$ terms are zero
- $\Rightarrow$ with $\ell = 0, 1, 2, \ldots$, and $k \mapsto 2\ell + 1$:

    $\sin x = \sum_{\ell=0}^{\infty} \frac{\sin\left(\frac{(2\ell+1)\pi}{2}\right)}{(2\ell+1)!} x^{2\ell+1}
            = \sum_{k=0}^{\infty} (-1)^k \frac{x^{2k+1}}{(2k+1)!}
            = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots$

Taylor Series: cos x

- $f(x) = \cos x$, $|x| < \infty$ $\Rightarrow$
  $f^{(k)}(x) = \cos\left(x + \frac{k\pi}{2}\right)$, $\forall k$; $c := 0$
- We have

    $\cos x = \sum_{k=0}^{n} \frac{\cos\left(\frac{k\pi}{2}\right)}{k!} x^k
            + \frac{\cos\left(\xi(x) + \frac{(n+1)\pi}{2}\right)}{(n+1)!} x^{n+1}$

- Error term $\to 0$ as $n \to \infty$
- Odd-$k$ terms are zero
- $\Rightarrow$ with $\ell = 0, 1, 2, \ldots$, and $k \mapsto 2\ell$:

    $\cos x = \sum_{\ell=0}^{\infty} \frac{\cos\left(\frac{(2\ell)\pi}{2}\right)}{(2\ell)!} x^{2\ell}
            = \sum_{k=0}^{\infty} (-1)^k \frac{x^{2k}}{(2k)!}
            = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \cdots$

Numerical Example: cos(0.1)

- We have 1) $f(x) = \cos x$ and 2) $c = 0$;
  obtain the series: $\cos x = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \cdots$
- Actual value: $\cos(0.1) = 0.99500416527803\ldots$
- With 3) $x = 0.1$ and 4) specific $n$s, from the Taylor approximations:

    n*     approximation        |error| <=
    0, 1   1                    0.01/2!
    2, 3   0.995                0.0001/4!
    4, 5   0.99500416           0.000001/6!
    6, 7   0.99500416527778     0.00000001/8!
    ...    ...                  ...

  (* each row includes the odd $k$, whose terms vanish)

Obtain an accurate approximation easily and quickly.

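A quick way to reproduce this table -- a minimal Python sketch (ours, not
from the slides; the helper name cos_taylor is hypothetical):

    import math

    def cos_taylor(x, n):
        """Maclaurin partial sum of cos through degree n."""
        return sum((-1)**k * x**(2*k) / math.factorial(2*k)
                   for k in range(n // 2 + 1))

    x = 0.1
    for n in (1, 3, 5, 7):
        approx = cos_taylor(x, n)
        print(n, approx, abs(math.cos(x) - approx))

Each printed error stays below the corresponding bound in the table.
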
Taylor Series: (1-x)^{-1}

- $f(x) = \frac{1}{1-x}$, $|x| < 1$ $\Rightarrow$
  $f^{(k)}(x) = \frac{k!}{(1-x)^{k+1}}$, $\forall k$; choose $c := 0$
- We have

    $\frac{1}{1-x} = \sum_{k=0}^{n} x^k
      + \frac{(n+1)!}{(1-\xi(x))^{n+2}} \cdot \frac{x^{n+1}}{(n+1)!}
      = \sum_{k=0}^{n} x^k
      + \left(\frac{x}{1-\xi(x)}\right)^{n+1} \frac{1}{1-\xi(x)}$

- Why bother, with the LHS so simple? Ideas?
- Sufficient: $\left|\frac{x}{1-\xi(x)}\right|^{n+1} \to 0$ as $n \to \infty$
- For what range of $x$ is this satisfied?

Need to determine the radius of convergence.

(1-x)^{-1} -- Range of Convergence

- Sufficient: $\left|\frac{x}{1-\xi}\right| < 1$
- Approach:
  - get the variable $x$ in the middle of the sufficiency inequality
  - transform the range of the $\xi$ inequality to the LHS and RHS of the
    sufficiency inequality
  - require a restriction on $x$
  - but check if it is already satisfied
- $|\xi| < 1$ $\Rightarrow$ $1 - \xi > 0$ $\Rightarrow$ sufficient:
  $-(1-\xi) < x < 1-\xi$

(1-x)^{-1} -- Range of Convergence (cont.)

- Case $x < \xi < 0$:
  - LHS: $-(1-x) < -(1-\xi) < -1$; require: $-1 \le x$  ✓
  - RHS: $1 < 1-\xi < 1-x$; require: $x \le 1$  ✓
- Case $0 < \xi < x$:
  - LHS: $-1 < -(1-\xi) < -(1-x)$; require: $-(1-x) \le x$, or: $-1 \le 0$  ✓
  - RHS: $1-x < 1-\xi < 1$; require: $x \le 1-x$, or: $x \le \frac{1}{2}$
- Therefore, for $-1 < x \le \frac{1}{2}$:

    $\frac{1}{1-x} = \sum_{k=0}^{\infty} x^k = 1 + x + x^2 + x^3 + \cdots
    \qquad \left(\text{Zeno: } x = \tfrac{1}{2}, \ldots\right)$

Need more analysis for the whole range $|x| < 1$.

Taylor Series: ln x

- $f(x) = \ln x$, $0 < x \le 2$ $\Rightarrow$
  $f^{(k)}(x) = (-1)^{k-1} \frac{(k-1)!}{x^k}$, $\forall k \ge 1$
- Choose $c := 1$
- We have

    $\ln x = \sum_{k=1}^{n} (-1)^{k-1} \frac{(x-1)^k}{k}
           + (-1)^n \frac{1}{n+1} \cdot \frac{(x-1)^{n+1}}{\xi^{n+1}(x)}$

- Sufficient: $\left|\frac{x-1}{\xi(x)}\right|^{n+1} \to 0$ as $n \to \infty$
- Again, for what range of $x$ is this satisfied?

ln x -- Range of Convergence

- Sufficient: $\left|\frac{x-1}{\xi}\right| < 1 \iff 1-\xi < x < 1+\xi$
- Case $1 < \xi < x$:
  - LHS: $1-x < 1-\xi < 0$; require: $0 \le x$  ✓
  - RHS: $2 < 1+\xi < 1+x$; require: $x \le 2$  ✓
- Case $x < \xi < 1$:
  - LHS: $0 < 1-\xi < 1-x$; require: $1-x \le x$, or: $\frac{1}{2} \le x$
  - RHS: $1+x < 1+\xi < 2$; require: $x \le 1+x$  ✓
- Therefore, for $\frac{1}{2} \le x \le 2$:

    $\ln x = \sum_{k=1}^{\infty} (-1)^{k-1} \frac{(x-1)^k}{k}
           = (x-1) - \frac{(x-1)^2}{2} + \frac{(x-1)^3}{3} - \cdots$

Again, need more analysis for the entire range of $x$.

Ratio Test and ln x Revisited

- Theorem: $\left|\frac{a_{n+1}}{a_n}\right| \to r < 1$
  $\Rightarrow$ the partial sums converge
- $\ln x$: ratio of adjacent summand terms (not the error term):

    $\left|\frac{a_{n+1}}{a_n}\right| = |x-1| \cdot \frac{n}{n+1} \to |x-1|$

- Obtain convergence of the partial sums for $0 < x < 2$
  - note: not looking at $\xi$ and the error term
- $x = 2$: $1 - \frac{1}{2} + \frac{1}{3} - \cdots$, which is convergent (why?)
- $x = 0$: same series, all terms the same sign -- the divergent harmonic series
- $\Rightarrow$ we have $0 < x \le 2$

(1-x)^{-1} Revisited

- Letting $x \mapsto (1-x)$ in the series for $\ln x$:

    $\ln(1-x) = -\left(x + \frac{x^2}{2} + \frac{x^3}{3} + \cdots\right),
    \quad -1 \le x < 1$

- $\frac{d}{dx}$: LHS $= -\frac{1}{1-x}$ and
  RHS $= -\left(1 + x + x^2 + x^3 + \cdots\right)$
- Warning: no "=" for $x = -1$, as the RHS oscillates (note: correct
  average value)
- $\forall |x| < 1$ we have (also with the ratio test):

    $\frac{1}{1-x} = 1 + x + x^2 + x^3 + \cdots$

Taylor Series -- Proximity of x to c

Proximity of x to c

- Problem: approximate $\ln 2$
- Solution 1: Taylor $\ln(1+x)$ around 0, with $x = 1$:

    $\ln 2 = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \frac{1}{5}
           - \frac{1}{6} + \frac{1}{7} - \frac{1}{8} + \cdots$

- Solution 2: Taylor $\ln\left(\frac{1+x}{1-x}\right)$ around 0, with $x = \frac{1}{3}$:

    $\ln 2 = 2\left(3^{-1} + \frac{3^{-3}}{3} + \frac{3^{-5}}{5}
           + \frac{3^{-7}}{7} + \cdots\right)$

Proximity of x to c (cont.)

- Approximated values, rounded:
  - Solution 1, first 8 terms: 0.63452
  - Solution 2, first 4 terms: 0.69313
  - Actual value, rounded: 0.69315
- $\Rightarrow$ importance of proximity of the evaluation and expansion points
- This error is in addition to the truncation error.

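Both solutions are two-line computations; a minimal Python check (ours):

    from math import log

    # Solution 1: ln(1+x) at x = 1, first 8 terms
    s1 = sum((-1)**(k - 1) / k for k in range(1, 9))

    # Solution 2: ln((1+x)/(1-x)) at x = 1/3, first 4 terms
    s2 = 2 * sum((1/3)**(2*k + 1) / (2*k + 1) for k in range(4))

    print(s1, s2, log(2))   # 0.63452..., 0.69313..., 0.69314...
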
Taylor Series -- Additional Notes

Polynomials and a Second Form

- Polynomials $\in C^{\infty}(-\infty, \infty)$, and have a finite number of
  non-zero derivatives
- $\Rightarrow$ Taylor series, $\forall c$, ... is the original polynomial,
  i.e., error $= 0$
- E.g., $f(x) = 3x^2 - 1$:

    $f(x) = \sum_{k=0}^{2} \frac{f^{(k)}(0)}{k!} x^k = -1 + 0 + 3x^2$

- Taylor's Theorem can be used for fewer terms,
  e.g.: approximate a $P_{17}$ near $c$ by a $P_3$
- Taylor's Theorem, second form ($x$ = constant expansion point,
  $h$ = distance, $x+h$ = variable evaluation point):
  if $f \in C^{n+1}[a,b]$ then

    $f(x+h) = \sum_{k=0}^{n} \frac{f^{(k)}(x)}{k!} h^k
            + \frac{f^{(n+1)}(\xi(h))}{(n+1)!} h^{n+1}$

  where $x, x+h \in [a,b]$, and $\xi(h)$ lies in the open interval between
  $x$ and $x+h$

Taylor Approximation: (1-3h)^{4/5}

- Define: $f(z) \equiv z^{4/5}$; $x = 1$ is the constant expansion point
- Derivatives: $f'(z) = \frac{4}{5} z^{-1/5}$,
  $f''(z) = -\frac{4}{5^2} z^{-6/5}$,
  $f'''(z) = \frac{24}{5^3} z^{-11/5}$, ...
- $\Rightarrow$:

    $(x+h)^{4/5} = x^{4/5} + \frac{4}{5} x^{-1/5} h
      - \frac{4}{2! \, 5^2} x^{-6/5} h^2
      + \frac{24}{3! \, 5^3} x^{-11/5} h^3 - \cdots$

    $(x-3h)^{4/5} = x^{4/5} - \frac{4}{5} x^{-1/5} \, 3h
      - \frac{4}{2! \, 5^2} x^{-6/5} \, 9h^2
      - \frac{24}{3! \, 5^3} x^{-11/5} \, 27h^3 - \cdots$

    $(1-3h)^{4/5} = 1 - \frac{4}{5} \, 3h - \frac{4}{2! \, 5^2} \, 9h^2
      - \frac{24}{3! \, 5^3} \, 27h^3 - \cdots
      = 1 - \frac{12}{5} h - \frac{18}{25} h^2 - \frac{108}{125} h^3 - \cdots$

Second Form: ln(e+h)

- Evaluation of interest: $\ln(e+h)$, for $-e < h \le e$
- Define: $f(z) \equiv \ln z$;
  $x = e$ is the constant expansion point (keeps $z = e+h > 0$)
- Derivatives:

    $f(z) = \ln z$                                   $f(e) = 1$
    $f'(z) = z^{-1}$                                 $f'(e) = e^{-1}$
    $f''(z) = -z^{-2}$                               $f''(e) = -e^{-2}$
    $f'''(z) = 2z^{-3}$                              $f'''(e) = 2e^{-3}$
    $f^{(n)}(z) = (-1)^{n-1} (n-1)! \, z^{-n}$       $f^{(n)}(e) = (-1)^{n-1} (n-1)! \, e^{-n}$

ln(e+h) -- Expansion and Convergence

- Expansion (recall: $x = e$):

    $\ln(e+h) \equiv f(x+h)
      = 1 + \sum_{k=1}^{n} \frac{(-1)^{k-1} (k-1)! \, e^{-k} \, h^k}{k!}
      + \frac{(-1)^n \, n! \, \xi(h)^{-(n+1)} \, h^{n+1}}{(n+1)!}$

  or

    $\ln(e+h) = 1 + \sum_{k=1}^{n} \frac{(-1)^{k-1}}{k} \left(\frac{h}{e}\right)^k
              + \frac{(-1)^n}{n+1} \left(\frac{h}{\xi(h)}\right)^{n+1}$

- Range of convergence, sufficient (for variable $h$): $-\xi < h < \xi$
- Case $e+h < \xi < e$: ... $-\frac{e}{2} \le h$
- Case $e < \xi < e+h$: ... $h \le e$

O() Notation and MVT

- As $h \to 0$, we write the speed of $f(h) \to 0$:

    $f(h) = O\left(h^k\right) \iff |f(h)| \le C |h|^k$

- E.g., $f(h)$: $h$, $\frac{1}{1000} h$, $h^2$; let
  $h = \frac{1}{10}, \frac{1}{100}, \frac{1}{1000}, \ldots$
- Taylor truncation error $= O\left(h^{n+1}\right)$; if for a given $n$ the
  max exists, then

    $C := \max_{\xi(h)} \left|f^{(n+1)}(\xi(h))\right| / (n+1)!$

- Mean value theorem (Taylor, $n = 0$): if $f \in C^1[a,b]$ then

    $f(b) = f(a) + (b-a) f'(\xi), \quad \xi \in (a,b)$

  or:

    $f'(\xi) = \frac{f(b) - f(a)}{b - a}$

Alternating Series Theorem

- Alternating series theorem: if $a_k > 0$, $a_k \ge a_{k+1}$, $\forall k \ge 0$,
  and $a_k \to 0$, then

    $\sum_{k=0}^{n} (-1)^k a_k \to S \quad \text{and} \quad |S - S_n| \le a_{n+1}$

- Intuitively understood
- Note: the direction of the error is also known for a specific $n$
- We had this with sin and cos
- Another useful method for max truncation error estimation
- Max truncation error estimation without $\xi$-analysis

ln(e+h) -- Max Truncation Error Estimate

- What is the max error after $n+1$ terms?
- The max error estimate also depends on proximity -- the size of $h$
- From Taylor: obtain $O\left(h^{n+1}\right)$:

    $|\mathrm{error}| \le \frac{1}{n+1} |h|^{n+1} \max_{\xi} \left|\xi\right|^{-(n+1)}$

- From the AST (check the conditions!): also obtain $O\left(h^{n+1}\right)$,
  with a different constant:

    $|\mathrm{error}| \le \frac{1}{n+1} \left|\frac{h}{e}\right|^{n+1}$

- E.g., $h = \frac{e}{2}$:

    $\ln \frac{3}{2} e = 1 + \frac{1}{2}
      - \frac{1}{2} \cdot \frac{1}{2^2} + \frac{1}{3} \cdot \frac{1}{2^3}
      - \frac{1}{4} \cdot \frac{1}{2^4} + \cdots$

- Taylor max error (occurs as $\xi \to e^+$):
  $\frac{1}{n+1} \cdot \frac{1}{2^{n+1}}$
- AST max error: $\frac{1}{n+1} \cdot \frac{1}{2^{n+1}}$
- Note: the same max error estimate here, but in general they can be very
  different

Base Representations

  Definitions
  Conversions
  Computer Representation
  Loss of Significant Digits

Number Representation

- Simple representation in one base does not imply simple representation in
  another base, e.g.:

    $(0.1)_{10} = (0.0\,0011\,0011\,0011\,\ldots)_2$

- Base 10:

    $37294 = 4 + 90 + 200 + 7000 + 30000
           = 4 \cdot 10^0 + 9 \cdot 10^1 + 2 \cdot 10^2 + 7 \cdot 10^3 + 3 \cdot 10^4$

  in general: $a_n \ldots a_0 = \sum_{k=0}^{n} a_k \, 10^k$

Fractions and Irrationals

- Base 10 fraction:

    $0.7217 = 7 \cdot 10^{-1} + 2 \cdot 10^{-2} + 1 \cdot 10^{-3} + 7 \cdot 10^{-4}$

- In general, for real numbers:

    $a_n \ldots a_0 . b_1 b_2 \ldots
      = \sum_{k=0}^{n} a_k \, 10^k + \sum_{k=1}^{\infty} b_k \, 10^{-k}$

- Note: $\exists$ numbers, i.e., irrationals, such that an infinite number of
  digits is required, in any rational base, e.g., $e$, $\pi$, $\sqrt{2}$
- Needing an infinite number of digits in a base does not imply irrational:
  $(0.333\ldots)_{10}$, but $\frac{1}{3}$ is not irrational

Other Bases

- Base 8: no 8 or 9, using octal digits:

    $(21467)_8 = \cdots = (9015)_{10}$

    $(0.36207)_8 = 8^{-5} \left(3 \cdot 8^4 + 6 \cdot 8^3 + 2 \cdot 8^2 + 0 \cdot 8 + 7\right)
                 = \frac{15495}{32768} = (0.47286\ldots)_{10}$

- Base 16: $0, 1, \ldots, 9$, A (10), B (11), C (12), D (13), E (14), F (15)
- Base $\beta$:

    $(a_n \ldots a_0 . b_1 \ldots)_{\beta}
      = \sum_{k=0}^{n} a_k \beta^k + \sum_{k=1}^{\infty} b_k \beta^{-k}$

- Base 2: just 0 and 1, or for computers: off and on;
  bit = binary digit

Base Representations -- Conversions

Conversion: Base 10 -> Base 2

- Basic idea (nested form; note $10 = (1010)_2$, $8 = (1000)_2$):

    $3781 = 1 + 10 \cdot (8 + 10 \cdot (7 + 10 \cdot 3)) = (111\,011\,000\,101)_2$

- Easy for a computer, but by hand, for $(3781.372)_{10}$:
  - integer part, by repeated division:

      2)3781
      2)1890   remainder 1 = $a_0$
      2) 945   remainder 0 = $a_1$
        ...

  - fraction part, by repeated multiplication:

      $0.372 \times 2$: $b_1 = 0$, keep .744
      $.744 \times 2$: $b_2 = 1$, keep .488 (drop the 1)
        ...

- Only useful for converting to a lower base (one-digit multiply/divide)

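The two hand schemes translate directly into code; a minimal Python sketch
(ours; the name to_base is hypothetical):

    def to_base(n, frac, base, digits=10):
        """Integer part by repeated division, fraction by repeated multiplication."""
        int_digits = []
        while n:
            n, r = divmod(n, base)       # remainder = next digit a_0, a_1, ...
            int_digits.append(r)
        frac_digits = []
        for _ in range(digits):
            frac *= base
            d = int(frac)                # carried digit b_1, b_2, ...
            frac -= d
            frac_digits.append(d)
        return int_digits[::-1], frac_digits

    print(to_base(3781, 0.372, 2))       # 111011000101, then b_1, b_2, ...
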
Base 8 Shortcut

- Base 2 <-> base 8: trivial

    $(551.624)_8 = (101\,101\,001.110\,010\,100)_2$

- 3 bits for every 1 octal digit
- One digit produced for every step in a (hand) conversion
- $\Rightarrow$ base 10 -> base 8 -> base 2 is faster

Base Representations -- Computer Representation

Computer Representation

- Scientific notation: $32.213 \equiv 0.32213 \times 10^2$
- In general:

    $x = 0.d_1 d_2 \ldots \times 10^n, \quad d_1 \ne 0,
    \quad \text{or:} \quad x = r \times 10^n, \quad \tfrac{1}{10} \le r < 1$

  we have a sign, a mantissa $r$ and an exponent $n$
- On the computer, base 2 is represented:

    $x = 0.b_1 b_2 \ldots \times 2^n, \quad b_1 \ne 0,
    \quad \text{or:} \quad x = r \times 2^n, \quad \tfrac{1}{2} \le r < 1$

- Finite number of mantissa digits, therefore roundoff or truncation error

Base Representations -- Loss of Significant Digits

LSD -- Addition

- Is $(a+b)+c = a+(b+c)$ on the computer?
- Six decimal digits for the mantissa:

    $1{,}000{,}000. + \underbrace{1. + \cdots + 1.}_{\text{million times}}
      = 1{,}000{,}000.$

  because

    $0.100000 \times 10^7 + 0.100000 \times 10^1 = 0.100000 \times 10^7$

  but

    $\underbrace{1. + \cdots + 1.}_{\text{million times}} + 1{,}000{,}000.
      = 2{,}000{,}000.$

Add numbers in size order.

LSD -- Subtraction

- E.g.: $x - \sin x$ for $x$s close to zero; $x = \frac{1}{15}$ (radians):

    $x          = 0.66666\,66667 \times 10^{-1}$
    $\sin x     = 0.66617\,29492 \times 10^{-1}$
    $x - \sin x = 0.00049\,37175 \times 10^{-1} = 0.49371\,75000 \times 10^{-4}$

- Note:
  - still have $10^{-10}$ precision (because there is no more info), but
  - can we rework the calculation for $10^{-13}$ precision?

Avoid subtraction of close numbers.

LSD Avoidance for Subtraction

- $x - \sin x$ for $x \approx 0$: use the Taylor series
  - no subtraction of close numbers
  - e.g., 3 terms: $0.49371\,74328 \times 10^{-4}$;
    actual: $0.49371\,74327 \times 10^{-4}$
- $e^x - e^{-2x}$ for $x \approx 0$: use the Taylor series twice and add
  common powers
- $\sqrt{x^2+1} - 1$ for $x \approx 0$: multiply by the conjugate:
  $\frac{x^2}{\sqrt{x^2+1}+1}$
- $\cos^2 x - \sin^2 x$ for $x \approx \frac{\pi}{4}$: $\cos 2x$
- $\ln x - 1$ for $x \approx e$: $\ln \frac{x}{e}$

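For the first item, a minimal Python comparison (ours) of the naive
subtraction against three series terms of
$x - \sin x = \frac{x^3}{3!} - \frac{x^5}{5!} + \frac{x^7}{7!} - \cdots$:

    import math

    x = 1 / 15
    naive  = x - math.sin(x)                    # subtraction of close numbers
    series = x**3/6 - x**5/120 + x**7/5040      # no cancellation
    print(naive, series)

In double precision both agree here; on a short-mantissa machine the series
version keeps the digits the subtraction loses.
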
Nonlinear Equations

  Motivation
  Bisection Method
  Newton's Method
  Secant Method
  Summary

Motivation

- For a given function $f(x)$, find its root(s), i.e.:
  find $x$ (or $r$ = root) such that $f(x) = 0$
- BVP: dipping of a suspended power cable; what is $\lambda$?

    $\lambda \cosh \frac{50}{\lambda} - \lambda - 10 = 0$

- (Some) simple equations we solve analytically:

    $6x^2 - 7x + 2 = 0 \;\Rightarrow\; (3x-2)(2x-1) = 0
      \;\Rightarrow\; x = \frac{2}{3}, \frac{1}{2}$

    $\cos 3x - \cos 7x = 0 \;\Rightarrow\; 2 \sin 5x \sin 2x = 0
      \;\Rightarrow\; x = \frac{n\pi}{5}, \frac{n\pi}{2}, \quad n \in \mathbb{Z}$

Motivation (cont.)

- In general, we cannot exploit the function, e.g.:

    $2^{x^2} - 10x + 1 = 0$

  and

    $\cosh\left(\sqrt{x^2+1} - e^x\right) + \log|\sin x| = 0$

- Note: at times multiple roots
  - e.g., the previous parabola and cosine
  - we want at least one
  - we may only get one (for each search)

Need a general, function-independent algorithm.

Nonlinear Equations -- Bisection Method

Bisection Method -- Example

[Plot: a function crossing the x-axis between a and b, with successive
midpoints x_0, x_1, x_2, x_3 closing in on the root.]

Intuitive, like guessing a number in [0, 100].

Restrictions and Max Error Estimate

- Restrictions:
  - the function slices the x-axis at the root
  - start with two points $a$ and $b$ $\ni$ $f(a) f(b) < 0$
    - a graphing tool (e.g., Matlab) can help to find $a$ and $b$
  - require $C^0[a,b]$ (why? note: not a big deal)
- Max error estimate:
  - after $n$ steps, guess the midpoint of the current range
  - error: $\le \frac{b-a}{2^{n+1}}$ (think of $n = 0, 1, 2$)
  - note: the error is in $x$; can also look at the error in $f(x)$, or a
    combination
    - enters the entire world of stopping criteria
- Question: given a tolerance (in $x$), what is $n$? ...

Convergence Rate

- Given a tolerance $\varepsilon$ (e.g., $10^{-6}$), how many steps are needed?
- Tolerance restriction ($\varepsilon$ from before):

    $\frac{b-a}{2^{n+1}} < \varepsilon
    \quad \xrightarrow{\;1)\ \times 2,\quad 2)\ \log\ \text{(any base)}\;} \quad
    \log(b-a) - n \log 2 < \log 2\varepsilon$

  or

    $n > \frac{\log(b-a) - \log 2\varepsilon}{\log 2}$

Rate is independent of the function.

Convergence Rate (cont.)

- Base 2 (i.e., bits of accuracy):

    $n > \log_2(b-a) - 1 - \log_2 \varepsilon$

  i.e., the number of steps is a constant plus one step per bit of accuracy
- Linear convergence rate: $\exists C \in [0,1)$ $\ni$

    $\left|x_{n+1} - r\right| \le C |x_n - r|, \quad \forall n \ge 0$

  i.e., monotonically decreasing error at every step, and

    $\left|x_{n+1} - r\right| \le C^{n+1} |x_0 - r|$

- Bisection convergence:
  - not linear (examples?), but compared to the initial max error, of a
    similar form: $\left|x_{n+1} - r\right| \le C^{n+1} (b-a)$, with $C = \frac{1}{2}$

Okay, but restrictive and slow.

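A minimal Python sketch of the method (ours; assumes $f(a) f(b) < 0$):

    def bisect(f, a, b, eps):
        """Halve [a, b] until the midpoint is within eps of a root."""
        fa = f(a)
        while (b - a) / 2 > eps:
            m = (a + b) / 2
            fm = f(m)
            if fa * fm <= 0:        # sign change in [a, m]
                b = m
            else:                   # sign change in [m, b]
                a, fa = m, fm
        return (a + b) / 2

    print(bisect(lambda x: x**3 - 2*x**2 + x - 3, 2, 3, 1e-6))

With b - a = 1 and eps = 1e-6, the loop runs the ~20 times the formula
above predicts.
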
Nonlinear Equations -- Newton's Method

Newton's Method -- Definition

- Approximate $f(x)$ near $x_0$ by the tangent $\ell(x)$:

    $f(x) \approx f(x_0) + f'(x_0)(x - x_0) \equiv \ell(x)$

- Want $\ell(r) = 0$ $\Rightarrow$

    $r = x_0 - \frac{f(x_0)}{f'(x_0)}$

- $x_1 := r$; likewise:

    $x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$

- Alternatively (Taylor's): have $x_0$; for what $h$ is
  $f(\underbrace{x_0 + h}_{x_1}) = 0$?

    $f(x_0 + h) \approx f(x_0) + h f'(x_0)
    \quad \text{or} \quad h = -\frac{f(x_0)}{f'(x_0)}$

Newton's Method -- Example

[Plot: tangent lines at x_0, x_1, x_2, x_3; each tangent's x-intercept is
the next iterate, converging rapidly to the root.]

Convergence Rate

- English: with enough continuity and proximity $\Rightarrow$
  quadratic convergence!
- Theorem: with the following three conditions:
  1) $f(r) = 0$,  2) $f'(r) \ne 0$,  3) $f \in C^2\left(B(r, \delta)\right)$,
  then for $x_0 \in B(r, \delta)$ and $\forall n$ we have

    $\left|x_{n+1} - r\right| \le C(\delta) \, |x_n - r|^2$

- For a given $\delta$, $C$ is a constant (not necessarily $< 1$)
- Note: again, use a graphing tool to seed $x_0$

Newton's method can be very fast.

Convergence Rate Example

- $f(x) = x^3 - 2x^2 + x - 3$, $x_0 = 4$:

    n   x_n                 f(x_n)
    0   4                   33
    1   3                   9
    2   2.4375              2.036865234375
    3   2.21303271631511    0.256363385061418
    4   2.17555493872149    0.00646336148881306
    5   2.17456010066645    4.47906804996122e-06
    6   2.17455941029331    2.15717547991101e-12

- Stopping criteria:
  - theorem: uses $x$; above: uses $f(x)$ -- often all we have
  - possibilities: absolute/relative, size/change, $x$ or $f(x)$ (combos, ...)

But the proximity issue can bite, ...

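The table is reproduced by a few lines of Python (ours):

    f  = lambda x: x**3 - 2*x**2 + x - 3
    fp = lambda x: 3*x**2 - 4*x + 1

    x = 4.0
    for n in range(7):
        print(n, x, f(x))
        x = x - f(x) / fp(x)    # x_{n+1} = x_n - f(x_n) / f'(x_n)
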
Sample Newton Failure #1

[Plot: the iterates x_n march away from the root without returning.]

Runaway process

Sample Newton Failure #2

[Plot: an iterate x_n lands where the tangent is horizontal.]

Division by a zero derivative -- recall the algorithm

Sample Newton Failure #3

[Plot: x_n maps to x_{n+1} and x_{n+1} maps back to x_n.]

Loop-d-loop (can happen over m points)

Nonlinear Equations -- Secant Method

Secant Method -- Definition

- Motivation: avoid derivatives
- Taylor (or derivative): $f'(x_n) \approx \frac{f(x_n) - f(x_{n-1})}{x_n - x_{n-1}}$
- $\Rightarrow$

    $x_{n+1} = x_n - f(x_n) \, \frac{x_n - x_{n-1}}{f(x_n) - f\left(x_{n-1}\right)}$

- Bisection requirements comparison:
  - (kept) 2 previous points
  - (dropped) $f(a) f(b) < 0$
- Additional advantage vs. Newton:
  - only one function evaluation per iteration
- Superlinear convergence:

    $\left|x_{n+1} - r\right| \le C |x_n - r|^{1.618\ldots}$

  (recognize the exponent?)

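A minimal Python sketch (ours), arranged so each iteration makes only one
new function evaluation:

    def secant(f, x0, x1, steps):
        f0 = f(x0)
        for _ in range(steps):
            f1 = f(x1)                                        # the one new evaluation
            x0, x1, f0 = x1, x1 - f1 * (x1 - x0) / (f1 - f0), f1
        return x1

    print(secant(lambda x: x**3 - 2*x**2 + x - 3, 2.0, 3.0, 8))
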
Nonlinear Equations -- Summary

Root Finding -- Summary

- Performance and requirements:

               f in C^2 nbhd(r)   init. pts.   evals*   speedy
    bisection        no               2+           1       no
    Newton           yes              1            2       yes
    secant           yes              2            1       yes

    + with the requirement that f(a) f(b) < 0
    * function evaluations per iteration

- Often methods are combined (how?), with restarts for divergence or cycles
- Recall: use a graphing tool to seed $x_0$ (and $x_1$)

Interpolation and Approximation

  Motivation
  Polynomial Interpolation
  Numerical Differentiation
  Additional Notes

Motivation

- Three sample problems:
  - given $\{(x_i, y_i) \mid i = 0, \ldots, n\}$ ($x_i$ distinct), want a
    simple (e.g., polynomial) $p(x)$ $\ni$ $y_i = p(x_i)$, $i = 0, \ldots, n$
    -- interpolation
  - assume the data includes errors; relax equality but stay close, ...
    -- least squares
  - replace a complicated $f(x)$ with a simple $p(x) \approx f(x)$
- Interpolation:
  - similar to the English term (contrast: extrapolation)
  - for now: polynomial; later: splines
- Use $p(x)$ for $p(x_{\mathrm{new}})$, $\int p(x)\,dx$, ...

Interpolation and Approximation -- Polynomial Interpolation

Constant and Linear Interpolation

[Plot: two points (x_0, y_0), (x_1, y_1); the constant p_0(x) = y_0 and
the line p_1(x) through both points.]

- $n = 0$: $p(x) = y_0$
- $n = 1$: $p(x) = y_0 + g(x)(y_1 - y_0)$, $g(x) \in P_1$, and

    $g(x) = \begin{cases} 0, & x = x_0, \\ 1, & x = x_1 \end{cases}
    \quad \Rightarrow \quad g(x) = \frac{x - x_0}{x_1 - x_0}$

- $n = 2$: more complicated, ...

Lagrange Polynomials

- Given: $x_i$, $i = 0, \ldots, n$; Kronecker delta:

    $\delta_{ij} = \begin{cases} 0, & i \ne j, \\ 1, & i = j \end{cases}$

- Lagrange polynomials: $\ell_i(x) \in P_n$, $\ell_i(x_j) = \delta_{ij}$,
  $i = 0, \ldots, n$
  - independent of any $y_i$ values

[Plot for n = 2: the three cardinal functions l_0(x), l_1(x), l_2(x), each
equal to 1 at its own node and 0 at the other two.]

Lagrange Interpolation

- We have

    $\ell_0(x) = \frac{x - x_1}{x_0 - x_1} \cdot \frac{x - x_2}{x_0 - x_2}, \quad
     \ell_1(x) = \frac{x - x_0}{x_1 - x_0} \cdot \frac{x - x_2}{x_1 - x_2}, \quad
     \ell_2(x) = \frac{x - x_0}{x_2 - x_0} \cdot \frac{x - x_1}{x_2 - x_1}$

    $y_0 \ell_0(x_j) = y_0 \delta_{0j}
      = \begin{cases} 0, & j \ne 0, \\ y_0, & j = 0 \end{cases}
    \qquad \text{and similarly for } y_1 \ell_1(x_j), \; y_2 \ell_2(x_j)$

- $\Rightarrow$ $\exists!\, p(x) \in P_2$, with $p(x_j) = y_j$, $j = 0, 1, 2$:

    $p(x) = \sum_{i=0}^{2} y_i \ell_i(x)$

- In general:

    $\ell_i(x) = \prod_{\substack{j = 0 \\ j \ne i}}^{n}
      \frac{x - x_j}{x_i - x_j}, \quad i = 0, \ldots, n$

- Great! What could be wrong? Easy functions (polynomials), interpolation
  ($\Rightarrow$ error $= 0$ at the $x_i$) ... but what about
  $p(x_{\mathrm{new}})$?

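A direct Python transcription of the general formula (ours; O(n^2) work per
evaluation; the sample data is made up):

    def lagrange(xs, ys, x):
        """Evaluate the interpolating polynomial via the cardinal functions."""
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            li = 1.0
            for j, xj in enumerate(xs):
                if j != i:
                    li *= (x - xj) / (xi - xj)   # builds l_i; l_i(x_j) = delta_ij
            total += yi * li
        return total

    print(lagrange([0, 1, 2], [1, 3, 2], 1.5))
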
Interpolation Error & the Runge Function

- Given $\{(x_i, f(x_i)) \mid i = 0, \ldots, n\}$: $|f(x) - p(x)| \le$ ?
- Runge function: $f_R(x) = \left(1 + x^2\right)^{-1}$, $x \in [-5, 5]$, and a
  uniform mesh:
  - warning: $p(x)$'s wrong shape and high oscillations

    $\lim_{n \to \infty} \max_{-5 \le x \le 5} |f_R(x) - p_n(x)| = \infty$

[Plot: f_R and its degree-8 interpolant on the uniform mesh x_0 = -5, ...,
x_4 = 0, ..., x_8 = 5; the interpolant oscillates wildly near the ends.]

Error Theorem

- Theorem: ..., $f \in C^{n+1}[a,b]$, $\forall x \in [a,b]$,
  $\exists \xi \in (a,b)$ $\ni$

    $f(x) - p(x) = \frac{1}{(n+1)!} f^{(n+1)}(\xi) \prod_{i=0}^{n} (x - x_i)$

- Max error:
  - with the $x_i$ and $x$, still need $\max_{\xi \in (a,b)} \left|f^{(n+1)}(\xi)\right|$
  - with the $x_i$ only, also need the max over $x$ of $\left|\prod (x - x_i)\right|$
  - without the $x_i$:

      $\max_{x \in (a,b)} \left|\prod_{i=0}^{n} (x - x_i)\right| \le (b-a)^{n+1}$

Chebyshev Points

[Plot: the n+1 = 9 Chebyshev points on [-1, 1], clustered toward the
endpoints x_0 = 1 and x_8 = -1.]

- Chebyshev points on $[-1, 1]$:

    $x_i = \cos\left(\frac{i\pi}{n}\right), \quad i = 0, \ldots, n$

- In general, on $[a, b]$:

    $x_i = \frac{1}{2}(a+b) + \frac{1}{2}(b-a) \cos\left(\frac{i\pi}{n}\right),
      \quad i = 0, \ldots, n$

Points are concentrated at the edges.

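Generating the points is one line of Python (ours; the helper name is
hypothetical):

    import math

    def chebyshev_nodes(a, b, n):
        """The n+1 Chebyshev points on [a, b], concentrated at the edges."""
        return [(a + b) / 2 + (b - a) / 2 * math.cos(i * math.pi / n)
                for i in range(n + 1)]

    print(chebyshev_nodes(-5, 5, 8))
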
Runge Function with Chebyshev Points

[Plot: f_R and its degree-8 interpolant at the Chebyshev points on [-5, 5];
the wild end oscillations are gone.]

Is this good interpolation?

Chebyshev Interpolation

- Same interpolation method
- Different interpolation points
- Minimizes $\max_x \left|\prod_{i=0}^{n} (x - x_i)\right|$
- Periodic behavior $\Rightarrow$ interpolate with sines/cosines instead of $P_n$
- It is not the uniform mesh that minimizes the max error
- Note: on a uniform partition with spacing = cheb$_1$ - cheb$_0$:
  num. points $\uparrow$ $\Rightarrow$ polynomial degree $\uparrow$
  $\Rightarrow$ oscillations $\uparrow$
- Note: the shape is still wrong ... see splines later

Interpolation and Approximation -- Numerical Differentiation

Numerical Differentiation

- Note: until now, approximating $f(x)$; now $f'(x)$
- $f'(x) \approx \frac{f(x+h) - f(x)}{h}$; error = ?
- Taylor: $f(x+h) = f(x) + h f'(x) + h^2 \frac{f''(\xi)}{2}$
- $\Rightarrow$

    $f'(x) = \frac{f(x+h) - f(x)}{h} - \frac{1}{2} h f''(\xi)$

- I.e., truncation error: $O(h)$

Can we do better?

Numerical Differentiation -- Take Two

- Taylor for $+h$ and $-h$:

    $f(x \pm h) = f(x) \pm h f'(x) + h^2 \frac{f''(x)}{2!}
      \pm h^3 \frac{f'''(x)}{3!} + h^4 \frac{f^{(4)}(x)}{4!}
      \pm h^5 \frac{f^{(5)}(x)}{5!} + \cdots$

- Subtracting:

    $f(x+h) - f(x-h) = 2h f'(x) + 2h^3 \frac{f'''(x)}{3!}
      + 2h^5 \frac{f^{(5)}(x)}{5!} + \cdots$

- $\Rightarrow$

    $f'(x) = \frac{f(x+h) - f(x-h)}{2h} - \frac{1}{6} h^2 f'''(x) - \cdots$

We gained: $O(h)$ to $O\left(h^2\right)$. However, ...

Richardson Extrapolation -- Take Three

- We have

    $f'(x) = \underbrace{\frac{f(x+h) - f(x-h)}{2h}}_{\varphi(h)}
           + a_2 h^2 + a_4 h^4 + a_6 h^6 + \cdots$

- Halving the stepsize:

    $\varphi(h) = f'(x) - a_2 h^2 - a_4 h^4 - a_6 h^6 - \cdots$

    $\varphi\left(\tfrac{h}{2}\right) = f'(x) - a_2 \left(\tfrac{h}{2}\right)^2
      - a_4 \left(\tfrac{h}{2}\right)^4 - a_6 \left(\tfrac{h}{2}\right)^6 - \cdots$

    $\varphi(h) - 4 \varphi\left(\tfrac{h}{2}\right)
      = -3 f'(x) - \tfrac{3}{4} a_4 h^4 - \tfrac{15}{16} a_6 h^6 - \cdots$

- Q: So what? A: The $h^2$ term disappeared!

Richardson -- Take Three (cont.)

- Divide by $-3$ and write $f'(x)$:

    $f'(x) = \frac{4}{3} \varphi\left(\tfrac{h}{2}\right)
           - \frac{1}{3} \varphi(h)
           - \frac{1}{4} a_4 h^4 - \frac{5}{16} a_6 h^6 - \cdots$

    $\phantom{f'(x)} = \underbrace{\varphi\left(\tfrac{h}{2}\right)
           + \frac{1}{3} \left[\varphi\left(\tfrac{h}{2}\right)
           - \varphi(h)\right]}_{(*)} + O\left(h^4\right)$

- $(*)$ only uses old and current information

We gained: $O\left(h^2\right)$ to $O\left(h^4\right)$!!

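A minimal Python check (ours) of the gain, differentiating sin at 1:

    import math

    def phi(f, x, h):
        return (f(x + h) - f(x - h)) / (2 * h)      # O(h^2) central difference

    def richardson(f, x, h):
        return phi(f, x, h/2) + (phi(f, x, h/2) - phi(f, x, h)) / 3   # O(h^4)

    f, x, h = math.sin, 1.0, 0.1
    print(abs(phi(f, x, h) - math.cos(x)),
          abs(richardson(f, x, h) - math.cos(x)))   # ~9e-4 vs. ~1e-7
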
Interpolation and Approximation -- Additional Notes

Additional Notes

- The three $f'(x)$ formulae used additional points
  - vs. Taylor: more derivatives at the same point
- Similar for $f''(x)$:

    $f(x-h) = f(x) - h f'(x) + h^2 \frac{f''(x)}{2!} - h^3 \frac{f'''(x)}{3!}
            + h^4 \frac{f^{(4)}(x)}{4!} - h^5 \frac{f^{(5)}(x)}{5!} + \cdots$

- Adding:

    $f(x+h) + f(x-h) = 2 f(x) + h^2 f''(x) + \frac{1}{12} h^4 f^{(4)}(x) + \cdots$

  or:

    $f''(x) = \frac{f(x+h) - 2f(x) + f(x-h)}{h^2}
            - \frac{1}{12} h^2 f^{(4)}(x) - \cdots$

- $\Rightarrow$ error $= O\left(h^2\right)$

Numerical Quadrature

  Introduction
  Riemann Integration
  Composite Trapezoid Rule
  Composite Simpson's Rule
  Gaussian Quadrature

Numerical Quadrature -- Interpretation

- $f(x) \ge 0$ on bounded $[a,b]$ $\Rightarrow$ $\int_a^b f(x)\,dx$ is the
  area under $f(x)$

[Plot: a curve over [a, b] with the area underneath shaded.]

Numerical Quadrature -- Motivation

- Analytical solutions -- rare:

    $\int_0^{\pi/2} \sin x\,dx = -\cos x \Big|_0^{\pi/2} = -(0 - 1) = 1$

- In general:

    $\int_0^{\pi/2} \left(1 - a^2 \sin^2 \theta\right)^{1/3} d\theta = \; ?$

Need a general numerical technique.

Definitions

- Mesh: $P \equiv \{a = x_0 < x_1 < \cdots < x_n = b\}$, $n$ subintervals
  ($n+1$ points)
- Infima and suprema (or minima and maxima):

    $m_i \equiv \inf \left\{ f(x) : x_i \le x \le x_{i+1} \right\}, \qquad
     M_i \equiv \sup \left\{ f(x) : x_i \le x \le x_{i+1} \right\}$

- Two methods (i.e., integral estimates): lower and upper sums:

    $L(f; P) \equiv \sum_{i=0}^{n-1} m_i \left(x_{i+1} - x_i\right), \qquad
     U(f; P) \equiv \sum_{i=0}^{n-1} M_i \left(x_{i+1} - x_i\right)$

For example, ...

Lower Sum -- Interpretation

[Plot: rectangles of height m_i under the curve, on each subinterval of
{x_0 = a, ..., x_4 = b}.]

Clearly a lower bound of the integral estimate, and ...

Upper Sum -- Interpretation

[Plot: rectangles of height M_i over the curve, on the same mesh.]

... an upper bound. What is the max error?

Lower and Upper Sums -- Example

- Third method, using the lower and upper sums: $(L+U)/2$
- $f(x) = x^2$, $[a,b] = [0,1]$ and
  $P = \left\{0, \frac{1}{4}, \frac{1}{2}, \frac{3}{4}, 1\right\}$
  ... $L = \frac{7}{32}$, $U = \frac{15}{32}$
- Split the difference: estimate $\frac{11}{32}$ (actual: $\frac{1}{3}$)
- Bottom line:
  - naive approach
  - low $n$
  - still an error of only $\frac{1}{96}$. (!)
- Max error estimate: $(U - L)/2 = \frac{1}{8}$

Is this good enough?

Numerical Quadrature -- Rethinking

- Perhaps lower and upper sums are enough?
  - the error seems small
  - the work seems small as well
  - but: the estimate of the max error was not small ($\frac{1}{8}$)
- Do they converge to the integral as $n \to \infty$?
- Will the extrema always be easy to calculate? Accurately? (Probably not!)

Proceed in theoretical and practical directions.

Numerical Quadrature -- Riemann Integration

Riemann Integrability

- $f \in C^0[a,b]$, $[a,b]$ bounded $\Rightarrow$ $f$ is Riemann integrable
- When integrable, and the max subinterval of $P$ $\to 0$ ($|P| \to 0$):

    $\lim_{|P| \to 0} L(f; P) = \int_a^b f(x)\,dx = \lim_{|P| \to 0} U(f; P)$

- Counterexample: the Dirichlet function

    $d(x) \equiv \begin{cases} 0, & x \text{ rational}, \\
    1, & x \text{ irrational} \end{cases}
    \qquad L = 0, \quad U = b - a$

Challenge: Estimate n for the Third Method

- Current restrictions for the $n$ estimate:
  - monotone functions
  - uniform partition
- Challenge:
  - estimate $\int_0^{\pi} e^{\cos x}\,dx$
  - error tolerance $= \frac{1}{2} \times 10^{-3}$
  - using $L$ and $U$
  - $n$ = ?

Estimate n -- Solution

- $f(x) = e^{\cos x}$ is decreasing on $[0, \pi]$
- $\Rightarrow$ $m_i = f\left(x_{i+1}\right)$ and $M_i = f(x_i)$
- $\Rightarrow$ with $h = \frac{\pi}{n}$:

    $L(f; P) = h \sum_{i=0}^{n-1} f\left(x_{i+1}\right), \qquad
     U(f; P) = h \sum_{i=0}^{n-1} f(x_i)$

- Want $\frac{1}{2}(U - L) < \frac{1}{2} \times 10^{-3}$, or

    $\frac{\pi}{n} \left(e^1 - e^{-1}\right) < 10^{-3}$

  ... $n \ge 7385$ (!!)  (note for later: max error estimate $= O(h)$)
- Number of $f(x)$ evaluations:
  - 2 for the $(U - L)$ max error calculation
  - $> 7000$ for either $L$ or $U$

We need something better.

Numerical Quadrature -- Composite Trapezoid Rule

Composite Trapezoid Rule (CTR)

- Each trapezoid area:
  $\frac{1}{2} \left(x_{i+1} - x_i\right) \left[f(x_i) + f\left(x_{i+1}\right)\right]$
- Rule:

    $T(f; P) \equiv \frac{1}{2} \sum_{i=0}^{n-1} \left(x_{i+1} - x_i\right)
      \left[f(x_i) + f\left(x_{i+1}\right)\right]$

- Note: for monotone functions and any given mesh (why?): $T = (L+U)/2$
- Pro: no need for extrema calculations
- Con: when adding new points to existing ones (for a non-monotonic
  function):
  - $T$ can land on a bad point
  - no monotonic improvement (necessarily)
  - $L$, $U$ and $(L+U)/2$ look for extrema on $\left[x_i, x_{i+1}\right]$
    $\Rightarrow$ monotonic improvement

CTR -- Interpretation

[Plot: the curve with a chord on each subinterval; the trapezoid areas
approximate the integral.]

Almost always better than L or U. (When not?)

Uniform Mesh and Associated Error

- Constant stepsize $h = \frac{b-a}{n}$:

    $T(f; P) \equiv h \left\{ \sum_{i=1}^{n-1} f(x_i)
      + \frac{1}{2} \left[f(x_0) + f(x_n)\right] \right\}$

- Theorem: $f \in C^2[a,b]$ $\Rightarrow$ $\exists \xi \in (a,b)$ $\ni$

    $\int_a^b f(x)\,dx - T(f; P) = -\frac{1}{12} (b-a) h^2 f''(\xi)
      = O\left(h^2\right)$

- Note: leads to the popular Romberg algorithm (built on Richardson
  extrapolation)

How many steps does T(f; P) require?

e^{cos x} Revisited -- Using CTR

- Challenge: $\int_0^{\pi} e^{\cos x}\,dx$,
  error tolerance $= \frac{1}{2} \times 10^{-3}$, $n$ = ?
- $f(x) = e^{\cos x}$ $\Rightarrow$ $f'(x) = -e^{\cos x} \sin x$ ...
  $\left|f''(x)\right| \le e$ on $(0, \pi)$
- $\Rightarrow$

    $|\mathrm{error}| \le \frac{\pi}{12} \left(\frac{\pi}{n}\right)^2 e
      \le \frac{1}{2} \times 10^{-3} \quad \ldots \quad n \ge 119$

- Recall the perennial two questions/calculations of NM
- Monotonic $\Rightarrow$ the estimate of $T$ produces the same $(L+U)/2$
  - but the previous max error estimate was less exact ($O(h)$)

A better estimate of the max error gives a better estimate of n.

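The claim is easy to test with a minimal CTR in Python (ours):

    import math

    def ctr(f, a, b, n):
        """Composite trapezoid rule on a uniform mesh of n subintervals."""
        h = (b - a) / n
        return h * (0.5 * (f(a) + f(b)) + sum(f(a + i*h) for i in range(1, n)))

    f = lambda x: math.exp(math.cos(x))
    print(ctr(f, 0, math.pi, 119))   # 119 subintervals vs. 7385 for L and U
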
Another CTR Example

- Challenge: $\int_0^1 e^{-x^2}\,dx$,
  error tolerance $= \frac{1}{2} \times 10^{-4}$, $n$ = ?
- $f(x) = e^{-x^2}$, $f'(x) = -2x e^{-x^2}$ and
  $f''(x) = \left(4x^2 - 2\right) e^{-x^2}$
- $\Rightarrow$ $\left|f''(x)\right| \le 2$ on $(0,1)$
- $\Rightarrow$

    $|\mathrm{error}| \le \frac{1}{6} h^2 \le \frac{1}{2} \times 10^{-4}$

- We have: $n^2 \ge \frac{1}{3} \times 10^4$, or $n \ge 58$ subintervals

How can we do better?

Numerical Quadrature -- Composite Simpson's Rule

Trapezoid Rule as an Integrated Linear Interpolant

- Linear interpolant over one subinterval:

    $p_1(x) = \frac{x-b}{a-b} f(a) + \frac{x-a}{b-a} f(b)$

  intuitively:

    $\int_a^b p_1(x)\,dx
      = \frac{f(a)}{a-b} \int_a^b (x-b)\,dx + \frac{f(b)}{b-a} \int_a^b (x-a)\,dx$

    $= \frac{f(a)}{a-b} \left[\frac{b^2 - a^2}{2} - b(b-a)\right]
     + \frac{f(b)}{b-a} \left[\frac{b^2 - a^2}{2} - a(b-a)\right]$

    $= \frac{f(a)}{a-b} (b-a)\left(\frac{a+b}{2} - b\right)
     + \frac{f(b)}{b-a} (b-a)\left(\frac{a+b}{2} - a\right)$

    $= f(a) \, \frac{b-a}{2} + f(b) \, \frac{b-a}{2}
     = \frac{b-a}{2} \left[f(a) + f(b)\right]$

CTR is the integral of the composite linear interpolant.

CTR for Two Equal Subintervals

- $n = 2$ (i.e., 3 points):

    $T(f) = \frac{b-a}{2} \left\{ f\left(\frac{a+b}{2}\right)
          + \frac{1}{2} \left[f(a) + f(b)\right] \right\}
          = \frac{b-a}{4} \left[ f(a) + 2 f\left(\frac{a+b}{2}\right) + f(b) \right]$

  with error $= O\left(\left(\frac{b-a}{2}\right)^3\right)$
- (Previously, CTR error $= O\left(h^2\right)$ = TR error $\times$ $n$
  subintervals $= O\left(h^3\right) \cdot O\left(\frac{1}{h}\right)$)
- Deficiency: each subinterval ignores the other

How can we take the entire picture into account?

Simpson's Rule

- Motivation: use $p_2(x)$ over the two equal subintervals
- A similar analysis actually loses an $O(h)$, but ...
  $\exists \xi \in (a,b)$ $\ni$

    $\int_a^b f(x)\,dx = \frac{b-a}{6}
      \left[ f(a) + 4 f\left(\frac{a+b}{2}\right) + f(b) \right]
      - \frac{1}{90} \left(\frac{b-a}{2}\right)^5 f^{(4)}(\xi)$

- Similar to CTR, but weights the midpoint more
- Note: for each method, the denominator $= \sum$ coefficients

Each method multiplies the width by a weighted average of the height.

Composite Simpson's Rule (CSR)

- For an even number of subintervals $n$, $h = \frac{b-a}{n}$,
  $\exists \xi \in (a,b)$ $\ni$

    $\int_a^b f(x)\,dx = \frac{h}{3} \Bigg\{ \left[f(a) + f(b)\right]
      + 4 \underbrace{\sum_{i=1}^{n/2} f\left[a + (2i-1)h\right]}_{\text{odd nodes}}
      + 2 \underbrace{\sum_{i=1}^{(n-2)/2} f(a + 2ih)}_{\text{even nodes}} \Bigg\}
      - \frac{b-a}{180} h^4 f^{(4)}(\xi)$

- Note: the denominator $= \sum$ coefficients $= 3n$
  - but only $n+1$ function evaluations

Can we do better than $O\left(h^4\right)$?

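A minimal Python sketch of CSR (ours), following the odd/even grouping
above:

    import math

    def csr(f, a, b, n):
        """Composite Simpson's rule; n must be even."""
        assert n % 2 == 0
        h = (b - a) / n
        odd  = sum(f(a + (2*i - 1) * h) for i in range(1, n//2 + 1))
        even = sum(f(a + 2*i*h)         for i in range(1, n//2))
        return h/3 * (f(a) + f(b) + 4*odd + 2*even)

    print(csr(math.sin, 0, math.pi/2, 10))   # exact value: 1
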
Evaluating the Error

- Another important accuracy angle:
  - until now: error $= O(h^{\alpha})$
  - from now on, looking at the degree $\ell$ for which error $= 0$
    $\forall f \in P_{\ell}$ (trapezoid: $\ell = 1$)
- With higher $\ell$, $p_{\ell}(x)$ can approximate any $f(x)$ better
- Define $\varepsilon(x) \equiv f(x) - p_{\ell}(x)$
- $\Rightarrow$

    $\int f = \int (p_{\ell} + \varepsilon)
      = \int p_{\ell} + \int \varepsilon
      = \mathrm{method}\left(p_{\ell}\right) + \int \varepsilon
      = \mathrm{method}(f) - \mathrm{method}(\varepsilon) + \int \varepsilon$

- As $\ell \to \infty$: $\varepsilon(x) \to 0$
  $\Rightarrow$ $\mathrm{method}(\varepsilon) \to 0$
  $\Rightarrow$ $\mathrm{method}(f) \to \int f$

Can we do better than Simpson's $P_3$?

Integration Introspection

- Simpson beat CTR because of the more heavily weighted midpoint
- But CSR similarly suffers at subinterval-pair boundaries
  (weight = 2 vs. 4, for no reason)
- All composite rules:
  - ignore other areas
  - patch together local calculations
  - $\Rightarrow$ will suffer from this
- What about using all the nodes and higher degree interpolation?
- Also note: we can choose
  - the weights
  - the location of the calculation nodes

Numerical Quadrature -- Gaussian Quadrature

Interpolatory Quadrature

- Given the $x_i$:

    $\ell_i(x) = \prod_{\substack{j = 0 \\ j \ne i}}^{n}
      \frac{x - x_j}{x_i - x_j}, \quad i = 0, \ldots, n;
    \qquad p(x) = \sum_{i=0}^{n} f(x_i) \ell_i(x)$

- If $f(x) \approx p(x)$, hopefully
  $\int_a^b f(x)\,dx \approx \int_a^b p(x)\,dx$
- $\Rightarrow$

    $\int_a^b p(x)\,dx = \int_a^b \sum_{i=0}^{n} f(x_i) \ell_i(x)\,dx
      = \sum_{i=0}^{n} f(x_i) \underbrace{\int_a^b \ell_i(x)\,dx}_{A_i}$

- $A_i = A_i\left(a, b; \{x_j\}_{j=0}^{n}\right)$, but $A_i \ne A_i(f)$!
- (Endpoints, nodes) $\to$ $A_i$ $\Rightarrow$

    $\int_a^b f(x)\,dx \approx \sum_{i=0}^{n} A_i f(x_i).$

Interp. Quad. -- Error Analysis

- $f \in P_n$ $\Rightarrow$ $f(x) = p(x)$, and
- $\Rightarrow$ $\forall f \in P_n$:

    $\int_a^b f(x)\,dx = \sum_{i=0}^{n} A_i f(x_i), \quad
    \text{i.e., error} = 0$

- $n+1$ weights, determined by the nodes $x_i$ (and $a$ and $b$)
- True for any choice of $n+1$ (distinct) nodes $x_i$
- What if we choose $n+1$ specific nodes (with the weights, total:
  $2(n+1)$ choices)?
- Can we get error $= 0$ $\forall f \in P_{2n+1}$?

Gaussian Quadrature (GQ) -- Theorem

- Let
  - $q(x) \in P_{n+1}$ $\ni$

      $\int_a^b x^k q(x)\,dx = 0, \quad k = 0, \ldots, n$

    - i.e., $q(x) \perp$ all polynomials of lower degree
    - note: $n+2$ coefficients, $n+1$ conditions
      $\Rightarrow$ unique up to a constant multiplier
  - $x_i$, $i = 0, \ldots, n$, $\ni$ $q(x_i) = 0$
    - i.e., the $x_i$ are the zeros of $q(x)$
- Then $\forall f \in P_{2n+1}$, even though $f(x) \ne p(x)$
  ($f \in P_m$, $m > n$):

    $\int_a^b f(x)\,dx = \sum_{i=0}^{n} A_i f(x_i)$

We jumped from $P_n$ to $P_{2n+1}$!

Gaussian Quadrature -- Proof

- Let $f \in P_{2n+1}$, and divide by $q$: $f = sq + r$
  $\Rightarrow$ $s, r \in P_n$
- We have (note: until the last step, the $x_i$ can be arbitrary):

    $\int_a^b f(x)\,dx
      = \int_a^b s(x) q(x)\,dx + \int_a^b r(x)\,dx$   (division above)

    $= \int_a^b r(x)\,dx$   ($\perp$-ity of $q(x)$)

    $= \sum_{i=0}^{n} A_i r(x_i)$   ($r \in P_n$)

    $= \sum_{i=0}^{n} A_i \left[f(x_i) - s(x_i) q(x_i)\right]$   (division above)

    $= \sum_{i=0}^{n} A_i f(x_i)$   (the $x_i$ are zeros of $q(x)$)

GQ -- Additional Notes

- Example $q_n(x)$: the Legendre polynomials: for $[a,b] = [-1,1]$ and
  $q_n(1) = 1$ ($\exists$ a 3-term recurrence formula):

    $q_0(x) = 1, \quad q_1(x) = x, \quad
     q_2(x) = \frac{3}{2} x^2 - \frac{1}{2}, \quad
     q_3(x) = \frac{5}{2} x^3 - \frac{3}{2} x, \; \ldots$

- Use $q_{n+1}(x)$ (why?); it depends only on $a$, $b$ and $n$
- Gaussian nodes $\in (a,b)$:
  - good if $f(a) = \pm\infty$ and/or $f(b) = \pm\infty$
    (e.g., $\int_0^1 \frac{1}{\sqrt{x}}\,dx$)
- More general: with a weight function $w(x)$ in the
  - original integral
  - $q(x)$ orthogonality
  - weights $A_i$

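For [a, b] = [-1, 1], NumPy supplies Gauss-Legendre nodes and weights; a
minimal check (ours) that n+1 = 5 points integrate P_9 exactly:

    import numpy as np

    x, w = np.polynomial.legendre.leggauss(5)   # zeros of q_5 and their weights

    f = lambda t: t**8                          # f is in P_9
    print(w @ f(x), 2/9)                        # matches the exact integral of t^8
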
Numerical Quadrature -- Summary

- $n+1$ function evaluations:

    method    composite?   node placement      error = 0 for all f in P_l, l =
    CTR       yes          uniform (usually)   1
    CSR       yes          uniform (usually)   3
    interp.   no           any (distinct)      n
    GQ        no           zeros of q(x)       2n+1

P.S. There are also powerful adaptive quadrature methods.

Linear Systems

  Introduction
  Naive Gaussian Elimination
  Limitations
  Operation Counts
  Additional Notes

What Are Linear Systems (LS)?

    $\begin{array}{ccccccccc}
    a_{11} x_1 & + & a_{12} x_2 & + & \cdots & + & a_{1n} x_n & = & b_1 \\
    a_{21} x_1 & + & a_{22} x_2 & + & \cdots & + & a_{2n} x_n & = & b_2 \\
    \vdots & & \vdots & & & & \vdots & & \vdots \\
    a_{m1} x_1 & + & a_{m2} x_2 & + & \cdots & + & a_{mn} x_n & = & b_m
    \end{array}$

- Dependence on the unknowns: powers of degree 1
- Summation form:

    $\sum_{j=1}^{n} a_{ij} x_j = b_i, \quad 1 \le i \le m,
    \quad \text{i.e., } m \text{ equations}$

- Presently: $m = n$, i.e., square systems (later: $m \ne n$)
- Q: How to solve for $[x_1 \; x_2 \; \ldots \; x_n]^T$? A: ...

Linear Systems -- Naive Gaussian Elimination

Overall Algorithm and Definitions

- Currently: direct methods only (later: iterative methods)
- General idea:
  - generate an upper triangular system (forward elimination)
  - easily calculate the unknowns in reverse order (backward substitution)
- Pivot row = the current one being processed;
  pivot = the diagonal element of the pivot row
- Steps are applied to the RHS as well.

Forward Elimination

- Generate zero columns below the diagonal
- Process rows downward:

    for each row i := 1, n-1 {          // the pivot row
        for each row k := i+1, n {      // rows below the pivot
            multiply pivot row by a_ki / a_ii
            subtract it from row k      // now a_ki = 0
        }                               // now the column below a_ii is zero
    }                                   // now a_ij = 0, for all i > j

- Obtain a triangular system
- Let's work an example, ...

Compact Form of LS

    $\begin{array}{rcrcrcrcr}
     6x_1 & - &  2x_2 & + & 2x_3 & + &  4x_4 & = &  16 \\
    12x_1 & - &  8x_2 & + & 6x_3 & + & 10x_4 & = &  26 \\
     3x_1 & - & 13x_2 & + & 9x_3 & + &  3x_4 & = & -19 \\
    -6x_1 & + &  4x_2 & + &  x_3 & - & 18x_4 & = & -34
    \end{array}$

    $\left[\begin{array}{rrrr|r}
     6 &  -2 & 2 &   4 &  16 \\
    12 &  -8 & 6 &  10 &  26 \\
     3 & -13 & 9 &   3 & -19 \\
    -6 &   4 & 1 & -18 & -34
    \end{array}\right]$

Proceeding with the forward elimination, ...

Forward Elimination -- Example

    $\left[\begin{array}{rrrr|r}
     6 &  -2 & 2 &   4 &  16 \\
    12 &  -8 & 6 &  10 &  26 \\
     3 & -13 & 9 &   3 & -19 \\
    -6 &   4 & 1 & -18 & -34
    \end{array}\right]
    \longrightarrow
    \left[\begin{array}{rrrr|r}
    6 &  -2 & 2 &   4 &  16 \\
    0 &  -4 & 2 &   2 &  -6 \\
    0 & -12 & 8 &   1 & -27 \\
    0 &   2 & 3 & -14 & -18
    \end{array}\right]$

    $\longrightarrow
    \left[\begin{array}{rrrr|r}
    6 & -2 & 2 &   4 &  16 \\
    0 & -4 & 2 &   2 &  -6 \\
    0 &  0 & 2 &  -5 &  -9 \\
    0 &  0 & 4 & -13 & -21
    \end{array}\right]
    \longrightarrow
    \left[\begin{array}{rrrr|r}
    6 & -2 & 2 &  4 &  16 \\
    0 & -4 & 2 &  2 &  -6 \\
    0 &  0 & 2 & -5 &  -9 \\
    0 &  0 & 0 & -3 &  -3
    \end{array}\right]$

The matrix is upper triangular.

Backward Substitution

    $\left[\begin{array}{rrrr|r}
    6 & -2 & 2 &  4 &  16 \\
    0 & -4 & 2 &  2 &  -6 \\
    0 &  0 & 2 & -5 &  -9 \\
    0 &  0 & 0 & -3 &  -3
    \end{array}\right]$

- Last equation: $-3x_4 = -3$ $\Rightarrow$ $x_4 = 1$
- Second-to-last equation:
  $2x_3 - 5 \underbrace{x_4}_{=1} = 2x_3 - 5 = -9$ $\Rightarrow$ $x_3 = -2$
- ... second equation ... $x_2 = 1$ ...
- ... $[x_1 \; x_2 \; x_3 \; x_4]^T = [3 \; 1 \; -2 \; 1]^T$

For small problems, check the solution in the original system.

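The whole procedure, applied to this very system -- a minimal naive
(pivot-free) Python sketch (ours):

    def gauss_solve(A, b):
        """Naive forward elimination, then backward substitution."""
        n = len(b)
        for i in range(n - 1):                  # pivot row i
            for k in range(i + 1, n):           # zero the column below a_ii
                m = A[k][i] / A[i][i]
                for j in range(i, n):
                    A[k][j] -= m * A[i][j]
                b[k] -= m * b[i]
        x = [0.0] * n
        for i in range(n - 1, -1, -1):          # unknowns in reverse order
            s = sum(A[i][j] * x[j] for j in range(i + 1, n))
            x[i] = (b[i] - s) / A[i][i]
        return x

    A = [[6, -2, 2, 4], [12, -8, 6, 10], [3, -13, 9, 3], [-6, 4, 1, -18]]
    b = [16, 26, -19, -34]
    print(gauss_solve(A, b))    # [3.0, 1.0, -2.0, 1.0]
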
Linear Systems -- Limitations

Zero Pivots

- Clearly, zero pivots prevent forward elimination
- Warning: zero pivots can appear along the way
- Later: when are we guaranteed no zero pivots?
- All pivots $\ne 0$ $\Rightarrow$? we are safe

Experiment with a system with a known solution.

Vandermonde Matrix

    $\left[\begin{array}{cccccc}
    1 & 2   & 4       & 8       & \cdots & 2^{n-1} \\
    1 & 3   & 9       & 27      & \cdots & 3^{n-1} \\
    1 & 4   & 16      & 64      & \cdots & 4^{n-1} \\
    \vdots & \vdots & \vdots & \vdots & & \vdots \\
    1 & n+1 & (n+1)^2 & (n+1)^3 & \cdots & (n+1)^{n-1}
    \end{array}\right]$

- Want row sums on the RHS $\Rightarrow$ $x_i = 1$, $i = 1, \ldots, n$
- Geometric series:

    $1 + t + t^2 + \cdots + t^{n-1} = \frac{t^n - 1}{t - 1}$

- We obtain $b_i$, for row $i = 1, \ldots, n$:

    $\sum_{j=1}^{n} \underbrace{(1+i)^{j-1}}_{a_{ij}} \underbrace{1}_{x_j}
      = \frac{(1+i)^n - 1}{(1+i) - 1}
      = \underbrace{\frac{1}{i}\left[(1+i)^n - 1\right]}_{b_i}$

The system is ready to be tested.

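Setting the system up in Python (ours). Note that NumPy solves in double
precision with partial pivoting, so the failure is milder than on the
7-digit platform described next:

    import numpy as np

    n = 9
    A = np.array([[(1 + i)**(j - 1) for j in range(1, n + 1)]
                  for i in range(1, n + 1)], dtype=float)
    b = np.array([((1 + i)**n - 1) / i for i in range(1, n + 1)])

    print(np.linalg.solve(A, b))    # exact answer: all ones
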
Vandermonde Test

- Platform with 7 significant (decimal) digits
- $n = 1, \ldots, 8$: expected results
- $n = 9$: error $> 16{,}000\%$ !!
- Questions:
  - What happened?
  - Why so sudden?
  - Can anything be done?
- Answer: the matrix is ill-conditioned
  - sensitivity to roundoff errors
  - leads to error propagation and magnification

First, how to assess vector errors.

Errors

- Given system: $Ax = b$ and solution estimate $\tilde{x}$
- Residual (error): $r \equiv A\tilde{x} - b$
- Absolute error (if $x$ is known): $e \equiv \tilde{x} - x$
- A norm taken of $r$ or $e$: vector $\to$ scalar quantity
  (more on norms later)
- Relative errors: $\|r\|/\|b\|$ and $\|e\|/\|x\|$

Back to ill-conditioning, ...

Ill-conditioning

- $\begin{cases} 0 \cdot x_1 + x_2 = 1 \\ x_1 + x_2 = 2 \end{cases}$
  $\Rightarrow$ 0 pivot
- General rule: if 0 is problematic, numbers near 0 are problematic:

    $\begin{cases} \varepsilon x_1 + x_2 = 1 \\ x_1 + x_2 = 2 \end{cases}
    \quad \ldots \quad
    x_2 = \frac{2 - 1/\varepsilon}{1 - 1/\varepsilon}
    \quad \text{and} \quad
    x_1 = \frac{1 - x_2}{\varepsilon}$

- Small $\varepsilon$ (e.g., $\varepsilon = 10^{-9}$ with 8 significant
  digits) $\Rightarrow$ $x_2 = 1$ and $x_1 = 0$ -- wrong!

What can be done?

Pivoting

- Switch the order of the equations, moving the offending element off the
  diagonal:

    $\begin{cases} x_1 + x_2 = 2 \\ \varepsilon x_1 + x_2 = 1 \end{cases},
    \quad
    x_2 = \frac{1 - 2\varepsilon}{1 - \varepsilon}
    \quad \text{and} \quad
    x_1 = 2 - x_2 = \frac{1}{1 - \varepsilon}$

- This is correct, even for small $\varepsilon$ (or even $\varepsilon = 0$)
- Compare the size of the diagonal (pivot) elements above, to:
- the ratio within the first row of the Vandermonde matrix $= 1 : 2^{n-1}$

The issue is relative size, not absolute size.

Scaled Partial Pivoting

- Also called row pivoting (vs. column pivoting)
- Instability source: subtracting large values:
  $a_{kj} \mathrel{-}= a_{ij} \, \frac{a_{ki}}{a_{ii}}$
- W/o l.o.g.: $n$ rows, and choosing which row to be first
- Find $i$ $\ni$ $\forall$ rows $k \ne i$, columns $j > 1$: minimize
  $\left| a_{ij} \, \frac{a_{k1}}{a_{i1}} \right|$
  - $O\left(n^3\right)$ calculations!
- $\Rightarrow$ simplify (remove $k$); imagine: $a_{k1} = 1$
  - $\Rightarrow$ find $i$ $\ni$ $\forall$ columns $j > 1$:
    $\min_i \left| \frac{a_{ij}}{a_{i1}} \right|$
- Still 1) $O\left(n^2\right)$ calculations, 2) how to minimize each row?
- Find $i$:

    $\min_i \frac{\max_j \left|a_{ij}\right|}{\left|a_{i1}\right|},
    \quad \text{or:} \quad
    \max_i \frac{\left|a_{i1}\right|}{\max_j \left|a_{ij}\right|}$

  (e.g., the first matrix)

Linear Systems -- Operation Counts

How Much Work on A?

- Real life: crowd estimation costs? (will depend on accuracy)
- Counting $\times$ and $\div$ (i.e., long operations) only
- Pivoting: a row decision amongst $k$ rows = $k$ ratios
- First row:
  - $n$ ratios (for the choice of pivot row)
  - $n-1$ multipliers
  - $(n-1)^2$ multiplications
  - total: $\approx n^2$ operations
- $\Rightarrow$ forward elimination operations (for large $n$):

    $\sum_{k=2}^{n} k^2 = \frac{n}{6}(n+1)(2n+1) - 1 \approx \frac{n^3}{3}$

How about the work on b?

Rest of the Work

- Forward elimination work on the RHS:

    $\sum_{k=2}^{n} (k-1) = \frac{n(n-1)}{2}$

- Backward substitution:

    $\sum_{k=1}^{n} k = \frac{n(n+1)}{2}$

- Total: $\approx n^2$ operations
- $O(n)$ fewer operations than forward elimination on $A$
- Important for multiple RHSs known from the start:
  - do not repeat the $O\left(n^3\right)$ work for each
  - rather, line them up, and process simultaneously

Can we do better at times?

Sparse Systems

[Schematic: a tridiagonal matrix -- non-zeros on the diagonal and its two
neighbors, zeros elsewhere.]

- Above, e.g., a tridiagonal system (half bandwidth = 1)
  - note: $a_{ij} = 0$ for $|i - j| > 1$
- Opportunities for savings:
  - storage
  - computations
- Both are $O(n)$

Linear Systems -- Additional Notes

Pivot-Free Guarantee

- When are we guaranteed non-zero pivots?
- Diagonal dominance (just like it sounds):

    $|a_{ii}| > \sum_{\substack{j = 1 \\ j \ne i}}^{n} \left|a_{ij}\right|,
      \quad i = 1, \ldots, n$

  (Or $>$ in one row, and $\ge$ in the remaining)
- Many finite difference and finite element problems $\Rightarrow$
  diagonally dominant systems

Occurs often enough to justify individual study.

LU Decomposition

- E.g.: same $A$, many $b$s of a time-dependent problem
  - not all $b$s are known from the start
- Want $A = LU$ for decreased work later
- Then define $y$: $L \underbrace{Ux}_{y} = b$
  - solve $Ly = b$ for $y$
  - solve $Ux = y$ for $x$
- $U$ is upper triangular, the result of Gaussian elimination
- $L$ is unit lower triangular: 1s on the diagonal and the Gaussian
  multipliers below
- For small systems, verify (even by hand): $A = LU$
- Each new RHS is $n^2$ work, instead of $O\left(n^3\right)$

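A minimal sketch with SciPy's factor-once, solve-many interface (ours;
the second RHS is made up):

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    A = np.array([[6, -2, 2, 4], [12, -8, 6, 10],
                  [3, -13, 9, 3], [-6, 4, 1, -18]], dtype=float)
    lu, piv = lu_factor(A)                     # O(n^3), done once

    for b in ([16, 26, -19, -34], [1, 0, 0, 0]):
        print(lu_solve((lu, piv), np.array(b, dtype=float)))   # n^2 per RHS
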
Approximation by Splines

  Motivation
  Linear Splines
  Quadratic Splines
  Cubic Splines
  Summary

Motivation

[Plot: a cloud of data points on [-4, 4], values roughly in [-40, 60].]

- Given: a set of many points, or perhaps a very involved function
- Want: a simple representative function, for analysis or manufacturing
- Any suggestions?

Let's Try Interpolation

[Plot: the same points with a high-degree polynomial interpolant threaded
through them.]

Disadvantages:
- Values outside the x-range diverge quickly (interp(10) = 1592)
- Numerical instabilities of high-degree polynomials

Runge Function -- Two Interpolations

[Plot: the Runge function interpolated on a uniform mesh (x_0 = -5, ...,
x_8 = 5) and at Chebyshev points (c_0, ..., c_8); the uniform interpolant
oscillates at the ends, the Chebyshev one much less.]

More disadvantages:
- Within the x-range, often high oscillations
- Even Chebyshev points: often uncharacteristic oscillations

Splines

- Given a domain $[a,b]$, a spline $S(x)$:
  - is defined on the entire domain
  - provides a certain amount of smoothness
  - $\exists$ a partition of knots (= where the spline can change form)

      $\{a = t_0, t_1, t_2, \ldots, t_n = b\}$

    such that

      $S(x) = \begin{cases}
      S_0(x), & x \in [t_0, t_1], \\
      S_1(x), & x \in [t_1, t_2], \\
      \;\vdots & \;\vdots \\
      S_{n-1}(x), & x \in \left[t_{n-1}, t_n\right]
      \end{cases}$

- is piecewise polynomial

Interpolatory Splines

- Note: splines split up the range $[a,b]$
  - opposite of the CTR $\to$ CSR $\to$ GQ development
- "Spline" alone implies no interpolation, not even any $y$-values
- If given points

    $\{(t_0, y_0), (t_1, y_1), (t_2, y_2), \ldots, (t_n, y_n)\}$

  an interpolatory spline traverses these as well

Splines = nice, analytical functions.

Approximation by Splines -- Linear Splines

Linear Splines

- Given a domain $[a,b]$, a linear spline $S(x)$:
  - is defined on the entire domain
  - provides continuity, i.e., is $C^0[a,b]$
  - $\exists$ a partition of knots

      $\{a = t_0, t_1, t_2, \ldots, t_n = b\}$

    such that

      $S_i(x) = a_i x + b_i \in P_1\left(\left[t_i, t_{i+1}\right]\right),
        \quad i = 0, \ldots, n-1$

- Recall: no $y$-values or interpolation yet

Linear Spline -- Examples

[Plot: four piecewise-linear candidates on [a, b]: one with an undefined
part, one discontinuous, one with a nonlinear part, and one genuine linear
spline.]

The definition outside of [a, b] is arbitrary.

Interpolatory Linear Splines

- Given points

    $\{(t_0, y_0), (t_1, y_1), (t_2, y_2), \ldots, (t_n, y_n)\}$

  the spline must interpolate as well
- Are the $S_i(x)$ (with no additional knots) unique?
- Coefficients: $a_i x + b_i$, $i = 0, \ldots, n-1$ $\Rightarrow$ total $= 2n$
- Conditions: 2 prescribed interpolation points for each $S_i(x)$,
  $i = 0, \ldots, n-1$ (includes the continuity condition)
  $\Rightarrow$ total $= 2n$
- Obtain

    $S_i(x) = a_i x + (y_i - a_i t_i), \quad
     a_i = \frac{y_{i+1} - y_i}{t_{i+1} - t_i}, \quad i = 0, \ldots, n-1$

Interpolatory Linear Splines -- Example

[Plot: the data points connected by a piecewise-linear interpolant.]

Discontinuous derivatives at the knots are unpleasing, ...

Approximation by Splines -- Quadratic Splines

Quadratic Splines

- Given a domain $[a,b]$, a quadratic spline $S(x)$:
  - is defined on the entire domain
  - provides continuity of the zeroth and first derivatives, i.e., is
    $C^1[a,b]$
  - $\exists$ a partition of knots

      $\{a = t_0, t_1, t_2, \ldots, t_n = b\}$

    such that

      $S_i(x) = a_i x^2 + b_i x + c_i \in P_2\left(\left[t_i, t_{i+1}\right]\right),
        \quad i = 0, \ldots, n-1$

- Again, no $y$-values or interpolation yet

Quadratic Spline -- Example

- Is

    $f(x) = \begin{cases} x^2, & x \le 0, \\ -x^2, & 0 \le x \le 1, \\
    1 - 2x, & x \ge 1 \end{cases}$

  a quadratic spline?
- Defined on the domain $(-\infty, \infty)$  ✓
- Continuity (clearly okay away from $x = 0$ and $1$):
  - zeroth derivative:
    $f\left(0^-\right) = f\left(0^+\right) = 0$,
    $f\left(1^-\right) = f\left(1^+\right) = -1$  ✓
  - first derivative:
    $f'\left(0^-\right) = f'\left(0^+\right) = 0$,
    $f'\left(1^-\right) = f'\left(1^+\right) = -2$  ✓
- Each part of $f(x)$ is $\in P_2$  ✓

Interpolatory Quadratic Splines

- Given points

    $\{(t_0, y_0), (t_1, y_1), (t_2, y_2), \ldots, (t_n, y_n)\}$

  the spline must interpolate as well
- Are the $S_i(x)$ unique (same knots)?
- Coefficients: $a_i x^2 + b_i x + c_i$, $i = 0, \ldots, n-1$
  $\Rightarrow$ total $= 3n$
- Conditions:
  - 2 prescribed interpolation points for each $S_i(x)$,
    $i = 0, \ldots, n-1$ (includes the continuity-of-function condition)
  - $(n-1)$ $C^1$ continuities
  - $\Rightarrow$ total $= 3n - 1$

Interpolatory Quadratic Splines (cont.)

- Underdetermined system $\Rightarrow$ need to add one condition
- Define the (as yet to be determined) $z_i \equiv S'(t_i)$,
  $i = 0, \ldots, n$
- Write

    $S_i(x) = \frac{z_{i+1} - z_i}{2\left(t_{i+1} - t_i\right)} (x - t_i)^2
            + z_i (x - t_i) + y_i$

  therefore

    $S_i'(x) = \frac{z_{i+1} - z_i}{t_{i+1} - t_i} (x - t_i) + z_i$

- Need to
  - verify the continuity and interpolatory conditions
  - determine the $z_i$

Checking Interpolatory Quadratic Splines

- Check the four continuity (and interpolatory) conditions:

    (i) $S_i(t_i) \stackrel{\checkmark}{=} y_i$
    (ii) $S_i\left(t_{i+1}\right) =$ (below)
    (iii) $S_i'(t_i) \stackrel{\checkmark}{=} z_i$
    (iv) $S_i'\left(t_{i+1}\right) \stackrel{\checkmark}{=} z_{i+1}$

- (ii):

    $S_i\left(t_{i+1}\right)
      = \frac{z_{i+1} - z_i}{2}\left(t_{i+1} - t_i\right)
      + z_i\left(t_{i+1} - t_i\right) + y_i
      = \frac{z_{i+1} + z_i}{2}\left(t_{i+1} - t_i\right) + y_i
      \stackrel{\text{set}}{=} y_{i+1}$

  therefore ($n$ equations, $n+1$ unknowns):

    $z_{i+1} = 2 \, \frac{y_{i+1} - y_i}{t_{i+1} - t_i} - z_i,
      \quad i = 0, \ldots, n-1$

Choose any one z_i and the remaining n are determined.

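The recurrence is a two-liner in Python (ours; the sample data is made up):

    def quad_spline_slopes(t, y, z0=0.0):
        """Slopes z_i = S'(t_i) from the recurrence, given the one free z_0."""
        z = [z0]
        for i in range(len(t) - 1):
            z.append(2 * (y[i+1] - y[i]) / (t[i+1] - t[i]) - z[i])
        return z

    print(quad_spline_slopes([0, 1, 2, 3], [0, 1, 4, 9]))   # [0.0, 2.0, 4.0, 6.0]
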
Interpolatory Quadratic Splines -- Example

[Plot: the same data interpolated by a quadratic spline, with z_0 := 0.]

Okay, but the discontinuous curvature at the knots, ...

Approximation by Splines -- Cubic Splines

Cubic Splines

- Given a domain $[a,b]$, a cubic spline $S(x)$:
  - is defined on the entire domain
  - provides continuity of the zeroth, first and second derivatives, i.e.,
    is $C^2[a,b]$
  - $\exists$ a partition of knots

      $\{a = t_0, t_1, t_2, \ldots, t_n = b\}$

    such that for $i = 0, \ldots, n-1$

      $S_i(x) = a_i x^3 + b_i x^2 + c_i x + d_i
        \in P_3\left(\left[t_i, t_{i+1}\right]\right)$

- In general: a spline of degree $k$ ... $C^{k-1}$ ... $P_k$ ...

Why Stop at k = 3?

- Continuous curvature is visually pleasing
- Usually little numerical advantage to $k > 3$
- Technically, odd $k$s are better for interpolating splines
- Natural (defined later) cubic splines are the best in an analytical
  sense (stated later)

Interpolatory Cubic Splines

- Given points

    $\{(t_0, y_0), (t_1, y_1), (t_2, y_2), \ldots, (t_n, y_n)\}$

  the spline must interpolate as well
- Are the $S_i(x)$ unique (same knots)?
- Coefficients: $a_i x^3 + b_i x^2 + c_i x + d_i$, $i = 0, \ldots, n-1$
  $\Rightarrow$ total $= 4n$
- Conditions:
  - 2 prescribed interpolation points for each $S_i(x)$,
    $i = 0, \ldots, n-1$ (includes the continuity-of-function condition)
  - $(n-1)$ $C^1$ + $(n-1)$ $C^2$ continuities
  - $\Rightarrow$ total $= 4n - 2$

Interpolatory Cubic Splines (cont.)

- Underdetermined system $\Rightarrow$ need to add two conditions
- Natural cubic spline:
  - add: $S''(a) = S''(b) = 0$
  - assumes straight lines (i.e., no more constraints) outside of $[a,b]$
  - imagine the bent beam of a ship hull
  - defined for the non-interpolatory case as well
- Required matrix calculation for the $S_i$ definitions:
  - linear: independent $a_i = \frac{y_{i+1} - y_i}{t_{i+1} - t_i}$
    $\Rightarrow$ diagonal
  - quadratic: two-term $z_i$ definition $\Rightarrow$ bidiagonal
  - cubic: ... $\Rightarrow$ tridiagonal

Interp. Natural Cubic Splines -- Example

[Plot: the same data interpolated by a natural cubic spline.]

Now the curvature is continuous as well.

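SciPy builds exactly this object; a minimal sketch (ours; the data points
are made up):

    import numpy as np
    from scipy.interpolate import CubicSpline

    t = np.array([-4.0, -2.0, 0.0, 2.0, 4.0])
    y = np.array([10.0, -5.0, 0.0, 5.0, 60.0])

    S = CubicSpline(t, y, bc_type='natural')   # imposes S''(a) = S''(b) = 0
    print(S(t))                                # reproduces y: interpolation
    print(S(1.0))                              # a value between the knots
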
Optimality of the Natural Cubic Spline

- Theorem: if
  - $f \in C^2[a,b]$,
  - knots: $\{a = t_0, t_1, t_2, \ldots, t_n = b\}$,
  - interpolation points: $(t_i, y_i)$ : $y_i = f(t_i)$, $i = 0, \ldots, n$,
  - $S(x)$ is the natural cubic spline which interpolates $f(x)$,
  then

    $\int_a^b \left[S''(x)\right]^2 dx \le \int_a^b \left[f''(x)\right]^2 dx$

- Bottom line:
  - average curvature of $S$ $\le$ that of $f$
  - compare with the interpolating polynomial

Approximation by Splines -- Summary

Interpolation vs. Splines -- Serpentine Curve

[Plot: a serpentine data set on [-2, 2]; the polynomial interpolator
oscillates, while the linear spline follows the shape.]

Vs. the oscillatory interpolator -- even a linear spline is better.

Three Splines

[Plot: the same data fit with a linear, a quadratic and a natural cubic
spline.]

Increased smoothness with increase of degree.

Ordinary Differential Equations

  Introduction
  Euler Method
  Higher Order Taylor Methods
  Runge-Kutta Methods
  Summary

Ordinary Differential Equation -- Definition

- ODE = an equation
  - involving one or more derivatives of $x(t)$
  - $x(t)$ is unknown and the desired target
  - somewhat the opposite of numerical differentiation
- E.g.:

    $\left(x'\right)^{3/7}(t) + 37t \, e^{x^2(t)}
      - \sin^4\left(x''(t) \log \frac{1}{t}\right) = 42$

  Which $x(t)$s fulfill this behavior?
- Ordinary (vs. partial) = one independent variable $t$
- Order = highest (composition of) derivative(s) involved
- Linear = the derivatives, including the zeroth, appear in linear form
- Homogeneous = all terms involve some derivative (including the zeroth)

Analytical Approach

- Good luck with the previous equation, but for others ...
- Shorthand: $x = x(t)$, $x' = \frac{d(x(t))}{dt}$,
  $x'' = \frac{d^2(x(t))}{dt^2}$, ...
- Analytically solvable:

    $x' - x = e^t \quad \Rightarrow \quad x(t) = t e^t + c \, e^t$

    $x'' + 9x = 0 \quad \Rightarrow \quad x(t) = c_1 \sin 3t + c_2 \cos 3t$

    $x' + \frac{1}{2x} = 0 \quad \Rightarrow \quad x(t) = \sqrt{c - t}$

- $c$, $c_1$ and $c_2$ are arbitrary constants
- Need more conditions/information to pin down the constants:
  - initial value problems (IVP)
  - boundary value problems (BVP)

Here: IVPs for first-order ODEs.

First-Order IVP
General form:
x' = f(t, x), x(a) given
Note: non-linear, non-homogeneous; but, x' not on RHS
Examples
x' = x + 1, x(0) = 0 ⇒ x(t) = e^t - 1
x' = 6t - 1, x(1) = 6 ⇒ x(t) = 3t^2 - t + 4
x' = t/(x + 1), x(0) = 0 ⇒ x(t) = √(t^2 + 1) - 1
Physically: e.g., t is time, x is distance and f = x' is speed/velocity
Another optimistic scenario . . .
Copyright c 2011 by A. E. Naiman NM Slides Ordinary Differential Equations, p. 4
RHS Independence of x
f = f(t) but f ≠ f(x)
E.g.
x' = 3t^2 - 4t^{-1} + (1 + t^2)^{-1}
x(5) = 17
Perform indefinite integral
x(t) = ∫ d(x(t))/dt dt = ∫ f(t) dt
Obtain
x(t) = t^3 - 4 ln t + arctan t + C
C = 17 - 5^3 + 4 ln 5 - arctan 5
And now for the bad news . . .
Copyright c 2011 by A. E. Naiman NM Slides Ordinary Differential Equations, p. 5
Numerical Techniques
Source of need
Usually analytical solution is not known
Even if known, perhaps very complicated, expensive to
compute
Numerical techniques
Generate a table of values for x(t)
Usually equispaced in t, stepsize = h
Warning: with small h, and far from initial value, roundoff error can accumulate and kill
Copyright c 2011 by A. E. Naiman NM Slides Ordinary Differential Equations, p. 6
Ordinary Differential Equations
Introduction
Euler Method
Higher Order Taylor Methods
Runge-Kutta Methods
Summary
Copyright c 2011 by A. E. Naiman NM Slides Ordinary Differential Equations, p. 7
Euler Method
First-order IVP: given x' = f(t, x), x(a), want x(b)
Use first 2 terms of Taylor series (i.e., n = 1) to get from x(a) to x(a + h)
x(a + h) = x(a) + h x'(a) + O(h^2)
where x'(a) = f(a, x(a)) and the O(h^2) term is the truncation error
Repeat to get from x(a + h) to x(a + 2h), . . .
Total n = (b - a)/h steps until x(b)
Note: units of time/distance/speed are consistent
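As a minimal sketch (the function name and test wiring are mine), the Euler step is a few lines of Python; the example reuses the earlier IVP x' = x + 1, x(0) = 0, whose exact solution is e^t - 1:

def euler(f, a, b, x0, n):
    # n steps of size h = (b - a)/n, each using the first 2 Taylor terms
    h = (b - a) / n
    t, x = a, x0
    for _ in range(n):
        x += h * f(t, x)    # x(t + h) ~ x(t) + h f(t, x(t))
        t += h
    return x

print(euler(lambda t, x: x + 1, 0.0, 1.0, 0.0, 100))  # ~1.7048 vs. e - 1 = 1.7183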
Copyright c 2011 by A. E. Naiman NM Slides Ordinary Differential Equations, p. 8
Euler Method: Example
[Plot: actual function x(t) vs. the Euler polyline starting at (a, x(a)), step h]
When will the slopes match up at the points?
Okay, but not great. What is the accuracy?
Copyright c 2011 by A. E. Naiman NM Slides Ordinary Differential Equations, p. 9
Euler Method: Pros and Cons
Note: straight lines connecting points
from Euler construction (linear in h)
can be used for subsequent linear interpolation
Advantages
Accurate early on: O(h^2) for first step
Only need to calculate given function f(t, x(t))
Only one evaluation of f(t, x(t)) needed
Disadvantages
Pretty inaccurate at b
Cumulative truncation error: n · O(h^2) = O(h)
This is aside from (accumulative) roundoff error
How about more terms of the Taylor series?
Copyright c 2011 by A. E. Naiman NM Slides Ordinary Differential Equations, p. 10
Ordinary Differential Equations
Introduction
Euler Method
Higher Order Taylor Methods
Runge-Kutta Methods
Summary
Copyright c 2011 by A. E. Naiman NM Slides Ordinary Differential Equations, p. 11
Taylor Method of Order 4
First-order IVP: given x' = f(t, x), x(a), want x(b)
Use first 5 terms of Taylor series (i.e., n = 4) to get from x(a) to x(a + h)
x(a + h) = x(a) + h x'(a) + (h^2/2!) x''(a) + (h^3/3!) x'''(a) + (h^4/4!) x^(iv)(a) + O(h^5)
where x'(a) = f(a, x(a))
Use f', f'' and f''' for x'', x''' and x^(iv), respectively
Repeat to get from x(a + h) to x(a + 2h), . . .
Note: units of time/distance/speed still are consistent
Order 4 is a standard order used.
Copyright c 2011 by A. E. Naiman NM Slides Ordinary Differential Equations, p. 12
Taylor Method: Numerical Example
First-order IVP: x' = 1 + x^2 + t^3, x(1) = -4, want x(2)
Derivatives of f(t, x)
x'' = 2x x' + 3t^2
x''' = 2x x'' + 2(x')^2 + 6t
x^(iv) = 2x x''' + 6x' x'' + 6
Solution values of x(2), n = 100
actual: 4.3712 (5 significant digits)
Euler: 4.2358541
Taylor_4: 4.3712096
How about the earlier graphed example?
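A sketch of the order-4 Taylor step for this particular IVP, with the derivatives above hand-coded (the function name is mine, and the initial value x(1) = -4 is as reconstructed above):

def taylor4(a, b, x0, n):
    h = (b - a) / n
    t, x = a, x0
    for _ in range(n):
        x1 = 1 + x**2 + t**3                 # x'
        x2 = 2*x*x1 + 3*t**2                 # x''
        x3 = 2*x*x2 + 2*x1**2 + 6*t          # x'''
        x4 = 2*x*x3 + 6*x1*x2 + 6            # x^(iv)
        # nested form of h x1 + h^2/2 x2 + h^3/6 x3 + h^4/24 x4
        x += h * (x1 + h/2 * (x2 + h/3 * (x3 + h/4 * x4)))
        t += h
    return x

print(taylor4(1.0, 2.0, -4.0, 100))  # ~4.3712096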
Copyright c 2011 by A. E. Naiman NM Slides Ordinary Differential Equations, p. 13
Taylor Method of Order 4: Example
[Plot: actual function x(t), with Euler and Taylor_4 points from x(a), step h]
Single step truncation error of O(h^5) ⇒ excellent match.
Copyright c 2011 by A. E. Naiman NM Slides Ordinary Differential Equations, p. 14
Taylor Method of Order 4: Larger Step
[Plot: actual function x(t), Euler with step h, and Taylor_4 with step 7h]
Even single Taylor_4 step beats Euler.
Copyright c 2011 by A. E. Naiman NM Slides Ordinary Differential Equations, p. 15
Taylor Method: Pros and Cons
Note: graphs connecting points: from construction (P_4 in h)
Advantages
Very accurate
Cumulative truncation error: n · O(h^5) = O(h^4)
Disadvantages
Need derivatives of f(t, x(t)) which might be
analytically: difficult
numerically: expensive, computationally and/or accuracy-wise
just plain impossible
Four new evaluations each step (Euler was just one)
How to avoid the extra derivatives?
Copyright c 2011 by A. E. Naiman NM Slides Ordinary Differential Equations, p. 16
Ordinary Differential Equations
Introduction
Euler Method
Higher Order Taylor Methods
Runge-Kutta Methods
Summary
Copyright c 2011 by A. E. Naiman NM Slides Ordinary Differential Equations, p. 17
Motivation
We want to avoid calculating derivatives of f(t, x(t))
Similar to Newton → secant motivation
Also, recall dierent approaches for higher accuracy
Taylor series: more derivatives at one point
Numerical dierentiation: more function evaluations, at
various points
Runge-Kutta (RK) of order m: for each step of size h
evaluate f(t, x(t)) at m interim stages
arrive at accuracy order similar to Taylor method of
order m
Copyright c 2011 by A. E. Naiman NM Slides Ordinary Differential Equations, p. 18
Runge-Kutta Methods: RK2 and RK4
Each f(t, x(t)) evaluation builds on previous
Weighted average of evaluations produces x(t + h)
Error for order m is O(h^{m+1}) for each step of size h
Note: units of time/distance/speed, okay
RK2:
x(t + h) = x(t) + (1/2)(F_1 + F_2)
F_1 = h f(t, x)
F_2 = h f(t + h, x + F_1)
RK4:
x(t + h) = x(t) + (1/6)(F_1 + 2F_2 + 2F_3 + F_4)
F_1 = h f(t, x)
F_2 = h f(t + (1/2)h, x + (1/2)F_1)
F_3 = h f(t + (1/2)h, x + (1/2)F_2)
F_4 = h f(t + h, x + F_3)
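A sketch of RK4 (the function name is mine); note it only ever calls f itself, in contrast to the Taylor method:

def rk4(f, a, b, x0, n):
    h = (b - a) / n
    t, x = a, x0
    for _ in range(n):
        F1 = h * f(t, x)
        F2 = h * f(t + h/2, x + F1/2)
        F3 = h * f(t + h/2, x + F2/2)
        F4 = h * f(t + h, x + F3)
        x += (F1 + 2*F2 + 2*F3 + F4) / 6   # weighted average of the stages
        t += h
    return x

# the Taylor_4 example again, without any hand-coded derivatives
print(rk4(lambda t, x: 1 + x**2 + t**3, 1.0, 2.0, -4.0, 100))  # ~4.3712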
Copyright c 2011 by A. E. Naiman NM Slides Ordinary Differential Equations, p. 19
Ordinary Differential Equations
Introduction
Euler Method
Higher Order Taylor Methods
Runge-Kutta Methods
Summary
Copyright c 2011 by A. E. Naiman NM Slides Ordinary Differential Equations, p. 20
Summary: First-Order IVP Solvers
Complex and complicated IVPs require numerical methods
Usually generate table of values, at constant stepsize h
Euler: simple, but not too accurate
High-order Taylor: very accurate, but requires derivatives of f(t, x(t))
Runge-Kutta: same order of accuracy as Taylor, without derivative evaluations
Error sources
Local truncation (of Taylor series approximation)
Local roundoff (due to finite precision)
Accumulations and combinations of previous two
Copyright c 2011 by A. E. Naiman NM Slides Ordinary Differential Equations, p. 21
Least Squares Method
Motivation and Approach
Linearly Dependent Data
General Basis Functions
Polynomial Regression
Function Approximation
Copyright c 2011 by A. E. Naiman NM Slides Least Squares Method, p. 1
Source of Data
Have the following tabulated data:
x | x_0 x_1 . . . x_m
y | y_0 y_1 . . . y_m
[Plot: y_i values vs. x_i values]
E.g., data from experiment
Assume known dependence, e.g. linear, i.e., y = ax + b
What a and b do we choose to represent the data?
Copyright c 2011 by A. E. Naiman NM Slides Least Squares Method, p. 2
Most Probable Line
For each point, consider the equation y_i = a x_i + b with the two unknowns a and b
One point ⇒ ∞ solutions
Two points (different x_i) ⇒ one unique solution
> two points ⇒ in general no solution
> two points ⇒ What is most probable line?
Copyright c 2011 by A. E. Naiman NM Slides Least Squares Method, p. 3
Estimate Error
Assume estimates â and b̂
error at (x_k, y_k): e_k = â x_k + b̂ - y_k
[Plot: data points and the line y = â x + b̂; the vertical segment at (x_k, y_k) has height e_k]
Note:
vertical error, not distance to line (a much harder problem)
|e_k| ⇒ no preference to error direction
How do we minimize all of the |e_k|?
Copyright c 2011 by A. E. Naiman NM Slides Least Squares Method, p. 4
Vector Minimizations
Minimize:
largest component: min_{a,b} max_{0≤k≤m} |e_k| ⇒ min-max
component sum: min_{a,b} Σ_{k=0}^m |e_k| ⇒ linear programming
Note: |·| won't allow errors to cancel
component squared sum: min_{a,b} Σ_{k=0}^m e_k^2 ≡ φ(a, b) ⇒ least squares
Why use least squares?
Copyright c 2011 by A. E. Naiman NM Slides Least Squares Method, p. 5
ℓ_p Norms
Definition: ||v||_p ≡ (Σ_{k=0}^m |v_k|^p)^{1/p}
Minimizing ℓ_∞, ℓ_1 and ℓ_2 norms, resp., in 2D (m = 1):
[Plots: unit balls in the (v_0, v_1) plane]
||v||_∞ = 1: square; ||v||_∞ = max(|v_0|, |v_1|)
||v||_1 = 1: diamond; ||v||_1 = |v_0| + |v_1|
||v||_2 = 1: circle; ||v||_2 = √(v_0^2 + v_1^2)
Why use ℓ_2?
Can more easily use calculus (see below)
If error is normally distributed ⇒ get maximum likelihood estimator
Copyright c 2011 by A. E. Naiman NM Slides Least Squares Method, p. 6
φ(a, b) Minimization
How do we minimize φ(a, b) = Σ_{k=0}^m e_k^2 wrt a and b?
Standard calculus: set ∂φ/∂a = 0 and ∂φ/∂b = 0
⇒ two equations with two unknowns
If dependence of y on a and b is linear (and consequently, dependence of φ(a, b) is quadratic)
⇒ minimization leads to linear system for a and b (linear least squares)
Example also had linearly dependent data, i.e., y linear in x
Non-linear LS, e.g.: a sin(bx) + c cos(dx), a e^{bx} + c e^{dx}
Minimization of our example, . . .
Copyright c 2011 by A. E. Naiman NM Slides Least Squares Method, p. 7
Least Squares Method
Motivation and Approach
Linearly Dependent Data
General Basis Functions
Polynomial Regression
Function Approximation
Copyright c 2011 by A. E. Naiman NM Slides Least Squares Method, p. 8
LLS for Linearly Dependent Data: Method
Function to minimize:
φ(a, b) = Σ_{k=0}^m e_k^2 = Σ_{k=0}^m (a x_k + b - y_k)^2
lead to two differentiations:
2 Σ_{k=0}^m (a x_k + b - y_k) x_k = 0, and 2 Σ_{k=0}^m (a x_k + b - y_k) = 0
or as a system of linear equations in a and b:
(Σ_{k=0}^m x_k^2) a + (Σ_{k=0}^m x_k) b = Σ_{k=0}^m x_k y_k
(Σ_{k=0}^m x_k) a + (m + 1) b = Σ_{k=0}^m y_k
Coefficient matrix = cross-products of a and b coefficients.
Copyright c 2011 by A. E. Naiman NM Slides Least Squares Method, p. 9
LLS for Linearly Dependent Data: Solution
We obtain:
a = (1/d) [(m + 1) Σ_{k=0}^m x_k y_k - Σ_{k=0}^m x_k Σ_{k=0}^m y_k]
and
b = (1/d) [Σ_{k=0}^m x_k^2 Σ_{k=0}^m y_k - Σ_{k=0}^m x_k Σ_{k=0}^m x_k y_k]
where d is the determinant:
d = (m + 1) Σ_{k=0}^m x_k^2 - (Σ_{k=0}^m x_k)^2
What does this look like?
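The closed-form solution is easy to check numerically; a minimal sketch (the function name and sample data are mine):

import numpy as np

def fit_line(x, y):
    m1 = len(x)                        # m + 1 points
    sx, sy = x.sum(), y.sum()
    sxx, sxy = (x * x).sum(), (x * y).sum()
    d = m1 * sxx - sx**2               # the determinant above
    return (m1 * sxy - sx * sy) / d, (sxx * sy - sx * sxy) / d

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.1, 0.9, 2.2, 2.9])     # noisy y ~ x
print(fit_line(x, y))                  # ~(0.97, 0.07)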
Copyright c 2011 by A. E. Naiman NM Slides Least Squares Method, p. 10
LLS Solution for Sample Data
[Plot: y_i values vs. x_i values, with the least squares line through the data]
What about non-linearly dependent data?
Copyright c 2011 by A. E. Naiman NM Slides Least Squares Method, p. 11
Least Squares Method
Motivation and Approach
Linearly Dependent Data
General Basis Functions
Polynomial Regression
Function Approximation
Copyright c 2011 by A. E. Naiman NM Slides Least Squares Method, p. 12
Non-Linearly Dependent Data
Linear least squares, for linear combination of any functions, e.g.:
y = a ln x + b cos x + c e^x
Minimization of φ: three differentiations:
set ∂φ/∂a = 0, ∂φ/∂b = 0 and ∂φ/∂c = 0
Elements of matrix: sums of cross-products of functions:
Σ_{k=0}^m ln(x_k) e^{x_k}, Σ_{k=0}^m (cos x_k)^2, . . .
A more general form, . . .
Copyright c 2011 by A. E. Naiman NM Slides Least Squares Method, p. 13
Linear Combinations of General Functions
m + 1 points {(x_0, y_0), (x_1, y_1), . . . , (x_m, y_m)}
n + 1 basis functions g_0, g_1, . . . , g_n, such that
g(x) = Σ_{j=0}^n c_j g_j(x)
Error function
φ(c_0, c_1, . . . , c_n) = Σ_{k=0}^m [Σ_{j=0}^n c_j g_j(x_k) - y_k]^2
Minimization:
∂φ/∂c_i = 2 Σ_{k=0}^m [Σ_{j=0}^n c_j g_j(x_k) - y_k] g_i(x_k), set = 0, i = 0, . . . , n
Pulling it together, . . .
Copyright c 2011 by A. E. Naiman NM Slides Least Squares Method, p. 14
Normal Equations
Normal equations:
Σ_{j=0}^n [Σ_{k=0}^m g_i(x_k) g_j(x_k)] c_j = Σ_{k=0}^m y_k g_i(x_k), i = 0, . . . , n
Note: n + 1 equations (i.e., rows) and n + 1 columns
(Coefficient matrix)_{ij} = Σ_{k=0}^m g_i(x_k) g_j(x_k)
Possible solution method: Gaussian elimination
Require of g_j(x) for any solution method
linear independence (lest there be no solution)
appropriateness (e.g., not sins for linear data)
well-conditioned matrix (opposite of ill-conditioned)
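A sketch of assembling and solving the normal equations for arbitrary basis functions (names and test data mine): in matrix terms, with G[k, j] = g_j(x_k), the system is (GᵀG)c = Gᵀy.

import numpy as np

def lls_fit(x, y, basis):
    G = np.column_stack([g(x) for g in basis])  # G[k, j] = g_j(x_k)
    A = G.T @ G               # A[i, j] = sum_k g_i(x_k) g_j(x_k)
    return np.linalg.solve(A, G.T @ y)

# recover y = 2 ln x - cos x + 0.5 e^x from samples
x = np.linspace(1.0, 2.0, 20)
y = 2*np.log(x) - np.cos(x) + 0.5*np.exp(x)
print(lls_fit(x, y, [np.log, np.cos, np.exp]))  # ~[2, -1, 0.5]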
Copyright c 2011 by A. E. Naiman NM Slides Least Squares Method, p. 15
Choice of Basis Functions
What if basis functions are unknown?
Choose them for numerically good coefficient matrix (at least not ill-conditioned)
Orthogonality ⇒ diagonal matrix, would be nice
Orthonormality ⇒ identity matrix, would be best, i.e.,
Σ_{k=0}^m g_i(x_k) g_j(x_k) = δ_{ij}
and compute coefficients directly
c_i = Σ_{k=0}^m y_k g_i(x_k), i = 0, . . . , n
Can be done with Gram-Schmidt process
Another method for choosing basis functions, . . .
Copyright c 2011 by A. E. Naiman NM Slides Least Squares Method, p. 16
Chebyshev Polynomials
Assume that the basis functions are in P_n, x_i ∈ [-1, 1]
1, x, x^2, x^3, . . . are too alike to describe varying behavior
Use Chebyshev polynomials: 1, x, 2x^2 - 1, 4x^3 - 3x, . . .
[Plot: T_1, . . . , T_5 on [0, 1]; function value vs. x]
. . . with Gaussian elimination produces accurate results.
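The polynomials themselves come from the three-term recurrence T_{k+1}(x) = 2x T_k(x) - T_{k-1}(x); a sketch (the function name is mine):

def chebyshev(n, x):
    # values T_0(x), ..., T_n(x) via T_{k+1} = 2x T_k - T_{k-1}
    T = [1.0, x]
    for _ in range(n - 1):
        T.append(2 * x * T[-1] - T[-2])
    return T[:n + 1]

print(chebyshev(3, 0.5))  # [1.0, 0.5, -0.5, -1.0]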
Copyright c 2011 by A. E. Naiman NM Slides Least Squares Method, p. 17
Least Squares Method
Motivation and Approach
Linearly Dependent Data
General Basis Functions
Polynomial Regression
Function Approximation
Copyright c 2011 by A. E. Naiman NM Slides Least Squares Method, p. 18
Motivation and Definition
Want to smooth out data to a polynomial p_N(x)
Problem: what degree N polynomial?
For m + 1 points, certainly N < m, as N = m is interpolation
Define variance σ_n^2
σ_n^2 = [1/(m - n)] Σ_{k=0}^m [y_k - p_n(x_k)]^2, (n < m)
Note: summation part (i.e., without leading fraction) of σ_n^2 is monotonic non-increasing with n (why?)
Copyright c 2011 by A. E. Naiman NM Slides Least Squares Method, p. 19
Regression Theory
Statistical theory: if data (with noise) is really of p_N(x), then:
σ_0^2 > σ_1^2 > σ_2^2 > · · · > σ_N^2 ≈ σ_{N+1}^2 ≈ σ_{N+2}^2 ≈ · · · ≈ σ_{m-1}^2
⇒ with reasonable noise, stop when σ_N^2 ≈ σ_{N+1}^2 ≈ σ_{N+2}^2 ≈ · · ·
[Plot: σ_n^2 vs. polynomial degree n, from N - 4 to N + 4, leveling off at n = N]
Copyright c 2011 by A. E. Naiman NM Slides Least Squares Method, p. 20
Least Squares Method
Motivation and Approach
Linearly Dependent Data
General Basis Functions
Polynomial Regression
Function Approximation
Copyright c 2011 by A. E. Naiman NM Slides Least Squares Method, p. 21
Continuous Data
Given f(x) on [a, b], perhaps from experiment
Replace complicated or numerically expensive f(x) with
g(x) = Σ_{j=0}^n c_j g_j(x)
Continuous analog of error function
φ(c_0, c_1, . . . , c_n) = ∫_a^b [g(x) - f(x)]^2 dx
Can also weight parts of the interval differently
φ(c_0, c_1, . . . , c_n) = ∫_a^b [g(x) - f(x)]^2 w(x) dx
Copyright c 2011 by A. E. Naiman NM Slides Least Squares Method, p. 22
Normal Equations and Basis Functions
Differentiating, we get the normal equations
Σ_{j=0}^n [∫_a^b g_i(x) g_j(x) w(x) dx] c_j = ∫_a^b f(x) g_i(x) w(x) dx, i = 0, . . . , n
Want orthogonality of (coefficient matrix)_{ij}
∫_a^b g_i(x) g_j(x) w(x) dx = 0, i ≠ j
For weighting interval ends, use Chebyshev polynomials since
∫_{-1}^1 T_i(x) T_j(x) / √(1 - x^2) dx = 0 if i ≠ j; π/2 if i = j > 0; π if i = j = 0
Copyright c 2011 by A. E. Naiman NM Slides Least Squares Method, p. 23
Simulation
Random Numbers
Monte Carlo Integration
Problems and Games
Copyright c 2011 by A. E. Naiman NM Slides Simulation, p. 1
Motivation
Typical problem: traffic lights (sans clover leaf)
given traffic flow parameters . . .
how to determine the optimal period
how to distribute the time per period
note: these are all inter-dependent
Analytically very hard (or impossible)
Empirical simulation can approach the problem
Need to implement randomization for modeling various conditions
Less mathematical, but not less important.
Copyright c 2011 by A. E. Naiman NM Slides Simulation, p. 2
Random Numbers: Usage
With simulation ⇒ assist understanding of
standard/steady state conditions
various perturbations
Monte Carlo: running a process many times with randomization
⇒ help draw statistics
Copyright c 2011 by A. E. Naiman NM Slides Simulation, p. 3
Random Numbers: Requirements
Not ordered, e.g., monotonic or other patterns
Equal distribution
Often RNG produce x ∈ [0, 1)
Desired (demanded!): P(a, a + h) = h; independent of a
Low or no periodicity
No easy generating function from one number to the next
can be deceivingly random-looking
e.g.: digits of π
Copyright c 2011 by A. E. Naiman NM Slides Simulation, p. 4
Random Number Generators
Computers are deterministic ⇒ not an easy problem
Current computer clock: 1/100 of a second ⇒ not good
for requests every < 1/100 second
for any requests with periodicity of 1/100 second
Often based on Mersenne primes (so far, 45 of them)
definition: 2^k - 1, for some k
e.g.: k = 31 ⇒ 2,147,483,647
largest (as of 3 August 2008): k = 43,112,609 ⇒ 12,978,189 decimal digits!
other usages: cryptology
Copyright c 2011 by A. E. Naiman NM Slides Simulation, p. 5
Testing and Using a RNG
Not all RNG were created equal!
One can (and should) histogram a RNG
Not obvious (nor necessarily known)
number of trials necessary for testing a RNG
number of trials necessary when using a RNG
Require better RNG, for higher usage of RNG
For ranges other than [0, 1): apply obvious mapping
Copyright c 2011 by A. E. Naiman NM Slides Simulation, p. 6
Incorrect Usage: In an Ellipse
[Plot: sampled points inside the ellipse, bunched near x = ±2]
Equation: x^2 + 4y^2 = 4
Generation algorithm:
x_i ← rng(-2, 2), y_i ← rng(-1, 1)
y_i correction: y_i ← (y_i/2) √(4 - x_i^2)
Points bunch up at ends ⇒ non-uniformity.
Copyright c 2011 by A. E. Naiman NM Slides Simulation, p. 7
Incorrect Usage: In a Circle
[Plot: sampled points inside the unit circle, bunched near the center]
Generation algorithm:
θ_i ← rng(0, 2π), r_i ← rng(0, 1)
Points bunch in the middle ⇒ non-uniformity.
Copyright c 2011 by A. E. Naiman NM Slides Simulation, p. 8
Correct Usage: In an Ellipse
[Plot: uniformly distributed points inside the ellipse x^2 + 4y^2 = 4]
Generate extra points, discarding exterior ones
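A sketch of this rejection approach (names mine): sample the bounding box uniformly and keep only interior points; the acceptance rate is area(ellipse)/area(box) = 2π/8 ≈ π/4.

import random

def ellipse_points(n):
    pts = []
    while len(pts) < n:
        x = random.uniform(-2.0, 2.0)   # bounding box of x^2 + 4y^2 = 4
        y = random.uniform(-1.0, 1.0)
        if x*x + 4*y*y <= 4.0:          # discard exterior points
            pts.append((x, y))
    return pts

pts = ellipse_points(1000)   # ~ pi/4 of the candidates are accepted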
Copyright c 2011 by A. E. Naiman NM Slides Simulation, p. 9
Correct Usage: In a Circle
[Plot: uniformly distributed points inside the unit circle]
Generate extra Cartesian points, discarding exterior ones
Copyright c 2011 by A. E. Naiman NM Slides Simulation, p. 10
Simulation
Random Numbers
Monte Carlo Integration
Problems and Games
Copyright c 2011 by A. E. Naiman NM Slides Simulation, p. 11
Numerical Integration
Motivation: to solve ∫_0^1 f(x) dx
Possible solutions
Composite Trapezoid Rule
Composite Simpson's Rule
Romberg Algorithm
Gaussian Quadrature
Problem: sometimes things are more difficult, particularly in higher dimensions
Monte Carlo solution: for x_i ← rng(0, 1)
∫_0^1 f(x) dx ≈ (1/n) Σ_{i=1}^n f(x_i)
Error (from statistical analysis): O(1/√n)
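A sketch of the one-dimensional estimator (names mine); with f(x) = √x the exact integral is 2/3:

import random, math

def mc_integrate(f, n=100_000):
    # average of f at n uniform samples in [0, 1)
    return sum(f(random.random()) for _ in range(n)) / n

print(mc_integrate(math.sqrt))  # ~0.6667, error on the order of 1/sqrt(n)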
Copyright c 2011 by A. E. Naiman NM Slides Simulation, p. 12
Higher Dimensions and Non-Unity Domains
In 3D: for (x_i, y_i, z_i) ← rng(0, 1)
∫_0^1 ∫_0^1 ∫_0^1 f(x, y, z) dx dy dz ≈ (1/n) Σ_{i=1}^n f(x_i, y_i, z_i)
Non-unity domain: for x_i ← rng(a, b)
∫_a^b f(x) dx ≈ (b - a) (1/n) Σ_{i=1}^n f(x_i)
In general:
∫_A f ≈ (size of A) × (average of f for n random points in A)
Copyright c 2011 by A. E. Naiman NM Slides Simulation, p. 13
Sample Integration Problem
[Surface plot of the integrand over the unit square]
Integral: ∫∫_Ω sin(√(ln(x + y + 1))) dx dy
Domain: Ω = {(x - 1/2)^2 + (y - 1/2)^2 ≤ 1/4}
Copyright c 2011 by A. E. Naiman NM Slides Simulation, p. 14
Sample Integration Solution
[Surface plot with the random sample points inside the disk Ω]
Solution: [π/(4n)] Σ_{i=1}^n f(p_i), p_i chosen properly (how?)
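One proper choice of the p_i: rejection-sample the disk from the unit square, then scale the sample mean by the disk area π/4. A sketch (names mine):

import random, math

def mc_disk(f, n=100_000):
    total = hits = 0
    while hits < n:
        x, y = random.random(), random.random()
        if (x - 0.5)**2 + (y - 0.5)**2 <= 0.25:   # inside the disk
            total += f(x, y)
            hits += 1
    return (math.pi / 4) * total / n              # area times mean of f

f = lambda x, y: math.sin(math.sqrt(math.log(x + y + 1)))
print(mc_disk(f))  # ~0.57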
Copyright c 2011 by A. E. Naiman NM Slides Simulation, p. 15
Computing Volumes
Problem: determine the volume of the region which satisfies:
0 ≤ x ≤ 1, 0 ≤ y ≤ 1, 0 ≤ z ≤ 3
x^2 + sin y ≤ z
x + e^y + z ≤ 4
Solution
generate random points in (0, 0, 0) . . . (1, 1, 3)
determine percentage which satisfies constraints
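A sketch of that recipe (names mine); the box volume is 1 · 1 · 3 = 3, so the estimate is the hit fraction times 3:

import random, math

def volume_estimate(n=1_000_000):
    hits = 0
    for _ in range(n):
        x, y = random.random(), random.random()
        z = random.uniform(0.0, 3.0)
        if x*x + math.sin(y) <= z and x + math.exp(y) + z <= 4.0:
            hits += 1
    return 3.0 * hits / n    # box volume times the hit fraction

print(volume_estimate())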
Copyright c 2011 by A. E. Naiman NM Slides Simulation, p. 16
Geometric Interpretation
[Surface plot over the unit square: z = x^2 + sin y below, z = 4 - x - e^y above]
Desired volume is on the left hand side, between the graphs
Copyright c 2011 by A. E. Naiman NM Slides Simulation, p. 17
Simulation
Random Numbers
Monte Carlo Integration
Problems and Games
Copyright c 2011 by A. E. Naiman NM Slides Simulation, p. 18
Probability/Chance of Dice and Cards
Dice
a 12, for 2 dice, 24 throws
a 19, for many dice
loaded die
Cards
shuffling in general
straight flush
royal flush
4 of a kind
Can be calculated exactly, or approximated by simulation.
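For instance, reading the first dice item as the classic question of at least one 12 (double six) in 24 throws of two dice (my interpretation), a simulation sketch:

import random

def p_twelve_in_24(trials=100_000):
    wins = sum(
        any(random.randint(1, 6) + random.randint(1, 6) == 12
            for _ in range(24))
        for _ in range(trials)
    )
    return wins / trials

print(p_twelve_in_24())  # exact: 1 - (35/36)^24 = 0.4914...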
Copyright c 2011 by A. E. Naiman NM Slides Simulation, p. 19
Miscellaneous Problems
How many people for probable coinciding birthdays?
Buffon's Needle
lined paper
needle of inter-line length
probability of dropped needle crossing a line?
Monty Hall problem
Neutron shielding (random walk)
n tennis players ⇒ how many matches?
100 light switches, all off
person i switches multiples of i, i = 1, . . . , 100
which remain on?
Problems with somewhat difficult analytic solutions.
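Buffon's Needle makes a nice closing example: with needle length equal to the line spacing, the crossing probability is 2/π, so the simulation doubles as an estimator of π. A sketch (names mine):

import random, math

def buffon(trials=1_000_000):
    crossings = 0
    for _ in range(trials):
        d = random.uniform(0.0, 0.5)           # center-to-nearest-line distance
        theta = random.uniform(0.0, math.pi)   # needle angle
        if d <= 0.5 * math.sin(theta):         # vertical half-extent reaches the line
            crossings += 1
    return crossings / trials

p = buffon()
print(p, 2 / p)  # ~0.6366 and ~3.1416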
Copyright c 2011 by A. E. Naiman NM Slides Simulation, p. 20