
CSE 541 Numerical Methods

Roger Crawfis
Ohio State University

Root Finding

Root Finding Topics

Bi-section method
Newton's method
Uses of root finding for sqrt() and reciprocal sqrt()
Secant method
Generalized Newton's method for systems of nonlinear equations
The Jacobian matrix
Fixed-point formulas, basins of attraction, and fractals
July 4, 2016

OSU/CIS 541

Motivation
Many problems can be re-written into a form such as:
f(x, y, z, ...) = 0
f(x, y, z, ...) = g(s, q, ...)


Motivation
A root, r, of function f occurs when f(r) = 0.
For example:
f(x) = x² - 2x - 3
has two roots at r = -1 and r = 3:
f(-1) = 1 + 2 - 3 = 0
f(3) = 9 - 6 - 3 = 0

We can also look at f in its factored form:

f(x) = x² - 2x - 3 = (x + 1)(x - 3)

Factored Form of Functions


The factored form is not limited to polynomials.
Consider:
f(x) = x sin x - sin x
A root exists at x = 1, since
f(x) = (x - 1) sin x

Or,
f(x) = sin(πx), with roots at every integer: factors x (x - 1) (x - 2) ...

Examples
Find x, such that
x^p = c,  i.e.  x^p - c = 0
Calculate sqrt(2):
x² - 2 = 0  =>  x = ±√2

Ballistics
Determine the horizontal distance at which the projectile will intersect the terrain function.


Root Finding Algorithms


Closed or bracketed techniques
  Bi-section
  Regula-Falsi
Open techniques
  Newton fixed-point iteration
  Secant method
Multidimensional non-linear problems
  The Jacobian matrix
Fixed-point iterations
  Convergence and fractal basins of attraction

Bisection Method
Based on the fact that the function changes sign as it passes through the root:

f(a) * f(b) < 0

Once we have a root bracketed, we simply evaluate the mid-point and halve the interval.


Bisection Method
c = (a + b)/2
(Figure: f(a) > 0, f(c) > 0, f(b) < 0; the root lies between c and b.)

Bisection Method
Guaranteed to converge to a root if one exists within the bracket.
(Figure: a = c; now f(a) > 0, f(c) < 0, f(b) < 0.)

Bisection Method
Slowly converges to the root.
(Figure: b = c; f(b) < 0.)

Bisection Method
Simple algorithm:
Given: a and b, such that f(a)*f(b) < 0
Given: error tolerance, err

c = (a+b)/2.0;            // find the midpoint
while( fabs(f(c)) > err ) {
    if( f(a)*f(c) < 0 )   // root in the left half
        b = c;
    else                  // root in the right half
        a = c;
    c = (a+b)/2.0;        // find the new midpoint
}
return c;

Relative Error
We can develop an upper bound on the relative error quite easily. Since c is the midpoint of [a, b], |x - c| <= (b - a)/2, and for 0 < a <= x we have |x| >= a, so:

|x - c| / |x| <= (b - a) / (2a)

Absolute Error
What does this mean in binary mode?
err0 = |b - a|
erri+1 = erri / 2 = |b - a| / 2^(i+1)

We gain an extra bit each iteration!!!

To reach a desired absolute error tolerance, errtol:

|b - a| / 2^n <= errtol   =>   n >= log2( |b - a| / errtol )

Absolute Error
The bisection method converges linearly, or first-order, to the root.
If we need an accuracy of 0.0001 and our initial interval is (b - a) = 1, then:
2^-n < 0.0001  =>  14 iterations

Not bad. Why do I need anything else?


A Note on Functions
Functions can be simple, but I may need to evaluate them many, many times.
Or, a function can be extremely complicated. Consider:
Finding the configuration of air vents (position, orientation, direction of flow) that makes the temperature in the room at a particular position (the teacher's desk) equal to 72 degrees.
Is this a function?

A Note on Functions
This function may require a complex three-dimensional heat-transfer simulation coupled with a fluid-flow simulation to evaluate: hours of computational time on a supercomputer!!!
It may not necessarily even be computational.
Techniques existed before the Babylonians.

Root Finding Algorithms


Closed or bracketed techniques
  Bi-section
  Regula-Falsi
Open techniques
  Newton fixed-point iteration
  Secant method
Multidimensional non-linear problems
  The Jacobian matrix
Fixed-point iterations
  Convergence and fractal basins of attraction

Regula Falsi
In the book, under computer problem 16 of section 3.3:
Assume the function is linear within the bracket.
Find the intersection of the line with the x-axis.


Regula Falsi
The secant line through (a, f(a)) and (b, f(b)) is:

y(x) = f(b) + [ (f(a) - f(b)) / (a - b) ] (x - b)

Setting y(c) = 0:

0 = f(b) + [ (f(a) - f(b)) / (a - b) ] (c - b)
c - b = -f(b) (a - b) / (f(a) - f(b))
c = b - f(b) (a - b) / (f(a) - f(b))

(Figure: f(c) < 0 at the new point c.)

Regula Falsi
Large benefit when the root is much closer
to one side.
Do I have to worry about division by zero?
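A single false-position step can be sketched in C (hypothetical helper; the bracket condition f(a)*f(b) < 0 guarantees the denominator below is nonzero, which answers the division-by-zero question):

```c
#include <math.h>

/* One regula-falsi step: the root of the line through (a, f(a)) and
   (b, f(b)).  Assumes fa and fb have opposite signs, so fa - fb != 0. */
double regula_falsi_step(double a, double b, double fa, double fb) {
    return b - fb * (a - b) / (fa - fb);
}
```

For f(x) = x² - 2x - 3 on [2, 4] (f(2) = -3, f(4) = 5), one step gives c = 2.75, already much closer to the root at 3 than the midpoint.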


Regula Falsi
More generally, we can state this method as:

c = w·a + (1 - w)·b

for some weight w, 0 ≤ w ≤ 1.
If |f(a)| >> |f(b)|, then w < 0.5: c is closer to b.


Bracketing Methods

Bracketing methods are robust.
Convergence is typically slower than open methods.
Use them to find the approximate location of roots, then polish with open methods.
Relies on identifying two points a, b initially such that:

f(a) f(b) < 0

Guaranteed to converge.

Root Finding Algorithms


Closed or bracketed techniques
  Bi-section
  Regula-Falsi
Open techniques
  Newton fixed-point iteration
  Secant method
Multidimensional non-linear problems
  The Jacobian matrix
Fixed-point iterations
  Convergence and fractal basins of attraction

Newton's Method
An open solution that requires only one current guess: the root does not need to be bracketed.
Consider some point x0.
If we approximate f(x) as a line about x0, then we can again solve for the root of the line:

l(x) = f'(x0)(x - x0) + f(x0)


Newton's Method
Solving l(x) = 0 leads to the following iteration:

x1 = x0 - f(x0) / f'(x0)

and, in general:

xi+1 = xi - f(xi) / f'(xi)
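The iteration above is a few lines of C (a minimal sketch with a fixed iteration count and no divergence checks; the helper names are hypothetical):

```c
#include <math.h>

/* Newton's iteration: x_{i+1} = x_i - f(x_i)/f'(x_i). */
double newton(double (*f)(double), double (*df)(double),
              double x, int iters) {
    for (int i = 0; i < iters; i++)
        x -= f(x) / df(x);
    return x;
}

/* The earlier example: f(x) = x^2 - 2x - 3, with roots -1 and 3. */
static double f_demo(double x)  { return x * x - 2.0 * x - 3.0; }
static double df_demo(double x) { return 2.0 * x - 2.0; }
```

Starting at 4 it converges to the root at 3; starting at -2 it converges to -1.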

Newton's Method
This can also be seen from Taylor's series.
Assume we have a guess, x0, close to the actual root, and expand f about the current iterate xi with step Δx:

f(xi + Δx) = f(xi) + Δx f'(xi) + (Δx²/2!) f''(xi) + ... = 0

If Δx is small, then the higher powers Δxⁿ quickly go to zero, so keeping only the linear terms:

Δx = xi+1 - xi = -f(xi) / f'(xi)

Newton's Method
Graphically, follow the tangent line down to its x-axis intersection.
(Figure: the tangent at xi crosses the x-axis at xi+1.)

Newton's Method
Problems
(Figure: from a poor starting guess x0, the iteration diverges.)

Newton's Method
Need the initial guess to be close, or the function to behave nearly linearly within the range.


Finding a square-root
Ever wonder why they call this a square-root?
Consider the roots of the equation:

f(x) = x² - a

This of course works for any power:

x^p - a = 0,  p ∈ R

Finding a square-root
Example: √2 = 1.4142135623730950488016887242097...
Let x0 be one and apply Newton's method with f(x) = x² - 2:

xi+1 = xi - (xi² - 2) / (2 xi) = (1/2)(xi + 2/xi)

x0 = 1
x1 = (1/2)(1 + 2/1) = 3/2 = 1.5000000000
x2 = (1/2)(3/2 + 4/3) = 17/12 ≈ 1.4166666667

Finding a square-root
Example: √2 = 1.4142135623730950488016887242097...
Note the rapid convergence:

x3 = (1/2)(17/12 + 24/17) = 577/408 ≈ 1.414215686
x4 ≈ 1.4142135623746...
x5 ≈ 1.4142135623730950488016896...
x6 ≈ 1.4142135623730950488016887242097

Note, this was done with the standard Microsoft calculator at maximum precision.
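The same sequence can be reproduced in a few lines (a sketch; `newton_sqrt` is a hypothetical helper name):

```c
#include <math.h>

/* Newton's iteration for sqrt(a): x_{i+1} = (x_i + a/x_i) / 2. */
double newton_sqrt(double a, double x0, int iters) {
    double x = x0;
    for (int i = 0; i < iters; i++)
        x = 0.5 * (x + a / x);
    return x;
}
```

Two iterations from x0 = 1 give 17/12; six iterations are already at full double precision.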

Finding a square-root
Can we come up with a better initial guess?
Sure, just divide the exponent by 2.
Remember the bias offset.
Use bit-masks to extract the exponent to an integer, modify it, and set the initial guess.

For √2, this will lead to x0 = 1 (round down).
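A sketch of this exponent-halving seed for IEEE-754 single precision (`sqrt_seed` is a hypothetical helper; it ignores the mantissa and subnormals):

```c
#include <stdint.h>
#include <string.h>

/* Seed for sqrt(x): copy the bits, halve the unbiased exponent,
   rebuild with a zero mantissa, giving 2^(e/2). */
float sqrt_seed(float x) {
    uint32_t bits;
    memcpy(&bits, &x, sizeof bits);              /* reinterpret the bits */
    int e  = (int)((bits >> 23) & 0xFF) - 127;   /* unbiased exponent */
    int e2 = e / 2;                              /* halve (rounds toward 0) */
    bits = (uint32_t)(e2 + 127) << 23;           /* re-bias, mantissa = 0 */
    memcpy(&x, &bits, sizeof x);
    return x;
}
```

As on the slide, the seed for 2 rounds down to 1; for 16 it is exactly 4.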


Convergence Rate of Newton's Method
Let en = x* - xn, so x* = xn + en. Expanding f about xn:

0 = f(x*) = f(xn + en) = f(xn) + en f'(xn) + (1/2) en² f''(ξn),  for some ξn between x* and xn

so  f(xn) = -en f'(xn) - (1/2) en² f''(ξn)

Now,

en+1 = x* - xn+1 = x* - xn + f(xn)/f'(xn) = en + f(xn)/f'(xn)
     = (en f'(xn) + f(xn)) / f'(xn)

Substituting the expansion of f(xn):

en+1 = -(1/2) (f''(ξn) / f'(xn)) en²

Convergence Rate of Newton's Method
Converges quadratically:

if en ≈ 10^(-k), then en+1 ≈ c · 10^(-2k)

The number of correct digits roughly doubles each iteration.

Newton's Algorithm
Requires the derivative function to be evaluated, hence more function evaluations per iteration.
A robust solution would check whether the iteration is stepping too far and limit the step.
Most uses of Newton's method assume the approximation is pretty close and apply one to three iterations blindly.


Division by Multiplication
Newton's method has many uses in computing basic numbers.
For example, consider the equation:

f(x) = 1/x - a = 0

Newton's method gives the iteration:

xk+1 = xk - (1/xk - a) / (-1/xk²)
     = xk + xk - a·xk²
     = xk (2 - a·xk)
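Note that the iteration uses only multiplies and subtracts, so it computes a reciprocal without dividing. A sketch (`newton_recip` is a hypothetical helper name):

```c
#include <math.h>

/* Newton's iteration for 1/a without division:
   x_{k+1} = x_k (2 - a x_k).  Converges for 0 < x_0 < 2/a. */
double newton_recip(double a, double x0, int iters) {
    double x = x0;
    for (int i = 0; i < iters; i++)
        x = x * (2.0 - a * x);
    return x;
}
```

With a = 3 and a seed of 0.3, a handful of iterations reaches 1/3 to machine precision.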


Reciprocal Square Root


Another useful operator is the reciprocal square-root.
Needed to normalize vectors.
Can be used to calculate the square-root:

√a = a · (1/√a)


Reciprocal Square Root
Let

f(x) = 1/x² - a = 0,  so  f'(x) = -2/x³

Newton's iteration yields:

xk+1 = xk + (xk/2)(1 - a·xk²) = (1/2) xk (3 - a·xk²)
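The iteration in C (a sketch; `newton_rsqrt` is a hypothetical helper name):

```c
#include <math.h>

/* Newton's iteration for 1/sqrt(a): x_{k+1} = x_k (3 - a x_k^2) / 2. */
double newton_rsqrt(double a, double x0, int iters) {
    double x = x0;
    for (int i = 0; i < iters; i++)
        x = 0.5 * x * (3.0 - a * x * x);
    return x;
}
```

Starting from x0 = 1 with a = 2 reproduces the slow sequence on the next slide (0.5, then 0.625); starting near the root it converges in a few steps.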

1/Sqrt(2)
Let's look at the convergence for the reciprocal square-root of 2, starting from x0 = 1:

x1 = 0.5 · 1 · (3 - 2 · 1²) = 0.5
x2 = 0.5 · 0.5 · (3 - 2 · 0.5²) = 0.625
x3 = 0.693359375
x4 = 0.706708468496799468994140625
x5 = 0.707106444695907075511730676593228
x6 = 0.707106781186307335925435931237738
x7 = 0.70710678118654752440084423972481...

If we could only start closer to the root!!

1/Sqrt(x)
What is a good choice for the initial seed point?
Optimal: the root itself, but it is unknown.
Consider the normalized format of the number:

x = 2^(e-127) · (1.m),  where 1.m is the binary mantissa

What is the reciprocal?
What is the square-root?

1/Sqrt(x)
Theoretically,

1/√x = ( 2^(e-127) · (1.m) )^(-1/2) = (1.m)^(-1/2) · 2^((127-e)/2)

so the stored exponent field becomes (3·127 - e)/2: a new bit-pattern for the exponent.

Current GPUs provide this operation in as little as 2 clock cycles!!! How?
How many significant bits does this estimate have?
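The same exponent manipulation plus one Newton step is the well-known "fast inverse square root" bit trick (popularized by Quake III; it is not from these slides, but it combines exactly the ingredients above):

```c
#include <stdint.h>
#include <string.h>

/* One integer shift/subtract halves and negates the exponent while
   seeding the mantissa; one Newton step x (3 - a x^2)/2 refines it. */
float fast_rsqrt(float a) {
    float x = a;
    uint32_t i;
    memcpy(&i, &x, sizeof i);             /* reinterpret the float's bits */
    i = 0x5f3759dfu - (i >> 1);           /* magic seed */
    memcpy(&x, &i, sizeof x);
    x = x * (1.5f - 0.5f * a * x * x);    /* one Newton iteration */
    return x;
}
```

After a single iteration the relative error is already a fraction of a percent across the float range.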

1/Sqrt(x)
GPUs such as nVidia's FX cards provide a 23-bit accurate reciprocal square-root in two clock cycles, by doing only 2 iterations of Newton's method:
Need 24 bits of precision =>
the previous iteration had 12 bits of precision,
and it started with 6 bits of precision.

1/Sqrt(x)
Examine the mantissa term again (1.m).
Possible patterns are:
1.000..., 1.100..., 1.010..., 1.110..., ...

Pre-compute these and store the results in a table.
Fast and easy table look-up.
A 6-bit table look-up is only 64 words of on-chip cache.
Note, we only need to look up on m, not 1.m.
This yields a reciprocal square-root for the first seven bits, giving us about 6 bits of precision.

1/Sqrt(x)
Slight problem:
The 1.m term produces a result between 1 and 2, hence it remains normalized, 1.m'.
For 1/√x, we get a number between 1/√2 and 1.
Need to shift the exponent.


Root Finding Algorithms


Closed or bracketed techniques
  Bi-section
  Regula-Falsi
Open techniques
  Newton fixed-point iteration
  Secant method
Multidimensional non-linear problems
  The Jacobian matrix
Fixed-point iterations
  Convergence and fractal basins of attraction

Secant Method
What if we do not know the derivative of f(x)?
(Figure: the secant line through xi-1 and xi approximates the tangent at xi.)

Secant Method
As we converge on the root, the secant line approaches the tangent.
Hence, we can use the secant line as an estimate and look at where it intersects the x-axis (its root).


Secant Method
This also works by looking at the definition of the derivative:

f'(x) = lim(h -> 0) [ f(x+h) - f(x) ] / h

Approximating with the last two iterates:

f'(xk) ≈ [ f(xk) - f(xk-1) ] / ( xk - xk-1 )

Therefore, Newton's method gives:

xk+1 = xk - f(xk) · ( xk - xk-1 ) / [ f(xk) - f(xk-1) ]

Which is the Secant Method.
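A sketch of the method in C, carrying the last two iterates (`secant` and `g_demo` are hypothetical names; the guard stops the iteration once the secant is flat, which happens after convergence):

```c
#include <math.h>

/* Secant iteration: Newton with f' replaced by the finite difference
   through the last two iterates. */
double secant(double (*fn)(double), double x0, double x1, int iters) {
    for (int i = 0; i < iters; i++) {
        double f0 = fn(x0), f1 = fn(x1);
        if (f1 == f0)                   /* flat secant: stop */
            break;
        double x2 = x1 - f1 * (x1 - x0) / (f1 - f0);
        x0 = x1;
        x1 = x2;
    }
    return x1;
}

/* Same example as before: f(x) = x^2 - 2x - 3, roots at -1 and 3. */
static double g_demo(double x) { return x * x - 2.0 * x - 3.0; }
```

From the bracket endpoints 2 and 4 the iterates converge to the root at 3.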



Convergence Rate of Secant
Using Taylor's series, it can be shown (the proof is in the book) that:

ek+1 = x* - xk+1 ≈ (1/2) ( f''(ξk) / f'(ξk) ) ek ek-1 = c · ek ek-1

Convergence Rate of Secant
This is a recursive definition of the error term. Expanded out, we can say that:

|ek+1| ≈ C |ek|^α

where α = (1 + √5)/2 ≈ 1.62.
We call this super-linear convergence.

Root Finding Algorithms


Closed or bracketed techniques
  Bi-section
  Regula-Falsi
Open techniques
  Newton fixed-point iteration
  Secant method
Multidimensional non-linear problems
  The Jacobian matrix
Fixed-point iterations
  Convergence and fractal basins of attraction

Higher-dimensional Problems
Consider the class of functions f(x1, x2, x3, ..., xn) = 0, where we have a mapping from Rⁿ to R.
We can apply Newton's method separately for each variable, xi, holding the other variables fixed at the current guess.


Higher-dimensional Problems
This leads to the iteration:

xi <- xi - f(x1, x2, ..., xn) / (∂f/∂xi)(x1, x2, ..., xn)

Two choices: either keep a complete set of old guesses and compute new ones from them, or use the new values as soon as they are updated.
Might as well use the more accurate new guesses.
This is not a unique solution, but an infinite set of solutions.

Higher-dimensional Problems
Example:
x+y+z=3

Solutions:
x=3, y=0, z=0
x=0, y=3, z=0


Systems of Non-linear Equations


Consider the set of equations:

f1(x1, x2, ..., xn) = 0
f2(x1, x2, ..., xn) = 0
...
fn(x1, x2, ..., xn) = 0


Systems of Non-linear Equations


Example:

x + y + z = 3
x² + y² + z² = 5
e^x + xy - xz = 1

A plane, intersected with a sphere, intersected with a more complex function.
Conservation of mass coupled with conservation of energy, coupled with the solution to a complex problem.

Vector Notation
We can rewrite this using vector notation:

f(x) = 0
f = ( f1, f2, ..., fn )
x = ( x1, x2, ..., xn )


Newton's Method for Non-linear Systems
Newton's method for non-linear systems can be written as:

x^(k+1) = x^(k) - [ f'(x^(k)) ]^(-1) f(x^(k))

where f'(x^(k)) is the Jacobian matrix.


The Jacobian Matrix


The Jacobian contains all the partial derivatives of the set of functions:

        | ∂f1/∂x1  ∂f1/∂x2  ...  ∂f1/∂xn |
    J = | ∂f2/∂x1  ∂f2/∂x2  ...  ∂f2/∂xn |
        |   ...      ...           ...   |
        | ∂fn/∂x1  ∂fn/∂x2  ...  ∂fn/∂xn |

Note that these are all functions and need to be evaluated at a point to be useful.

The Jacobian Matrix


Hence, we write J(x^(i)) for the same matrix with every partial derivative evaluated at the point x^(i):

    J(x^(i)) = [ ∂fj/∂xk (x^(i)) ],   j, k = 1, ..., n

Matrix Inverse
We define the inverse of a matrix the same way as the reciprocal:

a · a^(-1) = 1
A A^(-1) = I, the identity matrix (ones on the diagonal, zeros elsewhere)

Newton's Method
If the Jacobian is non-singular, such that its inverse exists, then we can apply this to Newton's method.
We rarely want to compute the inverse, so instead we look at the problem:

x^(i+1) = x^(i) - [ J(x^(i)) ]^(-1) f(x^(i)) = x^(i) + h^(i)

Newton's Method
Now, we have a linear system and we solve for h:

J(x^(i)) h^(i) = -f(x^(i))
x^(i+1) = x^(i) + h^(i)

Repeat until h goes to zero.
We will look at solving linear systems later in the course.
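For a 2x2 system the linear solve for h is small enough to do by Cramer's rule. A sketch on an illustrative system (not from the slides): f1 = x² + y² - 4 = 0, f2 = x - y = 0, with Jacobian rows [2x, 2y] and [1, -1] and a root at x = y = √2:

```c
#include <math.h>

/* Newton for a 2x2 nonlinear system, solving J h = -f each step. */
void newton2(double *x, double *y, int iters) {
    for (int i = 0; i < iters; i++) {
        double f1 = (*x) * (*x) + (*y) * (*y) - 4.0;
        double f2 = *x - *y;
        double j11 = 2.0 * (*x), j12 = 2.0 * (*y);
        double j21 = 1.0,        j22 = -1.0;
        double det = j11 * j22 - j12 * j21;
        double h1 = (-f1 * j22 + f2 * j12) / det;  /* Cramer's rule */
        double h2 = (-f2 * j11 + f1 * j21) / det;  /* for J h = -f   */
        *x += h1;
        *y += h2;
    }
}
```

From the guess (1, 2) the iterates converge quadratically to (√2, √2).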

Initial Guess
How do we get an initial guess for the root vector in higher dimensions?
In 2D, I need to find a region that contains the root.
Steepest Descent is a more advanced topic not covered in this course. It is more stable and can be used to determine an approximate root.

Root Finding Algorithms


Closed or bracketed techniques
  Bi-section
  Regula-Falsi
Open techniques
  Newton fixed-point iteration
  Secant method
Multidimensional non-linear problems
  The Jacobian matrix
Fixed-point iterations
  Convergence and fractal basins of attraction

Fixed-Point Iteration
Many problems also take on the specialized form g(x) = x, where we seek the x that satisfies this equation.
(Figure: the fixed point is where the line y = x crosses y = g(x).)


Fixed-Point Iteration
Newton's iteration and the Secant method are of course of this form.
In the limit, f(xk) = 0, hence xk+1 = xk.


Fixed-Point Iteration
The only problem is that this assumes the iteration converges.
The pretty fractal images you see basically encode how many iterations it took to either converge (to some accuracy) or diverge, using each point as the initial seed of an iterative equation.
The book also has an example where the roots converge to a finite set. By assigning a different color to each root, we can see to which root each initial seed point converged.
July 4, 2016

OSU/CIS 541

71

Fractals
Images result when we deal with 2 dimensions, such as complex numbers.
Color indicates how quickly a point converges or diverges.

Fixed-Point Iteration
More on this when we look at iterative
solutions for linear systems (matrices).

