
 At the end of the period, you should be
able to:
 Determine the roots of an equation using the bisection method.
 Determine the roots of an equation using the Newton-Raphson method.
 Determine the roots of an equation using the secant method.
 Determine the roots of an equation using the fixed-point iteration method.
 Determine the roots of an equation using the false-position method.
 Create algorithms to find the roots of an equation.
 Determine the advantages and disadvantages of each algorithm.
 The roots or solutions of an equation f(x) = 0 are the values of x for which the equation holds true.
 Methods for finding roots of equations are
basic numerical methods and the subject is
generally covered in books on numerical
methods or numerical analysis.
 Numerical methods for finding roots of
equations can often be easily programmed
and can also be found in general numerical
libraries.
 Make sure you aren’t confused by the
terminology.
All of these are the same:
Solving a polynomial equation p(x) = 0
Finding roots of a polynomial equation
p(x) = 0
Finding zeroes of a polynomial function
p(x)
Factoring a polynomial function p(x)
 In math, the bisection method is a root-
finding algorithm which repeatedly bisects
an interval then selects a subinterval in
which a root must lie for further
processing.
 It is a very simple and robust method, but it
is also relatively slow.
 The bisection method is applicable when we wish to solve the equation f(x) = 0 for the variable x, where f is a continuous function.
 The bisection method requires two initial
points a and b such that f(a) and f(b) have
opposite signs.
 This is called a bracket of a root, for by the
IVT the continuous function f must have at
least one root in the interval (a, b).
 The bisection method uses the IVT.
If a function is continuous and f(a) and
f(b) have different signs then the
function has at least one zero in the
interval [a , b].
[Figure: a continuous function f on [a, b] with f(a) and f(b) of opposite signs]
 Theorem:
 If f is a continuous function on the interval
[a, b] and f(a)f(b) < 0, then the bisection
method converges to a root of f.
 The absolute error is halved at each step.
 Thus, the method converges linearly, which
is quite slow.
 On the other hand, the method is
guaranteed to converge if f(a) and f(b) have
different signs.
 Theorem: An equation f(x)=0, where f(x) is
a real continuous function, has at least one
root between xl and xu if f(xl) f(xu) < 0.
 Figure 1: At least one root exists between the two points if the function is real, continuous, and changes sign.
 Figure 2: If the function does not change sign between two points, roots of the equation may still exist between the two points.
 Figure 3: If the function does not change sign between two points, there may not be any roots for the equation between the two points.
 Figure 4: If the function changes sign between two points, more than one root for the equation may exist between the two points.
 Algorithm for Bisection Method:
 Step 1: Choose xl and xu as two guesses for
the root such that f(xl) f(xu) < 0, or in other
words, f(x) changes sign between xl and xu.
 Step 2: Estimate the root, xm, of the equation f(x) = 0 as the midpoint between xl and xu:

xm = (xl + xu) / 2
 Step 3: Check the following:
a) If f(xl) f(xm) < 0, then the root lies between xl and xm; set xu = xm.
b) If f(xl) f(xm) > 0, then the root lies between xm and xu; set xl = xm.
c) If f(xl) f(xm) = 0, then the root is xm. Stop the algorithm if this is true.
 Step 4:
Find the new estimate of the root

xm = (xl + xu) / 2

Find the absolute relative approximate error

|εa| = |(xm_new − xm_old) / xm_new| × 100

where xm_old = previous estimate of root and xm_new = current estimate of root.
 Step 5:
 Compare the absolute relative approximate error with the pre-specified error tolerance.
 Note: one should also check whether the number of iterations is more than the maximum number of iterations allowed. If so, one needs to terminate the algorithm and notify the user about it.
 Assumptions:
f(x) is continuous on [a,b]
f(a) f(b) < 0
 Algorithm:
Loop
1. Compute the mid point c=(a+b)/2
2. Evaluate f(c)
3. If f(a) f(c) < 0 then the new interval
is [a, c]; if f(a) f(c) > 0 then the
new interval is [c, b]
End loop
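The loop above translates almost line-for-line into code. A minimal sketch in Python (the test function, bracket, and tolerance below are illustrative choices, not from the slides):

```python
def bisect(f, a, b, tol=1e-6, max_iter=100):
    """Bisection: repeatedly halve [a, b], keeping the half where f changes sign."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        c = (a + b) / 2              # 1. compute the midpoint
        if f(a) * f(c) < 0:          # 2-3. sign change in [a, c]
            b = c
        else:                        # otherwise the root is in [c, b]
            a = c
        if (b - a) / 2 < tol:        # bracket is small enough
            break
    return (a + b) / 2

# Illustrative use: the root of x**2 - 2 in [1, 2] is sqrt(2)
root = bisect(lambda x: x**2 - 2, 1.0, 2.0)
```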
 Geometrically: the signs of f at a, the midpoint c, and b (for example +, +, −) show which half-interval still brackets the sign change.
 Flow Chart of Bisection Method
Start: given a, b and ε
u = f(a); v = f(b)
Repeat:
c = (a + b)/2; w = f(c)
if u·w < 0 then b = c; v = w, else a = c; u = w
if (b − a)/2 < ε then stop
 Two common stopping criteria:
Stop after a fixed number of iterations.
Stop when the absolute error is less than a specified value.

How are these criteria related?
 Stopping criteria:
cn: the midpoint of the interval at the nth iteration (cn is usually used as the estimate of the root).
r: the zero of the function.

After n iterations:

error = |r − cn| ≤ (b − a) / 2^n

 The number of iterations needed to guarantee a given error:

n ≥ log2(width of initial interval / desired error) = log2((b − a) / ε)
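The iteration-count bound can be checked numerically; a small sketch (the interval and tolerance are arbitrary examples):

```python
import math

def iterations_needed(a, b, eps):
    """Smallest n with (b - a) / 2**n <= eps, i.e. n >= log2((b - a) / eps)."""
    return math.ceil(math.log2((b - a) / eps))

# Halving [0, 1] down to a bracket half-width of 1e-6 needs 20 steps
n = iterations_needed(0.0, 1.0, 1e-6)
```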
 Example:
Consider the equation:

x^6 − x − 1 = 0

How many roots does the equation have?
What are the intervals that contain the roots?
Solve for the roots using the bisection method with error of less than 1%.
 Example:
Consider the equation: x^6 − x − 1 = 0

x ≈ −0.77809
x ≈ 1.13472
 Exercise:
Consider the equation:

x^3 − 6x + 4 = 0

How many roots does the equation have?
What are the intervals that contain the roots?
Solve for the roots using the bisection method with error of less than 1%.
 Example:
Consider the equations:

x^3 − 3x + 1 = 0

x / (1 + sin x) − e^x = 0

How many roots does each equation have?
What are the intervals that contain the roots?
Solve for the roots using the bisection method with error of less than 1%.
 Example:
Consider the equation: x^3 − 3x + 1 = 0

x ≈ 1.53209
x ≈ 0.34730
x ≈ −1.87939
 Example:
Consider the equation:

5 sin(2x) − 2x^2 + 4x = 0

How many roots does the equation have?
What are the intervals that contain the roots?
Solve for the roots using the bisection method with error of less than 1%.
 Advantages :
Always convergent
The root bracket gets halved with each
iteration - guaranteed.
 Drawbacks:
Slow convergence
If one of the initial guesses is close to
the root, the convergence is slower
 Drawbacks:
If a function f(x) is such that it just touches the x-axis, the method will be unable to find valid lower and upper guesses, since f never changes sign.
 Drawbacks:
The function may change sign between two points even though no root exists there (for example, across a vertical asymptote).
 Note:
The number of iterations n has to satisfy

n ≥ log2((b − a) / ε)

to ensure that the error is smaller than the tolerance ε.
NEWTON-RAPHSON
METHOD
 In numerical analysis, Newton's method (also
known as the Newton–Raphson method), named
after Isaac Newton and Joseph Raphson, is
perhaps the best known method for finding
successively better approximations to the zeroes
(or roots) of a real-valued function.
 Newton's method can often converge remarkably
quickly, especially if the iteration begins
"sufficiently near" the desired root.
 Just how near "sufficiently near" needs to be, and
just how quickly "remarkably quickly" can be,
depends on the problem.
 Given a function ƒ(x) and its derivative ƒ′(x), we begin with a first guess x0.
 Provided the function is reasonably well-behaved, a better approximation x1 is

x1 = x0 − f(x0) / f′(x0)

 The process is repeated until a sufficiently accurate value is reached:

xn+1 = xn − f(xn) / f′(xn)
 An illustration of one iteration of Newton's
method (the function ƒ is shown in blue and the
tangent line is in red).
 We see that xn+1 is a better approximation than xn
for the root x of the function f.
 The idea of the method is as follows: one starts
with an initial guess which is reasonably close to
the true root, then the function is approximated
by its tangent line, and one computes the x-
intercept of this tangent line.
 This x-intercept will typically be a better
approximation to the function's root than the
original guess, and the method can be iterated.
 Derivation:
From the figure, the tangent to f at B = (xi, f(xi)) crosses the x-axis at xi+1, so

tan(α) = AB / AC

f′(xi) = f(xi) / (xi − xi+1)

Solving for xi+1:

xi+1 = xi − f(xi) / f′(xi)
 Algorithm for Newton's Method
 Step 1: Evaluate f′(x) symbolically.
 Step 2: Use an initial guess of the root, xi, to estimate the new value of the root, xi+1, as

xi+1 = xi − f(xi) / f′(xi)

 Step 3: Find the absolute relative approximate error

|εa| = |(xi+1 − xi) / xi+1| × 100
 Step 4:
 Compare the absolute relative approximate error with the pre-specified error tolerance.
 Note: one should also check whether the number of iterations is more than the maximum number of iterations allowed. If so, one needs to terminate the algorithm and notify the user about it.
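Steps 1–4 can be sketched as follows (the caller supplies f and its derivative; the test function is an illustrative choice):

```python
def newton(f, fprime, x0, tol_percent=1e-4, max_iter=50):
    """Newton-Raphson: x_{i+1} = x_i - f(x_i)/f'(x_i), stopping on |ea| (%)."""
    x = x0
    for _ in range(max_iter):
        fp = fprime(x)
        if fp == 0:
            raise ZeroDivisionError("f'(x) is zero; the Newton step is undefined")
        x_new = x - f(x) / fp                # Step 2: new estimate of the root
        ea = abs((x_new - x) / x_new) * 100  # Step 3: relative approximate error, %
        x = x_new
        if ea < tol_percent:                 # Step 4: compare against tolerance
            break
    return x

# Illustrative use: f(x) = x**2 - 2, f'(x) = 2x, starting at x0 = 1
root = newton(lambda x: x**2 - 2, lambda x: 2 * x, 1.0)
```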
 Exercise:
Consider the equation:

x^3 − 6x + 4 = 0

How many roots does the equation have?
What are the intervals that contain the roots?
Solve for the roots using Newton's method with error of less than 0.00001.
 Example:
 Consider the problem of finding the positive
number x with cos(x) = x3.
 We can rephrase that as finding the zero of f(x) =
cos(x) − x3.
 We have f'(x) = −sin(x) − 3x2.
 Since cos(x) ≤ 1 for all x and x3 > 1 for x > 1, we know that
our zero lies between 0 and 1.
 We try a starting value of x0 = 0.5. (Note that a starting
value of 0 will lead to an undefined result.).
 Answer: Starting from x0 = 0.5, the iteration converges to x ≈ 0.865474.
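The iteration for this example can be reproduced directly (ten steps are more than enough; the limit is the positive solution of cos(x) = x^3):

```python
import math

f = lambda x: math.cos(x) - x**3
fp = lambda x: -math.sin(x) - 3 * x**2   # f'(x)

x = 0.5                     # starting value; x0 = 0 would make f'(x0) = 0
for _ in range(10):
    x = x - f(x) / fp(x)    # Newton step
```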
 Example:
 Consider the equation:

0.25x^5 − 7x − 3 = 0

How many roots does the equation have?
What are the intervals that contain the roots?
Solve for the roots using Newton's method with error of less than 0.000001.
 Answer:

x ≈ −2.17769
x ≈ −0.42909
x ≈ 2.39690
 Example:
 Consider the equation:

cos(x/4) + x^2 − 3x = 0

How many roots does the equation have?
What are the intervals that contain the roots?
Solve for the roots using Newton's method with relative error of less than 0.00001.
 Answer:

x ≈ 0.37995
x ≈ 2.71298
 Advantages :
 Converges fast (quadratic convergence), if it
converges.
 Requires only one guess.
 Drawbacks: Divergence at inflection points
 Selection of an initial guess, or an iterated value of the root, that is close to an inflection point of the function may cause the Newton-Raphson method to diverge away from the root, e.g. for

f(x) = (x − 1)^3 + 0.512 = 0
 Drawbacks:
 Divergence near the inflection point for f(x) = (x − 1)^3 + 0.512 = 0, with the iteration

xi+1 = xi − ((xi − 1)^3 + 0.512) / (3(xi − 1)^2)

Iteration | xi
0 | 5.0000
1 | 3.6560
2 | 2.7465
3 | 2.1084
4 | 1.6000
5 | 0.92589
6 | −30.119
7 | −19.746
…
18 | 0.2000
 Drawbacks: Division by zero
 For the equation

f(x) = x^3 − 0.03x^2 + 2.4×10^−6 = 0

 the Newton-Raphson method reduces to

xi+1 = xi − (xi^3 − 0.03xi^2 + 2.4×10^−6) / (3xi^2 − 0.06xi)

 For x = 0 or x = 0.02, the denominator will equal zero.
 Drawbacks:
 Oscillations near local maximum and minimum
 Results obtained from the Newton-Raphson method may
oscillate about the local maximum or minimum without
converging on a root but converging on the local maximum
or minimum.
 Eventually, it may lead to division by a number close to zero
and may diverge.
 For example, the equation f(x) = x^2 + 2 = 0 has no real roots.
 Drawbacks: Oscillations around a local minimum for f(x) = x^2 + 2 = 0

Iteration | xi | f(xi) | |εa| (%)
0 | −1.0000 | 3.00 | —
1 | 0.5 | 2.25 | 300.00
2 | −1.75 | 5.063 | 128.571
3 | −0.30357 | 2.092 | 476.47
4 | 3.1423 | 11.874 | 109.66
5 | 1.2529 | 3.570 | 150.80
6 | −0.17166 | 2.029 | 829.88
7 | 5.7395 | 34.942 | 102.99
8 | 2.6955 | 9.266 | 112.93
9 | 0.97678 | 2.954 | 175.96

[Figure: the iterates oscillating about the minimum of f(x) = x^2 + 2]
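The first rows of this table are easy to reproduce; a sketch (since x^2 + 2 has no real root, the iterates never settle):

```python
f = lambda x: x**2 + 2
fp = lambda x: 2 * x

x = -1.0
iterates = []
for _ in range(4):
    x = x - f(x) / fp(x)       # Newton step; oscillates around the minimum at 0
    iterates.append(round(x, 5))
# iterates begins 0.5, -1.75, -0.30357, ... as in the table
```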
 Drawbacks: Root Jumping
 In some cases where the function f(x) is
oscillating and has a number of roots, one may
choose an initial guess close to a root. However,
the guesses may jump and converge to some
other root.
 Drawbacks: Root Jumping Example
 For f(x) = sin x = 0, choose x0 = 2.4π ≈ 7.539822.
 The iteration converges to x = 0 instead of the nearer root x = 2π ≈ 6.2831853.

[Figure: sin x with iterates 7.539822 → 4.461 → 0.5499 → −0.06307 jumping past the root at 2π]
SECANT METHOD
 In numerical analysis, the secant method is a root-
finding algorithm that uses a succession of roots
of secant lines to better approximate a root of a
function f.

• The first two iterations of the secant method.
• The red curve shows the function f and the blue lines are the secants.
 The secant method is defined by the recurrence relation

xn+1 = xn − f(xn)(xn − xn−1) / (f(xn) − f(xn−1))

 As can be seen from the recurrence relation, the secant method requires two initial values, x0 and x1, which should ideally be chosen to lie close to the root.
 Derivation:
 From Newton's Method

xi+1 = xi − f(xi) / f′(xi)

 Approximate the derivative

f′(xi) ≈ (f(xi) − f(xi−1)) / (xi − xi−1)

 The Secant method

xi+1 = xi − f(xi)(xi − xi−1) / (f(xi) − f(xi−1))
 Derivation:
 Similar triangles (with B = (xi, f(xi)), C = (xi−1, f(xi−1)), and the secant crossing the x-axis at xi+1):

AB / AE = DC / DE

f(xi) / (xi − xi+1) = f(xi−1) / (xi−1 − xi+1)

 Solving gives the Secant method

xi+1 = xi − f(xi)(xi − xi−1) / (f(xi) − f(xi−1))
 Algorithm for Secant Method:
 Step 1: Calculate the next estimate of the root from two initial guesses

xi+1 = xi − f(xi)(xi − xi−1) / (f(xi) − f(xi−1))

 Step 2: Find the absolute relative approximate error

|εa| = |(xi+1 − xi) / xi+1| × 100
 Step 3:
 Find if the absolute relative approximate error
is greater than the prespecified relative error
tolerance
 If so, go back to step 1, else stop the algorithm.
 Also check if the number of iterations has
exceeded the maximum number of iterations
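Steps 1–3 can be sketched as follows (the test function and guesses are illustrative choices):

```python
def secant(f, x0, x1, tol_percent=1e-4, max_iter=100):
    """Secant method: Newton's method with f'(x) replaced by a finite difference."""
    for _ in range(max_iter):
        denom = f(x1) - f(x0)
        if denom == 0:
            raise ZeroDivisionError("f(x_i) == f(x_{i-1}); secant step undefined")
        x2 = x1 - f(x1) * (x1 - x0) / denom   # Step 1: next estimate
        ea = abs((x2 - x1) / x2) * 100        # Step 2: relative approximate error, %
        x0, x1 = x1, x2
        if ea < tol_percent:                  # Step 3: compare against tolerance
            break
    return x1

# Illustrative use: f(x) = x**2 - 2 with guesses 1 and 2
root = secant(lambda x: x**2 - 2, 1.0, 2.0)
```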
 Example:
 Consider the problem of finding the positive
number x with cos(x) = x3.
 We can rephrase that as finding the zero of f(x) =
cos(x) − x3.
 We try starting values of 0 and 1.
 Answer:
 For cos(x) = x^3, the iteration converges to x ≈ 0.865474.
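A sketch of this run (the guard stops the loop once the two iterates give identical function values, which would make the secant step 0/0):

```python
import math

f = lambda x: math.cos(x) - x**3

x0, x1 = 0.0, 1.0
for _ in range(8):
    if f(x1) == f(x0):       # converged to machine precision
        break
    x0, x1 = x1, x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
# x1 approximates the positive solution of cos(x) = x**3
```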
 Example:
 Consider the equation:

x^6 − x − 1 = 0

How many roots does the equation have?
What are the intervals that contain the roots?
Solve for the roots using the secant method with error of less than 0.000001.
 Example:
 Consider the equation:

5 sin(2x) − 2x^2 + 4x = 0

How many roots does the equation have?
What are the intervals that contain the roots?
Solve for the roots using the secant method with error of less than 0.000001.
 Advantages:
 Converges fast, if it converges
 Requires two guesses that do not need to bracket
the root
 Drawbacks:
 Division by zero: if f(xi) = f(xi−1), the denominator of the secant formula vanishes and the next iterate is undefined.

[Figure: a nearly flat f(x), where two guesses with equal function values cause division by zero]
 Drawbacks:
 Root Jumping: the secant line through the two guesses may cross the x-axis far from them, so the new guess can jump to, and converge on, a different root than the one nearest the initial guesses.

[Figure: a secant line through x1′ (first guess) and x0 (previous guess) producing a new guess x1 near a different root]
FIXED-POINT ITERATION
METHOD
 Start from f(x) = 0 and derive a relation x = g(x).
 The fixed-point method is simply given by

xk+1 = g(xk), k = 0, 1, 2, …
 Example:
 Compute a zero of f(x) = e^x − 4 + 2x.
 Answer:
 Derive a relation x = g(x) and iterate xk+1 = g(xk) from a starting guess x0.
 When does it converge? The iteration converges when |g′(x)| < 1 near the root, so the choice of g matters.
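A sketch of the method, assuming the reconstructed example f(x) = e^x − 4 + 2x and one particular (hypothetical) rearrangement x = g(x) = ln(4 − 2x), chosen because |g′(x)| = 2/(4 − 2x) < 1 near the root:

```python
import math

def fixed_point(g, x0, tol=1e-10, max_iter=500):
    """Iterate x_{k+1} = g(x_k) until successive iterates agree to within tol."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# f(x) = e**x - 4 + 2x = 0 rewritten as x = ln(4 - 2x); this g is contractive
root = fixed_point(lambda x: math.log(4 - 2 * x), 0.5)
```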
 Example:
 Consider the equation:

cos(x/4) + x^2 − 3x = 0

How many roots does the equation have?
What are the intervals that contain the roots?
Solve for the roots using the fixed-point iteration method with error of less than 0.000001.
 Answer:

x ≈ 0.37995
x ≈ 2.71298
FALSE-POSITION METHOD
 The false position method or regula falsi method is
a root-finding algorithm that combines features
from the bisection method and the secant method.
 Like the bisection method, the false position
method starts with two points a0 and b0 such that
f(a0) and f(b0) are of opposite signs, which implies
by the IVT that the function f has a root in the
interval [a0, b0].
 The method proceeds by producing a sequence of
shrinking intervals [ak, bk] that all contain a root of
f.
 The first two iterations of the false position method.
 The red curve shows the function f and the blue lines are the secants.
 The graph used in this method is shown in the figure.
 Advantage:
 Convergence is faster than bisection method.
 Disadvantages:
 1. It requires a and b.
 2. The convergence is generally slow.
 3. It is only applicable to f(x) of certain fixed
curvature in [a, b].
 4. It cannot handle multiple zeros.
 Alternative formula:
 At iteration number k, the number

ck = bk − f(bk)(bk − ak) / (f(bk) − f(ak))

is computed.
 ck is the root of the secant line through (ak, f(ak)) and (bk, f(bk)).
 If f(ak) and f(ck) have the same sign, then we set ak+1 = ck and bk+1 = bk; otherwise we set ak+1 = ak and bk+1 = ck.
 This process is repeated until the root is approximated sufficiently well.
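The update rule above can be sketched directly (the test function and bracket are illustrative choices):

```python
def false_position(f, a, b, tol=1e-10, max_iter=200):
    """Regula falsi: bracket like bisection, but split at the secant-line root."""
    fa, fb = f(a), f(b)
    if fa * fb >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    c = a
    for _ in range(max_iter):
        c = b - fb * (b - a) / (fb - fa)   # root of the secant through the endpoints
        fc = f(c)
        if abs(fc) < tol:
            break
        if fa * fc > 0:        # f(a) and f(c) share a sign: move a
            a, fa = c, fc
        else:                  # otherwise move b
            b, fb = c, fc
    return c

root = false_position(lambda x: x**2 - 2, 1.0, 2.0)
```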
 If the initial end-points a0 and b0 are chosen such
that f(a0) and f(b0) are of opposite signs, then one
of the end-points will converge to a root of f.
 Asymptotically, the other end-point will remain
fixed for all subsequent iterations while the
converging endpoint becomes updated.
 As a result, unlike the bisection method, the width
of the bracket does not tend to zero.
 As a consequence, the linear approximation to f(x),
which is used to pick the false position, does not
improve in its quality.
 While it is a misunderstanding to think that the method of false position is a good method, it is equally a mistake to think that it is unsalvageable.
 The failure mode is easy to detect (the same end-point is retained twice in a row) and easily remedied by next picking a modified false position, such as

ck = (½ f(bk) ak − f(ak) bk) / (½ f(bk) − f(ak))

or

ck = (f(bk) ak − ½ f(ak) bk) / (f(bk) − ½ f(ak))

 down-weighting one of the endpoint values to force the next ck to occur on that side of the function.
 The factor of 2 above looks like a hack, but it
guarantees superlinear convergence (asymptotically,
the algorithm will perform two regular steps after
any modified step).
 There are other ways to pick the rescaling which
give even better superlinear convergence rates.
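The fix described above (often called the Illinois variant: when the same endpoint is kept twice in a row, its stored function value is halved) can be sketched as:

```python
def illinois(f, a, b, tol=1e-12, max_iter=100):
    """False position with the down-weighting fix for a stuck endpoint."""
    fa, fb = f(a), f(b)
    if fa * fb >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    side = 0                       # which endpoint was retained last (-1: a, +1: b)
    c = a
    for _ in range(max_iter):
        c = b - fb * (b - a) / (fb - fa)
        fc = f(c)
        if abs(fc) < tol:
            break
        if fa * fc > 0:            # replace a; endpoint b is retained
            a, fa = c, fc
            if side == 1:
                fb /= 2            # b retained twice in a row: halve f(b)
            side = 1
        else:                      # replace b; endpoint a is retained
            b, fb = c, fc
            if side == -1:
                fa /= 2
            side = -1
    return c

root = illinois(lambda x: x**2 - 2, 1.0, 2.0)
```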
 Example:
 Consider the equation:

0.25x^5 − 7x − 3 = 0

How many roots does the equation have?
What are the intervals that contain the roots?
Solve for the roots using the false-position method with error of less than 0.000001.
 Answer:

x ≈ −2.17769
x ≈ −0.42909
x ≈ 2.39690
MULLER’S METHOD
 Muller's method is an extension of the Secant method.
 The Secant method begins with two initial approximations x0 and x1 and determines the next approximation x2 as the intersection of the x-axis with the line through (x0, f(x0)) and (x1, f(x1)).
 Muller's method uses three initial approximations, x0, x1, and x2, and determines the next approximation x3 by considering the intersection of the x-axis with the parabola through (x0, f(x0)), (x1, f(x1)) and (x2, f(x2)).
 Graphically:
 The derivation of Muller's method begins by considering the quadratic polynomial

P(x) = a(x − x2)^2 + b(x − x2) + c

 that passes through (x0, f(x0)), (x1, f(x1)) and (x2, f(x2)).
 The constants a, b, and c can be determined from the conditions

f(x0) = a(x0 − x2)^2 + b(x0 − x2) + c
f(x1) = a(x1 − x2)^2 + b(x1 − x2) + c

 and

f(x2) = a·0^2 + b·0 + c = c

 to be

c = f(x2)
b = [(x0 − x2)^2 (f(x1) − c) − (x1 − x2)^2 (f(x0) − c)] / [(x0 − x2)(x1 − x2)(x0 − x1)]
a = [(x1 − x2)(f(x0) − c) − (x0 − x2)(f(x1) − c)] / [(x0 − x2)(x1 − x2)(x0 − x1)]
 To determine x3, a zero of P, we apply the quadratic formula to P(x) = 0.
 However, because of roundoff error problems caused by the subtraction of nearly equal numbers, we apply the formula

x3 = x2 − 2c / (b ± √(b^2 − 4ac))

 This formula gives two possibilities for x3, depending on the sign preceding the radical term.
 In Muller's method, the sign is chosen to agree with the sign of b.
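Putting the coefficients and the root-selection rule together gives a compact implementation. A sketch (the cubic used for the check is an illustrative choice; `cmath` lets the method follow complex roots as well):

```python
import cmath

def muller(f, x0, x1, x2, tol=1e-12, max_iter=100):
    """Muller's method: fit a parabola through three points, step to its root."""
    for _ in range(max_iter):
        c = f(x2)
        denom = (x0 - x2) * (x1 - x2) * (x0 - x1)
        b = ((x0 - x2)**2 * (f(x1) - c) - (x1 - x2)**2 * (f(x0) - c)) / denom
        a = ((x1 - x2) * (f(x0) - c) - (x0 - x2) * (f(x1) - c)) / denom
        rad = cmath.sqrt(b * b - 4 * a * c)
        # pick the sign agreeing with b, i.e. the larger-magnitude denominator
        den = b + rad if abs(b + rad) >= abs(b - rad) else b - rad
        x3 = x2 - 2 * c / den
        if abs(x3 - x2) < tol:
            return x3
        x0, x1, x2 = x1, x2, x3
    return x2

# Illustrative use: the real root of x**3 - x - 1 (about 1.324718)
root = muller(lambda x: x**3 - x - 1, 0.5, 1.0, 1.5)
```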
 Example:
 Find the zeroes of the polynomial

f(x) = 16x^4 − 40x^3 + 5x^2 + 20x + 6

 Answer: two real zeros, x ≈ 1.241677 and x ≈ 1.970446, and a complex-conjugate pair, x ≈ −0.356062 ± 0.162758i.
Resources
• Numerical Methods Using Matlab, 4th Edition,
2004 by John H. Mathews and Kurtis K. Fink
• Holistic Numerical Methods Institute by Autar
Kaw and Jai Paul.
• Numerical Methods for Engineers by Chapra
and Canale
