4.5 Newton-Raphson Method
Successive approximations are generated by

x_2 = x_1 - f(x_1)/f'(x_1),
x_3 = x_2 - f(x_2)/f'(x_2),

and, in general,

x_{k+1} = x_k - f(x_k)/f'(x_k).
This formula provides a method of going from one guess x_k to the next guess x_{k+1}. It may also be viewed as an application of Taylor's series, using only the first two terms.

When it works, Newton's method works well, but it does not always converge; it may jump to another root or oscillate around the desired root. The three sketches show some of the troubles that can occur when the method is used carelessly. Thus, in practice, unless the local structure of the function is well understood, Newton's method is to be avoided.
Replacing the derivative in Newton's formula by the difference quotient through the two most recent points gives

x_{k+1} = x_k - f(x_k)/f'(x_k)
        ≈ x_k - f(x_k) / [(f(x_k) - f(x_{k-1})) / (x_k - x_{k-1})]
        = x_k - f(x_k)(x_k - x_{k-1}) / (f(x_k) - f(x_{k-1}))
        = (x_{k-1} f(x_k) - x_k f(x_{k-1})) / (f(x_k) - f(x_{k-1})).

This is the iteration formula of the secant method and of the false-position method (which differ only in how the two retained points are chosen).
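The iteration above can be sketched as a short routine (a minimal sketch; the function and variable names are my own, and the test function is the one used in the examples later in this chapter):

```python
import math

def secant(f, x0, x1, tol=1e-10, max_iter=50):
    """Root finding by the secant method: the derivative in Newton's
    formula is replaced by the slope through the two most recent points."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:                      # slope would be zero: give up
            break
        # x2 = (x0*f1 - x1*f0)/(f1 - f0), written in update form
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
    return x1

root = secant(lambda x: 3*x + math.sin(x) - math.exp(x), 0.0, 1.0)
```

Note that only one new function evaluation is needed per step, since f(x_{k-1}) is retained from the previous step.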
Newton's method also works for complex roots, provided we give it a complex value for the starting value.
Newton's method is widely used because, at least in the near neighborhood of a root, it is more rapidly convergent than any of the methods discussed so far. The method is quadratically convergent: on each step it tends almost to double the number of accurate decimal places; thus, from two accurate figures we get almost four accurate figures in one step, almost eight in the next step, and so on. This is why, when it can be made to work, it is a good method. Offsetting this, however, is the need for two function evaluations at each step, f(x_n) and f'(x_n).
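The doubling of accurate figures can be observed numerically. A minimal sketch (names are my own), using f(x) = x^2 - 2, whose positive root is √2:

```python
import math

def newton(f, fprime, x, n_steps):
    """Newton-Raphson: x_{k+1} = x_k - f(x_k)/f'(x_k)."""
    iterates = [x]
    for _ in range(n_steps):
        x = x - f(x) / fprime(x)
        iterates.append(x)
    return iterates

# f(x) = x^2 - 2, f'(x) = 2x; the positive root is sqrt(2)
xs = newton(lambda x: x*x - 2, lambda x: 2*x, 2.0, 5)
errors = [abs(x - math.sqrt(2)) for x in xs]
# quadratic convergence: each error is roughly proportional to the
# square of the previous one, so correct digits roughly double per step
```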
Example # 3: Find a formula for Newton's method for the function

y = x e^x - 1.

Solution:

y' = e^x + x e^x = (x + 1) e^x,

so the iteration is

x_{k+1} = x_k - (x_k e^{x_k} - 1) / ((x_k + 1) e^{x_k})
        = (x_k^2 e^{x_k} + 1) / ((x_k + 1) e^{x_k})
        = (x_k^2 + e^{-x_k}) / (x_k + 1).
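As a quick numerical check, the simplified iteration can be run directly (a sketch; the starting value 1.0 is my own arbitrary choice):

```python
import math

# simplified Newton iteration derived above for y = x*e^x - 1:
#   x_{k+1} = (x_k^2 + e^{-x_k}) / (x_k + 1)
x = 1.0
for _ in range(6):
    x = (x*x + math.exp(-x)) / (x + 1)
# x now satisfies x*e^x = 1 (root near 0.567143)
```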
Example #: Apply Newton's method to

f(x) = 3x + sin x - e^x = 0.

Starting from x_1 = 0.0, with f'(x) = 3 + cos x - e^x, the iterates are

x_2 = x_1 - f(x_1)/f'(x_1) = 0.0 - (-1.0)/3.0 = 0.3333333,
x_3 = 0.3601707,
x_4 = 0.3604217.

After three iterations, the root is correct to seven significant digits. The rule of doubling the number of correct digits would indicate only six correct digits in the last result, because three digits were correct in the previous one.
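The iterates of this example are easy to reproduce (a sketch; variable names are my own):

```python
import math

f = lambda x: 3*x + math.sin(x) - math.exp(x)
fp = lambda x: 3 + math.cos(x) - math.exp(x)     # f'(x)

x = 0.0                        # starting guess x_1 = 0.0
iterates = []
for _ in range(3):
    x = x - f(x) / fp(x)
    iterates.append(x)
# the three iterates agree with the hand computation above
```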
4.6 Muller's Method
Most of the root-finding methods that we have considered so far approximate the function in the neighborhood of the root by a straight line. Obviously, this is never exactly true; if the function really were linear, finding the root would take practically no effort. Muller's method is instead based on approximating the function in the neighborhood of the root by a quadratic polynomial. This gives a much closer match to the actual curve.
A second-degree polynomial is made to fit three points near a root,

(x_0, f(x_0)), (x_1, f(x_1)), (x_2, f(x_2)),

and the proper zero of this quadratic, found with the quadratic formula, is used as the improved estimate of the root. The process is then repeated using the three points nearest the root being evaluated.
The procedure for Muller's method is developed by writing a quadratic equation that fits through three points in the vicinity of the root, in the form

P_2(v) = a v^2 + b v + c.

The development is simplified if we transform axes to pass through the middle point, by letting v = x - x_0. Let

h_1 = x_1 - x_0,    h_2 = x_0 - x_2.

We evaluate the coefficients by evaluating P_2(v) at the three points:

v = 0:      a(0)^2 + b(0) + c = f_0,
v = h_1:    a h_1^2 + b h_1 + c = f_1,
v = -h_2:   a h_2^2 - b h_2 + c = f_2,

so that

c = f_0,
a h_1^2 + b h_1 + c = f_1,
a h_2^2 - b h_2 + c = f_2.
Let γ = h_2/h_1. Solving the last two equations for a and b then gives

a = (γ f_1 - f_0 (1 + γ) + f_2) / (γ h_1^2 (1 + γ)),
b = (f_1 - f_0 - a h_1^2) / h_1,
c = f_0.
Now the improved estimate is a zero of the quadratic:

a v^2 + b v + c = 0,
v = (-b ± √(b^2 - 4ac)) / (2a),

and, since v = x - x_0,

x - x_0 = (-b ± √(b^2 - 4ac)) / (2a),
x = x_0 + (-b ± √(b^2 - 4ac)) / (2a).

Since

(-b ± √(b^2 - 4ac)) / (2a)
  = [(-b ± √(b^2 - 4ac))(-b ∓ √(b^2 - 4ac))] / [2a(-b ∓ √(b^2 - 4ac))]
  = (b^2 - (b^2 - 4ac)) / [2a(-b ∓ √(b^2 - 4ac))]
  = 2c / (-b ∓ √(b^2 - 4ac))
  = -2c / (b ± √(b^2 - 4ac)),

we have, therefore,

x = x_0 - 2c / (b ± √(b^2 - 4ac)),

with the sign in the denominator taken to give the largest absolute value of the denominator (i.e., if b > 0, choose plus; if b < 0, choose minus; if b = 0, choose either).
The root of the quadratic is then taken as one member of the set of three points for the next approximation, using the three points that are most closely spaced (i.e., if the root is to the right of x_0, take x_0, x_1, and the root; if to the left, take x_0, x_2, and the root). Always reset the subscripts so that x_0 is the middle of the three values.
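The whole procedure can be sketched in code (function and variable names are my own; this sketch assumes the discriminant stays nonnegative, i.e., a real root):

```python
import math

def muller(f, x2, x0, x1, tol=1e-9, max_iter=50):
    """Muller's method: fit a quadratic through (x2, x0, x1), with x0 in
    the middle, and take the zero of the quadratic nearest x0."""
    root = x0
    for _ in range(max_iter):
        f0, f1, f2 = f(x0), f(x1), f(x2)
        h1, h2 = x1 - x0, x0 - x2
        g = h2 / h1                                    # gamma = h2/h1
        a = (g*f1 - f0*(1 + g) + f2) / (g * h1**2 * (1 + g))
        b = (f1 - f0 - a * h1**2) / h1
        c = f0
        disc = math.sqrt(b*b - 4*a*c)                  # assumes a real root
        den = b + disc if b >= 0 else b - disc         # larger |denominator|
        root = x0 - 2*c / den
        if abs(root - x0) < tol:
            return root
        # keep the three points nearest the new estimate, x0 in the middle
        if root > x0:
            x2, x0 = x0, root
        else:
            x0, x1 = root, x0
    return root

root = muller(lambda x: 3*x + math.sin(x) - math.exp(x), 0.0, 0.5, 1.0)
```

With the starting points 0.0, 0.5, 1.0 this reproduces the iteration of the worked example that follows.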
Example #: Find a root between 0 and 1 of the transcendental function

f(x) = 3x + sin x - e^x = 0.

Let

x_0 = 0.5,  x_1 = 1.0,  x_2 = 0.0;

then

h_1 = 0.5,  h_2 = 0.5,  γ = 1.0,

and

f(x_0) = 0.330704,  f(x_1) = 1.123189,  f(x_2) = -1.0.

From these values, a = -1.07644, b = 2.12319, c = 0.330704, and

root = 0.5 - 2(0.330704) / (2.12319 + √((2.12319)^2 - 4(-1.07644)(0.330704))) = 0.354914.

For the next iteration we take

x_0 = 0.354914,  x_1 = 0.5,  x_2 = 0.0,
h_1 = 0.145086,  h_2 = 0.354914,  γ = 2.44623,
f(x_0) = -0.0138066,  f(x_1) = 0.330704,  f(x_2) = -1.0,

which give a = -0.808314, b = 2.49180, c = -0.0138066, and

root = 0.354914 - 2(-0.0138066) / (2.49180 + √((2.49180)^2 - 4(-0.808314)(-0.0138066))) = 0.360465.

After the third iteration we get root = 0.3604217, which is identical to the result from Newton's method after three iterations; but Muller's method does not require the evaluation of derivatives and needs only one function evaluation per iteration after the starting values are obtained.
4.7 Fixed-Point Iteration
Fixed-point iteration is a possible method for obtaining a root of the equation

f(x) = 0.    (4.1)

The equation is first rewritten in an equivalent form

x = g(x),    (4.2)

so that any solution of (4.2), i.e., any fixed point of g(x), is a solution of (4.1). This may be accomplished in many ways. If, e.g.,
f(x) = x^2 - x - 2,    (4.3)

then among the possible choices for g(x) are the following:

1. g(x) = x^2 - 2
2. g(x) = √(x + 2)
3. g(x) = 1 + 2/x
4. g(x) = x - (x^2 - x - 2)/m, for some nonzero constant m

Each such g(x) is called an iteration function for solving (4.1) (with f(x) given by (4.3)). Once an iteration function g(x) for solving (4.1) is chosen, one carries out the following algorithm.
Algorithm
Given an iteration function g(x) and a starting point x_0, iterate for n = 0, 1, 2, ..., until satisfied:
Calculate x_{n+1} = g(x_n).
For this algorithm to be useful, we must prove:
1. For the given starting point x_0, we can calculate successively x_1, x_2, x_3, ...
2. The sequence x_1, x_2, x_3, ... converges to some point ξ, which is a fixed point of g(x).
Suppose that |g'(x)| ≤ k < 1 on an interval about the fixed point ξ containing the iterates, and let e_n = x_n - ξ denote the error of the nth iterate. Then

e_n = x_n - ξ = g(x_{n-1}) - g(ξ) = g'(η)(x_{n-1} - ξ)

for some η between ξ and x_{n-1}, by the mean value theorem for derivatives. Hence

|e_n| ≤ k |e_{n-1}|.

It follows by induction on n that

|e_n| ≤ k |e_{n-1}| ≤ k^2 |e_{n-2}| ≤ k^3 |e_{n-3}| ≤ ... ≤ k^n |e_0|.

Since 0 ≤ k < 1,

lim_{n→∞} k^n = 0,

and therefore

lim_{n→∞} |e_n| ≤ lim_{n→∞} k^n |e_0| = 0,

so the iterates converge to ξ.
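The algorithm itself is a one-line loop. A minimal sketch (names are my own), demonstrated with iteration function 2 from the list above, g(x) = √(x + 2), for which |g'| < 1 near the root x = 2 of (4.3):

```python
import math

def fixed_point(g, x, n_iter=25):
    """The algorithm above: repeatedly apply x_{n+1} = g(x_n)."""
    for _ in range(n_iter):
        x = g(x)
    return x

# g(x) = sqrt(x + 2) for f(x) = x^2 - x - 2; since |g'(2)| = 1/4 < 1,
# the error shrinks at every step, as the argument above shows
root = fixed_point(lambda x: math.sqrt(x + 2), 4.0)
```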
As an example, consider f(x) = x^2 - 2x - 3 = 0, whose roots are x = 3 and x = -1. One rearrangement is

x = g_1(x) = √(2x + 3),

or

x_{k+1} = √(2x_k + 3).
If we start with x_0 = 4 and iterate with the fixed-point algorithm, successive values of x are

x_0 = 4,
x_1 = √11 = 3.31662,
x_2 = √9.63325 = 3.10375,
x_3 = √9.20750 = 3.03439,
x_4 = √9.06877 = 3.01144,

and it appears that the values are converging on the root at x = 3.
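The rate of this convergence is governed by g_1'(x) = 1/√(2x + 3), so g_1'(3) = 1/3: each error should be roughly one third of the previous one. A quick check (a sketch; names are my own):

```python
import math

g1 = lambda x: math.sqrt(2*x + 3)

x = 4.0
errors = []
for _ in range(5):
    x = g1(x)
    errors.append(abs(x - 3.0))

# successive error ratios should approach g1'(3) = 1/3
ratios = [errors[i + 1] / errors[i] for i in range(len(errors) - 1)]
```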
Other Rearrangements
Another rearrangement of f(x) is

x = g_2(x) = 3/(x - 2).
Let us start the iteration again with x_0 = 4. Successive values then are

x_0 = 4,
x_1 = 1.5,
x_2 = -6,
x_3 = -0.375,
x_4 = -1.263158,
x_5 = -0.919355,
x_6 = -1.02762,
x_7 = -0.990876,
x_8 = -1.00305,

and it seems that we now converge to the other root, at x = -1. We note that the convergence is oscillatory rather than monotonic.
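The oscillation is explained by the sign of the derivative: g_2'(x) = -3/(x - 2)^2, so g_2'(-1) = -1/3, which is negative (hence the alternation) but less than 1 in magnitude (hence the convergence). A quick check (a sketch; names are my own):

```python
g2 = lambda x: 3 / (x - 2)

x = 4.0
errors = []                      # signed errors x_n - (-1)
for _ in range(8):
    x = g2(x)
    errors.append(x - (-1.0))

# g2'(-1) = -1/3: consecutive errors alternate in sign while shrinking
# (after an early jump) toward the root x = -1
alternating = all(errors[i] * errors[i + 1] < 0 for i in range(7))
```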
Consider a third rearrangement:

x = g_3(x) = (x^2 - 3)/2.

Starting again with x_0 = 4, we get

x_0 = 4,
x_1 = 6.5,
x_2 = 19.625,
x_3 = 191.070,

and the successive values are now diverging; this rearrangement fails to converge to either root.