
Exercise # 7: Apply the modified false position method to y = x^2 - 2 and draw a complete flow diagram for this problem.

4.5 Newton-Raphson Method

One of the most widely used methods of solving equations is the Newton-Raphson method. The idea behind the method is this: starting from an initial estimate x_1 which is not too far from a root, we extrapolate along the tangent to its intersection with the x-axis and take that as the next approximation. This is continued until either the successive x-values are sufficiently close or the value of the function is sufficiently near zero.
Assume we have a preliminary estimate x_1 of a root of
    f(x) = 0,
and assume f(x) is differentiable in the region of interest.
The line through (x_2, 0) is the tangent to the curve at (x_1, f(x_1)). Thus from the geometry we have
    f'(x_1) = lim_{x_2 -> x_1} [f(x_2) - f(x_1)] / (x_2 - x_1),
so that, for x_2 near x_1,
    f(x_2) ≈ f(x_1) + f'(x_1)(x_2 - x_1),
and the value x_2 for which f(x_2) = 0 is
    x_2 = x_1 - f(x_1)/f'(x_1).

We continue the calculation scheme by computing
    x_3 = x_2 - f(x_2)/f'(x_2)
or, in more general terms,
    x_{k+1} = x_k - f(x_k)/f'(x_k).

This formula provides a method of going from one guess x_k to the next guess x_{k+1}. It may also be viewed as an application of Taylor's series, using only the first two terms.
Newton's method, when it works, is fine; but the method does not always converge: it may jump to another root or oscillate around the desired root. The three sketches show some of the troubles that can occur when the method is used carelessly. Thus, in practice, unless the local structure of the function is well understood, Newton's method is to be avoided.


If we cannot (or prefer not to) evaluate the derivative, we may replace f'(x_k) by the slope of the secant line through the two most recent points:
    x_{k+1} = x_k - f(x_k)/f'(x_k)
            ≈ x_k - f(x_k) / [(f(x_k) - f(x_{k-1})) / (x_k - x_{k-1})]
            = x_k - f(x_k)(x_k - x_{k-1}) / (f(x_k) - f(x_{k-1}))
            = [x_{k-1} f(x_k) - x_k f(x_{k-1})] / (f(x_k) - f(x_{k-1})).
This is the secant method; if, in addition, the two retained points are always chosen to bracket the root, it becomes the false-position method.
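The secant iteration can be sketched in a few lines of Python. This is a minimal sketch of our own (the function name, tolerance, and iteration cap are arbitrary choices, not from the text); it is applied to f(x) = 3x + sin x - e^x, the example used later in this section:

```python
import math

def secant(f, x0, x1, tol=1e-10, max_iter=50):
    """Secant method: Newton's iteration with the derivative replaced
    by the slope of the line through the last two iterates."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:                              # flat secant line: give up
            break
        # same formula as in the text: (x_{k-1} f_k - x_k f_{k-1}) / (f_k - f_{k-1})
        x2 = (x0 * f1 - x1 * f0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
    return x1

# Root of f(x) = 3x + sin(x) - e^x near 0.36
root = secant(lambda x: 3 * x + math.sin(x) - math.exp(x), 0.0, 1.0)
```

Note that only one new function evaluation is needed per step, since f(x_{k-1}) is carried over from the previous iteration.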
Newton's method also works for complex roots, if we give it a complex value for the starting value.
Newton's method is widely used because, at least in the near neighborhood of a root, it is more rapidly convergent than any of the methods discussed so far. The method is quadratically convergent: on each step it tends almost to double the number of accurate decimal places, so from two accurate figures we get almost four in one step, almost eight in the next, and so on. This is why, when it can be made to work, it is a good method. Offsetting this, however, is the need for two function evaluations at each step, f(x_n) and f'(x_n).
Example # 3: Find a formula for Newton's method for the function
    y = x e^x - 1.
Solution:
    y' = e^x + x e^x = (x + 1) e^x,
so
    x_{k+1} = x_k - (x_k e^{x_k} - 1) / ((x_k + 1) e^{x_k})
            = (x_k^2 + e^{-x_k}) / (x_k + 1).
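The simplified formula can be checked by iterating it numerically. The following is a small Python sketch of our own (the function name and step count are arbitrary); it should drive x e^x - 1 to zero:

```python
import math

def newton_example3(x, steps=8):
    """Iterate x_{k+1} = (x_k^2 + e^{-x_k}) / (x_k + 1),
    the simplified Newton formula for y = x e^x - 1 derived above."""
    for _ in range(steps):
        x = (x * x + math.exp(-x)) / (x + 1.0)
    return x

root = newton_example3(1.0)   # converges to the root of x e^x = 1
```

Because the iteration is exactly Newton's method in disguise, a handful of steps suffices for full accuracy.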
Example #: Apply Newton's method to
    f(x) = 3x + sin x - e^x = 0.

If we begin with x_1 = 0.0, we have:

    x_2 = x_1 - f(x_1)/f'(x_1) = 0.0 - (-1.0)/3.0 = 0.33333;
    x_3 = x_2 - f(x_2)/f'(x_2) = 0.33333 - (-0.068418)/2.54934 = 0.36017;
    x_4 = x_3 - f(x_3)/f'(x_3) = 0.36017 - (-6.279 × 10^{-4})/2.50226 = 0.3604217.

After three iterations, the root is correct to seven significant digits. The rule of doubling the number of correct digits would indicate six correct digits in the last result, because three are repeated from the previous one.
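The iteration just tabulated can be reproduced with a few lines of Python (a sketch of our own; function and variable names are arbitrary):

```python
import math

def newton(f, fprime, x, tol=1e-10, max_iter=20):
    """Newton-Raphson iteration x_{k+1} = x_k - f(x_k)/f'(x_k)."""
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

f  = lambda x: 3 * x + math.sin(x) - math.exp(x)
fp = lambda x: 3 + math.cos(x) - math.exp(x)   # f'(x)
root = newton(f, fp, 0.0)                       # starts from x_1 = 0.0
```

Starting from 0.0, the first iterates are 0.33333, 0.36017, 0.3604217, matching the table above.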

4.6 Muller's Method

Most of the root-finding methods we have considered so far approximate the function in the neighborhood of the root by a straight line. Obviously, this is never exactly true; if the function really were linear, finding the root would take practically no effort.
Muller's method is based on approximating the function in the neighborhood of the root by a quadratic polynomial. This gives a much closer match to the actual curve.
A second-degree polynomial is made to fit three points near a root,
    (x_0, f(x_0)),    (x_1, f(x_1)),    (x_2, f(x_2)),
and the proper zero of this quadratic, found with the quadratic formula, is used as the improved estimate of the root. The process is then repeated using the three points nearest the root being evaluated.
The procedure for Muller's method is developed by writing a quadratic equation that fits through three points in the vicinity of the root, in the form
    P_2(v) = a v^2 + b v + c.
The development is simplified if we transform axes to pass through the middle point x_0, by letting v = x - x_0.
Let
    h_1 = x_1 - x_0,    h_2 = x_0 - x_2,
and write f_i for f(x_i). We determine the coefficients by evaluating P_2(v) at the three points:
    v = 0:     a(0)^2 + b(0) + c = f_0
    v = h_1:   a h_1^2 + b h_1 + c = f_1
    v = -h_2:  a h_2^2 - b h_2 + c = f_2

That is,
    c = f_0,
    a h_1^2 + b h_1 + c = f_1,
    a h_2^2 - b h_2 + c = f_2.
Multiplying the second equation by h_2 and the third by h_1 and adding eliminates b:
    a h_1^2 h_2 + b h_1 h_2 + c h_2 = f_1 h_2
    a h_2^2 h_1 - b h_2 h_1 + c h_1 = f_2 h_1
    a (h_1^2 h_2 + h_1 h_2^2) + c (h_1 + h_2) = f_1 h_2 + f_2 h_1,
so that, using c = f_0,
    a = [f_1 h_2 + f_2 h_1 - f_0 (h_2 + h_1)] / (h_1^2 h_2 + h_1 h_2^2).
Let γ = h_2/h_1; then
    a = [γ f_1 + f_2 - f_0 (γ + 1)] / (γ h_1^2 (1 + γ)).

Similarly, we can find
    b = (f_1 - f_0 - a h_1^2) / h_1,    c = f_0.

Now the improved estimate of the root is obtained from
    a v^2 + b v + c = 0,
whose roots are
    v = [-b ± √(b^2 - 4ac)] / (2a),
so that
    x - x_0 = [-b ± √(b^2 - 4ac)] / (2a)
    x = x_0 + [-b ± √(b^2 - 4ac)] / (2a).
Since, multiplying numerator and denominator by the conjugate,
    [-b ± √(b^2 - 4ac)] / (2a) = -2c / [b ± √(b^2 - 4ac)],
therefore
    x = x_0 - 2c / [b ± √(b^2 - 4ac)],
with the sign in the denominator taken to give the largest absolute value of the denominator (i.e., if b > 0, choose plus; if b < 0, choose minus; if b = 0, choose either).
Take this root of the quadratic as one of the set of three points for the next approximation, keeping the three points that are most closely spaced (i.e., if the root is to the right of x_0, take x_0, x_1, and the root; if to the left, take x_0, x_2, and the root). Always reset the subscripts so that x_0 is the middle of the three values.
Example #: Find a root between 0 and 1 of the transcendental function
    f(x) = 3x + sin x - e^x = 0.

Let
    x_0 = 0.5,    x_1 = 1.0,    x_2 = 0.0,
then
    h_1 = 0.5,    h_2 = 0.5,    γ = 1.0,
and
    f(x_0) = 0.330704,    f(x_1) = 1.123189,    f(x_2) = -1.0.
Then
    a = [(1.0)(1.123189) - 0.330704(2.0) + (-1)] / [1.0 (0.5)^2 (2.0)] = -1.07644,
    b = [1.123189 - 0.330704 - (-1.07644)(0.5)^2] / 0.5 = 2.12319,
    c = 0.330704,
and
    root = 0.5 - 2(0.330704) / [2.12319 + √((2.12319)^2 - 4(-1.07644)(0.330704))] = 0.354914.

For the next iteration, we have
    x_0 = 0.354914,    x_1 = 0.5,    x_2 = 0.0,
then
    h_1 = 0.145086,    h_2 = 0.354914,    γ = 2.44623,
and
    f(x_0) = -0.0138066,    f(x_1) = 0.330704,    f(x_2) = -1.0.
Then
    a = [(2.44623)(0.330704) + (0.0138066)(3.44623) + (-1)] / [2.44623 (0.145086)^2 (3.44623)] = -0.808314,
    b = [0.330704 + 0.0138066 - (-0.808314)(0.145086)^2] / 0.145086 = 2.49180,
    c = -0.0138066,
and
    root = 0.354914 - 2(-0.0138066) / [2.49180 + √((2.49180)^2 - 4(-0.808314)(-0.0138066))] = 0.360465.
After the third iteration we get root = 0.3604217, which is identical to the result from Newton's method after three iterations. Muller's method does not require the evaluation of derivatives, however, and needs only one function evaluation per iteration after obtaining the starting values.
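The procedure above can be collected into a short Python sketch of our own (names and tolerances are arbitrary). For clarity it re-evaluates f at all three points each pass, whereas the text's bookkeeping needs only one new evaluation per iteration:

```python
import math

def muller(f, x2, x0, x1, tol=1e-10, max_iter=50):
    """Muller's method as developed in the text: fit P2(v) = a v^2 + b v + c
    through (x2, f2), (x0, f0), (x1, f1) with v = x - x0, then take
    x = x0 - 2c / (b ± sqrt(b^2 - 4ac)) as the improved estimate."""
    for _ in range(max_iter):
        f0, f1, f2 = f(x0), f(x1), f(x2)
        h1, h2 = x1 - x0, x0 - x2
        g = h2 / h1                                   # gamma in the text
        a = (g * f1 + f2 - f0 * (g + 1.0)) / (g * h1 * h1 * (1.0 + g))
        b = (f1 - f0 - a * h1 * h1) / h1
        c = f0
        disc = math.sqrt(b * b - 4.0 * a * c)         # assumes a real root
        denom = b + disc if b >= 0.0 else b - disc    # larger |denominator|
        root = x0 - 2.0 * c / denom
        if abs(root - x0) < tol:
            return root
        # keep the three points nearest the root, with x0 in the middle
        if root > x0:
            x2, x0 = x0, root
        else:
            x1, x0 = x0, root
    return x0

f = lambda x: 3.0 * x + math.sin(x) - math.exp(x)
root = muller(f, 0.0, 0.5, 1.0)   # the worked example's starting points
```

With the starting points of the worked example, the first two estimates are 0.354914 and 0.360465, as computed above.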

4.7 Fixed-Point Iteration

Fixed-point iteration is a possible method for obtaining a root of the equation
    f(x) = 0.    (4.1)

In this method, we rearrange equation (4.1) into the form
    x = g(x)    (4.2)
so that any solution of (4.2), i.e., any fixed point of g(x), is a solution of (4.1). This may be accomplished in many ways. If, e.g.,
    f(x) = x^2 - x - 2,    (4.3)
then among the possible choices for g(x) are the following:
1. g(x) = x^2 - 2
2. g(x) = √(x + 2)
3. g(x) = 1 + 2/x
4. g(x) = x - (x^2 - x - 2)/m, for some non-zero constant m.

Each such g(x) is called an iteration function for solving (4.1) (with f(x) given by (4.3)). Once an iteration function g(x) for solving (4.1) is chosen, one carries out the following algorithm.
Algorithm
Given an iteration function g(x) and a starting point x_0,
iterate for n = 0, 1, 2, ... until satisfied:
    calculate x_{n+1} = g(x_n).
For this algorithm to be useful, we must prove:
1. For the given starting point x_0, we can calculate successively x_1, x_2, x_3, ...
2. The sequence x_1, x_2, x_3, ... converges to some point ξ.
3. The limit ξ is a fixed point of g(x), i.e., ξ = g(ξ).


Some iterative formulations may not converge, and if the equation has multiple roots, different formulations may converge to different roots. Thus the definition of the conditions for convergence is of fundamental importance. Let the iterates x_1, x_2, x_3, ... of the fixed-point iteration all lie in an interval I on which |g'(x)| ≤ k < 1, and let e_n be the error in the nth iterate,
    e_n = x_n - ξ,    n = 0, 1, 2, 3, ...
Then, since ξ = g(ξ) and x_n = g(x_{n-1}), we have
    e_n = x_n - ξ = g(x_{n-1}) - g(ξ) = g'(ζ_n) e_{n-1}
for some ζ_n between ξ and x_{n-1}, by the mean value theorem for derivatives. Hence
    |e_n| ≤ k |e_{n-1}|.
It follows by induction on n that
    |e_n| ≤ k |e_{n-1}| ≤ k^2 |e_{n-2}| ≤ k^3 |e_{n-3}| ≤ ... ≤ k^n |e_0|.
Since 0 ≤ k < 1,
    lim_{n→∞} k^n = 0,    so    lim_{n→∞} |e_n| ≤ lim_{n→∞} k^n |e_0| = 0,
regardless of the initial error e_0. But this says that x_1, x_2, x_3, ... converges to ξ.


It also proves that ξ is the only fixed point of g(x) in I. For if η is also a fixed point of g(x) in I, then taking x_0 = η we have x_1 = g(x_0) = η, hence
    |e_0| = |η - ξ| = |e_1| ≤ k |e_0|.
Since k < 1, this implies |e_0| = 0, i.e., η = ξ.
Corollary. If g(x) is continuously differentiable in some open interval containing the fixed point ξ, and if |g'(ξ)| < 1, then there exists an ε > 0 such that the fixed-point iteration with g(x) converges whenever |x_0 - ξ| ≤ ε.
Example # 4: For the equation
    x^2 - x e^x + c = 0,
choices for g(x) include:
1. g(x) = e^x - c/x
2. g(x) = (x^2 + c) / e^x
3. g(x) = x^2 + x(1 - e^x) + c,    etc.

Example #: Consider the example
    f(x) = x^2 - 2x - 3 = 0.
The roots of this equation are x = -1 and x = 3. Suppose we rearrange it to give the equivalent form
    x = g_1(x) = √(2x + 3),
or
    x_{k+1} = √(2x_k + 3).


If we start with x_0 = 4 and iterate with the fixed-point algorithm, successive values of x are
    x_0 = 4,
    x_1 = √11 = 3.31662,
    x_2 = √9.63325 = 3.10375,
    x_3 = √9.20750 = 3.03439,
    x_4 = √9.06877 = 3.01144,
and it appears that the values are converging on the root at x = 3.
Other Rearrangements
Another rearrangement of f(x) is
    x = g_2(x) = 3 / (x - 2).

Let us start the iteration again with x_0 = 4. Successive values then are
    x_0 = 4,
    x_1 = 1.5,
    x_2 = -6,
    x_3 = -0.375,
    x_4 = -1.263158,
    x_5 = -0.919355,
    x_6 = -1.02762,
    x_7 = -0.990876,
    x_8 = -1.00305,
and it seems that we now converge to the other root, at x = -1. We note that the convergence is oscillatory rather than monotonic.
Consider a third rearrangement:
    x = g_3(x) = (x^2 - 3) / 2.
Starting again with x_0 = 4, we get
    x_0 = 4,
    x_1 = 6.5,
    x_2 = 19.625,
    x_3 = 191.070,
and the iterates are obviously diverging.


This difference in three arrangements is because, the fixed point of x = g(x) is
the intersection of the line y = x and the curve y = g(x) plotted against x.
insert fig
Observe that with the successive iterates, the points on the curve converge to
a fixed point or else diverge. It appears that the different behaviors depend on
whether the slope of the curve is greater, less, or of opposite sign to the slope of the
line (which equal +1).
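The behavior of the three rearrangements can be reproduced with a few lines of Python (a sketch of our own; the names g1, g2, g3 match the text, and the slope comments restate the convergence condition |g'(ξ)| < 1):

```python
def fixed_point(g, x, steps):
    """Repeatedly apply x <- g(x), returning the list of iterates."""
    xs = [x]
    for _ in range(steps):
        x = g(x)
        xs.append(x)
    return xs

g1 = lambda x: (2.0 * x + 3.0) ** 0.5   # slope at x = 3 is 1/3: converges
g2 = lambda x: 3.0 / (x - 2.0)          # slope at x = -1 is -1/3: converges, oscillating
g3 = lambda x: (x * x - 3.0) / 2.0      # slope at x = 3 is 3: diverges

seq1 = fixed_point(g1, 4.0, 20)   # approaches the root x = 3
seq2 = fixed_point(g2, 4.0, 40)   # approaches x = -1, alternating sides
seq3 = fixed_point(g3, 4.0, 5)    # blows up
```

Printing the first few entries of each sequence reproduces the tables above: seq1 begins 4, 3.31662, 3.10375, ...; seq2 begins 4, 1.5, -6, -0.375, ...; and seq3 begins 4, 6.5, 19.625, 191.070, ...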

