
Numerical Methods

Roots of Nonlinear Equations

Find the solutions of f(x) = 0, where the function f is given.


The solutions (values of x) are known as the roots of the equation f(x) = 0, or the zeroes of the function f(x).

y = f(x)
A function contains three elements: an input value x, an output value y, and the rule f for computing y. The function is said to be given if the rule f is specified. In numerical computing, the rule is invariably a computer algorithm. As long as the algorithm produces an output y for each input x, it qualifies as a function.

Roots or Zeroes
May be real or complex. Complex roots are seldom computed, except in the case of polynomials, where they are most significant (as in damped vibrations). The approximate locations of the roots are best determined by plotting the function.

Extraction of Roots
All methods of finding roots are iterative procedures that require a starting point, i.e., an estimate of the root. This estimate can be crucial; a bad starting value may fail to converge, or it may converge to the wrong root. It is highly advisable to go a step further and bracket the root (determine its lower and upper bounds) before passing the problem to a root-finding algorithm.

Stopping Criteria
Compare the approximate relative error εa against the required tolerance εs:

εa = |(xpresent - xprevious) / xpresent| × 100%

εs = (0.5 × 10^(2-n)) %

where
xpresent = estimated root at the present iteration
xprevious = estimated root at the previous iteration
n = number of significant figures

Iteration stops once εa < εs.

Stopping Criteria (simplified)


For simplicity, one may instead stop iterating once the current estimate satisfies the condition

|f(xpresent)| ≤ 0.00005
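To make the two criteria concrete, here is a minimal Python sketch; the helper names (approx_rel_error, sig_fig_tolerance) are illustrative and not part of the slides:

```python
def approx_rel_error(x_present, x_previous):
    # epsilon_a: approximate relative error between successive iterates, in percent
    return abs((x_present - x_previous) / x_present) * 100.0

def sig_fig_tolerance(n):
    # epsilon_s: tolerance that guarantees roughly n significant figures
    return 0.5 * 10.0 ** (2 - n)

# Iterate while approx_rel_error(x_present, x_previous) >= sig_fig_tolerance(n),
# or, with the simplified criterion, while abs(f(x_present)) > 0.00005.
```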

Methods for Roots of Nonlinear Equations


Direct or Incremental Search Method
Interval-Halving (Bisection) Method
Method of False Position
Newton-Raphson 1st Method
Newton-Raphson 2nd Method
Secant Method
Fixed-Point Iteration
Bairstow's Method

Direct or Incremental Search Method


If f(x1) and f(x2) have opposite signs, then there is at least one root in the interval x1 ≤ xroot ≤ x2. If the interval is small enough, it is likely to contain a single root. The zeroes of f(x) can therefore be detected by evaluating the function at intervals of Δx and looking for a change in sign.

Direct or Incremental Search Method


Algorithm:
1. Estimate the initial interval (xi, xi+1) that will bound the root (f(xi) and f(xi+1) should be opposite in sign), where xi+1 = xi + Δx.
2. Evaluate f(xi) and f(xi+1).
3. Check the product f(xi)·f(xi+1):
if (+): xi+1 becomes the new lower bound xi; then generate a new xi+1 using the same Δx.
if (-): estimate a smaller Δx and generate a new xi+1.
4. Repeat steps 2 and 3 until the stopping criterion is satisfied.
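A minimal Python sketch of this search, using the test function from the example that follows; the function names and the simplified stopping test are illustrative choices, not prescribed by the slides:

```python
import math

def f(x):
    # test function used in the example: f(x) = e^(-x) - cos(x)
    return math.exp(-x) - math.cos(x)

def incremental_search(f, xi, dx, tol=0.00005, max_iter=100):
    """Advance the lower bound while f keeps its sign; halve dx on a sign change."""
    for _ in range(max_iter):
        x_next = xi + dx
        if f(xi) * f(x_next) > 0:      # (+): same sign, xi+1 becomes the new lower bound
            xi = x_next
        else:                          # (-): sign change, keep xi and shrink the step
            dx /= 2.0
        if abs(f(xi)) <= tol:          # simplified stopping criterion
            return xi
    return xi

print(incremental_search(f, 1.0, 0.5))   # approaches the root near x = 1.2927
```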

Example
Test function: f(x) = e^(-x) - cos x

[Plot of f(x) over 0 ≤ x ≤ 3.5]

Initial estimates: xi = 1, xi+1 = 1.5 and Δx = 0.5

Simulation
Iteration   xi        Δx        xi+1      f(xi)      f(xi+1)    f(xi)·f(xi+1)
1           1         0.5       1.5       -0.17242    0.15239   (-)
2           1         0.25      1.25      -0.17242   -0.02882   (+)
3           1.25      0.25      1.5       -0.02882    0.15239   (-)
4           1.25      0.125     1.375     -0.02882    0.05829   (-)
5           1.25      0.0625    1.3125    -0.02882    0.01371   (-)
6           1.25      0.03125   1.28125   -0.02882   -0.00783   (+)
7           1.28125   0.03125   1.3125    -0.00783    0.01371   (-)
8           1.28125   0.01563   1.29688   -0.00783    0.00288   (-)
9           1.28125   0.00782   1.28907   -0.00783   -0.00249   (+)
10          1.28907   0.00782   1.29689   -0.00249    0.00289   (-)

Simulation (part 2)
Iteration   xi        Δx        xi+1      f(xi)      f(xi+1)    f(xi)·f(xi+1)
11          1.28907   0.00391   1.29298   -0.00249    0.00020   (-)
12          1.28907   0.00196   1.29103   -0.00249   -0.00114   (+)
13          1.29103   0.00196   1.29299   -0.00114    0.00020   (-)
14          1.29103   0.00098   1.29201   -0.00114   -0.00047   (+)
15          1.29201   0.00098   1.29299   -0.00047    0.00020   (-)
16          1.29201   0.00049   1.29250   -0.00047   -0.00013   (+)
17          1.29250   0.00049   1.29299   -0.00013    0.00020   (-)
18          1.29250   0.00025   1.29275   -0.00013    0.00004   (-)

Bisection Method of Bolzano


Uses the same principle as incremental search: if there is a root in the interval (x1, x2), then f(x1)·f(x2) < 0. To halve the interval, we compute f(x3), where x3 = ½(x1 + x2) is the midpoint of the interval. If f(x2)·f(x3) < 0, then the root must be in (x2, x3), and we record this by replacing the original bound x1 with x3. Otherwise the root lies in (x1, x3), in which case x2 is replaced by x3.

Bisection Method of Bolzano


Algorithm:
1. Estimate the initial interval (xi, xi+1) that will bound the root (f(xi) and f(xi+1) should be opposite in sign).
2. Compute xr = ½(xi + xi+1).
3. Evaluate f(xi), f(xi+1) and f(xr).
4. Check the product f(xi)·f(xr):
if (+): xr becomes the new lower bound xi; then generate a new xr using step 2.
if (-): xr becomes the new upper bound xi+1; then generate a new xr using step 2.
5. Repeat steps 3 and 4 until the stopping criterion is satisfied.
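A minimal Python sketch of the bisection algorithm above (the signature and stopping test are illustrative):

```python
import math

def bisection(f, xi, xi1, tol=0.00005, max_iter=100):
    """Halve the bracketing interval (xi, xi1); assumes f(xi) and f(xi1) differ in sign."""
    for _ in range(max_iter):
        xr = 0.5 * (xi + xi1)          # midpoint of the current bracket
        if abs(f(xr)) <= tol:          # simplified stopping criterion
            return xr
        if f(xi) * f(xr) > 0:          # (+): xr becomes the new lower bound
            xi = xr
        else:                          # (-): xr becomes the new upper bound
            xi1 = xr
    return xr

f = lambda x: math.exp(-x) - math.cos(x)
print(bisection(f, 1.0, 1.5))          # ~1.2927, as in the simulation table
```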

Example
Test function: f(x) = e^(-x) - cos x

[Plot of f(x) over 0 ≤ x ≤ 3.5]

Initial estimates: xi = 1, xi+1 = 1.5 and xr = 1.25

Simulation
Iteration   xi        xi+1      xr        f(xi)      f(xi+1)    f(xr)      f(xi)·f(xr)
1           1         1.5       1.25      -0.17242    0.15239   -0.02882   (+)
2           1.25      1.5       1.375     -0.02882    0.15239    0.05829   (-)
3           1.25      1.375     1.3125    -0.02882    0.05829    0.01371   (-)
4           1.25      1.3125    1.28125   -0.02882    0.01371   -0.00783   (+)
5           1.28125   1.3125    1.29688   -0.00783    0.01371    0.00288   (-)
6           1.28125   1.29688   1.28907   -0.00783    0.00288   -0.00249   (+)
7           1.28907   1.29688   1.29298   -0.00249    0.00288    0.00020   (-)
8           1.28907   1.29298   1.29103   -0.00249    0.00020   -0.00114   (+)
9           1.29103   1.29298   1.29201   -0.00114    0.00020   -0.00047   (+)
10          1.29201   1.29298   1.29250   -0.00047    0.00020   -0.00013   (+)
11          1.29250   1.29298   1.29274   -0.00013    0.00020    0.00003   (-)

Method of False Position (Regula Falsi)


Similarly to the bisection method, the false position or regula falsi method starts with the initial solution interval [xi, xi+1] that is believed to contain the solution of f(x) = 0. Approximating the curve of f(x) on [xi, xi+1] by a straight line connecting the two points (xi, f(xi)) and (xi+1, f(xi+1)), it guesses that the solution may be the point at which the straight line crosses the x axis:

xr = (xi+1·f(xi) - xi·f(xi+1)) / (f(xi) - f(xi+1))

Algorithm:
1. Estimate the initial interval (xi, xi+1) that will bound the root.
2. Compute the new root estimate xr = (xi+1·f(xi) - xi·f(xi+1)) / (f(xi) - f(xi+1)).
3. Evaluate f(xi), f(xi+1) and f(xr).
4. Check the product f(xi)·f(xr):
if (+): xr becomes the new lower bound xi; then generate a new xr using step 2.
if (-): xr becomes the new upper bound xi+1; then generate a new xr using step 2.
5. Repeat steps 3 and 4 until the stopping criterion is satisfied.
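A minimal Python sketch of the false-position algorithm, following the same conventions as the bisection sketch (names and stopping test are illustrative):

```python
import math

def false_position(f, xi, xi1, tol=0.00005, max_iter=100):
    """Regula falsi: replace one bound with the chord's x intercept each iteration."""
    for _ in range(max_iter):
        # x intercept of the straight line through (xi, f(xi)) and (xi1, f(xi1))
        xr = (xi1 * f(xi) - xi * f(xi1)) / (f(xi) - f(xi1))
        if abs(f(xr)) <= tol:          # simplified stopping criterion
            return xr
        if f(xi) * f(xr) > 0:          # (+): xr becomes the new lower bound
            xi = xr
        else:                          # (-): xr becomes the new upper bound
            xi1 = xr
    return xr

f = lambda x: math.exp(-x) - math.cos(x)
print(false_position(f, 1.0, 1.5))     # ~1.29269, matching the simulation table
```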

Example
Test function: f(x) = e^(-x) - cos x

[Plot of f(x) over 0 ≤ x ≤ 3.5]

Initial estimates: xi = 1, xi+1 = 1.5 and xr = 1.265416357

Simulation
Iteration   xi        xi+1   xr        f(xi)      f(xi+1)    f(xr)      f(xi)·f(xr)
1           1         1.5    1.26542   -0.17242   0.15239    -0.01853   (+)
2           1.26542   1.5    1.29085   -0.01853   0.15239    -0.00127   (+)
3           1.29085   1.5    1.29257   -0.00127   0.15239    -0.00008   (+)
4           1.29257   1.5    1.29269   -0.00008   0.15239    -0.00001   (+)

Newton-Raphson 1st Method (Method of Tangents)


The best-known method of finding roots, for a good reason: it is simple and fast. The only drawback of the method is that it uses the derivative f'(x) of the function as well as the function f(x) itself. It is derived from the Taylor series expansion:

xi+1 = xi - f(xi) / f'(xi)

Newton-Raphson 1st Method (Method of Tangents)


Algorithm:
1. Estimate the initial root approximation xi.
2. Compute the new root estimate

xi+1 = xi - f(xi) / f'(xi)

This will be the next iteration's xi.
3. Evaluate f(xi) and f(xi+1).
4. Repeat steps 2 and 3 until the stopping criterion is satisfied.
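A minimal Python sketch of the Newton-Raphson iteration (function names are illustrative):

```python
import math

def newton_raphson(f, fprime, xi, tol=0.00005, max_iter=50):
    """Follow the tangent line at xi down to the x axis to get the next estimate."""
    for _ in range(max_iter):
        xi1 = xi - f(xi) / fprime(xi)  # next estimate from the tangent line
        if abs(f(xi1)) <= tol:         # simplified stopping criterion
            return xi1
        xi = xi1
    return xi

f      = lambda x: math.exp(-x) - math.cos(x)
fprime = lambda x: -math.exp(-x) + math.sin(x)
print(newton_raphson(f, fprime, 1.0))  # 1 -> 1.36408 -> 1.29442 -> 1.29270
```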

Example
Test function: f(x) = e^(-x) - cos x

[Plot of f(x) over 0 ≤ x ≤ 3.5]

Initial estimate: xi = 1, with f(x) = e^(-x) - cos x and f'(x) = -e^(-x) + sin x, giving xi+1 = 1.36407505.

Simulation
Iteration   xi        xi+1      f(xi)      f'(xi)     f(xi+1)
1           1         1.36408   -0.17242   0.47359    0.05036
2           1.36408   1.29442    0.05036   0.72309    0.00119
3           1.29442   1.29270    0.00119   0.68800    0.00000

Newton-Raphson 2nd Method


Calculus-based; derived from the 1st method and a truncated Taylor series expansion. The 1st- and 2nd-order derivatives are used to improve the approximation of the estimated root of f(x) = 0.

Newton-Raphson 2nd Method


Algorithm:
1. Estimate the initial root approximation xi.
2. Compute the new root estimate

xi+1 = xi - f(xi) / [ f'(xi) - f(xi)·f"(xi) / (2·f'(xi)) ]

This will be the next iteration's xi.
3. Evaluate f(xi) and f(xi+1).
4. Repeat steps 2 and 3 until the stopping criterion is satisfied.
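A minimal Python sketch of this second-order iteration, assuming the update formula as written in step 2 above (which reproduces the values in the simulation table); names are illustrative:

```python
import math

def newton_raphson_2(f, f1, f2, xi, tol=0.00005, max_iter=50):
    """Second-order update: xi+1 = xi - f / (f' - f*f'' / (2*f'))."""
    for _ in range(max_iter):
        denom = f1(xi) - f(xi) * f2(xi) / (2.0 * f1(xi))
        xi1 = xi - f(xi) / denom
        if abs(f(xi1)) <= tol:         # simplified stopping criterion
            return xi1
        xi = xi1
    return xi

f  = lambda x: math.exp(-x) - math.cos(x)   # f
f1 = lambda x: -math.exp(-x) + math.sin(x)  # f'
f2 = lambda x: math.exp(-x) + math.cos(x)   # f''
print(newton_raphson_2(f, f1, f2, 1.0))     # 1 -> 1.26987 -> 1.29269
```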

Example
Test function: f(x) = e^(-x) - cos x

[Plot of f(x) over 0 ≤ x ≤ 3.5]

Initial estimate: xi = 1, with f(x) = e^(-x) - cos x, f'(x) = -e^(-x) + sin x and f"(x) = e^(-x) + cos x, giving xi+1 = 1.26987 (as in the simulation table).

Simulation
Iteration   xi        xi+1      f(xi)      f'(xi)     f"(xi)     f(xi+1)
1           1         1.26987   -0.17242   0.47359    0.90818    -0.01554
2           1.26987   1.29269   -0.01554   0.67419    0.57728     0.00000

Secant Method
The derivative required by the Newton-Raphson method may involve nonelementary forms (integrals, derivatives, etc.), so it is desirable to have an algorithm that converges just as fast yet involves only evaluations of f(x) and not of f'(x). The secant method is similar to regula falsi in that it uses two initial points:

xi+1 = xi - f(xi)·(xi - xi-1) / (f(xi) - f(xi-1))

Secant Method
Algorithm:
1. Estimate the two initial points xi and xi-1.
2. Evaluate f(xi) and f(xi-1).
3. Compute the new root estimate

xi+1 = xi - f(xi)·(xi - xi-1) / (f(xi) - f(xi-1))

This will be the next iteration's xi; the current xi becomes the next iteration's xi-1.
4. Repeat steps 2 and 3 until the stopping criterion is satisfied.
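A minimal Python sketch of the secant iteration (names are illustrative):

```python
import math

def secant(f, x_prev, xi, tol=0.00005, max_iter=50):
    """Like Newton-Raphson, but the derivative is replaced by a finite difference."""
    for _ in range(max_iter):
        xi1 = xi - f(xi) * (xi - x_prev) / (f(xi) - f(x_prev))
        if abs(f(xi1)) <= tol:         # simplified stopping criterion
            return xi1
        x_prev, xi = xi, xi1           # current xi becomes the next iteration's xi-1
    return xi

f = lambda x: math.exp(-x) - math.cos(x)
print(secant(f, 0.5, 1.0))             # 1.87410 -> 1.24130 -> ... -> 1.29270
```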

Example
Test function: f(x) = e^(-x) - cos x

[Plot of f(x) over 0 ≤ x ≤ 3.5]

Initial estimates: xi = 1 and xi-1 = 0.5, giving xi+1 = 1.874097878.

Simulation
Iteration   xi-1      xi        xi+1      f(xi-1)    f(xi)      f(xi+1)
1           0.5       1         1.87410   -0.27105   -0.17242    0.45217
2           1         1.87410   1.24130   -0.17242    0.45217   -0.03456
3           1.87410   1.24130   1.28623    0.45217   -0.03456   -0.00443
4           1.24130   1.28623   1.29284   -0.03456   -0.00443    0.00010
5           1.28623   1.29284   1.29270   -0.00443    0.00010    0.00000

Fixed-Point Iteration
Also known as Picard iteration, functional iteration, or the contraction-mapping method. The objective of this method is to construct an auxiliary function g(x) from f(x), where we seek the root x0 such that f(x0) = 0. The auxiliary function has the form x0 = g(x0). The method's convergence is largely insensitive to the initial estimate, as long as the fixed point exists in the interval x1 ≤ x0 ≤ x2.

Example
Evaluate:

x = ∛(6 + ∛(6 + ∛(6 + ⋯)))

Cubing both sides, this equation is the same as

x³ - 6 = ∛(6 + ∛(6 + ∛(6 + ⋯)))

or

x³ - 6 = x

The auxiliary function is then

g(x) = ∛(x + 6)

Simulation
Iteration   xi        g(xi)
1           0         1.81712
2           1.81712   1.98464
3           1.98464   1.99872
4           1.99872   1.99989
5           1.99989   1.99999
6           1.99999   2.00000
7           2.00000   2.00000
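A minimal Python sketch of fixed-point iteration applied to the auxiliary function g(x) = (x + 6)^(1/3) from the example (the tolerance is an illustrative choice):

```python
def fixed_point(g, x0, tol=1e-6, max_iter=100):
    """Repeatedly apply x <- g(x) until successive iterates agree."""
    for _ in range(max_iter):
        x1 = g(x0)
        if abs(x1 - x0) <= tol:
            return x1
        x0 = x1
    return x0

g = lambda x: (x + 6.0) ** (1.0 / 3.0)  # auxiliary function from the example
print(fixed_point(g, 0.0))              # converges to 2.0, as in the table above
```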

Bairstow's Method
A special problem associated with polynomials Pn(x) is the possibility of complex roots. Newton's method, the secant method, and Muller's method can all find complex roots if complex arithmetic is used and complex initial approximations are specified. When a polynomial with real coefficients has complex roots, they occur in conjugate pairs, which corresponds to a quadratic factor of the polynomial Pn(x). This method extracts quadratic factors from a polynomial using only real arithmetic.

Bairstow's Method
Consider the general nth-degree polynomial, Pn(x):
Pn(x) = an x^n + an-1 x^(n-1) + ⋯ + a0

Let's factor out a quadratic factor from Pn(x). Thus,

Pn(x) = (x² - r·x - s)·Qn-2(x) + remainder

or

Pn(x) = (x² - r·x - s)·(bn x^(n-2) + bn-1 x^(n-3) + ⋯ + b3 x + b2) + remainder

Bairstow's Method
When the remainder is zero, (x² - r·x - s) is an exact factor of Pn(x). This is best understood through synthetic division, carried out iteratively until r and s converge to specific values. Once r and s are determined, roots are extracted two at a time from the quadratic factor. The remaining polynomial is reduced by two degrees, so the method can be repeated until all roots are found.
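A Python sketch of one common formulation of Bairstow's iteration, using the synthetic-division coefficients b and c and a 2×2 correction system for Δr and Δs; the bookkeeping (array layout, convergence test) is my own and assumes a polynomial of degree at least 3:

```python
import cmath

def bairstow_step(a, r, s):
    """One iteration: divide by (x^2 - r*x - s) twice, then solve for (dr, ds).
    a[i] is the coefficient of x^i; assumes len(a) - 1 >= 3."""
    n = len(a) - 1
    b = [0.0] * (n + 1)
    c = [0.0] * (n + 1)
    b[n] = a[n]
    b[n - 1] = a[n - 1] + r * b[n]
    for i in range(n - 2, -1, -1):                 # first synthetic division
        b[i] = a[i] + r * b[i + 1] + s * b[i + 2]
    c[n] = b[n]
    c[n - 1] = b[n - 1] + r * c[n]
    for i in range(n - 2, 0, -1):                  # second synthetic division
        c[i] = b[i] + r * c[i + 1] + s * c[i + 2]
    det = c[2] * c[2] - c[3] * c[1]                # solve the 2x2 correction system
    dr = (-b[1] * c[2] + b[0] * c[3]) / det
    ds = (-b[0] * c[2] + b[1] * c[1]) / det
    return dr, ds, b

def bairstow(a, r, s, tol=1e-6, max_iter=50):
    """Iterate until dr and ds are negligible, then return the quadratic's roots."""
    for _ in range(max_iter):
        dr, ds, b = bairstow_step(a, r, s)
        r, s = r + dr, s + ds
        if abs(dr) < tol and abs(ds) < tol:
            break
    disc = cmath.sqrt(r * r + 4.0 * s)             # roots of x^2 - r*x - s = 0
    return r, s, ((r + disc) / 2.0, (r - disc) / 2.0), b[2:]

# Example from the slides that follow: x^3 - 3x^2 + 4x - 2 with r1 = 1.5, s1 = -2.5
a = [-2.0, 4.0, -3.0, 1.0]
r, s, roots, q = bairstow(a, 1.5, -2.5)
print(r, s, roots)                                 # r -> 2, s -> -2, roots 1+j and 1-j
```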

Example
Find the roots of

f(x) = x³ - 3x² + 4x - 2 = 0

From the given equation,

a3 = 1.0, a2 = -3.0, a1 = 4.0, a0 = -2.0

We initially guess values for r and s. For this problem, we'll take r1 = 1.5 and s1 = -2.5.

Simulation
First synthetic division of the a's by (x² - r1·x - s1), with r1 = 1.5 and s1 = -2.5:

a:      1      -3        4        -2
r1·b:           1.5      -2.25    -1.125
s1·b:                    -2.5      3.75
b:      1      -1.5     -0.75      0.625

Second synthetic division of the b's:

b:      1      -1.5     -0.75
r1·c:           1.5       0
s1·c:                    -2.5
c:      1       0        -3.25

The corrections Δr and Δs then satisfy

(0.0)Δr + (1.0)Δs = 0.75
(3.25)Δr + (0.0)Δs = 0.625

Simulation
Solving simultaneously,

Δr = 0.192308 and Δs = 0.75

and

r2 = r1 + Δr = 1.5 + 0.192308 = 1.692308
s2 = s1 + Δs = -2.5 + 0.75 = -1.75

We then repeat the synthetic divisions with the new values of r and s until the corrections Δr and Δs become negligible.

Simulation
i    r          s           Δr          Δs
1    1.5        -2.5         0.192308    0.75
2    1.692308   -1.75        0.278352   -0.144041
3    1.97066    -1.894041    0.034644   -0.110091
4    2.005304   -2.004132   -0.005317    0.004173
5    1.999987   -1.999959    0.000012   -0.000041
6    1.999999   -2           0           0

With r6 = 1.999999 and s6 = -2, a final synthetic division of the coefficients 1, -3, 4, -2 gives b ≈ 1, -1.000001, 0, 0, so the remainder is essentially zero and the quadratic factor has converged.

Simulation
Then the quadratic factor is
x² - r·x - s = x² - 2.0x + 2.0 = 0

Where the two roots are determined by the quadratic formula


x = [-b ± √(b² - 4ac)] / (2a) = [2.0 ± √((-2.0)² - 4(1.0)(2.0))] / (2(1.0)) = 1 + j, 1 - j

The remaining Qn-2(x) polynomial can be reduced further by applying Bairstow's algorithm repeatedly until all roots have been determined.
