
Contents

General
Blind Search Method
Bisection Method
Newton-Raphson Method
Secant Method **
Regula-Falsi Method
Other Reference Materials

The Lecture Notes are based on Dr Peter Tsang's Lecture Notes & S C Chapra
4.1 General

In this Chapter, various iteration methods are introduced to find
the solutions, or the roots, of a function.

It is quite easy to find the solutions of the following 2nd order equation:
a x^2 + b x + c = 0,
but how about the solutions of the following Nth order equation?
a_N x^N + a_(N-1) x^(N-1) + ... + a_1 x + a_0 = 0.

For example, find the value of x such that

x^2 = 1905

Convert the problem into an equation, and solve for its root. In this example,
we have
f(x) = x^2 - 1905

The root of the equation is the value of x when f(x) = 0.
4.1 General
One of the simplest techniques is to find the root by using the
graphical method.
% Graphic Technique to solve f(x)=x^2-1905

x=linspace(0,100,1001); % in 0.1 step


fx=x.^2-1905;
plot(x,fx);
grid
xlabel('x value')
ylabel('f(x) value')
title('Graphical Method to solve f(x)=0 ')

Apparently, the root is between 40 and 50.

What would happen if the 2nd code line was replaced with fx=x^2-1905 ?
4.1 General
A solution with better accuracy is obtained if the curve is zoomed into a
smaller scale.

Problems in the graphical method are:

(a) Normally the curve is constructed by connecting the points linearly. The
linear value between two points may not be the exact value of f(x).
(b) Many trial and error attempts (depending on eye inspection of the curve)
will be made before an accurate solution can be found.
However, the graphical method, if it is available, can give a good initial
guess of the root !!
4.1 General
Let's plot the following sin(x) function as an example.
clear
x=[0:1:2*pi];
y=sin(x);
plot(x,y,x,y,'ro')
title('Sine function y = sin(x)')
xlabel('x')
ylabel('y')

In a Matlab plot, it will only connect two data points linearly. The linear
curve between two neighbouring points may not be an accurate representation
of the true value of the function.
4.2 Blind Search Method

Consider the previous example x^2 = 1905.

Try a random search and fit it into the equation. Normally it starts
with an initial guess and updates the guess with a step Δ for every trial
until the root (or the approximate root) has been found. Consider
the following example, which starts with the initial guess of x = 0.

Set x = 0  →  Find f(x)  →  Is f(x) ≥ 0 ?  If yes, x is the solution;
if no, set x = x + Δ and repeat.
4.2 Blind Search Method

The blind search

function blind_search( )
x=0; %Initial solution
delta=0.01; %Incremental step
num=1905; %Data to compute the square root
fx=x*x-num; %Compute initial value of f(x)
iteration=1; %This is the first iteration

while fx < 0 %Iterate blind search, here we use sign change as the criterion
x=x+delta; %Increase the solution by delta,
fx=x*x-num; %and test the result
iteration=iteration+1; %One more iteration added
end

display('-----------------------------------------------------------') %print result


display(['Number of iterations = ', num2str(iteration)])
display(['x = ', num2str(x)])
display(['Absolute true percentage relative error = ',num2str(abs((sqrt(num)-x)/sqrt(num))*100)])
display('-----------------------------------------------------------') %print result

What will happen if we use while fx ~=0 as the criterion in the loop to find the solution ?
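A quick numerical check suggests the answer. The following is a Python stand-in for the slide's MATLAB loop (variable names are illustrative): with Δ = 0.01 the iterates step over the root, so f(x) changes sign without ever being exactly zero, and while fx ~= 0 would never terminate.

```python
# Python sketch of the blind search above (illustrative stand-in for MATLAB).
# With delta = 0.01 the iterates jump from f(43.64) < 0 to f(43.65) > 0,
# so f(x) never equals exactly 0 and "while fx ~= 0" would loop forever.
num = 1905
delta = 0.01
x, fx = 0.0, -float(num)
while fx < 0:          # sign change: a safe termination criterion
    x += delta
    fx = x * x - num
print(x, fx)           # x = 43.65..., fx = 0.3225... (small but nonzero)
```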
4.2 Blind Search Method

After running the programme, we get


-------------------------------------------------
Number of iterations = 4366
x = 43.65
Absolute true percentage relative error = 0.0084642
-------------------------------------------------

Note: εt = |(true value − x)/true value| × 100% ;  true value = 43.64630569

What is the true error ?


How do we increase the accuracy of the answer?
In this programme, is the true percentage relative error always 0 for
finding the square-root of any positive number ?

If we re-express the function as: f(x) = 1905 − x^2

How do we modify the programme to find f(x) = 0 of the above function?
4.2 Blind Search Method

If a similar programme is used to find the square-root of 2e8, after


running the programme, we get
-----------------------------------------------------------
Number of iterations = 1414215
x = 14142.14
Absolute true relative percentage error = 3.0947e-005
-----------------------------------------------------------
What is the maximum absolute error of this algorithm for any positive
number?

Why is the absolute true relative percentage error


much less than that in the previous example?

Can you draw any conclusions on this algorithm


from these two examples and from the above
questions?

4.2 Blind Search Method

Key Characteristics:

It is simple but not efficient, and it is time consuming. The number of iterations
increases a lot when the constant changes from 1905 to 2e8 in the
equation.
The absolute true error relates to the step size Δ, and the maximum
absolute true % relative error increases with the step size and is inversely
proportional to the absolute value of the root.
The number of computations depends on the initial guess and is inversely
proportional to the step size Δ. There is a compromise between the accuracy
and the computation complexity.
It cannot guarantee to find the root. Consider the following example:

f(x) = 1905 + x^3

The previous programme doesn't work here because the solution of x is a negative
value. If the initial guess remains at x = 0, then a negative value of Δ should be
used, OR (any other suggestion ??)
4.3 Bisection Method
4.3.1 General
If the function has a root, it will cross zero at a certain point. This implies
that if there is a root between ak and bk, f(ak) and f(bk) will be of
opposite sign. This forms the basis of the Bisection Method:

f(ak) > 0 and f(bk) < 0,  OR  f(ak) < 0 and f(bk) > 0,  i.e.  f(ak)·f(bk) < 0.

[Figure: a curve crossing the x-axis between ak and bk]
4.3 Bisection Method

4.3.1 General

General Cases

For a continuous function, in (a) and (c), both f(ak) and f(bk) have the
same sign. In general, either there is no root or there is an even number
of roots within the interval [ak, bk].

In (b) and (d), f(ak) and f(bk) have opposite signs. In general, there is an
odd number of roots within the interval [ak, bk].

[Figure: the general cases (a)-(d)]
4.3 Bisection Method

4.3.1 General

Special Cases

For some special cases like (a), there are only two roots within the
interval [ak, bk], where the function is tangential to the x-axis. This
makes a single root at the tangent point.

In (b), the number of roots is not well defined if there exists a
discontinuity within the interval [ak, bk].

[Figure: the special cases (a) and (b)]
4.3 Bisection Method
In the blind search example in Section 4.2, we used the sign change in f(x) to
find the root. Here we use the same principle to find the root of other
functions.
4.3.2 Algorithm
Take two points ak and bk, with f(ak) and f(bk) of opposite sign. Since we
have no idea what the exact solution is, a good approximation is to use the
mean value of the interval as a trial. Compute the mid value ck = (ak + bk)/2,
and now use some imagination.

[Figure: f(x) with ak, ck and bk marked, ck = (ak + bk)/2]

If f(ck) is zero or within a given threshold, ck is the solution. If not, one or more
iteration(s) is/are required.
4.3 Bisection Method

4.3.2 Algorithm

If f(bk) and f(ck) have the same sign, it is unlikely that there is a root
between them (but there may be an even number of roots between them).

Similarly, if f(ak) and f(ck) have the same sign, it is unlikely the root will
be in between. So in the following curve, which interval should we
consider, [ak, ck] or [ck, bk], for the next iteration?

[Figure: f(x) with ak, ck and bk marked on the x-axis]
4.3 Bisection Method
4.3.2 Algorithm

Logically, we should consider the interval [ak, ck] because f(ak) and f(ck)
have opposite signs.

In the next iteration, i.e. k+1, set ak+1 = ak and bk+1 = ck, and compute
ck+1 = (ak+1 + bk+1)/2.

In the k+2 iteration, we should consider the interval [bk+1, ck+1] because
f(bk+1) and f(ck+1) have opposite signs. Repeating the iteration will
ultimately move the bisection point to a location which is close to, or
equal to, the root.
4.3 Bisection Method
4.3.2 Algorithm

1. Start with k = 1.
2. Estimate two points ak and bk such that the signs of f(ak) and f(bk)
are different.
3. Determine the midpoint ck between ak and bk, i.e. ck = (ak + bk)/2.
If f(ck) is zero or within a given threshold, ck is the solution;
otherwise go to step 4.
4. If the signs of f(ak) and f(ck) are different, set ak+1 = ak and bk+1 = ck.
5. Otherwise, set ak+1 = ck and bk+1 = bk.
6. Go to step 3 to find the new value.
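The six steps above can be sketched directly in code. The notes use MATLAB; the following is an equivalent Python sketch (the function name, arguments and defaults are illustrative assumptions, not from the notes).

```python
# Bisection sketch of steps 1-6 above (Python stand-in for the notes' MATLAB).
def bisection(f, a, b, err_bound=1e-3, max_iter=100):
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        c = (a + b) / 2.0                 # step 3: midpoint
        if abs(f(c)) <= err_bound:        # within threshold: c is the solution
            return c
        if f(a) * f(c) < 0:               # step 4: root lies in [a, c]
            b = c
        else:                             # step 5: root lies in [c, b]
            a = c
    return c

# Example: square root of 1905, i.e. the root of x^2 - 1905 between 0 and 100
root = bisection(lambda x: x * x - 1905, 0, 100)
```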
4.3 Bisection Method

4.3.3 Termination Criteria and Error Estimates


In normal applications, the true value is unknown, i.e. the true
percentage relative error is unknown. The error estimate is then based on
approximated values from the iterations. The absolute approximated
percentage relative error at the kth iteration is:

|εa| = |(approximation error)/(approximation)| × 100% = |(ck − ck−1)/ck| × 100%

What is the possible problem of setting the termination criterion as
|f(ck)| < ε (where ε is a predefined bound)?

Example
Determine the root of the following by using ε = 1e-3:

f(x) = sqrt(39.24 x) · tanh(sqrt(39.24 / x)) − 36
[Note: tanh(x) = sinh(x)/cosh(x) = (e^x − e^−x)/(e^x + e^−x)]

Assume the root is between 50 and 200.
Example (cont.)
% Bisection Method using Matlab (bisection_ex1.m)
clear
true_ans=142.7376; a=50; b=200; c=(a+b)/2;
iteration=1;
err_bound=1e-3; % Set the bound at 1e-3.
fx_c=sqrt(39.24*c)*tanh(sqrt(39.24/c))-36;
while abs(fx_c)>err_bound
fx_a=sqrt(39.24*a)*tanh(sqrt(39.24/a))-36;
if fx_a*fx_c<0;
b=c;
else
a=c;
end;
c=(a+b)/2;
fx_c=sqrt(39.24*c)*tanh(sqrt(39.24/c))-36;
atr_error=abs((true_ans-c)/true_ans);
iteration=iteration+1;
end;
display(['The estimated x by using Bisection Method = ',num2str(c)])
display(['The |f(c)| value by using Bisection Method = ',num2str(abs(fx_c))])
display(['Number of iterations by using Bisection Method = ',num2str(iteration)])
display(['The absolute true relative error by using Bisection Method = ',num2str(atr_error)])

4.3 Bisection Method

4.3.3 Termination Criteria and Error Estimates

Example (cont.)

After running the programme bisection_ex1 in MATLAB, we get the


following results.

>> bisection_ex1
The estimated x by using Bisection Method = 142.7246
The |f(c)| value by using Bisection Method = 0.00026643
Number of iterations by using Bisection Method = 10
The absolute true relative error by using Bisection Method = 9.1011e-005

What else should be changed in the programme if the absolute true
relative error is used as the termination criterion (with the same error
bound)? Here we assume that the true answer is known and equals
142.7376.
Example (cont.)
% Bisection Method using Matlab (bisection_ex2.m)
clear
true_ans=142.7376; a=50; b=200; c=(a+b)/2;
iteration=1;
err_bound=1e-3; % Set the bound at 1e-3.
atr_error=abs((true_ans-c)/true_ans);
fx_c=sqrt(39.24*c)*tanh(sqrt(39.24/c))-36;
while atr_error>err_bound
fx_a=sqrt(39.24*a)*tanh(sqrt(39.24/a))-36;
if fx_a*fx_c<0;
b=c;
else
a=c;
end;
c=(a+b)/2;
fx_c=sqrt(39.24*c)*tanh(sqrt(39.24/c))-36;
atr_error=abs((true_ans-c)/true_ans);
iteration=iteration+1;
end;
display(['The estimated x by using Bisection Method = ',num2str(c)])
display(['The |f(c)| value by using Bisection Method = ',num2str(abs(fx_c))])
display(['Number of iterations by using Bisection Method = ',num2str(iteration)])
display(['The absolute true relative error by using Bisection Method = ',num2str(atr_error)])

4.3 Bisection Method

4.3.3 Termination Criteria and Error Estimates

Example (cont.)

After running the programme bisection_ex2 in MATLAB, we get the


following results (assume that the error bound for the absolute true
relative error remains the same, i.e. 1e-3):
>> bisection_ex2
The estimated x by using Bisection Method = 142.8711
The |f(c)| value by using Bisection Method = 0.0027277
Number of iterations by using Bisection Method = 9
The absolute true relative error by using Bisection Method = 0.00093524

How does it affect the result(s) if the following change is made, i.e. the
codeline fx_c=sqrt(39.24*c)*tanh(sqrt(39.24/c))-36 after c=(a+b)/2 is
removed, and fx_c is instead updated at the start of the loop?
Example (cont.)
% Bisection Method using Matlab (bisection_ex2.m)
clear
true_ans=142.7376; a=50; b=200; c=(a+b)/2;
iteration=1;
err_bound=1e-3; % Set the bound at 1e-3.
atr_error=abs((true_ans-c)/true_ans);
fx_c=sqrt(39.24*c)*tanh(sqrt(39.24/c))-36;
while atr_error>err_bound
fx_a=sqrt(39.24*a)*tanh(sqrt(39.24/a))-36;
fx_c=sqrt(39.24*c)*tanh(sqrt(39.24/c))-36; % fx_c is now updated here only
if fx_a*fx_c<0;
b=c;
else
a=c;
end;
c=(a+b)/2;
atr_error=abs((true_ans-c)/true_ans);
iteration=iteration+1;
end;
display(['The estimated x by using Bisection Method = ',num2str(c)])
display(['The |f(c)| value by using Bisection Method = ',num2str(abs(fx_c))])
display(['Number of iterations by using Bisection Method = ',num2str(iteration)])
display(['The absolute true relative error by using Bisection Method = ',num2str(atr_error)])

4.3 Bisection Method

4.3.3 Termination Criteria and Error Estimates

Example (cont.)

After running the programme, we get the following results:


>>
The estimated x by using Bisection Method = 142.8711
The |f(c)| value by using Bisection Method = 0.0086995
Number of iterations by using Bisection Method = 9
The absolute true relative error by using Bisection Method = 0.00093524

The displayed |f(c)| answer is different because it hasn't been updated
with the latest c value. All other displayed answers remain the same.
Example (cont.) Here absolute approximation relative error is used
as the termination criteria
% Bisection Method using Matlab (bisection_ex3.m)
clear
true_ans=142.7376; a=50; b=200; c=(a+b)/2;
iteration=1;
err_bound=1e-3; % Set the bound at 1e-3.
aa_error=err_bound+1; % Force to do at least 1 iteration
fx_c=sqrt(39.24*c)*tanh(sqrt(39.24/c))-36;
while aa_error>err_bound
fx_a=sqrt(39.24*a)*tanh(sqrt(39.24/a))-36;
if fx_a*fx_c<0;
b=c;
else
a=c;
end;
c_old=c;
c=(a+b)/2;
fx_c=sqrt(39.24*c)*tanh(sqrt(39.24/c))-36;
aa_error=abs((c-c_old)/c); % Get the absolute approx. relative error
atr_error=abs((true_ans-c)/true_ans); % Get the absolute true relative error
iteration=iteration+1;
end;
display(['The estimated x by using Bisection Method = ',num2str(c)])
display(['The |f(c)| value by using Bisection Method = ',num2str(abs(fx_c))])
display(['Number of iterations by using Bisection Method = ',num2str(iteration)])
display(['The absolute approximated relative error by using Bisection Method = ',num2str(aa_error)])
display(['The absolute true relative error by using Bisection Method = ',num2str(atr_error)])
4.3 Bisection Method

4.3.3 Termination Criteria and Error Estimates

Example (cont.)

After running the programme bisection_ex3 in MATLAB, we get the


following results (assume that the error bound for the absolute
approximate relative error is set at 1e-3):
>> bisection_ex3
The estimated x by using Bisection Method = 142.7979
The |f(c)| value by using Bisection Method = 0.0012313
Number of iterations by using Bisection Method = 11
The absolute approximated relative error by using Bisection Method = 0.00051291
The absolute true relative error by using Bisection Method = 0.00042211

Compared with bisection_ex1, please note that the absolute true relative
error increases even though the number of iterations increases from 10 to 11.
4.3 Bisection Method

4.3.3 Termination Criteria and Error Estimates

Advanced Level

There is a hidden programme bug in bisection_ex3 for a special case. If
the 1st c is the exact answer, then a2 = c1 and c2 = (a2 + b2)/2. In the 2nd
iteration, fx_a*fx_c = 0, so a3 = c2 and c3 = (a3 + b3)/2; both a3 and c3 are
far away from the true solution and it'll never converge to the true
solution. How can the bug be fixed?
4.3 Bisection Method
4.3.3 Termination Criteria and Error Estimates
Example (Cont.)

[Table: ak, bk, ck and the error estimates at each iteration]

The true root is 142.7376.

In the bisection method in this example, |εa| ≥ |εt|.

After 21 iterations, ck = 142.7377 with an absolute approximate
relative error of 5.011 × 10^-7.
4.3 Bisection Method
4.3.3 Termination Criteria and Error Estimates
Example (Cont.)

The ragged nature of the true error is due to the fact that the true root can lie
anywhere within the bracketing interval. It will be large when the true root is near
the middle of the interval, and it will be small when the true root falls at either end
of the interval.

If both the true value and the approximation ck are +ve numbers, or both
are −ve numbers, then it can be shown that |εa| ≥ |εt|.
4.3 Bisection Method

4.3.4 Key Characteristics:

The algorithm is simple. The convergence rate is not fast but
reasonable.
It requires two initial guesses, and the function values at them must be of
opposite sign.
It can guarantee that the iteration will converge to a value which is
close to the root.
The final solution is between the two initial guesses.
The absolute true error after N iterations is

|E_N| ≤ |b1 − a1| / 2^N

where a1 and b1 are the two initial guesses,

i.e. it is predictable !!
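Because of the bound |E_N| ≤ |b1 − a1| / 2^N, the number of iterations needed for a target accuracy can be predicted before running the method. A small Python check (the target value 1e-4 is an illustrative assumption):

```python
import math

# Predict how many bisection steps shrink the interval [a1, b1] = [50, 200]
# below a target absolute error: solve |b1 - a1| / 2^N <= target for N.
a1, b1 = 50.0, 200.0
target = 1e-4
N = math.ceil(math.log2(abs(b1 - a1) / target))
print(N)   # → 21
```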
4.4 Newton-Raphson Method

[Figure: the tangent at xk crossing the x-axis at xk+1]

If the initial guess is only ONE point at xk, the point where the tangent
at f(xk) crosses the x-axis usually represents an improved estimate of the
root. Note that xk+1 is closer to the actual root compared with xk. By
continuing the iterations, a more precise root can be found.
http://en.wikipedia.org/wiki/Newton's_method
4.4 Newton-Raphson Method

From the geometry of the tangent at xk:

f'(xk) = (f(xk) − 0) / (xk − xk+1),   i.e.   xk+1 = xk − f(xk) / f'(xk)

and in the next iteration   xk+2 = xk+1 − f(xk+1) / f'(xk+1).

1. Start with k = 1.
2. Estimate the initial point xk.
3. Calculate f(xk).
4. If f(xk) [or the approx error] is within err_bound, then xk is the
solution; otherwise go to step 5.
5. Find the next point xk+1 and then go to step 3.

x1 → x2 → x3 → .....
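The five steps above, written as a Python sketch (the notes implement the same idea in MATLAB on a later slide; names and defaults here are illustrative):

```python
# Newton-Raphson sketch: step 4 tests |f(x)| against the bound,
# step 5 moves along the tangent, x <- x - f(x)/f'(x).
def newton_raphson(f, dfdx, x, err_bound=1e-3, max_iter=100):
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) <= err_bound:   # step 4: x is the solution
            return x
        x = x - fx / dfdx(x)       # step 5: next point along the tangent
    return x

# Example: square root of 2e8, i.e. the root of f(x) = x^2 - 2e8
root = newton_raphson(lambda x: x * x - 2e8, lambda x: 2 * x, 1.0)
```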
4.4 Newton-Raphson Method

Graphical representation of the algorithm for another example.

For each iteration, the movement to the new x value (i.e. xk+1 − xk) is
−f(xk)/f'(xk).

In this example, xk is moving towards the true root.

This example is from Dr S C Chan's Lecture Notes.


4.4 Newton-Raphson Method
Flow diagram of Newton Raphson Method

4.4 Newton-Raphson Method

Newton-Raphson method (NR)

% Use newton-Raphson to find the square-root of 2e8


x=1; %Initial solution
err_bound=1e-3; %Set error bound
num=2.0e8; %Data to compute the square root (2.0e8=200000000)
fx=x*x-num; %value of f(x)
dfx=2*x; %value of the derivative of f(x), i.e. f'(x)
iteration=1;
while abs(fx) > err_bound %Set error bound and iterate Newton-Raphson search
x=x-fx/dfx; %update solution,
fx=x*x-num; %update new value of f(x)
dfx=2*x; %update new value of f'(x)
iteration=iteration+1;
end
display('-----------------------------------------------------------') %print result
display(['Number of iterations = ', num2str(iteration)])
display(['x = ', num2str(x)])
display(['Absolute true percentage relative error = ',num2str(abs((sqrt(num)-x)/sqrt(num))*100)])
display('-----------------------------------------------------------') %print result

4.4 Newton-Raphson Method

Newton-Raphson method (NR)

After running the programme, we get


-----------------------------------------------------------
Number of iterations = 19
x = 14142.1356
Absolute true percentage relative error = 1.2862e-014
-----------------------------------------------------------
In the programme, abs(fx) is used instead of fx. Is it necessary in
Newton's method?

In the programme, dfx appears twice. How do we modify the programme
such that it only appears once?

Compare the number of iterations with blind search: 19 versus 1414215 !!
And with better accuracy.

Note: If one more iteration is performed, the absolute true percentage
relative error becomes zero in the Matlab environment.
4.4 Newton-Raphson Method

Consider an example as in Page 158 of S C Chapra: determine the
positive root of f(x) = x^10 − 1.

The NR formula for this case is:

xk+1 = xk − (xk^10 − 1) / (10 xk^9)

After the 1st poor guess at 0.5, x2 moves to 51.65 !! Eventually it
converges on the true root of 1, but at a slow rate.

k      xk         |εt| = |(X − xk)/X| × 100%     (X: true value = 1)

1 0.5 50%
2 51.65 5,065%
3 46.485 4,549%
4 41.8365 4,084%
: : :
: : :
41 1.002316 0.2316%
42 1.000024 0.0024%
43 1.000000 0.0000%

If x1 = 0.1, then x2 = 1.0000 × 10^8 and the situation becomes worse than before.

If x1 = 1.5, then x2 = 1.3526 and the situation becomes better than before.
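The single-step behaviour quoted above is easy to reproduce (a Python sketch of the NR update for this f):

```python
# One Newton-Raphson step for f(x) = x^10 - 1.
def step(x):
    return x - (x ** 10 - 1) / (10 * x ** 9)

print(round(step(0.5), 2))   # → 51.65  : the poor guess jumps far away
print(round(step(1.5), 4))   # → 1.3526 : the better guess moves towards 1
```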
4.4 Newton-Raphson Method
Problems

Examples of Poor Convergence [From p.159 of S C Chapra]




4.4 Newton-Raphson Method

Key Characteristics:

Only ONE guess is necessary.
It has fast convergence when the initial guess is close to the exact
solution.
It is the most widely used method because of its simplicity and fast
convergence.
The 1st derivative of the function, i.e. f'(x), may be difficult to find.
It has poor convergence properties when f'(x) is close to zero.
Convergence is not guaranteed, particularly under a poor initial guess.
4.5 Secant Method **

The Secant Method is the most popular among the many variants of the
Newton Method. A secant is used to approximate the tangent to a curve.
Here, we use TWO points xk-1 and xk, and extrapolate the line through
them to cut zero.

[Figure: the secant through (xk-1, f(xk-1)) and (xk, f(xk)) crossing the
x-axis at xk+1]

In the next iteration, we extrapolate from the newly found point xk+1 and
the previous point xk. Repeating the iteration may ultimately move the
estimated point to a location which is close to, or equal to, the root.
4.5 Secant Method **

The secant method assumes that the function is approximately linear
in the local region of interest and uses the zero-crossing of the line
connecting the limits of the interval as the new reference point. To
deduce the zero-crossing point, we have

(xk+1 − xk) / (0 − f(xk)) = (xk − xk-1) / (f(xk) − f(xk-1))

xk+1 = xk − (xk − xk-1) f(xk) / (f(xk) − f(xk-1))
or equivalently
xk+1 = (xk-1 f(xk) − xk f(xk-1)) / (f(xk) − f(xk-1))

As the secant method does not always bracket the root, the algorithm
may not converge for functions that are not sufficiently smooth.
Otherwise, repeating the iteration could move the value of x towards the
root.
4.5 Secant Method **

The Secant Method can also be derived from the Newton-Raphson
Method. In the Newton-Raphson Method,

xk+1 = xk − f(xk) / f'(xk)

If f(x) is approximately linear within the region between f(xk) and f(xk-1),
then

f'(xk) ≈ (f(xk) − f(xk-1)) / (xk − xk-1)

i.e.   xk+1 = xk − (xk − xk-1) f(xk) / (f(xk) − f(xk-1))
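The update derived above translates directly into code. A Python sketch (names and defaults are illustrative; the notes give no listing for the Secant Method):

```python
# Secant method sketch: x_{k+1} = x_k - (x_k - x_{k-1}) f(x_k) / (f(x_k) - f(x_{k-1}))
def secant(f, x_prev, x_curr, err_bound=1e-3, max_iter=100):
    for _ in range(max_iter):
        f_curr, f_prev = f(x_curr), f(x_prev)
        if abs(f_curr) <= err_bound:      # same |f(x)| criterion as NR
            return x_curr
        x_next = x_curr - (x_curr - x_prev) * f_curr / (f_curr - f_prev)
        x_prev, x_curr = x_curr, x_next   # shift the two reference points
    return x_curr

# Example: square root of 2e8 with initial guesses x0 = 0, x1 = 1
root = secant(lambda x: x * x - 2e8, 0.0, 1.0)
```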
4.5 Secant Method **

The first two iterations of the secant method. The red curve shows
the function f(x) and the blue lines are the secants
http://en.wikipedia.org/wiki/Secant_method
4.5 Secant Method **
Example:
Find the square root of 2e8, i.e. find the solution of f(x) = x^2 − 2e8 = 0.
With the initial guesses x0 = 0 and x1 = 1, and the same error bound as that
in the Newton-Raphson Method, i.e. err_bound = 1e-3:

-----------------------------------------------------------
Number of iterations = 27   (19)   [1414215]
x = 14142.1356   (14142.1356)   [14142.14]
Absolute true percentage relative error = 6.5083e-012   (1.2862e-014)   [3.0947e-005]
-----------------------------------------------------------

The results from Newton-Raphson are shown in round brackets and the
results from Blind search in square brackets. It appears that the results
from the Secant Method are similar to those from Newton-Raphson.
4.5 Secant Method **

Key Characteristics:
The Secant Method is the most popular among the many variants of the
NR (Newton-Raphson) Method. Instead of using the exact tangent of
the curve, i.e. f'(x), as in the NR Method, the secant of the curve (i.e.
the approximate tangent) is used.
TWO initial guesses are necessary, but there is no restriction on
the sign of the function for these two points.
The final solution may not be within the two initial guesses.
The behaviour is similar to the NR Method and convergence is not
guaranteed.
No derivative is required, so the computation is a bit simpler.
For some cases, it is more stable when compared with the NR Method.
In general, the convergence rate is slower than the NR Method by
~40%.
4.6 Regula-Falsi Method

It is also referred to as the False Position method. It combines features
from the Bisection Method and the Secant Method. Interpolate between two
points ak and bk, where f(ak) and f(bk) are opposite in sign, i.e. at least
one root will be included within ak and bk. Locate the point ck where the
interpolated line cuts zero. In other words, ck will be biased towards the end
point with the smaller function value. For example, if |f(ak)| is smaller than
|f(bk)|, the choice of ck is closer to ak. According to Regula-Falsi, it is
assumed that the solution is closer to ak. [Note: In the Bisection Method,
ck = (ak + bk)/2.]

[Figure: the interpolated line through (ak, f(ak)) and (bk, f(bk)) cutting
the x-axis at ck]
4.6 Regula-Falsi Method

Consider the two similar triangles formed by (bk, f(bk)) with ck and by
(ak, f(ak)) with ck; we get

(bk − ck) / (f(bk) − 0) = (ck − ak) / (0 − f(ak))

f(ak)(bk − ck) = f(bk)(ak − ck)

i.e.   ck = (f(ak) bk − f(bk) ak) / (f(ak) − f(bk))
4.6 Regula-Falsi Method

If f(ck) is close to zero within a given threshold, ck is the solution. If not,
one or more iteration(s) is/are required.
If f(ak) and f(ck) are opposite in sign, set the interval to [ak, ck] in the 2nd
iteration. Otherwise, set the interval to [ck, bk]. Repeat the process until the
function is equal to, or close to, zero. The overall algorithm is similar to
that of the Bisection method; the difference is in the calculation of ck.

[Figure: the second iteration with end points ak+1 and bk+1]
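The Regula-Falsi iteration described above, written as a Python sketch (names are illustrative; the bracket [0, 100] and the test function reuse the sqrt(1905) example from earlier sections):

```python
# Regula-Falsi sketch: like bisection, but c is the interpolated
# zero-crossing c = (f(a)*b - f(b)*a) / (f(a) - f(b)).
# f(a) and f(b) are assumed to have opposite signs.
def regula_falsi(f, a, b, err_bound=1e-3, max_iter=200):
    for _ in range(max_iter):
        c = (f(a) * b - f(b) * a) / (f(a) - f(b))
        if abs(f(c)) <= err_bound:
            return c
        if f(a) * f(c) < 0:      # root bracketed in [a, c]
            b = c
        else:                    # otherwise keep [c, b]
            a = c
    return c

# Example: square root of 1905, bracketed in [0, 100]
root = regula_falsi(lambda x: x * x - 1905, 0.0, 100.0)
```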
4.6 Regula-Falsi Method

The first two iterations of the Regula-Falsi (False Position) Method.
The red curve shows the function f(x) and the blue lines are the secants.

http://en.wikipedia.org/wiki/False_position_method
4.6 Regula-Falsi Method

However, the convergence rate may not always be faster than the Bisection method.
Consider the following example of finding the root of f(x) = x^10 − 1 between 0 and 1.3.
By using the Bisection method, the following result can be obtained after 5 iterations.

[Tables: ak, bk, ck per iteration for the Bisection and Regula-Falsi methods]
4.6 Regula-Falsi Method

In the Bisection method, each iteration
will reduce the distance between
the two end points by half (i.e.
|bk − ak|/2). By using the Regula-Falsi
method in this example, the
distance between the two end
points reduces slowly, resulting in
poor convergence behaviour. It is
because one of the end points (i.e.
1.3) will tend to stay fixed, as
shown in the figure. Unfortunately,
this end point has a larger |function
value|, i.e. |f(1.3)|, compared with the
|function value| of the other end
point, but the actual correct root is
closer to 1.3, which violates Regula-
Falsi's assumption.
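The stagnant end point described above can be observed directly (a Python sketch of Regula-Falsi on this example; names are illustrative):

```python
# Regula-Falsi on f(x) = x^10 - 1 over [0, 1.3]: the end point b = 1.3
# never moves in the first iterations, so the bracket shrinks slowly.
f = lambda x: x ** 10 - 1
a, b = 0.0, 1.3
for _ in range(5):
    c = (f(a) * b - f(b) * a) / (f(a) - f(b))
    if f(a) * f(c) < 0:
        b = c        # root in [a, c]
    else:
        a = c        # root in [c, b]; b stays fixed at 1.3
print(a, b)          # a creeps up towards 1 while b remains 1.3
```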
4.6 Regula-Falsi Method

Key Characteristics:

The algorithm is similar to the Bisection method. It requires two
initial guesses, and the function values at them should be of opposite
sign.

The final solution is between the two initial guesses.

Compared with the Bisection method, a weighting factor has been
added to the choice of ck; it will be biased towards the end point with the
smaller magnitude of the function.
It can guarantee that the iteration will converge to a value which is
close to the root.
The convergence rate may be slower than the Bisection method if the
same end point (ak or bk) is used many times (i.e. it violates
the assumption of Regula-Falsi). For example, in the previous example,
the value of bk doesn't change in the 1st few iterations.
Comparison

Note: These are typical
convergence plots. As
mentioned in the lecture
notes, the Newton-Raphson &
Secant methods are not
guaranteed to converge, and
the False position (Regula-Falsi)
method may be slower than the
Bisection method.

Comparison of the convergence rate of the true percentage relative
error for various methods to find the root of f(x) = e^−x − x (S C Chapra).
4.7 Other Reference Materials
Other Reference materials in Internet/youtube
http://mathforcollege.com/nm/videos/youtube/03nle/bisection/bisection_03nle_background.html
http://mathforcollege.com/nm/videos/youtube/03nle/bisection/bisection_03nle_algorithm.html
http://mathforcollege.com/nm/videos/youtube/03nle/bisection/bisection_03nle_example.html
http://mathforcollege.com/nm/videos/youtube/03nle/bisection/bisection_03nle_advantages.html
http://mathforcollege.com/nm/videos/youtube/03nle/newtonraphson/newtonraphson_03nle_derivation.html
http://mathforcollege.com/nm/videos/youtube/03nle/newtonraphson/newtonraphson_03nle_example.html
http://mathforcollege.com/nm/videos/youtube/03nle/newtonraphson/newtonraphson_03nle_advantages1.html
http://mathforcollege.com/nm/videos/youtube/03nle/newtonraphson/newtonraphson_03nle_advantages2.html
http://mathforcollege.com/nm/videos/youtube/03nle/newtonraphson/newtonraphson_03nle_taylor.html
http://mathforcollege.com/nm/videos/youtube/03nle/newtonraphson/newtonraphson_03nle_squarerootnewtonraphson.html
http://mathforcollege.com/nm/videos/youtube/03nle/newtonraphson/newtonraphson_03nle_squarerootexample.html
http://mathforcollege.com/nm/videos/youtube/03nle/secant/secant_03nle_derivationapproach1.html
http://mathforcollege.com/nm/videos/youtube/03nle/secant/secant_03nle_derivationapproach2.html
http://mathforcollege.com/nm/videos/youtube/03nle/secant/secant_03nle_algorithm.html
http://mathforcollege.com/nm/videos/youtube/03nle/secant/secant_03nle_example.html
http://mathforcollege.com/nm/videos/youtube/03nle/false_position/false_position_03nle_part1.html
http://mathforcollege.com/nm/videos/youtube/03nle/false_position/false_position_03nle_part2.html
http://mathforcollege.com/nm/videos/youtube/03nle/false_position/false_position_03nle_part3.html

