General
Blind Search Method
Bisection Method
Newton-Raphson Method
Secant Method **
Regula-Falsi Method
Other Reference Materials
The Lecture Notes are based on Dr Peter Tsang's Lecture Notes & S C Chapra
4.1 General
It is quite easy to find the solutions of the following 2nd-order equation:

    a*x^2 + b*x + c = 0,

but how about the solutions of the following Nth-order equation?

    a_N*x^N + a_(N-1)*x^(N-1) + ... + a_1*x + a_0 = 0

Convert the problem into an equation f(x) = 0 and solve for its root. In this example, we have

    f(x) = x^2 - 1905
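The reformulation can be sketched in a few lines of Python (the notes' own examples use MATLAB); here `math.sqrt` only serves as a reference answer to check against:

```python
import math

# Recast "find the square root of 1905" as the root-finding problem
# f(x) = 0, where f(x) = x^2 - 1905.
def f(x):
    return x * x - 1905

reference = math.sqrt(1905)   # library answer, for comparison only
print(f(reference))           # essentially 0: the true root satisfies f(x) = 0
```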
Graphical Method

One of the simplest techniques is to find the root by using the graphical method: plot f(x) and read off where the curve crosses the x-axis.

    % Graphic Technique to solve f(x)=x^2-1905
    x=0:0.1:100;     % plotting range (reconstructed; not preserved in the source)
    fx=x.^2-1905;
    plot(x,fx)

What would happen if the code line fx=x.^2-1905 was replaced with fx=x^2-1905 ?
A more accurate solution is obtained if the curve is zoomed in to a smaller scale around the crossing point.
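The zoom-in idea can be mimicked numerically. A minimal Python sketch (plotting omitted; the notes use MATLAB's plot): tabulate f(x) over a grid, report the sub-interval where the sign flips, then re-run over that sub-interval with a finer step to "zoom in".

```python
# Locate a sign change of f(x) = x^2 - 1905 on a grid, which is what
# reading the zero crossing off a plot does by eye.
def f(x):
    return x * x - 1905

def sign_change_interval(lo, hi, step):
    """Return the first grid interval [a, a+step] where f changes sign."""
    n = int(round((hi - lo) / step))
    for i in range(n):
        a = lo + i * step
        if f(a) <= 0 < f(a + step):
            return a, a + step
    return None

coarse = sign_change_interval(0, 100, 1.0)                # first pass over [0, 100]
fine = sign_change_interval(coarse[0], coarse[1], 0.01)   # "zoomed in" pass
print(coarse, fine)
```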
4.2 Blind Search Method
Try a simple trial-and-error search on the equation. Normally it starts with an initial guess and updates the guess by a small step delta on every trial until the root (or an approximate root) has been found. Consider the following example, which starts with the initial guess x = 0:

1. Set x = 0.
2. Find f(x).
3. If f(x) = 0 (to within a tolerance), x is the solution; otherwise set x = x + delta and go back to step 2.
function blind_search( )
x=0; %Initial solution
delta=0.01; %Incremental step
num=1905; %Data to compute the square root
fx=x*x-num; %Compute initial value of f(x)
iteration=1; %This is the first iteration
while fx < 0 %Iterate blind search, here we use sign change as the criterion
x=x+delta; %Increase the solution by delta,
fx=x*x-num; %and test the result
iteration=iteration+1; %One more iteration added
end
What will happen if we use while fx ~= 0 as the criterion in the loop to find the solution?
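A Python port of blind_search (the notes' version is MATLAB) makes the question above easy to explore: because sqrt(1905) is irrational, the iterate x = k*delta never satisfies fx == 0 exactly, so a `while fx ~= 0` loop would step past the root and never stop; the sign-change test below does terminate.

```python
# Blind search: step x upward by delta until f(x) changes sign.
x = 0.0
delta = 0.01               # incremental step
num = 1905                 # data to compute the square root of
fx = x * x - num           # initial value of f(x)
iteration = 1              # this is the first iteration
while fx < 0:              # sign change as the stopping criterion
    x += delta             # increase the trial solution by delta,
    fx = x * x - num       # and re-test
    iteration += 1
print(x, iteration)        # stops just past sqrt(1905) ~ 43.6463
```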
Key Characteristics:

- It is simple but inefficient and time consuming. The number of iterations increases enormously when the constant in the equation changes from 1905 to 2e8.
- The absolute true error relates to the step size delta; the maximum absolute true relative error increases with the step size and is inversely proportional to the absolute value of the root.
- The number of computations depends on the initial guess and is inversely proportional to the step size delta. There is a compromise between the accuracy and the computational complexity.
- It cannot guarantee to find the root. Consider the following example:

      f(x) = 1905 + x^3

  The previous program doesn't work here because the solution for x is a negative value. If the initial guess remains at x = 0, then a negative value of delta should be used, OR (any other suggestion??)
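One possible answer, sketched in Python (the direction-probing heuristic is a suggestion of mine, not from the notes): probe one step from the initial guess and walk in the direction that makes |f(x)| shrink, stopping at the first sign change. With f(x) = 1905 + x^3 this reaches the negative root.

```python
# Direction-aware blind search for f(x) = 1905 + x^3, whose root is negative.
def f(x):
    return 1905 + x ** 3

x = 0.0
delta = 0.01
# Heuristic (an assumption, not from the notes): step whichever way |f| shrinks.
step = delta if abs(f(x + delta)) < abs(f(x)) else -delta
start_sign = f(x) > 0
while (f(x) > 0) == start_sign:    # walk until the sign of f flips
    x += step
print(x)                           # about -12.40, near -(1905 ** (1/3))
```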
4.3 Bisection Method
4.3.1 General
If the function has a root, it will cross zero at some point. This implies that if there is a root between ak and bk, f(ak) and f(bk) will be of opposite sign, i.e.

    f(ak) > 0 and f(bk) < 0,  OR  f(ak) < 0 and f(bk) > 0,

equivalently f(ak)*f(bk) < 0. This forms the basis of the Bisection Method.

[Figure: f(x) crossing the x-axis between ak and bk]
[Figure: general cases of a sign change between ak and bk]
[Figure: special cases between ak and bk]
In the blind search example in Section 4.2 we used the sign change in f(x) to find the root. Here we use the same principle to find the roots of other functions.

4.3.2 Algorithm

Take two points ak and bk, with f(ak) and f(bk) of opposite sign. Since we have no idea what the exact solution is, a good approximation is to use the mean value (ak + bk)/2 as a trial. Compute the mid value ck = (ak + bk)/2, and now use some imagination.

[Figure: f(x) with the mid point ck = (ak + bk)/2 between ak and bk]

If f(ck) is zero or within a given threshold, ck is the solution. If not, one or more iteration(s) is/are required.
If f(bk) and f(ck) have the same sign, it is unlikely that there is a root between them (though there may be an even number of roots between them). Similarly, if f(ak) and f(ck) have the same sign, it is unlikely that the root lies between them. So in the following curve, which interval should we consider for the next iteration, [ak, ck] or [ck, bk]?

[Figure: f(x) with ak, ck and bk marked; the sign change lies in [ak, ck]]
Logically, we should consider the interval [ak, ck] because f(ak) and f(ck) have opposite signs. In the next iteration, i.e. k+1:

    ak+1 = ak,   bk+1 = ck,   ck+1 = (ak+1 + bk+1)/2

Algorithm
1. Choose ak and bk such that f(ak) and f(bk) have opposite signs.
2. Compute the mid point ck = (ak + bk)/2 and evaluate f(ck).
3. If |f(ck)| [or the approximation error] is within err_bound, ck is the solution; stop.
4. Otherwise keep the half-interval whose end points give opposite signs of f, i.e. [ak, ck] or [ck, bk], and go back to step 2.
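The algorithm above, sketched as a Python function (the notes' worked examples use MATLAB); the test interval and error bound below are illustrative:

```python
# Bisection: repeatedly halve a bracketing interval [a, b] with
# f(a)*f(b) < 0, keeping the half whose end points disagree in sign.
def bisection(f, a, b, err_bound=1e-3, max_iter=100):
    if f(a) * f(b) >= 0:
        raise ValueError("interval must bracket a root")
    for _ in range(max_iter):
        c = (a + b) / 2.0
        if abs(f(c)) <= err_bound:   # |f(c)| used as the stopping criterion
            break
        if f(a) * f(c) < 0:          # root lies in [a, c]
            b = c
        else:                        # root lies in [c, b]
            a = c
    return c

root = bisection(lambda x: x * x - 1905, 0.0, 100.0)
print(root)    # close to sqrt(1905) ~ 43.6463
```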
Example

Determine the root of the following by using err_bound = 1e-3:

    f(x) = sqrt(39.24*x) * tanh(sqrt(39.24/x)) - 36

    [ Note: tanh(x) = sinh(x)/cosh(x) = (e^x - e^-x)/(e^x + e^-x) ]

Assume the root is between 50 and 200.
Example (cont.)
% Bisection Method using Matlab (bisection_ex1.m)
clear
true_ans=142.7376; a=50; b=200; c=(a+b)/2;
iteration=1;
err_bound=1e-3; % Set the bound at 1e-3.
fx_c=sqrt(39.24*c)*tanh(sqrt(39.24/c))-36;
while abs(fx_c)>err_bound
fx_a=sqrt(39.24*a)*tanh(sqrt(39.24/a))-36;
if fx_a*fx_c<0;
b=c;
else
a=c;
end;
c=(a+b)/2;
fx_c=sqrt(39.24*c)*tanh(sqrt(39.24/c))-36;
atr_error=abs((true_ans-c)/true_ans);
iteration=iteration+1;
end;
display(['The estimated x by using Bisection Method = ',num2str(c)])
display(['The |f(c)| value by using Bisection Method = ',num2str(abs(fx_c))])
display(['Number of iterations by using Bisection Method = ',num2str(iteration)])
display(['The absolute true relative error by using Bisection Method = ',num2str(atr_error)])
Example (cont.)
>> bisection_ex1
The estimated x by using Bisection Method = 142.7246
The |f(c)| value by using Bisection Method = 0.00026643
Number of iterations by using Bisection Method = 10
The absolute true relative error by using Bisection Method = 9.1011e-005
Example (cont.)
% Bisection Method using Matlab (bisection_ex2.m)
clear
true_ans=142.7376; a=50; b=200; c=(a+b)/2;
iteration=1;
err_bound=1e-3; % Set the bound at 1e-3.
atr_error=abs((true_ans-c)/true_ans);
fx_c=sqrt(39.24*c)*tanh(sqrt(39.24/c))-36;
while atr_error>err_bound
fx_a=sqrt(39.24*a)*tanh(sqrt(39.24/a))-36;
if fx_a*fx_c<0;
b=c;
else
a=c;
end;
c=(a+b)/2;
fx_c=sqrt(39.24*c)*tanh(sqrt(39.24/c))-36;
atr_error=abs((true_ans-c)/true_ans);
iteration=iteration+1;
end;
display(['The estimated x by using Bisection Method = ',num2str(c)])
display(['The |f(c)| value by using Bisection Method = ',num2str(abs(fx_c))])
display(['Number of iterations by using Bisection Method = ',num2str(iteration)])
display(['The absolute true relative error by using Bisection Method = ',num2str(atr_error)])
Example (cont.)

How does it affect the result(s) if one code line of fx_c=sqrt(39.24*c)*tanh(sqrt(39.24/c))-36 is removed, i.e. fx_c is evaluated at the top of the loop but no longer re-evaluated after c is updated?
Example (cont.)
% Bisection Method using Matlab (bisection_ex2.m)
clear
true_ans=142.7376; a=50; b=200; c=(a+b)/2;
iteration=1;
err_bound=1e-3; % Set the bound at 1e-3.
atr_error=abs((true_ans-c)/true_ans);
fx_c=sqrt(39.24*c)*tanh(sqrt(39.24/c))-36;
while atr_error>err_bound
fx_a=sqrt(39.24*a)*tanh(sqrt(39.24/a))-36;
fx_c=sqrt(39.24*c)*tanh(sqrt(39.24/c))-36;
if fx_a*fx_c<0;
b=c;
else
a=c;
end;
c=(a+b)/2;
atr_error=abs((true_ans-c)/true_ans);
iteration=iteration+1;
end;
display(['The estimated x by using Bisection Method = ',num2str(c)])
display(['The |f(c)| value by using Bisection Method = ',num2str(abs(fx_c))])
display(['Number of iterations by using Bisection Method = ',num2str(iteration)])
display(['The absolute true relative error by using Bisection Method = ',num2str(atr_error)])
Example (cont.)

The displayed |f(c)| answer is different because fx_c hasn't been updated with the latest c value. All the other displayed answers remain the same.
Example (cont.): here the absolute approximate relative error is used as the termination criterion.
% Bisection Method using Matlab (bisection_ex3.m)
clear
true_ans=142.7376; a=50; b=200; c=(a+b)/2;
iteration=1;
err_bound=1e-3; % Set the bound at 1e-3.
aa_error=err_bound+1; % Force to do at least 1 iteration
fx_c=sqrt(39.24*c)*tanh(sqrt(39.24/c))-36;
while aa_error>err_bound
fx_a=sqrt(39.24*a)*tanh(sqrt(39.24/a))-36;
if fx_a*fx_c<0;
b=c;
else
a=c;
end;
c_old=c;
c=(a+b)/2;
fx_c=sqrt(39.24*c)*tanh(sqrt(39.24/c))-36;
aa_error=abs((c-c_old)/c); % Get the absolute approx. relative error
atr_error=abs((true_ans-c)/true_ans); % Get the absolute true relative error
iteration=iteration+1;
end;
display(['The estimated x by using Bisection Method = ',num2str(c)])
display(['The |f(c)| value by using Bisection Method = ',num2str(abs(fx_c))])
display(['Number of iterations by using Bisection Method = ',num2str(iteration)])
display(['The absolute approximated relative error by using Bisection Method = ',num2str(aa_error)])
display(['The absolute true relative error by using Bisection Method = ',num2str(atr_error)])
Example (cont.)

Compared with bisection_ex1, note that the absolute true relative error increases even though the number of iterations increases from 10 to 11.
Advanced Level
4.3.3 Termination Criteria and Error Estimates

Example (cont.)

[Table: ak, bk and ck at each iteration -- values not preserved]
The ragged nature of the true error is due to the fact that the true root can lie anywhere within the bracketing interval. The error will be large when the true root is near the middle of the interval, and small when the true root falls near either end of the interval.
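This behaviour can be checked numerically (a Python sketch; the bound used is the standard bisection bracket half-width, not a formula stated in the notes): the true error fluctuates raggedly, but never exceeds half the width of the current interval, which itself halves every iteration.

```python
import math

# The ragged true error of bisection stays below the half-width of the
# current bracketing interval.
f = lambda x: x * x - 1905
true_root = math.sqrt(1905)

a, b = 0.0, 100.0
true_errors, bounds = [], []
for _ in range(20):
    c = (a + b) / 2.0
    true_errors.append(abs(c - true_root))
    bounds.append((b - a) / 2.0)        # guaranteed error bound at this step
    if f(a) * f(c) < 0:
        b = c
    else:
        a = c

print(all(e <= m for e, m in zip(true_errors, bounds)))   # True
```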
4.4 Newton-Raphson Method
[Figure: the tangent to f(x) at xk crosses the x-axis at xk+1]

If the initial guess is only ONE point xk, the point where the tangent at (xk, f(xk)) crosses the x-axis usually represents an improved estimate of the root. Note that xk+1 is closer to the actual root than xk. By iterating continuously, a more precise root can be found.

http://en.wikipedia.org/wiki/Newton's_method
The slope of the tangent at xk is

    f'(xk) = (f(xk) - 0) / (xk - xk+1),

i.e.

    xk+1 = xk - f(xk)/f'(xk)

and similarly for the next iteration:

    xk+2 = xk+1 - f(xk+1)/f'(xk+1)

1. Start with k = 1.
2. Estimate the initial point xk.
3. Calculate f(xk).
4. If |f(xk)| [or the approximation error] is within err_bound, then xk is the solution; otherwise go to step 5.
5. Find the next point xk+1 = xk - f(xk)/f'(xk), set k = k + 1, and then go to step 3.

The iterates proceed as x1, x2, x3, ...
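Steps 1-5 above, sketched in Python (the notes use MATLAB); as the test function, f(x) = x^2 - 2e8 is borrowed from the square-root example used later for the Secant Method, with the derivative f'(x) = 2x supplied analytically:

```python
# Newton-Raphson: x_{k+1} = x_k - f(x_k)/f'(x_k), stopping when
# |f(x_k)| falls within err_bound.
def newton_raphson(f, dfdx, x, err_bound=1e-3, max_iter=100):
    for k in range(1, max_iter + 1):
        fx = f(x)
        if abs(fx) <= err_bound:
            return x, k
        x = x - fx / dfdx(x)          # follow the tangent to the x-axis
    return x, max_iter

f = lambda x: x * x - 2e8
dfdx = lambda x: 2 * x
root, iterations = newton_raphson(f, dfdx, 1.0)
print(root, iterations)   # root close to sqrt(2e8) ~ 14142.1356
```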
Applying the method to f(x) = x^10 - 1 with the initial guess x1 = 0.5 gives:

    Iteration   xk          Absolute true relative error
    1           0.5         50%
    2           51.65       5,065%
    3           46.485      4,549%
    4           41.8365     4,084%
    :           :           :
    41          1.002316    0.2316%
    42          1.000024    0.0024%
    43          1.000000    0.0000%
Key Characteristics:
- The derivative f'(x) is required in addition to f(x) itself.
- Convergence is fast (quadratic) once the iterate is close to a simple root.
- Convergence is not guaranteed: a poor initial guess, or a point where f'(xk) is close to zero, can throw the iterate far from the root, as in the example above where x2 jumps to 51.65.
4.5 Secant Method **
The Secant Method is the most popular among the many variants of the Newton Method. A secant is used to approximate the tangent to the curve. Here, we use TWO points, xk-1 and xk, and extrapolate the straight line through them to where it cuts zero.

[Figure: the secant through (xk-1, f(xk-1)) and (xk, f(xk)) crosses the x-axis at xk+1]

In the next iteration, we use the newly found point xk+1 together with the previous point xk to extrapolate xk+2. Repeating the iteration may ultimately move the estimated point to a location which is close to, or equal to, the root.
    xk+1 = xk - (xk - xk-1)*f(xk) / (f(xk) - f(xk-1))

As the Secant Method does not always bracket the root, the algorithm may not converge for functions that are not sufficiently smooth. Otherwise, repeating the iteration moves the value of x towards the root.
Starting from the NR update

    xk+1 = xk - f(xk)/f'(xk)

if f(x) is approximately linear within the region between xk-1 and xk, then

    f'(xk) ≈ (f(xk) - f(xk-1)) / (xk - xk-1)

i.e.

    xk+1 = xk - (xk - xk-1)*f(xk) / (f(xk) - f(xk-1))
[Figure: the first two iterations of the Secant Method; the red curve shows the function f(x) and the blue lines are the secants]

http://en.wikipedia.org/wiki/Secant_method
Example:

Find the square root of 2e8, i.e. find the solution of

    f(x) = x^2 - 2e8 = 0

with the initial guesses x = 1 and x = 0, and the same error bound as that in the Newton-Raphson Method, i.e. err_bound = 1e-3.
-----------------------------------------------------------
Number of iterations = 27 (19) [1414215]

The result from Newton-Raphson is in parentheses (red) and the result from Blind Search is in brackets (blue). It appears that the results from the Secant Method are similar to those from Newton-Raphson.
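A Python sketch of this example (the notes' code is MATLAB); the guard against a zero denominator is an addition of mine, needed if f(xk) and f(xk-1) ever become equal near convergence:

```python
# Secant method on f(x) = x^2 - 2e8 from the guesses 0 and 1:
# x_{k+1} = x_k - (x_k - x_{k-1})*f(x_k) / (f(x_k) - f(x_{k-1})).
def secant(f, x_prev, x, err_bound=1e-3, max_iter=200):
    for k in range(1, max_iter + 1):
        fx, fx_prev = f(x), f(x_prev)
        if abs(fx) <= err_bound or fx == fx_prev:   # converged / flat secant
            return x, k
        x_prev, x = x, x - (x - x_prev) * fx / (fx - fx_prev)
    return x, max_iter

f = lambda x: x * x - 2e8
root, iterations = secant(f, 0.0, 1.0)
print(root, iterations)   # root close to sqrt(2e8) ~ 14142.1356
```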
Key Characteristics:
- The Secant Method is the most popular among the many variants of the NR (Newton-Raphson) Method. Instead of using the exact tangent of the curve, i.e. f'(x), as in the NR Method, the secant of the curve (i.e. an approximate tangent) is used.
- TWO initial guesses are necessary, but there is no restriction on the sign of the function at these two points.
- The final solution may not lie between the two initial guesses.
- The behaviour is similar to the NR Method and convergence is not guaranteed.
- No derivative is required, so the computation is a bit simpler. In some cases it is more stable than the NR Method.
- In general, the convergence rate is slower than that of the NR Method by ~40%.
4.6 Regula-Falsi Method
Like bisection, the method keeps a bracketing pair [ak, bk] with f(ak) and f(bk) of opposite sign, but instead of the mid point it takes ck at the point where the straight line joining (ak, f(ak)) and (bk, f(bk)) crosses the x-axis:

    ck = bk - f(bk)*(bk - ak) / (f(bk) - f(ak))

[Figure: the chord from (ak, f(ak)) to (bk, f(bk)) crosses the x-axis at ck]
[Figure: as in bisection, the sub-interval [ak+1, bk+1] that still brackets the root is kept for the next iteration]
http://en.wikipedia.org/wiki/False_position_method
However, the convergence rate is not always faster than that of the Bisection Method. Consider the following example: find the root of f(x) = x^10 - 1 between 0 and 1.3. Using the Bisection Method, the following result is obtained after 5 iterations.

[Table: ak, bk and ck per iteration for the Bisection Method and for the Regula-Falsi Method -- values not preserved]
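The comparison can be reproduced with a Python sketch (illustrative, not the notes' code): after 5 iterations on f(x) = x^10 - 1 over [0, 1.3], bisection's estimate is already near the root, while the false-position point is still creeping in from the left end.

```python
# Five iterations of Bisection vs Regula-Falsi on f(x) = x^10 - 1, [0, 1.3].
f = lambda x: x ** 10 - 1

def falsi_point(a, b):
    # Where the chord from (a, f(a)) to (b, f(b)) crosses the x-axis.
    return b - f(b) * (b - a) / (f(b) - f(a))

def shrink(a, b, c):
    # Keep whichever sub-interval still brackets the root.
    return (a, c) if f(a) * f(c) < 0 else (c, b)

bis = rf = (0.0, 1.3)
for _ in range(5):
    bis = shrink(*bis, (bis[0] + bis[1]) / 2.0)
    rf = shrink(*rf, falsi_point(*rf))

c_bis = (bis[0] + bis[1]) / 2.0
c_rf = falsi_point(*rf)
print(c_bis, c_rf)   # bisection is much closer to the root x = 1 here
```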
Key Characteristics:
- Like the Bisection Method, it always keeps the root bracketed, so convergence is guaranteed for a continuous function.
- It usually converges faster than bisection, but not always: one end point can remain fixed for many iterations, as in the f(x) = x^10 - 1 example above.
Comparison