[Figure: the secant method — iterates x0, x1, x2 and function values f(x0), f(x1)]
Secant Method
From similar triangles we can write that:

    (f(x1) - f(x0)) / (x1 - x0) = f(x1) / (x1 - x2)

Solving for x2 we get:

    x2 = x1 - f(x1)(x1 - x0) / (f(x1) - f(x0))
Secant Method
Iteratively this is written as:

    x_{n+1} = x_n - f(x_n)(x_n - x_{n-1}) / (f(x_n) - f(x_{n-1}))
Algorithm
Given two guesses x0, x1 near the root,
If |f(x0)| < |f(x1)| then
    Swap x0 and x1.
Repeat
    Set x2 = x1 - f(x1)*(x1 - x0) / (f(x1) - f(x0))
    Set x0 = x1
    Set x1 = x2
Until |f(x2)| < tolerance value.
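The algorithm above can be sketched in Python. This is a minimal illustration; the test function x*x - 2, the starting guesses, and the tolerance are illustrative choices, not from the slides:

```python
def secant(f, x0, x1, tol=1e-10, max_iter=50):
    """Find a root of f using the secant method."""
    # Ensure x1 is the better guess: swap so that |f(x1)| <= |f(x0)|.
    if abs(f(x1)) > abs(f(x0)):
        x0, x1 = x1, x0
    for _ in range(max_iter):
        # Next iterate: where the secant through (x0, f(x0)) and
        # (x1, f(x1)) crosses the x-axis.
        x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
        # Keep the last two points for the next iteration.
        x0, x1 = x1, x2
        if abs(f(x2)) < tol:
            return x2
    raise RuntimeError("secant method did not converge")

# Example: root of x^2 - 2 (i.e. sqrt(2)).
root = secant(lambda x: x * x - 2, 1.0, 2.0)
```

Note that, unlike bisection, nothing here keeps the root between the two points; that is exactly the weakness discussed below.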
Because the new point should be closer to the root after the 2nd iteration, we usually choose the last two points.
After the first iteration there is only one new point; however, x1 is chosen so that it is closer to the root than x0.
This is not a hard and fast rule!
Discussion of Secant Method
The secant method has better convergence than the bisection method (see p. 40 of Applied Numerical Analysis).
Because the root is not bracketed, there are pathological cases where the algorithm diverges from the root.
It may fail if the function is not continuous.
Pathological Case
See Fig 1.2 page 40
False Position
The method of false position is seen as an improvement on the secant method.
It avoids the problems of the secant method by ensuring that the root is bracketed between the two starting points and remains bracketed between successive pairs of iterates.
False Position
This technique is similar to the bisection method, except that the next iterate is taken where the line through the pair of x-values intercepts the x-axis, rather than at the midpoint.
False Position
Algorithm
Given two guesses x0, x1 that bracket the root,
Repeat
    Set x2 = x1 - f(x1)*(x1 - x0) / (f(x1) - f(x0))
    If f(x2) is of opposite sign to f(x0) then
        Set x1 = x2
    Else
        Set x0 = x2
    End If
Until |f(x2)| < tolerance value.
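A minimal Python sketch of this algorithm. Again the test function and tolerance are illustrative choices, not from the slides:

```python
def false_position(f, x0, x1, tol=1e-10, max_iter=100):
    """Find a root of f, assuming f(x0) and f(x1) have opposite signs."""
    if f(x0) * f(x1) > 0:
        raise ValueError("x0 and x1 must bracket the root")
    for _ in range(max_iter):
        # Same update formula as the secant method...
        x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
        if abs(f(x2)) < tol:
            return x2
        # ...but the replacement rule keeps the root bracketed.
        if f(x2) * f(x0) < 0:
            x1 = x2   # root lies in [x0, x2]
        else:
            x0 = x2   # root lies in [x2, x1]
    raise RuntimeError("false position did not converge")

# Example: root of x^2 - 2 bracketed by [1, 2].
root = false_position(lambda x: x * x - 2, 1.0, 2.0)
```

The sign test on f(x2)*f(x0) is what distinguishes this from the plain secant update: one endpoint of the bracket is replaced, so the root can never escape the interval.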
Discussion of False Position Method
This method achieves better convergence, but with a more complicated algorithm.
It may fail if the function is not continuous.
Newton's Method
The bisection method is useful up to a point: to get good accuracy, a large number of iterations must be carried out.
A second inadequacy occurs when there are multiple roots to be found.
Newton's method is a much better algorithm.
Newton's Method
Newton's method relies on calculus: it uses a linear approximation to the function, found by taking the tangent to the curve.
Newton's Method
The algorithm requires an initial guess, x0, which is close to the root.
The point where the tangent line to the function f(x) meets the x-axis is the next approximation, x1.
This procedure is repeated until the value of f(x) is sufficiently close to zero.
Newton's Method
The equation for Newton's Method can be determined graphically!
From the diagram, tan θ = f'(x0) = f(x0) / (x0 - x1).
Thus, x1 = x0 - f(x0)/f'(x0).
Newton's Method
The general form of Newton's Method is:

    x_{n+1} = x_n - f(x_n)/f'(x_n)

Algorithm
Pick a starting value for x
Repeat
    x := x - f(x)/f'(x)
Return x
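The algorithm above can be sketched in Python; the stopping rule (change in x below a tolerance), the test function, and the derivative passed in explicitly are all illustrative assumptions:

```python
def newton(f, fprime, x, tol=1e-12, max_iter=50):
    """Newton's method: follow the tangent line to the next approximation."""
    for _ in range(max_iter):
        step = f(x) / fprime(x)   # f(x_n)/f'(x_n)
        x = x - step
        if abs(step) < tol:       # change in x is sufficiently small
            return x
    raise RuntimeError("Newton's method did not converge")

# Example: root of x^2 - 2, f'(x) = 2x, starting near 1.5.
root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.5)
```

Unlike the secant method, each step needs the derivative f'(x) as well as f(x), but in exchange convergence near a simple root is quadratic.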
Fixed point iteration
Newton's method is a special case of the final algorithm: fixed point iteration.
The method relies on the Fixed Point Theorem:
If g(x) and g'(x) are continuous on an interval containing a root of the equation g(x) = x, and if |g'(x)| < 1 for all x in the interval, then the sequence x_{n+1} = g(x_n) will converge to the root.
For Newton's method, g(x) = x - f(x)/f'(x). However, at a fixed point f(x) = 0, thus g(x) = x.
The fixed point iteration method requires us to rewrite the equation f(x) = 0 in the form x = g(x), and then to find x = a such that a = g(a), which is equivalent to f(a) = 0.
The value of x such that x = g(x) is called a fixed point of g(x).
Fixed point iteration essentially solves two equations simultaneously: y = x and y = g(x).
The point of intersection of these two curves is the solution to x = g(x), and thus to f(x) = 0.
Fixed point iteration
Consider the example on page 54.
Fixed point iteration
We get successive iterates as follows:
Start at the initial x value on the x-axis (p0).
Go vertically to the curve.
Then to the line y = x.
Then to the curve.
Then to the line. Repeat!
Fixed point iteration
Algorithm
Pick a starting value for x.
Repeat
x := g(x)
Return x
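A minimal Python sketch of this algorithm. The example equation x = cos(x) is an illustrative choice, not from the slides; it is a classic case where |g'(x)| = |sin(x)| < 1 near the fixed point, so the theorem guarantees convergence:

```python
import math

def fixed_point(g, x, tol=1e-10, max_iter=200):
    """Iterate x := g(x) until the change in x is below the tolerance."""
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("fixed point iteration did not converge")

# Solve x = cos(x): here g(x) = cos(x), and |g'(x)| = |sin(x)| < 1
# near the fixed point (approximately 0.739), so the iteration converges.
root = fixed_point(math.cos, 1.0)
```

Note the stopping test compares successive iterates, matching the convergence criterion described below.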
Fixed point iteration
The procedure starts from an initial guess of
x, which is improved by iteration until
convergence is achieved.
For convergence to occur, the derivative
(dg/dx) must be smaller than 1 in magnitude
for the x values that are encountered during
the iterations.
Fixed point iteration
Convergence is established by requiring that the change in x from one iteration to the next be no greater in magnitude than some small quantity ε.
Fixed point iteration
Alternative algorithm: