MEE3005
1) Secant method: The Newton-Raphson algorithm requires the evaluation of two functions (the function and its derivative) at each iteration. If these are complicated expressions, the evaluations take a considerable amount of effort in hand calculation, or a large amount of CPU time in machine computation. Hence it is desirable to have a method that converges as fast as Newton's method (see the section on the order of numerical methods for theoretical details) yet requires evaluation of the function alone. Let x0 and x1 be two initial approximations to the root 's' of f(x) = 0, with function values f(x0) and f(x1) respectively. If x2 is the point of intersection of the x-axis and the line joining the points (x0, f(x0)) and (x1, f(x1)), then x2 is closer to 's' than x0 and x1. The equation relating x0, x1 and x2 is found by considering the slope 'm' of that line:
x2 - x1 = - f(x1) * (x1 - x0) / (f(x1) - f(x0))

x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
This formula is the same as that of the Regula-falsi scheme of the root-bracketing methods, but the implementation differs. The Regula-falsi method begins with two initial approximations 'a' and 'b' such that a < s < b, where s is the root of f(x) = 0. It proceeds to the next iteration by calculating c (the x2 above) from this formula, and then chooses the interval (a, c) or (c, b) according as f(a) * f(c) < 0 or > 0 respectively. The secant method, on the other hand, starts with two initial approximations x0 and x1 (which need not bracket the root), calculates x2 by the same formula as the Regula-falsi method, and proceeds to the next iteration without any root bracketing.
Algorithm - Secant Method
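The iteration described above can be sketched as follows. The test equation f(x) = x^3 - x - 2 is a made-up illustration, not one of the tabulated examples (whose functions are not stated in the text):

```python
def secant(f, x0, x1, tol=1e-10, max_iter=50):
    """Secant iteration: Newton's scheme with f'(x) replaced by the slope
    of the chord through (x0, f(x0)) and (x1, f(x1))."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:                   # horizontal chord: cannot proceed
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)   # formula derived above
        if abs(x2 - x1) < tol:
            return x2
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
    return x1

# Hypothetical test equation (not from the text): f(x) = x^3 - x - 2
f = lambda x: x**3 - x - 2.0
root = secant(f, 1.0, 2.0)
print(round(root, 5))   # 1.52138
```

Note that, unlike Regula-falsi, nothing here checks that the root stays bracketed; the two most recent iterates simply replace the oldest pair.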
i 0 1 2 3 4 5 6
xi 0 1 0.471 0.308 0.363 0.36 0.36
So the iterative process converges to 0.36 in six iterations.
i 0 1 2 3 4 5 6 7
xi 1 2 1.71429 1.83853 1.85778 1.85555 1.85558 1.85558
2) Newton-Raphson method: Let the given equation be f(x) = 0 and let x0 be the initial approximation to the root. Draw the tangent to the curve y = f(x) at x0 and extend it until it meets the x-axis; the point of intersection x1 of the tangent with the x-axis is the next approximation to the root of f(x) = 0. Repeat the procedure with x1 in place of x0 until the iteration converges.
If m is the slope of the tangent at the point x0 and θ is the angle between the tangent and the x-axis, then

m = tan θ = f '(x0) = f(x0) / (x0 - x1)

(x0 - x1) * f '(x0) = f(x0)

x1 = x0 - f(x0) / f '(x0)

This can be generalized to the iterative process

xi+1 = xi - f(xi) / f '(xi),   i = 0, 1, 2, . . .
The same formula can also be obtained from the Taylor series. Let x1 = x0 + h be the root of f(x) = 0.

f(x1) = f(x0 + h) = f(x0) + h f '(x0) + (h^2/2) f ''(x0) + . . .

0 = f(x0) + h f '(x0) + (h^2/2) f ''(x0) + . . .

Neglecting the terms of second and higher order in h,

h = - f(x0) / f '(x0)

x1 = x0 - f(x0) / f '(x0)

or in general

xi+1 = xi - f(xi) / f '(xi),   i = 0, 1, 2, . . .
Numerical Example :
i 0 1 2 3 4
xi 2 1.90016 1.89013 1.89003 1.89003
So the iterative process converges to 1.89003 in four iterations.
Example :
Show that the initial approximation x0 for finding 1/N, where N is a positive integer, by Newton's method must satisfy 0 < x0 < 2/N for convergence.
Proof : Let f(x) = 1/x - N = 0. Then f '(x) = -1/x^2, and

xi+1 = xi - (1/xi - N) / (-1/xi^2) = 2xi - N xi^2,   i = 0, 1, 2, . . .

Now draw the curves y = x and y = 2x - Nx^2. The first curve is a straight line through the origin and the second is the parabola (x - 1/N)^2 = -(1/N)(y - 1/N). The point of intersection of these two curves is the required value 1/N. From the figure, we find that the iteration diverges for any initial value outside the range 0 < x0 < 2/N. If x0 = 0, the iteration does not converge to 1/N but remains zero always.
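The behaviour proved above is easy to observe numerically. A minimal sketch of the division-free iteration, with N = 3 and starting values chosen to illustrate the three cases (these particular values are not from the text):

```python
def reciprocal(N, x0, iters=10):
    """Newton iteration x_{i+1} = 2*x_i - N*x_i**2 for 1/N; note that the
    scheme itself needs no division, which is its practical appeal."""
    x = x0
    for _ in range(iters):
        x = 2.0*x - N*x*x
    return x

print(reciprocal(3, 0.5))    # x0 inside (0, 2/3): converges to 1/3
print(reciprocal(3, 0.7))    # x0 outside (0, 2/3): diverges
print(reciprocal(3, 0.0))    # x0 = 0: remains zero always
```

Since the error obeys e_{i+1} = N e_i^2, convergence inside the interval is quadratic: a handful of iterations already reach machine precision.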
3. Find the root of cos(x) - x * exp(x) = 0
i 0 1 2 3 4 5 6 7
xi 2 1.34157 0.8477 0.58756 0.52158 0.51777 0.51776 0.51776
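The tabulated iterates can be reproduced with a short Newton-Raphson routine; the derivative f '(x) = -sin(x) - e^x(1 + x) is worked out here, as the text does not show it:

```python
import math

def newton(f, df, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson iteration x_{i+1} = x_i - f(x_i)/f'(x_i)."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

f  = lambda x: math.cos(x) - x * math.exp(x)
df = lambda x: -math.sin(x) - math.exp(x) * (1.0 + x)   # f'(x)
root = newton(f, df, 2.0)                               # x0 = 2 as in the table
print(round(root, 5))   # 0.51776, as in the table
```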
L U DECOMPOSITION METHOD
In these methods the coefficient matrix A of the given system of equations AX = b is written as the product of a lower triangular matrix L and an upper triangular matrix U, such that A = LU, where the elements of L satisfy lij = 0 for i < j and the elements of U satisfy uij = 0 for i > j; that is, the matrices L and U look like

     l11  0    0   ...  0            u11  u12  ...  u1n
     l21  l22  0   ...  0            0    u22  ...  u2n
L =  ...  ...  ... ...  ...     U =  ...  ...  ...  ...
     ln1  ln2  ... ...  lnn          0    0   ...   unn

The elements of U are computed from

uij = ( aij - Σ(k=1 to i-1) lik ukj ) / lii ,   j > i.
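The u-formula above, with its division by lii, corresponds to Crout's scheme, in which U carries a unit diagonal (uii = 1); a sketch under that assumption, with a made-up 2x2 example:

```python
def crout_lu(A):
    """Crout decomposition A = L U with unit diagonal on U (uii = 1),
    matching the formula uij = (aij - sum lik*ukj) / lii."""
    n = len(A)
    L = [[0.0]*n for _ in range(n)]
    U = [[float(i == j) for j in range(n)] for i in range(n)]
    for j in range(n):
        for i in range(j, n):          # column j of L
            L[i][j] = A[i][j] - sum(L[i][k]*U[k][j] for k in range(j))
        for i in range(j+1, n):        # row j of U
            U[j][i] = (A[j][i] - sum(L[j][k]*U[k][i] for k in range(j))) / L[j][j]
    return L, U

def lu_solve(L, U, b):
    """Solve A x = b in two triangular sweeps: L y = b, then U x = y."""
    n = len(b)
    y = [0.0]*n
    for i in range(n):                             # forward substitution
        y[i] = (b[i] - sum(L[i][k]*y[k] for k in range(i))) / L[i][i]
    x = [0.0]*n
    for i in range(n-1, -1, -1):                   # back substitution (uii = 1)
        x[i] = y[i] - sum(U[i][k]*x[k] for k in range(i+1, n))
    return x

# Hypothetical example (not from the text):
A = [[4.0, 3.0], [6.0, 3.0]]
L, U = crout_lu(A)
x = lu_solve(L, U, [10.0, 12.0])
print(x)   # [1.0, 2.0]
```

Once L and U are known, any number of right-hand sides b can be solved with only the two O(n^2) substitution sweeps.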
GAUSS - SEIDEL METHOD
In this method the (n+1)th iterative values are used as soon as they are available, and the iterative scheme is defined by

xi(n+1) = (1/aii) { bi - Σ(j=1 to i-1) aij xj(n+1) - Σ(j=i+1 to m) aij xj(n) },   i = 1(1)m.

Again in matrix notation, the coefficient matrix A of the system Ax = b is split as A = D - L - U, where D has the diagonal elements of A, and L and U respectively have the lower-diagonal and upper-diagonal elements of A with a negative sign. The Gauss-Seidel scheme can then be written as

D x(n+1) = L x(n+1) + U x(n) + b

or (D - L) x(n+1) = U x(n) + b,

giving x(n+1) = (D - L)^(-1) U x(n) + (D - L)^(-1) b,

i.e., the Gauss-Seidel iteration matrix is QGS = (D - L)^(-1) U and CGS = (D - L)^(-1) b.
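A sketch of the component-wise scheme; the "use new values as soon as available" rule falls out naturally because x[j] for j < i has already been overwritten in the current sweep. The 2x2 diagonally dominant system is an illustration made up for this note:

```python
def gauss_seidel(A, b, iters=50):
    """Gauss-Seidel sweeps: updating x[i] in place means the terms with
    j < i automatically use the (n+1)th values, those with j > i the nth."""
    n = len(b)
    x = [0.0]*n
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j]*x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

# Hypothetical diagonally dominant system (not from the text):
# 4x1 + x2 = 9,  2x1 + 5x2 = 9   ->  exact solution x1 = 2, x2 = 1
x = gauss_seidel([[4.0, 1.0], [2.0, 5.0]], [9.0, 9.0])
print([round(v, 6) for v in x])   # [2.0, 1.0]
```

Diagonal dominance of A is a standard sufficient condition for this iteration to converge.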
THOMAS ALGORITHM
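The derivation under this heading is not reproduced in the text; as a sketch, the standard algorithm is Gaussian elimination specialised to a tridiagonal system, costing O(n) instead of O(n^3). The diagonal-array storage convention and the 3x3 example below are assumptions for illustration:

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-diagonal (a[0] unused),
    b = main diagonal, c = super-diagonal (c[-1] unused), d = RHS."""
    n = len(b)
    cp, dp = [0.0]*n, [0.0]*n
    cp[0], dp[0] = c[0]/b[0], d[0]/b[0]
    for i in range(1, n):                      # forward elimination
        denom = b[i] - a[i]*cp[i-1]
        cp[i] = c[i]/denom if i < n-1 else 0.0
        dp[i] = (d[i] - a[i]*dp[i-1]) / denom
    x = [0.0]*n
    x[-1] = dp[-1]
    for i in range(n-2, -1, -1):               # back substitution
        x[i] = dp[i] - cp[i]*x[i+1]
    return x

# Hypothetical example (not from the text):
# 2x1 + x2 = 4,  x1 + 2x2 + x3 = 8,  x2 + 2x3 = 8  ->  x = (1, 2, 3)
x = thomas([0.0, 1.0, 1.0], [2.0, 2.0, 2.0], [1.0, 1.0, 0.0], [4.0, 8.0, 8.0])
print([round(v, 6) for v in x])   # [1.0, 2.0, 3.0]
```

This solver is the workhorse for cubic-spline systems such as the one in the interpolation section below it in most texts.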
POWER METHOD TO FIND THE EIGENVALUES:
We first assume that the matrix A has a dominant eigenvalue with a corresponding dominant eigenvector. As stated before, the power method for approximating eigenvalues is iterative: we start with a non-zero initial approximation x0 of the dominant eigenvector of A and obtain the sequence of vectors

x1 = A x0,   x2 = A x1 = A^2 x0,   . . . ,   xk = A xk-1 = A^k x0.

When k is large, we can obtain a good approximation of the dominant eigenvector of A by properly scaling this sequence.
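A minimal sketch of the scaled iteration; scaling by the largest component at every step both prevents overflow and makes the scale factor itself converge to the dominant eigenvalue. The 2x2 matrix is a made-up example, not one of the text's:

```python
def power_method(A, x0, iters=100):
    """Repeatedly multiply by A and rescale by the largest component;
    the scale factor approaches the dominant eigenvalue and the scaled
    vector approaches the dominant eigenvector."""
    n = len(A)
    x = x0[:]
    lam = 0.0
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        lam = max(y, key=abs)          # scaling factor
        x = [yi / lam for yi in y]
    return lam, x

# Hypothetical symmetric example (not from the text): eigenvalues 3 and 1
A = [[2.0, 1.0], [1.0, 2.0]]
lam, v = power_method(A, [1.0, 0.0])
print(round(lam, 6), [round(c, 6) for c in v])   # 3.0 [1.0, 1.0]
```

The error shrinks like (|λ2|/|λ1|)^k, so a well-separated dominant eigenvalue converges quickly, while a near-tie converges slowly.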
INTERPOLATION WITH CUBIC SPLINE:
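The derivation under this heading does not survive in the text; as a sketch, a cubic spline is a piecewise cubic interpolant with continuous first and second derivatives, whose interior second derivatives satisfy a tridiagonal system. The natural end condition (second derivative zero at both ends) and the sample data below are assumptions for illustration:

```python
import bisect

def natural_cubic_spline(xs, ys):
    """Natural cubic spline through (xs[i], ys[i]): continuous first and
    second derivatives, with M_0 = M_n = 0 at the end points."""
    n = len(xs) - 1
    h = [xs[i+1] - xs[i] for i in range(n)]
    # tridiagonal system for the interior second derivatives M_1..M_{n-1}
    m = n - 1
    diag = [2.0*(h[i] + h[i+1]) for i in range(m)]
    rhs  = [6.0*((ys[i+2] - ys[i+1])/h[i+1] - (ys[i+1] - ys[i])/h[i])
            for i in range(m)]
    for i in range(1, m):              # Thomas algorithm: forward sweep
        w = h[i] / diag[i-1]
        diag[i] -= w * h[i]
        rhs[i]  -= w * rhs[i-1]
    M = [0.0]*(n + 1)                  # natural conditions: M_0 = M_n = 0
    for i in range(m - 1, -1, -1):     # back substitution
        M[i+1] = (rhs[i] - h[i+1]*M[i+2]) / diag[i]
    def s(x):
        i = min(max(bisect.bisect_right(xs, x) - 1, 0), n - 1)
        t = x - xs[i]
        b = (ys[i+1] - ys[i])/h[i] - h[i]*(2.0*M[i] + M[i+1])/6.0
        return ys[i] + b*t + 0.5*M[i]*t*t + (M[i+1] - M[i])/(6.0*h[i])*t**3
    return s

# Hypothetical data (not from the text): three points of a "tent"
s = natural_cubic_spline([0.0, 1.0, 2.0], [0.0, 1.0, 0.0])
print(s(0.5))   # 0.6875
```

Other end conditions (clamped, not-a-knot) change only the first and last equations of the tridiagonal system.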