Abstract
We study a class of third-order iterative methods for nonlinear equations in Banach spaces. A characterization of convergence under Kantorovich-type conditions and optimal error estimates are found.
Although, in general, these methods are not widely used because of their computational cost, we will show
some examples in which they are competitive and even cheaper than other, simpler methods. We focus our
analysis on both analytic and computational aspects.
© 2007 Elsevier Inc. All rights reserved.
Keywords: Third-order iterative methods; Convergence; Banach spaces
1. Introduction
One of the most important (and, hence, most studied) problems in numerical analysis is the
solution of nonlinear equations. Usually, the original equation
F (x) = 0
(where F is a nonlinear function from a Banach space X into another Banach space Y) has to be discretized in order to be solved computationally. Discretization produces a system of nonlinear
scalar equations whose complexity increases as the discretization becomes finer. Thus, we approximate the solution of the original equation by means of the solution of the discretized one, xm,
verifying
Fm (xm ) = 0
(m is the order of the discretization). An analysis of the equations will relate both solutions (original and discretized) with properties of the functions and their derivatives. Most properties of Fm
(differentiability, bounds of its norm and its derivatives and even the existence and uniqueness of
solution, for example) are usually inherited from those of F . It seems a sensible strategy to analyze the equation in the Banach space in order to look for practical conditions for the discretized
equation.
One of the best known ways to solve equations is by means of iterative methods. Roughly
speaking, an iterative method starts from one (or several) initial guess x0 (called the pivot), which
is improved by means of an iteration, x_{n+1} = Φ(xn). Conditions are imposed on x0 (and, eventually, on F or Φ) in order to ensure the convergence of {xn}n to the solution x*. This analysis,
usually known as Kantorovich type (after Kantorovich's results on the convergence of Newton's method), is based on a relationship between the problem in a Banach space and a single
nonlinear scalar equation which governs the behavior of the problem [1]. A priori error estimates
(depending only on the initial conditions) and, hence, the order of convergence (for a definition
of order we refer to [31]) can be obtained by using Kantorovich-type theorems.
The success of Newton's method and of similar methods of second or lower order has led to the wrong
idea that higher-order methods are no more than theoretical rarities with little or no practical
interest. A review of the literature on third-order methods in the last two decades
reveals that this is not true: third-order methods stand on their own, limited only, as happens
with almost every numerical technique, by the nature of the problem to be solved [6,7,9,20,28,
33,36,38]. Of course, third-order methods have a higher computational cost than other, simpler
methods [37], which makes them disadvantageous in general but, in some cases, it
pays to be a little more elaborate. The geometric interpretation of this type of scheme can be
found in [8].
This paper consists of two parts. In the first one, we define a class of third-order methods and analyze their convergence, obtaining error bounds from a theoretical point of view.
This family includes methods such as Chebyshev [13,16], Halley [2,14,15,22,29], the two-step method [32], the methods in [20], and some of their approximations using divided differences or similar techniques
[4,5,7,21,23]. In the second part, we discuss the computational features of the studied methods,
comparing them with other well-proven methods.
The estimates we are going to obtain are optimal in the sense of [32]. Estimates are usually obtained by a few techniques: mainly majorizing sequences, majorizing functions, or systems of
majorizing sequences. Optimality implies that there exists at least one equation for which
the bounds are exact (that is, the bounds coincide with the actual errors). In [15,16], some
bounds were obtained for two of the most studied methods, Halley and Chebyshev. These bounds,
though, are optimal only under restrictive conditions (third Fréchet derivatives equal to zero). From then
on, estimates have been improved by different authors, and new methods have been introduced
[10–12,17–19,26,27,30,34]. The estimates we present here are optimal under more general conditions, and may easily be extended to a larger class of iterative methods.
The main practical difficulty of the classical third-order iterative methods is the
evaluation of the second-order Fréchet derivative. For a nonlinear system of m equations and
m unknowns, the first Fréchet derivative is a matrix with m² entries, while the second Fréchet
derivative has m³ entries. This implies a huge number of operations to evaluate each
iteration. Some methods overcome these difficulties by evaluating the function and
its first derivative several times. For example, in [32], the following (two-step) third-order recurrence is proposed:
  y_n = x_n − F′(x_n)⁻¹ F(x_n),
  x_{n+1} = y_n − F′(x_n)⁻¹ F(y_n).
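As a concrete illustration, here is a minimal scalar sketch of this two-step recurrence; the test equation x³ − 2 = 0 and the pivot x0 = 1.5 are our own illustrative choices, not the paper's.

```python
# Minimal scalar sketch of the two-step recurrence above: one derivative
# evaluation and two function evaluations per iteration.

def two_step(f, df, x0, tol=1e-12, max_iter=50):
    """y = x - f(x)/f'(x);  x_next = y - f(y)/f'(x)  (f'(x) is reused)."""
    x = x0
    for _ in range(max_iter):
        d = df(x)
        y = x - f(x) / d        # Newton predictor
        x_new = y - f(y) / d    # corrector reuses the same derivative
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Illustrative example: solve x**3 - 2 = 0
root = two_step(lambda x: x**3 - 2, lambda x: 3 * x**2, x0=1.5)
```

Note the design point the paper makes: the derivative F′ is factored (or inverted) once per iteration and reused for both substeps.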
This method is, in general, cheaper than third-order methods requiring the evaluation of
the second derivative. However, in some cases the second derivative is easy to evaluate. In equilibrium problems, for instance, the function depends on the interaction between pairs of elements,
u_i u_j, and the second derivative is constant; therefore, it has to be evaluated only once for
the whole process. Some integral equations may be discretized in such a way that the second derivative is
also constant, and so on. In Section 6 we will show some of these examples.
The structure of this paper is as follows. In Section 2 we present the methods we are interested in. In Section 3, we construct a third-degree polynomial, depending on three parameters,
and we define a third-order sequence from which error estimates for these methods follow. In
Section 4, we state convergence and uniqueness theorems (of Kantorovich type [1]). In Section 5,
a posteriori error estimates are obtained (they depend not only on the pivot, but also on each
iterate). Finally, some numerical experiments are presented in Section 6.
2. Third-order iterative methods
Beginning from t0, we can consider different third-order methods for the equation f(t) = 0:

1. Halley:

  t_{n+1} = t_n − (1/(1 − (1/2)L_f(t_n))) f(t_n)/f′(t_n).

2. Chebyshev:

  t_{n+1} = t_n − (1 + (1/2)L_f(t_n)) f(t_n)/f′(t_n).

Here

  L_f(x) = f(x)f″(x)/f′(x)².
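Hedged scalar implementations of both schemes, written in terms of L_f(t) = f(t)f″(t)/f′(t)², may be sketched as follows; the sample equation and starting point are illustrative choices, not the paper's.

```python
# Scalar sketches of the Halley and Chebyshev schemes in terms of L_f.

def halley_step(f, df, d2f, t):
    L = f(t) * d2f(t) / df(t) ** 2
    return t - (1.0 / (1.0 - 0.5 * L)) * f(t) / df(t)

def chebyshev_step(f, df, d2f, t):
    L = f(t) * d2f(t) / df(t) ** 2
    return t - (1.0 + 0.5 * L) * f(t) / df(t)

def solve(step, f, df, d2f, t0, tol=1e-12, max_iter=50):
    t = t0
    for _ in range(max_iter):
        t_new = step(f, df, d2f, t)
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
    return t

# Illustrative test: x**3 - 2 = 0 from t0 = 1.5
f, df, d2f = (lambda t: t**3 - 2), (lambda t: 3 * t**2), (lambda t: 6 * t)
halley_root = solve(halley_step, f, df, d2f, 1.5)
chebyshev_root = solve(chebyshev_step, f, df, d2f, 1.5)
```

Both steps use exactly one evaluation of f, f′ and f″ per iteration; they differ only in how the correction factor depends on L_f.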
All these methods, and some of their approximations using divided differences [4,5,7], can be
written as (see [3]):

  t_{n+1} = t_n − (1 + θ_n + O(θ_n²)) f(t_n)/f′(t_n),

where

  θ_n := (1/2) f(t_n)f″(t_n)/f′(t_n)².

In Banach spaces, these schemes read

  x_{n+1} = x_n − (I + T_n + O(T_n²)) Γ_n,   (1)

where

  T_n := (1/2) F′(x_n)⁻¹F″(x_n)F′(x_n)⁻¹F(x_n),

and

  Γ_n := F′(x_n)⁻¹F(x_n).

We will consider the scalar sequence

  t_{n+1} = t_n − (1 + θ_n + O(θ_n²)) f(t_n)/f′(t_n),   (2)
  t_2 := (√(b² + 2c) − b)/c.

Proof. We define

  t_1 := −(b + √(b² + 2c))/c < 0.

Then, g(t) verifies the inequality (3).
The proof is direct. It is remarkable that the left-hand side of (3) is positive if b, c are positive. So,
for every pair of positive real numbers b, c, we can find positive values of a verifying
Lemma 1. In the following, we deal with t0 as defined in the
proof of Lemma 1. Now, we can state this proposition:
Proposition 1. Let a, b, c > 0 be real numbers such that (3) holds. Then, there exists a third
degree polynomial f(t) verifying:

(a) f(0) = 0,
(b) f(t_0) = a,
(c) f′(t_0) = 1,
(d) f″(t_0) = b,
(e) f‴(t) = −c for all real t.
Proof. Taking

  f(t) := −(c/6)t³ + αt² + βt,   with   α := (b + ct_0)/2   and   β := 1 − (c/2)t_0² − bt_0,

we can assert (a), (c)–(e). Furthermore, (b) is a simple fact from Lemmas 1 and 2.
As a consequence of Lemmas 1 and 2, we are going to prove the following result, that we will
need in the next section.
  g(t_2) ≤ min{g_1(t) : t ≥ 0} = g_1(1/b) = −1/(2b) + a ≤ 0.
Proposition 2. The sequence {t_n}_{n≥0} defined by (2) converges monotonically, and its limit is 0.
Proof. We are going to see that, for all t such that 0 < t ≤ t_0,

(i) f(t) > 0,
(ii) f′(t) > 0,
(iii) f″(t) > 0,
(iv) f‴(t) < 0,
(v) f′(t)² > f(t)f″(t)/2,
(vi) 0 < H(t) < t,

where H is the iterative function associated with (2),

  H(t) := t − (1 + θ(t) + O(θ(t)²)) f(t)/f′(t),   (4)

and

  θ(t) := (1/2) f(t)f″(t)/f′(t)².

And this is true because H(0) = 0 and H(t) is increasing (see the next remark).
From (vi), {t_n}_{n≥0} converges to t*, a real number such that f(t*) = 0 and 0 ≤ t* ≤ t_0, and
this implies t* = 0, since 0 is the unique solution of f(t) = 0 in [0, t_0]. □
Remark 2. By the definition of the iterative function (4) and Remark 1, writing the O(θ(t)²) term of the scheme as dθ(t)², we have

  H′(t) = 1 − (θ(t) + dθ(t)²)′ f(t)/f′(t) − (1 + θ(t) + dθ(t)²)(f′(t)² − f(t)f″(t))/f′(t)²,

and, after carrying out the differentiation,

  H′(t) = −(1/2)(f(t)²f‴(t)/f′(t)³)(1 + 2dθ(t)) + (6 − 3d)θ(t)² + 10dθ(t)³.

Using now that θ(t) ≥ 0 and f‴(t) ≤ 0 in [0, t_0] (see (i)–(iv) in Proposition 2), and that 0 ≤
d ≤ 2 (for all the schemes considered in this paper), we obtain the desired inequality H′(t) ≥ 0,
that is, H(t) is increasing.
Remark 3. If c = 0, then g(t) = (b/2)t² − t + a, and the results in this section remain true in this case.
For instance, t_2 = 1/b; the inequality (3) is equivalent to (3′) 2ab ≤ 1; t_0 = 2a/(1 + (1 − 2ab)^{1/2}); and we
can state both lemmas and the proposition similarly to the given ones.
From Proposition 2, we will establish some sufficient conditions for the convergence of our
class of third-order methods on Banach spaces, as we see in the next section.
4. Convergence of third-order methods
From now on, we consider a (not necessarily linear) operator F from an open
domain D in X into Y, where X and Y are Banach spaces. We will assume F is three times Fréchet
differentiable, and we will denote its Fréchet derivatives by F′, F″, F‴.
Before stating the main proposition in this section, we will need some general results,
which we collect in the next three lemmas.
Lemma 3. Let x_0 in D be such that F′(x_0) is invertible as a linear operator, and let
us assume that there exists a real number c > 0 such that, for all x, y in D,

  ‖F′(x_0)⁻¹(F″(x) − F″(y))‖ ≤ c‖x − y‖;

then, for all x, y in D,

  ‖F′(x_0)⁻¹(F(x) − F(y) − F′(y)(x − y) − (1/2)F″(y)(x − y)(x − y))‖ ≤ (c/6)‖x − y‖³,   (5)

  ‖F′(x_0)⁻¹(F′(x) − F′(y))‖ ≤ (‖F′(x_0)⁻¹F″(y)‖ + (1/2)c‖x − y‖)‖x − y‖.   (6)
Proof. Without loss of generality, we can assume F′(x_0) = I (the identity operator); otherwise, we apply the proof to the operator

  F̃(x) := F′(x_0)⁻¹F(x).
(5) follows immediately by applying the mean value theorem twice (see, for instance, [31]).
(6) Under these assumptions, we have

  F′(x) − F′(y) = ∫₀¹ F″(y + t(x − y))(x − y) dt.

Thus,

  ‖F′(x) − F′(y)‖ ≤ ‖∫₀¹ F″(y)(x − y) dt‖ + ‖∫₀¹∫₀¹ F‴(y + st(x − y)) t(x − y) ds (x − y) dt‖

  ≤ ∫₀¹ ‖F″(y)‖ ‖x − y‖ dt + ∫₀¹∫₀¹ ‖F‴(y + st(x − y))‖ t‖x − y‖ ds ‖x − y‖ dt

  ≤ ‖F″(y)‖ ‖x − y‖ + ∫₀¹ ct‖x − y‖² dt

  = (‖F″(y)‖ + (1/2)c‖x − y‖)‖x − y‖. □
Lemma 4. The following identity holds:

  F(x_{n+1}) = (1/(2k_1)) F″(x_n)(F′(x_n)⁻¹F(x_n))((2(k_1 − k)T_n − k_2T_n²)F′(x_n)⁻¹F(x_n))

             + (1/8) F″(x_n)(T_nH(T_n)F′(x_n)⁻¹F(x_n))²

             + ∫₀¹ (F″(x_n + τ(x_{n+1} − x_n)) − F″(x_n))(1 − τ)(x_{n+1} − x_n)² dτ,

where H(z) = I + ((k_2 − 2k)/k_1)z.

Proof. By Taylor's formula,

  F(x_{n+1}) = F(x_n) + F′(x_n)(x_{n+1} − x_n) + (1/2!)F″(x_n)(x_{n+1} − x_n)²

             + ∫_{x_n}^{x_{n+1}} (F″(x) − F″(x_n))(x_{n+1} − x) dx.

Since

  x_{n+1} − x_n = −(I + (1/2)T_nH(T_n))F′(x_n)⁻¹F(x_n),

we obtain

  F(x_{n+1}) = −(1/2) F″(x_n)(F′(x_n)⁻¹F(x_n))(H(T_n)F′(x_n)⁻¹F(x_n))

             + (1/2) F″(x_n)(F′(x_n)⁻¹F(x_n))((I + T_nH(T_n))F′(x_n)⁻¹F(x_n))

             + (1/8) F″(x_n)(T_nH(T_n)F′(x_n)⁻¹F(x_n))²

             + ∫₀¹ (F″(x_n + τ(x_{n+1} − x_n)) − F″(x_n))(1 − τ)(x_{n+1} − x_n)² dτ.

The analogous scalar identity holds for the sequence {t_n}, with remainder

  ∫₀¹ (f″(t_n + τ(t_{n+1} − t_n)) − f″(t_n))(1 − τ)(t_{n+1} − t_n)² dτ,

and with h(t) = 1 + ((k_2 − 2k)/k_1)t playing the role of H. □
Now, we can state sufficient convergence conditions for third-order methods. Related results
for a general class of third-order methods can be found in [24].
We will assume the method is well defined. The methods proposed in Section 2 are well
defined under our assumptions, essentially because F′(x_n) and I − T_n are invertible, as we will see.
Theorem 1. Let us assume that x_0 in D is such that F′(x_0) is invertible. Furthermore, let a, b, c be
positive real numbers verifying (3) and such that, for all x, y in D,

  ‖F′(x_0)⁻¹F(x_0)‖ ≤ a,   (7)

  ‖F′(x_0)⁻¹F″(x_0)‖ ≤ b,   (8)

  ‖F′(x_0)⁻¹(F″(x) − F″(y))‖ ≤ c‖x − y‖.   (9)
Besides, if f(t), t_0 and {t_n}_{n≥0} are those defined in Section 2, then, for all n ≥ 0,

  ‖x_{n+1} − x_n‖ ≤ t_n − t_{n+1};

that is, {t_n}_{n≥0} is a majorizing sequence of {x_n}_{n≥0}.
Proof. As in Lemma 3, we can assume F′(x_0) = I. (Otherwise, we apply the proof to the
operator F̃(x) := F′(x_0)⁻¹F(x).)
Thus, inequalities (7)–(9) become:

  ‖F(x_0)‖ ≤ f(t_0) = a,
  ‖F″(x_0)‖ ≤ f″(t_0) = b,
  ‖F″(x) − F″(y)‖ ≤ c‖x − y‖.
Inductively, if we assume, for n ≥ 0:

(i) ‖F′(x_n)⁻¹‖ ≤ 1/f′(t_n),
(ii) ‖F″(x_n)‖ ≤ f″(t_n),
(iii) ‖T_n‖ ≤ (1/2) f(t_n)f″(t_n)/f′(t_n)².
Then,

  ‖F′(x_n)⁻¹(F′(x_n) − F′(x_{n+1}))‖ ≤ (f′(t_n) − f′(t_{n+1}))/f′(t_n) = 1 − f′(t_{n+1})/f′(t_n) < 1,

and, by the Banach lemma,

  ‖F′(x_{n+1})⁻¹‖ ≤ ‖F′(x_n)⁻¹‖/(1 − ‖F′(x_n)⁻¹(F′(x_n) − F′(x_{n+1}))‖)
                 ≤ 1/(f′(t_n)[1 − [1 − f′(t_{n+1})/f′(t_n)]]) = 1/f′(t_{n+1}).
  ‖x_{n+k} − x_n‖ ≤ Σ_{l=1}^{k} ‖x_{n+l} − x_{n+l−1}‖ ≤ Σ_{l=1}^{k} (t_{n+l−1} − t_{n+l}) = t_n − t_{n+k} ≤ t_n.   (10)

The convergence of {x_n}_{n≥0}, and ‖x_n − x*‖ ≤ t_n, follow from the fact that {t_n}_{n≥0} converges
to 0.
The convergence of {x_n}_{n≥0} implies

  F(x*) = 0.
Now, we see the uniqueness of the root. First, we define the divided differences for the
operator F (following, for example, [32]) in this way: for all x, y ∈ D,

  [x, y; F] := ∫₀¹ F′(x + t(y − x)) dt.
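Numerically, this divided difference can be approximated by quadrature along the segment from x to y. The sketch below uses Simpson's rule and an illustrative two-dimensional map F (our own choice, not from the paper); since F here is quadratic, the integrand is affine in t, the rule is exact, and the defining identity [x, y; F](y − x) = F(y) − F(x) holds to rounding error.

```python
import numpy as np

def F(v):
    # illustrative quadratic map R^2 -> R^2
    return np.array([v[0]**2 + v[1], v[0] * v[1]])

def J(v):
    # first Frechet derivative (Jacobian) of F
    return np.array([[2.0 * v[0], 1.0], [v[1], v[0]]])

def divided_difference(J, x, y):
    """Simpson approximation of the integral of J along the segment [x, y]."""
    mid = 0.5 * (x + y)
    return (J(x) + 4.0 * J(mid) + J(y)) / 6.0

x = np.array([1.0, 2.0])
y = np.array([0.5, -1.0])
A = divided_difference(J, x, y)
residual = A @ (y - x) - (F(y) - F(x))   # ~0 for this quadratic F
```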
  ‖F′(x_0)⁻¹([x, y; F] − F′(x_0))‖ ≤ ∫₀¹ ‖F′(x_0)⁻¹(F′(x + t(y − x)) − F′(x_0))‖ dt

  ≤ ∫₀¹ (‖F′(x_0)⁻¹F″(x_0)‖ + (1/2)c‖x + t(y − x) − x_0‖)‖x + t(y − x) − x_0‖ dt

  ≤ ∫₀¹ (b + (1/2)c((1 − t)‖x − x_0‖ + t‖y − x_0‖))((1 − t)‖x − x_0‖ + t‖y − x_0‖) dt

  < ∫₀¹ (b + (1/2)ct_2)t_2 dt = bt_2 + (1/2)ct_2² = 1.

The last step follows from the definition of t_2 (t_2 := (√(b² + 2c) − b)/c): it is a root of ct² + 2bt − 2 = 0.
Thus, F′(x_0)⁻¹[x, y; F] and, obviously, [x, y; F] are invertible.
From [x, y; F](x − y) = F(x) − F(y), applied to two solutions x*, y*, we conclude

  [x*, y*; F](x* − y*) = F(x*) − F(y*) = 0;

in particular, x* − y* = 0. □
The given estimates are optimal in the sense stated in the next corollary.
Corollary 3. For all a, b, c ≥ 0 satisfying (3), there exist an operator F(x) and a pivot x_0 verifying Proposition 1 such that, if {x_n}_{n≥0} is the sequence obtained in (1) from x_0, then

  ‖x* − x_n‖ = t_n,   ‖x_{n+1} − x_n‖ = t_n − t_{n+1}.

(It is clear that f(t) and t_0 satisfy the corollary.)
5. A posteriori error estimates
In this section, we are going to obtain a posteriori error estimates for the class of third-order
methods considered in the paper. We will use the fact, proved in Corollary 2, that [x_n, x*; F]
is invertible. Therefore,

  x_n − x* = [x_n, x*; F]⁻¹ F(x_n).
Lemma 6. F′(x_0)⁻¹[x, y; F] is invertible, and

  ‖[x_n, x*; F]⁻¹F′(x_0)‖ ≤ 1/(1 − {(b/2)(‖x_n − x_0‖ + t_0) + (c/6)(‖x_n − x_0‖² + t_0‖x_n − x_0‖ + t_0²)}) =: r.
  ‖x_n − x*‖ ≤ r((c/6)‖x_n − x_{n−1}‖³ + (1/2)‖F′(x_0)⁻¹F″(x_{n−1})(T(x_{n−1})(x_n − x_{n−1}))(x_n − x_{n−1})‖),

for Halley's method. And

  ‖x_n − x*‖ ≤ r((c/6)‖x_n − x_{n−1}‖³ + (1/2)‖F′(x_0)⁻¹F″(x_{n−1})(x_n − x_{n−1})(x_n − x_{n−1})‖

             + ‖F′(x_0)⁻¹F″(x_{n−1})F′(x_{n−1})⁻¹F″(x_{n−1})(F′(x_{n−1})⁻¹F(x_{n−1}))²‖),
6. Numerical experiments
In this section we deal with two features of the methods. In the first place, with the accuracy
and the fitness of the bounds. In a second step, we will study all these methods according to their
computational cost.
We start with the equation x 3 6x 2 + 3x = 0. These scalar equations are simple examples
which, indeed, have not a great practical interest. We include them here, though, because of
rather theoretical purposes. In Tables 12 we compare the exact error with our a posteriori error
estimates.
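As an illustration of this scalar experiment, the following sketch runs Newton and Chebyshev on the test equation x³ − 6x² + 3x = 0 (roots 0 and 3 ± √6). The pivot and tolerance are our own illustrative choices; this does not reproduce the paper's tables.

```python
# Newton vs. Chebyshev on the scalar test equation x**3 - 6*x**2 + 3*x = 0.

f = lambda x: x**3 - 6 * x**2 + 3 * x
df = lambda x: 3 * x**2 - 12 * x + 3
d2f = lambda x: 6 * x - 12

def newton_step(x):
    return x - f(x) / df(x)

def chebyshev_step(x):
    L = f(x) * d2f(x) / df(x) ** 2
    return x - (1 + 0.5 * L) * f(x) / df(x)

def iterate(step, x0, tol=1e-14, max_iter=100):
    x, n = x0, 0
    while abs(f(x)) > tol and n < max_iter:
        x, n = step(x), n + 1
    return x, n                     # approximate root and iteration count

# From a pivot near the root 0 (illustrative choice)
newton_root, newton_iters = iterate(newton_step, 0.1)
cheb_root, cheb_iters = iterate(chebyshev_step, 0.1)
```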
Now, we shall consider an important special case of integral equation, the Hammerstein equation (see [31])

  u(s) = φ(s) + ∫₀¹ H(s, t)f(t, u(t)) dt.   (11)
These equations are related to boundary value problems for differential equations. For some
of them, third-order methods using second derivatives are useful for their effective (discretized)
solution.
The discrete version of (11) is

  x^i = φ(t_i) + Σ_{j=0}^{m} λ_j H(t_i, t_j) f(t_j, x^j),   i = 0, 1, …, m,   (12)

where 0 ≤ t_0 < t_1 < ⋯ < t_m ≤ 1 are the grid points of some quadrature formula
∫₀¹ f(t) dt ≈ Σ_{j=0}^{m} λ_j f(t_j), and x^i = x(t_i).
Let us consider the Hammerstein equation

  x(s) = 1 − (1/4) ∫₀¹ (s/((s + t)x(t))) dt,   s ∈ [0, 1],   (13)

studied in [35].
Table 1
A posteriori error estimates (Chebyshev)

Iteration   Error       Estimate
1           5.55e−01    6.98e−01
2           3.13e−02    5.50e−02
3           3.42e−06    6.17e−06
4           4.48e−18    8.06e−18
5           0.00e+00    1.79e−53

Table 2
A posteriori error estimates (Halley)

Iteration   Error       Estimate
1           1.99e−01    5.33e−01
2           9.13e−04    1.76e−03
3           8.45e−11    1.52e−10
4           0.00e+00    1.20e−31
Using the trapezoidal rule of integration with step h = 1/m, we obtain the following system of
nonlinear equations:

  0 = x^i − 1 + (1/(4m))[(1/2)(t_i/((t_i + t_0)x^0)) + Σ_{k=1}^{m−1} t_i/((t_i + t_k)x^k) + (1/2)(t_i/((t_i + t_m)x^m))],   i = 0, 1, …, m,   (14)

where t_j = j/m.
In this case, the second Fréchet derivative is diagonal by blocks.
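A small sketch of this kind of discretization, with f(t, u) = 1/u as we read it from (13)–(14) (treat the kernel and nonlinearity as assumptions), solved by Newton's method; we keep m far smaller than the paper's m = 100.

```python
import numpy as np

m = 20
t = np.arange(m + 1) / m
lam = np.full(m + 1, 1.0 / m)
lam[0] = lam[-1] = 0.5 / m                      # trapezoidal weights

# W[i, k] = lam_k * t_i / (t_i + t_k); the 0/0 entry at i = k = 0 is set to 0
den = t[:, None] + t[None, :]
W = np.divide(t[:, None] * lam[None, :], den,
              out=np.zeros((m + 1, m + 1)), where=den > 0)

def G(x):
    # residual of the discrete system: x_i - 1 + (1/4) sum_k W[i,k] / x_k
    return x - 1.0 + 0.25 * (W @ (1.0 / x))

def JG(x):
    # Jacobian: I - (1/4) W diag(1/x^2)
    return np.eye(m + 1) - 0.25 * W * (1.0 / x**2)[None, :]

x = np.ones(m + 1)                              # pivot: the constant function 1
for _ in range(20):
    x = x - np.linalg.solve(JG(x), G(x))

residual = np.max(np.abs(G(x)))
```

Note that the second derivative of G acts only through the diagonal terms 1/x_k³, which is the block-diagonal structure the text refers to.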
We consider m = 100. The exact solution is computed numerically by Newton's method.
In Tables 3–4, a pivot close to the solution is used.
In Tables 5–8, the initial data are not so close to the solution, and convergence problems can
appear.
Next, we consider quadratic equations of the type
  F(x) = xᵀAx + Bx + C = 0,   (15)
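The point about constant second derivatives can be sketched as follows: for a quadratic operator F(x) = A(x, x) + Bx + C, with A a symmetric bilinear form, the second Fréchet derivative is the constant form 2A, so it is assembled once and reused at every step. The data A, B, C below are illustrative, not the paper's systems; the iteration is the Chebyshev scheme x₊ = x − s − (1/2)F′(x)⁻¹F″(s, s) with s = F′(x)⁻¹F(x).

```python
import numpy as np

n = 2
A = np.zeros((n, n, n))                   # A[i, j, k], symmetric in (j, k)
A[0, 0, 0] = 1.0                          # first component contains x_0**2
A[1, 0, 1] = A[1, 1, 0] = 0.5             # second component contains x_0 * x_1
B = np.array([[3.0, 0.0], [0.0, 2.0]])
C = np.array([-2.0, -1.0])

F = lambda x: np.einsum('ijk,j,k->i', A, x, x) + B @ x + C
dF = lambda x: 2.0 * np.einsum('ijk,k->ij', A, x) + B
d2F = 2.0 * A                             # constant: evaluated only once

x = np.zeros(n)
for _ in range(20):
    s = np.linalg.solve(dF(x), F(x))                     # Newton correction
    bilin = 0.5 * np.einsum('ijk,j,k->i', d2F, s, s)     # (1/2) F''(s, s)
    x = x - s - np.linalg.solve(dF(x), bilin)

residual = np.max(np.abs(F(x)))
```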
Table 3
l∞-error, m = 100

Iteration   Newton      Chebychev   Halley      Two-step
1           0.0048      9.60e−04    8.89e−04    2.30e−04
2           3.67e−06    1.85e−10    1.28e−10    6.61e−13
3           2.16e−12    0           0           0
4           0
Table 4
x_0 = 1, l∞-error, m = 100

Iteration   d = 0.5     d = 1       d = 1.5     d = 2
1           9.25e−04    8.90e−04    8.55e−04    8.20e−04
2           1.55e−10    1.29e−10    1.06e−10    8.65e−11
3           0           0           0           0
Table 5
x_0 = 0.5, l∞-error, m = 100

Iteration   Newton      Chebychev   Halley      Two-step
1           0.2000      0.4348      0.1451      0.1174
2           0.0048      0.4647      0.0012      1.56e−04
3           3.67e−06    0.6683      3.13e−10    1.94e−13
4           2.16e−12    3.1872      0           0
5           0           0.0666
6                       4.99e−05
7                       2.56e−14
8                       0
Table 6
x_0 = 0.5, l∞-error, m = 100

Iteration   d = 0.5     d = 1       d = 1.5     d = 2
1           0.1753      0.0843      0.3438      0.6034
2           0.0027      7.32e−05    0.0027      0.0082
3           3.95e−09    6.99e−14    3.21e−09    8.62e−08
4           0           0           0           0
Table 7
x_0 = 0.25, l∞-error, m = 100

Iteration   Newton           Chebychev   Halley           Two-step
4           no convergence               no convergence   0.3138
6                            1.41e−12                     4.78e−08
7                            0                            0
Table 8
x_0 = 0.25, l∞-error, m = 100

Iteration   d = 0.5     d = 1       d = 1.5     d = 2
4           8.63e−04    4.732e−10   2.21e−10    1.46e−10
6           0           0           0           0
Table 9
l∞-error, m = 100

Iteration   Newton      Chebychev   Halley      Two-step
1           0.0208      0.0044      0.0019      0.0044
2           2.08e−04    5.32e−08    2.00e−09    5.32e−08
3           2.29e−08    1.07e−12    1.61e−12    7.46e−13
4           8.38e−14    0           0           0
5           0
Table 10
x_0 = 1.8, x* = 2, l∞-error, m = 100

Iteration   d = 0.5     d = 1       d = 1.5     d = 2
1           0.003       0.0016      2.43e−04    0.0012
2           1.25e−08    1.22e−09    9.78e−13    2.68e−12
3           4.76e−13    1.76e−12    0           0
4           0           0
Table 11
x_0 = 1.5, x* = 2, l∞-error, m = 100

Iteration   Newton      Chebychev   Halley      Two-step
1           0.1826      0.1474      0.0408      0.1474
2           0.0146      0.0036      2.59e−05    0.0036
3           1.06e−04    2.31e−07    4.86e−13    2.31e−07
4           1.46e−08    8.41e−13    0           7.66e−13
5           1.24e−12    0                       0
6           0
Table 12
x_0 = 1.5, x* = 2, l∞-error, m = 100

Iteration   d = 0.5     d = 1       d = 1.5     d = 2
1           0.0701      0.0215      0.0846      0.1620
2           2.09e−04    5.50e−06    1.31e−04    3.86e−04
3           3.22e−11    1.20e−12    3.77e−12    1.23e−12
4           5.13e−13    0           0           0
5           0
The third-order methods are competitive in these examples because the
second Fréchet derivative is computed efficiently (it is diagonal by blocks, or it is computed only
once because it is constant). However, it is also true that a good approximation is reached in a
very small number of iterations, and the two-step method is not very expensive to compute either.
The methods evaluating the second derivative are more competitive when a greater number of
iterations must be computed, or when the order of the system is small and the evaluation of
the second derivative is not much more expensive than several evaluations of the original
function (if the system is m × m, the evaluation of F′(x) requires m² computations, while the second
derivative needs around m³). These conditions are met when we need great accuracy (many
iterations), or we have a coarse discretization (small order), or both. On the other hand, we can
apply data-compression techniques, such as wavelet decompositions, to the linear and bilinear
forms that appear in the methods in order to reduce their computational cost [6].
The search for an optimal constant d for the iterative methods is also a feature to be considered. In our analysis, though, our goal is not to find a good constant in order to apply it, but to look
for iteration functions such that T_n is easy to evaluate, thus providing error bounds according to the above results. Then, d is a consequence of the method, and not a condition to choose it.
However, for quadratic equations, using d = 2 we obtain fourth order of convergence [20].
Finally, we study the system of nonlinear equations

  3x²y + y² − 1 + |x − 1| = 0,
  x⁴ + xy³ − 1 + |y| = 0,

analyzed in [25]. The approximate solution

  (x*, y*) = (0.8946553733346867, 0.3278265117462974)

is considered.
In this case, the operator is not Fréchet differentiable and we cannot apply the classical third-order
methods, but we can consider modifications using divided differences [4,5,7]. In this example, we
consider the following modified two-step method:
Table 13
(x₋₁, y₋₁) = (−5, 5), (x_0, y_0) = (1, 0), l∞-error

Iteration   Secant      Mod. two-step
1           3.15e−01    2.56e−03
2           2.71e−02    1.34e−08
3           5.41e−03    0.00e+00
4           2.84e−04
5           3.05e−06
  s_n = t_n − [t_n − γ_n f(t_n), t_n + γ_n f(t_n); f]⁻¹ f(t_n),

  t_{n+1} = s_n − [t_n − γ_n f(t_n), t_n + γ_n f(t_n); f]⁻¹ f(s_n),

where γ_n is a real parameter computed in such a way that

  tol_c ≤ ‖γ_n f(t_n)‖ ≤ tol_u,

where tol_c is related to the computer precision and tol_u is a free parameter for the user. The
convergence of the modified two-step method is better than that of the secant method (see Table 13).
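A hedged sketch of a derivative-free two-step iteration of this flavor for the non-differentiable system 3x²y + y² − 1 + |x − 1| = 0, x⁴ + xy³ − 1 + |y| = 0 (which matches the quoted approximate solution): the Jacobian is replaced by central divided differences with a fixed small step, a simplification of the residual-scaled perturbation described above; the step size and pivot are our own illustrative choices.

```python
import numpy as np

def F(v):
    x, y = v
    return np.array([3 * x**2 * y + y**2 - 1 + abs(x - 1),
                     x**4 + x * y**3 - 1 + abs(y)])

def dd_matrix(F, v, h):
    """Central divided differences: column j is (F(v + h e_j) - F(v - h e_j)) / (2h)."""
    n = len(v)
    M = np.empty((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        M[:, j] = (F(v + e) - F(v - e)) / (2.0 * h)
    return M

v = np.array([0.9, 0.3])                # illustrative pivot near the solution
for _ in range(20):
    A = dd_matrix(F, v, 1e-6)
    s = v - np.linalg.solve(A, F(v))    # first half-step (secant-type)
    v = s - np.linalg.solve(A, F(s))    # second half-step reuses A

residual = np.max(np.abs(F(v)))
```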
Conclusions
Summing up, in this paper we have studied a wide class of third-order methods and established error bounds for them (both a priori and a posteriori). We have analyzed several cases
where the evaluation of the second derivative is not very time consuming; as a conclusion, we
may add that in some cases it pays to evaluate the second derivative. The family also includes
schemes, such as the two-step method, with only one evaluation more than Newton's method, that are very
competitive. Using approximations by divided differences, we can consider non-Fréchet-differentiable operators. Finally, the theoretical analysis allows us to ensure convergence conditions
for all these schemes.
Acknowledgments
We would like to thank Professor Dr. V.F. Candela and Professor Dr. F. Potra for their support and their comments on
the first draft of this paper. Finally, we also want to thank the referees and the editor for their suggestions.
References
[1] G.E. Alefeld, F.A. Potra, Z. Shen, On the existence theorems of Kantorovich, Moore and Miranda, in: Topics in Numerical Analysis, Comput. Suppl., vol. 15, Springer, 2001, pp. 21–28.
[2] G.E. Alefeld, On the convergence of Halley's method, Amer. Math. Monthly 88 (7) (1981) 530–536.
[3] G. Altman, Iterative methods of higher order, Bull. Acad. Polon. Sci. (Série des sci. math., astr., phys.) IX (1961) 62–68.
[4] S. Amat, S. Busquier, Convergence and numerical analysis of a family of two-step Steffensen's methods, Comput. Math. Appl. 49 (1) (2005) 13–22.
[5] S. Amat, S. Busquier, V.F. Candela, Third order iterative methods without using second Fréchet derivative, J. Comput. Math. 22 (3) (2005) 341–346.
[6] S. Amat, S. Busquier, D. El Kebir, J. Molina, A fast Chebyshev's method for quadratic equations, Appl. Math. Comput. 148 (2) (2004) 461–474.
[7] S. Amat, S. Busquier, V.F. Candela, Third-order iterative methods without using any Fréchet derivative, J. Comput. Appl. Math. 158 (1) (2003) 11–18.
[8] S. Amat, S. Busquier, J.M. Gutiérrez, Geometric constructions of iterative functions to solve nonlinear equations, J. Comput. Appl. Math. 157 (1) (2003) 197–205.
[9] S. Amat, S. Busquier, Geometry and convergence of some third-order methods, Southwest J. Pure Appl. Math. 2 (2001) 61–72.
[10] I.K. Argyros, M.A. Tabatabai, Error bounds for the Halley method in Banach spaces, Adv. Nonlinear Var. Inequal. 3 (2) (2000) 1–13.
[11] I.K. Argyros, Improved error bounds for a Chebysheff–Halley-type method, Acta Math. Hungar. 84 (3) (1999) 209–219.
[12] I.K. Argyros, Improving the order and rates of convergence for the super-Halley method in Banach spaces, Korean J. Comput. Appl. Math. 5 (2) (1998) 465–474.
[13] I.K. Argyros, D. Chen, Results on the Chebyshev method in Banach spaces, Proyecciones 12 (2) (1993) 119–128.
[14] I.K. Argyros, The convergence of a Halley–Chebyshev-type method under Newton–Kantorovich hypotheses, Appl. Math. Lett. 6 (5) (1993) 71–74.
[15] V.F. Candela, A. Marquina, Recurrence relations for rational cubic methods I: The Halley method, Computing 44 (1990) 169–184.
[16] V.F. Candela, A. Marquina, Recurrence relations for rational cubic methods II: The Chebyshev method, Computing 45 (1990) 355–367.
[17] P. Deuflhard, Newton Methods for Nonlinear Problems. Affine Invariance and Adaptive Algorithms, Springer Ser. Comput. Math., vol. 35, Springer-Verlag, Berlin, 2004.
[18] A. Diaconu, On the convergence of an iterative proceeding of Chebyshev type, Rev. Anal. Numér. Théor. Approx. 24 (1–2) (1995) 91–102.
[19] J.A. Ezquerro, M.A. Hernández, New Kantorovich-type conditions for Halley's method, Appl. Numer. Anal. Comput. Math. 2 (1) (2005) 70–77.
[20] J.A. Ezquerro, J.M. Gutiérrez, M.A. Hernández, M.A. Salanova, Chebyshev-like methods and quadratic equations, Rev. Anal. Numér. Théor. Approx. 28 (1) (2000) 23–35.
[21] J.A. Ezquerro, A modification of the Chebyshev method, IMA J. Numer. Anal. 17 (4) (1997) 511–525.
[22] E. Halley, Methodus nova, accurata & facilis inveniendi radices aequationum quarumcumque generaliter, sine praevia reductione, Philos. Trans. R. Soc. London 18 (1694) 139–148.
[23] M.A. Hernández, Second-derivative-free variant of the Chebyshev method for nonlinear equations, J. Optim. Theory Appl. 104 (3) (2000) 501–515.
[24] M.A. Hernández, N. Romero, On a characterization of some Newton-like methods of R-order at least three, J. Comput. Appl. Math. 183 (1) (2005) 53–66.
[25] M.A. Hernández, M.J. Rubio, Semilocal convergence of the secant method under mild convergence conditions of differentiability, Comput. Math. Appl. 44 (3–4) (2002) 277–285.
[26] M.A. Hernández, M.A. Salanova, Modification of the Kantorovich assumptions for semilocal convergence of the Chebyshev method, J. Comput. Appl. Math. 126 (1–2) (2000) 131–143.
[27] Z. Huang, On a family of Chebyshev–Halley type methods in Banach space under weaker Smale condition, Numer. Math. J. Chinese Univ. (English Ser.) 9 (1) (2000) 37–44.
[28] X.H. Liang, A revised Chebyshev iterative method for solving nonlinear equations in Banach spaces, J. Zhejiang Univ. Sci. Ed. 27 (1) (2000) 8–13 (in Chinese).
[29] A. Melman, Geometry and convergence of Euler's and Halley's methods, SIAM Rev. 39 (4) (1997) 728–735.
[30] M.I. Nechepurenko, On the convergence of the Chebyshev method, Dokl. Akad. Nauk 393 (5) (2003) 597–599 (in Russian).
[31] J.M. Ortega, W.C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, 1970.
[32] F.A. Potra, V. Pták, Nondiscrete Induction and Iterative Processes, Res. Notes Math., vol. 103, Pitman, Boston, 1984.
[33] T.R. Scavo, J.B. Thoo, On the geometry of Halley's method, Amer. Math. Monthly 102 (1995) 417–426.
[34] M. Weiser, A. Schiela, P. Deuflhard, Asymptotic mesh independence of Newton's method revisited, SIAM J. Numer. Anal. 42 (5) (2005) 1830–1845.
[35] W. Werner, Newton-like methods for the computation of fixed points, Comput. Math. Appl. 10 (1) (1984) 77–86.
[36] Q. Yao, On Halley iteration, Numer. Math. 81 (4) (1999) 647–677.
[37] T.J. Ypma, Convergence of Newton-like-iterative methods, Numer. Math. 45 (2) (1984) 241–251.
[38] S. Zheng, Application of Newton's and Chebyshev's methods to parallel factorization of polynomials, J. Comput. Math. 19 (4) (2001) 347–356.