
The Newton-Raphson Method and Solutions

(March 2018)
Dayana Lorena Rodriguez Reyes T00044849 Lorenarodriquez@outlook.es

Abstract— The Newton-Raphson method is an open method, in the sense that its global convergence is not guaranteed. The only way to achieve convergence is to select an initial value close enough to the root sought. Thus, the iteration must begin with a value reasonably close to the zero (called the starting point or assumed value). How close the initial point must be to the root depends very much on the nature of the function itself; if it has multiple inflection points or large slopes in the neighbourhood of the root, the probability that the algorithm diverges increases, which makes it necessary to select an assumed value close to the root. Once this has been done, the method linearizes the function by the tangent line at that assumed value. The x-intercept of this tangent line is, according to the method, a better approximation of the root than the previous value. Successive iterations are performed until the method has converged sufficiently. Let f: [a, b] → R be a differentiable function defined on the real interval [a, b]. We start with an initial value x0 and define, for each natural number n,

x_{n+1} = x_n − f(x_n) / f'(x_n).

Note that the method described applies exclusively to functions of a single variable with an analytical or implicitly knowable form. There are variants of the method applicable to discrete systems that allow the roots of a trend to be estimated, as well as algorithms that extend Newton's method to multivariable systems, systems of equations, etc.

I. INTRODUCTION

The resolution of equations and systems (whether polynomial, algebraic or transcendental) is one of the problems that appears most frequently in the different fields of Science and Technology. It is also a problem that has been studied since ancient times; for example, already around the year 100 BC Heron used an iterative method to approximate the square root of a positive number.

There are no general methods of symbolic resolution of equations or systems, except for certain equations of polynomial type or in the case of linear systems. Therefore, there is a need to develop numerical methods to calculate, at least roughly, the solutions of this type of problem. Taking these questions into account, Mathematica incorporates different commands for the resolution of equations and systems of equations in a formal way (for example, Solve, Reduce, Eliminate, Roots), as long as they admit a symbolic solution; but in the majority of cases a numerical approximation of the solution will have to be calculated by means of other commands (for example, NSolve, NRoots, FindRoot), using techniques appropriate to each concrete problem.

We will begin by describing some of the best-known numerical methods for approximating the solutions of a general equation of the form f(x) = 0, for f: I → R at least continuous and with different signs at the ends of I.

We will continue later with equations of polynomial type, both with the description of the direct Mathematica commands for their formal and approximate resolution and with certain algorithms and features specific to this type of equation.

Many problems in Mathematics reduce to solving an equation

f(x) = 0,   (1.1)

where, in principle, we will assume that f is a real function of a real variable. Despite the simplicity of its statement, this has been a complicated problem, addressed by numerous famous mathematicians: Cardano, Newton, Ruffini, Galois, etc. Even for "simple" functions such as polynomials, the calculation of their roots is a complicated issue. Initially, an attempt was made to express these roots in terms of the polynomial coefficients, as is the case for the second-degree equation: ax² + bx + c = 0 ⟺ x = (−b ± √(b² − 4ac)) / (2a). Formulas are also known for polynomials of degrees 3 and 4 although, due to their complexity, they are not usually used in practice. At the beginning of the 19th century, Galois proved that there are no such formulas for polynomials of degree greater than or equal to 5.
The problem of not being able to find the solutions of an equation exactly also arose when working with transcendental equations, such as Kepler's equation, related to the calculation of planetary orbits: f(E) = E − e sin E − M = 0, for different values of e and M. These limitations force us to look for methods that find the solutions of an equation in approximate form. Basically, these methods generate a sequence {x1, x2, ..., xn, ...} that, under appropriate conditions, converges to a solution of equation (1.1). In many cases, this sequence is generated recursively, x_{n+1} = F(x_n, ..., x_{n−p+1}), from an iteration function F, which can depend on one or several arguments. Attempts to solve these types of equations have been recorded since ancient times. Reading Knill [50], we realize that since the Greek civilizations, approximately two thousand years before Christ, iterative algorithms were already known; such is the case of Heron's formula for the calculation of square roots, where, starting from an initial approximation s of √S, (s + S/s)/2 is proposed as the new approximation. With this formula, Heron could approximate the value of the square root that appears in the formula for the area of a triangle, S = √(p(p − a)(p − b)(p − c)), where p = (a + b + c)/2 is the semiperimeter of the triangle with sides a, b and c. The use of an iterative algorithm poses several theoretical problems, since it is essential to examine its speed of convergence, determine the number of iterations needed to reach the agreed accuracy, study the effects of computer arithmetic on the sequence, and solve the problem of finding a good initial approximation.
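As a concrete illustration of Heron's rule, here is a minimal Python sketch (ours, not part of the original text; the name heron_sqrt and the tolerance are arbitrary choices):

# Heron's (Babylonian) iteration for the square root of S:
# starting from a guess s, repeatedly replace s with (s + S/s) / 2.
def heron_sqrt(S, s=1.0, tol=1e-12, max_iter=60):
    for _ in range(max_iter):
        s_new = 0.5 * (s + S / s)
        if abs(s_new - s) < tol:   # stop when successive guesses agree
            return s_new
        s = s_new
    return s

# Example: area of the triangle with sides 3, 4, 5 via Heron's formula,
# S = sqrt(p (p - a) (p - b) (p - c)) with p the semiperimeter.
a, b, c = 3.0, 4.0, 5.0
p = (a + b + c) / 2
print(heron_sqrt(p * (p - a) * (p - b) * (p - c)))   # prints 6.0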
The order of convergence of this method is, at least, quadratic. However, if the root sought has algebraic multiplicity greater than one (i.e., a double root, triple root, ...), the Newton-Raphson method loses its quadratic convergence and becomes linear, with asymptotic convergence constant 1 − 1/m, where m is the multiplicity of the root.

There are numerous ways to avoid this problem, such as the convergence-acceleration methods of Aitken's Δ² type or Steffensen's method, or modifying the iteration to x_{n+1} = x_n − m f(x_n)/f'(x_n), which restores quadratic convergence. Obviously, that modification requires knowing the multiplicity of the root in advance, which is not always possible. For this reason, the algorithm can also be modified by taking the auxiliary function g(x) = f(x)/f'(x), resulting in

x_{n+1} = x_n − g(x_n)/g'(x_n) = x_n − f(x_n) f'(x_n) / ( f'(x_n)² − f(x_n) f''(x_n) ).

Its main disadvantage in this case is how expensive it can be to obtain g(x) and g'(x) when f(x) is not easily differentiable.
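A minimal sketch (our own illustration, not from the original; the function names are arbitrary) of this modification, applied to f(x) = (x − 1)², which has a double root at x = 1:

# Newton applied to the auxiliary function g(x) = f(x)/f'(x) recovers
# quadratic convergence at multiple roots:
# x_{n+1} = x_n - f(x_n) f'(x_n) / (f'(x_n)^2 - f(x_n) f''(x_n)).
def modified_newton(f, df, d2f, x, n_iter=8, tol=1e-14):
    for _ in range(n_iter):
        fx, dfx, d2fx = f(x), df(x), d2f(x)
        denom = dfx ** 2 - fx * d2fx
        if abs(fx) < tol or denom == 0.0:   # already (numerically) at a root
            break
        x -= fx * dfx / denom
    return x

f = lambda x: (x - 1.0) ** 2        # double root at x = 1
df = lambda x: 2.0 * (x - 1.0)
d2f = lambda x: 2.0
print(modified_newton(f, df, d2f, x=3.0))   # 1.0 (reached in a single step here)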
On the other hand, the convergence of the method is shown to be quadratic in the most usual case by treating the method as a fixed-point iteration: if g'(r) = 0 and g''(r) ≠ 0, then the convergence is quadratic. However, it is subject to the particularities of these methods.

Note, however, that the Newton-Raphson method is an open method: its convergence is not guaranteed by a global convergence theorem, as it is for the false-position or bisection methods. Thus, it is necessary to start from an initial approximation close to the root sought in order for the method to converge and for the local convergence theorem to hold.

Local Convergence Theorem of Newton's Method

Let f ∈ C²(I). If p ∈ I is such that f(p) = 0 and f'(p) ≠ 0, then there exists r > 0 such that, if |x0 − p| ≤ r, the sequence {x_n} with x_{n+1} = x_n − f(x_n)/f'(x_n) verifies that |x_n − p| ≤ r for all n, and x_n tends to p when n tends to infinity. If, in addition, f''(p) ≠ 0, then the convergence is quadratic.

Newton's Method Global Convergence Theorem

Let f ∈ C²([a, b]) be a function satisfying:
i) f(a) f(b) < 0;
ii) f'(x) ≠ 0 for all x ∈ [a, b];
iii) f''(x) does not change sign for all x ∈ [a, b];
iv) max{ |f(a)/f'(a)|, |f(b)/f'(b)| } ≤ b − a.
Then there is a single s ∈ [a, b] such that f(s) = 0, and for every x0 ∈ [a, b] the sequence x_{n+1} = x_n − f(x_n)/f'(x_n) converges to s.

A. Newton-Raphson method

It is one of the methods most used in engineering to reach the result of the problem posed very quickly. It is based on drawing tangent lines that "take the shape" of the function by means of its first derivative. Suppose a function f(x) whose root we want to calculate. Evaluating the function at a value x1 close to the root and drawing the tangent line at the point (x1, f(x1)), a new value x2 is obtained which is much closer to the root than x1.
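To make this geometric description concrete, here is a short sketch (ours, not part of the original; the values of e and M are arbitrary) applying the tangent-line iteration to the Kepler equation f(E) = E − e sin E − M mentioned earlier:

import math

# Newton-Raphson: linearize f at the current point and take the x-intercept
# of the tangent line as the next approximation.
def newton(f, df, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Kepler's equation E - e*sin(E) - M = 0 for eccentricity e and mean anomaly M.
e, M = 0.1, 1.0
E = newton(lambda E: E - e * math.sin(E) - M,
           lambda E: 1.0 - e * math.cos(E),
           x0=M)
print(E, E - e * math.sin(E) - M)   # root and residual (residual ~ 0)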
Analytical deduction of the order of convergence of the Newton-Raphson method
The choice of the iteration function for the Newton-Raphson method is

g(x) = x − f(x) / f'(x),

whenever f' exists and does not vanish. Obviously, if the function f vanishes at a value s, then g(s) = s (s is a fixed point of g), and vice versa.

It can easily be proved that, if f is twice differentiable at s and f'(s) ≠ 0, then

g'(s) = f(s) f''(s) / f'(s)² = 0

and, therefore, the convergence of the Newton-Raphson method is at least of order two (quadratic), which makes it one of the most interesting methods for solving nonlinear equations. When f is at least of class C³ we can also calculate the second derivative of g and, using the characterization of the order of convergence, write

lim_{n→∞} (x_{n+1} − s) / (x_n − s)² = g''(s) / 2 = f''(s) / (2 f'(s)).

This limit can be zero in some cases (which would imply a higher order of convergence).
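For completeness, the standard fixed-point argument behind this limit can be summarized in a few lines (our restatement, in LaTeX notation, consistent with the definitions above):

\begin{align*}
x_{n+1} - s &= g(x_n) - g(s) \\
            &= g'(s)\,(x_n - s) + \tfrac{1}{2}\, g''(\xi_n)\,(x_n - s)^2
             = \tfrac{1}{2}\, g''(\xi_n)\,(x_n - s)^2 ,
\end{align*}
with $\xi_n$ between $x_n$ and $s$ (here we used $g(s) = s$ and $g'(s) = 0$), so that, letting $n \to \infty$,
\[
\lim_{n\to\infty} \frac{x_{n+1} - s}{(x_n - s)^2} \;=\; \frac{g''(s)}{2} \;=\; \frac{f''(s)}{2\,f'(s)} .
\]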
The analysis of the convergence of the secant and regula-falsi methods cannot be carried out in the same way, since neither of them can be expressed as a one-point functional iteration process of the type x_{n+1} = F(x_n): both of them use the two previous iterates to compute the next one. However, by means of the corresponding difference equations (see, for example, Kincaid-Cheney) the order of convergence of both methods can be analyzed. The order of the secant method is intermediate between linear and quadratic, p = (1 + √5)/2 ≈ 1.618, while the regula-falsi method is linear (p = 1).

Study of the convergence of the Newton-Raphson method

In the previous sections we have seen how to define a sequence

x_{n+1} = x_n − f(x_n) / f'(x_n)

with the objective of approximating the solution x* of the equation f(x) = 0. We have to be aware that the convergence of {x_n} to x* will not always happen. For it to happen, a series of conditions must be imposed on the function f, the starting point x0 or the root x*.

Specifically, we will distinguish 3 types of convergence results:

1. Local: conditions are given on the root x*.
2. Semilocal: conditions are given on the starting point x0.
3. Global: conditions are given over an interval.

There are a large number of publications with diverse convergence results for the Newton-Raphson method (see [4], [5], [7], [21], [28], [30], [47], [52], [54], [55], [57], [60], [68], [73], [74], [78] and [82]). It is not our objective to give an exhaustive list of all of them. However, by way of example, we will present one or two results of each type of convergence.

Local convergence

The theorem that we present below has been chosen taking into account the simple and powerful way of its demonstration. In it, two elements appear: the iteration function of the Newton-Raphson method, Nf(x), and its derivative, Lf(x), which take the following form:

Nf(x) = x − f(x) / f'(x),   (2.7)
Lf(x) = N'f(x) = f(x) f''(x) / f'(x)².   (2.8)

Functions (2.7) and (2.8) are widely used in proofs of convergence of many iterative methods, such as Halley's method and Chebyshev's method, as well as in books and articles on Numerical Analysis, among which we can cite [2], [23], [40] and [84].

It is important to note that expression (2.8) is identified as the degree of logarithmic convexity of a function. Reading [40] we find that the degree of logarithmic convexity of a function is a pointwise measure of convexity, given by the resistance of a function to being made "concave" by a logarithmic operator. In other words, it is the number of times that a logarithmic operator must be applied to a convex function to obtain a concave function as a result.

In addition, the degree of logarithmic convexity is related to the speed of convergence of the Newton-Raphson method. Indeed, reading [24] we find, from the geometrical interpretation of the Newton-Raphson method, that the greater the logarithmic convexity of the function y = f(x), the greater the speed of convergence of the Newton-Raphson sequence to the root of the equation f(x) = 0, so the logarithmic convexity is applied to build new variants of Newton's method. For an in-depth study of the variants of Newton's method based on the application of the degree of logarithmic convexity, see [40].
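As a small numerical illustration (a sketch of ours, with an arbitrary test function), one can check that Lf(x) is indeed the derivative of the Newton iteration function Nf(x):

# Degree of logarithmic convexity Lf(x) = f(x) f''(x) / f'(x)^2, which is also
# the derivative of the Newton iteration function Nf(x) = x - f(x)/f'(x).
def Nf(f, df, x):
    return x - f(x) / df(x)

def Lf(f, df, d2f, x):
    return f(x) * d2f(x) / df(x) ** 2

f = lambda x: x ** 3 - 2.0
df = lambda x: 3.0 * x ** 2
d2f = lambda x: 6.0 * x

x, h = 1.5, 1e-6
central_difference = (Nf(f, df, x + h) - Nf(f, df, x - h)) / (2.0 * h)   # ~ Nf'(x)
print(Lf(f, df, d2f, x), central_difference)   # the two values agree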
Global Convergence

In Numerical Analysis, two theorems are mainly used to understand the global behavior of one-point iterative methods. These are:

1. The Fixed Point Theorem.
2. The Fourier Theorem.
The Fixed Point Theorem gives us sufficient conditions for the convergence of an algorithm, starting from an initial value taken in an appropriate interval. For its demonstration it is necessary to know beforehand the meaning of a contractive function.

Definition 7 (Contractive function). A function g(x) ∈ C¹(I) is contractive in I if there exists 0 < L < 1 such that, for all x, y ∈ I, |g(x) − g(y)| ≤ L |x − y|. The condition |g(x) − g(y)| ≤ L |x − y| is called the Lipschitz condition, and L the Lipschitz constant. It is usually said that a function g(x) is contractive if its Lipschitz constant is less than unity.

Theorem 3 (Fixed Point). Let I be a closed interval and g: I → I a contraction (L < 1). Under these conditions, it follows that: i) the sequence defined by x_{n+1} = g(x_n) converges for every initial value x0 ∈ I; ii) the sequence converges to a fixed point of g, that is, if x* = lim_{n→∞} x_n, then x* = g(x*); iii) this fixed point is unique in I, that is, there exists a single x* ∈ I such that g(x*) = x*.
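As an illustration of Theorem 3 (a sketch of ours; the map and the interval are arbitrary choices), consider the iteration x_{n+1} = g(x_n) for g(x) = cos x on I = [0, 1], which maps I into itself and has |g'(x)| = |sin x| ≤ sin 1 < 1:

import math

# Fixed-point iteration x_{n+1} = g(x_n) for a contraction g: I -> I.
def fixed_point(g, x0, tol=1e-12, max_iter=200):
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

x_star = fixed_point(math.cos, x0=0.5)
print(x_star, math.cos(x_star) - x_star)   # x* ~ 0.739085, residual ~ 0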

Theorem 4 (Fourier convergence conditions). Let f(x): [a, b] → R be a function f ∈ C²[a, b] that satisfies the following conditions: i) f(a) f(b) < 0; ii) f'(x) ≠ 0 for all x ∈ [a, b]; iii) f''(x) does not change sign in [a, b]; iv) max{ |f(a)/f'(a)|, |f(b)/f'(b)| } ≤ b − a.

Then there exists a single root x* of (1.1) in [a, b], and the sequence {x_n}_{n=0}^∞ defined by (2.2) converges towards x* for any initial value x0 ∈ [a, b].

Demonstration. According to the conditions established in the hypothesis, several possibilities can occur. To fix ideas we will assume that f'(x) < 0 and that f''(x) ≥ 0 in [a, b], which ensures that ii) and iii) are met. Note that under these conditions x* is the only root of f(x) = 0 in [a, b]. In addition, since f'(x) < 0 and i) holds, it follows that f(a) > 0 and f(b) < 0, as shown in graph (2.5).

Having seen this, and assuming that x0 ∈ [a, x*], we will prove that: {x_n}_{n=0}^∞ is an increasing sequence, and the limit of {x_n}_{n=0}^∞ is the root x*. Let us see the first part: if x0 ∈ [a, x*], then x_n < x_{n+1} < x*. The demonstration is by induction. For n = 0 we verify that x0 < x1 < x*.
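In practice, the hypotheses of Theorem 4 can be checked numerically before starting the iteration. A rough sketch (ours, based on sampling the interval; a symbolic verification would be more rigorous):

# Approximate check of the Fourier conditions i)-iv) of Theorem 4 on [a, b].
def fourier_conditions(f, df, d2f, a, b, n=1000):
    xs = [a + (b - a) * k / n for k in range(n + 1)]
    cond_i = f(a) * f(b) < 0
    cond_ii = all(df(x) != 0 for x in xs)
    cond_iii = all(d2f(x) >= 0 for x in xs) or all(d2f(x) <= 0 for x in xs)
    cond_iv = max(abs(f(a) / df(a)), abs(f(b) / df(b))) <= b - a
    return cond_i and cond_ii and cond_iii and cond_iv

# Example: f(x) = x^2 - 2 on [1, 2] (root sqrt(2)); all four conditions hold,
# so the Newton sequence converges for every starting point x0 in [1, 2].
f, df, d2f = (lambda x: x * x - 2.0), (lambda x: 2.0 * x), (lambda x: 2.0)
print(fourier_conditions(f, df, d2f, 1.0, 2.0))   # True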
According to [40], Fourier's conditions are not the only way to guarantee the convergence of the Newton-Raphson method: taking into account the geometric interpretation of the method, we realize that its convergence is ensured as long as the second derivative of the function, f''(x), does not change sign in the interval [a, b] in which the solution x* lies, that is, as long as the function has no inflection point in that interval. In this sense, [40] makes a study of the Newton-Raphson method based on the degree of logarithmic convexity (2.8). For more details on this study, we refer again to [40].

Semilocal Convergence

Historically speaking, in 1829 Cauchy presented the first semilocal convergence result for the Newton-Raphson method. The details of this theorem can be found in [12], while in [94] a summary similar to the one presented below appears.

This result, initially established for real functions of a real variable, was generalized years later by Kantorovich in his famous theorem stated for operators defined in Banach spaces. Reading [36], we find that it was at the end of the 1940s when L. V. Kantorovich and L. B. Rall introduced the Newton-Raphson method in these types of spaces. Their approach establishes that if F is a sufficiently differentiable operator defined between two Banach spaces X and Y, then from a point x0 ∈ X the Newton-Raphson sequence is constructed.

At the end of the 1980s, a new theory emerged on the semilocal convergence of iterative processes. This theory, known as the α-theory, was introduced by H. M. Kim and S. Smale [18]. Kim introduced it in his doctoral thesis, entitled Computational Complexity of the Euler Type Algorithms for the Roots of Polynomials, published in February 1986, and in an article published in 1988 under the title On approximate zeroes and root finding algorithms. In both papers Kim applied the α-theory to one-variable polynomial equations. For his part, Smale, in a book entitled New Directions in Pure, Applied and Computational Mathematics, published in 1986, introduced the α-theory for systems of equations, whose solutions are characterized by means of three invariants.
Order of convergence and efficiency of the Newton-Raphson method

Convergence order

The order of convergence of an iterative method tells us the «speed» with which the sequence it generates converges to its limit. The main manifestation of this «speed» is the rate at which the number of correct significant digits of the approximate solution grows. Thus, when the iterative method has linear convergence, significant digits are gained at a roughly constant rate per iteration, whereas, if the convergence is quadratic, the number of correct significant digits roughly doubles at each iteration, and so on.
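This doubling of correct digits is easy to observe in practice. A tiny sketch (ours) with Newton's method for f(x) = x² − 2:

import math

# Errors |x_n - sqrt(2)| for Newton's iteration on f(x) = x^2 - 2, x0 = 2:
# roughly 5.9e-1, 8.6e-2, 2.5e-3, 2.1e-6, 1.6e-12 (digits double each step).
x, root = 2.0, math.sqrt(2.0)
for n in range(5):
    print(n, abs(x - root))
    x = x - (x * x - 2.0) / (2.0 * x)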
The theorem we present below shows that, for simple roots, the Newton-Raphson method has quadratic convergence, that is, convergence of order two.
Order of computational convergence

Recall that in computational mathematics an iterative method solves an equation of type (1.1) by successive approximations to the solution. Taking this into account, Weerakoon and Fernando [90], Grau [38] and Grau and Gutiérrez [39] have introduced different ways to approximate the order of convergence of an iterative method. In particular, each of them defines what is called the computational order of convergence (OCC) of an iterative method. We therefore have three definitions of the OCC, which we will denote OCCW, OCCG and OCCGG.

According to [90], the OCCW is expressed by the formula

ρ_n ≈ ln | (x_{n+1} − x*) / (x_n − x*) | / ln | (x_n − x*) / (x_{n−1} − x*) |,

where x* denotes the solution of (1.1).
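A sketch (ours) of how the OCCW can be estimated from three consecutive iterates when the solution x* is known, using the Newton sequence for f(x) = x² − 2:

import math

# Computational order of convergence (OCCW) from three consecutive iterates.
def occw(x_prev, x_curr, x_next, x_star):
    return (math.log(abs((x_next - x_star) / (x_curr - x_star)))
            / math.log(abs((x_curr - x_star) / (x_prev - x_star))))

xs = [2.0]                                  # Newton iterates for x^2 - 2
for _ in range(4):
    x = xs[-1]
    xs.append(x - (x * x - 2.0) / (2.0 * x))
print(occw(xs[1], xs[2], xs[3], math.sqrt(2.0)))   # ~ 2, i.e. quadratic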


Bibliography

[1] J. C. Adams, On Newton's solution of Kepler's problem, Monthly Notices of the Royal Astronomical Society, Vol. 43 (1882), pp. 43–49.
[2] S. Amat, S. Busquier, J. M. Gutiérrez, Geometric constructions of iterative functions to solve nonlinear equations, Journal of Computational and Applied Mathematics, 157 (2003) 197–205.
[3] S. Amat, S. Busquier, D. El Kebir, J. Molina, A fast Chebyshev's method for quadratic equations, Applied Mathematics and Computation, 148 (2004) 461–474.
[4] I. K. Argyros, Newton methods, Nova Science Publishers, Inc., New York, 2004.
[5] A. Aubanell, A. Benseny y A. Delshams, Útiles básicos de cálculo numérico, Editorial Labor, S. A., Barcelona, España, 1993.
[6] D. K. R. Babajee and M. Z. Dauhoo, An analysis of the properties of the variants of Newton's method with third order convergence, Applied Mathematics and Computation, 183, Issue 1 (2006) 659–684.
[7] R. G. Bartle, Introducción al análisis matemático, Editorial Limusa, S. A., México, 1980.
[8] A. Ben-Israel, Newton's Method with modified functions, Contemporary Mathematics, 204 (1997) 39–50.
[9] R. Burden and D. Faires, Análisis Numérico, Grupo Editorial Iberoamérica, España, 1985.
[10] B. Carnahan and H. A. Luther, Applied Numerical Methods, John Wiley and Sons, New York, 1969.
[11] J. D. Cassini, Nouvelle manière géometrique et directe de trouver les apogées, les excentricités, et les anomalies du mouvement des planètes, Mémoires de l'Académie Royale des Sciences, Vol. 10 (1669), pp. 488–491.
[12] A. L. Cauchy, Leçons sur le Calcul Différentiel. Sur la détermination approximative des racines d'une équation algébrique ou transcendante, Paris, 1829.
[13] S. C. Chapra and R. P. Canale, Numerical methods for engineers, McGraw-Hill, 1998.
[14] J. Chavarriga, I. A. García, J. Giné, Manual de métodos numéricos, Ediciones de la Universidad de Lleida, 1998.
[15] A. M. Cohen, Análisis Numérico, Editorial Reverté, S. A., 1977.
[16] D. Conte y C. De Boor, Análisis Numérico elemental, McGraw-Hill, Inc., USA, 1972.
[17] J. M. A. Danby and T. M. Burkardt, The solution of Kepler's equation I, Celestial Mechanics, Vol. 31 (1983), pp. 95–107.
[18] J. P. Dedieu and M. H. Kim, Newton's method for analytic systems of equations with constant rank derivatives, Journal of Complexity, 18 (2002) 187–209.
[19] G. C. Donovan, A. R. Miller and T. J. Moreland, Pathological functions for Newton's method, American Mathematical Monthly, 100 (1993) 53–58.
[20] N. L. De la Caille, Sur les éléments de la théorie du soleil, Premier Mémoire, Mémoires de l'Académie Royale des Sciences, A2 (1750) 166–178.
[21] J. M. Díaz y F. B. Trujillo, Introducción a los métodos numéricos para la resolución de ecuaciones, Servicio de Publicaciones de la Universidad de Cádiz, 1998.
[22] P. Díez, A note on the convergence of the Secant Method for simple and multiple roots, Applied Mathematics Letters, 16 (2003) 1211–1215.
[23] J. A. Ezquerro, J. M. Gutiérrez, M. A. Hernández y M. A. Salanova, El método de Halley: posiblemente, el método más redescubierto del mundo, Margarita Mathematica, Servicio de Publicaciones de la Universidad de La Rioja, Logroño, 2001.
[24] J. A. Ezquerro, Construcción de procesos iterativos mediante aceleración del método de Newton, Tesis Doctoral, Servicio de Publicaciones de la Universidad de La Rioja, Logroño, 1996.
[25] W. F. Ford and J. A. Pennline, Accelerated convergence in Newton's method, SIAM Review, 38 (1996) 658–659.
[26] J. S. Frame, A variation of Newton's method, American Mathematical Monthly, 51 (1944) 36–38.
[27] C. E. Fröberg, Introduction to numerical analysis, Addison-Wesley, Second edition, 1970.
[28] F. García y A. Nevot, Métodos numéricos en forma de ejercicios resueltos, UPCU Departamento Publicaciones, Madrid, 1997.
[29] B. García, I. Higueras y T. Roldán, Análisis Matemático y Métodos Numéricos, Universidad Pública de Navarra, Campus de Arrosadía, Pamplona, España, 2005.
[30] M. Gasca, Cálculo Numérico. Resolución de ecuaciones y sistemas, Editora Librería Central, Zaragoza, 1987.
[31] E. Gaughan, Introducción al análisis, Editorial Alhambra, S. A., Madrid, 1972.
[32] W. Gautschi, Numerical Analysis. An introduction, Birkhäuser, Boston, USA, 1997.
[33] J. Gerlach, Accelerated convergence in Newton's Method, SIAM Review, 36 (1994) 272–276.
[34] J. M. Gutiérrez, M. A. Hernández y M. A. Salanova, α-theory for nonlinear Fredholm integral equations, Grazer Mathematische Berichte, 346 (2004) 187–196.
[35] J. M. Gutiérrez, A new semilocal convergence theorem for Newton's method, Journal of Computational and Applied Mathematics, 79 (1997) 131–145.
[36] J. M. Gutiérrez, El método de Newton en espacios de Banach, Tesis doctoral, Universidad de La Rioja, Servicio de Publicaciones, Logroño, 1995.
[37] J. M. Gutiérrez y M. A. Hernández, A family of Chebyshev-Halley type methods in Banach spaces, Austral. Math. Soc., 55 (1997) 113–130.
[38] M. Grau, Eficiencia computacional de métodos iterativos con multiprecisión, Comunicación particular.
[39] M. Grau and J. M. Gutiérrez, Some families of zero-finder methods derived from Obreshkov's techniques, Comunicación particular.
[40] M. A. Hernández y M. A. Salanova, La convexidad en la resolución de ecuaciones escalares no lineales, Servicio de Publicaciones de la Universidad de La Rioja, Logroño, 1996.
[41] F. H. Hildebrand, Introduction to Numerical Analysis, Dover Publications, Inc., New York, 1974.
[42] P. Horton, No Fooling! Newton's Method Can Be Fooled, Mathematics Magazine, 80 (2007) 383–387.
[43] J. A. Infante y J. M. Rey, Métodos numéricos. Teoría, problemas y prácticas con MATLAB, Ediciones Pirámide, tercera edición, Madrid, 2007.
[44] L. V. Kantorovich, On Newton's method for functional equations, Dokl. Akad. Nauk SSSR, 59 (1948) 1237–1240.
[45] L. V. Kantorovich, The majorant principle and Newton's method, Dokl. Akad. Nauk SSSR, 76 (1951) 17–20.
[46] L. V. Kantorovich and G. P. Akilov, Functional Analysis, Pergamon, Oxford, 1982.
[47] A. Kharab and R. B. Guen, An introduction to numerical methods. A MATLAB approach, Chapman and Hall/CRC, Florida, 2002.
[48] D. Kincaid, Análisis Numérico, Addison-Wesley Iberoamericana, 1991.
[49] E. Kobal, Notice concerned with the calculation of roots of numerical equations, Monatsh. Math., 02 (2002) 331–332.
[50] R. J. Knill, A modified Babylonian algorithm, Amer. Math. Monthly, 99 (1997) 734–737.
[51] T. Levi-Civita, Sopra la equazione di Kepler, Astronomische Nachrichten, Vol. 164 (1904), pp. 313–314.
[52] O. M. López, Métodos iterativos de resolución de ecuaciones, Editorial Alhambra, S. A., Madrid, 1986.
[53] R. Lozada, Análisis Matemático, Ediciones Pirámide, Madrid, 1978.
[54] I. Martín y V. M. Pérez, Cálculo numérico para computación en ciencia e ingeniería. MATLAB, Editora Síntesis S. A., Madrid, 1998.
[55] V. Muto, Curso de métodos numéricos, Servicio Editorial de la Universidad del País Vasco, 1998.
[56] S. Nakamura, Análisis numérico y visualización gráfica con MATLAB, Prentice-Hall Inc., A Simon & Schuster Company, México, 1997.
[57] C. Neuhauser, Matemática para ciencias, Pearson Educación, S. A., Madrid, 2004.
[58] I. Newton, Sir Isaac Newton's Mathematical Principles of Natural Philosophy and his System of the World, University of California Press, Berkeley, 1934.
[59] A. Neumaier, Introduction to numerical analysis, Cambridge University Press, UK, 2001.
