
Fixed Point Iterations

Dr. Philippe B. Laval


Kennesaw State University
January 29, 2010

Abstract
Iteration is a fundamental principle in computer science. As the name suggests, a process is repeated until an answer is found. Iterative techniques are used in solving many problems, including (but not limited to) finding roots of equations, solving linear as well as nonlinear systems of equations, and solving differential equations. In this document, we will describe fixed point iterations and how they apply to finding roots of equations.

1 Introduction
1.1 Basic Concepts
We recall some basic concepts and introduce new ones.
Definition 1 (zero of a function) Let f be a function. A zero of f is a number r for which f(r) = 0.

Definition 2 (root of an equation) Let f be a function. Any number r for which f(r) = 0 is a root of the equation f(x) = 0.

Definition 3 (fixed point) A fixed point for a function g is a number p such that g(p) = p.
Geometrically, the fixed points of a function y = g(x) are the points where the graphs of y = g(x) and y = x intersect. In theory, finding the fixed points of a function g is as easy as solving the equation g(x) = x.
Example 4 The function g(x) = x^2 - 2 has two fixed points. We can find them by solving

x^2 - 2 = x
x^2 - x - 2 = 0
(x - 2)(x + 1) = 0
x = -1 or x = 2

Figure 1: Fixed points of y = x^2 - 2

The fixed points can also be found in Figure 1, by looking at the intersection of y = x and y = x^2 - 2.
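The algebra above is easy to check numerically. Here is a minimal Python sketch (the function name g is an arbitrary choice) that verifies that -1 and 2 are indeed fixed points of g(x) = x^2 - 2:

    def g(x):
        # the function from Example 4
        return x**2 - 2

    # a number p is a fixed point when g(p) = p
    for p in (-1, 2):
        print(p, g(p), g(p) == p)   # prints: -1 -1 True  and  2 2 True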

1.2 Relation Between Fixed Points and Roots


Given a function f, one can define a function g (in many ways) such that a zero of f (i.e. a solution of f(x) = 0) is also a fixed point of g.

For example, if we define g(x) = x - f(x), and if p is a zero of f, that is f(p) = 0, then

g(p) = p - f(p)
     = p - 0
     = p

Thus, p is a fixed point of g.

In fact, if C is any constant, and if p is a zero of f, then p will also be a fixed point of g(x) = x + Cf(x), because

g(p) = p + Cf(p)
     = p + C(0)
     = p

Conversely, given a function g, one can define a function f such that any fixed point of g is also a zero of f. For example, if f(x) = x - g(x) and g(p) = p, then f(p) = p - g(p) = 0.
Example 5 Verify that a fixed point of g(x) = sqrt(10/(4 + x)) is also a zero of f(x) = x^3 + 4x^2 - 10.
If p is a fixed point of g, then we have

p = g(p)
  = sqrt(10/(4 + p))
p^2 = 10/(4 + p)
4p^2 + p^3 = 10
4p^2 + p^3 - 10 = 0
f(p) = 0

Thus, p is a zero of f.
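As a quick numerical sanity check, we can take the approximate value p ≈ 1.365230013 (computed later, in Example 11) and verify in Python that it is nearly a fixed point of g and nearly a zero of f; the following is a sketch using that approximation.

    import math

    def g(x):
        return math.sqrt(10 / (4 + x))

    def f(x):
        return x**3 + 4 * x**2 - 10

    p = 1.365230013        # approximate fixed point of g (see Example 11)
    print(abs(g(p) - p))   # very small, so p is (approximately) a fixed point of g
    print(abs(f(p)))       # very small, so p is (approximately) a zero of f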

Though we know what has to be done to find a fixed point of a function g (solve g(x) = x), we may not know how to do it. Thus, we will study some iteration techniques that allow us to approximate fixed points. But before we embark on trying to find fixed points, we need to be sure they exist. That is the purpose of the next section.

1.3 Existence of Fixed Points: Theory


In this section, we use two theorems from Calculus: the Mean Value Theorem and the Intermediate Value Theorem. Before we continue, let us establish some notation. You have already worked with sets, but most (if not all) of the sets you have studied were sets of numbers. As you know, sets can contain anything. A set which will be of importance to us is the set of functions which are continuous on some interval [a, b]. This set is denoted C[a, b]. From now on, instead of saying "Let f be a continuous function on [a, b]", we will simply say "Let f ∈ C[a, b]".

Theorem 6 Let g ∈ C[a, b].

1. If g(x) ∈ [a, b] for every x ∈ [a, b], then g has a fixed point in [a, b].

2. If in addition g'(x) exists on (a, b) and there exists a constant 0 < k < 1 such that |g'(x)| ≤ k for all x ∈ (a, b), then the fixed point in [a, b] is unique.

Proof. We prove each part separately.

1. If either g(a) = a or g(b) = b, then we are done: g has a fixed point at either a or b. If that is not the case, then since g(x) ∈ [a, b], we must have g(a) > a and g(b) < b. We define h(x) = x - g(x). Then h is continuous on [a, b]. Furthermore, h(a) = a - g(a) < 0 and h(b) = b - g(b) > 0. Thus, h(a) and h(b) have opposite signs. By the Intermediate Value Theorem, there exists p ∈ [a, b] such that h(p) = 0. But h(p) = p - g(p). Thus, g(p) = p; hence p is a fixed point for g, and p ∈ [a, b].
2. We do a proof by contradiction: we assume that g has two distinct fixed points p and q (say p < q) in [a, b] and arrive at a contradiction. Since g'(x) exists on (a, b), g is differentiable on (p, q). By the Mean Value Theorem, there exists c ∈ (p, q) such that

g'(c) = (g(p) - g(q)) / (p - q)
      = (p - q) / (p - q)
      = 1

Thus |g'(c)| = |1| = 1, which contradicts the fact that |g'(x)| ≤ k < 1.

Intuitively, it is not too hard to understand why the theorem works. Part 1 of the theorem says that the graph of g has to remain in the square shown on Figure 2. Picture yourself trying to draw the graph of a function g satisfying the conditions of the theorem. You would start at g(a) and go all the way to g(b). If either g(a) = a or g(b) = b, then we are done: g has a fixed point at either a or b. If that is not the case, then since g(x) ∈ [a, b], we must have g(a) > a and g(b) < b. Assume that g(a) and g(b) are the big dots shown on Figure 2. Since g is continuous, its graph has no breaks or holes. The only way the two dots can be joined by a graph which has no breaks is if the graph goes through the line y = x. This implies that the corresponding function has a fixed point there. For part 2 of the theorem, look at Figure 3. Suppose you have started to draw the graph to join the two dots. Once you have crossed the line, to cross it again the slope of the graph would have to be larger than the slope of the line; but the slope of the line is 1, and by assumption |g'(x)| < 1. The line cannot be crossed twice, hence there is only one fixed point.

Figure 2: Existence of a fixed point

Example 7 Consider g(x) = (x^2 - 1)/3 on the interval [-1, 1].
g'(x) = (2/3)x. It is left as an exercise to check that the global minimum of g occurs at x = 0; the global minimum is g(0) = -1/3. The global maximum occurs at x = ±1; the global maximum is 0. Thus, -1/3 ≤ g(x) ≤ 0, and therefore g(x) ∈ [-1, 1]. By Theorem 6, g has a fixed point in [-1, 1]. In addition, |g'(x)| = (2/3)|x| ≤ 2/3 < 1 if x ∈ [-1, 1]. Hence, by Theorem 6, g has a unique fixed point in [-1, 1].
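The two hypotheses of Theorem 6 used in Example 7 can also be checked numerically. The Python sketch below samples g(x) = (x^2 - 1)/3 on a grid over [-1, 1] (the grid size is an arbitrary choice):

    def g(x):
        return (x**2 - 1) / 3

    def gprime(x):
        return 2 * x / 3

    xs = [-1 + 2 * i / 1000 for i in range(1001)]    # 1001 sample points in [-1, 1]
    print(all(-1 <= g(x) <= 1 for x in xs))          # True: g maps [-1, 1] into itself
    print(max(abs(gprime(x)) for x in xs))           # 2/3, which is less than 1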
Example 8 Consider g(x) = 3^(-x) on the interval [0, 1].
g'(x) = -3^(-x) ln 3 < 0 on [0, 1]. Thus, g is strictly decreasing on [0, 1]. This implies that the global maximum occurs at x = 0; the global maximum is g(0) = 1. The global minimum occurs at x = 1; the global minimum is g(1) = 1/3. Thus, 1/3 ≤ g(x) ≤ 1; in other words, g(x) ∈ [0, 1]. By Theorem 6, g has a fixed point in [0, 1]. This time, we cannot use part 2 of Theorem 6 to establish uniqueness: g'(0) = -ln 3, thus |g'(0)| > 1. However, since g is strictly decreasing, we know it can only cross the line y = x once.
We now develop an algorithm to approximate fixed points.

Figure 3: Uniqueness of Fixed Point

2 Finding Fixed Points


2.1 Description of the Method
Definition 9 (fixed-point iteration) The iteration p_n = g(p_{n-1}) for n = 1, 2, ... is called a fixed-point iteration.

Theorem 10 If g is a continuous function and {p_n} is a sequence generated by fixed-point iteration such that lim_{n→∞} p_n = p, then p is a fixed point of g.

Proof. If lim_{n→∞} p_n = p, then lim_{n→∞} p_{n-1} = p. Since g is continuous, we have:

p = lim_{n→∞} p_n
  = lim_{n→∞} g(p_{n-1})
  = g(lim_{n→∞} p_{n-1})    (g is continuous)
  = g(p)

Thus p is a fixed point for g.

Not every sequence generated by fixed-point iteration will converge. The figures below illustrate this. To understand how the figures work, start at p_0, then follow the lines. More precisely, from p_0, move vertically until we hit the graph; this gives us p_1 = g(p_0). We now use p_1 as our new starting point, so we need to copy its location onto the x-axis. To do so, from where we were, we move horizontally until we hit the line y = x, then we move vertically until we hit the x-axis. This gives us p_1. We repeat the procedure. The figures illustrate the following cases:

If 0 < g'(x) < 1, we have monotone convergence, as shown on Figure 4. In this case, the points p_n get closer and closer to the fixed point as n increases.

If -1 < g'(x) < 0, we have oscillating convergence, as shown on Figure 5. In this case, as n increases, p_n oscillates from one side of the fixed point to the other. p_n will eventually converge to the fixed point.

If g'(x) > 1, we have monotone divergence, as shown on Figure 6. In this case, the points p_n get further away from the fixed point as n increases.

If g'(x) < -1, we have divergent oscillations, as shown on Figure 7. In this case, as n increases, p_n oscillates from one side of the fixed point to the other, and also gets further away from the fixed point.
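A simple way to observe these four behaviors is to iterate a linear function g(x) = p + a(x - p), whose fixed point is p and whose derivative is the constant a. The Python sketch below (with the arbitrary choices p = 1 and p_0 = 2) prints the first few iterates for a = 0.5, -0.5, 1.5 and -1.5, reproducing monotone convergence, oscillating convergence, monotone divergence and divergent oscillations respectively.

    def iterate(g, p0, n):
        # return the first n iterates p1, ..., pn of the fixed-point iteration
        values = []
        p = p0
        for _ in range(n):
            p = g(p)
            values.append(p)
        return values

    p = 1.0    # fixed point of every linear map below
    for a in (0.5, -0.5, 1.5, -1.5):
        g = lambda x, a=a: p + a * (x - p)    # g'(x) = a everywhere
        print(a, [round(v, 4) for v in iterate(g, 2.0, 5)])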

2.2 The Algorithm


To find a solution to p = g(p) given an initial approximation p0.

INPUT: g, p0, TOL (tolerance) and N (maximum number of iterations)

OUTPUT: the approximate solution p (fixed point) or a message of failure.

step 1: Set i = 1

step 2: while i <= N do steps 3 - 6

– step 3: set p = g(p0) (this computes p_i)

– step 4: if |p - p0| < TOL then
OUTPUT p (the procedure succeeded)
STOP
– step 5: set i = i + 1
– step 6: set p0 = p (update the starting point)

step 7: OUTPUT error message

step 8: STOP
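In Python, the algorithm above might be implemented as the following sketch (the function name and the choice to return the number of iterations are arbitrary):

    def fixed_point_iteration(g, p0, tol, n_max):
        """Approximate a solution of p = g(p) starting from p0."""
        for i in range(1, n_max + 1):
            p = g(p0)                  # step 3: compute the next iterate
            if abs(p - p0) < tol:      # step 4: stopping criterion
                return p, i            # the procedure succeeded
            p0 = p                     # step 6: update the starting point
        raise RuntimeError("no convergence after n_max iterations")

For example, fixed_point_iteration(lambda x: (10 / (4 + x)) ** 0.5, 1.5, 1e-9, 100) approximates the fixed point of the function g of Example 5, about 1.365230013.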

Figure 4: Monotone Convergence

The next example illustrates the fact that, for a given root-finding problem, there are several fixed-point problems which can be associated with it. Some of them will converge faster than others; some may even diverge. So a bigger question, which we will not answer right now, is: "How can we find a fixed-point problem that produces a sequence that rapidly converges to a solution of a given root-finding problem?" The theorem we will study in the next section will provide part of the answer. For now, let us look at a root-finding problem with several fixed-point problems associated with it. We apply the above algorithm to each of them to observe which ones converge, and how fast.

Example 11 Let f(x) = x^3 + 4x^2 - 10. Since f(1) = -5 and f(2) = 14, the Intermediate Value Theorem tells us that f(x) = 0 has a root in [1, 2]; since f is increasing on [1, 2], this root is unique. We leave it as an exercise to verify that a fixed point of each of the functions given below is a solution of f(x) = 0.

1. g1(x) = x - x^3 - 4x^2 + 10

Figure 5: Oscillating Convergence

2. g2(x) = sqrt(10/x - 4x)

3. g3(x) = (1/2) sqrt(10 - x^3)

4. g4(x) = sqrt(10/(4 + x))

5. g5(x) = x - (x^3 + 4x^2 - 10)/(3x^2 + 8x)

With p0 = 1.5 and TOL = 0.000000001, we obtain the following results:

Figure 6: Monotone Divergence

n    g1            g2            g3            g4            g5
0    1.5           1.5           1.5           1.5           1.5
1    -0.875        0.8165        1.286953768   1.348399725   1.373333333
2    6.732         2.9969        1.402540804   1.367376372   1.365262015
3    -469.7        sqrt(-8.65)   1.345458374   1.364957015   1.365230014
4    1.03 x 10^8                 1.375170253   1.365264748   1.365230013
5                                1.360094193   1.365225594
6                                1.367846968   1.365230576
7                                1.363887004   1.365229942
8                                1.365916734   1.365230022
9                                1.364878217   1.365230012
10                               1.365410062   1.365230014
15                               1.365223680   1.365230013
20                               1.365230236
25                               1.365230006
30                               1.365230013

Figure 7: Divergent Oscillations

The sequence produced by g1 diverges. The sequence produced by g2 becomes undefined (it requires the square root of a negative number). The remaining functions produce sequences that converge to the root. If we solve the equivalent root-finding problem using the bisection method, it takes 30 iterations. So g4, and especially g5, performed very well.
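The results in the table can be reproduced with a short script. The Python sketch below applies the algorithm of section 2.2 to each of the five functions, catching the breakdowns of g1 (overflow) and g2 (square root of a negative number):

    import math

    def g1(x): return x - x**3 - 4 * x**2 + 10
    def g2(x): return math.sqrt(10 / x - 4 * x)
    def g3(x): return 0.5 * math.sqrt(10 - x**3)
    def g4(x): return math.sqrt(10 / (4 + x))
    def g5(x): return x - (x**3 + 4 * x**2 - 10) / (3 * x**2 + 8 * x)

    for name, g in [("g1", g1), ("g2", g2), ("g3", g3), ("g4", g4), ("g5", g5)]:
        p0, tol = 1.5, 1e-9
        outcome = "did not converge within 100 iterations"
        try:
            for i in range(1, 101):
                p = g(p0)
                if abs(p - p0) < tol:
                    outcome = "converged to %.9f after %d iterations" % (p, i)
                    break
                p0 = p
        except (ValueError, OverflowError):
            outcome = "broke down at iteration %d" % i
        print(name, outcome)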

3 Fixed Point Theorem


As noticed earlier, the sequence generated by fixed-point iteration does not always converge. We now state and prove a theorem which gives us sufficient conditions for such a sequence to converge.

Theorem 12 (Fixed point theorem) Suppose that g satisfies the conditions below:

1. g ∈ C[a, b]

2. g(x) ∈ [a, b] for all x ∈ [a, b]

3. g' exists on (a, b)

Then the following is true:

1. If there exists a constant 0 < k < 1 such that |g'(x)| ≤ k for all x ∈ [a, b], then the iteration p_n = g(p_{n-1}) will converge to the unique fixed point p in [a, b]. In this case, p is said to be an attractive fixed point.

2. If |g'(x)| > 1 for all x ∈ [a, b], then the iteration p_n = g(p_{n-1}) will not converge to p. In this case, p is said to be a repelling fixed point.

Proof. Theorem 6 implies that a unique fixed point p exists in [a, b]. Since g(x) ∈ [a, b] for all x ∈ [a, b], the sequence {p_n} is defined for all n ≥ 0 and p_n ∈ [a, b] for all n. Using the Mean Value Theorem, there exists c between p_{n-1} and p (hence c ∈ (a, b)) such that

g'(c) = (g(p_{n-1}) - g(p)) / (p_{n-1} - p)

or, since g(p_{n-1}) = p_n and g(p) = p,

|p_n - p| = |g(p_{n-1}) - g(p)| = |g'(c)| |p_{n-1} - p|

Since |g'(x)| ≤ k for all x ∈ [a, b], we obtain

|p_n - p| ≤ k |p_{n-1} - p|

If we apply the same argument to |p_{n-1} - p|, we get

|p_{n-1} - p| ≤ k |p_{n-2} - p|

and so on. It follows that

|p_n - p| ≤ k |p_{n-1} - p|
         ≤ k^2 |p_{n-2} - p|
         ≤ ...

Applying this inequality inductively, we obtain

|p_n - p| ≤ k^n |p_0 - p|    (1)

We recall from Calculus that if 0 < k < 1 then lim_{n→∞} k^n = 0; therefore

lim_{n→∞} |p_n - p| = 0

and p_n → p. If, on the other hand, |g'(x)| > 1 for all x ∈ [a, b], then the same Mean Value Theorem argument gives |p_n - p| = |g'(c)| |p_{n-1} - p| > |p_{n-1} - p| whenever p_{n-1} ≠ p, so the iterates move away from p at every step, and p_n will not converge to p.

Corollary 13 If g satisfies the hypotheses of Theorem 12, then bounds for the error involved in using p_n to approximate p are given by:

|p_n - p| ≤ k^n max{p_0 - a, b - p_0}

and

|p_n - p| ≤ (k^n / (1 - k)) |p_1 - p_0|  for all n ≥ 1
Proof. The first bound follows from equation (1), since both p_0 and p are in [a, b], so that |p_0 - p| ≤ max{p_0 - a, b - p_0}. For n ≥ 1, the procedure used in the proof of Theorem 12 implies that

|p_{n+1} - p_n| = |g(p_n) - g(p_{n-1})|
               ≤ k |p_n - p_{n-1}|
               = k |g(p_{n-1}) - g(p_{n-2})|
               ≤ k^2 |p_{n-1} - p_{n-2}|
               ≤ ...
               ≤ k^n |p_1 - p_0|

So, for m > n ≥ 1, we have

|p_m - p_n| = |p_m - p_{m-1} + p_{m-1} - p_{m-2} + ... + p_{n+1} - p_n|
            ≤ |p_m - p_{m-1}| + |p_{m-1} - p_{m-2}| + ... + |p_{n+1} - p_n|
            ≤ k^{m-1} |p_1 - p_0| + k^{m-2} |p_1 - p_0| + ... + k^n |p_1 - p_0|
            = k^n (1 + k + k^2 + ... + k^{m-n-1}) |p_1 - p_0|
            = k^n |p_1 - p_0| Σ_{i=0}^{m-n-1} k^i

Since lim_{m→∞} p_m = p, we obtain

|p - p_n| = lim_{m→∞} |p_m - p_n| ≤ k^n |p_1 - p_0| Σ_{i=0}^{∞} k^i

Recall from Calculus that Σ_{i=0}^{∞} k^i is a geometric series with ratio k. Since 0 < k < 1, the series converges to 1/(1 - k). Thus, we get our second bound.
The rate of convergence of the sequence depends on the factor k. If k is small, then k^n will approach 0 quickly and the convergence of the sequence will be fast. If k is close to 1, the convergence will be slow. For example, if we look at g4(x) = sqrt(10/(4 + x)) in Example 11 for x ∈ [1, 2], we have

|g4'(x)| = sqrt(10) / (2 (4 + x)^(3/2))
         ≤ sqrt(10) / (2 · 5^(3/2))  for x ∈ [1, 2]
         ≈ 0.15
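The second bound of Corollary 13 can be tested numerically for g4. In the Python sketch below, we take k = 0.15 (the estimate above) and p0 = 1.5 as in Example 11, and compare the bound k^n |p1 - p0| / (1 - k) with the actual error, measured against a reference value obtained by iterating far past convergence:

    import math

    def g4(x):
        return math.sqrt(10 / (4 + x))

    k = 0.15          # bound on |g4'(x)| on [1, 2], from the estimate above
    p0 = 1.5
    p1 = g4(p0)

    p = p0            # reference value of the fixed point
    for _ in range(100):
        p = g4(p)

    for n in (1, 5, 10, 15):
        bound = k**n / (1 - k) * abs(p1 - p0)    # a priori bound of Corollary 13
        pn = p0
        for _ in range(n):
            pn = g4(pn)
        print(n, abs(pn - p), bound)             # the actual error stays below the bound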

4 Problems
Exercise 14 Show that a fixed point of g1, g2, g3, g4, and g5 is a zero of f in Example 11.

Exercise 15 Determine rigorously if each function has a unique fixed point on the given interval.

1. g(x) = 1 - x^2/4 on [0, 1]
2. g(x) = 2^(-x) on [0, 1]
3. g(x) = 1/x on [0.5, 5.2]
Exercise 16 Let g(x) = x^2 + x - 4. Can fixed-point iteration be used to find the solution(s) to the equation x = g(x)? Why?

Exercise 17 Use algebraic manipulations to show that each function below has a fixed point p which is also a zero of f(x) = x^4 + 2x^2 - x - 3.

1. g1(x) = (3 + x - 2x^2)^(1/4)

2. g2(x) = sqrt((x + 3 - x^4)/2)

3. g3(x) = sqrt((x + 3)/(x^2 + 2))

4. g4(x) = (3x^4 + 2x^2 + 3)/(4x^3 + 4x - 1)

