
2 Basic Functional Analysis

2.1 Some Notation

Let R denote the real numbers, C the complex numbers, and K denote either R or C when a statement applies to both. For a, b ∈ R with a < b we use the following notations for open, closed and half-open intervals: (a, b), [a, b], (a, b], [a, b). Sometimes we will denote an interval by the symbol I. The notations C(R), C(a, b), C[a, b], C(I) all denote the set of continuous functions on the respective interval. Similarly, for any interval, C^k(I) denotes the set of functions which, together with all derivatives up to and including the k-th derivative, are continuous. We use the notation L^p(I) to denote the set of all functions for which

    ‖f‖_p = ( ∫_I |f(x)|^p dx )^{1/p} < ∞.

At this point we do not consider other properties of functions in L^p(I), but this forms an important part of analysis which is the main topic of the course Real Analysis, namely, the Lebesgue integral. Of particular importance is the case p = 2. The set L^2(I) is then referred to as a Hilbert space.
2.2 Vector Spaces

Definition 2.1. A linear space (or vector space) is a triple (X, +, ·) where X is a set of objects (called vectors), + (called vector addition) is a binary operation + : X × X → X, and · (called scalar multiplication) is a map · : K × X → X. In addition, these operations satisfy (for all x, y, z ∈ X and α, β ∈ K):

1. (a) x + y = y + x
   (b) x + (y + z) = (x + y) + z
   (c) there exists a vector 0 ∈ X such that x + 0 = x
   (d) for each x ∈ X there exists a unique vector, denoted −x, so that x + (−x) = 0
2. (a) α · (β · x) = (αβ) · x
   (b) 1 · x = x
3. (a) (α + β) · x = α · x + β · x
   (b) α · (x + y) = α · x + α · y

Remark 2.1. 1. If for K we use R for scalars then X is called a real vector space, and if we use C then it is called a complex vector space.

2. The same notation 0 is used for the zero in R, C and X. Also the same + is used. This is bad notation but is standard practice.

3. Axiom 1(b) in Definition 2.1 shows that the sum of several vectors can be written without parentheses, i.e., x + y + z. Repeated application shows that this is true for any finite sum of vectors, so for {x_j}_{j=1}^n we can write ∑_{j=1}^n x_j.

4. We often will write x − y when, in fact, we mean x + (−y).

5. It can be proved that 0 · x = 0 (the number zero times a vector is the zero vector) and (−1) · x = −x (the number −1 times a vector is the additive inverse of the vector).
An important concept in studying vector spaces is the idea of linear dependence and independence.

Definition 2.2. Let X be a vector space over K and let {x_j}_{j=1}^k ⊂ X be a set of vectors from X.

1. ∑_{j=1}^k α_j x_j ∈ X is called a linear combination of the vectors {x_j}_{j=1}^k.

2. A linear combination is called nontrivial if at least one α_j ≠ 0.

3. We say that {x_j}_{j=1}^k is a linearly dependent set if there exists a nontrivial linear combination which is equal to the zero vector. Otherwise {x_j}_{j=1}^k is called linearly independent.
We emphasize that

Remark 2.2. 1. {x_j}_{j=1}^k are linearly dependent if there exist α_j, not all zero, so that ∑_{j=1}^k α_j x_j = 0. On the other hand, if

    ∑_{j=1}^k α_j x_j = 0  ⟹  α_1 = α_2 = ··· = α_k = 0,

then the set is linearly independent.

2. If {x_j}_{j=1}^k is linearly dependent, then there is some index ℓ and scalars {α_j}_{j=1, j≠ℓ}^k so that

    x_ℓ = ∑_{j=1, j≠ℓ}^k α_j x_j.

3. If, in a set {x_j}_{j=1}^k, some x_j = 0, then the set is linearly dependent.

4. If a set {x_j}_{j=1}^k is linearly dependent, then {x_j}_{j=1}^k ∪ {y} is also dependent for any y. That is, any set containing a linearly dependent set is linearly dependent.

5. Any subset of a linearly independent set is linearly independent.
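For vectors in K^n, the test in Remark 2.2 can be carried out numerically: the x_j are independent exactly when the matrix with the x_j as rows has full rank. A minimal sketch using NumPy (the helper name and the sample vectors are our own, chosen for illustration):

```python
import numpy as np

def is_independent(vectors, tol=1e-10):
    """True when the given vectors form a linearly independent set.

    Stacks the vectors as the rows of a matrix; independence of k vectors
    is equivalent to that matrix having rank k.
    """
    A = np.array(vectors, dtype=float)
    return np.linalg.matrix_rank(A, tol=tol) == A.shape[0]

# The first triple is independent; in the second, the last vector is the
# sum of the first two, so the set is dependent.
print(is_independent([(1, 1, 0), (2, 1, 1), (0, 1, 1)]))
print(is_independent([(1, 1, 0), (2, 1, 1), (3, 2, 1)]))
```

The rank test is numerically more robust than solving for the α_j directly, since the tolerance absorbs rounding error.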
Definition 2.3. 1. A linear space X is called n-dimensional if X contains a set of n linearly independent vectors but every set of (n + 1) vectors is dependent. If no such n exists then X is called infinite dimensional.

2. A finite set {u_j}_{j=1}^n in a finite dimensional vector space X is called a basis for X if each vector in X has a unique representation as a linear combination of the {u_j}_{j=1}^n. That is, given x ∈ X, there is a unique set of numbers {α_j}_{j=1}^n so that x = ∑_{j=1}^n α_j u_j.
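In X = R^n, finding the unique coefficients α_j of x in a basis {u_j} amounts to solving the linear system Uα = x, where U has the u_j as columns. A small sketch (the example basis is an assumption of ours, not from the text):

```python
import numpy as np

def coordinates(x, basis):
    """Coefficients (alpha_1, ..., alpha_n) with x = sum_j alpha_j u_j.

    The matrix U with the basis vectors as columns is invertible exactly
    when the u_j form a basis, which is what makes the alpha_j unique.
    """
    U = np.column_stack([np.asarray(u, dtype=float) for u in basis])
    return np.linalg.solve(U, np.asarray(x, dtype=float))

# Coordinates of (3, 5) in the basis u1 = (1, 1), u2 = (1, -1):
alpha = coordinates((3, 5), [(1, 1), (1, -1)])
```

Reassembling alpha[0]·u1 + alpha[1]·u2 recovers (3, 5), illustrating the uniqueness claim.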
Theorem 2.1. Let X be a finite dimensional vector space.

1. If {u_j}_{j=1}^n is a basis, then the vectors {u_j}_{j=1}^n are linearly independent.

2. If dim(X) = n and {u_j}_{j=1}^n ⊂ X are linearly independent, then they form a basis.

Proof. Homework assignment.
Definition 2.4. Let X be a vector space and Y a subset of X, i.e., Y ⊂ X.

1. Y is called a subspace of X if it is closed under vector addition and scalar multiplication, i.e., for every y_1, y_2 ∈ Y and α, β ∈ K, we have α y_1 + β y_2 ∈ Y. Note that a subspace of a vector space is itself a vector space.

2. If M ⊂ X, then we define Span(M) as the set of all finite linear combinations of elements of M, i.e.,

    Span(M) = { ∑_j α_j m_j : α_j ∈ K, m_j ∈ M, the sums finite }.

Remark 2.3. If M is a subset of a vector space X, then Span(M) is a subspace of X. If {u_j} is a basis for X, then Span({u_j}) = X.
Example 2.1. In these examples the scalars are assumed to be the real or complex numbers, i.e., K = R or C.

1. R and C are vector spaces with the usual addition and multiplication.

2. R^n and C^n are also vector spaces. Here an element of K^n = R^n or C^n has the form x = (x_1, …, x_n). Addition and scalar multiplication are defined by

    x + y = (x_1, …, x_n) + (y_1, …, y_n) = (x_1 + y_1, …, x_n + y_n),
    α x = α (x_1, …, x_n) = (α x_1, …, α x_n).

3. The set of all functions from an interval I ⊂ R to R, denoted F(I), is a vector space with vector addition and scalar multiplication defined by

    (f_1 + f_2)(x) = f_1(x) + f_2(x),  (α f)(x) = α f(x),  for f, f_1, f_2 ∈ F(I), x ∈ I.
4. There are many important subspaces of F(I):

(a) P_n, the space of all polynomials of degree less than n, is a subspace of F(I) of dimension n. A basis is given by 1, x, x², …, x^{n−1}.

(b) P, the space of all polynomials, which is infinite dimensional since 1, x, x², … ∈ P is an infinite linearly independent set.

(c) The set of all solutions to an n-th order homogeneous ordinary differential equation. Recall that the general solution of

    y^{(n)} + a_{n−1} y^{(n−1)} + ··· + a_1 y^{(1)} + a_0 y = 0

is given in terms of n linearly independent solutions {y_j}_{j=1}^n as

    y = ∑_{j=1}^n c_j y_j.

(d) L²(a, b) = { f ∈ C[a, b] : ∫_a^b |f(x)|² dx < ∞ } is an infinite dimensional vector space. For example, if −∞ < a < b < ∞ then P ⊂ L²(a, b).
2.3 Metric Spaces

Definition 2.5. A metric space is a pair (X, d), where X is a set of objects called points and d is a metric (or distance function) d : X × X → R^+ = [0, ∞) such that for all x, y, z ∈ X:

1. d(x, y) ≥ 0, and d(x, y) = 0 ⟺ x = y

2. d(x, y) = d(y, x)

3. d(x, y) ≤ d(x, z) + d(z, y) (triangle inequality).
Example 2.2. 1. X = R^n with d(x, y) = |x − y| = √( (x_1 − y_1)² + ··· + (x_n − y_n)² ), where x = [x_1, …, x_n]^T and y = [y_1, …, y_n]^T.

2. A generalization of this is the metric space denoted by ℓ_n^p(R) = R^n with

    d_p(x, y) = |x − y|_p = ( ∑_{j=1}^n |x_j − y_j|^p )^{1/p}

for p ∈ R with p ≥ 1.

3. The complex version of this is ℓ_n^p(C) = C^n with

    d_p(x, y) = |x − y|_p = ( ∑_{j=1}^n |x_j − y_j|^p )^{1/p},

where in this case |·| denotes the modulus (complex absolute value).

4. X = R^n or C^n with d(x, y) = max_{1≤j≤n} |x_j − y_j|.

5. X = C[a, b], the continuous functions on an interval [a, b], with d_∞(f, g) = sup_{x∈[a,b]} |f(x) − g(x)|.

6. X = L²(a, b), consisting of functions in C[a, b], with d(f, g) = ( ∫_a^b |f(x) − g(x)|² dx )^{1/2}.

7. More generally, X = L^p(a, b) with d(f, g) = ( ∫_a^b |f(x) − g(x)|^p dx )^{1/p}.
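The metrics of items 1–4 can be computed with a single helper; a quick numerical sketch (the function name is ours) that also spot-checks the triangle inequality:

```python
import numpy as np

def d_p(x, y, p):
    """The l^p distance on K^n (items 2-3); p = inf gives the max metric of item 4."""
    diff = np.abs(np.asarray(x, dtype=float) - np.asarray(y, dtype=float))
    if np.isinf(p):
        return float(diff.max())
    return float((diff ** p).sum() ** (1.0 / p))

x, y, z = [1.0, 2.0], [4.0, 6.0], [0.0, 0.0]
euclid = d_p(x, y, 2)          # item 1: the usual Euclidean distance (= 5 here)
chebyshev = d_p(x, y, np.inf)  # item 4: max coordinate difference (= 4 here)

# Spot-check the triangle inequality d(x, y) <= d(x, z) + d(z, y):
for p in (1, 2, 3, np.inf):
    assert d_p(x, y, p) <= d_p(x, z, p) + d_p(z, y, p) + 1e-12
```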
The distance function on a metric space leads immediately to the notion of convergence.

Definition 2.6. 1. We say that a sequence {x_j}_{j=1}^∞ in a metric space (X, d) converges to x ∈ X if for every ε > 0 there is an N > 0 such that for all n > N, d(x_n, x) < ε.

2. A sequence {x_j}_{j=1}^∞ is called Cauchy if for every ε > 0 there exists an N so that n, m > N implies d(x_n, x_m) < ε.

3. A metric space (X, d) is called complete if every Cauchy sequence in X converges.

Theorem 2.2. If a sequence {x_j}_{j=1}^∞ in a metric space (X, d) converges, then it is Cauchy.

The converse is not true in general, i.e., there are many metric spaces that are not complete.
Example 2.3. 1. If [a, b] is a closed bounded interval, then C[a, b] with the metric d_∞(·, ·) is complete.

2. The space L^p(a, b) consisting of functions in C[a, b] is not complete. A homework assignment will lead you to understand the difficulty.

3. Once you have the concepts of Lebesgue integration, it is possible to realize the spaces L^p(a, b) as complete metric spaces, no longer consisting only of continuous functions. The set of functions must be enlarged. The process for doing this is called taking the completion of C[a, b]; it consists of adjoining all limits of Cauchy sequences with respect to the metric.
2.4 A Fixed Point Theorem and Contraction Mappings

Many problems in applied mathematics can be cast in terms of finding a fixed point of a mapping T on a metric space X, i.e., given a metric space X and a mapping T : X → X you want to find, or prove the existence of, an x ∈ X so that T(x) = x (the point x is fixed by T).

An important application of this idea is the method of successive approximations. The idea of this method is the following. Given T and x_0, we define a sequence x_j = T(x_{j−1}) and hope that there exists an x so that x_j → x as j → ∞. If this happens and, for example, T is continuous, then we have

    x = lim_{j→∞} x_j = lim_{j→∞} T(x_{j−1}) = T( lim_{j→∞} x_{j−1} ) = T(x).

That is, x is a fixed point.
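A concrete sketch of the method (our own example): T(x) = cos x maps [0, 1] into itself and |T′(x)| = |sin x| ≤ sin 1 < 1 there, so (anticipating Theorem 2.3 below) the iteration converges to the unique solution of cos x = x. The stopping rule on consecutive iterates is the usual practical surrogate for convergence:

```python
import math

def successive_approx(T, x0, tol=1e-12, max_iter=1000):
    """Iterate x_j = T(x_{j-1}) until consecutive iterates agree to within tol."""
    x = x0
    for _ in range(max_iter):
        x_next = T(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("iteration did not converge within max_iter steps")

x_star = successive_approx(math.cos, 1.0)   # fixed point of cos, about 0.739
```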
Definition 2.7. A mapping T on a metric space (X, d) is called Lipschitz continuous if there exists an α > 0 such that

    d(Tx, Ty) ≤ α d(x, y), for all x, y ∈ X.

If α < 1 then T is called a contraction.

Remark 2.4. 1. If T is Lipschitz then T is continuous.

2. The converse is not true. Consider, for example, X = R with d(x, y) = |x − y| and T(x) = √|x|. We have that T is clearly continuous (as the composition of two continuous functions). On the other hand, we claim that there does not exist an α so that d(Tx, Ty) ≤ α d(x, y) for all x, y. To see this, simply take y = 0 and 0 < x < 1; then d(T(x), T(y))/d(x, y) = √x / x = 1/√x, which is unbounded as x → 0^+, so no finite α can work.
Theorem 2.3 (Banach Fixed Point Theorem). Let T be a contraction on a complete metric space (X, d). Then T has a unique fixed point x ∈ X. Furthermore, given any x_0 ∈ X, the sequence x_n = T(x_{n−1}) converges to x, i.e., T(x) = x and d(x, x_n) → 0 as n → ∞.

Proof. (a) Uniqueness. Suppose that T(x) = x and T(y) = y. Then

    d(x, y) = d(T(x), T(y)) ≤ α d(x, y)   (2.4.1)

but α < 1 implies that d(x, y) < d(x, y) (or 1 < 1), which is a contradiction unless we have d(x, y) = 0 in (2.4.1). Therefore x = y.

(b) Existence. Take any x_0 ∈ X and let x_n = T(x_{n−1}). We show that {x_n} is a Cauchy sequence, and since (X, d) is complete there must then exist an x ∈ X so that x_n → x.

First we note that

    d(x_m, x_{m+1}) = d(T(x_{m−1}), T(x_m))   (2.4.2)
                    ≤ α d(x_{m−1}, x_m)
                    ⋮
                    ≤ α^m d(x_0, x_1).
Now for p > m we have

    d(x_m, x_p) ≤ d(x_m, x_{m+1}) + d(x_{m+1}, x_p)   (2.4.3)
                ⋮
                ≤ α^m d(x_0, x_1) + α^{m+1} d(x_0, x_1) + ··· + α^{p−1} d(x_0, x_1)
                = ( α^m + α^{m+1} + ··· + α^{p−1} ) d(x_0, x_1)
                = α^m ( 1 + α + ··· + α^{p−m−1} ) d(x_0, x_1)
                = α^m ( (1 − α^{p−m}) / (1 − α) ) d(x_0, x_1)
                ≤ α^m ( 1 / (1 − α) ) d(x_0, x_1) → 0 as m → ∞.
We have shown that the sequence {x_n} is Cauchy, and since X is complete there must exist an x ∈ X so that x_n → x. Now

    x = lim_{j→∞} x_j = lim_{j→∞} T(x_{j−1}) = T( lim_{j→∞} x_{j−1} ) = T(x).
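The geometric series in (2.4.3) also gives the practical a priori error bound d(x, x_m) ≤ α^m d(x_0, x_1)/(1 − α). A quick numerical check on an illustrative 1-D contraction (our own example, with α = 1/2 and fixed point 2):

```python
alpha = 0.5
T = lambda x: alpha * x + 1.0      # contraction on R with constant alpha
x_star = 2.0                       # its fixed point: x = x/2 + 1 gives x = 2

xs = [0.0]                         # start from x_0 = 0
for _ in range(20):
    xs.append(T(xs[-1]))

d01 = abs(xs[1] - xs[0])
for m in range(len(xs)):
    a_priori = alpha**m * d01 / (1.0 - alpha)   # bound from (2.4.3)
    assert abs(xs[m] - x_star) <= a_priori + 1e-15
```

For this affine map the bound is sharp: the error shrinks by exactly the factor α each step.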
Corollary 2.1. Let T : X → X, with X a complete metric space, and assume that for some k, T^k is a contraction with fixed point x. Then x is also the unique fixed point of T.

Proof. We assume that T^k x = x and that x is unique. Applying T to both sides, this implies that T^{k+1} x = Tx, or T^k(Tx) = (Tx). From the uniqueness of the fixed point from Theorem 2.3 we conclude that T(x) = x.

As for uniqueness, let us assume that also T(y) = y. We will show that this implies that T^k(y) = y, which (again by uniqueness) will imply that y = x.

We have T(y) = y, so T²(y) = T(y) = y, and we can continue applying T until we arrive at T^k(y) = y.
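A toy illustration of the corollary (the matrices are our own): the affine map T below stretches some directions by a factor 2, so it is not a contraction, but because A is nilpotent, T applied twice is a constant map, a contraction with α = 0, and its fixed point is the unique fixed point of T.

```python
import numpy as np

A = np.array([[0.0, 2.0], [0.0, 0.0]])   # nilpotent: A @ A is the zero matrix
b = np.array([1.0, 1.0])
T = lambda v: A @ v + b

# T stretches the direction e2 = (0, 1) by a factor of 2, so it is not a
# contraction in the Euclidean metric:
u, v = np.zeros(2), np.array([0.0, 1.0])
stretch = np.linalg.norm(T(u) - T(v)) / np.linalg.norm(u - v)

# T(T(v)) = A @ A @ v + A @ b + b = A @ b + b for every v: a constant map,
# hence a contraction; by Corollary 2.1 its fixed point is also T's.
x = A @ b + b
```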
Remark 2.5. The condition α < 1 in Theorem 2.3 is essential. For example, consider the following.

1. f(x) = x on X = R has every real number as a fixed point (no uniqueness), with α = 1.

2. f(x) = x + a with a ≠ 0 on X = R has no fixed point (no existence), with α = 1.
We now turn to our main application of these results: the proof of the Fundamental Existence and Uniqueness Theorem for a first order ordinary differential equation.

Theorem 2.4 (Fundamental Existence Uniqueness Theorem). Let G ⊂ R² be given by

    G = { (t, y) : |t − t_0| ≤ a, |y − y_0| ≤ c },

and assume that f(t, y) is a Lipschitz function in G, i.e., there is an M > 0 so that

    |f(t, y_1) − f(t, y_2)| ≤ M |y_1 − y_2| for all (t, y_1), (t, y_2) ∈ G.

Let

    p = max_{(t,y)∈G} |f(t, y)|,  δ = min{ a, c/p },

and let

    G_δ = { (t, y) : |t − t_0| ≤ δ, |y − y_0| ≤ c } ⊂ G.

Then the initial value problem

    dy/dt = f(t, y), y(t_0) = y_0   (2.4.4)

has a unique solution in the interval |t − t_0| < δ.
Remark 2.6. The proof of this result is based on the Banach fixed point theorem and the important fact that the problem (2.4.4) is equivalent to the integral equation

    y(t) = y_0 + ∫_{t_0}^t f(s, y(s)) ds.   (2.4.5)

In the proof we will use the mapping

    T(φ) = y_0 + ∫_{t_0}^t f(s, φ(s)) ds,   (2.4.6)

defined on the complete metric space

    C_δ = { φ ∈ C[t_0 − δ, t_0 + δ] : |φ(t) − y_0| ≤ c for |t − t_0| ≤ δ }.   (2.4.7)

Here we equip C_δ with the metric inherited from C[t_0 − δ, t_0 + δ], namely,

    d_∞(φ, ψ) = sup_{|t−t_0|≤δ} |φ(t) − ψ(t)|.

We use the fact that C_δ is a closed subset of the complete metric space C[t_0 − δ, t_0 + δ] and therefore is itself a complete metric space. We have to show that T(C_δ) ⊂ C_δ. To do this we need only show that for any φ ∈ C_δ we have |T(φ)(t) − y_0| ≤ c. We have

    |T(φ)(t) − y_0| = | ∫_{t_0}^t f(s, φ(s)) ds |   (2.4.8)
                    ≤ | ∫_{t_0}^t |f(s, φ(s))| ds |
                    ≤ p |t − t_0| ≤ p δ ≤ c.
We also claim that there exists an N so that T^N is a contraction on C_δ. Notice that

    |T(φ_1)(t) − T(φ_2)(t)| = | ∫_{t_0}^t [ f(s, φ_1(s)) − f(s, φ_2(s)) ] ds |   (2.4.9)
                            ≤ | ∫_{t_0}^t | f(s, φ_1(s)) − f(s, φ_2(s)) | ds |
                            ≤ M | ∫_{t_0}^t | φ_1(s) − φ_2(s) | ds |
                            ≤ M |t − t_0| d_∞(φ_1, φ_2) ≤ M δ d_∞(φ_1, φ_2).
Since T(φ_1), T(φ_2) are back in C_δ, we can apply T to them and again use the Lipschitz condition to get

    |T²(φ_1)(t) − T²(φ_2)(t)| = | ∫_{t_0}^t [ f(s, T(φ_1)(s)) − f(s, T(φ_2)(s)) ] ds |   (2.4.10)
                              ≤ M | ∫_{t_0}^t | T(φ_1)(s) − T(φ_2)(s) | ds |
                              ≤ M² d_∞(φ_1, φ_2) | ∫_{t_0}^t |s − t_0| ds | = M² ( |t − t_0|² / 2 ) d_∞(φ_1, φ_2)
                              ≤ M² ( δ² / 2 ) d_∞(φ_1, φ_2).
Continuing in this way we can eventually obtain

    d_∞( T^n(φ_1), T^n(φ_2) ) ≤ ( M^n δ^n / n! ) d_∞(φ_1, φ_2).   (2.4.11)

It is clear that we can take N sufficiently large that

    M^N δ^N / N! < 1.

Thus we can conclude that T^N is a contraction on C_δ.
Proof of Theorem 2.4. Let (t_0, y_0) ∈ G and choose δ, C_δ, T and N as above. Then starting with any φ_0 in C_δ we can obtain a sequence of successive approximations φ_n = T^N(φ_{n−1}). Since T^N is a contraction in C_δ, we can conclude from Theorem 2.3 that there is a unique fixed point y solving T^N(y) = y, and from Corollary 2.1, y is also the unique fixed point of T. Thus we have T(y) = y, and from (2.4.5) we conclude that (2.4.4) has a unique solution.
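The iteration behind the proof can be carried out numerically by discretizing the operator (2.4.6). A sketch for the illustrative problem y′ = y, y(0) = 1 (exact solution e^t), with the integral approximated by cumulative trapezoid sums on a grid; the grid, test problem, and tolerance are our own choices:

```python
import numpy as np

def picard(f, y0, t, n_iter):
    """Iterate phi -> y0 + int_{t0}^t f(s, phi(s)) ds on the grid t."""
    phi = np.full_like(t, y0, dtype=float)   # phi_0 = y0, the constant initial guess
    dt = np.diff(t)
    for _ in range(n_iter):
        g = f(t, phi)
        # cumulative trapezoid rule for int_{t[0]}^{t[k]} g(s) ds
        integral = np.concatenate(([0.0], np.cumsum(0.5 * (g[1:] + g[:-1]) * dt)))
        phi = y0 + integral
    return phi

t = np.linspace(0.0, 1.0, 201)
y = picard(lambda s, u: u, 1.0, t, n_iter=25)   # y' = y, y(0) = 1
max_err = np.max(np.abs(y - np.exp(t)))          # limited only by the quadrature
```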
2.5 Norms, Inner Products, Banach and Hilbert Spaces

In the last sections we studied the basic tools of vector spaces and metric spaces. Then we introduced the idea of a complete metric space and fixed point methods for contraction mappings. In this section we put two of these ideas together: namely, we consider a distance function on a vector space.

Normed Spaces
Definition 2.8. Let X be a vector space.

1. A norm ‖·‖ on X is a function from X to R^+ satisfying

(a) ‖x‖ ≥ 0, and ‖x‖ = 0 if and only if x = 0.
(b) ‖αx‖ = |α| ‖x‖ for all x ∈ X and scalars α.
(c) ‖x + y‖ ≤ ‖x‖ + ‖y‖.

2. A vector space with a norm is called a normed space.
Remark 2.7. 1. Every normed space is a metric space. Just define d(x, y) = ‖x − y‖.

2. The norm is a continuous function, i.e., if f(x) = ‖x‖ and x_n → x, then f(x_n) → f(x). To see this, use the backwards triangle inequality (see the exercises)

    | ‖x‖ − ‖y‖ | ≤ ‖x − y‖.

3. Let X = R and define

    d(x, y) = { 0 if x = y, 1 if x ≠ y }.

The function d is a metric, but there does not exist a norm that generates this metric. We can show this by contradiction. Suppose that ‖·‖ is a norm so that d(x, y) = ‖x − y‖. Then for all α ≠ 0 we must have ‖αx‖ = |α| ‖x‖. Let α = 2 and x ≠ 0; then

    1 = ‖2x‖ = 2 ‖x‖ = 2,

which is a contradiction.

4. If a metric satisfies the extra condition

    d(αx, αy) = |α| d(x, y), for all x, y ∈ X, α ∈ R (or C),

then ‖x‖ := d(x, 0) is a norm.

Definition 2.9. A normed space X for which the metric induced by the norm is complete is called a Banach space.
Inner Product Spaces

Definition 2.10. Let X be a real vector space.

1. A real inner product ⟨·, ·⟩ is a function from X × X to R satisfying:

(a) ⟨x, y⟩ = ⟨y, x⟩ for all x, y ∈ X.
(b) ⟨αx, y⟩ = α ⟨x, y⟩ for all x, y ∈ X and scalars α ∈ R.
(c) ⟨x + y, z⟩ = ⟨x, z⟩ + ⟨y, z⟩.
(d) ⟨x, x⟩ ≥ 0 for all x ∈ X, and ⟨x, x⟩ = 0 if and only if x = 0.

2. A vector space with a real inner product is called a real inner product space.

Definition 2.11. Let X be a complex vector space.

1. A complex inner product ⟨·, ·⟩ is a function from X × X to C satisfying (here z* denotes the complex conjugate of z):

(a) ⟨x, y⟩ = ⟨y, x⟩* for all x, y ∈ X.
(b) ⟨αx, y⟩ = α ⟨x, y⟩ for all x, y ∈ X and scalars α ∈ C.
(c) ⟨x + y, z⟩ = ⟨x, z⟩ + ⟨y, z⟩.
(d) ⟨x, x⟩ ≥ 0 for all x ∈ X, and ⟨x, x⟩ = 0 if and only if x = 0.

2. A vector space with a complex inner product is called a complex inner product space.

Note that for a complex vector space we have

    ⟨x, αy⟩ = ⟨αy, x⟩* = ( α ⟨y, x⟩ )* = α* ⟨y, x⟩* = α* ⟨x, y⟩,

and therefore

    ⟨x, αy⟩ = α* ⟨x, y⟩.   (2.5.1)
Example 2.4. 1. X = R^n is a real inner product space with ⟨x, y⟩ = ∑_{j=1}^n x_j y_j.

2. X = C^n is a complex inner product space with ⟨x, y⟩ = ∑_{j=1}^n x_j y_j*.

3. L²(a, b) is a real inner product space with ⟨f, g⟩ = ∫_a^b f(x) g(x) dx.

4. L²(a, b) is a complex inner product space with ⟨f, g⟩ = ∫_a^b f(x) g(x)* dx.
Theorem 2.5 (Schwarz Inequality). For any x, y in a complex (or real) inner product space X we have

    |⟨x, y⟩|² ≤ ⟨x, x⟩ ⟨y, y⟩.   (2.5.2)

Equality holds if and only if one of x, y is a scalar multiple of the other.

Proof. If y = 0 the inequality is trivial, so assume y ≠ 0. First we note that for all x and w

    0 ≤ ⟨x − w, x − w⟩   (2.5.3)
      = ⟨x, x⟩ − ⟨x, w⟩ − ⟨w, x⟩ + ⟨w, w⟩
      = ⟨x, x⟩ − 2 Re(⟨x, w⟩) + ⟨w, w⟩.

We also note that equality holds if and only if x = w (i.e., x − w = 0). Therefore

    ⟨x, x⟩ ≥ 2 Re(⟨x, w⟩) − ⟨w, w⟩.   (2.5.4)

We now set

    w = ( ⟨x, y⟩ / ⟨y, y⟩ ) y

in (2.5.4) to obtain

    ⟨x, x⟩ ≥ 2 Re( ⟨ x, (⟨x, y⟩/⟨y, y⟩) y ⟩ ) − ⟨ (⟨x, y⟩/⟨y, y⟩) y, (⟨x, y⟩/⟨y, y⟩) y ⟩
          = 2 Re( ( ⟨x, y⟩* / ⟨y, y⟩ ) ⟨x, y⟩ ) − ( |⟨x, y⟩|² / ⟨y, y⟩² ) ⟨y, y⟩
          = 2 |⟨x, y⟩|² / ⟨y, y⟩ − |⟨x, y⟩|² / ⟨y, y⟩
          = |⟨x, y⟩|² / ⟨y, y⟩.

Thus we have

    |⟨x, y⟩|² ≤ ⟨x, x⟩ ⟨y, y⟩.

Now recall that equality holds above if and only if x = w, that is, if and only if x is a multiple of y (since w, by definition, is a multiple of y).
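A numerical spot-check of (2.5.2) in C^n with the inner product of Example 2.4 (random trials; note that `np.vdot(y, x)` computes ∑ x_j y_j* under that convention, since `vdot` conjugates its first argument):

```python
import numpy as np

rng = np.random.default_rng(0)

def ip(x, y):
    """<x, y> = sum_j x_j * conj(y_j), the complex inner product of Example 2.4."""
    return np.vdot(y, x)

ok = True
for _ in range(100):
    x = rng.standard_normal(5) + 1j * rng.standard_normal(5)
    y = rng.standard_normal(5) + 1j * rng.standard_normal(5)
    ok = ok and abs(ip(x, y)) ** 2 <= ip(x, x).real * ip(y, y).real + 1e-9

# Equality case: taking y a scalar multiple of x closes the gap to rounding error.
x = np.array([1.0 + 2.0j, 3.0, -1.0j])
y = (2.0 - 1.0j) * x
gap = ip(x, x).real * ip(y, y).real - abs(ip(x, y)) ** 2
```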
Remark 2.8. Every inner product space is a normed space. To see this, we define a norm by

    ‖x‖ = ⟨x, x⟩^{1/2}.   (2.5.5)

To show that this gives a norm we must show that the triangle inequality holds:

    0 ≤ ‖x + y‖² = ⟨x + y, x + y⟩ = ⟨x, x⟩ + ⟨x, y⟩ + ⟨y, x⟩ + ⟨y, y⟩
      = ‖x‖² + 2 Re ⟨x, y⟩ + ‖y‖²
      ≤ ‖x‖² + 2 |⟨x, y⟩| + ‖y‖²
      ≤ ‖x‖² + 2 ‖x‖ ‖y‖ + ‖y‖² = ( ‖x‖ + ‖y‖ )²,

and we have

    ‖x + y‖ ≤ ‖x‖ + ‖y‖.
Lemma 2.1. The inner product in an inner product space is a continuous function of its arguments. Indeed, we have

1. If x_n → x and y_n → y, then ⟨x_n, y_n⟩ → ⟨x, y⟩ (here u_n → u means ‖u_n − u‖ → 0, where ‖u‖ = ⟨u, u⟩^{1/2}).

2. For every v, if x_n → x then ⟨x_n, v⟩ → ⟨x, v⟩.

3. If x_n → x then ‖x_n‖ → ‖x‖.
Lemma 2.2. The norm induced from the inner product satisfies the parallelogram law

    ‖x + y‖² + ‖x − y‖² = 2 ‖x‖² + 2 ‖y‖².
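Both Lemma 2.2 and the polarization identity of Exercise 11(d) below (which recovers the inner product from the norm alone) are easy to check numerically in C^n; a sketch with arbitrary sample vectors of our own:

```python
import numpy as np

def norm(x):
    return float(np.sqrt(np.vdot(x, x).real))

x = np.array([1.0 + 1.0j, 2.0, -1.0j])
y = np.array([0.5, -1.0 + 2.0j, 3.0])

# Parallelogram law (Lemma 2.2):
lhs = norm(x + y) ** 2 + norm(x - y) ** 2
rhs = 2 * norm(x) ** 2 + 2 * norm(y) ** 2

# Polarization identity with <x, y> = sum_j x_j * conj(y_j):
ip = np.vdot(y, x)
polar = 0.25 * (norm(x + y) ** 2 - norm(x - y) ** 2
                + 1j * norm(x + 1j * y) ** 2 - 1j * norm(x - 1j * y) ** 2)
```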
Hilbert Spaces

We have seen that every inner product space is a normed space. If in addition the norm is complete, then it is a Banach space. But in this case we use a different name, in honor of David Hilbert.

Definition 2.12. An inner product space for which the induced norm gives a complete metric space is called a Hilbert space.

Definition 2.13. 1. Two vectors x and y in X are orthogonal if ⟨x, y⟩ = 0.

2. A subset S ⊂ X is an orthogonal set if each pair of elements of S is orthogonal.

3. A set S is orthonormal if it is orthogonal and every element x ∈ S satisfies ‖x‖ = 1.

Remark 2.9. If ⟨x, y⟩ = 0, then the parallelogram law reduces to the Pythagorean Theorem

    ‖x − y‖² = ‖x‖² + ‖y‖² and ‖x + y‖² = ‖x‖² + ‖y‖².

By definition, a Hilbert space H is a vector space, so a subset M ⊂ H is a subspace if αx + βy ∈ M for all x, y ∈ M and scalars α and β. Thus if x_0 ∈ H, then M = Span{x_0} is a subspace.
Definition 2.14. The (orthogonal) projection of x ∈ H onto M = Span{x_0} is defined by

    Px = ( ⟨x, x_0⟩ / ‖x_0‖² ) x_0.

Note that the following properties of P hold.

1. P² = P. Namely, we have

    P²x = P(Px) = ( ⟨Px, x_0⟩ / ‖x_0‖² ) x_0 = ⟨x, x_0⟩ ( ⟨x_0, x_0⟩ / ( ‖x_0‖² ‖x_0‖² ) ) x_0 = ( ⟨x, x_0⟩ / ‖x_0‖² ) x_0 = Px.

2. ⟨Px, y⟩ = ⟨x, Py⟩. To see this (in the real case; the complex case is similar, using conjugate symmetry) we note that

    ⟨Px, y⟩ = ⟨ ( ⟨x, x_0⟩ / ‖x_0‖² ) x_0, y ⟩
            = ( ⟨x, x_0⟩ / ‖x_0‖² ) ⟨x_0, y⟩
            = ( ⟨x, x_0⟩ / ‖x_0‖² ) ⟨y, x_0⟩
            = ⟨ x, ( ⟨y, x_0⟩ / ‖x_0‖² ) x_0 ⟩ = ⟨x, Py⟩.

3. ⟨Px, (I − P)y⟩ = 0, since

    ⟨Px, (I − P)y⟩ = ⟨Px, y⟩ − ⟨Px, Py⟩
                   = ⟨Px, y⟩ − ⟨P²x, y⟩
                   = ⟨Px, y⟩ − ⟨Px, y⟩
                   = 0.

4. For every x ∈ H we have x = Px + (I − P)x and Px ⊥ (I − P)x.
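Properties 1–4 can be verified directly for the one-dimensional projection of Definition 2.14; a numerical sketch in R³ (the sample vectors are ours):

```python
import numpy as np

def P(x, x0):
    """Orthogonal projection of x onto Span{x0}: (<x, x0>/||x0||^2) x0."""
    return (np.vdot(x0, x) / np.vdot(x0, x0)) * x0

x0 = np.array([1.0, 2.0, 2.0])
x = np.array([3.0, 0.0, 4.0])

Px = P(x, x0)
residual = x - Px            # (I - P)x

# Property 1: P is idempotent.  Properties 3-4: Px is orthogonal to
# (I - P)x, and x = Px + (I - P)x reassembles x.
```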
Definition 2.15. 1. Given nonzero x and y in a Hilbert space H, we define the angle θ between x and y by the formula

    cos(θ) = ⟨x, y⟩ / ( ‖x‖ ‖y‖ ).

2. A subspace M ⊂ H is a closed subspace if x_n ∈ M and x_n → x ∈ H implies that x ∈ M.

3. If M ⊂ H, then the orthogonal complement of M, denoted M^⊥, is the subspace

    M^⊥ = { x ∈ H : ⟨x, m⟩ = 0 for all m ∈ M }.

4. M^⊥ is a closed subspace of H. Indeed, if x_n ∈ M^⊥ and x_n → x, then

    ⟨x, m⟩ = lim_{n→∞} ⟨x_n, m⟩ = 0, for all m ∈ M,

so ⟨x, m⟩ = 0 for all m ∈ M and x ∈ M^⊥.
Theorem 2.6 (Projection Theorem). Let M be a closed subspace of H. For every x ∈ H there exist a unique x_0 ∈ M and y_0 ∈ M^⊥ such that x = x_0 + y_0.

We call x_0 the orthogonal projection of x onto M and denote it by x_0 = P_M(x). We note that P_M is an orthogonal projection and (I − P_M) is also an orthogonal projection, the projection onto M^⊥.
Proof. 1. First we prove that there exists an x_0 ∈ M so that ‖x − x_0‖ is a minimum. Let

    d = inf_{m∈M} ‖x − m‖ ≥ 0.

Then there is a sequence x_n ∈ M such that

    d = lim_{n→∞} ‖x − x_n‖.

Apply the parallelogram law ‖a + b‖² + ‖a − b‖² = 2‖a‖² + 2‖b‖² with a = (x − x_n) and b = (x − x_k) to obtain

    ‖x_n − x_k‖² + 4 ‖ x − (x_n + x_k)/2 ‖²   (2.5.6)
      = ‖(x − x_n) − (x − x_k)‖² + ‖(x − x_n) + (x − x_k)‖²
      = 2 ‖x − x_n‖² + 2 ‖x − x_k‖².

Now since M is a subspace,

    (x_n + x_k)/2 ∈ M, for all n, k,

so that by the definition of d

    ‖ x − (x_n + x_k)/2 ‖² ≥ d².

Therefore from (2.5.6) we have

    ‖x_n − x_k‖² ≤ 2 ‖x − x_n‖² + 2 ‖x − x_k‖² − 4d² → 2d² + 2d² − 4d² = 0 as n, k → ∞.

We conclude that the sequence {x_n} is Cauchy, and since H is a Hilbert space (and hence complete), there exists an x_0 ∈ H so that x_n → x_0. Finally, since M is closed and x_n ∈ M, we must have x_0 ∈ M and

    ‖x − x_0‖ = lim_{n→∞} ‖x − x_n‖ = d.
2. Let w = (x − x_0). We show that w ⊥ M, i.e., ⟨w, m⟩ = 0 for all m ∈ M. This is clearly true for m = 0, so let us assume that m ≠ 0. Note that x_0 + λm ∈ M for all λ ∈ R (or C) and m ∈ M (since M is a vector space). So we have

    ‖w − λm‖² = ‖x − (x_0 + λm)‖² ≥ ‖x − x_0‖² = ‖w‖².

From this we see that, for real λ, the real-valued function f(λ) = ‖w − λm‖² has a minimum at λ = 0. Now let us write f another way:

    f(λ) = ⟨w − λm, w − λm⟩ = ‖w‖² + λ² ‖m‖² − λ ⟨w, m⟩ − λ ⟨m, w⟩ = ‖w‖² + λ² ‖m‖² − 2λ Re(⟨w, m⟩).

Take the derivative with respect to λ and set λ = 0. Since λ = 0 is a minimum, this result must be zero:

    f′(λ)|_{λ=0} = ( 2λ ‖m‖² − 2 Re(⟨w, m⟩) )|_{λ=0} = −2 Re(⟨w, m⟩) = 0, for all m ∈ M.

Now if this is a real Hilbert space, then ⟨w, m⟩ = Re(⟨w, m⟩) = 0 and we are done. Otherwise, if H is a complex Hilbert space, then im ∈ M for every m ∈ M, so we can also write

    Re(⟨w, im⟩) = 0, for all m ∈ M.

This implies

    0 = Re(⟨w, im⟩) = Re( −i ⟨w, m⟩ ) = Re( −i ( Re ⟨w, m⟩ + i Im ⟨w, m⟩ ) ) = Im ⟨w, m⟩,

so we have Im ⟨w, m⟩ = 0 for all m ∈ M, and finally we conclude that ⟨w, m⟩ = 0 for all m ∈ M.
3. At this point we have shown that for every x ∈ H there exists an x_0 ∈ M with w = x − x_0 ∈ M^⊥. It is clear that if we let y_0 = (x − x_0) ∈ M^⊥, then x = x_0 + y_0 gives the desired decomposition. Our final goal is to show that this decomposition is unique. To this end, let us suppose that also x = x̃_0 + ỹ_0 with x̃_0 ∈ M and ỹ_0 ∈ M^⊥. Then we can write

    0 = x − x = (x_0 − x̃_0) + (y_0 − ỹ_0),

where (x_0 − x̃_0) ∈ M and (y_0 − ỹ_0) ∈ M^⊥. If we take the inner product of 0 = (x_0 − x̃_0) + (y_0 − ỹ_0), first with (x_0 − x̃_0) and then with (y_0 − ỹ_0) (and use the fact that ⟨(x_0 − x̃_0), (y_0 − ỹ_0)⟩ = 0), we have

    ‖x_0 − x̃_0‖² = 0,  ‖y_0 − ỹ_0‖² = 0,

and we conclude that x̃_0 = x_0 and ỹ_0 = y_0.
Exercises for Chapter 2

1. Show that the subset P_n^0 = { p(x) ∈ P_n : p(0) = 0 } is a subspace of P. Find its dimension. Find a basis.

2. Show that the subset Q_n = { p(x) ∈ P_n : p(0) = 1 } is not a vector space.

3. Consider the ordinary differential equation y″ = 0 on 0 < x < 1. Find the dimension of the vector space of all solutions satisfying:

(a) y(0) = y(1)
(b) y(0) = y(1) = 0
(c) y′(0) = y′(1) = 0

4. Let C̃ be the set of all real-valued continuous functions on R such that the derivative does not exist at x = 0. Is C̃ a vector space?

5. Given the vectors x_1 = (1, 1, 0), x_2 = (2, 1, 1), x_3 = (0, 1, 1), x_4 = (0, 3, 1): are x_1, x_2, x_3 linearly independent or dependent? Same question for x_1, x_2, x_4. Give reasons for your answers, i.e., show why your answer is correct.

6. Show that the continuous functions on an interval [a, b], denoted C[a, b], form a metric space with the distance function d(f, g) = sup_{x∈[a,b]} |f(x) − g(x)|.

7. Let L²(−1, 1) denote the space C[−1, 1] with the metric

    d_2(f, g) = ( ∫_{−1}^1 |f(x) − g(x)|² dx )^{1/2}.

Show that this metric space is not complete. To do this, show that the sequence f_n(x) = 1/2 + (1/π) tan^{−1}(nx) is a Cauchy sequence but that it converges to a discontinuous function.
The following figure shows the convergence of f_n(x). [Figure: graphs of f_n on [−1, 1] for n = 1, 10, 100, 1000, approaching a step function with values 0 for x < 0 and 1 for x > 0.]
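A numerical sketch of what the exercise asks you to prove (not a substitute for the argument): the pairwise d_2 distances shrink as n grows, while the distance to the discontinuous step function tends to 0, so no continuous limit is possible. The integral is approximated on a grid of our choosing:

```python
import numpy as np

def d2(f, g, a=-1.0, b=1.0, n=20001):
    """Riemann-sum approximation of the d_2 metric of Exercise 7."""
    x = np.linspace(a, b, n)
    dx = x[1] - x[0]
    return float(np.sqrt(np.sum((f(x) - g(x)) ** 2) * dx))

def f(n):
    return lambda x: 0.5 + np.arctan(n * x) / np.pi

step = lambda x: np.where(x > 0, 1.0, np.where(x < 0, 0.0, 0.5))

gap_slow = d2(f(100), f(200))      # tail of the sequence: distances shrink ...
gap_fast = d2(f(1000), f(2000))    # ... consistent with the Cauchy property
to_step = d2(f(1000), step)        # and f_n approaches the step function
```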
You can use the following (Lebesgue Bounded Convergence Theorem) to do this problem. You still need to argue why the theorem applies and how.

Theorem: Let f_n be a sequence of integrable functions on a finite interval [a, b]. Assume that

(a) There is an integrable function f(x) so that lim_{n→∞} f_n(x) = f(x) for almost all x ∈ [a, b] ("almost all" includes all but a finite set of points).

(b) There is a constant M so that |f_n(x)| ≤ M for almost all x ∈ [a, b].

Then we can conclude

    lim_{n→∞} ∫_a^b f_n(x) dx = ∫_a^b f(x) dx.
If you have trouble with this example, do the following instead. Consider the metric space L¹(0, 1), consisting of C[0, 1] with the metric

    d_1(f, g) = ∫_0^1 |f(x) − g(x)| dx.

Let

    f_m(x) = { 0 for 0 ≤ x ≤ 1/2;  m(x − 1/2) for 1/2 ≤ x ≤ 1/2 + 1/m;  1 for 1/2 + 1/m ≤ x ≤ 1 },

and once again show that {f_m} is Cauchy but that the sequence converges to a discontinuous function. [Figure: graph of f_m, equal to 0 up to x = 1/2, rising linearly to 1 at x = 1/2 + 1/m, then equal to 1.]
8. A usual sufficient condition for convergence of an iteration x_n = g(x_{n−1}) is that g be continuously differentiable and |g′(x)| ≤ α < 1. By appealing to the Banach fixed point theorem, show that this is indeed a sufficient condition for convergence of the iteration sequence.

9. In a metric space X with metric d, the condition d(Tx, Ty) ≤ α d(x, y) with α < 1 cannot be replaced with d(Tx, Ty) < d(x, y) when x ≠ y. Use the example

    X = { x : 1 ≤ x < ∞ }, d(x, y) = |x − y|, Tx = x + 1/x,

and show that |Tx − Ty| < |x − y| for x ≠ y but T has no fixed point.
10. Consider the IVP:

    y′ = xy, y(0) = 1.

Use successive approximations to find y. In particular:

(a) Use φ_0 = 1,
(b) and let φ_{k+1} = 1 + ∫_0^x t φ_k(t) dt.
(c) Find φ_1, φ_2, φ_3, φ_4.
(d) Give an argument to obtain a formula for φ_k(x) (keep in mind e^x = ∑_{k=0}^∞ x^k / k!).
11. Let H be a Hilbert space with inner product ⟨x, y⟩ for x, y ∈ H and norm ‖x‖ = ⟨x, x⟩^{1/2}.

(a) Show that the norm satisfies the parallelogram law

    ‖x − y‖² + ‖x + y‖² = 2 ‖x‖² + 2 ‖y‖².

(b) Show that

    ⟨x, y⟩ + ⟨y, x⟩ = (1/2) ( ‖x + y‖² − ‖x − y‖² ),

so that, in a real space, ⟨x, y⟩ = (1/4) ( ‖x + y‖² − ‖x − y‖² ).

(c) Show that in a complex Hilbert space

    ⟨x, y⟩ − ⟨y, x⟩ = (i/2) ( ‖x + iy‖² − ‖x − iy‖² ).

(d) Consequently, in a complex Hilbert space

    ⟨x, y⟩ = (1/4) ( ‖x + y‖² − ‖x − y‖² + i ‖x + iy‖² − i ‖x − iy‖² ).
12. Show that the reverse triangle inequality holds in any normed space, i.e.,

    | ‖x‖ − ‖y‖ | ≤ ‖x − y‖.
13. For the numerical example of collocation using Maple, modify the Maple code to use the different points x_1 = 1/4 and x_2 = 3/4.

14. Use the Maple code to carry out the Taylor method for F(x, y) = 3x² y, y(0) = 1 on [0, 1].

15. Use the Maple euler_solve procedure to approximate the solution of y′ = t² − y on [0, 2] with y(0) = 1. The exact answer is y = −e^{−t} + t² − 2t + 2. Use h = .2, .1, .05.