
3. VECTOR SPACES

1. Vectors

Definition 1: A nonempty set V is said to be a vector space or linear space if it satisfies the following three axioms:
1. V is closed under the operation of addition (+), i.e., any two elements α, β ∈ V determine a unique element γ = α + β in V, such that
   a) α + β = β + α (commutativity);
   b) (α + β) + γ = α + (β + γ) (associativity);
   c) there exists the zero element 0 ∈ V, with the property that α + 0 = α for all α ∈ V;
   d) for every α ∈ V there exists an element -α ∈ V, called the negative of α, with the property α + (-α) = 0.
2. For any a ∈ K, called a scalar, and any α ∈ V, the product aα is an element of V, such that
   a) a(bα) = (ab)α;
   b) 1α = α.
3. The operations of addition and multiplication by scalars have the distributive properties:
   a) (a + b)α = aα + bα;
   b) a(α + β) = aα + aβ.
If K = ℝ, then we have a real vector space; on the other hand, if K = ℂ, then we have a complex vector space.

Example 1: The set of ordered n-tuples, x = (x_1, x_2, ..., x_n), denoted by Kⁿ. The sum and scalar multiple are defined by:
(x_1, x_2, ..., x_n) + (y_1, y_2, ..., y_n) = (x_1 + y_1, x_2 + y_2, ..., x_n + y_n),
λ(x_1, x_2, ..., x_n) = (λx_1, λx_2, ..., λx_n).
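As a concrete illustration (not part of the original notes), the two operations of Example 1 can be written out directly for K = ℝ; the short Python sketch below uses plain tuples and assumes nothing beyond the definitions above.

    # Componentwise operations on K^n (here K = R), as in Example 1.
    def add(x, y):
        # (x1,...,xn) + (y1,...,yn) = (x1+y1, ..., xn+yn)
        return tuple(xi + yi for xi, yi in zip(x, y))

    def scale(lam, x):
        # lam*(x1,...,xn) = (lam*x1, ..., lam*xn)
        return tuple(lam * xi for xi in x)

    x, y = (1.0, 2.0, 3.0), (0.5, -1.0, 4.0)
    print(add(x, y))        # (1.5, 1.0, 7.0)
    print(scale(2.0, x))    # (2.0, 4.0, 6.0)
    # Commutativity, associativity, etc. hold componentwise because they hold in R.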

Example 2: The space l² of square-summable sequences x = (x_1, x_2, ...), with Σ_{k=1}^∞ |x_k|² < ∞, under componentwise addition and scalar multiplication.


Example 3: The set of arrows in a plane, all emanating from one point, as used in elementary physics, with addition of two arrows defined by the completion of the parallelogram, and scalar multiplication by dilation of the length of the arrows.
Example 4: The set of all k-times continuously differentiable (real- or complex-valued) functions defined on an interval [a, b] ⊂ ℝ, where k = 0, 1, 2, ..., ∞. This space is denoted by C^k[a, b].

Example 5: C^k(Ω) denotes the space of functions f defined on a domain Ω ⊂ ℝⁿ whose partial derivatives D^α f of orders |α| = 0, 1, 2, ..., k are continuous. The following notations are also used: C(Ω) := C⁰(Ω), and C^∞(Ω) := ∩_{k=0}^∞ C^k(Ω).

Definition 2: Let V_0 ⊂ V be nonempty. If aα + bβ ∈ V_0 for arbitrary scalars a, b whenever α, β ∈ V_0, then we say that V_0 is a linear subspace (or simply subspace) of the vector space V. Sometimes the term linear manifold is also used for a linear subspace.
This means that V_0 is a vector space in its own right with respect to the addition and scalar multiplication operations that it inherits from V. It is trivially verifiable that the zero vector 0 is in V_0.
Example 6: The vector space ℝ^p, for p < n, is a subspace of ℝⁿ.
Example 7: In the space C^∞[a, b], the set P_n[a, b] of polynomials of degree at most n defined on [a, b] is a subspace. So is the set P[a, b] of all polynomials.
Example 8: Let α be an element of a vector space V. Then the set {aα : a ∈ ℝ} is a subspace of V. It is called the "one-dimensional" subspace of V.
Definition 3: A finite set of vectors α_1, α_2, ..., α_n in the space V is said to be linearly dependent if there exist scalars a_1, a_2, ..., a_n, not all zero, such that a_1α_1 + a_2α_2 + ... + a_nα_n = 0. If the set α_1, α_2, ..., α_n is not linearly dependent, then it is said to be linearly independent. An infinite set of vectors S is said to be linearly independent if every finite subset of S is linearly independent. Otherwise, S is linearly dependent.
For linearly independent vectors, any equation of the form a_1α_1 + a_2α_2 + ... + a_nα_n = 0 necessarily implies a_1 = a_2 = ... = a_n = 0.
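For vectors in ℝⁿ, Definition 3 can be tested numerically: the vectors are linearly independent exactly when the matrix having them as columns has rank equal to the number of vectors. The sketch below is an illustration only (it assumes NumPy and finite-dimensional real vectors; the tolerance is an arbitrary choice).

    import numpy as np

    def linearly_independent(vectors, tol=1e-10):
        # The vectors are independent iff the matrix with them as columns
        # has rank equal to the number of vectors.
        A = np.column_stack(vectors)
        return np.linalg.matrix_rank(A, tol=tol) == len(vectors)

    v1 = np.array([1.0, 0.0, 0.0])
    v2 = np.array([1.0, 1.0, 0.0])
    v3 = np.array([2.0, 1.0, 0.0])             # v3 = v1 + v2, so dependent
    print(linearly_independent([v1, v2]))       # True
    print(linearly_independent([v1, v2, v3]))   # False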
Let V be a vector space, and let S be any nonempty subset of V. Let M be the set of all finite linear combinations of the elements of S, i.e., the elements of the form a_1α_1 + a_2α_2 + ... + a_nα_n with α_1, ..., α_n ∈ S, for any n ∈ ℤ⁺ (n not fixed). Then M is a linear subspace of V. It is called the subspace generated by S. We also say that S spans M. It is an easy exercise to show the following:
1. M consists precisely of those vectors that are in every subspace that contains S, i.e., M is the intersection of all the subspaces that contain S.
2. M is the smallest subspace that contains S, in the sense that every subspace that contains S necessarily contains M also.
The following theorem illustrates the term linear dependence.
Theorem 1. Let α_1, α_2, ..., α_n be a set of vectors, and suppose α_1 ≠ 0. The set is linearly dependent iff one of the vectors α_k, k = 2, ..., n, is in the subspace spanned by α_1, α_2, ..., α_{k-1}.
PROOF: Assume that the set is linearly dependent. Then there is a smallest integer k, 2 ≤ k ≤ n, such that α_1, α_2, ..., α_k is linearly dependent. This means that there exist constants a_1, a_2, ..., a_k, not all zero, such that the equation a_1α_1 + a_2α_2 + ... + a_kα_k = 0 is satisfied. We must have a_k ≠ 0, for otherwise it would merely mean that the set α_1, α_2, ..., α_{k-1} is linearly dependent, contradicting the minimality of k. Hence,
α_k = -(a_1/a_k)α_1 - (a_2/a_k)α_2 - ... - (a_{k-1}/a_k)α_{k-1},
showing that α_k is a linear combination of, and therefore in the subspace spanned by, the vectors α_1, α_2, ..., α_{k-1}.
Now assume that α_k is in the subspace spanned by α_1, α_2, ..., α_{k-1}. Then α_k = c_1α_1 + c_2α_2 + ... + c_{k-1}α_{k-1} for some scalars c_i. Hence we obtain the equation c_1α_1 + c_2α_2 + ... + c_{k-1}α_{k-1} - α_k = 0, which is a nontrivial linear relation (the coefficient of α_k is -1), so the set α_1, α_2, ..., α_k, and therefore the set α_1, α_2, ..., α_n, is linearly dependent. ∎
Definition 4: If n linearly independent vectors can be found in the vector space V, while every set of n + 1 vectors is linearly dependent, then V is said to have dimension n. If n linearly independent vectors can be found for every n, then the vector space V is said to be infinite-dimensional.
Definition 5: A finite set B of vectors in V is called a basis of V if B is linearly independent and spans V.
Theorem 2. A vector space V has dimension n ⟺ it has a basis of n vectors.
PROOF: Assume V has dimension n. Then there are n linearly independent vectors α_1, α_2, ..., α_n, while the set α_1, α_2, ..., α_n, β, for any vector β, is necessarily linearly dependent. This implies that there are scalars c_1, c_2, ..., c_n, a, not all zero, such that c_1α_1 + c_2α_2 + ... + c_nα_n + aβ = 0. Necessarily, a ≠ 0 because, if it were zero, we would be left with c_1α_1 + c_2α_2 + ... + c_nα_n = 0 with the c_i's not all zero, violating the assumption made above that the set {α_i}_{i=1}^n is linearly independent. We can now express β as
β = -(c_1/a)α_1 - (c_2/a)α_2 - ... - (c_n/a)α_n.
Thus we see that the (linearly independent) set {α_i}_{i=1}^n spans V, and is therefore a basis.
Now let us suppose that V has a basis of n vectors α_1, α_2, ..., α_n. This set, being a basis, must be linearly independent. All we need to show is that any set consisting of n + 1 vectors is linearly dependent. Let these vectors be β_1, β_2, ..., β_{n+1}. Each β_i can be written as β_i = Σ_{j=1}^{n} c_{ij} α_j, i = 1, 2, ..., n + 1. Now consider the following equation:
0 = a_1β_1 + a_2β_2 + ... + a_{n+1}β_{n+1} = Σ_{i=1}^{n+1} Σ_{j=1}^{n} a_i c_{ij} α_j = Σ_{j=1}^{n} (Σ_{i=1}^{n+1} a_i c_{ij}) α_j.
Since the set {α_j}_{j=1}^n is linearly independent, the above equation can only be satisfied if each coefficient of α_j, j = 1, 2, ..., n, vanishes, i.e.,
a_1 c_{11} + a_2 c_{21} + ... + a_{n+1} c_{n+1,1} = 0
a_1 c_{12} + a_2 c_{22} + ... + a_{n+1} c_{n+1,2} = 0
...
a_1 c_{1n} + a_2 c_{2n} + ... + a_{n+1} c_{n+1,n} = 0.
This is a system of n equations in the n + 1 unknowns a_i, and by the theory of linear equations it has a solution in which the a_i are not all zero. Hence the set {β_i}_{i=1}^{n+1} is not linearly independent. ∎
Example 9: Consider the set C[a, b] of continuous real-valued functions on [a, b]. Previously we defined the sum of two functions f and g belonging to this set, and we also defined the scalar multiple λf of such functions. Thus, the set C[a, b] is a vector space of continuous functions defined on the interval [a, b] ⊂ ℝ. We shall prove that this space is infinite dimensional by showing that for every positive integer n there exists a set of n linearly independent vectors (functions) in this space.
Consider the functions f_0(x) = 1, f_1(x) = x, f_2(x) = x², ..., f_n(x) = xⁿ, ..., where x ∈ [a, b]. These functions are all continuous on [a, b], and are therefore in C[a, b]. This set is linearly independent for every positive integer n, for it is well known that the polynomial equation a_0 + a_1x + a_2x² + ... + a_nxⁿ = 0 can hold for every x ∈ [a, b] only if a_k = 0, k = 0, 1, 2, ..., n.
This conclusion also holds for complex-valued functions on [a, b], and note also that this conclusion is independent of the length of the interval [a, b].
Example 10: Consider the set of complex-valued functions f on the interval [0, π] whose derivatives f′ and f″ are continuous on the same domain. The space of such functions we denote by C²[0, π]. It is a linear space. Of all the functions in this space, consider the subset which satisfies the equation f″(x) + f(x) = 0. The solution set of this differential equation is a 2-dimensional subspace of C²[0, π]. One possible basis set is {e^{ix}, e^{-ix}}. Another is {cos x, sin x}.
Example 11: Still another subspace of the preceding example is the space of functions which satisfy the condition f(π) = f(0) = 0. This subspace is infinite dimensional, since it contains the set {sin nx : n = 1, 2, 3, ...}, which is linearly independent for every n.

Definition 6: Let V and W be vector spaces, and let there be a mapping A : V → W. A is called a linear operator if Dom(A) is a subspace of V, and if A(aα_1 + bα_2) = aAα_1 + bAα_2, where a, b are scalars and α_1, α_2 are any two vectors in Dom(A). If Dom(A) = V, then A is said to be a linear operator on (or from) V into W.
Note that for any linear operator A, A(0) = 0.
Definition 7: The set of vectors in V that is mapped by a linear operator A to 0 ∈ W is called the null space or kernel of A, denoted by N(A):
(1) N(A) = {α ∈ V : Aα = 0}.
It is an easy exercise to show that N(A) is a subspace of V. Also, Ran(A) is a subspace of W.
Theorem 3. Let A : V → W be a linear operator. Then the inverse A⁻¹ exists iff N(A) = {0}, i.e., iff the null space of A is the trivial null space. If A⁻¹ exists, it is also a linear operator.
PROOF: Assume that N(A) = {0}. Suppose Aα_1 = Aα_2. Because A is linear, Aα_1 - Aα_2 = A(α_1 - α_2) = 0. Hence (α_1 - α_2) ∈ N(A), and therefore α_1 - α_2 = 0, whence α_1 = α_2. A is therefore injective, and hence A⁻¹ exists.
Now assume that A⁻¹ exists. Suppose there is an element α ∈ V such that Aα = 0. Since A(0) = 0 also, and A is injective in this case, it follows that α = 0. Therefore N(A) = {0}. ∎
Definition 8: Let A : V → W be an injective linear operator. If the domain of A is the whole of V, and its range is the entire W, then A is called a linear isomorphism of V onto W. Two linear spaces V and W are said to be isomorphic if there exists a linear isomorphism between them.
From the point of view of vector space theory, two vector spaces which are isomorphic are indistinguishable. They are the same space, regardless of the intrinsic nature of their vectors. Linear isomorphism between vector spaces is an equivalence relation. Thus, when we speak of vector spaces, we actually mean, strictly speaking, the collection of all vector spaces modulo isomorphism; i.e., when we speak of a vector space we actually mean the equivalence class to which that particular vector space belongs.
Example 12: ℝⁿ is a vector space, an element of which is an ordered n-tuple, represented either by a 1 × n row matrix or an n × 1 column matrix. Every linear operator A : ℝⁿ → ℝⁿ is represented by an n × n matrix [A]; conversely, every n × n matrix [A] represents an operator A : ℝⁿ → ℝⁿ. Which particular matrix represents a given operator (and vice versa) depends on the choice of basis for the space ℝⁿ. If the operator A has an inverse, then that inverse A⁻¹ is, for the same basis, represented by the inverse matrix [A]⁻¹. Now, a square matrix is invertible if and only if its determinant does not vanish (the matrix is then said to be non-singular). Hence, we can say that a linear operator A : ℝⁿ → ℝⁿ has an inverse if and only if its matrix representation is non-singular. It is also a well-known theorem in linear algebra that the determinant of a matrix is invariant under a change of basis.
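The determinant criterion of Example 12 is easy to try out numerically. The following sketch is illustrative only; the matrices A and P are arbitrary choices, not taken from the notes.

    import numpy as np

    A = np.array([[2.0, 1.0, 0.0],
                  [0.0, 1.0, 3.0],
                  [1.0, 0.0, 1.0]])   # matrix of an operator A : R^3 -> R^3 in some basis

    det = np.linalg.det(A)
    print(det)                         # nonzero, so A is non-singular
    if abs(det) > 1e-12:
        A_inv = np.linalg.inv(A)       # matrix of A^{-1} in the same basis
        print(np.allclose(A @ A_inv, np.eye(3)))   # True

    # Invariance of the determinant under a change of basis:
    # if P is invertible, det(P^{-1} A P) = det(A).
    P = np.array([[1.0, 2.0, 0.0],
                  [0.0, 1.0, 1.0],
                  [1.0, 0.0, 1.0]])
    print(np.isclose(np.linalg.det(np.linalg.inv(P) @ A @ P), det))  # True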
The next example shows the operator approach to the theory of differential equations. This example discusses ordinary differential equations in particular, but the same ideas hold for the case of partial differential equations. In fact, the use of Hilbert spaces in the theory of partial differential equations is much more extensive [see, for example, John, F., Partial Differential Equations; or Gilbarg, D. & Trudinger, N., Elliptic Partial Differential Equations of Second Order].
Example 13: Let V = C[a, b], the space of real-valued continuous functions defined on the interval [a, b]. Let W ⊂ V consist of those continuous functions x(t), t ∈ [a, b], that have continuous first and second derivatives on [a, b] and, in addition, are such that x(a) = x′(a) = 0. Finally, let v(t), w(t) ∈ V. Now define the linear operator A : W → V by:
(2) A : x(t) ↦ y(t) = x″(t) + v(t)x′(t) + w(t)x(t).
The usual question in ODE theory as to whether Eq. (2) has a solution x(t) for a given y(t), given the initial conditions x(a) = x′(a) = 0, can be rephrased as: "Does the operator A : W → V have an inverse?", which, in effect, asks: "Is A surjective?" Now, the standard existence theorem in ODE theory says that A is, in fact, surjective. Moreover, the same theorem also says that A is injective. Therefore A⁻¹ exists.
From the preceding discussion, it should be obvious that the set of all linear operators A : V → W also makes up a vector space.

2. Norm and Inner Product

Definition 9: Let V be a vector space. A norm on V is a real-valued function, denoted by the symbol ‖·‖, which has the properties:
a) ‖α‖ ≥ 0; ‖α‖ = 0 ⟺ α = 0;
b) ‖aα‖ = |a| ‖α‖;
c) ‖α_1 + α_2‖ ≤ ‖α_1‖ + ‖α_2‖.
A normed linear space is the pair (V, ‖·‖) consisting of a vector space and the norm defined on it.
Once a norm is defined on a vector space, the definition of a metric is just a step away. Thus, given a normed linear space (V, ‖·‖), we define the natural metric said to be induced by the norm as:
(3) d(α, β) := ‖α - β‖.
It is a simple exercise to show that the above definition does satisfy the requirements of a metric, i.e., positivity, symmetry and the triangle inequality. It should be noted that the norm defined on a vector space, like the metric on a metric space, is not unique.

Example 14: The space ℝⁿ, where an element x ∈ ℝⁿ is given by x = (x_1, x_2, ..., x_n). The Euclidean norm is given by:
‖x‖ = (Σ_{k=1}^{n} x_k²)^{1/2}.
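For concreteness, here is a minimal sketch (illustrative, assuming NumPy) of the Euclidean norm of Example 14 and of the metric it induces via Eq. (3).

    import numpy as np

    x = np.array([3.0, 4.0])
    y = np.array([0.0, 1.0])

    norm_x = np.sqrt(np.sum(x**2))        # (sum_k x_k^2)^{1/2}
    print(norm_x, np.linalg.norm(x))      # 5.0  5.0 (same value)

    d_xy = np.linalg.norm(x - y)          # induced metric d(x, y) = ||x - y||
    print(d_xy)                           # sqrt(9 + 9) = 4.2426...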

Any operator A : V → W, where V and W are normed vector spaces, can then be categorized as continuous or not, using the same criteria developed in the preceding chapter on metric spaces. In particular, one can show that vector addition (α, β) ↦ α + β is a continuous mapping from V × V into V. Similarly, multiplication by a scalar, (a, α) ↦ aα, is a continuous mapping from K × V into V.
Let V and W be two normed vector spaces, and suppose that there exists a bijective linear operator T : V → W which is isometric, and which at the same time makes the two spaces isomorphic in the sense of Def. 8. Then the two spaces are said to be isometrically isomorphic, and T itself is called an isometric isomorphism between V and W.
On the other hand, if the linear operator T : V → W is bijective (so that T⁻¹ exists), is an isomorphism, and both T and T⁻¹ are continuous, then T is a topological isomorphism between the spaces V and W, which are then said to be topologically isomorphic.
Definition 10: A complex-valued function (·,·) : V × V → ℂ is called an inner product (sometimes scalar product) on the complex vector space V if it satisfies the following conditions for all ξ, η, ζ ∈ V and a ∈ ℂ:
a) (ξ, ξ) ≥ 0, and (ξ, ξ) = 0 ⟺ ξ = 0;
b) (ξ, η + ζ) = (ξ, η) + (ξ, ζ);
c) (ξ, aη) = a(ξ, η);
d) (ξ, η) = (η, ξ)*, where * denotes complex conjugation.
The pair (V, (·,·)) consisting of the vector space V and the inner product (·,·) defined in it is called an inner product space.
It is a useful exercise to show that the following follow directly from the definition:
(4) (ξ, aη + bζ) = a(ξ, η) + b(ξ, ζ);
(5) (aξ, η) = a*(ξ, η).

Example 15: Let ℂⁿ be the vector space of complex n-tuples. If x = (x_1, x_2, ..., x_n) and y = (y_1, y_2, ..., y_n), we define the inner product in ℂⁿ as:
(6) (x, y) := Σ_{k=1}^{n} x_k* y_k.
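The convention of Eq. (6), with the complex conjugate on the first argument, happens to match NumPy's vdot. The following check is illustrative only; the particular vectors are arbitrary.

    import numpy as np

    x = np.array([1.0 + 1.0j, 2.0], dtype=complex)
    y = np.array([3.0, 1.0 - 1.0j], dtype=complex)

    # (x, y) = sum_k conj(x_k) * y_k  -- antilinear in the first argument
    ip_manual = np.sum(np.conj(x) * y)
    ip_vdot = np.vdot(x, y)               # vdot conjugates its first argument
    print(ip_manual, ip_vdot)             # identical values

    # property (5): (a x, y) = conj(a) (x, y)
    a = 2.0 + 1.0j
    print(np.isclose(np.vdot(a * x, y), np.conj(a) * np.vdot(x, y)))  # True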

Example 16: Let the space C[a, b] denote the complex-valued functions continuous on the interval [a, b]. Then, for f, g ∈ C[a, b], define the inner product as:
(7) (f, g) := ∫_a^b f(x)* g(x) dx.
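Numerically, the integral inner product of Eq. (7) can be approximated on a grid. The sketch below uses a plain Riemann sum (an approximation, not the exact integral) and arbitrarily chosen functions on [0, π].

    import numpy as np

    a, b = 0.0, np.pi
    x = np.linspace(a, b, 2001)
    dx = x[1] - x[0]

    f = np.sin(x)
    g = np.sin(2 * x)
    h = np.sin(x)

    def inner(u, v):
        # (u, v) = integral_a^b conj(u(x)) v(x) dx, approximated by a Riemann sum
        return np.sum(np.conj(u) * v) * dx

    print(inner(f, g))   # ~0: sin x and sin 2x are orthogonal on [0, pi]
    print(inner(f, h))   # ~pi/2, the squared norm of sin on [0, pi]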

Definition 11: Let V be an inner product space. Two vectors ξ, η ∈ V are said to be orthogonal if (ξ, η) = 0. A set of vectors {φ_j : j = 1, 2, ...} in V is said to be orthonormal if:
(8) (φ_i, φ_j) = 1 for i = j, and 0 otherwise.
In an inner product space, it is natural to define the norm in terms of the inner product:
(9) ‖ξ‖ := √(ξ, ξ).
From the definition of the inner product, it is obvious that this satisfies parts a) and b) of the definition of the norm. That it also satisfies part c) will be shown later.
Theorem 4. Let {φ_k : k = 1, 2, ..., n} be an orthonormal set in an inner product space V. Then the norm of any ξ ∈ V can be written:
(10) ‖ξ‖² = Σ_{k=1}^{n} |(φ_k, ξ)|² + ‖ξ - Σ_{k=1}^{n} (φ_k, ξ) φ_k‖².
PROOF: It is easily verifiable that these two vectors are orthogonal:
Σ_{k=1}^{n} (φ_k, ξ) φ_k   and   ξ - Σ_{k=1}^{n} (φ_k, ξ) φ_k.
Hence, it follows that:
(ξ, ξ) = ‖Σ_{k=1}^{n} (φ_k, ξ) φ_k‖² + ‖ξ - Σ_{k=1}^{n} (φ_k, ξ) φ_k‖²
       = Σ_{k=1}^{n} |(φ_k, ξ)|² + ‖ξ - Σ_{k=1}^{n} (φ_k, ξ) φ_k‖². ∎

Corollary 1 (Bessel's inequality). Let {φ_k : k = 1, 2, ..., n} be an orthonormal set in an inner product space V. For all ξ ∈ V, we have:
(11) ‖ξ‖² ≥ Σ_{k=1}^{n} |(φ_k, ξ)|².
Corollary 2 (Schwarz's inequality). Let V be an inner product space, and let ξ, η ∈ V. Then:
(12) |(ξ, η)| ≤ ‖ξ‖ ‖η‖.
PROOF: The result follows trivially if ξ or η is zero. Thus, we assume that neither vanishes. The singleton {η/‖η‖} is an orthonormal set; hence for any ξ ∈ V we can apply Bessel's inequality:
‖ξ‖² ≥ |(η/‖η‖, ξ)|² = |(η, ξ)|² / ‖η‖².
Multiplying both sides by ‖η‖² and taking the square root, we obtain the corollary. ∎
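A quick numerical sanity check of Bessel's and Schwarz's inequalities in ℂⁿ, using the conjugate-in-the-first-slot convention of this section; the vectors are random and the whole snippet is only an illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    xi = rng.normal(size=4) + 1j * rng.normal(size=4)
    eta = rng.normal(size=4) + 1j * rng.normal(size=4)

    ip = np.vdot                                   # (x, y) = sum conj(x_k) y_k
    norm = lambda v: np.sqrt(ip(v, v).real)

    # Schwarz: |(xi, eta)| <= ||xi|| ||eta||
    print(abs(ip(xi, eta)) <= norm(xi) * norm(eta) + 1e-12)   # True

    # Bessel with the orthonormal set {e_1, e_2} (first two standard basis vectors)
    e1 = np.array([1, 0, 0, 0], dtype=complex)
    e2 = np.array([0, 1, 0, 0], dtype=complex)
    bessel_sum = abs(ip(e1, eta))**2 + abs(ip(e2, eta))**2
    print(bessel_sum <= norm(eta)**2 + 1e-12)                  # True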
Theorem 5. Every inner product space V is a normed linear space with the norm ‖ξ‖ = (ξ, ξ)^{1/2}.
PROOF: Since this definition of the norm, given an inner product, is easily shown to satisfy requirements a) and b) of the definition of a norm, we need only verify that it also satisfies requirement c) (the triangle inequality). Let ξ and η be any two elements of V. Then
‖ξ + η‖² = (ξ, ξ) + (ξ, η) + (η, ξ) + (η, η)
         = (ξ, ξ) + 2 Re(ξ, η) + (η, η)
         ≤ (ξ, ξ) + 2|(ξ, η)| + (η, η)                      since Re z ≤ |z| for all z ∈ ℂ
         ≤ (ξ, ξ) + 2(ξ, ξ)^{1/2}(η, η)^{1/2} + (η, η)      applying Schwarz's inequality
         = (‖ξ‖ + ‖η‖)².
Taking the square roots of both sides, we finally have the triangle inequality: ‖ξ + η‖ ≤ ‖ξ‖ + ‖η‖. ∎
We have remarked above that a normed linear space almost automatically becomes a metric space. Now we can expand that remark to include inner product spaces. Thus, if V is an inner product space, Eqs. (3) and (9) guarantee the existence of a metric:
(13) d(ξ, η) = ‖ξ - η‖ = √(ξ - η, ξ - η)   for all ξ, η ∈ V.
With this metric, we can now define such concepts as open balls, convergence and continuity. Given a vector ξ_0 ∈ V, we define the open ball centered at ξ_0 with radius r > 0 as B(ξ_0, r) := {ξ ∈ V : ‖ξ - ξ_0‖ < r}. Given a sequence of vectors ξ_n, n = 1, 2, ..., in V, we say that the sequence is Cauchy if, given any ε > 0, there is an integer N > 0 such that ‖ξ_m - ξ_n‖ < ε whenever m, n ≥ N. On the other hand, the same sequence (ξ_n) converges to ξ ∈ V, or ξ_n → ξ, if, for any given ε > 0, there exists an integer N > 0 such that ‖ξ_n - ξ‖ < ε whenever n ≥ N. If every Cauchy sequence in V is convergent, then we say that V is a complete inner product space.
The infinite sum of vectors
(14) ξ = Σ_{k=1}^{∞} ξ_k,   ξ_k ∈ V,
exists only if the sequence of partial sums (σ_n) defined by
σ_n = Σ_{k=1}^{n} ξ_k
converges to the limit ξ.
Similarly, any mapping f : V → W, where V and W are inner product spaces, is continuous if the preimage f⁻¹[O] of any open ball O in W is open in V.

Definition 12: A complete normed vector space is called a Banach space. A complete inner product space is called a Hilbert space. Inner product spaces are sometimes called pre-Hilbert spaces.
Note that a Hilbert space is necessarily a Banach space also. The converse of course is not true, since one can have a norm on a vector space without having an inner product in it.

3. Separability

A Hilbert space with a countable dense subset is called a separable Hilbert space H. In this course we shall deal only with separable Hilbert spaces. What we would like to establish in this section is the result that if H is separable, then it has a countable orthonormal basis. Why do we want such a basis? For the following reasons:
I. If H has a countable basis B = {φ_1, φ_2, ..., φ_k, ...}, then any vector α ∈ H can be written rather simply as a linear combination:
α = Σ_{k=1}^{∞} a_k φ_k.
II. If, in addition, the basis B is orthonormal, then each coefficient a_k can be easily calculated. In fact, a_j = (φ_j, α), and thus the preceding sum is expressed as:
α = Σ_{k=1}^{∞} (φ_k, α) φ_k.
Bear in mind that when we write these infinite sums what we actually mean is that the left-hand side is the limit of the sequence of partial sums on the right, i.e.,
lim_{n→∞} ‖α - σ_n‖ = 0,   where σ_n := Σ_{k=1}^{n} (φ_k, α) φ_k.
However, to establish the fact that a separable Hilbert space has a countable basis, we need to collect several statements together. The first step is to establish that one can always come up with an orthonormal set of vectors, if one is merely given a linearly independent set. This is a very important step, for if we know that a vector space V has a countable basis, then we can replace that basis with an orthonormal one.
Let S = {α_1, α_2, ..., α_n} be a maximal linearly independent set in an n-dimensional inner product space V_n. Then one can always produce from S an orthonormal set S_0 using the Gram-Schmidt orthonormalization process as follows:
1. Define a new vector ψ_1 := α_1, and set e_1 := ψ_1/‖ψ_1‖. Then we have ‖e_1‖ = 1.
2. Define the vector ψ_2 := α_2 - (e_1, α_2)e_1, and set e_2 := ψ_2/‖ψ_2‖. Then we have (e_2, e_1) = 0 and ‖e_2‖ = 1.
3. Proceeding by induction, at the (m + 1)-th step we define:
ψ_{m+1} := α_{m+1} - Σ_{k=1}^{m} (e_k, α_{m+1}) e_k,   and set   e_{m+1} := ψ_{m+1}/‖ψ_{m+1}‖.
Then we see that e_{m+1} is orthogonal to e_k, k = 1, 2, ..., m, and ‖e_{m+1}‖ = 1.
4. Repeat the process until one obtains the last vector e_n in the orthonormal set {e_k}.
Obviously, in the case where V is infinite-dimensional, S would be countably infinite. In that case, we simply proceed indefinitely with the Gram-Schmidt process. Then orthogonality means that at any finite step n, the result e_n is orthogonal to the previously obtained n - 1 vectors e_k.
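The steps above translate directly into code. The sketch below is a minimal illustration (assuming NumPy and the convention that the inner product is conjugate-linear in its first slot, so the coefficient along e_k is (e_k, α) = np.vdot(e_k, α)).

    import numpy as np

    def gram_schmidt(vectors):
        # Orthonormalize a linearly independent list of vectors in C^n.
        basis = []
        for alpha in vectors:
            psi = alpha.astype(complex)
            for e in basis:
                psi -= np.vdot(e, alpha) * e     # subtract the projection on e
            basis.append(psi / np.linalg.norm(psi))
        return basis

    vecs = [np.array([1.0, 1.0, 0.0]),
            np.array([1.0, 0.0, 1.0]),
            np.array([0.0, 1.0, 1.0])]
    es = gram_schmidt(vecs)
    G = np.array([[np.vdot(ei, ej) for ej in es] for ei in es])
    print(np.allclose(G, np.eye(3)))   # True: the e_k are orthonormal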
Lemma 1. The space l² is complete.
PROOF: Let (x_j) be a Cauchy sequence in l², where x_j = (x_{1j}, x_{2j}, ..., x_{ij}, ...). Therefore, for any given ε > 0 there exists N such that
(15) ‖x_j - x_k‖² = Σ_{i=1}^{∞} |x_{ij} - x_{ik}|² < ε²   whenever j, k > N.
In particular, |x_{ij} - x_{ik}| < ε holds for each i whenever j, k > N. In other words, for each fixed i, the sequence of real (complex) numbers (x_{ij}) is Cauchy. Since ℝ (ℂ) is complete, there exists a number x_i^0 which is the limit of x_{ij} as j → ∞. By Eq. (15), the following holds for each n:
(16) Σ_{i=1}^{n} |x_{ij} - x_{ik}|² < ε²   for j, k > N.
Suppose now we fix j in the sum in Eq. (16) while letting k → ∞. This gives us:
Σ_{i=1}^{n} |x_{ij} - x_i^0|² ≤ ε²   for j > N.
Since this must be true for each n, it is true that:
(17) Σ_{i=1}^{∞} |x_{ij} - x_i^0|² ≤ ε²   whenever j > N.
Now define h_j := (x_{1j} - x_1^0, x_{2j} - x_2^0, ..., x_{ij} - x_i^0, ...). By Eq. (17), h_j ∈ l² and ‖h_j‖ ≤ ε whenever j > N. Hence, x^0 = x_j - h_j = (x_1^0, x_2^0, ..., x_i^0, ...) is in l², and
‖x_j - x^0‖ = ‖h_j‖ ≤ ε   for j > N.
This means that ‖x_j - x^0‖ → 0 as j → ∞. Hence l² is complete, and is a Banach space. ∎
Definition 13: Let H be a Hilbert space, and suppose S is a maximal orthonormal set of vectors in H, in the sense that no other orthonormal set contains S as a proper subset. Then S is called an orthonormal basis or a complete orthonormal system in H.
The proof of the next theorem may be found in many functional analysis books.
Theorem 6. Every Hilbert space H has an orthonormal basis.
The next theorem states that, as in finite-dimensional vector spaces, every vector in a Hilbert space can be expressed as a linear combination, possibly infinite, of basis vectors.
Theorem 7. Let H be a Hilbert space, and let S = {x_α : α ∈ A} be an orthonormal basis (the set A is an index set). Then for each y ∈ H, the following hold:
(18) y = Σ_{α∈A} (x_α, y) x_α;
(19) ‖y‖² = Σ_{α∈A} |(x_α, y)|².
Conversely, if
(20) Σ_{α∈A} |c_α|² < ∞,   where c_α ∈ ℂ,
then Σ_{α∈A} c_α x_α converges to an element of H.

The equality in (18) means the same thing as the equality in an infinite sum: the right-hand side converges (independently of order) to y. Eq. (19) is also known as Parseval's relation.
PROOF: We know from Bessel's inequality that for any finite subset A_0 ⊂ A, Σ_{α∈A_0} |(x_α, y)|² ≤ ‖y‖². Hence (x_α, y) ≠ 0 for at most countably many α's in A, which we can then arrange in some order α_1, α_2, .... Furthermore, as N → ∞, the sequence of partial sums Σ_{j=1}^{N} |(x_{α_j}, y)|² is monotonically increasing but bounded. Hence the sequence must converge to a finite limit (less than or equal to its bound).
Now let y_n := Σ_{j=1}^{n} (x_{α_j}, y) x_{α_j}. For n > m,
‖y_n - y_m‖² = ‖Σ_{j=m+1}^{n} (x_{α_j}, y) x_{α_j}‖² = Σ_{j=m+1}^{n} |(x_{α_j}, y)|².
Therefore the sequence (y_n) is Cauchy, and therefore converges to an element, call it y′ ∈ H. Note that
(y - y′, x_{α_l}) = lim_{n→∞} (y - Σ_{j=1}^{n} (x_{α_j}, y) x_{α_j}, x_{α_l}) = (y, x_{α_l}) - (y, x_{α_l}) = 0.
If α ≠ α_l for every l, we have
(y - y′, x_α) = lim_{n→∞} (y - Σ_{j=1}^{n} (x_{α_j}, y) x_{α_j}, x_α) = 0.
We see then that y - y′ is orthogonal to all the x_α in S, and since S is a complete orthonormal system, it follows that y - y′ = 0. Thus
y = lim_{n→∞} Σ_{j=1}^{n} (x_{α_j}, y) x_{α_j},
establishing (18). Moreover,
0 = lim_{n→∞} ‖y - Σ_{j=1}^{n} (x_{α_j}, y) x_{α_j}‖²
  = lim_{n→∞} (‖y‖² - Σ_{j=1}^{n} |(x_{α_j}, y)|²)
  = ‖y‖² - Σ_{α∈A} |(x_α, y)|²,
establishing (19).
The converse can be proved much more easily. ∎
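In a finite-dimensional example, (18) and (19) can be verified directly. The sketch below (illustrative only; the basis and the vector y are arbitrary choices) expands a vector of ℂ³ in an orthonormal basis and checks Parseval's relation.

    import numpy as np

    # An orthonormal basis of C^3 obtained from the columns of a unitary matrix.
    Q, _ = np.linalg.qr(np.array([[1, 2, 0],
                                  [0, 1, 1],
                                  [1, 0, 1]], dtype=complex))
    basis = [Q[:, k] for k in range(3)]

    y = np.array([1.0 + 2.0j, -0.5, 3.0j])

    coeffs = [np.vdot(x, y) for x in basis]            # (x_alpha, y)
    reconstruction = sum(c * x for c, x in zip(coeffs, basis))
    print(np.allclose(reconstruction, y))              # Eq. (18): True

    parseval = sum(abs(c)**2 for c in coeffs)
    print(np.isclose(parseval, np.vdot(y, y).real))    # Eq. (19): True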
We now proceed to the main theorem of this section.
Theorem 8. A Hilbert space H is separable iff it has a countable complete orthonormal system S. If there are N < ∞ elements in S, then H is isomorphic to ℂ^N. If there are countably many elements in S, then H is isomorphic to l².
Remarks: By an isomorphism between two Hilbert spaces, we mean a one-to-one correspondence between the two spaces that also preserves the inner product. It follows that this also makes the two spaces isometric.
PROOF: Let H be separable, and let {x_n} be a countable dense subset. We can discard the superfluous elements (if any) of {x_n}, leaving us only with a subcollection of linearly independent vectors whose span, i.e., the set of finite linear combinations, is the same as that of the original collection {x_n}; hence this subcollection is still dense. Using the Gram-Schmidt method, we then obtain a complete orthonormal set.
Conversely, assume that there exists a countable complete orthonormal set {y_n} in H. By Theorem 7, the set of finite linear combinations of the y_n's with rational coefficients is dense in H. Since this set is countable, H is separable. This proves the first part of the theorem.
For the second part of the theorem, suppose H is separable, and let {y_n : n = 1, 2, ...} be a complete orthonormal set. Define a mapping Φ : H → l² by:
Φ : x ↦ (ξ_1, ξ_2, ..., ξ_n, ...),   where ξ_n := (y_n, x) ∈ ℂ.
By Theorem 7, this map is well defined and surjective. It is inner-product-preserving (therefore isometric, therefore injective). That H is isomorphic to ℂ^N if card S = N can be proven using the same steps. ∎
Let M be a closed subspace of H. The set of all vectors which are orthogonal to every vector in M is called the orthogonal complement of M, denoted by M^⊥. It can be shown that M^⊥ is a closed subspace of H. The reason why we call M^⊥ the orthogonal complement of M is that any vector ψ ∈ H can always be (uniquely) decomposed into a part which is in M and a part in M^⊥. For, let {e_1, e_2, ..., e_k, ...} be an orthonormal basis for M. By Bessel's inequality, we have, for any vector ψ ∈ H,
Σ_k |(e_k, ψ)|² ≤ ‖ψ‖²,
and since ‖ψ‖ is finite, so is each inner product in the sum, which is a complex number. Using these numbers in turn, we form a vector ψ_M as follows:
ψ_M := Σ_k (e_k, ψ) e_k.
By the completeness of M, ψ_M is in M. Now define ψ_{M^⊥} := ψ - ψ_M. Then ψ_{M^⊥} is orthogonal to all the e_k's, whence ψ_{M^⊥} ∈ M^⊥, and we have ψ = ψ_M + ψ_{M^⊥}. This decomposition is unique; since (e_k, ψ_{M^⊥}) = 0, it follows that (e_k, ψ_M) = (e_k, ψ) for each basis vector e_k of M. ψ_M is called the projection of ψ on M, and ψ_{M^⊥} the projection of ψ on M^⊥.
Two subspaces M_1 and M_2 of H are said to be orthogonal if every vector in one subspace is orthogonal to every vector in the other. Suppose we have a sequence of subspaces M_1, M_2, ..., M_k, ... of H, such that the subspaces are pairwise orthogonal, and every vector ψ ∈ H can be written as a sum of the projections of ψ on all the M_k's. Then H is said to be the direct sum of the subspaces: H = M_1 ⊕ M_2 ⊕ ... ⊕ M_k ⊕ ... = ⊕_k M_k. In particular, if M is a closed subspace of H, then H = M ⊕ M^⊥.
4. Linear Functionals

A linear mapping F : V → ℂ is called a linear functional. By its linearity, we have F(aψ + bφ) = aF(ψ) + bF(φ), where ψ, φ ∈ V and a, b ∈ ℂ. The set of linear functionals on V themselves also form a linear space, provided we define the sum of two linear functionals and the multiplication of a linear functional by a scalar as follows:
(21) (F_1 + F_2)(φ) := F_1(φ) + F_2(φ);
(22) (aF)(φ) := aF(φ),
for every φ ∈ V.
A linear functional F : V → ℂ is continuous if the sequence (F(φ_n)) converges to F(φ), i.e., F(φ_n) → F(φ), whenever the sequence (φ_n) converges to φ.
Let V be an inner product space. Then to every vector ψ ∈ V we can associate a linear functional, denoted by F_ψ and defined by:
(23) F_ψ(φ) := (ψ, φ).
Moreover, sums of vectors and scalar multiples of vectors define the corresponding linear functionals:
(24) F_{ψ+χ}(φ) = F_ψ(φ) + F_χ(φ)   for all ψ, χ, φ ∈ V;
(25) F_{aψ}(φ) = a* F_ψ(φ)   for all ψ, φ ∈ V, a ∈ ℂ.
On the other hand, suppose {e_1, e_2, ..., e_n} is an orthonormal basis for an n-dimensional vector space V. To each linear functional F on V, we associate a vector, denoted by ψ_F, defined as:
(26) ψ_F := Σ_{k=1}^{n} F(e_k)* e_k.
This means that, for any vector φ = Σ_{k=1}^{n} (e_k, φ) e_k, we have:
(27) F(φ) = Σ_{k=1}^{n} (e_k, φ) F(e_k) = (ψ_F, φ).
Note that this association of linear functionals and vectors obeys Eqs. (24) and (25). Thus, we have a one-to-one correspondence between a finite-dimensional vector space V and the set of all linear functionals on V.
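Eqs. (26) and (27) can be checked in a small example. The following sketch is illustrative only: the functional F is an arbitrary choice, and ψ_F is built from the values F(e_k) exactly as in Eq. (26).

    import numpy as np

    # Standard orthonormal basis of C^3.
    basis = [np.eye(3, dtype=complex)[:, k] for k in range(3)]

    # An arbitrary linear functional F on C^3 (illustrative choice).
    c = np.array([2.0 - 1.0j, 0.5j, 3.0])
    F = lambda phi: np.sum(c * phi)

    # Eq. (26): psi_F = sum_k conj(F(e_k)) e_k
    psi_F = sum(np.conj(F(e)) * e for e in basis)

    phi = np.array([1.0, 2.0j, -1.0])
    print(np.isclose(F(phi), np.vdot(psi_F, phi)))   # Eq. (27): True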
Theorem 9. Let V be an n-dimensional vector space, and let V^f be the vector space of all linear functionals on V. Then V^f is also n-dimensional.
PROOF: Let {e_1, e_2, ..., e_n} be an orthogonal basis for V. We want to show that in V^f there exist n linearly independent functionals, while any set of n + 1 functionals is necessarily dependent. Define n linear functionals f_k as follows:
f_k(e_i) = δ_{ki},   i, k = 1, 2, ..., n.
It is trivial to verify that each f_k is indeed a linear functional. Now consider the equation c_1 f_1 + c_2 f_2 + ... + c_n f_n = 0, where the left-hand side of the equation is an arbitrary linear combination of the n linear functionals f_k. For i = 1, 2, ..., n, we have:
0 = (Σ_{k=1}^{n} c_k f_k)(e_i) = Σ_{k=1}^{n} c_k δ_{ki} = c_i.
Hence we see that the set {f_1, f_2, ..., f_n} is linearly independent, since c_i = 0 for each i. Next, consider the set of n + 1 linear functionals {f_1, f_2, ..., f_n, f}, where f is not equal to any of the f_k's. Repeating the same operation above, we have:
0 = (Σ_{k=1}^{n} c_k f_k + c f)(e_i) = Σ_{k=1}^{n} c_k δ_{ki} + c h_i = c_i + c h_i,   where h_i = f(e_i).
Now, if h_i is zero for all i, then f is necessarily the zero linear functional, and therefore the set {f_1, f_2, ..., f_n, f} is linearly dependent since it contains the zero vector. On the other hand, if h_i ≠ 0 for at least one i, then again the set of the n + 1 functionals is linearly dependent, for taking c = 1 we obtain a nontrivial vanishing combination with c_i = -h_i ≠ 0 for at least one i. We have thus shown that any set of n + 1 linear functionals is linearly dependent. ∎
Now we extend this to the case when V is infinite-dimensional.
Theorem 10. Let H be a separable Hilbert space, and F a continuous linear functional defined on it. Then there exists a unique vector ψ_F ∈ H such that F(φ) = (ψ_F, φ) for every vector φ ∈ H.
PROOF: Let {e_1, e_2, ..., e_k, ...} be an orthonormal basis in H. Suppose we are given a linear functional F : H → ℂ. Since the action of a linear functional F on any vector φ is completely determined by its action on all of the basis vectors, we can find the ψ_F that the theorem refers to by requiring that (ψ_F, e_k) = F(e_k) for each e_k. This gives us the (unique) result:
ψ_F := Σ_{k=1}^{∞} F(e_k)* e_k,
assuming this infinite sum converges. By Theorem 7, this vector sum converges if Σ_{k=1}^{∞} |F(e_k)|² < ∞. Let ψ_n := Σ_{k=1}^{n} F(e_k)* e_k. Then
F(ψ_n) = Σ_{k=1}^{n} F(e_k)* F(e_k) = Σ_{k=1}^{n} |F(e_k)|² = ‖ψ_n‖².
We need to show that this remains bounded as n → ∞. Let (φ_n) be a sequence of vectors defined by
φ_n := ψ_n / ‖ψ_n‖².
Then ‖φ_n‖ = 1/‖ψ_n‖, and F(φ_n) = 1 for all n. Now, F is continuous by hypothesis. So if φ_n → 0 as n → ∞, then F(φ_n) → F(0), and since F is linear, F(0) = 0. But this is impossible since, as stated above, F(φ_n) = 1 regardless of n. Hence it is not true that φ_n → 0, and therefore ‖ψ_n‖ remains bounded as n → ∞, as required.
Conversely, every vector ψ ∈ H defines a linear functional
(ψ, ·) : H → ℂ   by   (ψ, ·) : φ ↦ (ψ, φ) for all φ ∈ H.
By the continuity of the inner product (prove this!), this linear functional is continuous. ∎
The set of all continuous linear functionals on a Hilbert space H is also a Hilbert space, denoted by H* or H′, and is called the topological dual space or topological conjugate space of H.
There is a theorem which says that, for 1 ≤ p < ∞, the topological conjugate space (L^p)′ of the Banach space L^p is isometrically isomorphic to L^q, where 1/p + 1/q = 1; moreover, when 1 < p < ∞, we can identify the dual space of the dual space, ((L^p)′)′ = (L^p)″, with the original space L^p. We then say that L^p (and thus (L^p)′) is (norm) reflexive.
On the other hand, we could have stopped short at the algebraic structure only. In other words, one could consider only the set of all linear functionals on the space H, without regard to their continuity. This set is also a linear space, and is called the algebraic dual or algebraic conjugate space of H, denoted by H^f. It turns out that there is a natural isomorphism between H and (H^f)^f = H^ff, which allows us to identify H^ff with H, if and only if H is finite dimensional to begin with. In other words, the space H (and thus H^f) is algebraically reflexive if and only if H is finite dimensional.
We now introduce the so-called Dirac notation. In the Hilbert space H of the quantum mechanical system under discussion, we denote the state vector, say ψ, of the system as a ket vector |ψ⟩. The dual space, H′, is the space of all continuous linear functionals on H. As we have seen above, every linear functional (in H′) is uniquely associated with a vector in H. In H′, a linear functional is then denoted by a bra vector, ⟨φ|, with the implied statement that this is the (unique) functional associated with the vector φ ∈ H. Then when the linear functional ⟨φ| acts on the vector |ψ⟩, we write this as ⟨φ|ψ⟩. From the above discussion it is clear that this is also the inner product that we used to write as (φ, ψ).

5. Distributions: a Detour

Definition 14: Let D be the set of complex-valued functions on ℝⁿ which have the following properties:
I. A function φ ∈ D is C^∞;
II. φ ∈ D is non-zero only on a bounded set Ω ⊂ ℝⁿ.
The subset Ω need not be the same for all functions φ of D being considered. If Ω happens to be the smallest closed set outside of which φ is zero, then Ω is called the support of φ, written as Ω = supp φ. A set which is closed and bounded in ℝⁿ is said to be compact. Thus, we can restate the definition:
Definition 15: The space D is the space of C^∞, ℂ-valued functions with compact support in ℝⁿ.
We used the term space above. This is because D is a linear space if we take the addition of two functions φ_1 + φ_2 to be the usual pointwise addition. If it so happens that supp φ_1 is not the same as supp φ_2, then supp(φ_1 + φ_2) ⊆ supp φ_1 ∪ supp φ_2. As a matter of fact, D is more than just a linear space of functions. It is an algebra of functions, since multiplication of functions here is well-defined, and for φ_1, φ_2 ∈ D, φ_1φ_2 ∈ D; i.e., D is closed under multiplication of its elements (we define multiplication here as pointwise multiplication). In this case, supp(φ_1φ_2) ⊆ supp φ_1 ∩ supp φ_2.
The linear space D is also called the space of test functions.
Example 17: In ℝ, the following is a test function:
(28) φ(x) = 0 if |x| ≥ 1;   φ(x) = exp(-1/(1 - x²)) if |x| < 1.
1. φ is bounded;
2. φ is C^∞ for |x| > 1, for |x| < 1, and at |x| = 1;
3. supp φ = [-1, 1].
For a finite Taylor series about x = ±1, all terms vanish except the remainder term. On the other hand, an infinite Taylor series about ±1 converges to 0, since all its terms are zero, and hence does not represent φ for |x| < 1.
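The test function (28) is easy to tabulate. The sketch below (illustrative, assuming NumPy) evaluates it on a grid and at x = 0, where its value is e⁻¹.

    import numpy as np

    def bump(x):
        # Test function of Eq. (28): exp(-1/(1 - x^2)) for |x| < 1, else 0.
        x = np.asarray(x, dtype=float)
        inside = np.abs(x) < 1
        # np.where evaluates both branches, so guard the denominator first.
        safe = np.where(inside, 1.0 - x**2, 1.0)
        return np.where(inside, np.exp(-1.0 / safe), 0.0)

    xs = np.linspace(-1.5, 1.5, 7)
    print(bump(xs))       # zero for |x| >= 1, positive inside (-1, 1)
    print(bump(0.0))      # e^{-1} = 0.36787944...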
Example 18: In ℝⁿ, where x = (x_1, x_2, ..., x_n), we have a similar function:
(29) φ(x) = 0 if r ≥ a;   φ(x) = exp(-a²/(a² - r²)) if r < a,   where r = (Σ_{k=1}^{n} x_k²)^{1/2}.
There is a theorem which we shall cite without proof:

Theorem 11. Let f be a continuous function with compact support Ω ⊂ ℝⁿ; f can be approximated to within a distance ε in the sup metric by some function φ ∈ D. Furthermore, supp φ is contained within an arbitrary neighborhood of Ω.
This means that for any such continuous f there exists φ ∈ D such that
sup_{x∈ℝⁿ} |f(x) - φ(x)| ≤ ε.
Definition 16: A distribution is a ℂ-valued continuous linear functional on the vector space D.
To define precisely what we mean by continuity of a distribution, we first define what we mean by convergence in the space D: a sequence (φ_n) is said to converge to the limit φ if
a) supp φ_n is contained in the same bounded set, independent of the index n;
b) for every multi-index α with |α| = 0, 1, 2, ..., the sequence (D^α φ_n) → D^α φ uniformly as n → ∞.
Note that we require only the uniform convergence of each order of derivative, taken separately, not the simultaneous uniform convergence of all orders of differentiation.
Thus, to say that T is a distribution, we mean the following:
1. T : D → ℂ;
2. T(aφ_1 + bφ_2) = aT(φ_1) + bT(φ_2);
3. if a sequence (φ_n) converges to φ in D as defined above, then T(φ_n) converges to T(φ) in ℂ as n → ∞.
Note also that the distributions form a linear space, in the sense that:
1. (T_1 + T_2)(φ) = T_1(φ) + T_2(φ);
2. (aT)(φ) = aT(φ), where a ∈ ℂ.
The action of a distribution T on a test function φ, T(φ), is also denoted by ⟨T, φ⟩.
Example 19: Let f be a locally integrable function on ℝⁿ. Then we define the distribution associated with f by:
(30) T_f(φ) := ∫_{ℝⁿ} f(x)φ(x) dx.
This integral exists since the actual domain of integration is not ℝⁿ, but Ω = supp φ, which is bounded, and f by hypothesis is locally integrable. Furthermore, the value of the integral is a linear functional of φ. To show the continuity of this functional, assume that a sequence (φ_j) converges to φ in D. Let Ω ⊂ ℝⁿ be a compact set containing each Ω_j = supp φ_j. We then have:
|T_f(φ_j) - T_f(φ)| = |∫_Ω f(x)φ_j(x) dx - ∫_Ω f(x)φ(x) dx|
                    = |∫_Ω f(x)(φ_j(x) - φ(x)) dx|
                    ≤ ∫_Ω |f(x)(φ_j(x) - φ(x))| dx
                    = ∫_Ω |f(x)| |φ_j(x) - φ(x)| dx
                    ≤ (∫_Ω |f(x)| dx) max_{x∈Ω} |φ_j(x) - φ(x)|.
By assumption, as j → ∞, max_{x∈Ω} |φ_j(x) - φ(x)| → 0, whence |T_f(φ_j) - T_f(φ)| → 0, and therefore T_f(φ_j) → T_f(φ). Hence, the distribution T_f is continuous.
There is another theorem we shall cite without proof:
Theorem 12. Two functions f and g define the same distribution, T_f = T_g, if they are equal almost everywhere.
By the phrase f and g being equal almost everywhere, we mean that f and g differ only on subsets of ℝⁿ of zero volume.
The most widely used distribution in physics is perhaps the Dirac delta distribution. The Dirac delta δ(r) is used to represent, say, a point charge of magnitude +1 at the location r ∈ ℝⁿ. The basic definition of the Dirac distribution is:
(31) ⟨δ_(0), φ⟩ = ∫_{ℝⁿ} δ_(0) φ(x) dx = φ(0)
for each φ ∈ D. This is usually written in physics as
∫_{ℝⁿ} δ(x)φ(x) dx = φ(0).
Eq. (31) has the obvious extension:
(32) ⟨δ_(a), φ⟩ = ∫_{ℝⁿ} δ_(a) φ(x) dx = φ(a),
where δ_(a) could stand for a unit point charge at a ∈ ℝⁿ. This last equation is usually written in physics as:
∫_{ℝⁿ} δ(x - a)φ(x) dx = φ(a).

To consider the Dirac δ distribution as a function is of course nonsense. In fact it can be shown that there is no locally or globally integrable function which coincides with the δ distribution. Let us assume that there is a locally integrable function f(x) such that for every φ ∈ D we have
∫_{ℝⁿ} f(x)φ(x) dx = φ(0).
Now suppose that for φ we use the test function φ(x; a) defined by Eq. (29). Then we obtain:
(33) ∫_{ℝⁿ} f(x)φ(x; a) dx = φ(0; a) = e⁻¹.
But if we let a → 0, the integral on the left goes to zero, while the right-hand side remains constant at e⁻¹.
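The contradiction in Eq. (33) can also be seen numerically: for a fixed locally integrable f, the integral of f against the shrinking test function φ(x; a) tends to 0, while φ(0; a) stays at e⁻¹. The sketch below is a one-dimensional illustration with the harmless choice f ≡ 1; none of it is part of the original argument.

    import numpy as np

    def phi(x, a):
        # Test function of Eq. (29) in one dimension.
        inside = np.abs(x) < a
        safe = np.where(inside, a**2 - x**2, 1.0)
        return np.where(inside, np.exp(-a**2 / safe), 0.0)

    f = lambda x: np.ones_like(x)        # an illustrative locally integrable f

    for a in [1.0, 0.1, 0.01]:
        x = np.linspace(-a, a, 20001)
        integral = np.sum(f(x) * phi(x, a)) * (x[1] - x[0])
        print(a, integral, phi(np.array(0.0), a))
    # The integral shrinks with a, while phi(0; a) = e^{-1} for every a,
    # so no integrable f can reproduce phi(0; a) for all a.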
A distribution T is said to be zero in an open set ω ⊂ ℝⁿ if ⟨T, φ⟩ = 0 for any φ ∈ D which has its support in ω.
We now attempt to define the derivative of a distribution. We start with the distribution T_f where f is at least C¹. Using Eq. (30) as our definition, we have:
(34) ⟨∂f/∂x_1, φ⟩ = ∫_{ℝⁿ} (∂f/∂x_1) φ dx.
Since the domain of integration is bounded, and we are assuming that all the functions in the integrand are continuous, we have:
(35) ∫∫...∫ dx_2 dx_3 ... dx_n ∫_{-∞}^{∞} (∂f/∂x_1) φ dx_1.
When we evaluate the integral over x_1 by integrating by parts, we obtain:
(36) [f φ]_{-∞}^{∞} - ∫_{-∞}^{∞} f (∂φ/∂x_1) dx_1.
The boundary terms disappear since supp φ is bounded. Eq. (35) now becomes:
-∫∫...∫ dx_2 ... dx_n ∫_{-∞}^{∞} f (∂φ/∂x_1) dx_1 = -∫_{ℝⁿ} f (∂φ/∂x_1) dx = -⟨f, ∂φ/∂x_1⟩,
giving us the final result:
(37) ⟨∂f/∂x_1, φ⟩ = -⟨f, ∂φ/∂x_1⟩.
This leads us to define the derivative of a distribution T as the distribution ∂T/∂x_1 whose action on any φ ∈ D is given by
(38) ⟨∂T/∂x_1, φ⟩ = -⟨T, ∂φ/∂x_1⟩.
One can check that this is a valid definition of a distribution ∂T/∂x_1. First, it is a linear functional on D. Second, it is continuous: suppose that (φ_j) converges to the limit φ in the sense of convergence in D defined above. Then (∂φ_j/∂x_1) converges to ∂φ/∂x_1 in D. Since T is a distribution, ⟨T, ∂φ_j/∂x_1⟩ converges to ⟨T, ∂φ/∂x_1⟩. This proves that ⟨∂T/∂x_1, φ_j⟩ converges to ⟨∂T/∂x_1, φ⟩. Thus ∂T/∂x_1 is a distribution.
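Relation (37) can be checked by quadrature for a smooth f of one variable: the integral of f′φ should equal minus the integral of fφ′. The sketch below is only an illustration, with f(x) = sin x and the bump function of Eq. (28) as φ.

    import numpy as np

    x = np.linspace(-1.0, 1.0, 40001)
    dx = x[1] - x[0]

    # Test function phi with supp phi = [-1, 1] (Eq. (28)) and its derivative.
    inside = np.abs(x) < 1
    safe = np.where(inside, 1.0 - x**2, 1.0)
    phi = np.where(inside, np.exp(-1.0 / safe), 0.0)
    dphi = np.gradient(phi, dx)

    f = np.sin(x)
    df = np.cos(x)

    lhs = np.sum(df * phi) * dx     # <df/dx, phi>
    rhs = -np.sum(f * dphi) * dx    # -<f, dphi/dx>
    print(lhs, rhs)                 # the two numbers agree to quadrature accuracy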
The following may be easily proven:
(39) ⟨∂²T/∂x_i∂x_j, φ⟩ = -⟨∂T/∂x_j, ∂φ/∂x_i⟩
(40)                   = +⟨T, ∂²φ/∂x_j∂x_i⟩.
Because φ has continuous second derivatives,
∂²φ/∂x_i∂x_j = ∂²φ/∂x_j∂x_i,
with the consequence that
(41) ∂²T/∂x_i∂x_j = ∂²T/∂x_j∂x_i.
In general, a distribution has derivatives of all orders, and if α = (α_1, α_2, ..., α_n) is a multi-index with |α| = α_1 + α_2 + ... + α_n, we have
(42) ⟨D^α T, φ⟩ = (-1)^{|α|} ⟨T, D^α φ⟩.
Moreover, the order of the partial derivatives may be freely changed.

Вам также может понравиться