
Calculus on Manifolds

Solution of Exercise Problems


Yan Zeng
Version 1.0, last revised on 2000-01-10.

Abstract
This is a solution manual for selected exercise problems from Calculus on Manifolds: A Modern
Approach to Classical Theorems of Advanced Calculus, by Michael Spivak. If you would like to correct any
typos/errors, please send email to zypublic@hotmail.com.

Contents
1 Functions on Euclidean Space
1.1 Norm and Inner Product
1.2 Subsets of Euclidean Space
1.3 Functions and Continuity

2 Differentiation
2.1 Basic Definitions
2.2 Basic Theorems
2.3 Partial Derivatives
2.4 Derivatives
2.5 Inverse Functions
2.6 Implicit Functions

3 Integration
3.1 Basic Definitions
3.2 Measure Zero and Content Zero
3.3 Integrable Functions
3.4 Fubini's Theorem
3.5 Partitions of Unity
3.6 Change of Variable

4 Integration on Chains
4.1 Algebraic Preliminaries
4.2 Fields and Forms
4.3 Geometric Preliminaries
4.4 The Fundamental Theorem of Calculus

5 Integration on Manifolds
5.1 Manifolds
5.2 Fields and Forms on Manifolds
5.3 Stokes' Theorem on Manifolds
5.4 The Volume Element
5.5 The Classical Theorems

1 Functions on Euclidean Space
1.1 Norm and Inner Product
I 1-2.
Proof. Re-examining the proof of Theorem 1-1(2), we see equality holds if and only if $\sum_i x^iy^i = |\sum_i x^iy^i| = |x||y|$. The second equality requires that x and y be linearly dependent. The first equality requires ⟨x, y⟩ ≥ 0.
Combined, we conclude x and y are linearly dependent and point in the same direction.
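As a concrete illustration of the equality condition (an example added here, not part of the original solution), take y = 2x:

```latex
\[
x=(1,2),\quad y=2x=(2,4):\qquad
\langle x,y\rangle = 1\cdot 2+2\cdot 4 = 10 = \sqrt{5}\cdot\sqrt{20} = |x|\,|y|,
\]
\[
y=-2x=(-2,-4):\qquad \langle x,y\rangle = -10 = -|x|\,|y|,
\]
```

so dependent but oppositely directed vectors satisfy the second equality but not the first.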
I 1-7.

Proof. (a) Theorem 1-2 (4) and (5) establish a one-to-one correspondence between norm and inner product.
(b) If T x = 0, |x| = |T x| = 0, which implies x = 0. So T is injective. This further implies T is surjective
(Lax [1] page 15). So T is an isomorphism and T −1 is well-defined. It’s clear that T −1 is also norm preserving
and hence must enjoy the same properties.

I 1-8.

Proof. Refer to [1]


I 1-9.

Proof. Use the matrix representation of T to prove that T is norm preserving. This implies T is inner-product preserving and hence
angle preserving. Also use the matrix representation to check ⟨x, T x⟩ = cos θ |x||T x|.

I 1-12.
Proof. Lax [1] page 66, Corollary 4′ .

1.2 Subsets of Euclidean Space


I 1-17.

Proof. Step 1: We divide the square [0, 1] × [0, 1] into four equal squares by connecting (1/2, 0) with (1/2, 1), and (0, 1/2)
with (1, 1/2). We place one point in each of the squares and make sure no two points are on the same horizontal
or vertical line.
···
Step n: We divide each of the squares obtained in Step (n-1) into four equal squares. We place one point
in each of the newly obtained squares and make sure no two points of all the points placed so far are on the
same horizontal or vertical line.
···
We continue this procedure infinitely and denote by A the collection of all the points placed according
to the above procedure. Then ∂A = [0, 1] × [0, 1] and A contains at most one point on each horizontal and
each vertical line.

I 1-18.
Proof. Clearly A ⊂ [0, 1]. For any x ∈ [0, 1] − A and any interval (a, b) with x ∈ (a, b), (a, b) must contain a
rational point of [0, 1]. So (a, b) ∩ A ̸= ∅ and (a, b) ∩ Ac ̸= ∅. This implies [0, 1] − A ⊂ ∂A. Since A is open,
a boundary point of A cannot be in A. This implies ∂A ⊂ [0, 1] − A. Combined, we get [0, 1] − A = ∂A.
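One concrete choice of such an open set A (an illustrative construction, not given in the original): enumerate the rationals of (0, 1) as q₁, q₂, ··· and take

```latex
\[
A=\bigcup_{i=1}^{\infty}\Big(q_i-\frac{1}{2^{\,i+2}},\; q_i+\frac{1}{2^{\,i+2}}\Big)\cap(0,1),
\qquad \sum_{i=1}^{\infty}(b_i-a_i)\;\le\;\sum_{i=1}^{\infty}\frac{1}{2^{\,i+1}}=\frac{1}{2}<1.
\]
```

Then A is open, contains every rational of (0, 1), and the argument above applies.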

1.3 Functions and Continuity

2 Differentiation
2.1 Basic Definitions
I 2-4.
Proof. (a) If x = 0, then h(t) ≡ 0; if x ≠ 0, then h(t) = t|x| g(x/|x|) by the properties g(0, 1) = g(1, 0) = 0 and
g(−x) = −g(x). Since h is a linear function of t, it is differentiable.
(b) f(x¹, 0) = f(0, x²) ≡ 0 by the properties g(0, 1) = g(1, 0) = 0 and g(−x) = −g(x). If f were differentiable,
Df(0, 0) would have to be (0, 0). This would imply |x| · g(x/|x|) = f(x) = o(|x|) for x ≠ 0, so g would have to be identically
0.
I 2-5.
Proof. Let g(x₁, x₂) = x₁|x₂| for (x₁, x₂) on the unit circle. Then for (x, y) ≠ 0,
\[ g\Big(\frac{(x,y)}{|(x,y)|}\Big) = \frac{x}{\sqrt{x^2+y^2}}\cdot\frac{|y|}{\sqrt{x^2+y^2}} = \frac{x|y|}{x^2+y^2}. \]
Therefore
\[ f(x,y) = |(x,y)|\; g\Big(\frac{(x,y)}{|(x,y)|}\Big). \]

I 2-6.
Proof. f(x, 0) = f(0, y) ≡ 0, so ∂f(0,0)/∂x = ∂f(0,0)/∂y = 0. Assume f is differentiable at (0, 0); then
\[ f(\Delta x,\Delta y) - f(0,0) = \frac{\partial f(0,0)}{\partial x}\Delta x + \frac{\partial f(0,0)}{\partial y}\Delta y + o(\rho) = o(\rho), \]
where $\rho = \sqrt{(\Delta x)^2+(\Delta y)^2}$. This implies $\sqrt{|\Delta x\,\Delta y|} = o\big(\sqrt{(\Delta x)^2+(\Delta y)^2}\big)$. Letting Δy = Δx, we get $|\Delta x| = o(\sqrt{2}\,|\Delta x|)$, a contradiction.
I 2-8.
Proof. Note that for any h ∈ R¹ and λ = (λ¹, λ²) ∈ R², we have
\[ \max_{i=1,2}|f^i(a+h)-f^i(a)-\lambda^i h| \;\le\; |f(a+h)-f(a)-\lambda h| \;\le\; 2\max_{i=1,2}|f^i(a+h)-f^i(a)-\lambda^i h|. \]

2.2 Basic Theorems


I 2-12.
Proof. (a)
\[
\begin{aligned}
f(h,k) &= f(h_1,\dots,h_n,k_1,\dots,k_m)
= \sum_{i=1}^n f(0,\dots,h_i,\dots,0,k_1,\dots,k_m)\\
&= \sum_{i=1}^n\sum_{j=1}^m f(0,\dots,h_i,\dots,0,\,0,\dots,k_j,\dots,0)
= \sum_{i=1}^n\sum_{j=1}^m h_ik_j\, f(0,\dots,1,\dots,0,\,0,\dots,1,\dots,0).
\end{aligned}
\]
So there exists M > 0 such that $|f(h,k)| \le M\sum_{i,j}|h_ik_j| \le M|(h,k)|^2$. This implies $\lim_{(h,k)\to 0} |f(h,k)|/|(h,k)| = 0$.
(b) f(a + h, b + k) = f(a, b + k) + f(h, b + k) = f(a, b) + f(a, k) + f(h, b) + f(h, k). So
\[ \lim_{|(h,k)|\to 0} \frac{|f(a+h,b+k)-f(a,b)-f(a,k)-f(h,b)|}{|(h,k)|} = \lim_{|(h,k)|\to 0} \frac{|f(h,k)|}{|(h,k)|} = 0. \]
This implies Df(a, b)(x, y) = f(a, y) + f(x, b).


I 2-14.

Proof. We note
\[
f(a_1+h_1, a_2+h_2, \dots, a_k+h_k) = f(a_1,\dots,a_k)
+ \sum_{i=1}^k f(a_1,\dots,a_{i-1},h_i,a_{i+1},\dots,a_k)
+ \sum_{j=2}^k \;\sum_{\substack{(x_1,\dots,x_k)\text{ consists of}\\ j\ h\text{'s and }k-j\ a\text{'s}}} f(x_1,\dots,x_k).
\]
Then use (a).

I 2-15.
Proof. (a) Note det is multilinear, so we can use Problem 2-14(b).
(b) Define xᵢ(t) = (aᵢ₁(t), aᵢ₂(t), ··· , aᵢₙ(t)). Then xᵢ′(t) = (aᵢ₁′(t), ··· , aᵢₙ′(t)) by Theorem 2-3(3), and f
can be seen as the composition g ∘ x with x(t) = (x₁(t), ··· , xₙ(t))ᵀ ∈ Rⁿ × ··· × Rⁿ and g(x) = det(x).
Theorem 2-2 (chain rule) implies f′(t) = Dg(x(t)) ∘ Dx(t). Since
\[ Dg(x)(y) = \sum_{i=1}^n \det\begin{pmatrix} x_1\\ \vdots\\ y\\ \vdots\\ x_n \end{pmatrix} \quad (y\text{ in the }i\text{-th row}), \qquad Dx(t) = \begin{pmatrix} x_1'(t)\\ \vdots\\ x_n'(t) \end{pmatrix}, \]
we have
\[ f'(t) = Dg(x(t))(Dx(t)) = \sum_{i=1}^n \det\begin{pmatrix} x_1(t)\\ \vdots\\ x_i'(t)\\ \vdots\\ x_n(t) \end{pmatrix} = \sum_{i=1}^n \det\begin{pmatrix} a_{11}(t) & \cdots & a_{1n}(t)\\ \cdots & \cdots & \cdots\\ a_{i1}'(t) & \cdots & a_{in}'(t)\\ \cdots & \cdots & \cdots\\ a_{n1}(t) & \cdots & a_{nn}(t) \end{pmatrix}. \]
(c) Let
\[ A(t) = \begin{pmatrix} a_{11}(t) & \cdots & a_{1n}(t)\\ \cdots & \cdots & \cdots\\ a_{n1}(t) & \cdots & a_{nn}(t) \end{pmatrix}, \quad s(t) = \begin{pmatrix} s_1(t)\\ \vdots\\ s_n(t) \end{pmatrix}, \quad b(t) = \begin{pmatrix} b_1(t)\\ \vdots\\ b_n(t) \end{pmatrix}. \]
Then A(t)s(t) = b(t). So s(t) = A⁻¹(t)b(t) is differentiable. Moreover, A′(t)s(t) + A(t)s′(t) = b′(t). So
s′(t) = A⁻¹(t)(b′(t) − A′(t)A⁻¹(t)b(t)) = A⁻¹(t)b′(t) − A⁻¹(t)A′(t)A⁻¹(t)b(t).
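The last step can also be organized via the derivative of the matrix inverse; for completeness (a supplementary derivation, not in the original), differentiate A(t)A⁻¹(t) = I:

```latex
\[
0=\frac{d}{dt}\big(A(t)A^{-1}(t)\big)=A'(t)A^{-1}(t)+A(t)\,(A^{-1})'(t)
\;\Longrightarrow\;
(A^{-1})'(t)=-A^{-1}(t)A'(t)A^{-1}(t),
\]
```

so $s'(t)=(A^{-1}b)'(t)=A^{-1}(t)b'(t)-A^{-1}(t)A'(t)A^{-1}(t)b(t)$, the same formula.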

2.3 Partial Derivatives


I 2-23.
Proof. (b) f(x, y) = x² · 1{x>0, y>0} − x² · 1{x>0, y<0}.

2.4 Derivatives
I 2-29.
Proof. (c)
\[ D_xf(a) = \lim_{t\to 0}\frac{f(a+tx)-f(a)}{t} = \lim_{t\to 0}\frac{f(a+tx)-f(a)-Df(a)(tx)}{|tx|}\cdot\frac{|tx|}{t} + Df(a)(x). \]
Note |tx|/t is bounded for fixed x, and $\lim_{t\to 0} |f(a+tx)-f(a)-Df(a)(tx)|/|tx| = 0$. So D_x f(a) = Df(a)(x).

I 2-30.
Proof. Note that whether t > 0 or t < 0,
\[ \frac{f(tx)-f(0)}{t} = |x|\, g\Big(\frac{x}{|x|}\Big). \]

2.5 Inverse Functions


I 2-36.
Proof. By Theorem 2-11, for any x ∈ A, there exists an open set Vx containing x and an open set Wx
containing f (x) such that f : Vx → Wx has a continuous inverse f −1 : Wx → Vx which is differentiable.
Then f (A) = ∪x∈A f (Vx ) = ∪x∈A Wx must be open. For any open subset B of A, f (B) = (f −1 )−1 (B). Since
f −1 is continuous and B is open, (f −1 )−1 (B) must also be open.
Remark: For a proof without using Theorem 2-11, see [2] Theorem 8.2.
I 2-37.
Proof. We only prove part (a). Part (b) can be similarly proved. Assume f is 1-1, then the rank of Df
cannot be 1 in an open set. Indeed, if, for example, D1 f (x, y) ̸= 0 for all (x, y) in some open set A, consider
g : A → R2 defined by g(x, y) = (f (x, y), y). Then detDg ̸= 0 in A. By Inverse Function Theorem, g is 1-1
in an open subset B of A and g(B) is open. So we can take two distinct points w₁ and w₂ in g(B) with
the same first coordinate. Writing w₁ = (f(x₁, y₁), y₁) and w₂ = (f(x₂, y₂), y₂), we must have y₁ ≠ y₂
and f(x₁, y₁) = f(x₂, y₂). This contradicts f being 1-1, so our claim must be true. (It is also a
straightforward corollary of the rectification theorem, Theorem 2-13.)
Consequently, for any given (x, y) ∈ R2 and any neighborhood of (x, y), there is at least one point (x′ , y ′ )
such that D1 f (x′ , y ′ ) = 0. By the continuity of D1 f , we conclude D1 f (x, y) = 0. Similarly, we can prove
D2 f (x, y) = 0. Combined, we have Df (x, y) = 0 for any (x, y) ∈ R2 . This implies f is a constant (Problem
2-22), contradicting the assumption that f is 1-1. Therefore f cannot be 1-1.
Remark: The hint is basically Theorem 2.13 in the next section.
I 2-38.
Proof. (a) Assume not; then there exist x₁, x₂ ∈ R such that x₁ ≠ x₂ and f(x₁) = f(x₂). WLOG assume
x₁ < x₂. Since f is differentiable everywhere, f is continuous, so there exist y, z ∈ [x₁, x₂] such that
f (y) = maxa∈[x1 ,x2 ] f (a) and f (z) = mina∈[x1 ,x2 ] f (a). Since f ′ (a) ̸= 0 for all a ∈ R, at least one of y, z is
not equal to x1 or x2 . WLOG, assume y ̸∈ {x1 , x2 }. Then y ∈ (x1 , x2 ). By Fermat’s theorem, f ′ (y) = 0.
Contradiction.
(b)
\[ f'(x,y) = \begin{pmatrix} e^x\cos y & -e^x\sin y\\ e^x\sin y & e^x\cos y \end{pmatrix}, \]
so det f′(x, y) = e^{2x} ≠ 0. But clearly f(x, y + 2nπ) = f(x, y) (n = ±1, ±2, ···), so f is not 1-1.
I 2-39.
Proof. We have
\[ f'(x) = \begin{cases} \dfrac{1}{2}, & x = 0,\\[1ex] \dfrac{1}{2} + 2x\sin\dfrac{1}{x} - \cos\dfrac{1}{x}, & x \neq 0. \end{cases} \]
Note that as x → 0, 2x sin(1/x) → 0 and cos(1/x) oscillates between −1 and 1. So in any neighborhood of 0, f′(x) becomes 0 infinitely many times. In particular, in any neighborhood of 0 we can find a point x such that f′ has different signs on the two sides of x. This implies f is not 1-1 in any neighborhood of such an x, and hence in no neighborhood of 0.

2.6 Implicit Functions
I 2-40.
Proof. Define f : R¹ × Rⁿ → Rⁿ by
\[ f(t,s_1,\dots,s_n) = \Big(\sum_{j=1}^n a_{j1}(t)s_j - b_1(t),\; \dots,\; \sum_{j=1}^n a_{jn}(t)s_j - b_n(t)\Big). \]
Then
\[ \big(D_{1+j}f^i(t,s_1,\dots,s_n)\big)_{1\le i,j\le n} = \begin{pmatrix} a_{11}(t) & a_{21}(t) & \cdots & a_{n1}(t)\\ \cdots & \cdots & \cdots & \cdots\\ a_{1n}(t) & a_{2n}(t) & \cdots & a_{nn}(t) \end{pmatrix} \]
is non-singular. By the Implicit Function Theorem, there is an open interval I containing t and an open set
A containing (s₁, ··· , sₙ) such that for each t̄ ∈ I there is a unique s(t̄) ∈ A with f(t̄, s(t̄)) = 0, and s(t̄) is
differentiable.

3 Integration
3.1 Basic Definitions
3.2 Measure Zero and Content Zero
I 3-9.
Proof. (b) The set of integers.

I 3-10.
Proof. (a) By definition, ∀ε > 0 there is a finite cover {U₁, ··· , Uₙ} of C by closed rectangles such that
$\sum_{i=1}^n v(U_i) < \varepsilon$. Since the Uᵢ are closed, C̄ ⊂ ∪ᵢ₌₁ⁿ Uᵢ. So ∂C = C̄ \ C⁰ ⊂ ∪ᵢ₌₁ⁿ Uᵢ, and hence the content of ∂C is 0.
(b) Let C = [0, 1] ∩ Q; then ∂C = [0, 1].

I 3-11.
Proof. Suppose ∂A has measure 0. Then by the fact that ∂A = [0, 1] − A and $\sum_{i=1}^\infty (b_i - a_i) < 1$, we conclude
[0, 1] = ∂A ∪ A has measure less than 1. But [0, 1] is compact, so [0, 1] would then have content less than 1, contradicting
Theorem 3-5.

3.3 Integrable Functions


I 3-14.

Proof. Use Theorem 3-8.


I 3-15.

Proof. By the definition of content 0, C having content 0 implies C̄ has content 0. So ∂C ⊂ C̄ has content 0
and C is Jordan-measurable. Also by the definition of content 0, C is bounded, hence contained in some
closed rectangle A. Finally, by Problem 3-8, C cannot contain any closed rectangle. So for any partition P of
A and any subrectangle S of P, min_{x∈S} 1_C(x) = 0. So ∫_A 1_C(x)dx = L(1_C, P) = 0.

I 3-16.
Proof. As in Problem 3-10(b), C = [0, 1] ∩ Q works since ∂C = [0, 1].

I 3-17.
Proof. Since C has measure 0, C cannot contain any closed rectangle, by Theorem 3-6 and Problem 3-8. So
on any subrectangle S of A, 1_C attains the value 0. This implies L(1_C, P) = 0 for any partition P of A. By
the definition of the integral, ∫_A 1_C = 0.

I 3-18.

Proof. It suffices to show Bₙ = {x : f(x) > 1/n} has content 0. Indeed, for any ε > 0 we can find a partition
P of A such that
\[ \varepsilon > U(f,P) = \sum_{S\in P} M_S(f)\,v(S) \;\ge \sum_{S\in P,\; S\cap B_n\neq\emptyset} M_S(f)\,v(S) \;>\; \frac{1}{n}\sum_{S\in P,\; S\cap B_n\neq\emptyset} v(S). \]
So the subrectangles of P that intersect Bₙ form a finite cover of Bₙ of total volume at most nε.
Since ε is arbitrary, we conclude Bₙ has content 0.
I 3-19.

Proof. Let A = {x : f(x) ≠ 1_U(x)}. By Problem 1-18, ∂U = [0, 1] \ U. So ∀x ∈ ∂U \ A, f(x) = 1_U(x) = 0.
Fix such an x. By the definition of boundary and the fact that U is open, any neighborhood of x must contain an interval
I ⊂ U. Since A has measure 0, I \ A ≠ ∅. Therefore any neighborhood of x contains points of U \ A, on
which f takes the value 1. So f is discontinuous at x. That is, the set of discontinuity points of f contains
∂U \ A. Problem 3-11 shows ∂U does not have measure 0, so ∂U \ A does not have measure 0. By Theorem
3-8, f is not integrable on [0, 1].

I 3-20.
Proof. Use Problem 3-12 and Theorem 3-8.

I 3-21.
Proof. First of all, ∂C is bounded and closed, hence compact. By Theorem 3-6, ∂C has measure 0 if and only
if ∂C has content 0. By Theorem 3-9, it suffices to show: ∂C has content 0 if and only if for every ε > 0
there is a partition P of A such that $\sum_{S\in\mathcal{S}_1} v(S) - \sum_{S\in\mathcal{S}_2} v(S) < \varepsilon$, where 𝒮₁ consists of all subrectangles
intersecting C and 𝒮₂ of all subrectangles contained in C.
For necessity, we note ∂C has content 0, which implies that ∀ε > 0 we can find finitely many closed rectangles
{U₁, ··· , Uₙ} covering ∂C such that v(U₁) + ··· + v(Uₙ) < ε. We can extend {U₁, ··· , Uₙ} to a partition P of A; then
\[ \sum_{S\in\mathcal{S}_1} v(S) - \sum_{S\in\mathcal{S}_2} v(S) \le \sum_{i=1}^n v(U_i) < \varepsilon. \]
For sufficiency, it is clear that the condition implies ∂C has content 0.

I 3-22.
Proof. First note A is necessarily bounded, by the definition of Jordan-measurable. Then use Problem 3-21:
\[ \int_{A-C} 1 = \int_A 1 - \int_C 1 \;\le\; \sum_{S\in\mathcal{S}_1} v(S) - \sum_{S\in\mathcal{S}_2} v(S) < \varepsilon. \]

3.4 Fubini’s Theorem
I 3-23.
Proof. The key trick consists of two parts: for compact sets, content 0 is equivalent to measure 0; and Problem
3-18.
First, we can assume without loss of generality that C is closed. Indeed, denote by C̄ the closure
of C. By the definition of content 0, for any ε > 0 there exists a finite cover {U₁, ··· , Uₙ} of C by
closed rectangles such that $\sum_{i=1}^n v(U_i) < \varepsilon$. Since the Uᵢ are closed, they also cover C̄. In
conclusion, C̄ has content 0. Moreover, A′ = {x ∈ A : {y ∈ B : (x, y) ∈ C} is not of content 0} is contained
in A′′ = {x ∈ A : {y ∈ B : (x, y) ∈ C̄} is not of content 0}. So for our problem it suffices to consider C̄
and prove A′′ has measure 0. Note C ⊂ A × B is bounded, so if C is closed, then C is compact.
By Problem 3-15, C is Jordan-measurable and ∫_{A×B} 1_C = 0. By Fubini's Theorem, for 𝓛(x) = L∫_B 1_C(x, y) dy
and 𝒰(x) = U∫_B 1_C(x, y) dy, we have ∫_{A×B} 1_C = ∫_A 𝒰 = ∫_A 𝓛. Again by Problem 3-15, 1_{A′} 𝒰 = 𝒰. Since C is
compact and the coordinate projection is continuous, for any x ∈ A, {y : (x, y) ∈ C} is compact (Theorem 1-9).
Therefore 𝒰 > 0 on the set A′, by Problem 3-18 and Theorem 3-6. On the other hand, 0 = ∫_{A×B} 1_C =
∫_A 𝒰 = ∫_A 1_{A′} 𝒰. By Problem 3-18 and the fact that 𝒰 > 0 on A′, we conclude A′ has measure 0.

I 3-24.
Proof. C is the union of countably many segments: {1} × [0, 1], {1/2} × [0, 1/2], {1/3} × [0, 1/3], {2/3} × [0, 2/3], ··· .
The rectangle [0, 1] × [0, 1/n] covers all of these segments except finitely many, and those finitely many
segments can be covered by finitely many rectangles whose total volume is as small as we want. Therefore
C has content 0. Meanwhile A′ = Q ∩ [0, 1], so A′ has measure 0 but is not of content 0.

I 3-25.
Proof. Since the set is compact, by Theorem 3-6 we can either prove the claim for content or for
measure. When n = 1, Theorem 3-5 shows [a₁, b₁] is not a set of measure 0. Suppose that for
n ≤ N, [a₁, b₁] × ··· × [aₙ, bₙ] is not of measure 0. Then
\[ \int_{[a_1,b_1]\times\cdots\times[a_{N+1},b_{N+1}]} 1 = \int_{[a_{N+1},b_{N+1}]} \int_{[a_1,b_1]\times\cdots\times[a_N,b_N]} 1 \;>\; 0 \]
by Problem 3-18 and the induction hypothesis. Mathematical induction concludes the claim is true for any n ≥ 1.

I 3-26.
Proof. The boundary of A_f consists of four parts: Γ₁ = {a} × [0, f(a)], Γ₂ = {b} × [0, f(b)], Γ₃ = [a, b] × {0},
and Γ₄ = {(x, y) : a ≤ x ≤ b, f(x−) ∧ f(x+) ≤ y ≤ f(x−) ∨ f(x+)}. It suffices to show Γ₄ has measure
0. Indeed, by the integrability of f, A = {x ∈ [a, b] : f is discontinuous at x} has measure 0. So ∀ε > 0
there exists an open cover (Uₙ)ₙ≥₁ of A such that $\sum_{n=1}^\infty v(U_n) < \frac{\varepsilon}{4M}$, where M = sup_{a≤x≤b} |f(x)|. Then
(Uₙ × (−2M, 2M))ₙ≥₁ is an open cover of {(x, y) : x ∈ A, f(x−) ∧ f(x+) ≤ y ≤ f(x−) ∨ f(x+)} and
$\sum_{n=1}^\infty v(U_n\times(-2M,2M)) = \sum_{n=1}^\infty v(U_n)\cdot 4M < \varepsilon$. The set B = [a, b] − ∪ₙ₌₁^∞ Uₙ is a compact set that consists
of continuity points of f. By the uniform continuity of f on B, we can find an open cover of {(x, f(x)) : x ∈ B}
whose total volume is as small as we want. Combined, Γ₄ is covered by (Uₙ × (−2M, 2M))ₙ≥₁ together with an
open cover of {(x, f(x)) : x ∈ B} of arbitrarily small total volume. So Γ₄ has measure 0. This shows A_f is Jordan-measurable.
That its area is ∫ₐᵇ f is easily proved by Fubini's Theorem.
I 3-27.
Proof. $\int_a^b\!\int_a^y f(x,y)\,dx\,dy = \int_a^b\!\int_a^b 1_{\{x\le y\}} f(x,y)\,dx\,dy$. Since 1_{x≤y} is clearly integrable, 1_{x≤y} f(x, y) is integrable and Fubini's Theorem applies:
\[ \int_a^b\!\!\int_a^b 1_{\{x\le y\}} f(x,y)\,dx\,dy = \int_a^b\!\!\int_a^b 1_{\{x\le y\}} f(x,y)\,dy\,dx = \int_a^b\!\!\int_x^b f(x,y)\,dy\,dx. \]

I 3-28.

Proof. Assume D₁,₂f(a) − D₂,₁f(a) > 0. Then by continuity there is a rectangle A = [a₁, b₁] × [a₂, b₂]
containing a such that D₁,₂f − D₂,₁f > 0 on A. By Fubini's Theorem and Problem 3-18,
\[ 0 < \int_A (D_{1,2}f - D_{2,1}f) = \int_{a_1}^{b_1} [D_1f(x,b_2) - D_1f(x,a_2)]\,dx - \int_{a_2}^{b_2} [D_2f(b_1,y) - D_2f(a_1,y)]\,dy = 0. \]
Contradiction.
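The final "= 0" can be checked by applying the one-variable Fundamental Theorem once more (a supplementary computation added here):

```latex
\[
\int_{a_1}^{b_1}[D_1f(x,b_2)-D_1f(x,a_2)]\,dx
= f(b_1,b_2)-f(a_1,b_2)-f(b_1,a_2)+f(a_1,a_2)
= \int_{a_2}^{b_2}[D_2f(b_1,y)-D_2f(a_1,y)]\,dy,
\]
```

so the two iterated integrals agree and their difference vanishes.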

I 3-29.
Proof. Let A be a Jordan-measurable set in the yz-plane. For simplicity and for the sake of illustration, we
assume A is in the half plane {(y, z) : y ≥ 0}. Let Γ be the subset of R³ obtained by revolving A about the
z-axis. Then (x, y, z) ∈ Γ if and only if $(\sqrt{x^2+y^2}, z) \in A$. So
\[ \int_{\mathbf{R}^3} \chi_\Gamma(x,y,z)\,dx\,dy\,dz = \int_{\mathbf{R}^3} \chi_A\big(\sqrt{x^2+y^2},\, z\big)\,dx\,dy\,dz. \]
Using the change of variables x = ρ cos θ, y = ρ sin θ, we have
\[ \int_{\mathbf{R}^3} \chi_\Gamma(x,y,z)\,dx\,dy\,dz = \int_{[0,\infty)\times[0,2\pi)\times(-\infty,\infty)} \chi_A(\rho,z)\,\rho\,d\rho\,d\theta\,dz = 2\pi \int_{-\infty}^{\infty}\!\!\int_0^{\infty} \chi_A(\rho,z)\,\rho\,d\rho\,dz. \]

I 3-32.
Proof. To prove F′(y) = ∫ₐᵇ D₂f(x, y) dx, we only need to show that for any c ≤ y₁ ≤ y₂ ≤ d, F(y₂) − F(y₁) =
∫_{y₁}^{y₂} ∫ₐᵇ D₂f(x, y) dx dy. Since D₂f is continuous, it is integrable, so Fubini's Theorem implies
\[ \int_{y_1}^{y_2}\!\!\int_a^b D_2f(x,y)\,dx\,dy = \int_a^b\!\!\int_{y_1}^{y_2} D_2f(x,y)\,dy\,dx = \int_a^b [f(x,y_2) - f(x,y_1)]\,dx = F(y_2) - F(y_1). \]
In particular, our proof shows Leibniz's rule holds as long as D₂f(x, y) is integrable on [a, b] × [c, d].
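A quick sanity check of Leibniz's rule (an added example): take f(x, y) = sin(xy) on [0, 1] × [c, d] with 0 < c < d:

```latex
\[
F(y)=\int_0^1\sin(xy)\,dx=\frac{1-\cos y}{y},
\qquad
F'(y)=\int_0^1 x\cos(xy)\,dx=\frac{\sin y}{y}-\frac{1-\cos y}{y^2},
\]
```

and differentiating (1 − cos y)/y directly gives the same expression.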

I 3-34.
Proof. By Leibniz's rule:
\[ D_1f(x,y) = g_1(x,0) + \int_0^y D_1g_2(x,t)\,dt = g_1(x,0) + \int_0^y D_2g_1(x,t)\,dt = g_1(x,y). \]

I 3-35.
Proof. The key difficulty is to show any linear transformation is the composition of linear transformations of
the type considered in (a). Recall the three linear transformations correspond to three basic manipulations
of matrices that reduce an invertible matrix to the identity matrix. So through linear transformations of
the type considered in (a), the identity matrix can be transformed into any invertible matrix.

3.5 Partitions of Unity
I 3-37.
Proof. (a) Since f ≥ 0, ∫_ε^{1−ε} f is monotone increasing as ε → 0. So lim_{ε→0} ∫_ε^{1−ε} f exists if and only
if lim_{ε→0} ∫_ε^{1−ε} f is bounded. Let Φ be a partition of unity subordinate to an admissible cover 𝒪 of (0, 1).
Because [ε, 1−ε] is a compact set, only finitely many φ ∈ Φ are not 0 on [ε, 1−ε]. So
\[ \int_\varepsilon^{1-\varepsilon} f = \int_\varepsilon^{1-\varepsilon} \sum_{\varphi\in\Phi} \varphi\cdot f = \sum_{\varphi\in\Phi} \int_\varepsilon^{1-\varepsilon} \varphi\cdot f \;\le\; \sum_{\varphi\in\Phi} \int_{(0,1)} \varphi\cdot f, \]
and lim_{ε→0} ∫_ε^{1−ε} f ≤ Σ_{φ∈Φ} ∫_{(0,1)} φ·f. On the other hand, for any finitely many φ in Φ, say φ₁, φ₂, ··· , φₙ, there exists ε > 0 such that all of the φᵢ (i = 1, ··· , n) are 0 outside
[ε, 1−ε]. So
\[ \sum_{i=1}^n \int_{(0,1)} \varphi_i\cdot f = \sum_{i=1}^n \int_\varepsilon^{1-\varepsilon} \varphi_i\cdot f = \int_\varepsilon^{1-\varepsilon} \sum_{i=1}^n \varphi_i\cdot f \;\le\; \int_\varepsilon^{1-\varepsilon} f \;\le\; \lim_{\varepsilon\to 0} \int_\varepsilon^{1-\varepsilon} f. \]
Letting n → ∞, we get Σ_{φ∈Φ} ∫_{(0,1)} φ·f ≤ lim_{ε→0} ∫_ε^{1−ε} f. So by definition,
\[ \int_{(0,1)} f \text{ exists} \iff \sum_{\varphi\in\Phi} \int_{(0,1)} \varphi\cdot f < \infty \iff \lim_{\varepsilon\to 0} \int_\varepsilon^{1-\varepsilon} f < \infty \iff \lim_{\varepsilon\to 0} \int_\varepsilon^{1-\varepsilon} f \text{ exists}. \]
(b) This problem is about the distinction between absolute convergence and conditional convergence of
the series $\sum_{n=1}^\infty \frac{(-1)^n}{n}$. Look it up in any standard textbook on analysis.

I 3-38.
Proof. We leave out the details and only describe the main idea. For each n, we can find a number aₙ ∈
(n, n+1) such that $\int_{A_n\cap(-\infty,a_n]} f = \frac{(-1)^n}{2n}$ and $\int_{A_n\cap[a_n,\infty)} f = \frac{(-1)^n}{2n}$. For a given small number ε > 0,
the partition of unity on (n, n+1) can be chosen in two ways: for Φ, there are φ₁ ∈ Φ and φ₂ ∈ Φ such that
φ₁ ≡ 1 on (aₙ₋₁, aₙ−ε) and φ₂ ≡ 1 on (aₙ+ε, aₙ₊₁), and they overlap a little bit around aₙ; for Ψ, there
are ψ₁ ∈ Ψ and ψ₂ ∈ Ψ such that ψ₁ ≡ 1 on (n, n+2−ε) and ψ₂ ≡ 1 on (n+2+ε, n+4), and they overlap
a little bit around n+2. Then
\[ \sum_{\varphi\in\Phi} \int \varphi\cdot f \approx \sum_n \frac{1}{2}\Big[\frac{(-1)^n}{n} + \frac{(-1)^{n+1}}{n+1}\Big] = \frac{1}{2}\sum_n \frac{(-1)^n}{n(n+1)} \]
is absolutely convergent, and
\[ \sum_{\psi\in\Psi} \int \psi\cdot f \approx \sum_n \Big[\frac{(-1)^n}{n} + \frac{(-1)^{n+1}}{n+1}\Big] = \sum_n \frac{(-1)^n}{n(n+1)} \]
is absolutely convergent. Clearly they converge to different values.

3.6 Change of Variable


I 3-39.

Proof. Since g is continuous, the set B = {x ∈ A : det g′(x) = 0} is a closed subset of A. Applying Theorem 3-13 to the
open set A′ = A \ B, we have ∫_{g(A′)} f = ∫_{A′} (f ∘ g)|det g′| = ∫_A (f ∘ g)|det g′|. By Sard's Theorem, g(B) has
measure 0. So ∫_{g(A)} f = ∫_{g(A′)∪g(B)} f = ∫_{g(A′)} f. So we still have Theorem 3-13 without the assumption
det g′(x) ≠ 0.
Remark: By Theorem 2-11 (Inverse Function Theorem), if we stick to the condition det g′(x) ≠ 0 in Theorem
3-13, then g is an open mapping. So g(A) is open and talking about integration on g(A) is meaningful by
the definition of integrability in the extended sense. But without det g′(x) ≠ 0, we cannot guarantee g is still
an open mapping, and it may not be meaningful, rigorously speaking, to talk about integration on g(A): we
have to check that ∂g(A) has measure 0.
However, if B is bounded, it is compact and hence g(B) is compact. So ∂g(B) ⊂ g(B) has measure 0,
and it is meaningful to talk about integration on g(B).

4 Integration on Chains
4.1 Algebraic Preliminaries
I 4-1.
Proof. (a) By Theorem 4-4(3) and induction, when i₁, ··· , i_k are distinct,
\[
\begin{aligned}
\varphi_{i_1}\wedge\cdots\wedge\varphi_{i_k}(e_{i_1},\dots,e_{i_k})
&= k!\,\operatorname{Alt}(\varphi_{i_1}\otimes\cdots\otimes\varphi_{i_k})(e_{i_1},\dots,e_{i_k})\\
&= k!\cdot\frac{1}{k!}\sum_{\sigma\in S_k}\varphi_{i_1}\otimes\cdots\otimes\varphi_{i_k}(e_{\sigma(i_1)},\dots,e_{\sigma(i_k)})\cdot\operatorname{sgn}\sigma\\
&= \sum_{\sigma\in S_k}\varphi_{i_1}(e_{\sigma(i_1)})\cdots\varphi_{i_k}(e_{\sigma(i_k)})\cdot\operatorname{sgn}\sigma = 1.
\end{aligned}
\]
If the factor (k+l)!/(k! l!) did not appear in the definition of ∧, the right-hand side would be 1/k!.
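For k = 2 the role of the factor is visible directly (an added illustration):

```latex
\[
\varphi_1\wedge\varphi_2=\frac{(1+1)!}{1!\,1!}\operatorname{Alt}(\varphi_1\otimes\varphi_2)
=\varphi_1\otimes\varphi_2-\varphi_2\otimes\varphi_1,
\qquad
(\varphi_1\wedge\varphi_2)(e_1,e_2)=1,
\]
```

whereas without the factor, $\operatorname{Alt}(\varphi_1\otimes\varphi_2)(e_1,e_2)=\tfrac{1}{2}=\tfrac{1}{k!}$.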
(b) Suppose $v_l = \sum_{j=1}^n a_{lj}e_j$ (1 ≤ l ≤ k). Expanding by multilinearity in each slot and discarding the terms that vanish (those containing some e_j with j ∉ {i₁, ··· , i_k}), we get
\[
\begin{aligned}
\varphi_{i_1}\wedge\cdots\wedge\varphi_{i_k}(v_1,\dots,v_k)
&= \varphi_{i_1}\wedge\cdots\wedge\varphi_{i_k}\Big(\sum_{j=1}^n a_{1j}e_j,\dots,\sum_{j=1}^n a_{kj}e_j\Big)\\
&= \varphi_{i_1}\wedge\cdots\wedge\varphi_{i_k}\Big(\sum_{j=1}^k a_{1i_j}e_{i_j},\dots,\sum_{j=1}^k a_{ki_j}e_{i_j}\Big)\\
&= \det(a_{li_j})_{1\le l\le k,\,1\le j\le k}\;\varphi_{i_1}\wedge\cdots\wedge\varphi_{i_k}(e_{i_1},\dots,e_{i_k})\\
&= \det(a_{li_j})_{1\le l\le k,\,1\le j\le k}.
\end{aligned}
\]
Remark: Combined with Theorem 4-6, this tells us how to calculate the result of a differential form acting
on vectors.
I 4-2.
Proof. Λⁿ(V) has dimension 1, so f* as a linear mapping from Λⁿ(V) to Λⁿ(V) must be multiplication
by some constant c. Let e₁, ··· , eₙ be a basis of V and φ₁, ··· , φₙ the dual basis. Then
f*(φ₁ ∧ ··· ∧ φₙ)(e₁, ··· , eₙ) = φ₁ ∧ ··· ∧ φₙ(f(e₁), ··· , f(eₙ)) = det(f) · φ₁ ∧ ··· ∧ φₙ(e₁, ··· , eₙ)
by Theorem 4-6. By multilinearity and the alternating property, f*(φ₁ ∧ ··· ∧ φₙ)(x₁, ··· , xₙ) = det(f) · φ₁ ∧ ··· ∧
φₙ(x₁, ··· , xₙ) holds for any x₁, ··· , xₙ ∈ V. So c = det f.
I 4-3.
Proof. Let v₁, ··· , vₙ be an orthonormal basis and $w_i = \sum_{j=1}^n a_{ij}v_j$. By Theorem 4-6, ω(w₁, ··· , wₙ) =
det(aᵢⱼ) ω(v₁, ··· , vₙ) = det(aᵢⱼ). Meanwhile, $g_{ij} = T(w_i,w_j) = T\big(\sum_{k=1}^n a_{ik}v_k, \sum_{k=1}^n a_{jk}v_k\big) = \sum_{k=1}^n a_{ik}a_{jk}$.
If we denote the matrix (gᵢⱼ) by G and (aᵢⱼ) by A, we have G = AAᵀ. So det G = (det A)², i.e.
\[ |\omega(w_1,\dots,w_n)| = \sqrt{\det(g_{ij})}. \]
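A small check in R² (an added example): take w₁ = (1, 0), w₂ = (1, 1):

```latex
\[
A=\begin{pmatrix}1&0\\1&1\end{pmatrix},\qquad
G=AA^{T}=\begin{pmatrix}1&1\\1&2\end{pmatrix},\qquad
|\omega(w_1,w_2)|=|\det A|=1=\sqrt{\det G}.
\]
```

The parallelogram spanned by w₁, w₂ indeed has area 1.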

I 4-4.
Proof. We first show f (e1 ), · · · , f (en ) is an orthonormal basis of V . Indeed, T (f (ei ), f (ej )) = f ∗ T (ei , ej ) =
⟨ei , ej ⟩ = δij . From this, we can easily show (f (ei ))ni=1 is linearly independent. Since dimV = n, (f (ei ))ni=1 is
an orthonormal basis. Since [f (e1 ), · · · , f (en )] = µ, f ∗ ω(e1 , · · · , en ) = ω(f (e1 ), · · · , f (en )) = ω(e1 , · · · , en ) =
1. By the multi-linearity and alternating property, f ∗ ω = det.
I 4-5.
Proof. Suppose $c_i(0) = \sum_{j=1}^n a_{ij}(t)c_j(t)$ (0 ≤ t ≤ 1). If we denote the matrix (aᵢⱼ(t)) by A(t), we have
(c₁(0), ··· , cₙ(0))ᵀ = A(t)(c₁(t), ··· , cₙ(t))ᵀ. So det[(c₁(0), ··· , cₙ(0))ᵀ] = det A(t) · det[(c₁(t), ··· , cₙ(t))ᵀ].
Since A(t) is a continuous function of t, so is det A(t). Because A(t) is non-singular for any t ∈ [0, 1], det A(t)
does not change sign. This implies [c₁(0), ··· , cₙ(0)] = [c₁(t), ··· , cₙ(t)], ∀t ∈ [0, 1].

I 4-6.
Proof. (b) Denote v₁ × ··· × vₙ₋₁ by z. We first show v₁, ··· , vₙ₋₁, z are linearly independent. Indeed, assume
not; then
\[ \langle z,z\rangle = \det\begin{pmatrix} v_1\\ v_2\\ \vdots\\ v_{n-1}\\ z \end{pmatrix} = 0, \]
so z = 0. On the other hand, by the linear independence of v₁, ··· , vₙ₋₁, there exists vₙ ∈ Rⁿ such that v₁, ··· , vₙ are linearly independent. As a result,
\[ \langle v_n, z\rangle = \det\begin{pmatrix} v_1\\ \vdots\\ v_n \end{pmatrix} \neq 0. \]
Contradiction. So v₁, ··· , vₙ₋₁, z is a basis of Rⁿ. Moreover, for any ω ∈ Λⁿ(Rⁿ),
\[ \omega(v_1,\dots,v_{n-1},z) = \det\begin{pmatrix} v_1\\ \vdots\\ v_{n-1}\\ z \end{pmatrix} \omega(e_1,\dots,e_n) = \langle z,z\rangle\,\omega(e_1,\dots,e_n). \]
So [v₁, ··· , vₙ₋₁, z] is the usual orientation.
I 4-7.

Proof. For any non-zero ω ∈ Λⁿ(V), we can always find a basis v₁, ··· , vₙ so that ω(v₁, ··· , vₙ) = 1.
Define a bilinear functional T on V × V by designating T(vᵢ, vⱼ) = δᵢⱼ. Then T extends to all of
V × V and is an inner product. Under T, v₁, ··· , vₙ is an orthonormal basis. Suppose v₁′, ··· , vₙ′ is
another basis of V which is orthonormal under T and [v₁′, ··· , vₙ′] = [v₁, ··· , vₙ]. Then ω(v₁′, ··· , vₙ′) =
det(aᵢⱼ) ω(v₁, ··· , vₙ) = det(aᵢⱼ), where A = (aᵢⱼ) is the matrix representing v₁′, ··· , vₙ′ in terms of v₁, ··· ,
vₙ. Since $\delta_{ij} = T(v_i', v_j') = T\big(\sum_{k=1}^n a_{ik}v_k, \sum_{k=1}^n a_{jk}v_k\big) = \sum_{k=1}^n a_{ik}a_{jk}$, A = (aᵢⱼ) is an orthogonal matrix.
Hence |det A| = 1, which together with the equality of orientations implies det A = 1.

I 4-8.

Proof. v₁ × ··· × vₙ₋₁ is the unique element z ∈ V such that for any w ∈ V,
\[ T(w,z) = \omega(v_1,\dots,v_{n-1},w) = \det\begin{pmatrix} v_1\\ \vdots\\ v_{n-1}\\ w \end{pmatrix} \omega(e_1,\dots,e_n) = \det\begin{pmatrix} v_1\\ \vdots\\ v_{n-1}\\ w \end{pmatrix}. \]
I 4-9.

Proof. (a) We only prove e₁ × e₁ = 0 and e₁ × e₃ = −e₂. Suppose e₁ × e₁ = z; then for any w ∈ V,
\[ \langle w,z\rangle = \det\begin{pmatrix} e_1\\ e_1\\ w \end{pmatrix} = 0. \]
So z = 0. Suppose e₁ × e₃ = a₁e₁ + a₂e₂ + a₃e₃; then for any w = b₁e₁ + b₂e₂ + b₃e₃,
\[ \langle w, e_1\times e_3\rangle = a_1b_1 + a_2b_2 + a_3b_3 = \det\begin{pmatrix} e_1\\ e_3\\ b_1e_1+b_2e_2+b_3e_3 \end{pmatrix} = b_2\det\begin{pmatrix} e_1\\ e_3\\ e_2 \end{pmatrix} = -b_2. \]
Since b₁, b₂, b₃ are arbitrary, a₁ = a₃ = 0 and a₂ = −1. That is, e₁ × e₃ = −e₂.
(b) Suppose z = (z¹, z², z³); then
\[ \langle z, v\times w\rangle = \det\begin{pmatrix} v^1 & v^2 & v^3\\ w^1 & w^2 & w^3\\ z^1 & z^2 & z^3 \end{pmatrix} = z^1(v^2w^3 - v^3w^2) + z^2(w^1v^3 - v^1w^3) + z^3(v^1w^2 - w^1v^2). \]
Since z is arbitrary, we conclude v × w = (v²w³ − v³w², w¹v³ − v¹w³, v¹w² − w¹v²).
(c) Note sin²θ = 1 − cos²θ = 1 − ⟨v, w⟩²/(|v|²|w|²). So |v|²|w|² sin²θ = |v|²|w|² − ⟨v, w⟩² = |v × w|², by part (b).
⟨v × w, v⟩ = ⟨v × w, w⟩ = 0 is clear from the definition of "×".
(d) ⟨v, w × z⟩ = ⟨w, z × v⟩ = ⟨z, v × w⟩, since
\[ \det\begin{pmatrix} w\\ z\\ v \end{pmatrix} = \det\begin{pmatrix} z\\ v\\ w \end{pmatrix} = \det\begin{pmatrix} v\\ w\\ z \end{pmatrix}. \]
For the rest of this part of the problem, use (b).
(e) Use (c).
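As a quick check of the formula in (b) (added here), take v = e₁ = (1, 0, 0) and w = e₂ = (0, 1, 0):

```latex
\[
v\times w=(v^2w^3-v^3w^2,\; w^1v^3-v^1w^3,\; v^1w^2-w^1v^2)=(0,0,1)=e_3,
\]
```

consistent with the computations in (a).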
I 4-10.

Proof. Let V = span{w₁, ··· , wₙ₋₁} and define φ ∈ Λⁿ⁻¹(V) by
\[ \varphi(x_1,\dots,x_{n-1}) = \det\begin{pmatrix} x_1\\ \vdots\\ x_{n-1}\\ \frac{w_1\times\cdots\times w_{n-1}}{|w_1\times\cdots\times w_{n-1}|} \end{pmatrix}, \quad \forall x_1,\dots,x_{n-1}\in V. \]
By the definition of w₁ × w₂ × ··· × wₙ₋₁, φ(w₁, ··· , wₙ₋₁) = |w₁ × ··· × wₙ₋₁|. According
to Problem 4-5, it suffices to show φ is the volume element of V determined by ⟨ , ⟩ and some
orientation μ. Indeed, w₁ × ··· × wₙ₋₁ is perpendicular to V, so for any orthonormal basis v₁, ··· , vₙ₋₁ of
V, the vectors v₁, ··· , vₙ₋₁, (w₁ × ··· × wₙ₋₁)/|w₁ × ··· × wₙ₋₁| form an orthonormal basis of Rⁿ. This implies
\[ \varphi(v_1,\dots,v_{n-1}) = \det\begin{pmatrix} v_1\\ \vdots\\ v_{n-1}\\ \frac{w_1\times\cdots\times w_{n-1}}{|w_1\times\cdots\times w_{n-1}|} \end{pmatrix} = \pm 1. \]
We can choose an orthonormal basis v₁*, ··· , vₙ₋₁* in such a way that φ(v₁*, ··· , vₙ₋₁*) = 1. Denote
[v₁*, ··· , vₙ₋₁*] by μ; then by Theorem 4-6, φ is the volume element determined by ⟨ , ⟩ and μ.
I 4-11.

Proof. The property T(x, f(y)) = T(f(x), y) (∀x, y ∈ V) holds if and only if T(vᵢ, f(vⱼ)) = T(f(vᵢ), vⱼ)
(1 ≤ i, j ≤ n). Since $f(v_j) = \sum_{k=1}^n a_{jk}v_k$ and $f(v_i) = \sum_{k=1}^n a_{ik}v_k$, we get $T(v_i, f(v_j)) = \sum_{k=1}^n a_{jk}T(v_i,v_k) = a_{ji}$
and $T(f(v_i), v_j) = \sum_{k=1}^n a_{ik}T(v_k,v_j) = a_{ij}$. So we have aⱼᵢ = aᵢⱼ.

4.2 Fields and Forms


I 4-13.
Proof. (a) For any vₚ ∈ Rⁿₚ,
\[ (g\circ f)_*(v_p) = (D(g\circ f)(p)(v))_{g\circ f(p)} = (Dg(f(p))\circ Df(p)(v))_{g\circ f(p)} = g_*[(Df(p)(v))_{f(p)}] = g_*\circ f_*(v_p). \]
For any ω ∈ Λᵏ at the point g ∘ f(p),
\[ (g\circ f)^*(\omega) = \omega((g\circ f)_*) = \omega(g_*\circ f_*) = g^*\omega(f_*) = f^*\circ g^*\omega. \]
So (g ∘ f)* = f* ∘ g*.
(b) Apply Theorem 4-10(2) with k = l = 0.

I 4-14.
Proof. Let p = c(t); then $f_*(v) = Df(p)(v) = \big(\sum_{i=1}^n D_if^1(p)v^i, \dots, \sum_{i=1}^n D_if^m(p)v^i\big)$. Meanwhile, the
tangent vector to f ∘ c at t is
\[ ((f^1\circ c)'(t),\dots,(f^m\circ c)'(t)) = \Big(\sum_{i=1}^n D_if^1(p)(c^i)'(t),\dots,\sum_{i=1}^n D_if^m(p)(c^i)'(t)\Big) = \Big(\sum_{i=1}^n D_if^1(p)v^i,\dots,\sum_{i=1}^n D_if^m(p)v^i\Big). \]
So the tangent vector to f ∘ c at t is f_*(v).

I 4-18.
Proof. Let g(t) = p + tv; then
\[ D_vf(p) = \lim_{t\to 0}\frac{f(p+tv)-f(p)}{t} = \lim_{t\to 0}\frac{f\circ g(t)-f\circ g(0)}{t} = Df(g(0))Dg(0) = (D_1f(p),\dots,D_nf(p))\begin{pmatrix} v^1\\ \vdots\\ v^n \end{pmatrix} = \sum_{i=1}^n D_if(p)v^i = \langle v,w\rangle. \]
After normalizing v to a unit vector, the Cauchy-Schwarz inequality gives |D_v f(p)| ≤ |w| = D_{w/|w|} f(p). This
shows the gradient of f at p points in the direction in which f is changing fastest at p.

I 4-20.
Proof. We only describe the intuition. We define a continuous transformation that pastes together the
segments AB and CD, where A = (0, 0), B = (1, 0), C = (1, 1), and D = (0, 1). The resulting image is a
ring. This solves the case n = 1.
For general n ≥ 2, divide the unit cube vertically into strips of equal size, identify all the strips with a single
one, and get the orientation of the edges "right".
I 4-21.

Proof. ∫_{c◦p} ω = ∫_{I^k} (c ◦ p)∗ ω = ∫_{I^k} p∗ ◦ c∗ ω. Suppose c∗ ω = h dx1 ∧ · · · ∧ dxk . Then by the change of variables formula and Theorem 4-9,

∫_{I^k} p∗ ◦ c∗ ω = ∫_{I^k} (h ◦ p) det p′ = ∫_{I^k} (h ◦ p)|det p′ | = ∫_{I^k} h = ∫_{I^k} c∗ ω = ∫_c ω.

4.3 Geometric Preliminaries
I 4-24.

Proof. We fix a line L that goes through 0 and partition [0, 1] so that each c([ti−1 , ti ]) (1 ≤ i ≤ m) is contained on one side of L. Let p(t) be the intersection of the unit circle {(x, y) : x2 + y 2 = 1} with the ray from the origin O = (0, 0) through c(t). Then c([ti−1 , ti ]) and p([ti−1 , ti ]) are contained in the same side of L. Define a singular 2-cube ci in R2 − 0 by

ci (t, s) = tc(ti s + ti−1 (1 − s)) + (1 − t)p(ti s + ti−1 (1 − s)).

Then

∂ci = [tc(ti−1 ) + (1 − t)p(ti−1 )] + c(ti s + ti−1 (1 − s)) − [tc(ti ) + (1 − t)p(ti )] − p(ti s + ti−1 (1 − s)).

Define a singular 2-cube c2 in R2 − 0 by

c2 (t, s) = ci (t, (s − ti−1 )/(ti − ti−1 )), if ti−1 ≤ s < ti .

Then it's easy to see ∂c2 = c − c1,n for some integer n.

Remark: Basically, the idea is to use a homotopy to construct singular 2-cubes in R2 − 0 and paste them together through their boundaries. After cancellation, ∂c2 becomes c − c1,n .

4.4 The Fundamental Theorem of Calculus

I 4-26.

Proof.

∫_{cR,n} dθ = ∫_{[0,1]} c∗_{R,n} ( −y/(x2 + y 2 ) dx + x/(x2 + y 2 ) dy )
= ∫_{[0,1]} [ (−R sin 2πnt)/R2 · (R cos 2πnt)′ + (R cos 2πnt)/R2 · (R sin 2πnt)′ ] dt
= ∫_{[0,1]} [ (sin 2πnt)2 2πn + (cos 2πnt)2 2πn ] dt
= 2πn.

By Stokes' Theorem, for any 2-chain c in R2 − 0, ∫_{∂c} dθ = ∫_c d(dθ) = 0. This shows cR,n ̸= ∂c.
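The computation above can be confirmed numerically (R, n, and the number of steps are hypothetical choices): a Riemann sum for the line integral of dθ = (−y dx + x dy)/(x2 + y 2 ) over cR,n returns 2πn, independent of R.

```python
import math

def winding_integral(R, n, steps=20000):
    # left-endpoint Riemann sum of the pullback of dθ along c_{R,n}
    total = 0.0
    for k in range(steps):
        t0, t1 = k / steps, (k + 1) / steps
        x0, y0 = R * math.cos(2 * math.pi * n * t0), R * math.sin(2 * math.pi * n * t0)
        x1, y1 = R * math.cos(2 * math.pi * n * t1), R * math.sin(2 * math.pi * n * t1)
        total += (-y0 * (x1 - x0) + x0 * (y1 - y0)) / (x0 * x0 + y0 * y0)
    return total

assert abs(winding_integral(2.0, 3) - 6 * math.pi) < 1e-3
assert abs(winding_integral(0.5, 3) - 6 * math.pi) < 1e-3   # independent of R
```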
I 4-27.

Proof. By c − c1,n = ∂c2 and Stokes' Theorem, ∫_c dθ − ∫_{c1,n} dθ = ∫_{∂c2} dθ = ∫_{c2} d(dθ) = 0. So ∫_c dθ = ∫_{c1,n} dθ = 2πn. This shows n is unique (n = ∫_c dθ/2π).
I 4-31.

Proof. Suppose for any ω ̸= 0 we can find a chain c(ω) such that ∫_{c(ω)} ω ̸= 0. Then if d2 ω ̸= 0, we can find a chain c such that 0 ̸= ∫_c d2 ω = ∫_{∂c} dω = ∫_{∂ 2 c} ω = ∫_0 ω = 0, a contradiction. To construct c(ω), suppose ω = f dx1 ∧ · · · ∧ dxn ; then f ̸≡ 0, so we can find a point x0 such that f (x0 ) ̸= 0. Let c(ω) be a sufficiently small cube centered at x0 ; then ∫_{c(ω)} ω ̸= 0.

I 4-32.

Proof. (a) Let c(s, t) = (1 − t)c1 (s) + tc2 (s); then c(s, 0) = c1 (s), c(s, 1) = c2 (s), c(0, t) = c1 (0), and c(1, t) = c1 (1). Let c3 (t) = c(1, t) ≡ c1 (1) and c4 (t) = c(0, t) ≡ c1 (0); then ∂c = c1 − c2 + c3 − c4 . This implies

∫_{∂c} ω = ∫_{c1} ω − ∫_{c2} ω + ∫_{c3} ω − ∫_{c4} ω.

If we write ω = u(x, y)dx + v(x, y)dy, then c∗3 (ω) = u ◦ c3 (c13 )′ (t)dt + v ◦ c3 (c23 )′ (t)dt = 0 since c3 is constant. Similarly, c∗4 (ω) = 0. So ∫_{∂c} ω = ∫_{c1} ω − ∫_{c2} ω. There are two ways to show ∫_{∂c} ω = 0. First, if ω is well-defined on the image of c, then Stokes' Theorem gives ∫_{∂c} ω = ∫_c dω = 0, provided ω is closed. Second, if ω is not well-defined everywhere on the image of c but is exact, say ω = dω1 , then Stokes' Theorem gives

∫_{∂c} ω = ∫_{∂c} dω1 = ∫_{∂ 2 c} ω1 = 0.

Problem 4-21 gives the counterexample.


(b) Suppose ω = u(x, y)dx + v(x, y)dy. Choose a point (x0 , y0 ) in the domain of ω, and define

ω1 = ∫_{x0}^{x} u(ξ, y0 )dξ + ∫_{y0}^{y} v(x, η)dη.

This is the integral of ω along first [x0 , x] and then [y0 , y]. So

dω1 = u(x, y0 )dx + v(x, y)dy + ( ∫_{y0}^{y} ∂v/∂x (x, η)dη ) dx,

which implies ∂ω1 /∂y = v(x, y). By the hypothesis that the integral of ω depends only on the end points, we can also write ω1 as

ω1 = ∫_{y0}^{y} v(x0 , η)dη + ∫_{x0}^{x} u(ξ, y)dξ.

This is the integral of ω along first [y0 , y] and then [x0 , x]. So dω1 can also be written as

dω1 = v(x0 , y)dy + u(x, y)dx + ( ∫_{x0}^{x} ∂u/∂y (ξ, y)dξ ) dy,

which implies ∂ω1 /∂x = u(x, y). Combined, we conclude dω1 = ω.

I 4-33.

Proof. (a) We only show f (z) = z̄ is not analytic. We have

[f (z) − f (z0 )]/(z − z0 ) = (z̄ − z̄0 )/(z − z0 ) = [(x − x0 ) − i(y − y0 )]/[(x − x0 ) + i(y − y0 )].

If z → z0 along the line x = x0 , the limit is −1; if z → z0 along the line y = y0 , the limit is 1. So lim_{z→z0} [f (z) − f (z0 )]/(z − z0 ) does not exist.
(c) Let z = x + yi; then T (z) = (ax + by, cx + dy), where T has matrix rows (a, b) and (c, d). If z0 = x0 + y0 i is a complex number and T (z) = z0 z = (x0 + y0 i)(x + yi) = (x0 x − y0 y) + (x0 y + y0 x)i, we must have ax + by = x0 x − y0 y and cx + dy = x0 y + y0 x. Since x and y are arbitrary, this implies a = x0 , b = −y0 , c = y0 and d = x0 , or equivalently, a = d, b = −c. By part (b), the Cauchy-Riemann equations give

Df (z0 ) = [ Dx u(z0 ), Dy u(z0 ) ; Dx v(z0 ), Dy v(z0 ) ] = [ Dx u(z0 ), Dy u(z0 ) ; −Dy u(z0 ), Dx u(z0 ) ],

where the semicolon separates the rows. So Df (z0 ) is multiplication by the complex number Dx u(z0 ) − iDy u(z0 ).
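A small numeric check of part (c) (the sample numbers are hypothetical): multiplication by z0 = a + bi acts on (x, y) exactly as the matrix with rows (a, −b) and (b, a).

```python
a, b = 2.0, -1.5
z0 = complex(a, b)
x, y = 0.3, 0.7
w = z0 * complex(x, y)    # complex multiplication
mx = a * x - b * y        # first matrix row  (a, -b) applied to (x, y)
my = b * x + a * y        # second matrix row (b,  a) applied to (x, y)
assert abs(w.real - mx) < 1e-12 and abs(w.imag - my) < 1e-12
```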


(d)

d(f dz) = d[(u + iv)(dx + idy)]
= d[u dx − v dy + i(v dx + u dy)]
= ∂u/∂y dy ∧ dx − ∂v/∂x dx ∧ dy + i[ ∂v/∂y dy ∧ dx + ∂u/∂x dx ∧ dy ]
= −[ ∂u/∂y + ∂v/∂x ] dx ∧ dy + i( ∂u/∂x − ∂v/∂y ) dx ∧ dy.

So d(f dz) = 0 if and only if ∂u/∂x = ∂v/∂y and ∂u/∂y = −∂v/∂x.
(e) By part (d) and Stokes' Theorem, if c = ∂c′ for some 2-chain c′ , then ∫_c f dz = ∫_{∂c′} f dz = ∫_{c′} d(f dz) = 0.

(f)

g dz = 1/(x + yi) (dx + idy) = (x − yi)/(x2 + y 2 ) (dx + idy) = (x dx + y dy)/(x2 + y 2 ) + i (x dy − y dx)/(x2 + y 2 ) = d( 1/2 log(x2 + y 2 )) + i dθ.

Let h = 1/2 log(x2 + y 2 ) : C − 0 → R. Then g dz = i dθ + dh. So by Problem 4-26,

∫_{cR,n} dz/z = ∫_{cR,n} (i dθ + dh) = 2πni + ∫_{[0,1]} c∗_{R,n} (dh).

Since c∗_{R,n} (dh) = d(c∗_{R,n} h) = d( 1/2 log R2 ) = 0, we conclude ∫_{cR,n} dz/z = 2πni.
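A Riemann-sum sketch of this identity (R, n, and the number of steps are hypothetical choices):

```python
import cmath, math

def integral_dz_over_z(R, n, steps=20000):
    # Riemann-sum approximation of the contour integral of dz/z over c_{R,n}
    total = 0 + 0j
    for k in range(steps):
        z0 = R * cmath.exp(2j * math.pi * n * (k / steps))
        z1 = R * cmath.exp(2j * math.pi * n * ((k + 1) / steps))
        total += (z1 - z0) / z0
    return total

assert abs(integral_dz_over_z(1.5, 2) - 4j * math.pi) < 1e-2
assert abs(integral_dz_over_z(0.1, 2) - 4j * math.pi) < 1e-2   # R plays no role
```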
(g) By Problem 4-23, Stokes' Theorem and part (d), cR1 ,n − cR2 ,n = ∂c for some singular 2-cube c, and

∫_{cR1,n − cR2,n} f (z)/z dz = ∫_{∂c} f (z)/z dz = ∫_c d( f (z)/z dz ) = 0.

By part (f), ∀ε > 0, when R is sufficiently small,

| ∫_{cR,n} f (z)/z dz − 2πinf (0) | = | ∫_{cR,n} [f (z) − f (0)]/z dz | ≤ (ε/R) · length(cR,n ) = 2πnε.

So lim_{R→0} ∫_{cR,n} f (z)/z dz = 2πnif (0). By the first part of (g), ∫_{cR,n} f (z)/z dz ≡ 2πnf (0)i for all R. By Problem 4-24 (note we can further require c2 ⊂ C − 0), ∫_{c − c1,n} f (z)/z dz = ∫_{∂c2} f (z)/z dz = ∫_{c2} d( f (z)/z dz ) = 0. So ∫_c f (z)/z dz = ∫_{c1,n} f (z)/z dz = 2πnif (0), i.e. nf (0) = (1/2πi) ∫_c f (z)/z dz.

5 Integration on Manifolds
5.1 Manifolds
I 5-1.

Proof. We note that ∂M in this problem is interpreted as "boundary of a manifold", not "boundary of a set".
For any x ∈ ∂M , there is an open set U containing x, an open set V ⊂ Rn , and a diffeomorphism h : U → V such that

h(U ∩ ∂M ) = h(U ∩ M ∩ ∂M ) = V ∩ (Hk × {0}) ∩ {y k = 0} = {y ∈ V : y k = y k+1 = · · · = y n = 0}.

By definition, ∂M is a (k − 1)-dimensional manifold. Similarly, for any x ∈ M − ∂M , there is an open set U containing x, an open set V ⊂ Rn , and a diffeomorphism h : U → V such that

h(U ∩ (M − ∂M )) = h(U ∩ M ∩ (M − ∂M )) = V ∩ (Hk × {0}) ∩ {y k > 0} = {y ∈ V : y k > 0, y k+1 = · · · = y n = 0}.

Define V ′ = V ∩ {y k > 0} and U ′ = h−1 (V ′ ). Then h remains a diffeomorphism from U ′ to V ′ and h(U ′ ∩ (M − ∂M )) = {y ∈ V ′ : y k+1 = · · · = y n = 0}. This shows M − ∂M is a k-dimensional manifold.
I 5-2.

Proof. See, for example, [2] §23, Example 3.


I 5-3. (a)

Proof. We first clarify that "boundary A" means the set boundary. We show that any point in boundary A belongs to one of the two types of points illustrated by the problem's counterexample.
Indeed, for any point x in boundary A, by the assumption that boundary A is an (n − 1)-dimensional manifold, we can find an open set U of Rn containing x and a diffeomorphism h : U → V (V := h(U )) such that h(U ∩ boundary A) = {y ∈ V : y n = 0}. Without loss of generality, we assume U is connected. Then V is also a connected open set. Since V − V ∩ {y n = 0} is the disjoint union of two connected open sets, U − boundary A = h−1 (V − V ∩ {y n = 0}) is also the disjoint union of two connected open sets, say U1 and U2 .
Since x is a (set) boundary point of A, any neighborhood that contains x must contain points of A. So at least one of U1 and U2 must contain points of A. Because A is open, after proper shrinking, we can assume that each Ui that contains points of A is entirely contained in A.
Then there are two cases to analyze. Case one: both U1 and U2 contain points of A. This implies x ∈ U ⊂ [U1 ∪ U2 ∪ boundary A] ⊂ N . That is, x is an interior point of N and hence satisfies condition (M). Case two: only one of the Ui 's contains points of A. Without loss of generality, we assume U1 contains points of A but U2 does not. Then x does not satisfy condition (M) but satisfies condition (M′). Combining these two cases, we conclude that any point in boundary A satisfies either condition (M) or condition (M′).
It is clear that any point in A satisfies condition (M). So we conclude that N is an n-dimensional manifold-with-boundary.
I 5-4.

Proof. For any x ∈ M , there is an open set U containing x, an open set V ⊂ Rn , and a diffeomorphism h : U → V such that

h(U ∩ M ) = V ∩ (Rk × {0}) = {y ∈ V : y k+1 = · · · = y n = 0}.

Let p : V → Rn−k be the projection onto the last (n − k) coordinates. Define g = p ◦ h. Then g : U → Rn−k is differentiable and g −1 (0) = h−1 ◦ p−1 (0) = h−1 ({y ∈ V : y k+1 = · · · = y n = 0}) = U ∩ M . Since Dg = Dp ◦ Dh and Dh is invertible, Dg(y) has rank n − k whenever g(y) = 0.
Remark: It's a partial converse because g is only defined locally.
I 5-5.

Proof. Let X be a k-dimensional subspace of Rn . Choose an orthonormal basis a1 , · · · , ak of X and extend it to an orthonormal basis a1 , · · · , ak , ak+1 , · · · , an of Rn . Define h to be the orthogonal transformation that maps ai to ei ; then h maps X to {x ∈ Rn : xk+1 = · · · = xn = 0}, and condition (M) is satisfied via h.
I 5-6.

Proof. Consider g : Rn × Rm → Rm defined by g(x, y) = y − f (x). Suppose f is differentiable; then g is differentiable and Dg(x, y) = [−Df (x), Im ], so Dg has rank m. By Theorem 5-1, g −1 (0) = {(x, y) : y = f (x)} is an n-dimensional manifold. Conversely, if {(x, y) : y = f (x)} is a manifold, then by Theorem 5-2 f is necessarily differentiable. Therefore the graph of f is an n-dimensional manifold if and only if f is differentiable.
I 5-8. (a)

Proof. Use the fact that in Rn any open covering of a set has a countable subcovering, i.e. the Lindelöf property of a separable metric space.
(b)

Proof. For any x ∈ boundary M (set boundary), x ∈ M by the closedness of M . Clearly x cannot satisfy condition (M), so it must satisfy condition (M′). This implies x ∈ ∂M (manifold boundary). Conversely, it is always true by definition that ∂M ⊂ boundary M . So boundary M = ∂M .
The counterexample is already given in Problem 5-3(a).
(c)

Proof. By part (b), boundary M agrees with ∂M . By part (a) and Problem 5-1, ∂M has measure zero.
Then we are done. (Compactness is used for boundedness of M , which is inherent in the definition of
Jordan-measurability.)

5.2 Fields and Forms on Manifolds


I 5-9.

Proof. We suppose f is a coordinate system around x with f (a) = x. Let X be the collection of curves in M with c(0) = x. We need to show f∗ (Rka ) = span{c′ (0) : c ∈ X }. Indeed, let X̄ be the collection of curves in Rk with c̄(0) = a; then f establishes a one-to-one correspondence between X̄ and X by f : c̄ → f (c̄). By Problem 4-14, the tangent vector to c = f (c̄) at 0 is f∗ (c̄′ (0)). So f∗ (Rka ) ⊃ span{c′ (0) : c ∈ X }. For "⊂", note f∗ (ei ) = c′i (0), where ci (t) = f (a + tei ).
I 5-10.

Proof. We first define an orientation on M . ∀x ∈ M , suppose f ∈ C is a coordinate system around x such that f (a) = x for some a ∈ Rk . Define µx = [f∗ ((e1 )a ), · · · , f∗ ((ek )a )]. We need to check that such an orientation µx is independent of the choice of f . Indeed, if g ∈ C is another coordinate patch around x with g(b) = x, by the fact det(f −1 ◦ g)′ > 0,

[(f −1 ◦ g)∗ ((e1 )b ), · · · , (f −1 ◦ g)∗ ((ek )b )] = [(e1 )a , · · · , (ek )a ].

Applying f∗ to both sides, we have

[g∗ ((e1 )b ), · · · , g∗ ((ek )b )] = [f∗ ((e1 )a ), · · · , f∗ ((ek )a )].

Second, by the very definition of µ, every f ∈ C is clearly orientation-preserving. Moreover, the requirement that every element of C be orientation-preserving dictates that the orientation has to be defined in this way.
Finally, we show µ is consistent. Suppose h : W → Rn is a coordinate patch and a, b ∈ W . Let x = h(a) and y = h(b). If [h∗ ((e1 )a ), · · · , h∗ ((ek )a )] = µh(a) , there exists some f ∈ C such that [h∗ ((e1 )a ), · · · , h∗ ((ek )a )] = µx = [f∗ ((e1 )α ), · · · , f∗ ((ek )α )] (f (α) = x = h(a)). This implies det(f −1 ◦ h)′ (a) > 0 (see the argument on pages 118-119). Since both f and h are non-singular, det(f −1 ◦ h)′ does not change sign throughout its domain. In particular, if h(b) = y falls within the range of f , we must have det(f −1 ◦ h)′ (b) > 0, which implies [h∗ ((e1 )b ), · · · , h∗ ((ek )b )] = [f∗ ((e1 )β ), · · · , f∗ ((ek )β )] = µy (h(b) = f (β) = y). If h(b) = y is not within the range of f , we use a sequence of elements from C to "connect" x and y, provided M is connected.
I 5-16. Let g : A → Rp be as in Theorem 5-1. If f : Rn → R is differentiable and the maximum (or minimum) of f on g −1 (0) occurs at a, show that there are λ1 , · · · , λp ∈ R such that

(1) Dj f (a) = ∑_{i=1}^p λi Dj g i (a), j = 1, · · · , n.

Hint: This equation can be written df (a) = ∑_{i=1}^p λi dg i (a) and is obvious if g(x) = (xn−p+1 , · · · , xn ).
The maximum of f on g −1 (0) is sometimes called the maximum of f subject to the constraints g i = 0. One can attempt to find a by solving the system of equations (1). In particular, if g : A → R, we must solve n + 1 equations

Dj f (a) = λDj g(a),
g(a) = 0,

in n + 1 unknowns a1 , · · · , an , λ, which is often very simple if we leave the equation g(a) = 0 for last. This is Lagrange's method, and the useful but irrelevant λ is called a Lagrangian multiplier. The following problem gives a nice theoretical use for Lagrangian multipliers.

Proof. First, (1) can be re-written as Df = (λ1 , · · · , λp )Dg, where Df and Dg are the Jacobian matrices of f and g, respectively. If g is the projection g(x) = (xn−p+1 , · · · , xn ), then Dg(a) = (0p×(n−p) , Ip×p ) and g −1 (0) = {x ∈ Rn : xn−p+1 = · · · = xn = 0}. So for any x ∈ g −1 (0), f (x) = f (x1 , · · · , xn−p , 0, · · · , 0). When f achieves its maximum (or minimum) on g −1 (0) at a, we must have D1 f (a) = · · · = Dn−p f (a) = 0. Define λi = Dn−p+i f (a) (1 ≤ i ≤ p); then

Df (a) = (0, · · · , 0, Dn−p+1 f (a), · · · , Dn f (a)) = (0, · · · , 0, λ1 , · · · , λp ) = (λ1 , · · · , λp )(0p×(n−p) , Ip×p ) = (λ1 , · · · , λp )Dg(a).

For general g, by Theorem 2-13, there is an open set U ⊂ Rn containing a and a differentiable function h : U → Rn with differentiable inverse such that g ◦ h(x1 , · · · , xn ) = (xn−p+1 , · · · , xn ). Suppose h(x0 ) = a; then f ◦ h achieves its maximum (or minimum) on (g ◦ h)−1 (0) at x0 . By the previous argument, there exist λ1 , · · · , λp such that

D(f ◦ h)(x0 ) = (λ1 , · · · , λp )D(g ◦ h)(x0 ).

By Theorem 2-2 (chain rule), D(f ◦ h)(x0 ) = Df (a)Dh(x0 ) and D(g ◦ h)(x0 ) = Dg(a)Dh(x0 ). Since Dh(x0 ) is invertible, we have Df (a) = (λ1 , · · · , λp )Dg(a).
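A worked instance of Lagrange's method (the functions f and g are hypothetical examples): maximize f (x, y) = x + y subject to g(x, y) = x2 + y 2 − 1 = 0; solving Dj f (a) = λDj g(a) together with g(a) = 0 gives a = (1/√2, 1/√2) and λ = 1/√2.

```python
import math

# Df = (1, 1) and Dg = (2x, 2y); D_j f = λ D_j g forces x = y = 1/(2λ),
# and the constraint g = 0 then gives 2x² = 1.
x = y = 1 / math.sqrt(2)
lam = 1 / (2 * x)
assert abs(1 - lam * 2 * x) < 1e-12      # D1 f(a) = λ D1 g(a)
assert abs(1 - lam * 2 * y) < 1e-12      # D2 f(a) = λ D2 g(a)
assert abs(x * x + y * y - 1) < 1e-12    # constraint g(a) = 0

# brute-force confirmation that a = (x, y) maximizes f on the circle
best = max(math.cos(t) + math.sin(t)
           for t in (k * 2 * math.pi / 10000 for k in range(10000)))
assert abs(best - (x + y)) < 1e-6
```

As the problem remarks, leaving g(a) = 0 for last is what makes the system easy to solve here.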
I 5-17. (a) Let T : Rn → Rn be self-adjoint with matrix A = (aij ), so that aij = aji . If f (x) = ⟨T x, x⟩ = ∑ aij xi xj , show that Dk f (x) = 2 ∑_{j=1}^n akj xj . By considering the maximum of ⟨T x, x⟩ on S n−1 show that there is x ∈ S n−1 and λ ∈ R with T x = λx.
(b) If V = {y ∈ Rn : ⟨x, y⟩ = 0}, show that T (V ) ⊂ V and T : V → V is self-adjoint.
(c) Show that T has a basis of eigenvectors.
(a)

Proof. Define g(x) = |x|2 − 1. Then g is a differentiable function with g −1 (0) = S n−1 = {x ∈ Rn : |x| = 1}, and g ′ (x) = 2x has rank 1 whenever g(x) = 0. S n−1 is a compact set, so f (x) = ⟨T x, x⟩ achieves its maximum and minimum on S n−1 , say at xmax and xmin , respectively. By Problem 5-16, there exists λ ∈ R such that Df (xmin ) = λDg(xmin ) (and similarly for xmax ). Note

Dk f (x) = ∑_{j=1}^n akj xj + ∑_{i=1}^n aik xi = 2 ∑_{j=1}^n akj xj ,

so the equation Df (x0 ) = λDg(x0 ) can be re-written as Ax0 = λx0 , where x0 = xmin or xmax . This shows xmin and xmax are eigenvectors of T . Since x0 ∈ S n−1 , λ = ⟨Ax0 , x0 ⟩. This is the so-called Maximum Principle; see Principles of Applied Mathematics, by James P. Keener, page 17.
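The formula Dk f (x) = 2 ∑_j akj xj from part (a) can be checked by finite differences (the matrix A and the point x are hypothetical examples):

```python
def f(A, x):   # f(x) = <Ax, x> = Σ a_ij x^i x^j
    n = len(x)
    return sum(A[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

A = [[2.0, 1.0], [1.0, 3.0]]   # symmetric: a_ij = a_ji
x = [0.4, -0.9]
h = 1e-6
for k in range(2):
    xp = list(x); xp[k] += h
    xm = list(x); xm[k] -= h
    numeric = (f(A, xp) - f(A, xm)) / (2 * h)       # D_k f(x) by differences
    exact = 2 * sum(A[k][j] * x[j] for j in range(2))
    assert abs(numeric - exact) < 1e-5
```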

(b)

Proof. ∀y ∈ V , ⟨x, T y⟩ = ⟨T x, y⟩ = λ⟨x, y⟩ = 0. So T (V ) ⊂ V . Since the corresponding matrix A of T is symmetric, T : V → V is self-adjoint.

(c)

Proof. Part (a) shows T has at least one eigenvector x1 . Part (b) shows T remains a self-adjoint linear operator on the (n − 1)-dimensional space {x1 }⊥ . So we can find a second eigenvector x2 ∈ {x1 }⊥ . Then V ′ = {y ∈ Rn : ⟨x1 , y⟩ = ⟨x2 , y⟩ = 0} = {x1 , x2 }⊥ = {x1 }⊥ ∩ {x2 }⊥ again satisfies the properties enjoyed by V in part (b). So we can find a third eigenvector x3 ∈ V ′ . Continuing this procedure, we can find a basis consisting of eigenvectors of T .

5.3 Stokes’ Theorem on Manifolds
5.4 The Volume Element
I 5-23.

Proof. Suppose c∗ (ds) = α(t)dt. By definition,

α(t) = α(t)dt(1) = c∗ (ds)(1) = ds(c∗ (1)) = ds(((c1 )′ , · · · , (cn )′ )) = √(∑_{i=1}^n [(ci )′ ]2 ) · ds( ((c1 )′ , · · · , (cn )′ ) / √(∑_{i=1}^n [(ci )′ ]2 ) ) = √(∑_{i=1}^n [(ci )′ ]2 ),

where the last "=" is by the definition of the length element and the fact that c is orientation-preserving.
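A numeric sketch of the length element (the curve is a hypothetical example): integrating √(∑ [(ci )′ ]2 ) dt over [0, 2π] for the unit circle recovers its length 2π.

```python
import math

def c(t):                          # hypothetical curve: the unit circle
    return (math.cos(t), math.sin(t))

def speed(t, h=1e-6):              # √(Σ [(c^i)']²) by central differences
    p0, p1 = c(t - h), c(t + h)
    return math.hypot(p1[0] - p0[0], p1[1] - p0[1]) / (2 * h)

steps = 2000
dt = 2 * math.pi / steps
length = sum(speed((k + 0.5) * dt) for k in range(steps)) * dt   # midpoint rule
assert abs(length - 2 * math.pi) < 1e-4
```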

I 5-24.
Proof. dV (e1 , · · · , en ) = 1 = dx1 ∧ · · · ∧ dxn (e1 , · · · , en ).

I 5-25.

Proof. Suppose M is an oriented (n − 1)-dimensional manifold in Rn . For x ∈ M , let n(x) be the outward unit normal to M . We define ω ∈ Λn−1 (Mx ) by

ω(v 1 , · · · , v n−1 ) = det[n(x); v 1 ; · · · ; v n−1 ], ∀v 1 , · · · , v n−1 ∈ Mx ,

where the semicolons separate the rows of the matrix. So if v 1 , · · · , v n−1 is an orthonormal basis of Mx with [n(x), v 1 , · · · , v n−1 ] equal to the usual orientation of Rnx , then ω(v 1 , · · · , v n−1 ) = 1. This shows dA = ω.
On the other hand, Mx is a subspace of Rnx , so Λn−1 (Mx ) is a subspace of Λn−1 (Rnx ). This implies dA can be represented as

dA = α1 dx2 ∧ · · · ∧ dxn + (−1)α2 dx1 ∧ dx3 ∧ · · · ∧ dxn + · · · + (−1)n+1 αn dx1 ∧ · · · ∧ dxn−1 .

So ∀v 1 , · · · , v n−1 ∈ Mx , by Problem 4-1(b), dA(v 1 , · · · , v n−1 ) = det[α; v 1 ; · · · ; v n−1 ], where α = (α1 , · · · , αn ). Comparing with the formula for ω, we get αi = ni .
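In R3 the defining formula dA(v 1 , v 2 ) = det[n; v 1 ; v 2 ] can be compared with the scalar triple product ⟨n, v 1 × v 2 ⟩ (the vectors below are hypothetical examples):

```python
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def det3(m):                       # cofactor expansion along the first row
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

n = (0.0, 0.0, 1.0)                          # unit normal to the xy-plane
v1, v2 = (2.0, 1.0, 0.0), (-1.0, 3.0, 0.0)   # tangent vectors in M_x
dA = det3([list(n), list(v1), list(v2)])     # dA(v1, v2) = det[n; v1; v2]
assert abs(dA - sum(n[i] * cross(v1, v2)[i] for i in range(3))) < 1e-12
assert abs(dA - 7.0) < 1e-12   # = det [[2, 1], [-1, 3]], the area in the plane
```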
I 5-26.

Proof. (a) Define c : [a, b] × [0, 2π] → R3 by c(x, θ) = (x, f (x) cos θ, f (x) sin θ). So

Dc = [ 1, 0 ; f ′ (x) cos θ, −f (x) sin θ ; f ′ (x) sin θ, f (x) cos θ ],

where the semicolons separate the rows. Using the notation in the text, we have E = 1 + (f ′ (x))2 , G = f 2 (x), F = 0, so the area element is dA = f (x)√(1 + (f ′ (x))2 ) dx ∧ dθ and ∫_S dA = ∫_a^b 2πf (x)√(1 + (f ′ (x))2 ) dx.
(b) Let a = −1, b = 1, f (x) = √(1 − x2 ); then

Area(S 2 ) = 2π ∫_{−1}^1 √(1 − x2 ) · 1/√(1 − x2 ) dx = 4π.
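The area formula of part (a) can be checked numerically (the implementation below, with its grid size and finite-difference derivative, is a hypothetical sketch): for f (x) = √(1 − x2 ) on [−1, 1] it recovers 4π.

```python
import math

def area_of_revolution(f, a, b, steps=20000):
    # midpoint rule for ∫ 2π f(x) √(1 + f'(x)²) dx, with f' by differences
    total, h = 0.0, (b - a) / steps
    for k in range(steps):
        x = a + (k + 0.5) * h
        fp = (f(x + h / 4) - f(x - h / 4)) / (h / 2)
        total += 2 * math.pi * f(x) * math.sqrt(1 + fp * fp) * h
    return total

area = area_of_revolution(lambda x: math.sqrt(1 - x * x), -1.0, 1.0)
assert abs(area - 4 * math.pi) < 1e-2
```

The midpoint grid keeps the sample points strictly inside (−1, 1), where √(1 − x2 ) is defined.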

I 5-27.

Proof. Denote by dV1 and dV2 the volume elements of M and T (M ), respectively. The problem reduces to proving dV1 = T ∗ (dV2 ). Indeed, fix x ∈ M and suppose (v1 , · · · , vn ) is an orthonormal basis of Mx with [v1 , · · · , vn ] = µx . Then (T (v1 ), T (v2 ), · · · , T (vn )) is an orthonormal basis of T (M )T (x) as well. To see [T (v1 ), T (v2 ), · · · , T (vn )] = T (µx ), note that if v = Aw, then T (v) = T Aw = T AT −1 (T w). Combined, by the definition of the volume element, we have T ∗ (dV2 )(v1 , · · · , vn ) = dV2 (T (v1 ), · · · , T (vn )) = 1. By the uniqueness of the volume element on M , we conclude T ∗ (dV2 ) = dV1 .

5.5 The Classical Theorems

References
[1] Peter Lax. Linear Algebra. John Wiley & Sons, Inc., New York, 1997.
[2] J. Munkres. Analysis on Manifolds. Westview Press, 1997.

