
Methods

Paul Metcalfe
Summer 1998

These notes are maintained by Paul Metcalfe.


Comments and corrections to pdm23@cam.ac.uk.
Revision: 1.8
Date: 1999/09/17 17:44:23

The following people have maintained these notes.

– Paul Metcalfe


Contents

Introduction v

1 Fourier series 1
1.1 Properties of sine and cosine . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Definition of Fourier series . . . . . . . . . . . . . . . . . . . . . . . 1
1.2.1 The meaning of good behaviour . . . . . . . . . . . . . . . . 2
1.3 Complex Fourier series . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.4 Sine and cosine series . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.5 Parseval’s theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

2 The Wave Equation 5


2.1 Waves on an elastic string . . . . . . . . . . . . . . . . . . . . . . . . 5
2.2 Separation of variables . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.3 Oscillation energy . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.4 Solution in characteristic co-ordinates . . . . . . . . . . . . . . . . . 8
2.5 Wave reflection and transmission . . . . . . . . . . . . . . . . . . . . 8

3 Green’s Functions 11
3.1 The Dirac delta function . . . . . . . . . . . . . . . . . . . . . . . . 11
3.1.1 Representations . . . . . . . . . . . . . . . . . . . . . . . . . 11
3.2 Second order linear ODEs . . . . . . . . . . . . . . . . . . . . . . . 12
3.3 Definition of Green’s function . . . . . . . . . . . . . . . . . . . . . 12
3.3.1 Defining properties . . . . . . . . . . . . . . . . . . . . . . . 13
3.4 Constructing G(x, ξ): boundary value problems . . . . . . . . . . . . 13
3.4.1 Derivation of jump conditions . . . . . . . . . . . . . . . . . 13
3.4.2 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3.5 Constructing G(x, ξ): initial value problems . . . . . . . . . . . . . . 14
3.5.1 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

4 Sturm-Liouville Theory 17
4.1 Self-adjoint form and boundary values . . . . . . . . . . . . . . . . . 17
4.2 Eigenfunction expansions . . . . . . . . . . . . . . . . . . . . . . . . 18
4.2.1 Real eigenvalues . . . . . . . . . . . . . . . . . . . . . . . . 18
4.2.2 Orthogonal eigenfunctions . . . . . . . . . . . . . . . . . . . 18
4.2.3 Complete eigenfunctions . . . . . . . . . . . . . . . . . . . . 18
4.3 Example: Legendre polynomials . . . . . . . . . . . . . . . . . . . . 19
4.4 Inhomogeneous boundary value problem . . . . . . . . . . . . . . . . 19


5 Applications: Laplace’s Equation 21


5.1 Cartesians . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
5.2 Plane polars . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
5.3 Spherical polars . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
5.3.1 The full glory of spherical polars . . . . . . . . . . . . . . . . 24

6 Calculus of Variations 25
6.1 The problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
6.2 Euler-Lagrange equations . . . . . . . . . . . . . . . . . . . . . . . . 25
6.3 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
6.4 Principle of Least Action . . . . . . . . . . . . . . . . . . . . . . . . 27
6.5 Generalisations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
6.6 Integral constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . 27

7 Cartesian Tensors in R3 29
7.1 Tensors? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
7.2 Transformation laws . . . . . . . . . . . . . . . . . . . . . . . . . . 29
7.3 Tensor algebra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
7.4 Quotient Laws . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
7.5 Isotropic tensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
7.5.1 Spherically symmetric integrals . . . . . . . . . . . . . . . . 31
7.6 Symmetric and antisymmetric tensors . . . . . . . . . . . . . . . . . 32
7.7 Physical Applications . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Introduction

These notes are based on the course “Methods” given by Dr. E.P. Shellard in Cambridge in the Michaelmas Term 1996. These typeset notes are totally unconnected with Dr. Shellard. They are more vaguely based on the course than my notes usually are, and I have mainly used Dr. Shellard’s notes to get a sense of ordering and content.
Other sets of notes are available for different courses. At the time of typing these
courses were:
Probability Discrete Mathematics
Analysis Further Analysis
Methods Quantum Mechanics
Fluid Dynamics 1 Quadratic Mathematics
Geometry Dynamics of D.E.’s
Foundations of QM Electrodynamics
Methods of Math. Phys Fluid Dynamics 2
Waves (etc.) Statistical Physics
General Relativity Dynamical Systems
Combinatorics Bifurcations in Nonlinear Convection

They may be downloaded from


http://www.istari.ucam.org/maths/.

Chapter 1

Fourier series

1.1 Properties of sine and cosine


Consider the set of functions
\[ g_n(x) = \cos\frac{n\pi x}{L}, \qquad h_n(x) = \sin\frac{n\pi x}{L} \]
with $n \in \mathbb{N}$. These functions are periodic on $[0, 2L]$ and are also mutually orthogonal:
\[ \int_0^{2L} \sin\frac{n\pi x}{L}\sin\frac{m\pi x}{L}\,dx = \begin{cases} L\delta_{mn} & m, n \ne 0 \\ 0 & m = n = 0 \end{cases} \]
\[ \int_0^{2L} \cos\frac{n\pi x}{L}\sin\frac{m\pi x}{L}\,dx = 0 \]
\[ \int_0^{2L} \cos\frac{n\pi x}{L}\cos\frac{m\pi x}{L}\,dx = \begin{cases} L\delta_{mn} & m, n \ne 0 \\ 2L\delta_{0n} & m = 0. \end{cases} \]
These properties are easy to verify by direct integration.


In fact the functions $g_n$, $h_n$ form a complete orthogonal set; they span the space of functions periodic on $[0, 2L]$.

1.2 Definition of Fourier series


We can expand any sufficiently well-behaved real periodic function $f(x)$ with period $2L$ as
\[ f(x) = \frac{1}{2}a_0 + \sum_{n=1}^{\infty} a_n \cos\frac{n\pi x}{L} + \sum_{n=1}^{\infty} b_n \sin\frac{n\pi x}{L}, \tag{1.1} \]
where $a_n$ and $b_n$ are constants such that the series is convergent for all $x$. They are called Fourier coefficients and can be found using the results on orthogonality of $\sin$ and $\cos$:


\[ \int_0^{2L} f(x)\sin\frac{m\pi x}{L}\,dx = \sum_{n=1}^{\infty} b_n L\delta_{nm} = Lb_m \]
\[ \int_0^{2L} f(x)\cos\frac{m\pi x}{L}\,dx = La_m. \]
Note that the $\frac{1}{2}a_0$ in (1.1) is not a typo: the $\frac{1}{2}$ is required for the above integral to work for all $n$. Note also that the particular interval used doesn’t matter, provided it is of length $2L$.

Example: sawtooth wave


Define $f(x)$ by
\[ f(x) = x, \qquad -L < x \le L, \]
and let $f$ be periodic elsewhere. We have
\[ a_n = \frac{1}{L}\int_{-L}^{L} x\cos\frac{n\pi x}{L}\,dx = 0 \quad \text{(odd integrand)}, \]
but
\[ b_n = \frac{2}{L}\int_0^{L} x\sin\frac{n\pi x}{L}\,dx = \frac{2L}{n\pi}(-1)^{n+1} \quad \text{(integrate by parts)}. \]
Therefore the Fourier series is
\[ f(x) = \frac{2L}{\pi}\left( \sin\frac{\pi x}{L} - \frac{1}{2}\sin\frac{2\pi x}{L} + \frac{1}{3}\sin\frac{3\pi x}{L} - \cdots \right). \]
We can plot the approximation
\[ f(x) \approx \frac{2L}{\pi}\sum_{i=1}^{N} \frac{(-1)^{i+1}}{i}\sin\frac{i\pi x}{L}. \]

This is shown in figure 1.1. We see that as N increases the following occurs.
• The approximation improves away from the discontinuity — it is convergent
where f is continuous.
• The Fourier series tends to 0 at x = L — the midpoint of the discontinuity.
• The Fourier series has a persistent overshoot at x = L of approximately 9%
(Gibbs’ phenomenon).
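These observations are easy to check numerically. The sketch below is my own illustration, not part of the original notes (the helper name `sawtooth_partial` is invented); it evaluates the partial sums for $L = 1$ at a point of continuity and just before the discontinuity.

```python
import math

def sawtooth_partial(x, N, L=1.0):
    """Partial sum (N terms) of the sawtooth Fourier series."""
    return (2 * L / math.pi) * sum(
        (-1) ** (n + 1) / n * math.sin(n * math.pi * x / L)
        for n in range(1, N + 1)
    )

# Convergence at a point of continuity: the exact value is f(0.5) = 0.5.
approx = sawtooth_partial(0.5, 1000)

# Gibbs' phenomenon: just before the discontinuity at x = L the partial
# sum overshoots the supremum (which is 1) by roughly 9%.
peak = sawtooth_partial(1.0 - 1.0 / 1000, 1000)
```

As $N \to \infty$ the overshoot tends to $\frac{2}{\pi}\operatorname{Si}(\pi) \approx 1.179$, i.e. about 9% above the true supremum.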

1.2.1 The meaning of good behaviour


The Dirichlet conditions are sufficient conditions for a well-behaved function $f(x)$ to have a convergent Fourier series.
Theorem 1.1. If $f(x)$ is a bounded periodic function with period $2L$ with a finite number of maxima, minima and discontinuities in $[0, 2L]$, then its Fourier series converges to $f(x)$ at all points where $f$ is continuous. At discontinuities the series converges to the midpoint of the discontinuity: $\frac{1}{2}\left( f(x^-) + f(x^+) \right)$.
[Figure 1.1: Fourier series approximations to the sawtooth wave for $N = 10, 20, 50$ against the exact function, showing Gibbs’ phenomenon ($L = 1$).]

Proof. Omitted.
Note that these are very weak conditions (compare Taylor’s theorem). Pathological functions (e.g. $x^{-1}$, $\sin x^{-1}$) are excluded. The converse to this theorem is not true: $\sin x^{-1}$ has a convergent Fourier series.

1.3 Complex Fourier series


It is obvious that we can rewrite (1.1) as
\[ f(x) = \sum_{n \in \mathbb{Z}} c_n e^{\frac{\imath n\pi x}{L}}, \]
where
\[ c_n = \frac{1}{2L}\int_0^{2L} f(x)e^{-\frac{\imath n\pi x}{L}}\,dx. \]
This is sometimes useful (and also makes the analogy with Fourier transforms slightly more obvious).

1.4 Sine and cosine series


Consider a function $f(x)$ defined only on the half interval $[0, L]$. We can extend its range in two obvious ways by making it either odd or even on $[-L, L]$.
If we make it odd then we put $a_n = 0$ and
\[ b_n = \frac{2}{L}\int_0^{L} f(x)\sin\frac{n\pi x}{L}\,dx \]
in (1.1). If we make it even then $b_n = 0$ and
\[ a_n = \frac{2}{L}\int_0^{L} f(x)\cos\frac{n\pi x}{L}\,dx. \]

1.5 Parseval’s theorem


This is a relation between the average of the square of a function and its Fourier coefficients.
\begin{align*}
\int_0^{2L} f(x)^2\,dx &= \int_0^{2L} \left( \frac{1}{2}a_0 + \sum_{n=1}^{\infty} a_n\cos\frac{n\pi x}{L} + \sum_{n=1}^{\infty} b_n\sin\frac{n\pi x}{L} \right)^2 dx \\
&= \int_0^{2L} \left( \frac{1}{4}a_0^2 + \sum_{n=1}^{\infty} a_n^2\cos^2\frac{n\pi x}{L} + \sum_{n=1}^{\infty} b_n^2\sin^2\frac{n\pi x}{L} \right) dx \\
&= L\left( \frac{a_0^2}{2} + \sum_{n=1}^{\infty} \left( a_n^2 + b_n^2 \right) \right).
\end{align*}
(The cross terms vanish by orthogonality.) This is also called a completeness relation.

Example: sawtooth wave


Recall the sawtooth wave example above. Here we had $a_n = 0$ and $b_n = \frac{2L}{n\pi}(-1)^{n+1}$. Then applying Parseval’s relation gives
\[ \frac{2}{3}L^3 = \int_{-L}^{L} x^2\,dx = L\sum_{n=1}^{\infty} \frac{4L^2}{n^2\pi^2}, \]
and so
\[ \sum_{n=1}^{\infty} n^{-2} = \frac{\pi^2}{6}. \]
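As a quick numerical sanity check (my own addition, not in the notes), the partial sums of $\sum n^{-2}$ do indeed approach $\pi^2/6$:

```python
import math

# Partial sum of sum_{n=1}^{N} 1/n^2; the omitted tail is about 1/N.
partial = sum(1.0 / n**2 for n in range(1, 100001))
target = math.pi**2 / 6
```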
Chapter 2

The Wave Equation

2.1 Waves on an elastic string


Consider small displacements on a stretched string with the endpoints fixed and the initial conditions (displacement and velocity) given.

Resolve horizontally to get
\[ T_1\cos\theta_1 = T_2\cos\theta_2. \]
Now for small $\theta$, $\cos\theta \approx 1 - \frac{1}{2}\theta^2$, and so $T_1 = T_2 = T$ with error $O\!\left( \left( \frac{\partial y}{\partial x} \right)^2 \right)$.
Resolving vertically, the net transverse force is
\[ F = T\sin\theta\big|_{x+dx} - T\sin\theta\big|_x = T\left( \frac{\partial y}{\partial x}\bigg|_{x+dx} - \frac{\partial y}{\partial x}\bigg|_x \right) = T\frac{\partial^2 y}{\partial x^2}\,dx. \]
Therefore (from Newton II)
\[ \mu\,dx\,\frac{\partial^2 y}{\partial t^2} = T\frac{\partial^2 y}{\partial x^2}\,dx, \]
and so
\[ \frac{\partial^2 y}{\partial t^2} = \frac{T}{\mu}\frac{\partial^2 y}{\partial x^2}. \]
This is the wave equation, with $c = \sqrt{T/\mu}$. In general, the 1D wave equation is
\[ \frac{\partial^2 y}{\partial t^2} = c^2\frac{\partial^2 y}{\partial x^2}. \tag{2.1} \]


2.2 Separation of variables


We want to solve (2.1) given the boundary values
\[ y(0, t) = 0, \qquad y(L, t) = 0 \]
and the initial conditions
\[ y(x, 0) = p(x), \qquad \frac{\partial y}{\partial t}\bigg|_{(x,0)} = q(x). \]
We try a substitution $y = X(x)T(t)$ in (2.1). This gives
\[ c^{-2}\frac{\ddot{T}}{T} = \frac{X''}{X}. \]
Since the LHS depends only on $t$ and the RHS only on $x$, they must both be equal to a constant $\lambda$.
We have therefore split the PDE into two ODEs:
\[ X'' - \lambda X = 0 \quad \text{and} \quad \ddot{T} - c^2\lambda T = 0. \]
We solve the $x$ equation first:
\[ X'' - \lambda X = 0, \qquad X(0) = X(L) = 0. \]
Since we don’t know anything about λ we have to learn something...
• If $\lambda > 0$ the solution is $X = A\cosh\sqrt{\lambda}x + B\sinh\sqrt{\lambda}x$. If we apply the boundary values now we see that $A = B = 0$ — so this is not a useful solution.
• If $\lambda = 0$ the solution is $X = A + Bx$, and as before $A = B = 0$ on substituting the boundary values.

The only possibility now is $\lambda = -\nu^2$, which gives solutions
\[ X = A_\nu\cos\nu x + B_\nu\sin\nu x. \]


Applying the boundary values gives $A_\nu = 0$ and $B_\nu\sin\nu L = 0$. If $B_\nu = 0$ then the entire solution is trivial, so the only useful solution has
\[ \sin\nu L = 0 \;\Rightarrow\; \nu = \frac{n\pi}{L} \;\Rightarrow\; \lambda = -\frac{n^2\pi^2}{L^2}. \]
These special values of $\lambda$ are eigenvalues and their eigenfunctions are
\[ X_n = B_n\sin\frac{n\pi x}{L}. \]
These are the normal modes. Now all we need to do is to solve the $t$ equation using these values for $\lambda$:
\[ \ddot{T} + \frac{n^2\pi^2 c^2}{L^2}T = 0. \]
This has the general solution
\[ T_n = C_n\cos\frac{n\pi ct}{L} + D_n\sin\frac{n\pi ct}{L}. \]

Thus we have a specific solution of (2.1): $y_n = T_n X_n$. Since (2.1) is linear we can add solutions to get the general solution
\[ y(x, t) = \sum_{n=1}^{\infty} \left( C_n\cos\frac{n\pi ct}{L} + D_n\sin\frac{n\pi ct}{L} \right)\sin\frac{n\pi x}{L}. \tag{2.2} \]
This satisfies the boundary values by construction. The only thing left to do is to satisfy the initial conditions:
\[ y(x, 0) = p(x) = \sum_{n=1}^{\infty} C_n\sin\frac{n\pi x}{L} \]
\[ \frac{\partial y}{\partial t}\bigg|_{(x,0)} = q(x) = \sum_{n=1}^{\infty} \frac{D_n n\pi c}{L}\sin\frac{n\pi x}{L}. \]
$C_n$ and $D_n$ can now be found using the orthogonality relations for $\sin$. They turn out to be
\[ C_n = \frac{2}{L}\int_0^{L} p(x)\sin\frac{n\pi x}{L}\,dx, \qquad D_n = \frac{2}{n\pi c}\int_0^{L} q(x)\sin\frac{n\pi x}{L}\,dx. \]
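These coefficient integrals are easy to evaluate numerically. The sketch below is my own (the helper name `C_n` and the choice $L = 1$ are mine): for the initial shape $p(x) = \sin(\pi x/L)$ only the first mode should be excited.

```python
import math

L = 1.0
N_QUAD = 2000  # number of trapezoidal intervals

def C_n(p, n):
    """C_n = (2/L) * integral_0^L p(x) sin(n pi x / L) dx, trapezoidal rule."""
    h = L / N_QUAD
    vals = [p(i * h) * math.sin(n * math.pi * i * h / L)
            for i in range(N_QUAD + 1)]
    return (2.0 / L) * h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

p = lambda x: math.sin(math.pi * x / L)  # pluck shape: a pure first mode
c1 = C_n(p, 1)  # expect close to 1
c2 = C_n(p, 2)  # expect close to 0
```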

2.3 Oscillation energy


A vibrating string has both KE and PE. The KE is
\[ \frac{1}{2}\mu\int_0^L \dot{y}^2\,dx \]
and the PE is
\[ T\int_0^L \left( \sqrt{1 + y'^2} - 1 \right)dx \approx \frac{1}{2}T\int_0^L y'^2\,dx. \]
Since $c^2 = T\mu^{-1}$ the total is
\[ E = \frac{1}{2}\mu\int_0^L \dot{y}^2 + (cy')^2\,dx, \]
which eventually evaluates as
\[ E = \frac{\mu}{4}\sum_{n=1}^{\infty} \frac{n^2\pi^2 c^2}{L}\left( C_n^2 + D_n^2 \right) = \sum_{\text{normal modes}} \text{energy in mode}. \]
The energy is conserved in time — there is no dissipation. Further, there is no transfer of energy between modes.

2.4 Solution in characteristic co-ordinates


Consider the 1D wave equation (2.1),
\[ \frac{\partial^2 y}{\partial x^2} - c^{-2}\frac{\partial^2 y}{\partial t^2} = 0, \]
and make the change of variables $\xi = x + ct$, $\eta = x - ct$. Using the chain rule this becomes
\[ 4\frac{\partial^2 y}{\partial\xi\partial\eta} = 0, \]
with a general solution $y(\xi, \eta) = f(\xi) + g(\eta)$. Thus the general solution to (2.1) is
\[ y(x, t) = f(x + ct) + g(x - ct). \]
This is a superposition of left- and right-moving waves.
Travelling waves (e.g. $g(x - ct)$) move with a constant speed $c$ and retain their shape along characteristics (e.g. the line $x - ct = \text{const}$).
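This is easy to verify numerically: for any smooth profiles $f$ and $g$, finite differences of $y = f(x + ct) + g(x - ct)$ satisfy the wave equation up to truncation error. (This check is my own addition; the profiles chosen are arbitrary.)

```python
import math

c = 2.0
f = lambda u: math.exp(-u * u)  # arbitrary smooth left-moving profile
g = lambda u: math.sin(u)       # arbitrary smooth right-moving profile
y = lambda x, t: f(x + c * t) + g(x - c * t)

# Central second differences at an arbitrary point (x0, t0).
h = 1e-3
x0, t0 = 0.3, 0.7
y_tt = (y(x0, t0 + h) - 2 * y(x0, t0) + y(x0, t0 - h)) / h**2
y_xx = (y(x0 + h, t0) - 2 * y(x0, t0) + y(x0 - h, t0)) / h**2
residual = y_tt - c**2 * y_xx   # should be close to zero
```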

2.5 Wave reflection and transmission


Suppose there is a density discontinuity in the string, say at $x = 0$. This becomes a discontinuity in $c$ (although $T$ is constant). Let
\[ c = \begin{cases} c_- & x < 0 \\ c_+ & x > 0. \end{cases} \]
Consider a given harmonic incident wave $A\exp\imath\omega\left( t - \frac{x}{c_-} \right)$. We want to find the reflected wave $B\exp\imath\omega\left( t + \frac{x}{c_-} \right)$ and the transmitted wave $D\exp\imath\omega\left( t - \frac{x}{c_+} \right)$.
The string does not break at $x = 0$, so that $y$ is continuous for all $t$. This gives $A + B = D$.
We further want the forces to balance at $x = 0$:
\[ T\frac{\partial y}{\partial x}\bigg|_{x=0^-} = T\frac{\partial y}{\partial x}\bigg|_{x=0^+}, \]
and so $\frac{\partial y}{\partial x}$ is continuous for all time. This condition gives
\[ -\frac{A}{c_-} + \frac{B}{c_-} = -\frac{D}{c_+}. \]
We can now solve to find
\[ B = \frac{c_+ - c_-}{c_+ + c_-}A, \qquad D = \frac{2c_+}{c_+ + c_-}A. \]
Note that the phase of the reflected wave can (and generically does) change.
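A quick check of these coefficients (my own, with arbitrary numbers): both matching conditions at $x = 0$ hold identically.

```python
# With B = (c+ - c-)/(c+ + c-) A and D = 2 c+/(c+ + c-) A, verify
# continuity of y (A + B = D) and continuity of y_x at x = 0.
c_minus, c_plus, A = 1.0, 3.0, 2.0

B = (c_plus - c_minus) / (c_plus + c_minus) * A
D = 2 * c_plus / (c_plus + c_minus) * A

continuity = (A + B) - D                            # should vanish
slope = (-A / c_minus + B / c_minus) + D / c_plus   # should vanish
```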
Chapter 3

Green’s Functions

3.1 The Dirac delta function


Define a generalised function $\delta(x - \xi)$ with the properties $\delta(x - \xi) = 0$ for $x \ne \xi$ and
\[ \int_{-\infty}^{\infty} \delta(x - \xi)\,dx = 1. \]
These two properties imply
\[ \int_{-\infty}^{\infty} f(x)\delta(x - \xi)\,dx = f(\xi). \]

Note that

• δ is not a function, but is classified as a distribution.1


• It is always employed in an integrand as a linear operator, where it is well defined.

3.1.1 Representations
We can represent the delta function as some sort of functional limit. A discontinuous representation is
\[ \delta_\epsilon(x) = \begin{cases} 0 & x < -\frac{\epsilon}{2} \\ \epsilon^{-1} & -\frac{\epsilon}{2} \le x \le \frac{\epsilon}{2} \\ 0 & x > \frac{\epsilon}{2}, \end{cases} \]
and a continuous representation is
\[ \delta_\epsilon(x) = \frac{1}{\epsilon\sqrt{\pi}}e^{-\frac{x^2}{\epsilon^2}}. \]
These are obviously both with $\epsilon \to 0$. Examples with $n \to \infty$ are
\[ \delta_n(x) = \frac{\sin nx}{\pi x} = \frac{1}{2\pi}\int_{-n}^{n} e^{\imath kx}\,dk \]
and $\delta_n(x) = \frac{n}{2}\operatorname{sech}^2 nx$.

1 See PDE’s IIB for more details (than you could possibly want).
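As an illustration (mine, not from the notes), smearing a test function against the Gaussian representation reproduces its value at the origin as $\epsilon \to 0$:

```python
import math

def delta_eps(x, eps):
    """Gaussian representation of the delta function, width eps."""
    return math.exp(-x * x / eps**2) / (eps * math.sqrt(math.pi))

def smear(f, eps, n=20001, half_width=10.0):
    """Quadrature approximation of integral delta_eps(x) f(x) dx."""
    h = 2 * half_width / (n - 1)
    return h * sum(
        delta_eps(-half_width + i * h, eps) * f(-half_width + i * h)
        for i in range(n)
    )

val = smear(math.cos, 0.05)  # should be close to cos(0) = 1
```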


The Heaviside step function is
\[ H(x) = \begin{cases} 0 & x < 0 \\ 1 & x \ge 0, \end{cases} \]
and can be seen to be
\[ H(x) = \int_{-\infty}^{x} \delta(\xi)\,d\xi. \]
Thus (in some suitably refined sense) $H'(x) = \delta(x)$. We can also define the derivative of the delta function such that we can integrate it by parts:
\[ \int_{-\infty}^{\infty} f(x)\delta'(x - \xi)\,dx = \left[ f(x)\delta(x - \xi) \right]_{-\infty}^{\infty} - \int_{-\infty}^{\infty} f'(x)\delta(x - \xi)\,dx = -f'(\xi). \]

3.2 Second order linear ODEs


We wish to solve the general second order linear ODE
\[ Ly \equiv y'' + b(x)y' + c(x)y = f(x). \tag{3.1} \]
We know that the homogeneous equation (with $f \equiv 0$) has two linearly independent solutions $y_1$ and $y_2$, whose linear combinations give the complementary function $y_c = Ay_1 + By_2$. The inhomogeneous equation also has a particular solution $y_p$. The general solution of (3.1) is then $y_c + y_p$. Two boundary values (or initial conditions) are required to find $A$ and $B$.
We hope to solve the boundary value problem. We will restrict to homogeneous boundary values: $y(a) = y(b) = 0$. More general values can be turned into homogeneous ones by judicious use of the complementary function.

3.3 Definition of Green’s function


The Green’s function $G(x, \xi)$ is the solution of
\[ LG(x, \xi) = \delta(x - \xi) \]
with $G \equiv 0$ at the endpoints. By linearity we can now construct the solution of (3.1) for general $f$:
\[ y(x) = \int_a^b f(\xi)G(x, \xi)\,d\xi. \]
Now $y$ clearly satisfies the homogeneous boundary values, and it is also easy to see that $Ly = f$.

3.3.1 Defining properties


$G(x, \xi)$ splits into two halves,
\[ G(x, \xi) = \begin{cases} G_1(x, \xi) & a < x < \xi \\ G_2(x, \xi) & \xi < x < b, \end{cases} \]
such that $G$ solves the homogeneous equation for $x \ne \xi$, is continuous at $x = \xi$, and satisfies $[G']_{\xi^-}^{\xi^+} = 1$. Note that there are many different conventions for this jump condition.

3.4 Constructing G(x, ξ): boundary value problems


There is a solution to the homogeneous problem $y_-(x)$ such that $y_-(a) = 0$. Then $G_1(x, \xi) = Cy_-(x)$. Similarly there is a solution $y_+(x)$ such that $y_+(b) = 0$, and so $G_2(x, \xi) = Dy_+(x)$. Now impose continuity at $x = \xi$ to give
\[ Cy_-(\xi) = Dy_+(\xi). \]
The other equation comes from the jump condition:
\[ Dy_+'(\xi) - Cy_-'(\xi) = 1. \]
We can solve these equations to give
\[ C = \frac{y_+(\xi)}{W(\xi)}, \qquad D = \frac{y_-(\xi)}{W(\xi)}, \]
where $W(\xi)$ is the Wronskian:
\[ W(\xi) = y_-(\xi)y_+'(\xi) - y_+(\xi)y_-'(\xi). \]
Thus
\[ G(x, \xi) = \begin{cases} \dfrac{y_-(x)y_+(\xi)}{W(\xi)} & x < \xi \\[2mm] \dfrac{y_+(x)y_-(\xi)}{W(\xi)} & x > \xi, \end{cases} \]
and the solution of $Ly = f$, $y(a) = y(b) = 0$ is
\[ y(x) = y_+(x)\int_a^x \frac{f(\xi)y_-(\xi)}{W(\xi)}\,d\xi + y_-(x)\int_x^b \frac{f(\xi)y_+(\xi)}{W(\xi)}\,d\xi. \]

3.4.1 Derivation of jump conditions


First suppose $G(x, \xi)$ is discontinuous at $x = \xi$, so that near $x = \xi$,
\[ G(x, \xi) \propto H(x - \xi), \quad G'(x, \xi) \propto \delta(x - \xi), \quad G''(x, \xi) \propto \delta'(x - \xi). \]
Then the equation $LG = \delta(x - \xi)$ becomes
\[ \alpha\delta'(x - \xi) + \beta\delta(x - \xi) + \gamma H(x - \xi) = \delta(x - \xi), \]
which is certainly not possible. So $G(x, \xi)$ is continuous at $x = \xi$. The jump condition in $G'$ can be derived by integrating $LG = \delta(x - \xi)$ across $x = \xi$:
\[ [G']_{\xi^-}^{\xi^+} + b(\xi)[G]_{\xi^-}^{\xi^+} + \underbrace{\int_{\xi^-}^{\xi^+} (c - b')G\,dx}_{\to 0 \text{ as } \xi^-, \xi^+ \to \xi} = 1. \]
Since $G$ is continuous, $[G]_{\xi^-}^{\xi^+} = 0$, and therefore
\[ [G']_{\xi^-}^{\xi^+} = 1. \]

3.4.2 Example
Suppose we wish to solve
\[ y'' = f(x), \qquad y(0) = y(L) = 0. \]
The homogeneous solutions are $y = Ax + B$, and so $G_1 = Cx$ and $G_2 = D(x - L)$. Applying the continuity condition
\[ C\xi = D(\xi - L) \]
and then the jump condition
\[ D - C = 1 \]
gives $D = \frac{\xi}{L}$ and $C = \frac{\xi - L}{L}$.
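The resulting Green’s function can be checked numerically. The sketch below is my own (midpoint quadrature; $L = 2$ is an arbitrary choice): for $f = 1$ the exact solution of $y'' = 1$, $y(0) = y(L) = 0$ is $y = \frac{1}{2}x(x - L)$.

```python
L = 2.0

def G(x, xi):
    """Green's function for y'' = f with y(0) = y(L) = 0."""
    if x < xi:
        return x * (xi - L) / L
    return xi * (x - L) / L

def solve(f, x, n=4000):
    """y(x) = integral_0^L f(xi) G(x, xi) d(xi), midpoint rule."""
    h = L / n
    return sum(f((i + 0.5) * h) * G(x, (i + 0.5) * h) * h for i in range(n))

x0 = 0.7
approx = solve(lambda xi: 1.0, x0)
exact = x0 * (x0 - L) / 2
```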

3.5 Constructing G(x, ξ): initial value problems


Green’s function methods can also solve initial value problems. Suppose we wish to solve $Ly = f$, $y(0) = y'(0) = 0$. Split $G$ into $G_1$ and $G_2$ as before.
Since $G_1(0) = G_1'(0) = 0$ and $LG_1 = 0$, we have $G_1 \equiv 0$. Continuity and the jump condition then give $G_2(\xi) = 0$ and $G_2'(\xi) = 1$, so that
\[ G(x, \xi) = \begin{cases} 0 & x < \xi \\ \dfrac{y(x)}{y'(\xi)} & x > \xi, \end{cases} \]
where $Ly = 0$ and $y(\xi) = 0$ (note that this $y$ depends on $\xi$). The solution is then
\[ y(x) = \int_0^x \frac{f(\xi)y(x)}{y'(\xi)}\,d\xi. \]
We see that causality is built in to the solution.

3.5.1 Example
Solve $y'' - y = f(x)$, $x > 0$, $y(0) = y'(0) = 0$.
In $x < \xi$, $G(x, \xi) = 0$, and in $x > \xi$ we have
\[ G(x, \xi) = Ae^x + Be^{-x}. \]
Continuity at $x = \xi$ gives $G(x, \xi) = C\sinh(x - \xi)$ in $x > \xi$. The jump condition gives $C = 1$. Hence
\[ y(x) = \int_0^x f(\xi)\sinh(x - \xi)\,d\xi. \]
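Checking this numerically (my own sketch, midpoint quadrature): for $f = 1$ the exact solution of $y'' - y = 1$, $y(0) = y'(0) = 0$ is $y = \cosh x - 1$, and the Green’s function integral reproduces it.

```python
import math

def solve_ivp(f, x, n=4000):
    """y(x) = integral_0^x f(xi) sinh(x - xi) d(xi), midpoint rule."""
    h = x / n
    return sum(
        f((i + 0.5) * h) * math.sinh(x - (i + 0.5) * h) * h for i in range(n)
    )

x0 = 1.5
approx = solve_ivp(lambda xi: 1.0, x0)
exact = math.cosh(x0) - 1
```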
Chapter 4

Sturm-Liouville Theory

4.1 Self-adjoint form and boundary values


We wish to solve the general eigenvalue problem
\[ Ly = y'' + b(x)y' + c(x)y = -\lambda d(x)y \tag{4.1} \]
with specified boundary conditions. This often occurs after separation of variables in a PDE. One classic example is the Schrödinger equation:
\[ \left( -\frac{\hbar^2}{2m}\nabla^2 + V(\mathbf{x}) \right)\psi = \imath\hbar\frac{\partial\psi}{\partial t}. \]
We try a solution $\psi = U(\mathbf{x})e^{-\frac{\imath Et}{\hbar}}$. Substituting into the Schrödinger equation gives
\[ \left( -\frac{\hbar^2}{2m}\nabla^2 + V(\mathbf{x}) \right)U = EU. \]
E is the energy eigenvalue.1
The analysis greatly simplifies if $L$ is in self-adjoint form: that is, if (4.1) can be re-expressed in Sturm-Liouville form
\[ Ly = -(py')' + qy = \lambda wy, \tag{4.2} \]
where the weighting function $w(x)$ is assumed positive. We can easily put (4.1) in Sturm-Liouville form: multiply by $\exp\int^x b(\xi)\,d\xi$.
Definition 4.1. $L$ is self-adjoint on the interval $a < x < b$ iff for all pairs of functions $y_1$, $y_2$ satisfying appropriate boundary values we have
\[ \int_a^b y_1 Ly_2\,dx = \int_a^b y_2 Ly_1\,dx. \tag{4.3} \]
If we substitute (4.2) into (4.3) we see that “appropriate boundary values” means
\[ \left[ -y_1 py_2' + y_2 py_1' \right]_a^b = 0, \]
which includes $y(a) = y(b) = 0$; $y'(a) = y'(b) = 0$; $y + ky' = 0$ at each end; $y(a) = y(b)$; $p(a) = p(b) = 0$; or combinations of the above.
1 See the Quantum Mechanics course for more details.


4.2 Eigenfunction expansions


Self-adjoint operators have three important properties.

4.2.1 Real eigenvalues


Suppose $Ly_n = \lambda_n wy_n$, and so $Ly_n^* = \lambda_n^* wy_n^*$. Then self-adjointness gives
\[ \int_a^b y_n^*\lambda_n wy_n\,dx - \int_a^b y_n\lambda_n^* wy_n^*\,dx = 0, \]
and so $\lambda_n^* = \lambda_n$, since $\int w\left| y_n \right|^2 \ne 0$ for non-trivial $w$, $y_n$.

4.2.2 Orthogonal eigenfunctions


Suppose $\lambda_m \ne \lambda_n$. Then
\[ (\lambda_n - \lambda_m)\int_a^b wy_m y_n\,dx = 0, \]
and so $\int wy_m y_n\,dx = 0$: $y_n$, $y_m$ are thus orthogonal on $[a, b]$ with respect to the weighting function $w(x)$.

4.2.3 Complete eigenfunctions


We can write sufficiently nice $f(x)$ as
\[ f(x) = \sum_n a_n y_n(x), \]
with
\[ \int_a^b wf(x)y_n(x)\,dx = a_n\int_a^b wy_n^2\,dx. \]
The eigenfunctions are sometimes normalised to unit modulus for convenience.
We also have Parseval’s identity, which in this form is
\[ \int_a^b \left( f - \sum_n a_n y_n \right)^2 w\,dx = 0, \]
or
\[ \int_a^b wf^2\,dx = \sum_{n=1}^{\infty} a_n^2\int_a^b wy_n^2\,dx. \tag{4.4} \]
The expansions needed converge if the eigenfunctions are complete. If the eigenfunctions are not complete then the LHS of (4.4) is greater than its RHS. This is Bessel’s inequality.

4.3 Example: Legendre polynomials


Consider Legendre’s equation
\[ (1 - x^2)y'' - 2xy' + \lambda y = 0, \tag{4.5} \]
which can be rewritten in Sturm-Liouville form as
\[ -\frac{d}{dx}\left( (1 - x^2)y' \right) = \lambda y. \]
It is motivated by separation of variables in spherical polars. The boundary conditions are that $y$ is finite at $x = \pm 1$. We try a power series solution about $x = 0$,
\[ y = \sum_{n=0}^{\infty} c_n x^n, \]
which gives (prove this)
\[ c_{n+2} = \frac{n(n+1) - \lambda}{(n+1)(n+2)}c_n. \]
Specifying $c_0$ and $c_1$ yields linearly independent solutions, one of which is odd and the other even.
As $n \to \infty$, $\frac{c_{n+2}}{c_n} \to 1$, and so we get essentially a geometric series, which is divergent at $x = \pm 1$. One of the two series must therefore terminate, and so $\lambda = m(m+1)$ for $m \in \mathbb{N}$.
The eigenfunctions on $-1 \le x \le 1$ are the Legendre polynomials $P_n$. $P_n$ is usually normalised so that $P_n(1) = 1$: with this normalisation we have

$n$   $\lambda$   $P_n$
0     0           $1$
1     2           $x$
2     6           $\frac{1}{2}(3x^2 - 1)$
3     12          $\frac{1}{2}(5x^3 - 3x)$

The orthogonality relation is
\[ \int_{-1}^{1} P_n P_m\,dx = \frac{2}{2n+1}\delta_{mn}. \]
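The tabulated polynomials can be checked directly against the orthogonality relation (my own sketch, midpoint quadrature on $[-1, 1]$):

```python
# The first few Legendre polynomials, normalised so that P_n(1) = 1.
P = [
    lambda x: 1.0,
    lambda x: x,
    lambda x: 0.5 * (3 * x**2 - 1),
    lambda x: 0.5 * (5 * x**3 - 3 * x),
]

def inner(n, m, steps=20000):
    """integral_{-1}^{1} P_n P_m dx by the midpoint rule."""
    h = 2.0 / steps
    return sum(
        P[n](-1 + (i + 0.5) * h) * P[m](-1 + (i + 0.5) * h) * h
        for i in range(steps)
    )

i22 = inner(2, 2)  # expect 2/(2*2 + 1) = 2/5
i23 = inner(2, 3)  # expect 0
```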

4.4 Inhomogeneous boundary value problem


\[ (L - \mu w)y = f(x). \]
Consider the above inhomogeneous ODE with homogeneous boundary values and a fixed $\mu$ (not an eigenvalue).
Now we can expand $f(x)$ in terms of eigenfunctions of $L$:
\[ f(x) = w(x)\sum_{n=1}^{\infty} a_n y_n, \]
where
\[ a_n = \int_a^b fy_n\,dx \]
and the eigenfunctions are normalised so that $\int wy_n^2\,dx = 1$. We seek a solution
\[ y = \sum_n b_n y_n. \]
Substituting, we find $b_n(\lambda_n - \mu) = a_n$ (by orthogonality), and so, provided $\mu$ is not an eigenvalue,
\[ y = \sum_n \frac{a_n}{\lambda_n - \mu}y_n(x) = \sum_n \frac{y_n(x)}{\lambda_n - \mu}\int_a^b f(x')y_n(x')\,dx'. \]
If $\mu$ is an eigenvalue then this is a resonant frequency: the amplitude grows without limit and there is no solution consistent with the boundary values.
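As a concrete sketch (my own; the operator, interval and normalisation are choices made for the example): take $L = -\frac{d^2}{dx^2}$ on $[0, \pi]$ with $y(0) = y(\pi) = 0$ and $w = 1$, so $y_n = \sqrt{2/\pi}\sin nx$ and $\lambda_n = n^2$. For $f = \sin x$ and $\mu = \frac{1}{2}$ the exact solution of $(L - \mu w)y = f$ is $y = 2\sin x$.

```python
import math

mu = 0.5

def a_n(f, n, steps=2000):
    """a_n = integral_0^pi f(x) y_n(x) dx, with y_n = sqrt(2/pi) sin(nx)."""
    h = math.pi / steps
    return sum(
        f((i + 0.5) * h)
        * math.sqrt(2 / math.pi) * math.sin(n * (i + 0.5) * h) * h
        for i in range(steps)
    )

def y(x, f, modes=20):
    """Eigenfunction-expansion solution y = sum a_n/(lambda_n - mu) y_n."""
    return sum(
        a_n(f, n) / (n**2 - mu)
        * math.sqrt(2 / math.pi) * math.sin(n * x)
        for n in range(1, modes + 1)
    )

approx = y(1.0, math.sin)  # exact value is 2 sin(1)
```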
Chapter 5

Applications: Laplace’s Equation

We seek to solve
\[ \nabla^2\phi = 0 \tag{5.1} \]
by the method of separation of variables.

$\phi$ can represent the electrostatic potential, gravitational potential, temperature and so on. (5.1) is the homogeneous version of the Poisson equation $\nabla^2\phi = \rho$.
Boundary values can be given on

• $\phi$: Dirichlet boundary conditions
• $\mathbf{n}\cdot\nabla\phi$: Neumann boundary conditions,

specified on a boundary surface in 3D, boundary curve in 2D or endpoints in 1D.

5.1 Cartesians
In Cartesians, $\nabla^2 = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2}$. We seek a solution $\phi = X(x)Y(y)Z(z)$ and get
\[ \frac{X''}{X} = -\frac{Y''}{Y} - \frac{Z''}{Z} = \lambda_l. \]
Similarly $\frac{Y''}{Y} = \lambda_m$ and $\frac{Z''}{Z} = \lambda_n$, where $\lambda_l + \lambda_m + \lambda_n = 0$. We can then find eigenfunction solutions satisfying the given boundary values: $\phi_{lmn} = X_l Y_m Z_n$, so that the general solution is
\[ \phi = \sum_{lmn} c_{lmn}X_l Y_m Z_n. \]


Example: heat conduction


Solve the system
\begin{align*}
\nabla^2\phi &= 0 && \text{in } z > 0 \\
\phi &= 0 && \text{on } x = 0, 1 \text{ and } y = 0, 1 \\
\phi &= 1 && \text{at } z = 0 \\
\phi &\to 0 && \text{as } z \to \infty.
\end{align*}
This models heat conduction in a semi-infinite square bar.
We separate variables to get $X_l = \sin l\pi x$ and $Y_m = \sin m\pi y$, with $\lambda_l = -l^2\pi^2$ and $\lambda_m = -m^2\pi^2$. Then we have
\[ \frac{Z''}{Z} = \pi^2(l^2 + m^2), \]
and so $Z_{l,m} = e^{-\pi z\sqrt{l^2 + m^2}}$ (to satisfy the boundary condition at infinity). Therefore
\[ \phi = \sum_{l,m} A_{l,m}\sin l\pi x\sin m\pi y\,e^{-\pi z\sqrt{l^2 + m^2}}. \]
To find $A_{l,m}$ use the boundary condition at $z = 0$:
\[ 1 = \sum_{l,m} A_{l,m}\sin l\pi x\sin m\pi y. \]
Now
\[ \int_0^1 \sin l\pi t\sin m\pi t\,dt = \frac{1}{2}\delta_{lm} \]
and so
\[ \int_0^1\!\!\int_0^1 \sin l\pi x\sin m\pi y\,dx\,dy = \frac{A_{l,m}}{4}. \]
Thus
\[ A_{l,m} = \begin{cases} \dfrac{16}{\pi^2 lm} & l, m \text{ odd} \\ 0 & \text{otherwise.} \end{cases} \]

Note that in this case we have degenerate eigenvalues: both $X_1 Y_2$ and $X_2 Y_1$ give the same constant in the $z$ equation. Despite this, we can always choose orthogonal eigenfunctions.
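As a sanity check (my own addition, not in the notes), summing the series at $z = 0$ should reproduce $\phi = 1$ in the interior of the square; here it is evaluated at the centre:

```python
import math

def phi0(x, y, nmax=199):
    """Partial sum of the series for phi at z = 0, odd l and m up to nmax."""
    total = 0.0
    for l in range(1, nmax + 1, 2):
        for m in range(1, nmax + 1, 2):
            total += (16 / (math.pi**2 * l * m)
                      * math.sin(l * math.pi * x) * math.sin(m * math.pi * y))
    return total

val = phi0(0.5, 0.5)  # should be close to 1
```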

5.2 Plane polars


In plane polars, Laplace’s equation becomes
\[ \frac{1}{r}\frac{\partial}{\partial r}\left( r\frac{\partial\phi}{\partial r} \right) + \frac{1}{r^2}\frac{\partial^2\phi}{\partial\theta^2} = 0. \tag{5.2} \]

We seek $\phi = R(r)\Theta(\theta)$, which gives
\[ \frac{r(rR')'}{R} = \lambda, \qquad \frac{\Theta''}{\Theta} = -\lambda. \]
Consider a drum surface with a distorted rim, with unit radius. The height of the surface is given by $\phi$ such that $\nabla^2\phi = 0$ and $\phi(1, \theta) = f(\theta)$.
Now the $\theta$ equation is
\[ \Theta'' + \lambda\Theta = 0, \]
and since $\Theta$ must be periodic, $\lambda = n^2$ for $n \in \mathbb{N}$. The solution to this equation is
\[ \Theta_n = a_n\cos n\theta + b_n\sin n\theta. \]
If $n = 0$ then the solution is $\Theta_0 = a_0 + b_0\theta$; $b_0 = 0$ from the periodic boundary conditions.
The $r$ equation is
\[ r(rR')' - n^2 R = 0, \]
which has solutions $R = c_n r^n + d_n r^{-n}$. Thus $d_n = 0$ to keep the solution finite in $r < 1$. When $n = 0$ the solution is $c_0 + d_0\log r$, and so $d_0 = 0$. Thus
\[ \phi(r, \theta) = \frac{1}{2}a_0 + \sum_{n=1}^{\infty} \left( a_n\cos n\theta + b_n\sin n\theta \right)r^n. \]
$a_n$ and $b_n$ can be found using $\phi(1, \theta) = f(\theta)$.

5.3 Spherical polars


The Laplace equation becomes
\[ \frac{1}{r^2}\frac{\partial}{\partial r}\left( r^2\frac{\partial\Phi}{\partial r} \right) + \frac{1}{r^2\sin\theta}\frac{\partial}{\partial\theta}\left( \sin\theta\frac{\partial\Phi}{\partial\theta} \right) + \frac{1}{r^2\sin^2\theta}\frac{\partial^2\Phi}{\partial\phi^2} = 0. \tag{5.3} \]
We seek a separable solution $R(r)\Theta(\theta)\psi(\phi)$ and specialise to the axisymmetric case: $\psi = 1$. Then we have
\[ (r^2 R')' - \lambda R = 0, \qquad (\sin\theta\,\Theta')' + \lambda\sin\theta\,\Theta = 0. \]
Putting $x = \cos\theta$ in the $\theta$ equation gives
\[ \frac{d}{dx}\left( (1 - x^2)\frac{d\Theta}{dx} \right) + \lambda\Theta = 0. \]
This is Legendre’s equation (4.5), and so from the earlier analysis we know $\lambda_n = n(n+1)$. The radial equation becomes
\[ (r^2 R')' - n(n+1)R = 0. \]
Trying a solution $r^m$ gives $m = n$ or $m = -n-1$, so the eigenfunction expansion of $\Phi$ is
\[ \Phi = \sum_n \left( A_n r^n + B_n r^{-n-1} \right)P_n(\cos\theta). \]
$A_n$ and $B_n$ can be determined from boundary conditions on a spherical surface.

5.3.1 ** The full glory of spherical polars **


If we drop the assumption of axisymmetry things become more complicated. The azimuthal eigenfunctions are $\psi_m = e^{\imath m\phi}$ and the polar eigenfunctions $P_l^m(\cos\theta)$ satisfy the associated Legendre equation
\[ \frac{d}{dx}\left( (1 - x^2)\frac{d\Theta}{dx} \right) + \left( l(l+1) - \frac{m^2}{1 - x^2} \right)\Theta = 0. \]
We combine the azimuthal and polar eigenfunctions to get the spherical harmonics:
\[ Y_l^m(\theta, \phi) = \sqrt{\frac{(2l+1)(l-m)!}{4\pi(l+m)!}}\,P_l^m(\cos\theta)e^{\imath m\phi}, \]
for $-l \le m \le l$. The radial equation is the same as before, giving
\[ R_{lm} = a_{lm}r^l + b_{lm}r^{-l-1}. \]


Chapter 6

Calculus of Variations

6.1 The problem


Suppose we wish to minimise
\[ J[y] = \int_{x_1}^{x_2} F(x, y, y')\,dx \tag{6.1} \]
over all functions $y$ such that $y(x_1) = y_1$ and $y(x_2) = y_2$. This is clearly not just an ordinary calculus minimisation, but something slightly harder...

6.2 Euler-Lagrange equations


We will do this, as in ordinary minimisation problems, by finding a function such that the first order variation of $J$ is zero. So, suppose $y(x)$ is the answer and perturb it slightly to $y(x) + \delta y(x)$, where $\delta y(x_1) = \delta y(x_2) = 0$. Then
\[ \delta F = \frac{\partial F}{\partial y}\delta y + \frac{\partial F}{\partial y'}\delta y' + \text{higher order}. \]
Hence
\begin{align*}
\delta J &= \int_{x_1}^{x_2} \frac{\partial F}{\partial y}\delta y + \frac{\partial F}{\partial y'}\delta y'\,dx \\
&= \int_{x_1}^{x_2} \delta y\left( \frac{\partial F}{\partial y} - \frac{d}{dx}\frac{\partial F}{\partial y'} \right)dx + \underbrace{\left[ \delta y\frac{\partial F}{\partial y'} \right]_{x_1}^{x_2}}_{=0}.
\end{align*}
Thus for the first order variation to be zero we require
\[ \frac{\partial F}{\partial y} = \frac{d}{dx}\frac{\partial F}{\partial y'}, \tag{6.2} \]
since $\delta y$ is arbitrary. This is an Euler-Lagrange equation.
One variant on this is sometimes useful: (6.2) is equivalent to
\[ \frac{d}{dx}\left( F - y'\frac{\partial F}{\partial y'} \right) = \frac{\partial F}{\partial x}. \tag{6.3} \]

To prove this note that $\frac{d}{dx} = \frac{\partial}{\partial x} + y'\frac{\partial}{\partial y} + y''\frac{\partial}{\partial y'}$.

There are three special cases:

• $y'$ absent gives $\frac{\partial F}{\partial y} = 0$, which can be solved for $y$.
• $y$ absent gives $\frac{\partial F}{\partial y'} = \text{const}$.
• $x$ absent gives $F - y'\frac{\partial F}{\partial y'} = \text{const}$ (use (6.3)).

6.3 Examples
Geodesics
In Euclidean $\mathbb{R}^2$ we have a metric $ds^2 = dx^2 + dy^2$ and we seek to minimise
\[ \int_{x_1}^{x_2} ds = \int_{x_1}^{x_2} \sqrt{1 + y'^2}\,dx. \]
We can immediately apply the Euler-Lagrange equations, noting that $y$ is absent, and so
\[ \frac{y'}{\sqrt{1 + y'^2}} = \text{const}, \]
which reduces to $y' = \text{const}$, and so the geodesics in $\mathbb{R}^2$ are straight lines (which is reassuring, if nothing else).
You can do something similar on the sphere, with
\[ ds^2 = d\theta^2 + \sin^2\theta\,d\phi^2, \]
and show that the geodesics are great circles.
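A discretised version of the arc-length functional (my own sketch, not part of the notes) shows the straight line beating a perturbed path between $(0, 0)$ and $(1, 1)$:

```python
import math

def arc_length(y, n=1000):
    """Sum of segment lengths of the piecewise-linear path through y(x)."""
    h = 1.0 / n
    return sum(
        math.sqrt(h**2 + (y((i + 1) * h) - y(i * h)) ** 2) for i in range(n)
    )

straight = arc_length(lambda x: x)
bowed = arc_length(lambda x: x + 0.2 * math.sin(math.pi * x))
```

The perturbation vanishes at both endpoints, so the two paths compete on equal terms; the straight line has length $\sqrt{2}$ and the bowed path is strictly longer.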

Brachistochrone
Consider a frictionless bead on a wire path $y(x)$ connecting two points $A$ and $B$. What path gives the shortest travel time from $A$ to $B$?
Assume $A$ is at $y = 0$, with $y$ measured downwards (so that the speed is $v = \sqrt{2gy}$). The time of travel is then
\[ T[y] = \int_A^B \frac{ds}{v} = \int_A^B \frac{ds}{\sqrt{2gy}} = \frac{1}{\sqrt{2g}}\int_{x_1}^{x_2} \sqrt{\frac{1 + y'^2}{y}}\,dx. \]
$x$ is absent and the Euler-Lagrange equations eventually give
\[ y(1 + y'^2) = \text{const}, \]
or
\[ x = \pm\int \left( \frac{y}{c - y} \right)^{\frac{1}{2}} dy. \]
The substitution $y = c\sin^2\frac{\theta}{2} = \frac{c}{2}(1 - \cos\theta)$ makes this integral doable and gives
\[ x = \pm\frac{c}{2}(\theta - \sin\theta). \]

6.4 Principle of Least Action


The action of a system is given by
\[ S = \int L\,dt, \]
where $L = \text{KE} - \text{PE}$. Trajectories minimise the action. Now suppose that $\text{KE} = \frac{1}{2}m\dot{x}^2$ and $\text{PE} = V(x)$. Then
\[ L = \frac{1}{2}m\dot{x}^2 - V(x) \]
and the Euler-Lagrange equations give
\[ \frac{d}{dt}(m\dot{x}) = -V', \]
which ought to be familiar...
Since $t$ is absent, we know that $L - \dot{x}\frac{\partial L}{\partial\dot{x}}$ is constant — in fact it is minus the total energy.
Something similar is Fermat’s principle, that light follows the path of minimum time.
Least action principles are important all over physics — see the General Relativity and Electrodynamics courses for more examples.

6.5 Generalisations
The trick with all of these is just to make the variation and see what happens, integrating by parts where necessary.
The generalisation to several dependent variables is easiest: extremise
\[ J[\mathbf{y}] = \int_{x_1}^{x_2} F(x, \mathbf{y}, \mathbf{y}')\,dx. \]
Performing the variation gives
\[ \frac{\partial F}{\partial y_i} = \frac{d}{dx}\frac{\partial F}{\partial y_i'}. \]
Generalisations to several independent variables exist, but it’s easiest just to do the variation explicitly.
The same is true of generalisations to more derivatives in $F$ — just do the variation and integrate by parts.

6.6 Integral constraints


Suppose we wish to extremise $J = \int F(x, y, y')\,dx$ subject to the constraint that $K = \int G(x, y, y')\,dx$ is constant. This is done by using Lagrange multipliers: extremise
\[ I = \int F(x, y, y') + \lambda G(x, y, y')\,dx. \]
Examples are on the problem sheet.


Chapter 7

Cartesian Tensors in R3

Summation convention is used throughout this chapter unless explicitly stated otherwise.

7.1 Tensors?
A tensor is an object represented in a particular co-ordinate system by a set of functions
called components such that the components in a new co-ordinate system are related to
the components in the old co-ordinates in a prescribed way.
We will consider only orthogonal co-ordinate systems, and restrict the transformations to rotations and reflexions. More general transformations and co-ordinate systems are possible — see the General Relativity course for details.
Consider a vector $\mathbf{x}$ with components $x_i$ in a given orthogonal basis:
\[ \mathbf{x} = x_i\mathbf{e}_i. \]
Now consider new co-ordinates $\mathbf{e}_i'$, such that
\[ \mathbf{e}_i' = (\mathbf{e}_i'\cdot\mathbf{e}_j)\,\mathbf{e}_j, \]
and denote $\mathbf{e}_i'\cdot\mathbf{e}_j \equiv l_{ij}$. Now
\[ \delta_{ij} = \mathbf{e}_i'\cdot\mathbf{e}_j' = (l_{ik}\mathbf{e}_k)\cdot(l_{jm}\mathbf{e}_m) = l_{ik}l_{jm}\delta_{km} = l_{ik}l_{jk}. \]
Also,
\[ \mathbf{e}_i = (\mathbf{e}_i\cdot\mathbf{e}_p')\mathbf{e}_p' = l_{pi}\mathbf{e}_p', \]
and so
\[ \mathbf{x} = x_i\mathbf{e}_i = x_i l_{pi}\mathbf{e}_p'. \]
Hence $x_p' = l_{pi}x_i$.
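A numerical check (mine; the rotation angle is arbitrary) of the orthogonality relation $l_{ik}l_{jk} = \delta_{ij}$ and of the component transformation $x_p' = l_{pi}x_i$, which preserves the length of $\mathbf{x}$:

```python
import math

# A rotation about the z-axis as the matrix l_ij = e'_i . e_j.
t = 0.7
L_mat = [
    [math.cos(t), math.sin(t), 0.0],
    [-math.sin(t), math.cos(t), 0.0],
    [0.0, 0.0, 1.0],
]

def orthogonality_defect():
    """Largest deviation of l_ik l_jk from delta_ij."""
    worst = 0.0
    for i in range(3):
        for j in range(3):
            s = sum(L_mat[i][k] * L_mat[j][k] for k in range(3))
            worst = max(worst, abs(s - (1.0 if i == j else 0.0)))
    return worst

x = [1.0, 2.0, 3.0]
x_new = [sum(L_mat[p][i] * x[i] for i in range(3)) for p in range(3)]
norm_old = sum(c * c for c in x)
norm_new = sum(c * c for c in x_new)
```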

7.2 Transformation laws


• Scalars remain invariant under a co-ordinate transformation. Scalars are zero-rank tensors.


• A vector $A$ is a set of three functions $A_i$ given in a particular co-ordinate system with the transformation property
\[ A_p' = l_{pi}A_i, \]
so that $A_p'$ gives the components of the vector in a co-ordinate system rotated by $L$. A vector is a 1st rank tensor.
• A second rank tensor comprises 9 functions $A_{ij}$ in a given co-ordinate system such that
\[ A_{pq}' = l_{pi}l_{qj}A_{ij}. \]
• An $n$th rank tensor comprises $3^n$ functions of position such that
\[ A_{pq\ldots r}' = l_{pi}l_{qj}\ldots l_{rk}A_{ij\ldots k}. \]

We can see that $0 \leftrightarrow 0$ (the zero tensor in one co-ordinate system is zero in all), which means that tensor equations are preserved by change of co-ordinate system. To see this, suppose $A$ and $B$ are tensors with $A_{ij\ldots k} = B_{ij\ldots k}$ in one co-ordinate system. Then $A - B$ is a tensor — it’s zero, and so $A_{pq\ldots r}' - B_{pq\ldots r}' = 0$ and hence $A_{pq\ldots r}' = B_{pq\ldots r}'$. This is why tensors are so useful.

7.3 Tensor algebra


Proof of all of these is obvious — just show that they obey the transformation law.
• If A is a nth rank tensor then so is λA for scalar λ.
• If A and B are nth rank tensors then so is C = A + B.
• If A is an nth rank tensor and B is an mth rank tensor then the outer product
defined by
Cij...kab...c = Aij...k Bab...c
is an (n + m)th rank tensor.
• If Aijk...l is an nth rank tensor then the contraction Aiik...l is an (n − 2)th rank
tensor.
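As a quick sanity check (my own sketch, with arbitrary vectors and rotation angle): the outer product of two vectors transforms as a second rank tensor, and contracting it recovers the invariant scalar A · B.

```python
import numpy as np

rng = np.random.default_rng(0)

# A rotation of the basis about the z-axis, L[i, j] = e'_i . e_j.
c, s = np.cos(0.7), np.sin(0.7)
L = np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])

A = rng.standard_normal(3)
B = rng.standard_normal(3)

# Outer product C_ij = A_i B_j, a second rank tensor: transforming it with
# the tensor law agrees with taking the outer product of the rotated vectors.
C = np.einsum('i,j->ij', A, B)
C_new = np.einsum('pi,qj,ij->pq', L, L, C)
assert np.allclose(C_new, np.outer(L @ A, L @ B))

# Contraction C_ii is a zero-rank tensor (scalar), invariant under rotation.
assert np.isclose(np.einsum('ii->', C_new), np.dot(A, B))
```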

7.4 Quotient Laws


Theorem 7.1 (Quotient Theorem). If the inner product of some quantity A with an
arbitrary vector K is an nth rank tensor then A is an (n + 1)th rank tensor.

Proof. We know

    Aij...k Ki = Bj...k .

In a new co-ordinate system

    B′q...r = A′pq...r K′p
            = lqj . . . lrk Bj...k
            = lqj . . . lrk Aij...k Ki
            = lqj . . . lrk Aij...k lpi K′p ,

and so, since K is arbitrary, A′pq...r = lpi lqj . . . lrk Aij...k .

This theorem generalises to any type of product — inner, outer or a mixture thereof.
The proof is as above.
This can be used to identify the transformation properties of physical quantities. For
instance, in Ohm's law

    Ji = σij Ej ,

where J is the current vector and E the electric field vector, the conductivity σ
must be a second rank tensor.
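A numerical consistency check of the Ohm's law example (my own sketch; the conductivity and field values are made up): transforming σ as a second rank tensor and E as a vector gives exactly the rotated current J.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = rng.standard_normal((3, 3))   # a (made-up) conductivity tensor
E = rng.standard_normal(3)            # a (made-up) electric field

# Rotation of the basis about the z-axis.
c, s = np.cos(0.5), np.sin(0.5)
L = np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])

J = sigma @ E   # Ohm's law in the old co-ordinates

# Transform sigma with the second rank law and E with the vector law ...
sigma_new = np.einsum('pi,qj,ij->pq', L, L, sigma)
E_new = L @ E

# ... then Ohm's law in the new co-ordinates yields the rotated current,
# so the equation J_i = sigma_ij E_j holds in every co-ordinate system.
assert np.allclose(sigma_new @ E_new, L @ J)
```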

7.5 Isotropic tensors


Isotropic tensors are invariant under all co-ordinate transformations:

    A′pq...r = lpi lqj . . . lrk Aij...k = Apq...r .

Scalars are clearly isotropic. As for vectors, suppose A′p = lpi Ai = Ap . Then

    (lpi − δpi )Ai = 0

for all lpi , so Ai = 0.

Theorem 7.2. The most general isotropic second rank tensor in R³ is λδij .

Proof. λδij is clearly isotropic, so we must prove that it is the most general isotropic
second rank tensor in R³.
Let Aij be isotropic, so that

    A′pq = lpi lqj Aij = Apq .

Rotate by 90° around the z-axis — i.e. take

        [ 0  1  0]
    L = [−1  0  0]
        [ 0  0  1]

and then compare components. Do the same thing with the y-axis.
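A numerical version of this argument (my own sketch): impose invariance A = L A Lᵀ under the two 90° rotations, written as a linear system on the nine components of A, and check that the null space is one-dimensional and spanned by the identity.

```python
import numpy as np

# 90-degree rotations of the basis about the z- and y-axes.
Lz = np.array([[0, 1, 0], [-1, 0, 0], [0, 0, 1]], dtype=float)
Ly = np.array([[0, 0, -1], [0, 1, 0], [1, 0, 0]], dtype=float)

# Isotropy requires A'_pq = l_pi l_qj A_ij = A_pq, i.e. L A L^T = A.
# As a linear map on vec(A): vec(L A L^T) = (L kron L) vec(A).
rows = [np.kron(L, L) - np.eye(9) for L in (Lz, Ly)]
system = np.vstack(rows)   # 18 x 9 system; solutions are invariant tensors

# Null space via SVD: a zero singular value marks an invariant direction.
_, svals, Vt = np.linalg.svd(system)
null_mask = np.isclose(svals, 0.0, atol=1e-10)
assert null_mask.sum() == 1            # the solution space is one-dimensional

A = Vt[-1].reshape(3, 3)               # the invariant tensor (up to scale)
assert np.allclose(A, A[0, 0] * np.eye(3))   # a multiple of delta_ij
```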

Theorem 7.3. The only isotropic third rank tensor is the alternator εijk (or the product
of a scalar with the alternator).

Proof. The same as before, more or less.

Theorem 7.4. The most general isotropic fourth rank tensor is

Aijkl = λδij δkl + µδik δjl + νδil δjk .
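This form can be checked directly (again my own sketch; the coefficients and rotation angles are arbitrary): build Aijkl for some λ, µ, ν and verify it is unchanged by a generic rotation.

```python
import numpy as np

d = np.eye(3)
lam, mu, nu = 2.0, -1.5, 0.7   # arbitrary coefficients
A = (lam * np.einsum('ij,kl->ijkl', d, d)
     + mu * np.einsum('ik,jl->ijkl', d, d)
     + nu * np.einsum('il,jk->ijkl', d, d))

# A generic rotation, composed from rotations about the z- and y-axes.
def rot_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])

def rot_y(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, -s], [0, 1, 0], [s, 0, c]])

L = rot_z(0.4) @ rot_y(1.1)

# Fourth rank transformation law A'_pqrs = l_pi l_qj l_rk l_sm A_ijkm;
# isotropy means the transformed components equal the originals.
A_new = np.einsum('pi,qj,rk,sm,ijkm->pqrs', L, L, L, L, A)
assert np.allclose(A_new, A)
```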

7.5.1 Spherically symmetric integrals


Consider

    Aij = ∫_{r<a} xi xj dV.

It is clearly isotropic, so Aij = λδij . Now contract over i and j to get

    3λ = ∫_{r<a} r² dV = 4πa⁵/5.

Therefore

    ∫_{r<a} xi xj dV = (4πa⁵/15) δij .

This is a surprisingly easy way of doing the above integral!
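The result is easy to confirm numerically; the sketch below (my own check, taking a = 1 and a grid resolution chosen for speed) approximates the integral on a midpoint grid over the ball.

```python
import numpy as np

a, n = 1.0, 80
h = 2 * a / n
# Midpoints of an n^3 grid on the cube [-a, a]^3.
pts = -a + (np.arange(n) + 0.5) * h
X, Y, Z = np.meshgrid(pts, pts, pts, indexing='ij')
inside = X**2 + Y**2 + Z**2 < a**2

coords = np.stack([X[inside], Y[inside], Z[inside]])   # shape (3, N)
A = np.einsum('ik,jk->ij', coords, coords) * h**3      # A_ij = sum x_i x_j h^3

exact = 4 * np.pi * a**5 / 15
# Diagonal entries approximate (4 pi a^5 / 15) delta_ij ...
assert np.allclose(np.diag(A), exact, rtol=0.05)
# ... while the off-diagonal entries vanish, by symmetry of the grid.
off = A - np.diag(np.diag(A))
assert np.allclose(off, 0.0, atol=1e-9)
```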

7.6 Symmetric and antisymmetric tensors


If Aij...k = Aji...k then A is said to be symmetric in i and j. If Aij...k = −Aji...k then
A is antisymmetric or skew symmetric in i and j.
If this is true in one co-ordinate system then it is true in all co-ordinate systems
(exercise).
Any second rank tensor can be decomposed into the sum of a symmetric tensor
and an antisymmetric tensor:

    Tij = (1/2)(Tij + Tji ) + (1/2)(Tij − Tji ).    (7.1)

Symmetric second rank tensors can be diagonalised.
Antisymmetric second rank tensors in R³ have only three independent components:

          [ 0  a  b]
    Aij = [−a  0  c] = εijk vk ,
          [−b −c  0]

where vk = (c, −b, a). We can therefore continue the decomposition in (7.1) into

    Tij = (1/3)Tkk δij + εijk vk + S̃ij ,

the sum of a scalar part, a vector part and an irreducible tensor part.
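The decomposition can be verified componentwise; the sketch below (my own check, on a random tensor) extracts the three parts and reassembles T.

```python
import numpy as np

# The alternator epsilon_ijk.
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

rng = np.random.default_rng(2)
T = rng.standard_normal((3, 3))

# Scalar part, vector part and traceless symmetric part.
scalar = np.trace(T) / 3 * np.eye(3)
antisym = (T - T.T) / 2
v = 0.5 * np.einsum('kij,ij->k', eps, antisym)   # eps_ijk v_k recovers antisym
S = (T + T.T) / 2 - np.trace(T) / 3 * np.eye(3)

# The three pieces reassemble T, and S is symmetric and traceless.
assert np.allclose(scalar + np.einsum('ijk,k->ij', eps, v) + S, T)
assert np.isclose(np.trace(S), 0.0)
assert np.allclose(S, S.T)
```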

7.7 Physical Applications


Tensors have a very wide range of physical applications. The relevant courses are:

• Dynamics, Principles of Dynamics: angular momentum tensor, moment of inertia
tensor.
• Fluid Dynamics 2, Waves in Fluid and Solid Media, Theoretical Geophysics:
stress tensor, strain tensor.
• General Relativity: metric tensor, Riemann tensor, Ricci tensor.
• Electrodynamics: Electromagnetic field tensor, stress-energy tensor.
References

◦ Arfken and Weber, Mathematical Methods for Physicists, Fourth ed., Academic
Press, 1995.
Quite a lot of people sing Arfken's praises; I am not one of them. Although it is useful,
I think it tries to do too much. If nothing else, though, it could be used to kill small
mammals, and it does have everything in this course in it. A good book to buy if you
only want to buy one book in the next two years. Or if you have a problem with mice.

Related courses
Most of the applied courses over the next two years use this course to some extent. You
have been warned!
