MSO-203-B
T. Muthukumar
tmk@iitk.ac.in
November 8, 2019
6 Sixth Week
Lecture Sixteen
Lecture Seventeen
Lecture Eighteen
T. Muthukumar (tmk@iitk.ac.in), Partial Differential Equations, MSO-203-B, November 8, 2019
Raison d’être
∂_ξ u(x) = ∇u(x) · ξ.

∂^α = (∂^{α_1}/∂x_1^{α_1}) ⋯ (∂^{α_n}/∂x_n^{α_n}) = ∂^{|α|}/(∂x_1^{α_1} ⋯ ∂x_n^{α_n}).

If |α| = 0, then ∂^α f = f.

For each k ∈ N, D^k u(x) := {∂^α u(x) | |α| = k}.

The case k = 1 is the gradient vector,

∇u(x) := D^1 u(x) = (∂^{(1,0,…,0)} u(x), ∂^{(0,1,0,…,0)} u(x), …, ∂^{(0,0,…,0,1)} u(x)) = (∂u(x)/∂x_1, ∂u(x)/∂x_2, …, ∂u(x)/∂x_n).
Example
Let u(x, y) : R² → R be defined as u(x, y) = ax² + by². Then

∇u(x, y) = (2ax, 2by)

and

D²u = ( u_xx  u_yx ) = ( 2a  0 )
      ( u_xy  u_yy )   ( 0  2b ).

Observe that ∇u : R² → R² and D²u : R² → R⁴ = R^{2²}.
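The gradient and Hessian above can be checked numerically; the following sketch uses central finite differences, with arbitrary choices of a, b and the sample point:

```python
import math

# Numerical check of the gradient and Hessian of u(x, y) = a x^2 + b y^2.
# a, b and the sample point (x0, y0) are arbitrary choices.
a, b = 3.0, -2.0
u = lambda x, y: a * x**2 + b * y**2
h = 1e-4  # finite-difference step

def grad(x, y):
    # central differences for u_x and u_y
    ux = (u(x + h, y) - u(x - h, y)) / (2 * h)
    uy = (u(x, y + h) - u(x, y - h)) / (2 * h)
    return ux, uy

def hessian(x, y):
    # second differences for u_xx, u_xy, u_yy
    uxx = (u(x + h, y) - 2 * u(x, y) + u(x - h, y)) / h**2
    uyy = (u(x, y + h) - 2 * u(x, y) + u(x, y - h)) / h**2
    uxy = (u(x + h, y + h) - u(x + h, y - h)
           - u(x - h, y + h) + u(x - h, y - h)) / (4 * h**2)
    return uxx, uxy, uyy

x0, y0 = 1.3, -0.7
gx, gy = grad(x0, y0)
uxx, uxy, uyy = hessian(x0, y0)
print(gx, gy)          # close to (2a x0, 2b y0)
print(uxx, uxy, uyy)   # close to (2a, 0, 2b)
```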
for each x ∈ Ω, where F : R^{n^k} × R^{n^{k−1}} × ⋯ × R^n × R × Ω → R is a given map such that F depends, at least, on one k-th order partial derivative of u and is independent of the (k + j)-th order partial derivatives of u for all j ∈ N.
The level of difficulty in solving a PDE may depend on its order k and the linearity of F.

Definition
A k-th order PDE is linear if F in (1.1) has the form

Fu := Lu − f(x)

where Lu(x) := Σ_{|α|≤k} a_α(x) ∂^α u(x) for given functions f and a_α. In addition, if f ≡ 0, then the PDE is linear and homogeneous.
Example
(i) x u_y − y u_x = u is linear and homogeneous.
(ii) x u_x + y u_y = x² + y² is linear.
(iii) u_tt − c² u_xx = f(x, t) is linear.
(iv) y² u_xx + x u_yy = 0 is linear and homogeneous.
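Linearity of such operators can be illustrated numerically. The sketch below checks L(c₁u + c₂v) = c₁Lu + c₂Lv for the operator Lu = x u_y − y u_x from example (i); the sample functions u, v and the constants are arbitrary choices:

```python
import math

# Numerical linearity check for Lu = x u_y - y u_x, using central differences.
h = 1e-6

def L(f, x, y):
    fx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    fy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    return x * fy - y * fx

u = lambda x, y: math.sin(x) * y      # arbitrary test function
v = lambda x, y: x**2 + math.exp(y)   # arbitrary test function
c1, c2 = 2.0, -3.0
w = lambda x, y: c1 * u(x, y) + c2 * v(x, y)

x0, y0 = 0.4, 1.1
lhs = L(w, x0, y0)
rhs = c1 * L(u, x0, y0) + c2 * L(v, x0, y0)
print(abs(lhs - rhs))  # ~ 0, up to rounding
```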
Definition
A k-th order PDE is semilinear if F is linear only in the highest (k-th) order, i.e., F has the form

Σ_{|α|=k} a_α(x) ∂^α u(x) + f(D^{k−1} u(x), …, Du(x), u(x), x) = 0.

Example
(i) u_x + u_y − u² = 0 is semilinear.
(ii) u_t + u u_x + u_xxx = 0 is semilinear.
(iii) (u_tt)² + u_xxxx = 0 is semilinear.
Definition
A k-th order PDE is quasilinear if F has the form

Σ_{|α|=k} a_α(D^{k−1} u(x), …, u(x), x) ∂^α u + f(D^{k−1} u(x), …, u(x), x) = 0,

i.e., the coefficients of its highest (k-th) order derivatives depend on u and its derivatives only up to the previous (k − 1)-th order.

Example
(i) u_x + u u_y − u² = 0 is quasilinear.
(ii) u u_x + u_y = 2 is quasilinear.
Definition
A k-th order PDE is fully nonlinear if it depends nonlinearly on the highest (k-th) order derivatives.

Example
(i) u_x u_y − u = 0 is nonlinear.
(ii) u_x² + u_y² = 1 is nonlinear.
Definition
We say u : Ω → R is a solution to the PDE (1.1) if ∂^α u exists for all α explicitly present in (1.1) and u satisfies the equation (1.1).

Example
Consider the first order equation u_x(x, y) = 0 in R². Freezing the y-variable, the PDE can be viewed as an ODE in the x-variable. Integrating both sides with respect to x, u(x, y) = f(y) for an arbitrary function f : R → R. Therefore, for every choice of f : R → R, there is a solution u of the PDE. Note that the solution u is not necessarily in C¹(R²), in contrast to the situation for ODEs. By choosing a discontinuous function f, one obtains a solution that is discontinuous in the y-direction. Similarly, a solution of u_y(x, y) = 0 is u(x, y) = f(x) for any choice of f : R → R (not necessarily continuous).
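A quick numerical illustration of this observation: the x-difference of u(x, y) = f(y) vanishes identically even for a discontinuous f (the step function below is an arbitrary choice):

```python
# u(x, y) = f(y) solves u_x = 0 even when f is discontinuous in y.
f = lambda y: 1.0 if y >= 0 else -1.0   # arbitrary discontinuous choice
u = lambda x, y: f(y)

h = 1e-6
uxs = []
for (x0, y0) in [(0.0, 0.5), (2.0, -0.5), (-1.0, 3.0)]:
    # central difference in x: exactly zero, since u does not depend on x
    uxs.append((u(x0 + h, y0) - u(x0 - h, y0)) / (2 * h))
print(uxs)  # [0.0, 0.0, 0.0]
```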
Example
Consider the first order equation u_t(x, t) = u(x, t) in R × (0, ∞) such that u(x, t) ≠ 0 for all (x, t). Freezing the x-variable, the PDE can be viewed as an ODE in the t-variable. Integrating both sides with respect to t, we obtain u(x, t) = f(x)e^t for some arbitrary (not necessarily continuous) function f : R → R.
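The family u(x, t) = f(x)e^t can likewise be checked by differencing in t; f(x) = cos x is an arbitrary smooth choice:

```python
import math

# Check that u(x, t) = f(x) e^t satisfies u_t = u, for an arbitrary f.
f = lambda x: math.cos(x)              # arbitrary choice of f
u = lambda x, t: f(x) * math.exp(t)

h = 1e-6
errs = []
for (x0, t0) in [(0.3, 0.1), (1.0, 2.0), (-2.0, 0.5)]:
    ut = (u(x0, t0 + h) - u(x0, t0 - h)) / (2 * h)   # central difference in t
    errs.append(abs(ut - u(x0, t0)))                 # residual of u_t = u
print(errs)  # all ~ 0
```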
Example
Consider the first order equation u_x(x, y) = u_y(x, y) in R². At first glance, the PDE does not seem simple to solve. But, by a change of coordinates, it can be rewritten in a simpler form. Choose the coordinates w = x + y and z = x − y; by the chain rule, u_x = u_w + u_z and u_y = u_w − u_z. In the new coordinates, the PDE becomes u_z(w, z) = 0, which is of the form considered in Example 12. Therefore, its solution is u(w, z) = f(w) for an arbitrary f : R → R and, hence, u(x, y) = f(x + y).
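That u(x, y) = f(x + y) indeed satisfies u_x = u_y can be checked numerically for any smooth f; f(s) = s³ + sin s is an arbitrary choice:

```python
import math

# Check that u(x, y) = f(x + y) satisfies u_x = u_y for a smooth f.
f = lambda s: s**3 + math.sin(s)   # arbitrary smooth choice of f
u = lambda x, y: f(x + y)

h = 1e-6
errs = []
for (x0, y0) in [(0.2, 1.5), (-1.0, 0.3), (2.0, -2.0)]:
    ux = (u(x0 + h, y0) - u(x0 - h, y0)) / (2 * h)
    uy = (u(x0, y0 + h) - u(x0, y0 - h)) / (2 * h)
    errs.append(abs(ux - uy))
print(errs)  # all ~ 0
```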
Example
Consider the second order PDE u_t(x, t) = u_xx(x, t).
(i) Note that u(x, t) = c is a solution of the PDE for any constant c ∈ R. This is a family of solutions indexed by c ∈ R.
(ii) The function u : R² → R defined as u(x, t) = x²/2 + t + c, for any constant c ∈ R, is also a family of solutions of the PDE, because u_t = 1, u_x = x and u_xx = 1. This family is not covered by the first case.
(iii) The function u(x, t) = e^{c(x+ct)} is also a family of solutions to the PDE for each c ∈ R, because u_t = c²u, u_x = cu and u_xx = c²u. This family is not covered by the previous two cases.
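All three families can be verified at once with a finite-difference residual of u_t − u_xx; the constant c and the sample point are arbitrary choices:

```python
import math

# Residual |u_t - u_xx| by finite differences, for the three families above.
def heat_residual(u, x, t, h=1e-4):
    ut = (u(x, t + h) - u(x, t - h)) / (2 * h)
    uxx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h**2
    return abs(ut - uxx)

c = 0.7                                        # arbitrary constant
u1 = lambda x, t: c                            # family (i)
u2 = lambda x, t: x**2 / 2 + t + c             # family (ii)
u3 = lambda x, t: math.exp(c * (x + c * t))    # family (iii)

x0, t0 = 0.8, 0.4                              # arbitrary sample point
res = [heat_residual(u, x0, t0) for u in (u1, u2, u3)]
print(res)  # all ~ 0
```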
and the initial conditions are given on the hyperplane {y_n = 0}.
Thus, the necessary condition is a(x, u(x)) · ∇φ ≠ 0. Recall that ∇φ is the normal to the hypersurface Γ.
Thus, the line is not non-characteristic for m = 3/2, i.e., every line with slope 3/2 fails to be non-characteristic.
Thus, instead of the normal derivative, one can prescribe the partial derivatives u_x and u_y on Γ and reformulate the Cauchy problem (1.4) as

  P u_xx + 2Q u_xy + R u_yy = f(x, y, u, u_x, u_y)  in Ω
  u(x, y) = u_0(x, y)  on Γ
  u_x(x, y) = u_11(x, y)  on Γ                      (1.5)
  u_y(x, y) = u_12(x, y)  on Γ,

satisfying the compatibility condition u_0′(s) = u_11 γ_1′(s) + u_12 γ_2′(s). The compatibility condition implies that among u_0, u_11, u_12 only two can be assigned independently, as expected for a second order equation.

Definition
We say a curve Γ ⊂ R² is characteristic with respect to (1.5) if P(γ_2′)² − 2Q γ_1′ γ_2′ + R(γ_1′)² = 0, where (γ_1(s), γ_2(s)) is a parametrisation of Γ.
Two Dimension

Definition
A second order quasilinear PDE in two dimensions is of
(a) hyperbolic type at x if d(x) > 0; it has two families of real characteristic curves,
(b) parabolic type at x if d(x) = 0; it has one family of real characteristic curves, and
(c) elliptic type at x if d(x) < 0; it has no real characteristic curves.
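Assuming d(x) denotes the discriminant Q² − PR of the principal part P u_xx + 2Q u_xy + R u_yy (d is defined earlier in the notes; this is an assumption here), the trichotomy can be sketched as a small classifier. The Tricomi-type example y u_xx + u_yy = 0 is an arbitrary illustration of type changing with the point:

```python
# Classify P u_xx + 2Q u_xy + R u_yy at a point, assuming d = Q^2 - P R.
def classify(P, Q, R):
    d = Q * Q - P * R
    if d > 0:
        return "hyperbolic"
    if d == 0:
        return "parabolic"
    return "elliptic"

print(classify(1, 0, -1))    # u_tt - u_xx = 0: hyperbolic
print(classify(1, 0, 0))     # no u_yy, u_xy terms: parabolic
print(classify(1, 0, 1))     # u_xx + u_yy = 0: elliptic
print(classify(-2.0, 0, 1))  # y u_xx + u_yy = 0 at y = -2: hyperbolic
```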
Definition
We say the partial differential operator given in (2.1) is
  hyperbolic at x ∈ Ω if Z(x) = 0 and either P(x) = 1 or P(x) = n − 1,
  elliptic at x if Z(x) = 0 and either P(x) = n or P(x) = 0,
  ultra-hyperbolic at x if Z(x) = 0 and 1 < P(x) < n − 1, and
  parabolic at x if Z(x) > 0.
Example
The wave equation u_tt − Δ_x u = f(x, t) for (x, t) ∈ R^{n+1} is hyperbolic because the (n + 1) × (n + 1) second order coefficient matrix

  A := ( −I   0 )
       ( 0^t  1 )

has no zero eigenvalue and exactly one positive eigenvalue, where I is the n × n identity matrix.

Example
The Laplace equation Δu = f(∇u, u, x) for x ∈ R^n is elliptic because Δu = I · D²u(x), where I is the n × n identity matrix. The eigenvalues are all positive.
Consider a second order semilinear PDE not in standard form and seek a change of coordinates w = w(x, y) and z = z(x, y), with non-vanishing Jacobian, such that the reduced form is the standard form.

How does one choose such coordinates w and z? Recall the coefficients a, b and c obtained in the proof of Exercise 5 of the first assignment!

If Q² − PR > 0, we have two characteristics. Thus, choose w and z such that a = c = 0. This implies we have to choose w and z such that

  w_x / w_y = (−Q ± √(Q² − PR)) / P = z_x / z_y.
u_x = e^{−x} u_w + u_z
u_y = −e^{−y} u_w
u_xx = e^{−2x} u_ww + 2e^{−x} u_wz + u_zz − e^{−x} u_w
u_yy = e^{−2y} u_ww + e^{−y} u_w
u_xy = −e^{−y} (e^{−x} u_ww + u_wz)

and the PDE reduces to

e^x e^{−y} u_zz = (e^{−y} − e^{−x}) u_w.

u_x = u_w / x,  u_y = u_z / y
u_xx = −u_w / x² + u_ww / x²
Thus, the data vector field V (x, z) := (a(x, z), f (x, z)) ∈ Rn+1 is
perpendicular to the normal of S at every point of S.
Integral Surface

The coefficient vector V must lie on the tangent plane of S at each of its points. Hence, we seek a surface S such that the given coefficient field V is tangential to S at every point of S.

Definition
A smooth curve in R^n is said to be an integral curve w.r.t. a given vector field if the vector field is tangential to the curve at each of its points. Further, a smooth surface in R^n is said to be an integral surface w.r.t. a given vector field if the vector field is tangential to the surface at each of its points.

¹The union is in the sense that every point in the integral surface belongs to exactly one characteristic.
Method of Characteristics for Quasilinear

x(r, s) = a s + c_1(r),  t(r, s) = s + c_2(r)

x(r, 0) = c_1(r) = r

We solve for r, s in terms of x, t and set u(x, t) = z(r(x, t), s(x, t)).

x(r, s) = u_0(r) s + c_1(r),  t(r, s) = s + c_2(r)

Therefore,

x(r, s) = u_0(r) s + r,  and  t(r, s) = s.
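As an illustrative sketch (an assumption, not stated on this slide): characteristics of this shape arise for the quasilinear equation u_t + u u_x = 0 with data u(x, 0) = u_0(x), along which u is constant, u = u_0(r). With the arbitrary choice u_0(r) = r, the relations x = u_0(r)s + r, t = s invert explicitly to u(x, t) = x/(1 + t), which we can check against the PDE:

```python
# Candidate solution from the characteristics with u0(r) = r:
# u(x, t) = x / (1 + t); check u_t + u u_x = 0 by finite differences.
u = lambda x, t: x / (1.0 + t)

h = 1e-6
errs = []
for (x0, t0) in [(0.5, 0.2), (-1.0, 1.0), (2.0, 3.0)]:
    ut = (u(x0, t0 + h) - u(x0, t0 - h)) / (2 * h)
    ux = (u(x0 + h, t0) - u(x0 - h, t0)) / (2 * h)
    errs.append(abs(ut + u(x0, t0) * ux))   # residual of u_t + u u_x
print(errs)  # all ~ 0
```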
Eigen Value Problems

Definition
Let L denote a linear differential operator and I ⊂ R. Then we say Ly(x) = λy(x) on I is an eigenvalue problem (EVP) corresponding to L when both λ and y : I → R are unknown.

Example
For instance, if L = −d²/dx², then its corresponding eigenvalue problem is −y′′ = λy.

²Compare an EVP with the notion of diagonalisation of matrices from Linear Algebra.
Definition
A λ ∈ R for which the EVP corresponding to L admits a non-trivial solution y_λ is called an eigenvalue of the operator L, and y_λ is said to be an eigenfunction corresponding to λ. The set of all eigenvalues of L is called the spectrum of L.

Exercise
Show that any second order ODE of the form
The function y(x) and λ are unknown quantities. The pair of boundary conditions given above is called separated. The boundary conditions correspond to the end-points a and b, respectively. Note that c_1 and c_2 cannot both be zero simultaneously, and a similar condition holds for d_1 and d_2.
c y(a) + y′(a) = 0,
where c > 0 is a constant.
where p(x) = r(x) = x and q(x) = −n²/x. This equation is not regular because p(0) = r(0) = 0 and q is not continuous in the closed interval [0, a], since q(x) → −∞ as x → 0. Note that there is no boundary condition corresponding to 0.

(b) The Legendre equation, −((1 − x²) y′(x))′ = λy(x) with x ∈ (−1, 1)
y(−π) = y(π)
y′(−π) = y′(π).
We shall now state without proof the spectral theorem for regular S-L
problem. Our aim, in this course, is to check the validity of the theorem
through some examples.
We shall now state without proof the spectral theorem for regular S-L
problem. Our aim, in this course, is to check the validity of the theorem
through some examples.
Theorem
For a regular S-L problem, there exists an increasing sequence of
eigenvalues 0 < λ1 < λ2 < λ3 < . . . < λk < . . . with λk → ∞, as k → ∞.
We shall now state without proof the spectral theorem for regular S-L
problem. Our aim, in this course, is to check the validity of the theorem
through some examples.
Theorem
For a regular S-L problem, there exists an increasing sequence of
eigenvalues 0 < λ1 < λ2 < λ3 < . . . < λk < . . . with λk → ∞, as k → ∞.
Example
Consider the boundary value problem,
y″ + λy = 0  x ∈ (0, a)
y(0) = y(a) = 0.
Its eigenpairs are (yk, λk), where yk(x) = sin(kπx/a) and λk = (kπ/a)², for each k ∈ N.
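These eigenvalues can also be checked numerically. The sketch below (an illustration, not part of the course material; the interval length a and grid size N are arbitrary choices, and NumPy is assumed to be available) discretises −y″ = λy with Dirichlet conditions by finite differences and compares the smallest eigenvalues with (kπ/a)².

```python
import numpy as np

# Discretise -y'' = lambda * y on (0, a) with y(0) = y(a) = 0 using the
# standard second-difference matrix; its eigenvalues approximate (k*pi/a)^2.
a, N = 1.0, 400
h = a / N
main = (2.0 / h**2) * np.ones(N - 1)      # diagonal of -d^2/dx^2
off = (-1.0 / h**2) * np.ones(N - 2)      # off-diagonals
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
lam = np.sort(np.linalg.eigvalsh(A))

for k in (1, 2, 3):
    exact = (k * np.pi / a) ** 2
    assert abs(lam[k - 1] - exact) / exact < 1e-3
```

The discrete eigenvalues are increasing and positive, in agreement with the theorem above.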
Theorem
For a periodic S-L problem, there exists an increasing sequence of
eigenvalues 0 ≤ λ1 < λ2 < λ3 < . . . < λk < . . . with λk → ∞, as k → ∞.
Moreover, W1 = Wλ1 , the eigen space corresponding to the first eigen
value is one dimensional.
Example
Consider the boundary value problem,
y″ + λy = 0 in (−π, π)
y(−π) = y(π)
y′(−π) = y′(π).
The characteristic equation is m² + λ = 0. Solving for m, we get m = ±√(−λ).
Note that λ can be either zero, positive or negative.
and
−α sin(−√λ π) + β cos(−√λ π) = −α sin(√λ π) + β cos(√λ π).
Thus, β sin(√λ π) = α sin(√λ π) = 0. A nontrivial solution therefore requires sin(√λ π) = 0, so that √λ = k and λk = k², for each k ∈ N.
We look for power series form of solutions y(x) = Σ_{k=0}^{∞} ak x^k, within its interval of convergence. Differentiating (twice) the series term by term, substituting in the Legendre equation and equating like powers of x, we get a₂ = −λa₀/2, a₃ = (2 − λ)a₁/6 and, for k ≥ 2,
a_{k+2} = (k(k + 1) − λ) a_k / ((k + 2)(k + 1)).
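The recurrence can be exercised directly. In the sketch below (an illustration; the number of retained terms is arbitrary), choosing λ = n(n + 1) makes the numerator vanish at k = n, so the coefficient branch with the parity of n terminates in a polynomial, which is the Legendre polynomial Pn up to scaling.

```python
# Recurrence a_{k+2} = (k(k+1) - lambda) a_k / ((k+2)(k+1)) for the Legendre
# equation; with lambda = n(n+1) one branch terminates at degree n.
def legendre_coeffs(n, terms=12):
    lam = n * (n + 1)
    a = [0.0] * terms
    a[n % 2] = 1.0                      # seed the branch with the parity of n
    for k in range(terms - 2):
        a[k + 2] = (k * (k + 1) - lam) * a[k] / ((k + 2) * (k + 1))
    return a

c = legendre_coeffs(3)
assert all(abs(x) < 1e-14 for x in c[4:])   # series terminates after degree 3
assert c[3] != 0
```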
provided the series converge. Note from the recurrence relation that if a coefficient is zero at some stage, then every alternate coefficient, subsequently, is zero. Thus, there are two possibilities of convergence here:
EVP of Bessel’s Operator
Consider the EVP, for each fixed n = 0, 1, 2, . . .,
−(xy′(x))′ = (−n²/x + λx) y(x)  x ∈ (0, a)
y(a) = 0.
As before, since this is a singular S-L problem we shall look for solutions y such that y and its derivative y′ are continuous in the closed interval [0, a]. We shall assume that the eigenvalues are all real³! Thus, λ may be zero, positive or negative.
³ needs proof
When λ = 0, the given ODE reduces to the Cauchy-Euler form
−(xy′(x))′ + (n²/x) y(x) = 0,
or equivalently,
x²y″(x) + xy′(x) − n²y(x) = 0.
T. Muthukumar tmk@iitk.ac.in Partial Differential Equations MSO-203-B November 8, 2019 79 / 181
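One can verify directly that the powers x^n and x^(−n) solve this Cauchy-Euler equation. The sketch below (illustrative n and sample points) checks the residual x²y″ + xy′ − n²y at a few points using hand-computed derivatives.

```python
# Residual of the Cauchy-Euler equation x^2 y'' + x y' - n^2 y = 0.
def residual(y, dy, d2y, x, n):
    return x**2 * d2y(x) + x * dy(x) - n**2 * y(x)

n = 3
for x in [0.5, 1.0, 2.0]:
    # y = x^n: y' = n x^(n-1), y'' = n(n-1) x^(n-2)
    r1 = residual(lambda t: t**n, lambda t: n * t**(n - 1),
                  lambda t: n * (n - 1) * t**(n - 2), x, n)
    # y = x^(-n): y' = -n x^(-n-1), y'' = n(n+1) x^(-n-2)
    r2 = residual(lambda t: t**-n, lambda t: -n * t**(-n - 1),
                  lambda t: n * (n + 1) * t**(-n - 2), x, n)
    assert abs(r1) < 1e-12 and abs(r2) < 1e-12
```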
The above second order ODE with variable coefficients can be converted to an ODE with constant coefficients by the substitution x = e^s (or s = ln x). Then, by chain rule,
y′ = dy/dx = (dy/ds)(ds/dx) = e^{−s} dy/ds
and
y″ = e^{−s} d/ds (e^{−s} dy/ds) = e^{−2s} (d²y/ds² − dy/ds).
Therefore,
y″(s) − n²y(s) = 0,
where y is now a function of the new variable s. For n = 0, the general solution is y(s) = αs + β, for some arbitrary constants. Thus, y(x) = α ln x + β. The requirement that both y and y′ are continuous on [0, a] forces α = 0. Thus, y(x) = β. But y(a) = 0 and hence β = 0, yielding the trivial solution.
Theorem
For each non-negative integer n, Jn has infinitely many positive roots.
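A numerical illustration (not a proof): below, J₀ is evaluated from its power series J₀(x) = Σ_{m≥0} (−1)^m (x/2)^{2m}/(m!)², and its first few positive roots are located by bisection; several roots already appear on (0, 15]. The scan range and tolerances are arbitrary choices.

```python
import math

# J_0 from its power series; the series converges for all real x.
def J0(x):
    term, total = 1.0, 1.0
    for m in range(1, 60):
        term *= -(x / 2) ** 2 / m**2
        total += term
    return total

def bisect(f, lo, hi, tol=1e-12):
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(lo) * f(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

# Scan (0, 15] for sign changes, then refine each bracket by bisection.
roots, xs = [], [0.1 * i for i in range(1, 151)]
for a, b in zip(xs, xs[1:]):
    if J0(a) * J0(b) < 0:
        roots.append(bisect(J0, a, b))

assert len(roots) >= 4                              # several roots in (0, 15]
assert abs(roots[0] - 2.404825557695773) < 1e-8     # known first zero of J_0
```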
Observe that for a regular S-L problem the differential operator can be written as
L = −(1/r(x)) d/dx (p(x) d/dx) − q(x)/r(x).
Let V denote the set of all solutions of (3.1). Necessarily, 0 ∈ V and V ⊂ C²(a, b). We define the inner product⁴ ⟨·, ·⟩ : V × V → R on V as,
⟨f, g⟩ := ∫_a^b r(x) f(x) g(x) dx.
⁴ a generalisation of the usual scalar product of vectors
Definition
We say two functions f and g are perpendicular or orthogonal with weight r if ⟨f, g⟩ = 0. We say f is of unit length if its norm ‖f‖ = √⟨f, f⟩ = 1.
Theorem
With respect to the inner product defined above in V , the eigen functions
corresponding to distinct eigenvalues of the S-L problem are orthogonal.
y″ + λy = 0  x ∈ (0, a)
y(0) = y(a) = 0
to be (yk, λk) where
yk(x) = sin(kπx/a)
and λk = (kπ/a)², for each k ∈ N. For m, n ∈ N such that m ≠ n, we need to check that ym and yn are orthogonal. Since r ≡ 1, we consider
⟨ym, yn⟩ = ∫₀^a sin(mπx/a) sin(nπx/a) dx
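This orthogonality can also be seen numerically. The sketch below (illustrative, with a = 1 and an arbitrary grid size) approximates ⟨ym, yn⟩ by the trapezoidal rule.

```python
import math

# Trapezoidal approximation of <y_m, y_n> for y_k(x) = sin(k*pi*x/a) on (0, a).
def inner(m, n, a=1.0, N=4000):
    h = a / N
    s = 0.0
    for i in range(N + 1):
        x = i * h
        w = 0.5 if i in (0, N) else 1.0     # trapezoidal endpoint weights
        s += w * math.sin(m * math.pi * x / a) * math.sin(n * math.pi * x / a)
    return s * h

assert abs(inner(2, 5)) < 1e-6          # distinct modes: orthogonal
assert abs(inner(3, 3) - 0.5) < 1e-6    # same mode: norm^2 = a/2
```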
where zni is the i-th positive zero of the Bessel function (of order n) Jn. We are using the “≈” symbol to highlight the fact that the issue of convergence of the series is ignored.
If the eigenvectors (or eigen functions) yk involve only sin or cos terms, as in the regular S-L problem (cf. Example 43), then the series is called a Fourier Sine or Fourier Cosine series.
We isolate the properties of the trigonometric functions, viz., sin, cos etc.
Definition
A function f : R → R is said to be periodic of period T , if T > 0 is the
smallest number such that
f (t + T ) = f (t) ∀t ∈ R.
Example
The trigonometric functions sin t and cos t are 2π-periodic functions, while
sin 2t and cos 2t are π-periodic functions.
In fact, for any positive integer k, sin(2πkt/T) and cos(2πkt/T) are T-periodic functions.
Exercise
If f : R → R is a T -periodic function, then show that
(i) f (t − T ) = f (t), for all t ∈ R.
(ii) f (t + kT ) = f (t), for all k ∈ Z.
(iii) g (t) = f (αt + β) is (T /α)-periodic, where α > 0 and β ∈ R.
where
ak = (2/T) ∫₀^T f(t) cos(2πkt/T) dt  (4.2a)
bk = (2/T) ∫₀^T f(t) sin(2πkt/T) dt  (4.2b)
a0 = (1/T) ∫₀^T f(t) dt.  (4.2c)
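The formulas (4.2a)–(4.2c) can be approximated by quadrature. The sketch below (an illustration; the sampled function, grid size and tolerances are arbitrary choices) recovers the known coefficients of f(t) = t on (0, 2π), for which a0 = π, ak = 0 and bk = −2/k.

```python
import math

# Approximate the Fourier coefficients (4.2a)-(4.2c) of a T-periodic f by a
# left-endpoint Riemann sum over one period [0, T).
def fourier_coeffs(f, T, K, N=20000):
    h = T / N
    ts = [i * h for i in range(N)]
    a0 = sum(f(t) for t in ts) * h / T
    a = [2 / T * sum(f(t) * math.cos(2 * math.pi * k * t / T) for t in ts) * h
         for k in range(1, K + 1)]
    b = [2 / T * sum(f(t) * math.sin(2 * math.pi * k * t / T) for t in ts) * h
         for k in range(1, K + 1)]
    return a0, a, b

a0, a, b = fourier_coeffs(lambda t: t, 2 * math.pi, 3)
assert abs(a0 - math.pi) < 1e-3             # mean of t on (0, 2*pi)
assert all(abs(x) < 1e-3 for x in a)        # cosine coefficients vanish
assert abs(b[0] - (-2.0)) < 1e-3            # b_k = -2/k for f(t) = t
```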
Note the use of “≈” symbol in (4.3). This is because we have the following issues once we have the definition of Fourier series of f, viz.,
(a) Will the Fourier series of f always converge?
(b) If it converges, will it converge to f?
(c) If so, is the convergence point-wise or uniform⁵?
⁵ because our derivation of formulae for Fourier coefficients assumed uniform convergence of the series
Answering these questions, in all generality, is beyond the scope of this course.
For each k ∈ N,
ak = (1/π) ∫_{−π}^{π} c cos kt dt = 0
and
bk = (1/π) ∫_{−π}^{π} c sin kt dt = 0.
For each k ∈ N,
ak = (1/π) ∫_{−π}^{π} sin t cos kt dt = 0
and
bk = (1/π) ∫_{−π}^{π} sin t sin kt dt = 0 for k ≠ 1, and b1 = 1.
Similarly, for f(t) = cos t on (−π, π), all Fourier coefficients are zero, except a1 = 1.
For each k ∈ N,
ak = (1/π) ∫_{−π}^{π} t cos kt dt = (1/kπ)[ −∫_{−π}^{π} sin kt dt + (π sin kπ − (−π) sin(−kπ)) ] = 0
and
bk = (1/π) ∫_{−π}^{π} t sin kt dt = (1/kπ)[ ∫_{−π}^{π} cos kt dt − (π cos kπ − (−π) cos(−kπ)) ]
   = (1/kπ)[ 0 − (π(−1)^k + π(−1)^k) ] = (−1)^{k+1} 2/k.
Therefore, t as a 2π-periodic function defined in (−π, π) has the Fourier series expansion
t ≈ 2 Σ_{k=1}^{∞} ((−1)^{k+1}/k) sin kt
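The partial sums of this series can be inspected numerically. Below is a small sketch (sample points and truncation levels are arbitrary) checking that S_K(t) = 2 Σ_{k≤K} (−1)^{k+1} sin(kt)/k approaches t at interior points of (−π, π).

```python
import math

# Partial sums of the Fourier series of t on (-pi, pi).
def S(K, t):
    return 2 * sum((-1) ** (k + 1) * math.sin(k * t) / k for k in range(1, K + 1))

assert abs(S(1000, 1.0) - 1.0) < 5e-3
assert abs(S(2000, 2.0) - 2.0) < 5e-3
```

The convergence is only pointwise on the open interval; near t = ±π the jump of the 2π-periodic extension slows it down (Gibbs phenomenon).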
a0 = (1/π) ∫₀^π t dt = π/2.
For each k ∈ N,
ak = (2/π) ∫₀^π t cos 2kt dt = (1/kπ)[ −∫₀^π sin 2kt dt + (π sin 2kπ − 0) ]
   = (1/kπ)(1/2k)(cos 2kπ − cos 0) = 0  and
bk = (2/π) ∫₀^π t sin 2kt dt = (1/kπ)[ ∫₀^π cos 2kt dt − (π cos 2kπ − 0) ]
   = (1/kπ)[ (1/2k)(sin 2kπ − sin 0) − π ] = −1/k.
Note the difference in the Fourier expansion of the same function when the periodicity changes.
lim_{k→∞} ak = lim_{k→∞} bk = 0.
By the uniform continuity of f on [−π, π], the maximum will tend to zero as k → ∞. Hence |bk| → 0. Exactly similar arguments hold for ak.
Piecewise Smooth Functions
Definition
A function f : [a, b] → R is said to be piecewise continuously differentiable if it has a continuous derivative f′ in (a, b), except at finitely many points in the interval [a, b], and at each of these finitely many points the right-hand and left-hand limits of both f and f′ exist.
Example
The function f : [−1, 1] → R defined as f(t) = |t| is continuous. It is not differentiable at 0, but it is piecewise continuously differentiable.
Corollary
If f : R → R is a continuously differentiable (derivative f 0 exists and is
continuous) T -periodic function, then the Fourier series of f converges to
f (t), for every t ∈ R.
Then,
a0 = (1/2π) ∫₀^π c dt = c/2.
For each k ∈ N,
ak = (1/π) ∫₀^π c cos kt dt = 0
and
bk = (1/π) ∫₀^π c sin kt dt = (c/π)(1/k)(−cos kπ + cos 0) = c(1 + (−1)^{k+1})/(kπ).
cos t = (e^{ıt} + e^{−ıt})/2,  sin t = (e^{ıt} − e^{−ıt})/(2ı).
Note that we can rewrite the Fourier series expansion of f as
f(t) = a0/2 + Σ_{k=1}^{∞} (ak cos kt + bk sin kt)
Example
e0 , ek and fk are all of unit length. he0 , ek i = 0 and he0 , fk i = 0. Also,
hem , en i = 0 and hfm , fn i = 0, for m 6= n. Further, hem , fn i = 0 for all m, n.
Check and compare these properties with the standard basis vectors of Rn !
Definition
We say a function f : R → R is odd if f (−t) = −f (t) and even if
f (−t) = f (t).
Example
All constant functions are even functions. For all k ∈ N, sin kt are odd
functions and cos kt are even functions.
Exercise
Any odd function is always orthogonal to an even function.
⟨f, sin kt⟩ = 0
bk = (1/T) ⟨fo, sin(πkt/T)⟩ = (1/T) ∫_{−T}^{T} fo(t) sin(πkt/T) dt
   = (1/T) [ ∫_{−T}^{0} −f(−t) sin(πkt/T) dt + ∫₀^{T} f(t) sin(πkt/T) dt ]
   = (1/T) [ ∫₀^{T} f(t) sin(πkt/T) dt + ∫₀^{T} f(t) sin(πkt/T) dt ]
   = (2/T) ∫₀^{T} f(t) sin(πkt/T) dt.
bk = (2/π) ∫₀^π t sin kt dt = (2/kπ)[ ∫₀^π cos kt dt − (π cos kπ − 0) ]
   = (2/kπ)[ (1/k)(sin kπ − sin 0) + π(−1)^{k+1} ] = (−1)^{k+1} 2/k.
where
ak = (2/T) ∫₀^T f(t) cos(πkt/T) dt
and
a0 = (1/T) ∫₀^T f(t) dt.
ak = (2/π) ∫₀^π t cos kt dt = (2/kπ)[ −∫₀^π sin kt dt + (π sin kπ − 0) ]
   = (2/kπ)(1/k)(cos kπ − cos 0) = 2[(−1)^k − 1]/(k²π).
Compare the result with the Fourier series of the function f(t) = |t| on (−π, π).
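For the comparison, a short numerical sketch (truncation level and sample points are arbitrary): with a0 = π/2, these coefficients give the cosine series of the even 2π-periodic extension, which converges to |t| on (−π, π).

```python
import math

# Cosine series pi/2 + sum_k 2[(-1)^k - 1]/(k^2 pi) cos(kt); it represents |t|.
def C(K, t):
    s = math.pi / 2
    for k in range(1, K + 1):
        s += 2 * ((-1) ** k - 1) / (k**2 * math.pi) * math.cos(k * t)
    return s

for t in (0.5, -1.2, 2.0):
    assert abs(C(2000, t) - abs(t)) < 1e-3
```

Because the even extension is continuous, the coefficients decay like 1/k² and the series converges much faster than the sine series of t.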
where
a(ξ) = (1/π) ∫_{−∞}^{∞} f(t) cos ξt dt
b(ξ) = (1/π) ∫_{−∞}^{∞} f(t) sin ξt dt.
Consider the wave equation utt = c 2 uxx on R × (0, ∞), describing the
vibration of an infinite string.
The equation is hyperbolic and has two families of characteristics, x ± ct = constant.
Introduce the new coordinates w = x + ct, z = x − ct and set
u(w , z) = u(x, t). Thus, we have the following relations, using chain
rule:
ux = uw wx + uz zx = uw + uz
ut = uw wt + uz zt = c(uw − uz )
uxx = uww + 2uzw + uzz
utt = c 2 (uww − 2uzw + uzz )
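From the relations above, utt − c²uxx = −4c² u_{wz}, so the equation reduces to u_{wz} = 0 and solutions take the form u = F(x + ct) + G(x − ct). A quick numerical sanity check with sample profiles F, G (illustrative choices):

```python
import math

# Any u(x, t) = F(x + ct) + G(x - ct) with smooth F, G satisfies
# u_tt = c^2 u_xx; verify with central finite differences.
c, h = 2.0, 1e-3
F = lambda s: math.sin(s)
G = lambda s: math.exp(-s * s)
u = lambda x, t: F(x + c * t) + G(x - c * t)

x, t = 0.3, 0.7
utt = (u(x, t + h) - 2 * u(x, t) + u(x, t - h)) / h**2
uxx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h**2
assert abs(utt - c**2 * uxx) < 1e-4
```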
(Figure: characteristic parallelogram with vertices A, B, C, D over the base points γ, α, β, δ on the x-axis.)
Theorem
Given g ∈ C 2 (R) and h ∈ C 1 (R), there is a unique C 2 solution u of the
Cauchy initial value problem (IVP) of the wave equation,
F(x) = (1/2) g(x) + (1/2c) ∫₀^x h(y) dy + c1
and
G(x) = (1/2) g(x) − (1/2c) ∫₀^x h(y) dy + c2.
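Combining F and G in u(x, t) = F(x + ct) + G(x − ct) yields d'Alembert's formula, u(x, t) = (1/2)[g(x + ct) + g(x − ct)] + (1/2c) ∫_{x−ct}^{x+ct} h(y) dy. A minimal numerical sketch (midpoint-rule quadrature; the sample data are arbitrary):

```python
import math

# d'Alembert's formula for u_tt = c^2 u_xx, u(x,0) = g, u_t(x,0) = h.
def dalembert(g, h, c, x, t, N=2000):
    lo, hi = x - c * t, x + c * t
    step = (hi - lo) / N
    integral = sum(h(lo + (i + 0.5) * step) for i in range(N)) * step  # midpoint rule
    return 0.5 * (g(hi) + g(lo)) + integral / (2 * c)

# With g(x) = sin x and h = 0, the exact solution is sin(x) * cos(ct).
c, x, t = 1.5, 0.4, 0.9
u = dalembert(math.sin, lambda y: 0.0, c, x, t)
assert abs(u - math.sin(x) * math.cos(c * t)) < 1e-12
```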
Notice that the above first order PDE obtained is in the form of the homogeneous linear transport equation, which we have already solved. Hence, the solution can be written down in terms of some smooth function φ. Since u(x, 0) = g(x) and a = −c, in our case the solution reduces to,
u(x, t) = g(x + ct) + ∫₀^t φ(x + c(t − s) − cs) ds
        = g(x + ct) + ∫₀^t φ(x + ct − 2cs) ds
        = g(x + ct) − (1/2c) ∫_{x+ct}^{x−ct} φ(y) dy
        = g(x + ct) + (1/2c) ∫_{x−ct}^{x+ct} φ(y) dy.
Aliter
(Figure: the characteristic triangle with apex (x0, t0) and base vertices (x0 − ct0, 0) and (x0 + ct0, 0) on the x-axis.)
Range of Influence
Given a point (p0, 0) on the x-axis, what is the region in the x−t plane where the solution u depends on the values of g(p0) and h(p0)?
(Figure: the region of influence bounded by the characteristics x + ct = p0 and x − ct = p0 emanating from (p0, 0).)
Finite Speed of Propagation
If the initial data g and h are “supported” in the interval Bx0(R) then the solution u at (x, t) is supported in the region Bx0(R + ct). This phenomenon is called the finite speed of propagation.
Note that the solution is expressed as a series, which raises the question of convergence of the series. Another concern is whether all solutions of (5.3) have this form. We ignore these two concerns at this moment.
Since we know the initial position of the string as the graph of g, we get
g(x) = u(x, 0) = Σ_{k=1}^{∞} ak sin(kπx/L).
This expression is again troubling and raises the question: can any arbitrary function g be expressed as an infinite sum of trigonometric functions? But we have answered it in the study of “Fourier series”.
Therefore, the constants ak are given as
ak = (2/L) ∫₀^L g(x) sin(kπx/L) dx
and hence
bk = (2/(kcπ)) ∫₀^L h(x) sin(kπx/L) dx.
where c is a constant, and, hence,
w′(t)/(c² w(t)) = v″(x)/v(x).
Since the LHS, a function of t, and the RHS, a function of x, are equal, they must equal some constant, say λ. Thus,
w′(t)/(c² w(t)) = v″(x)/v(x) = λ.
βk = (2/L) ∫₀^L g(x) sin(kπx/L) dx.
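Each separated mode solves the heat equation. The sketch below assumes modes of the form w(t)v(x) = e^{−c²(kπ/L)² t} sin(kπx/L), as the separation above suggests (the full solution formula lies outside this excerpt), and checks u_t = c² u_xx by finite differences with illustrative parameters.

```python
import math

# One separated mode of the heat equation on (0, L): check u_t = c^2 u_xx.
c, L, k = 1.3, 2.0, 3
u = lambda x, t: math.exp(-c**2 * (k * math.pi / L) ** 2 * t) * math.sin(k * math.pi * x / L)

x, t, h = 0.7, 0.05, 1e-4
ut = (u(x, t + h) - u(x, t - h)) / (2 * h)
uxx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h**2
assert abs(ut - c**2 * uxx) < 1e-3
```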
Let Ω be a circle (circular wire) of radius one insulated along its sides. Let the initial temperature of the wire, at time t = 0, be given by a 2π-periodic function g : R → R. Then there is a solution u(r, θ) of
where c is a constant.
We now use the initial temperature on the circle to find the constants. Since u(θ, 0) = g(θ),
g(θ) = u(θ, 0) = a0/2 + Σ_{k=1}^{∞} [ak cos(kθ) + bk sin(kθ)].
is z(t) = z0 e^{−ct}.
As a first step, for each s ∈ (0, ∞), consider w(x, t; s) as the solution of the homogeneous problem (auxiliary)
w_t^s(x, t) − c² Δw^s(x, t) = 0  in Ω × (s, T)
w^s(x, t) = 0  on ∂Ω × (s, T)
w^s(x, s) = f(x, s)  on Ω × {s}.
w_t(x, r) − c² Δw(x, r) = 0  in Ω × (0, T − s)
w(x, r) = 0  on ∂Ω × (0, T − s)
w(x, 0) = f(x, s)  on Ω
solves (6.1).
Similarly,
Δu(x, t) = ∫₀^t Δw(x, t − s) ds.
The Laplacian is the trace of the Hessian matrix, Δ := Σ_{i=1}^{n} ∂²/∂x_i².
Note that in one dimension, Δ = d²/dx².
Our interest is to solve, for any open subset Ω ⊂ Rⁿ, −Δu(x) = f(x).
Since a Cauchy problem for an elliptic equation is over-determined, it is reasonable to seek the solution of −Δu(x) = f(x) in Ω together with one of the following conditions (or a mixture) on ∂Ω.
(i) (Dirichlet condition) u = g ;
(ii) (Neumann condition) ∇u · ν = g , where ν(x) is the unit outward
normal of x ∈ ∂Ω;
(iii) (Robin condition) ∇u · ν + cu = g for any c > 0.
(iv) (Mixed condition) u = g on Γ1 and ∇u · ν = h on Γ2 , where
Γ1 ∪ Γ2 = ∂Ω and Γ1 ∩ Γ2 = ∅.
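A small numerical illustration of the operator itself (sample functions and step size are arbitrary): the five-point finite-difference Laplacian vanishes on the harmonic function x² − y² and returns 4 on x² + y².

```python
# Five-point finite-difference approximation of the Laplacian in two dimensions.
def fd_laplacian(u, x, y, h=1e-3):
    return (u(x + h, y) + u(x - h, y) + u(x, y + h) + u(x, y - h)
            - 4 * u(x, y)) / h**2

assert abs(fd_laplacian(lambda x, y: x**2 - y**2, 0.3, -0.7)) < 1e-7       # harmonic
assert abs(fd_laplacian(lambda x, y: x**2 + y**2, 0.3, -0.7) - 4.0) < 1e-7
```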
Theorem (Gauss)
Let Ω be an open bounded subset of Rn with C 1 boundary. If
V = (v1 , . . . , vn ) on Ω is a vector field such that vi ∈ C 1 (Ω), for all
1 ≤ i ≤ n, then
∫_Ω ∇ · V dx = ∫_{∂Ω} V · ν dσ.  (6.2)
⁶ v ∈ C²(a, b) has a local maximum at x ∈ (a, b) then v′(x) = 0 and v″(x) ≤ 0
Maximum Principle
The above inequality is true for all ε > 0. Thus, u(x) ≤ max_{x∈∂Ω} u(x), for all x ∈ Ω. Therefore, max_Ω u ≤ max_{x∈∂Ω} u(x) and hence we have equality.
u(x) < M  ∀x ∈ Ω
or u ≡ M is constant in Ω.
By the strong maximum principle, if Ω is connected and g ≥ 0 and
g (x) > 0 for some x ∈ ∂Ω then u(x) > 0 for all x ∈ Ω.
Proof.
Note that u1 − u2 is a harmonic function and hence, by the maximum principle, it should attain its maximum on ∂Ω. But u1 − u2 = 0 on ∂Ω. Thus u1 − u2 ≤ 0 in Ω. Repeating the argument for u2 − u1, we get u2 − u1 ≤ 0 in Ω. Thus, u1 − u2 = 0 in Ω.
Theorem
Let Ω be an open bounded connected subset of Rn and g ∈ C (∂Ω). Then
the Dirichlet problem
Δu(x) = 0  x ∈ Ω
u(x) = g(x)  x ∈ ∂Ω.  (6.3)
The fact that there is at most one solution to the Dirichlet problem follows from Theorem 85. Let w = u1 − u2. Then w is harmonic. (a) Note that w = g1 − g2 ≥ 0 on ∂Ω. Since g1(x0) > g2(x0) for some x0 ∈ ∂Ω, the strong maximum principle gives w(x) > 0 for all x ∈ Ω. This proves the comparison result.
(b) Again, by the maximum principle, we have
v(0) = v(a) = 0,
the eigenvalue problem for the second order differential operator. Note that λ can be either zero, positive or negative.
If λ = 0, then v″ = 0 and the general solution is v(x) = αx + β, for some constants α and β. Since v(0) = 0, we get β = 0, and v(a) = 0 with a ≠ 0 implies that α = 0. Thus, v ≡ 0 and hence u ≡ 0. But this cannot be a solution to (6.3).
If λ > 0, then v(x) = αe^{√λ x} + βe^{−√λ x}. Equivalently,
v(x) = c1 cosh(√λ x) + c2 sinh(√λ x)
such that α = (c1 + c2)/2 and β = (c1 − c2)/2. Using the boundary condition v(0) = 0, we get c1 = 0 and hence
v(x) = c2 sinh(√λ x).
Now using v(a) = 0, we have c2 sinh(√λ a) = 0. Thus, c2 = 0 and v(x) = 0. We have seen this cannot be a solution.
If λ < 0, then set ω = √(−λ). We need to solve
v″(x) + ω² v(x) = 0  x ∈ (0, a)
v(0) = v(a) = 0.  (6.5)
In polar coordinates,
Δ := (1/r) ∂/∂r (r ∂/∂r) + (1/r²) ∂²/∂θ²
and
Δu = ∂²u/∂r² + (1/r²) ∂²u/∂θ² + (1/r) ∂u/∂r.
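The polar form can be checked against the Cartesian Laplacian numerically. The sketch below (sample function and step size are arbitrary) evaluates the polar expression by finite differences for the harmonic function e^x sin y written in polar coordinates; the result should be near zero.

```python
import math

# Check the polar Laplacian u_rr + (1/r^2) u_theta_theta + (1/r) u_r on a
# harmonic function: u(x, y) = e^x sin(y) has zero Cartesian Laplacian.
u_cart = lambda x, y: math.exp(x) * math.sin(y)
u_pol = lambda r, th: u_cart(r * math.cos(th), r * math.sin(th))

r, th, h = 1.3, 0.8, 1e-3
urr = (u_pol(r + h, th) - 2 * u_pol(r, th) + u_pol(r - h, th)) / h**2
ur = (u_pol(r + h, th) - u_pol(r - h, th)) / (2 * h)
utt = (u_pol(r, th + h) - 2 * u_pol(r, th) + u_pol(r, th - h)) / h**2
polar_lap = urr + utt / r**2 + ur / r
assert abs(polar_lap) < 1e-4    # e^x sin(y) is harmonic
```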