Contents
Preliminary Background
Constrained Optimization
Linear and Quadratic Programming Problems*
Minimax Approximation*
First Order Ordinary Differential Equations
Second Order Linear Homogeneous ODEs
Analytic Functions
Topics:
- Applied linear algebra
- Multivariate calculus and vector calculus
- Numerical methods
- Differential equations
- Complex analysis

If you have the time, need and interest, then you may consult
- individual books on individual topics;
- another umbrella volume, like Kreyszig, McQuarrie, ONeil or Wylie and Barrett;
- a good book of numerical analysis or scientific computing, like Acton, Heath, Hildebrand, Krishnamurthy and Sen, Press et al, Stoer and Bulirsch;
- friends, in joint-study groups.
Outline: Matrices and Linear Transformations
- Matrices
- Geometry and Algebra
- Linear Transformations
Matrix Terminology
Consider these definitions:
- y = f(x)
- y = f(x) = f(x1, x2, ..., xn)
- yk = fk(x) = fk(x1, x2, ..., xn), k = 1, 2, ..., m
- y = f(x)
- y = Ax

Geometry and Algebra
Let vector x = [x1 x2 x3]^T denote a point (x1, x2, x3) in 3-dimensional space in frame of reference OX1X2X3.
Example: With m = 2 and n = 3,
  y1 = a11 x1 + a12 x2 + a13 x3
  y2 = a21 x1 + a22 x2 + a23 x3
Plot y1 and y2 in the OY1Y2 plane. What is matrix A doing?

Figure: Linear transformation: schematic illustration (domain OX1X2X3 mapped to codomain OY1Y2)

Answer: A matrix is the definition of a linear vector function of a vector variable. Here, A : R^3 → R^2.
Anything deeper?

Caution: Matrices do not define vector functions whose components are of the form
  yk = ak0 + ak1 x1 + ak2 x2 + ⋯ + akn xn.
Consider A ∈ R^{m×n} as a mapping
  A : R^n → R^m,  Ax = y,  x ∈ R^n,  y ∈ R^m.
Figure: Range and null space: schematic representation

  Range(A) = {y : y = Ax, x ∈ R^n}
  Null(A) = {x : x ∈ R^n, Ax = 0}
  Rank(A) = dim Range(A)
  Nullity(A) = dim Null(A)

Linear dependence and independence: Vectors x1, x2, ..., xr in a vector space are called linearly independent if
  k1 x1 + k2 x2 + ⋯ + kr xr = 0  ⇒  k1 = k2 = ⋯ = kr = 0.
Span, denoted as <v1, v2, ..., vr>: the subspace described/generated by a set of vectors.
Is the representation of a vector in the span as Vk, where V = [v1 v2 ⋯ vr] and k = [k1 k2 ⋯ kr]^T, unique? Answer: Not necessarily.

Question: What is the dimension of a vector space?
Basis: A basis of a vector space is composed of an ordered minimal set of vectors spanning the entire space. The basis for an n-dimensional space will have exactly n members, all linearly independent.
To diagnose the non-existence of a solution,
to determine the unique solution, or
to describe infinite solutions:
decouple the equations using elementary transformations.

For solving Ax = b, apply suitable elementary row transformations on both sides, leading to
  Rq R_{q-1} ⋯ R2 R1 A x = Rq R_{q-1} ⋯ R2 R1 b,  or  [RA]x = Rb;
such that matrix [RA] is greatly simplified. In the best case, with complete reduction, RA = I_n, and components of x can be read off from Rb.

More generally, apply a series of elementary row transformations on A to reduce it to the row-reduced echelon form or RREF. Features of RREF:
1. The first non-zero entry in any row is a 1, the leading 1.
2. In the same column as the leading 1, other entries are zero.
3. Non-zero entries in a lower row appear later.
Variables corresponding to columns having leading 1's are expressed in terms of the remaining variables.
Solution of Ax = 0: x = z1 u1 + z2 u2 + ⋯ + z_{n-k} u_{n-k}, a combination of the n - k basis vectors of the null space.
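The reduction is mechanical enough to sketch in a few lines of Python (numpy assumed; the function name rref and the pivoting details are choices of this sketch, not of the slides):

```python
import numpy as np

def rref(A, tol=1e-12):
    """Reduce A to row-reduced echelon form by elementary row operations."""
    R = A.astype(float).copy()
    m, n = R.shape
    row = 0
    for col in range(n):
        if row >= m:
            break
        p = row + np.argmax(np.abs(R[row:, col]))   # best pivot (partial pivoting)
        if abs(R[p, col]) < tol:
            continue                                 # no leading 1 in this column
        R[[row, p]] = R[[p, row]]                    # row interchange
        R[row] /= R[row, col]                        # make the leading entry 1
        for r in range(m):                           # zero out the rest of the column
            if r != row:
                R[r] -= R[r, col] * R[row]
        row += 1
    return R

# Applied to the augmented matrix [A | b], the solution can be read off.
A = np.array([[2., 1., 1.], [4., 3., 3.], [8., 7., 9.]])
b = np.array([[5.], [10.], [24.]])
print(rref(np.hstack([A, b])))
```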
Gauss-Jordan elimination: to get 1 at the diagonal (or leading) position, with 0 elsewhere.
Key step: division by the diagonal (or leading) entry.
Consider, schematically, after some steps of elimination,
      [ I_k   *    *  ]
  A = [  0    ε   BIG ]
      [  0   big   *  ]
One cannot divide by zero, and should not divide by a small ε.
- Partial pivoting: row interchange to get "big" in place of ε.
- Complete pivoting: row and column interchanges to get "BIG" in place of ε.
Complete pivoting does not give a huge advantage over partial pivoting, but requires maintaining of variable permutation for later unscrambling. (A small numerical demonstration follows.)

Partitioned matrices: equation Ax = y can be written as
  [ A11 A12 A13 ] [x1]   [y1]
  [ A21 A22 A23 ] [x2] = [y2] ,
                  [x3]
with x1, x2 etc being themselves vectors (or matrices).
- For a valid partitioning, block sizes should be consistent.
- Elementary transformations can be applied over blocks.
- Block operations can be computationally economical at times.
- Conceptually, different blocks of contributions/equations can be assembled for mathematical modelling of complicated coupled systems.
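A tiny demonstration of the pivoting caution above, run in low precision so that the round-off shows (the 2×2 example and the function name are illustrative assumptions):

```python
import numpy as np

def solve_2x2(a, b, pivot=True):
    """Forward elimination on a 2x2 system, optionally with partial pivoting."""
    A = np.array(a, dtype=np.float32)   # low precision to expose round-off
    r = np.array(b, dtype=np.float32)
    if pivot and abs(A[1, 0]) > abs(A[0, 0]):
        A = A[::-1].copy(); r = r[::-1].copy()       # row interchange
    m = A[1, 0] / A[0, 0]
    A[1] -= m * A[0]; r[1] -= m * r[0]
    x2 = r[1] / A[1, 1]
    x1 = (r[0] - A[0, 1] * x2) / A[0, 0]
    return x1, x2

# epsilon pivot: the exact solution is close to x1 = x2 = 1
eps = 1e-7
print(solve_2x2([[eps, 1], [1, 1]], [1, 2], pivot=False))  # visibly wrong x1
print(solve_2x2([[eps, 1], [1, 1]], [1, 2], pivot=True))   # accurate
```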
A square matrix with non-zero leading minors is LU-decomposable. Question: How to LU-decompose a given matrix?
No reference to a right-hand-side (RHS) vector!

To solve Ax = b, denote y = Ux and split as
  Ax = b  ⇒  LUx = b  ⇒  Ly = b and Ux = y.
Forward substitutions:
  yi = (1/lii) [ bi - Σ_{j=1}^{i-1} lij yj ]  for i = 1, 2, 3, ..., n;
Back-substitutions:
  xi = (1/uii) [ yi - Σ_{j=i+1}^{n} uij xj ]  for i = n, n-1, n-2, ..., 1.

With
      [ l11  0    0   ⋯  0  ]            [ u11 u12 u13 ⋯ u1n ]
      [ l21  l22  0   ⋯  0  ]            [  0  u22 u23 ⋯ u2n ]
  L = [ l31  l32  l33 ⋯  0  ]   and  U = [  0   0  u33 ⋯ u3n ] ,
      [  :    :    :  ⋱  :  ]            [  :   :   :  ⋱  :  ]
      [ ln1  ln2  ln3 ⋯ lnn ]            [  0   0   0  ⋯ unn ]
elements of the product give
  Σ_{k=1}^{i} lik ukj = aij  for i ≤ j,   and   Σ_{k=1}^{j} lik ukj = aij  for i > j:
n² equations in n² + n unknowns, so a choice of n unknowns is free.
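Fixing the n free unknowns as lii = 1 (Doolittle's choice) turns these n² equations into an explicit recipe. A sketch, assuming numpy and no pivoting:

```python
import numpy as np

def lu_doolittle(A):
    """LU-decompose A (no pivoting), choosing l_ii = 1."""
    n = A.shape[0]
    L, U = np.eye(n), np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):          # row i of U: cases i <= j
            U[i, j] = A[i, j] - L[i, :i] @ U[:i, j]
        for k in range(i + 1, n):      # column i of L: cases k > i
            L[k, i] = (A[k, i] - L[k, :i] @ U[:i, i]) / U[i, i]
    return L, U

A = np.array([[3., 1., 2.], [0., 1., 2.], [2., 1., 3.]])
L, U = lu_doolittle(A)
print(np.allclose(L @ U, A))   # True
```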
Evaluation proceeds in column order of the matrix, and L and U can share a single array for storage:
      [ u11 u12 u13 ⋯ u1n ]
      [ l21 u22 u23 ⋯ u2n ]
  A → [ l31 l32 u33 ⋯ u3n ]
      [  :   :   :  ⋱  :  ]
      [ ln1 ln2 ln3 ⋯ unn ]

A matrix with a zero leading minor is not LU-decomposable as it stands, but one can LU-decompose a permutation of its rows:
  [ 0 1 2 ]   [ 0 1 0 ] [ 3 1 2 ]   [ 0 1 0 ] [  1    0   0 ] [ 3 1 2 ]
  [ 3 1 2 ] = [ 1 0 0 ] [ 0 1 2 ] = [ 1 0 0 ] [  0    1   0 ] [ 0 1 2 ] .
  [ 2 1 3 ]   [ 0 0 1 ] [ 2 1 3 ]   [ 0 0 1 ] [ 2/3  1/3  1 ] [ 0 0 1 ]
- What is a sparse matrix?
- Bandedness and bandwidth
- Efficient storage and processing
- Updates: the Sherman-Morrison formula (checked numerically below)
    (A + uv^T)^{-1} = A^{-1} - (A^{-1}u)(v^T A^{-1}) / (1 + v^T A^{-1} u),
  and the Woodbury formula
- Conjugate gradient method, with efficiently implemented matrix-vector products
- Concepts and criteria of positive definiteness and positive semi-definiteness
- Cholesky decomposition method in symmetric positive definite systems
- Nature of sparsity and its exploitation

Necessary Exercises: 1, 2, 4, 7
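A quick numerical check of the Sherman-Morrison update (numpy assumed; random test data). The point is that the update costs O(n²) once A^{-1} is known, against O(n³) for refactoring:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
u, v = rng.standard_normal(n), rng.standard_normal(n)

Ainv = np.linalg.inv(A)
# Sherman-Morrison: rank-one update of a known inverse
updated = Ainv - np.outer(Ainv @ u, v @ Ainv) / (1.0 + v @ Ainv @ u)

print(np.allclose(updated, np.linalg.inv(A + np.outer(u, v))))   # True
```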
- Weighted norm
    ‖x‖_W = sqrt(x^T W x),
  where weight matrix W is symmetric and positive definite.
Figure: Panels (a) Reference system, (b) Parallel shift, (c) Guess validation, (d) Singularity

Square of error norm:
  U(x) = (1/2)‖Ax - b‖² = (1/2)(Ax - b)^T (Ax - b) = (1/2) x^T A^T A x - x^T A^T b + (1/2) b^T b
Least square error solution:
  ∂U/∂x = A^T A x - A^T b = 0
  ⇒  A^T A x = A^T b  ⇒  x = (A^T A)^{-1} A^T b
The pseudoinverse (here a left-inverse):
  A# = (A^T A)^{-1} A^T
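A short sketch contrasting the normal-equations solution with a library least-squares call (numpy assumed; the straight-line-fit data are made up for illustration):

```python
import numpy as np

# Overdetermined system: fit y = c0 + c1*t through 4 measured points
t = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 1.9, 3.2, 3.9])
A = np.column_stack([np.ones_like(t), t])      # 4 equations, 2 unknowns

# Normal equations:  A^T A x = A^T b
x_normal = np.linalg.solve(A.T @ A, A.T @ y)
# Same least-square solution via the library routine (numerically safer)
x_lstsq, *_ = np.linalg.lstsq(A, y, rcond=None)
print(x_normal, x_lstsq)
```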
Extremum of the Lagrangian L(x, λ) = (1/2) x^T x - λ^T (Ax - b) is given by
  ∂L/∂x = 0, ∂L/∂λ = 0  ⇒  x = A^T λ, Ax = b.
Solution x = A^T (AA^T)^{-1} b gives the foot of the perpendicular on the solution plane, and the pseudoinverse
  A# = A^T (AA^T)^{-1}
here is a right-inverse!

Tikhonov regularization: the coefficient matrix is symmetric and positive definite! The idea: immunize the system, paying a small price. Issues:
- The choice of ν?
- When m < n, computational advantage by
    (AA^T + ν² I_m) λ = b,  x = A^T λ.
Eigenvalues and Eigenvectors
- Eigenvalue Problem
- Generalized Eigenvalue Problem
- Some Basic Theoretical Results
- Power Method

Eigenvalue problem:  Av = λv
Eigenvector (v) and eigenvalue (λ): together, an eigenpair (λ, v) of the algebraic eigenvalue problem
  (λI - A)v = 0.
For a non-trivial (non-zero) solution v,
  det(λI - A) = 0.
Free vibration of an n-dof system:
  M ẍ + K x = 0.
Natural frequencies and corresponding modes? Assuming a vibration mode x = φ sin(ωt + ψ), we get the generalized eigenvalue problem Kφ = ω² Mφ.

Caution: Eigenvectors of A and A^T need not be the same.
Diagonal and block diagonal matrices: eigenvalues of a diagonal matrix are its diagonal entries. Corresponding eigenvectors: natural basis members (e1, e2 etc).
Diagonalization and Similarity Transformations
- Diagonalizability
- Canonical Forms
- Symmetric Matrices
- Similarity Transformations
Jordan canonical form (JCF):
  J = diag(J1, J2, ..., Jk),  where each Jordan block has the form
        [ λ  1       ]
  J_r = [    λ  ⋱    ]
        [       ⋱  1 ]
        [          λ ]
Other convenient forms: diagonal (canonical) form, triangular (canonical) form, tridiagonal form, Hessenberg form.

The key equation AS = SJ in extended form gives
  A[ ⋯ S_r ⋯ ] = [ ⋯ S_r ⋯ ] diag( ⋯, J_r, ⋯ ),
where Jordan block J_r is associated with the subspace of
  S_r = [v w2 w3 ⋯].
For a 3×3 Jordan block,
                             [ λ  1    ]
  [Av  Aw2  Aw3] = [v w2 w3] [    λ  1 ] .
                             [       λ ]
Columnwise equality leads to
  Av = λv,  Aw2 = v + λw2,  Aw3 = w2 + λw3.
Generalized eigenvectors w2, w3 etc:
  (A - λI)v = 0,
  (A - λI)w2 = v  and  (A - λI)² w2 = 0,
  (A - λI)w3 = w2  and  (A - λI)³ w3 = 0.

Diagonal form: special case of Jordan form, with each Jordan block of 1×1 size.
- Matrix is diagonalizable
- Similarity transformation matrix S is composed of n linearly independent eigenvectors as columns
- None of the eigenvectors admits any generalized eigenvector
- Equal geometric and algebraic multiplicities for every eigenvalue
A real symmetric matrix has all real eigenvalues and is diagonalizable through an orthogonal similarity transformation.
- Eigenvalues must be real.
- A complete set of eigenvectors exists.
- Eigenvectors corresponding to distinct eigenvalues are necessarily orthogonal.
- Corresponding to repeated eigenvalues, orthogonal eigenvectors are available.
In all cases of a symmetric matrix, we can form an orthogonal matrix V, such that V^T A V = Λ is a real diagonal matrix. Further, A = VΛV^T. Similar results hold for complex Hermitian matrices.

Proposition: Eigenvalues of a real symmetric matrix must be real.
Take A ∈ R^{n×n} such that A = A^T, with eigenvalue λ = h + ik. Since λI - A is singular, so is
  B = (λI - A)(λ̄I - A) = (hI - A + ikI)(hI - A - ikI) = (hI - A)² + k² I.
For some x ≠ 0, Bx = 0, and
  x^T B x = 0  ⇒  x^T (hI - A)^T (hI - A) x + k² x^T x = 0.
Thus, ‖(hI - A)x‖² + ‖kx‖² = 0
  ⇒  k = 0 and λ = h.
Spectral decomposition:
                           [ λ1           ] [ v1^T ]
  A = VΛV^T = [v1 v2 ⋯ vn] [    λ2        ] [ v2^T ]
                           [        ⋱     ] [  :   ]
                           [           λn ] [ vn^T ]
    = λ1 v1 v1^T + λ2 v2 v2^T + ⋯ + λn vn vn^T = Σ_{i=1}^{n} λi vi vi^T
- Reconstruction from a sum of rank-one components
- Efficient storage with only large eigenvalues and corresponding eigenvectors
- Deflation technique
- Stable and effective methods: easier to solve the eigenvalue problem

Figure: Eigenvalue problem: forms and steps (Symmetric → Tridiagonal → Triangular; Symmetric Tridiagonal → Diagonal)

How to find suitable similarity transformations?
1. rotation
2. reflection
3. matrix decomposition or factorization
4. elementary transformation
Figure: Rotation of axes OXY to OX'Y' through angle θ, with point P(x, y)

Orthogonal change of basis:
      [x]   [cos θ  -sin θ] [x']
  r = [y] = [sin θ   cos θ] [y'] = ℜ r'
Mapping of position vectors with
  ℜ^{-1} = ℜ^T = [ cos θ  sin θ]
                 [-sin θ  cos θ]
Contrast between Jacobi and Givens rotation methods:
- What happens to intermediate zeros?
- What do we get after a complete sweep?
- How many sweeps are to be applied?
- What is the intended final form of the matrix?
- How is size of the matrix relevant in the choice of the method?

Rotation transformation on symmetric matrices:
- Plane rotations provide orthogonal changes of basis that can be used for diagonalization of matrices.
- For small matrices (say 4 ≤ n ≤ 8), Jacobi rotation sweeps are competitive enough for diagonalization up to a reasonable tolerance (see the sketch below).
- For large matrices, one sweep of Givens rotations can be applied to get a symmetric tridiagonal matrix, for efficient further processing.

Fast forward ...
- Householder method accomplishes tridiagonalization more efficiently than the Givens rotation method.
- But, with a half-processed matrix, there come situations in which the Givens rotation method turns out to be more efficient!

Necessary Exercises: 2, 3, 4
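A bare-bones Jacobi sweep routine for a small symmetric matrix (numpy assumed; the rotation angle is chosen to annihilate the (p, q) entry, and the convergence control is deliberately crude, a sketch rather than a production routine):

```python
import numpy as np

def jacobi_sweeps(A, tol=1e-10, max_sweeps=20):
    """Diagonalize a symmetric matrix by repeated Jacobi rotation sweeps."""
    A = A.copy()
    n = A.shape[0]
    V = np.eye(n)
    for _ in range(max_sweeps):
        off = np.sqrt(max(np.sum(A**2) - np.sum(np.diag(A)**2), 0.0))
        if off < tol:
            break
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p, q]) < 1e-15:
                    continue
                # tan(2*theta) = 2 a_pq / (a_pp - a_qq) annihilates a_pq
                theta = 0.5 * np.arctan2(2 * A[p, q], A[p, p] - A[q, q])
                c, s = np.cos(theta), np.sin(theta)
                R = np.eye(n); R[p, p] = R[q, q] = c; R[p, q] = -s; R[q, p] = s
                A = R.T @ A @ R
                V = V @ R
    return np.diag(A), V

A = np.array([[4., 1., 0.], [1., 3., 1.], [0., 1., 2.]])
lam, V = jacobi_sweeps(A)
print(np.sort(lam), np.sort(np.linalg.eigvalsh(A)))   # agree
```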
With Householder reflections H_k = I_k - 2ww^T applied in a sequence of orthogonal similarity transformations, the end result is symmetric tridiagonal.
Figure: Interlacing of the roots of p_{i+2}(λ) with the roots λ_{i+1}, ..., λ_2, λ_1 of p_{i+1}(λ)

Proof
p1(λ) has a single root, d1.
Let λ1 > λ2 > ⋯ > λ_{i+1} be the roots of p_{i+1}(λ). Since λ1 exceeds all roots of p_i, p_i(λ1) is of the same sign as p_i(∞), i.e. positive. Therefore, p_{i+2}(λ1) = -e²_{i+2} p_i(λ1) is negative. But, p_{i+2}(∞) is clearly positive. Hence, p_{i+2} has a root λ̄1 ∈ (λ1, ∞). Similarly, λ̄_{i+2} ∈ (-∞, λ_{i+1}).
Question: Where are the rest of the i roots of p_{i+2}(λ)?
  p_{i+2}(λ_j) = (λ_j - d_{i+2}) p_{i+1}(λ_j) - e²_{i+2} p_i(λ_j) = -e²_{i+2} p_i(λ_j),
  p_{i+2}(λ_{j+1}) = -e²_{i+2} p_i(λ_{j+1}).
That is, p_i and p_{i+2} are of opposite signs at each λ_j. Refer figure. Over [λ_{i+1}, λ1], p_{i+2}(λ) changes sign over each sub-interval [λ_{j+1}, λ_j], along with p_i(λ), to maintain opposite signs at each λ_j.
Conclusion: p_{i+2}(λ) has exactly one root in (λ_{j+1}, λ_j).

Examine the sequence P(w) = {p0(w), p1(w), p2(w), ..., pn(w)}.
If p_k(w) and p_{k+1}(w) have opposite signs, then p_{k+1}(λ) has one root more than p_k(λ) in the interval (w, ∞).
Number of roots of pn(λ) above w = number of sign changes in the sequence P(w).
Consequence: Number of roots of pn(λ) in (a, b) = difference between numbers of sign changes in P(a) and P(b).
Bisection method: Examine the sequence at (a + b)/2. Separate roots, bracket each of them and then squeeze the interval!
Any way to start with an interval to include all eigenvalues?
  |λi| ≤ bnd = max_{1≤j≤n} { |e_j| + |d_j| + |e_{j+1}| }
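A sketch of the bracketing-and-bisection scheme on the Sturm-sequence count (numpy assumed; no rescaling of the sequence is done, so this is meant for small matrices only):

```python
import numpy as np

def sturm_changes(d, e, w):
    """Sign changes in {p_r(w)}: the number of eigenvalues of the symmetric
    tridiagonal matrix (diagonal d, sub-diagonal e) lying above w."""
    changes, p_prev, p_cur = 0, 1.0, w - d[0]
    n = len(d)
    for r in range(1, n + 1):
        if p_cur == 0.0:
            p_cur = -1e-300 * np.sign(p_prev)   # a zero inherits the opposite sign
        if p_prev * p_cur < 0.0:
            changes += 1
        if r < n:   # recursion: p_{r+1} = (w - d_{r+1}) p_r - e_{r+1}^2 p_{r-1}
            p_prev, p_cur = p_cur, (w - d[r]) * p_cur - e[r - 1]**2 * p_prev
    return changes

def tridiag_eigs(d, e, tol=1e-12):
    """All eigenvalues by bisection on the Sturm-sequence count."""
    ee = np.concatenate([[0.0], np.abs(e), [0.0]])
    bnd = max(ee[j] + abs(d[j]) + ee[j + 1] for j in range(len(d)))
    eigs = []
    for k in range(1, len(d) + 1):              # k-th largest eigenvalue
        a, b = -bnd, bnd
        while b - a > tol:
            m = 0.5 * (a + b)
            if sturm_changes(d, e, m) >= k:
                a = m                           # at least k eigenvalues above m
            else:
                b = m
        eigs.append(0.5 * (a + b))
    return np.array(eigs)

d = np.array([2.0, 3.0, 4.0, 3.0]); e = np.array([1.0, 1.0, 1.0])
T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)
print(tridiag_eigs(d, e), np.sort(np.linalg.eigvalsh(T))[::-1])   # agree
```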
Note: The algorithm is based on the Sturmian sequence property.
Necessary Exercises: 2, 4, 5
Outline: QR Decomposition Method
- QR Decomposition
- QR Iterations
- Conceptual Basis of QR Method*
- QR Algorithm with Shift*

QR Decomposition
Decomposition (or factorization) A = QR into two factors, orthogonal Q and upper-triangular R:
(a) It always exists.
(b) Performing this decomposition is pretty straightforward.
(c) It has a number of properties useful in the solution of the eigenvalue problem.
                        [ r11  ⋯  r1n ]
  [a1 ⋯ an] = [q1 ⋯ qn] [      ⋱   :  ]
                        [          rnn ]
A simple method based on Gram-Schmidt orthogonalization: considering columnwise equality a_j = Σ_{i=1}^{j} r_ij q_i, for j = 1, 2, 3, ..., n:
  r_ij = q_i^T a_j  for i < j,    a'_j = a_j - Σ_{i=1}^{j-1} r_ij q_i,    r_jj = ‖a'_j‖;
  q_j = a'_j / r_jj,  if r_jj ≠ 0;
        any vector satisfying q_i^T q_j = δ_ij for 1 ≤ i ≤ j,  if r_jj = 0.
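The Gram-Schmidt construction above, transcribed almost literally (numpy assumed; full-column-rank case only, so the r_jj = 0 branch is omitted):

```python
import numpy as np

def qr_gram_schmidt(A):
    """QR decomposition by classical Gram-Schmidt orthogonalization."""
    m, n = A.shape
    Q = np.zeros((m, n)); R = np.zeros((n, n))
    for j in range(n):
        a = A[:, j].copy()
        for i in range(j):
            R[i, j] = Q[:, i] @ A[:, j]     # r_ij = q_i^T a_j
            a -= R[i, j] * Q[:, i]          # a'_j = a_j - sum_i r_ij q_i
        R[j, j] = np.linalg.norm(a)
        Q[:, j] = a / R[j, j]               # assumes r_jj != 0
    return Q, R

A = np.array([[1., 1., 0.], [1., 0., 1.], [0., 1., 1.]])
Q, R = qr_gram_schmidt(A)
print(np.allclose(Q @ R, A), np.allclose(Q.T @ Q, np.eye(3)))   # True True
```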
Practical method: one-sided Householder transformations, starting with
  u0 = a1,  v0 = ‖u0‖ e1 ∈ R^n  and  w0 = (u0 - v0) / ‖u0 - v0‖,
and P0 = H_n = I_n - 2 w0 w0^T, so that
  P0 A = [ ‖a1‖   *  ]
         [   0    A1 ] .
Continuing on the sub-matrices,
  P_{n-2} P_{n-3} ⋯ P2 P1 P0 A = R (upper-triangular).
With
  Q = (P_{n-2} P_{n-3} ⋯ P2 P1 P0)^T = P0 P1 P2 ⋯ P_{n-3} P_{n-2},
we have Q^T A = R  ⇒  A = QR.

Alternative method, useful for tridiagonal and Hessenberg matrices: one-sided plane rotations,
- rotations P12, P23 etc to annihilate a21, a32 etc in that sequence:
Givens rotation matrices!

Application in solution of a linear system: Q and R factors of a matrix A come handy in the solution of Ax = b:
  A = QR  ⇒  QRx = b  ⇒  Rx = Q^T b
needs only a sequence of back-substitutions.
QR Iterations
Multiplying Q and R factors in reverse,
  A' = RQ = Q^T AQ,
an orthogonal similarity transformation.
1. If A is symmetric, then so is A'.
2. If A is in upper Hessenberg form, then so is A'.
3. If A is symmetric tridiagonal, then so is A'.
Complexity of one QR iteration: O(n) for a symmetric tridiagonal matrix, O(n²) operations for an upper Hessenberg matrix and O(n³) for the general case.

Algorithm (sketched in code below): set A1 = A and for k = 1, 2, 3, ...
- decompose A_k = Q_k R_k,
- reassemble A_{k+1} = R_k Q_k.
As k → ∞, A_k approaches a quasi-upper-triangular form: diagonal entries λ1, λ2, ... and diagonal blocks B_k, zeros below the (block) diagonal, with |λ1| > |λ2| > ⋯.
- Diagonal blocks B_k correspond to eigenspaces of equal/close (magnitude) eigenvalues.
- 2×2 diagonal blocks often correspond to pairs of complex eigenvalues (for non-symmetric matrices).
- For symmetric matrices, the quasi-upper-triangular form reduces to quasi-diagonal form.
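A few unshifted QR iterations on a symmetric tridiagonal example, using the library QR factorization (numpy assumed):

```python
import numpy as np

def qr_iterations(A, steps=200):
    """Unshifted QR iteration: A_{k+1} = R_k Q_k = Q_k^T A_k Q_k."""
    Ak = A.copy()
    for _ in range(steps):
        Q, R = np.linalg.qr(Ak)
        Ak = R @ Q                     # reassemble in reverse order
    return Ak

A = np.array([[4., 1., 0.], [1., 3., 1.], [0., 1., 2.]])   # symmetric tridiagonal
Ak = qr_iterations(A)
print(np.round(Ak, 6))                        # nearly diagonal
print(np.sort(np.linalg.eigvalsh(A))[::-1])   # matches diag(Ak)
```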
Conceptual Basis of QR Method*
The QR decomposition algorithm operates on the basis of the relative magnitudes of eigenvalues and segregates subspaces:
  A^k Range{e1} = Range{q1} → Range{v1}
and
  (a1)_k ≡ Q_k^T A q1 → λ1 Q_k^T q1 = λ1 e1.
Further,
  A^k Range{e1, e2} = Range{q1, q2} → Range{v1, v2},
and
  (a2)_k ≡ Q_k^T A q2 → [ (λ1 - λ2)α1   λ2   0  ⋯ ]^T.
And, so on ... For i < j, the sub-diagonal entry a_ji decays through the iterations as (λj/λi)^k.

QR Algorithm with Shift*
With shift,
  Ā_k = A_k - μ_k I;   Ā_k = Q̄_k R̄_k,  Ā_{k+1} = R̄_k Q̄_k;   A_{k+1} = Ā_{k+1} + μ_k I.
The resulting transformation is
  A_{k+1} = R̄_k Q̄_k + μ_k I = Q̄_k^T Ā_k Q̄_k + μ_k I = Q̄_k^T (A_k - μ_k I) Q̄_k + μ_k I = Q̄_k^T A_k Q̄_k.
For the iteration,
  convergence ratio = (λj - μ_k) / (λi - μ_k).
Question: How to find a suitable value for μ_k?
- A general (non-symmetric) matrix may not be diagonalizable. We attempt to triangularize it.
- With real arithmetic, 2×2 diagonal blocks are inevitable, signifying complex pairs of eigenvalues.
- Higher computational complexity, slow convergence and lack of numerical stability.
A non-symmetric matrix is usually unbalanced and is prone to higher round-off errors.
Balancing as a pre-processing step: multiplication of a row and division of the corresponding column by the same number, ensuring similarity.
Note: A balanced matrix may get unbalanced again through similarity transformations that are not orthogonal!

Methods to find appropriate similarity transformations:
1. a full sweep of Givens rotations,
2. a sequence of n - 2 steps of Householder transformations, and
3. a cycle of coordinated Gaussian elimination.
Method based on Gaussian elimination or elementary transformations: the pre-multiplying matrix corresponding to the elementary row transformation and the post-multiplying matrix corresponding to the matching column transformation must be inverses of each other. Two kinds of steps:
- Pivoting
- Elimination
Pivoting step: A' = P_rs A P_rs = P_rs^{-1} A P_rs.
- Permutation P_rs: interchange of r-th and s-th columns.
- P_rs^{-1} = P_rs: interchange of r-th and s-th rows.
- Pivot locations: a21, a32, ..., a_{n-1,n-2}.
Elimination step: A' = G_r^{-1} A G_r with elimination matrix
        [ I_r  0      0     ]                   [ I_r  0      0     ]
  G_r = [ 0    1      0     ]   and   G_r^{-1} = [ 0    1      0     ] .
        [ 0    k  I_{n-r-1} ]                   [ 0   -k  I_{n-r-1} ]
- G_r^{-1}: Row (r+1+i) ← Row (r+1+i) - k_i Row (r+1), for i = 1, 2, 3, ..., n - r - 1
- G_r: Column (r+1) ← Column (r+1) + Σ_{i=1}^{n-r-1} [ k_i Column (r+1+i) ]

QR iterations: O(n²) operations for upper Hessenberg form.
Whenever a sub-diagonal zero appears, the matrix is split into two smaller upper Hessenberg blocks, and they are processed separately, thereby reducing the cost drastically.
Particular cases:
- a_{n,n-1} → 0: accept a_nn = λn as an eigenvalue, continue with the leading (n-1)×(n-1) sub-matrix.
- a_{n-1,n-2} → 0: separately find the eigenvalues λ_{n-1} and λn from
    [ a_{n-1,n-1}  a_{n-1,n} ]
    [ a_{n,n-1}    a_{n,n}   ] ,
  continue with the leading (n-2)×(n-2) sub-matrix.
Shift strategy: Double QR steps.
(λi)0: a good estimate of an eigenvalue λi of A.
Purpose: To find λi precisely and also to find vi.
Step: Select a random vector y0 (with ‖y0‖ = 1) and solve
  [A - (λi)0 I] y = y0.
Avi = λi vi gives [A - (λi)0 I] vi = [λi - (λi)0] vi.
Expanding y0 = Σ_{j=1}^{n} αj vj and y = Σ_{j=1}^{n} βj vj,
  Σ_{j=1}^{n} βj [A - (λi)0 I] vj = Σ_{j=1}^{n} αj vj
  ⇒  βj [λj - (λi)0] = αj  ⇒  βj = αj / [λj - (λi)0].
βi is typically large and eigenvector vi dominates y.
Result: y is a good estimate of vi, and hence
  [λi - (λi)0] y ≈ [A - (λi)0 I] y = y0.
Inner product with y0 gives the improved estimate
  (λi)1 = (λi)0 + 1 / (y0^T y).
Start with estimate (λi)0 and a normalized guess y0. For k = 0, 1, 2, ...
- Solve [A - (λi)k I] y = yk.
- Normalize y_{k+1} = y / ‖y‖.
- Improve (λi)_{k+1} = (λi)k + 1 / (yk^T y).
- If ‖y_{k+1} - yk‖ < ε, terminate.
Important issues:
- Update the eigenvalue once in a while, not at every iteration.
- Use some acceptable small number as artificial pivot.
- The method may not converge for a defective matrix or for one having complex eigenvalues.
- Repeated eigenvalues may inhibit the process.
(A sketch follows after the summary table.)

Table: Eigenvalue problem: summary of methods
- General, small (up to 4): characteristic polynomial by definition; polynomial root finding (eigenvalues); solution of linear systems (eigenvectors).
- Symmetric, intermediate (say, 4-12): Jacobi sweeps; selective Jacobi rotations as post-processing.
- Symmetric, intermediate: tridiagonalization (Givens rotation or Householder method); Sturm sequence property, bracketing and bisection (rough eigenvalues); inverse iteration (eigenvalue improvement and eigenvectors).
- Symmetric, large: tridiagonalization (usually Householder method); QR decomposition iterations.
- Non-symmetric, intermediate or large: balancing, then reduction to Hessenberg form (above methods or Gaussian elimination); QR decomposition iterations (eigenvalues); inverse iteration (eigenvectors).
- General, very large (selective requirement): power method, shift and deflation.
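A sketch of the inverse iteration loop (numpy assumed). Against the advice above, this version improves the eigenvalue at every iteration, which is simpler to write and usually converges very fast:

```python
import numpy as np

def inverse_iteration(A, lam0, tol=1e-10, max_iter=50):
    """Refine an eigenvalue estimate lam0 of A and find the eigenvector."""
    n = A.shape[0]
    rng = np.random.default_rng(1)
    y = rng.standard_normal(n); y /= np.linalg.norm(y)
    lam = lam0
    for _ in range(max_iter):
        z = np.linalg.solve(A - lam * np.eye(n), y)   # [A - lam*I] z = y_k
        lam += 1.0 / (y @ z)                          # eigenvalue improvement
        z /= np.linalg.norm(z)
        done = min(np.linalg.norm(z - y), np.linalg.norm(z + y)) < tol
        y = z
        if done:
            break
    return lam, y

A = np.array([[4., 1., 0.], [1., 3., 1.], [0., 1., 2.]])
lam, v = inverse_iteration(A, lam0=4.5)
print(lam, np.linalg.norm(A @ v - lam * v))   # residual ~ 0
```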
Do not ask for similarity. Focus on the form of the decomposition: A = UΣV^T.
Guaranteed decomposition with orthogonal U, V, and non-negative diagonal entries in Σ. For A ∈ R^{m×n},
  A^T A = (VΣ^T U^T)(UΣV^T) = V(Σ^TΣ)V^T.
All three factors in the decomposition are constructed, as desired. Rank of a matrix is the same as the number of its
non-zero singular values.
Pseudoinverse solution:
  x = V Σ# U^T b = Σ_{k=1}^{r} (1/σk) vk uk^T b = Σ_{k=1}^{r} (uk^T b / σk) vk
- (A#)# = A.
- If A is invertible, then A# = A^{-1}, and A# b gives the correct unique solution.
- If Ax = b is an under-determined consistent system, then A# b selects the solution x with the minimum norm.
- If the system is inconsistent, then A# b minimizes the least square error ‖Ax - b‖.
- If the minimizer of ‖Ax - b‖ is not unique, then it picks up that minimizer which has the minimum norm ‖x‖ among such minimizers.

Minimize
  E(x) = (1/2)(Ax - b)^T (Ax - b) = (1/2) x^T A^T A x - x^T A^T b + (1/2) b^T b.
Condition of vanishing gradient:
  ∂E/∂x = 0  ⇒  A^T A x = A^T b
  ⇒  V(Σ^TΣ)V^T x = VΣ^T U^T b
  ⇒  (Σ^TΣ)V^T x = Σ^T U^T b
  ⇒  σk² vk^T x = σk uk^T b
  ⇒  vk^T x = uk^T b / σk  for k = 1, 2, 3, ..., r.

Contrast with Tikhonov regularization:
- Pseudoinverse solution for precision and diagnosis.
- Tikhonov's solution for continuity of solution over variable A and computational efficiency.
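The component-wise pseudoinverse solution, built directly from the SVD factors (numpy assumed; the tolerance standing in for "σk = 0" is a choice of this sketch):

```python
import numpy as np

def pinv_solve(A, b, tol=1e-12):
    """x = sum_k (u_k^T b / sigma_k) v_k over the sigma_k > tol only."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    x = np.zeros(A.shape[1])
    for k in range(len(s)):
        if s[k] > tol:                  # k <= r: usable components
            x += (U[:, k] @ b / s[k]) * Vt[k]
    return x                            # sigma_k ~ 0 rejected wholesale

A = np.array([[1., 2.], [2., 4.], [3., 6.]])   # rank-deficient (rank 1)
b = np.array([1., 1., 1.])
print(pinv_solve(A, b), np.linalg.pinv(A) @ b)  # agree: min-norm least squares
```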
With Ṽ = [v_{r+1} v_{r+2} ⋯ vn], the general solution is
  x = Σ_{k=1}^{r} (uk^T b / σk) vk + Ṽy = x* + Ṽy.
How to minimize ‖x‖² subject to E(x) minimum? Minimize E1(y) = ‖x* + Ṽy‖².
Since x* and Ṽy are mutually orthogonal,
  E1(y) = ‖x* + Ṽy‖² = ‖x*‖² + ‖Ṽy‖²
is minimum when Ṽy = 0, i.e. y = 0.

Using basis V for the domain and U for the co-domain, the variables are transformed as
  V^T x = y  and  U^T b = c.
Then,
  Ax = b  ⇒  UΣV^T x = b  ⇒  ΣV^T x = U^T b  ⇒  Σy = c.
A completely decoupled system!
Usable components: yk = ck / σk for k = 1, 2, 3, ..., r. For k > r,
- completely redundant information (ck = 0)
- purely unresolvable conflict (ck ≠ 0)
SVD extracts this pure redundancy/inconsistency. Setting 1/σk = 0 for k > r rejects it wholesale! At the same time, ‖y‖ is minimized, and hence ‖x‖ too.
Outline: Vector Spaces: Fundamental Concepts*
- Group
- Field
- Vector Space
- Linear Transformation
- Isomorphism
- Inner Product Space
- Function Space

Group: a set G and a binary operation, say +, fulfilling the group axioms (closure, associativity, existence of identity and of inverses).
- Subgroup
Field: a set F and two binary operations, say + and ·, satisfying the field axioms.
- Subfield
Examples: R^n, C^n, m×n real matrices etc.
Question: Will this process (of picking linearly independent vectors) ever end?
Suppose the above process ends after n choices of linearly independent vectors c1, c2, ..., cn; then any vector in the space is a linear combination
  v = k1 c1 + k2 c2 + ⋯ + kn cn.

Linear transformation:
  T(αa + βb) = αT(a) + βT(b)  for all α, β ∈ F and a, b ∈ V,
where V and W are vector spaces over the field F.
Question: How to describe the linear transformation T?
For a vector ξ = x1 c1 + x2 c2 + ⋯ + xn cn, coordinates in a column: x = [x1 x2 ⋯ xn]^T.
Mapping:
  T(ξ) = x1 T(c1) + x2 T(c2) + ⋯ + xn T(cn),
with coordinates Ax, as we know!
- Vector ξ is an actual object in the set V and the column x ∈ R^n is merely a list of its coordinates.
- T : V → W is the linear transformation and the matrix A simply stores coefficients needed to describe it.
- By changing bases of V and W, the same vector and the same linear transformation are now expressed by different x and A, respectively.
Isomorphism
Consider T : V → W that establishes a one-to-one correspondence.
- Linear transformation T defines a one-one onto mapping, which is invertible.
- dim V = dim W
- Inverse linear transformation T^{-1} : W → V
- T defines (is) an isomorphism.
- Vector spaces V and W are isomorphic to each other.
- Isomorphism is an equivalence relation. V and W are equivalent!

Consider vector spaces V and W over the same field F and of the same dimension n.
Question: Can we define an isomorphism between them?
Answer: Of course. As many as we want!
The underlying field and the dimension together completely specify a vector space, up to an isomorphism.
- All n-dimensional vector spaces over the field F are isomorphic to one another.
- In particular, they are all isomorphic to F^n.
- The representation (columns) can be considered as the objects (vectors) themselves.

If we need to perform some operations on vectors in one vector space, we may as well
1. transform the vectors to another vector space through an isomorphism,
2. conduct the required operations there, and
3. map the results back to the original space through the inverse.
Inner product space: a vector space possessing an inner product
- Euclidean space: over R
- Unitary space: over C
A distance function or metric: d_V : V × V → R, such that d_V(a, b) = ‖a - b‖.
Correspondingly, does the set F of continuous functions over [a, b] form a vector space?
- Thus, F forms a vector space over R: an infinite dimensional vector space.
- Every function in this space is an (infinite dimensional) vector.
- Listing of values is just an obvious basis.
For a univariate function, Taylor's formula with remainder:
  f(x + δx) = f(x) + f'(x)δx + (1/2!)f''(x)δx² + ⋯ + (1/(n-1)!)f^{(n-1)}(x)δx^{n-1} + (1/n!)f^{(n)}(xc)δx^n,
where xc = x + tδx with 0 ≤ t ≤ 1.
Mean value theorem: existence of xc.
Taylor's series:
  f(x + δx) = f(x) + f'(x)δx + (1/2!)f''(x)δx² + ⋯
For a multivariate function,
  f(x + δx) = f(x) + [δx^T ∇]f(x) + (1/2!)[δx^T ∇]² f(x) + ⋯ + (1/(n-1)!)[δx^T ∇]^{n-1} f(x) + (1/n!)[δx^T ∇]^n f(x + tδx),
  f(x + δx) ≈ f(x) + [∇f(x)]^T δx + (1/2) δx^T [∂²f/∂x² (x)] δx.

Differential:
  df = [∇f(x)]^T dx = (∂f/∂x1)dx1 + (∂f/∂x2)dx2 + ⋯ + (∂f/∂xn)dxn
Ordinary derivative or total derivative:
  df/dt = [∇f(x)]^T dx/dt
For f(t, x(t)), total derivative:
  df/dt = ∂f/∂t + [∇f(x)]^T dx/dt
For f(v, x(v)) = f(v1, v2, ..., vm, x1(v), x2(v), ..., xn(v)),
  ∂f̄/∂vi (v, x(v)) = ∂f/∂vi + (∂f/∂x)(∂x/∂vi) = ∂f/∂vi + [∇x f(v, x)]^T ∂x/∂vi,
  ∇v f̄(v, x(v)) = ∇v f(v, x) + [∂x/∂v]^T ∇x f(v, x).
where, for a = [ax ay az]^T, the skew-symmetric cross-product matrix is
       [  0  -az   ay ]
  ã =  [  az   0  -ax ] .
       [ -ay  ax    0 ]
Curvature: the rate at which the direction changes with arc length,
  κ(s) = ‖u'(s)‖ = ‖r''(s)‖.
Unit principal normal:
  p = (1/κ) u'(s)
With general parametrization,
  r''(t) = (d‖r'‖/dt) u(t) + ‖r'(t)‖ du/dt = (d‖r'‖/dt) u(t) + κ(t)‖r'‖² p(t)

Figure: Osculating plane, centre of curvature C and radius of curvature ρ = 1/κ

Binormal: b = u × p
Serret-Frenet frame: right-handed triad {u, p, b}
- Osculating, rectifying and normal planes
Torsion: twisting out of the osculating plane,
- the rate of change of b with respect to arc length s:
  b' = u' × p + u × p' = κ(s) p × p + u × p' = u × p'
What is p'? Taking p' = -κu + τb,
  b' = u × (-κu + τb) = -τp.
τ: torsion of the curve
Figure: Regions for Green's theorem (in the plane) and Gauss's theorem (in space)

Green's theorem in the plane:
  ∫∫_R (∂F2/∂x) dx dy = ∫_c^d [ ∫_{x1(y)}^{x2(y)} (∂F2/∂x) dx ] dy = ∮_C F2(x, y) dy
Difference of the two such results:
  ∮_C (F1 dx + F2 dy) = ∫∫_R (∂F2/∂x - ∂F1/∂y) dx dy
In alternative form, ∮_C F · dr = ∫∫_R (curl F) · k dx dy.

Gauss's divergence theorem:
  ∫∫∫_T (∂Fx/∂x + ∂Fy/∂y + ∂Fz/∂z) dx dy dz = ∫∫_S (Fx nx + Fy ny + Fz nz) dS
To show: ∫∫∫_T (∂Fz/∂z) dx dy dz = ∫∫_S Fz nz dS.
First consider a region, the boundary of which is intersected at most twice by any line parallel to a coordinate axis.
Lower and upper segments of S: z = z1(x, y) and z = z2(x, y), with R the projection of T on the xy-plane.
  ∫∫∫_T (∂Fz/∂z) dx dy dz = ∫∫_R [ ∫_{z1}^{z2} (∂Fz/∂z) dz ] dx dy
    = ∫∫_R [ Fz{x, y, z2(x, y)} - Fz{x, y, z1(x, y)} ] dx dy
Projection of the area element of the upper segment: nz dS = dx dy
Projection of the area element of the lower segment: nz dS = -dx dy
Thus, ∫∫∫_T (∂Fz/∂z) dx dy dz = ∫∫_S Fz nz dS.
Sum of three such components leads to the result. Extension to arbitrary regions by a suitable subdivision of the domain!

Green's identities (theorem): direct consequences of Gauss's theorem.
Region T and boundary S: as required in the premises of Gauss's theorem.
φ(x, y, z) and ψ(x, y, z): scalar functions with second order continuity.
  ∫∫_S φ ∇ψ · n dS = ∫∫∫_T (φ∇²ψ + ∇φ · ∇ψ) dv
  ∫∫_S (φ∇ψ - ψ∇φ) · n dS = ∫∫∫_T (φ∇²ψ - ψ∇²φ) dv
To establish, apply Gauss's divergence theorem on φ∇ψ, and then on ψ∇φ as well.
A polynomial
  p(x) = a0 x^n + a1 x^{n-1} + a2 x^{n-2} + ⋯ + a_{n-1} x + an
has exactly n roots x1, x2, ..., xn, with
  p(x) = a0 (x - x1)(x - x2)(x - x3) ⋯ (x - xn).
In general, roots are complex.
Multiplicity: a root of p(x) with multiplicity k satisfies
  p(x) = p'(x) = p''(x) = ⋯ = p^{(k-1)}(x) = 0.
- Descartes' rule of signs
- Bracketing and separation
- Synthetic division and deflation:  p(x) = f(x)q(x) + r(x)

Quadratic equations:
  ax² + bx + c = 0  ⇒  x = [-b ± sqrt(b² - 4ac)] / (2a)
Method of completing the square:
  x² + (b/a)x + (b/(2a))² = b²/(4a²) - c/a  ⇒  (x + b/(2a))² = (b² - 4ac)/(4a²)

Cubic equations (Cardano):
  x³ + ax² + bx + c = 0
Completing the cube? Substituting y = x + k,
  y³ + (a - 3k)y² + (b - 2ak + 3k²)y + (c - bk + ak² - k³) = 0.
Choose the shift k = a/3.
Applied Mathematical Methods Polynomial Equations 203, Applied Mathematical Methods Polynomial Equations 204,
y3 u3 v3
Assuming y = u + v , we have = + + 3uv (u + v ). a 2 a2
x 4 +ax 3 +bx 2 +cx +d = 0 x2 + x = b x 2 cx d
uv = p/3 2 4
u 3 + v 3 = q For a perfect square,
4p 3 2
y 2 ay
3 3 2 2
and hence (u v ) = q + . a a2 y
27 x2 + x + = b+y x2 + c x + d
2 2 4 2 4
Solution:
r Under what condition, the new RHS will be a perfect square?
3 q3 q2 p3
u ,v = + = A, B (say). ay 2
2 4 27 a2 y2
c 4 b+y d =0
2 4 4
u = A1 , A1 , A1 2 , and v = B1 , B1 , B1 2
Resolvent of a quartic:
y1 = A1 + B1 , y2 = A1 + B1 2 and y3 = A1 2 + B1 .
y 3 by 2 + (ac 4d)y + (4bd a2 d c 2 ) = 0
At least one of the roots is real!!
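Cardano's procedure, transcribed with complex arithmetic so that all three roots emerge together (numpy assumed; real roots come out with negligible imaginary residue):

```python
import numpy as np

def cardano(a, b, c):
    """All roots of x^3 + a x^2 + b x + c = 0 (shift x = y - a/3, then Cardano)."""
    p = b - a**2 / 3.0                           # depressed cubic: y^3 + p y + q = 0
    q = c - a * b / 3.0 + 2.0 * a**3 / 27.0
    root = np.sqrt(complex(q**2 / 4.0 + p**3 / 27.0))
    A1 = (-q / 2.0 + root) ** (1.0 / 3.0)
    if abs(A1) < 1e-12:
        A1 = (-q / 2.0 - root) ** (1.0 / 3.0)    # take the other branch if u^3 = 0
    if abs(A1) < 1e-12:
        ys = [0.0, 0.0, 0.0]                     # p = q = 0: triple root
    else:
        w = np.exp(2j * np.pi / 3.0)             # primitive cube root of unity
        ys = [A1 * w**k - p / (3.0 * A1 * w**k) for k in range(3)]  # v = -p/(3u)
    return [y - a / 3.0 for y in ys]

# x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
print(np.round(cardano(-6.0, 11.0, -6.0), 6))
```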
Outline: Polynomial Equations
- Basic Principles
- Analytical Solution
- General Polynomial Equations
- Two Simultaneous Equations
- Elimination Methods*
- Advanced Techniques*

Elimination Methods*
The method operates similarly even if the degrees of the original equations in y are higher.
What about the degree of the eliminant equation? For two equations in x and y of degrees n1 and n2, the x-eliminant is an equation of degree n1 n2 in y.
Maximum number of solutions:
  Bezout number = n1 n2
Note: Deficient systems may have fewer solutions.
Classical methods of elimination:
- Sylvester's dialytic method
- Bezout's method

Advanced Techniques*
Three or more independent equations in as many unknowns?
- Cascaded elimination? Objections!
- Exploitation of special structures through clever heuristics (mechanisms kinematics literature)
- Grobner basis representation (algebraic geometry)
- Continuation or homotopy method by Morgan:
  For solving the system f(x) = 0, identify another structurally similar system g(x) = 0 with known solutions and construct the parametrized system
    h(x) = t f(x) + (1 - t) g(x) = 0  for t ∈ [0, 1].
  Track each solution from t = 0 to t = 1.
with x0 ← (x0 + x1)/2.
Fixed point iteration (x_{k+1} = g(x_k)): if x* is the unique solution in interval J and |g'(x)| ≤ h < 1 in J, then iteration from any x0 ∈ J converges to x*.
Newton-Raphson method: from
  f(x + δx) ≈ f(x) + f'(x) δx
and f(xk + δx) = 0,
  δx = -f(xk) / f'(xk).
Iteration:
  x_{k+1} = xk - f(xk) / f'(xk)
Convergence criterion:
  |f(x) f''(x)| < |f'(x)|²

Figure: Newton-Raphson iterations x0, x1, x2, x3 approaching the root x*

Secant method: in the Newton-Raphson formula, replace the derivative by the slope of the chord,
  f'(x) ≈ [f(xk) - f(x_{k-1})] / (xk - x_{k-1}).
Iteration:
  x_{k+1} = xk - (xk - x_{k-1}) f(xk) / [f(xk) - f(x_{k-1})]
Draw the chord or secant to f(x) through (x_{k-1}, f(x_{k-1})) and (xk, f(xk)).
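Both iterations in a few lines of Python (the test equation x³ - 2x - 5 = 0 is a classic example, not from the slides):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson: x_{k+1} = x_k - f(x_k)/f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        dx = -f(x) / fprime(x)
        x += dx
        if abs(dx) < tol:
            break
    return x

def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Secant method: f'(x_k) replaced by the slope through the last two points."""
    for _ in range(max_iter):
        x2 = x1 - (x1 - x0) * f(x1) / (f(x1) - f(x0))
        x0, x1 = x1, x2
        if abs(x1 - x0) < tol:
            break
    return x1

f = lambda x: x**3 - 2.0 * x - 5.0
print(newton(f, lambda x: 3.0 * x**2 - 2.0, 2.0), secant(f, 2.0, 3.0))
```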
Levenberg-Marquardt method
  Δf = f(x* + δx) - f(x*) = f'(x*)δx + (1/2!)f''(x*)δx² + (1/3!)f'''(x*)δx³ + (1/4!)f^{iv}(x*)δx⁴ + ⋯
Unconstrained minimization problem: x* is called a local minimum of f(x) if there exists δ such that f(x) ≥ f(x*) for all x satisfying ‖x - x*‖ < δ.
Optimality criteria: from Taylor's series,
  f(x) - f(x*) = [g(x*)]^T δx + (1/2) δx^T [H(x*)] δx + ⋯
An indefinite Hessian matrix characterizes a saddle point.

Convexity
Set S ⊆ R^n is a convex set if
  x1, x2 ∈ S and α ∈ (0, 1)  ⇒  αx1 + (1 - α)x2 ∈ S.
Function f(x) over a convex set S: a convex function if, for x1, x2 ∈ S and α ∈ (0, 1),
  f(αx1 + (1 - α)x2) ≤ αf(x1) + (1 - α)f(x2).
The chord approximation is an overestimate at intermediate points!
Figure: A convex domain. Figure: A convex function.
Optimization Algorithms
From the current point, move to another point, hopefully better. Which way to go? How far to go? Which decision is first?
Strategies and versions of algorithms:
- Trust region: develop a local quadratic model
    f̄(xk + δx) = f(xk) + [g(xk)]^T δx + (1/2) δx^T Fk δx,
  and minimize it in a small trust region around xk. (Define the trust region with dummy boundaries.)
- Line search: identify a descent direction dk and minimize the function along it through the univariate function
    φ(α) = f(xk + α dk).
  - Exact or accurate line search
  - Inexact or inaccurate line search
  - Armijo, Goldstein and Wolfe conditions

Convergence of algorithms: notions of guarantee and speed.
- Global convergence: the ability of an algorithm to approach and converge to an optimal solution for an arbitrary problem, starting from an arbitrary point. Practically, a sequence (or even subsequence) of monotonically decreasing errors is enough.
- Local convergence: the rate/speed of approach, measured by p, where
    β = lim_{k→∞} ‖x_{k+1} - x*‖ / ‖xk - x*‖^p < ∞.
  - Linear, quadratic and superlinear rates of convergence for p = 1, 2 and intermediate.
  - Comparison among algorithms with linear rates of convergence is by the convergence ratio β.
Direct search methods, using only function values:
- Cyclic coordinate search
- Rosenbrock's method
- Hooke-Jeeves pattern search
- Box's complex method
- Nelder and Mead's simplex search
- Powell's conjugate directions method
Useful for functions whose derivative either does not exist at all points in the domain or is computationally costly to evaluate.
Note: When derivatives are easily available, gradient-based algorithms appear as mainstream methods.

Nelder and Mead's simplex search: a simplex in n-dimensional space is a polytope formed by n + 1 vertices. The method iterates over simplices that are non-degenerate (i.e. enclosing non-zero hypervolume).
First, n + 1 suitable points are selected for the starting simplex. Among vertices of the current simplex, identify the worst point xw, the best point xb and the second worst point xs. Need to replace xw with a good point: reflect it through the centre of gravity of the face not containing xw,
  xc = (1/n) Σ_{i=1, i≠w}^{n+1} xi,
as sketched in the code below.
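A sketch of just the centroid-and-reflection step on a 2-D quadratic (numpy assumed); the full Nelder-Mead method adds expansion, contraction and shrinking, which are omitted here:

```python
import numpy as np

def reflect_worst(simplex, f):
    """One reflection step: replace the worst vertex through the centroid
    of the opposite face (a sketch of the basic move only)."""
    vals = np.array([f(x) for x in simplex])
    w = np.argmax(vals)                               # worst vertex x_w
    xc = (np.sum(simplex, axis=0) - simplex[w]) / (len(simplex) - 1)
    xr = 2.0 * xc - simplex[w]                        # reflect x_w through x_c
    if f(xr) < vals[w]:
        simplex[w] = xr                               # accept the better point
    return simplex

f = lambda x: (x[0] - 1.0)**2 + (x[1] + 2.0)**2
simplex = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
for _ in range(60):
    simplex = reflect_worst(simplex, f)
print(simplex.mean(axis=0))      # drifts toward the minimum at (1, -2)
```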
Steepest Descent (Cauchy) Method
Steepest descent algorithm:
1. Select a starting point x0, set k = 0 and several parameters: tolerance εG on gradient, absolute tolerance εA on reduction in function value, relative tolerance εR on reduction in function value and maximum number of iterations M.
2. If ‖gk‖ ≤ εG, STOP. Else dk = -gk/‖gk‖.
3. Line search: obtain αk by minimizing φ(α) = f(xk + α dk), α > 0. Update x_{k+1} = xk + αk dk.
4. If |f(x_{k+1}) - f(xk)| ≤ εA + εR |f(xk)|, STOP. Else k ← k + 1.
5. If k > M, STOP. Else go to step 2.
Very good global convergence. But, why so many STOPs?

Analysis on a quadratic function: for minimizing q(x) = (1/2) x^T Ax + b^T x, the error function
  E(x) = (1/2)(x - x*)^T A (x - x*)
has convergence ratio
  E(x_{k+1}) / E(xk) ≤ [ (κ(A) - 1) / (κ(A) + 1) ]².
Local convergence is poor.
Importance of the steepest descent method:
- conceptual understanding
- initial iterations in a completely new problem
- spacer steps in other sophisticated methods
Re-scaling of the problem through change of variables?
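The algorithm above with a crude backtracking line search standing in for step 3 (numpy assumed; the ill-scaled quadratic is an illustrative choice):

```python
import numpy as np

def steepest_descent(f, grad, x0, eps_g=1e-6, eps_a=1e-12, eps_r=1e-10, M=10000):
    x = np.asarray(x0, dtype=float)
    for k in range(M):
        g = grad(x)
        if np.linalg.norm(g) <= eps_g:
            break                                   # gradient tolerance met
        d = -g / np.linalg.norm(g)
        alpha, f0 = 1.0, f(x)                       # crude backtracking search
        while f(x + alpha * d) > f0 and alpha > 1e-16:
            alpha *= 0.5
        x_new = x + alpha * d
        if abs(f(x_new) - f0) <= eps_a + eps_r * abs(f0):
            x = x_new
            break                                   # no useful reduction
        x = x_new
    return x

# ill-scaled quadratic: steepest descent zigzags (good global, poor local convergence)
f = lambda x: 0.5 * (x[0]**2 + 25.0 * x[1]**2)
grad = lambda x: np.array([x[0], 25.0 * x[1]])
print(steepest_descent(f, grad, [5.0, 1.0]))
```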
Methods of deflected gradients:
  x_{k+1} = xk - αk [Mk] gk
- identity matrix in place of Mk: steepest descent step
- Mk = F_k^{-1}: step of modified Newton's method
- Mk = [H(xk)]^{-1} and αk = 1: pure Newton's step
Hybrid (Levenberg-Marquardt) method: in Mk = [H(xk) + λk I]^{-1}, tune parameter λk over iterations.
- Initial value of λ: large enough to favour the steepest descent trend
- Improvement in an iteration: λ reduced by a factor
- Increase in function value: step rejected and λ increased
Opportunism systematized!
Note: Cost of evaluating the Hessian remains a bottleneck. Useful for problems where Hessian estimates come cheap!

Linear least square problem:
  y(θ) = x1 φ1(θ) + x2 φ2(θ) + ⋯ + xn φn(θ)
For measured values y(θi) = yi,
  ei = Σ_{k=1}^{n} xk φk(θi) - yi = [φ(θi)]^T x - yi.
Error vector: e = Ax - y
Least square fit: minimize E = (1/2) Σ_i ei² = (1/2) e^T e
Pseudoinverse solution and its variants.
Combining a modified form, diag(J^T J) δx = -g(x), of the steepest descent formula with Newton's formula,
Levenberg-Marquardt step:
  [J^T J + λ diag(J^T J)] δx = -g(x)
A professional procedure for nonlinear least square problems, and also for solving systems of nonlinear equations in the form h(x) = 0.
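A sketch of the resulting iteration on a one-parameter exponential fit (numpy assumed; the residual/Jacobian functions and the factor-of-10 tuning of λ are illustrative choices):

```python
import numpy as np

def levenberg_marquardt(res, jac, x0, lam=1e-2, max_iter=100, tol=1e-10):
    """min (1/2)||res(x)||^2 via [J^T J + lam*diag(J^T J)] dx = -J^T res."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r, J = res(x), jac(x)
        g = J.T @ r                          # gradient of the objective
        if np.linalg.norm(g) < tol:
            break
        H = J.T @ J
        dx = np.linalg.solve(H + lam * np.diag(np.diag(H)), -g)
        r_new = res(x + dx)
        if r_new @ r_new < r @ r:
            x, lam = x + dx, lam / 10.0      # success: lean toward Newton
        else:
            lam *= 10.0                      # reject step: lean toward steepest descent
    return x

t = np.linspace(0.0, 2.0, 20)
y = np.exp(-1.3 * t)                          # synthetic data for y = exp(-a t)
res = lambda x: np.exp(-x[0] * t) - y
jac = lambda x: (-t * np.exp(-x[0] * t)).reshape(-1, 1)
print(levenberg_marquardt(res, jac, [0.5]))   # approaches a = 1.3
```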
Conjugacy of directions: two vectors d1 and d2 are mutually conjugate with respect to a symmetric matrix A if d1^T A d2 = 0.
Linear independence of conjugate directions: conjugate directions with respect to a positive definite matrix are linearly independent.
Expanding subspace property: in R^n, with conjugate vectors {d0, d1, ..., d_{n-1}} with respect to symmetric positive definite A, for any x0 ∈ R^n, the sequence {x0, x1, x2, ..., xn} generated as
  x_{k+1} = xk + αk dk,  with  αk = -(gk^T dk)/(dk^T A dk),
where gk = A xk + b, has the property that xk minimizes q(x) = (1/2) x^T Ax + b^T x on the line x_{k-1} + α d_{k-1}, as well as on the linear variety x0 + Bk, where Bk is the span of d0, d1, ..., d_{k-1}.

Question: How to find a set of n conjugate directions? Gram-Schmidt procedure is a poor option!
Conjugate gradient method: starting from d0 = -g0,
  d_{k+1} = -g_{k+1} + βk dk.
Imposing the condition of conjugacy of d_{k+1} with dk,
  βk = (g_{k+1}^T A dk)/(dk^T A dk) = [g_{k+1}^T (g_{k+1} - gk)] / (αk dk^T A dk).
The resulting d_{k+1} is conjugate to all the earlier directions, for a quadratic problem.
Using k in place of k + 1 in the formula for d_{k+1},
  dk = -gk + β_{k-1} d_{k-1}
  ⇒  gk^T dk = -gk^T gk  and  αk = (gk^T gk)/(dk^T A dk).
Polak-Ribiere formula:
  βk = [g_{k+1}^T (g_{k+1} - gk)] / (gk^T gk)
No need to know A! Further,
  g_{k+1}^T dk = 0  ⇒  g_{k+1}^T gk = β_{k-1}(gk^T + αk dk^T A) d_{k-1} = 0,
leading to the Fletcher-Reeves formula:
  βk = (g_{k+1}^T g_{k+1}) / (gk^T gk)

Extension to general (non-quadratic) functions:
- Varying Hessian A: determine the step size by line search.
- After n steps, the minimum is not attained. But gk^T dk = -gk^T gk implies guaranteed descent.
- Globally convergent, with superlinear rate of convergence.
- What to do after n steps? Restart or continue?

Algorithm:
1. Select x0 and tolerances εG, εD. Evaluate g0 = ∇f(x0).
2. Set k = 0 and dk = -gk.
3. Line search: find αk; update x_{k+1} = xk + αk dk.
4. Evaluate g_{k+1} = ∇f(x_{k+1}). If ‖g_{k+1}‖ ≤ εG, STOP.
5. Find βk = [g_{k+1}^T (g_{k+1} - gk)]/(gk^T gk) (Polak-Ribiere) or βk = (g_{k+1}^T g_{k+1})/(gk^T gk) (Fletcher-Reeves). Obtain d_{k+1} = -g_{k+1} + βk dk.
6. If 1 - |dk^T d_{k+1}| / (‖dk‖ ‖d_{k+1}‖) < εD, reset g0 = g_{k+1} and go to step 2. Else, k ← k + 1 and go to step 3.
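The algorithm in compact form, with Polak-Ribiere β and a simple backtracking search replacing the exact line search of step 3 (numpy assumed; β is clipped at zero as a pragmatic restart):

```python
import numpy as np

def conjugate_gradient(f, grad, x0, eps_g=1e-8, max_iter=1000):
    """Nonlinear CG with the Polak-Ribiere formula and a crude line search."""
    x = np.asarray(x0, dtype=float)
    g = grad(x); d = -g
    for _ in range(max_iter):
        alpha, f0 = 1.0, f(x)               # backtracking line search along d
        while f(x + alpha * d) > f0 and alpha > 1e-16:
            alpha *= 0.5
        x = x + alpha * d
        g_new = grad(x)
        if np.linalg.norm(g_new) <= eps_g:
            break
        beta = g_new @ (g_new - g) / (g @ g)   # Polak-Ribiere
        d = -g_new + max(beta, 0.0) * d        # restart when beta < 0
        g = g_new
    return x

f = lambda x: 0.5 * x[0]**2 + 2.5 * x[1]**2 + 0.5 * x[0] * x[1]
grad = lambda x: np.array([x[0] + 0.5 * x[1], 5.0 * x[1] + 0.5 * x[0]])
print(conjugate_gradient(f, grad, [3.0, 1.0]))   # approaches [0, 0]
```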
(g2 - g1) ⊥ Bk:
  di^T A (x2 - x1) = di^T (g2 - g1) = 0  for i = 1, 2, ..., k.
Constrained Optimization
- Constraints
- Optimality Criteria
- Sensitivity
- Duality*
- Structure of Methods: An Overview*

Constrained optimization problem:
  Minimize f(x)
  subject to gi(x) ≤ 0 for i = 1, 2, ..., l, or g(x) ≤ 0;
  and hj(x) = 0 for j = 1, 2, ..., m, or h(x) = 0.
Conceptually, minimize f(x), x ∈ Ω.
Equality constraints reduce the domain to a surface or a manifold, possessing a tangent plane at every point. Equality constraints reduce the dimension of the problem. Variable elimination?
Gradient of the vector function h(x):
  ∇h(x) ≡ [∇h1(x) ∇h2(x) ⋯ ∇hm(x)] = [ ∂h^T/∂x1; ∂h^T/∂x2; ⋯; ∂h^T/∂xn ],
related to the usual Jacobian as Jh(x) = ∂h/∂x = [∇h(x)]^T.

Constraint qualification: ∇h1(x), ∇h2(x) etc are linearly independent, i.e. ∇h(x) is full-rank. If a feasible point x0, with h(x0) = 0, satisfies the constraint qualification condition, we call it a regular point. At a regular feasible point x0, the tangent plane
  M = {y : [∇h(x0)]^T y = 0}
gives the collection of feasible directions.
Applied Mathematical Methods Constrained Optimization 263, Applied Mathematical Methods Constrained Optimization 264,
Constraints Constraints
Optimality Criteria
Sensitivity
Optimality Criteria Constraints
Optimality Criteria
Sensitivity
Duality* Duality*
Active inequality constraints gi (x0 ) = 0: Structure of Methods: An Overview*
Suppose x is a regular point with Structure of Methods: An Overview*
For a curve z(t) on the constraint surface,
  ż(0)^T [Hhj(x*)] ż(0) + [∇hj(x*)]^T z̈(0) = 0.
Restriction of the mapping HL(x*) : R^n → R^n on the subspace M?
Question: Matrix representation for the restriction L_M, of size (n - m) × (n - m)? Select a local orthonormal basis D ∈ R^{n×(n-m)} for M.

By choosing parameters p, we arrive at x*. Call it x*(p). Question: How does f(x*(p), p) depend on p?
Sensitivity to constraints
Suppose x* is a local minimum for the problem
  Minimize f(x) subject to h(x) = 0,
with Lagrange multiplier (vector) λ*:
  ∇f(x*) + [∇h(x*)] λ* = 0.
In particular, in a revised problem with h(x) = c and g(x) ≤ d, using p = c,
  ∇p f(x*, p) = 0,  ∇p h(x*, p) = -I  and  ∇p g(x*, p) = 0,
giving
  ∇c f(x*(p), p) = -λ*.
Similarly, using p = d, we get
  ∇d f(x*(p), p) = -μ*.
Lagrange multipliers λ and μ signify costs of pulling the minimum point in order to satisfy the constraints!
- Equality constraint: both sides infeasible; the sign of λj identifies one side or the other of the hypersurface.
- Inequality constraint: one side is feasible, no cost of pulling from that side, so μi ≥ 0.

Duality*
Dual problem: reformulation of a problem in terms of the Lagrange multipliers.
If HL(x*) is positive definite (assumption of local duality), then x* is also a local minimum of
  f̄(x) = f(x) + λ*^T h(x).
If we vary λ around λ*, the minimizer of
  L(x, λ) = f(x) + λ^T h(x)
varies continuously with λ.
In the neighbourhood of λ*, define the dual function
  ψ(λ) = min_x L(x, λ) = min_x [f(x) + λ^T h(x)].
Hessian of the dual function:
  Hψ(λ) = [∇λ x(λ)] ∇x h(x(λ)).
Consolidation (including all constraints) Structure of Methods: An Overview* For a problem of n variables, with m active Structure
constraints,
of Methods: An Overview*
I Assuming local convexity, the dual function: nature and dimension of working spaces
Penalty methods (R n ): Minimize the penalized function
(, ) = min L(x, , ) = min[f (x) + T h(x) + T g(x)].
x x
q(c, x) = f (x) + cP(x).
I Constraints on the dual: x L(x, , ) = 0, optimality of the
primal. Example: P(x) = 21 kh(x)k2 + 12 [max(0, g(x))]2 .
I Corresponding to inequality constraints of the primal problem, Primal methods (R nm ): Work only in feasible domain, restricting
non-negative variables in the dual problem. steps to the tangent plane.
I First order necessary conditons for the dual optimality: Example: Gradient projection method.
equivalent to the feasibility of the primal problem. Dual methods (R m ): Transform the problem to the space of
I The dual function is concave globally! Lagrange multipliers and maximize the dual.
Example: Augmented Lagrangian method.
I Under suitable conditions, ( ) = L(x , ) = f (x ).
Lagrange methods (R m+n ): Solve equations appearing in the KKT
I The Lagrangian L(x, , ) has a saddle point in the combined
conditions directly.
space of primal and dual variables: positive curvature along x
Example: Sequential quadratic programming.
directions and negative curvature along and directions.
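As a concrete illustration of the penalty idea above, here is a minimal sketch in Python (SciPy assumed available; the quadratic test problem and the penalty schedule are illustrative, not from the text):

import numpy as np
from scipy.optimize import minimize

# Minimize f(x) = x1^2 + 2*x2^2 subject to h(x) = x1 + x2 - 1 = 0
# via the quadratic penalty q(c, x) = f(x) + c*P(x), P(x) = (1/2)*h(x)^2.
f = lambda x: x[0]**2 + 2.0*x[1]**2
h = lambda x: x[0] + x[1] - 1.0

x = np.zeros(2)
for c in [1.0, 10.0, 100.0, 1000.0]:            # increasing penalty parameter
    q = lambda x, c=c: f(x) + 0.5 * c * h(x)**2
    x = minimize(q, x).x                        # warm-start from previous x
print(x)                                        # tends to (2/3, 1/3) as c grows

Each unconstrained minimization stays in R^n, and the constraint is enforced only in the limit of large c, exactly the trade-off the overview describes.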
Points to note
I Constraint qualification
I KKT conditions
I Second order conditions
I Basic ideas for solution strategy

Linear and Quadratic Programming Problems*
Outline
Linear Programming
Quadratic Programming
If q ≥ 0, then w = q, z = 0 is a solution!
Lemke's method: artificial variable z_0 ≥ 0 with e = [1 1 ⋯ 1]^T:
    Iw − Mz − e z_0 = q
With z_0 = max(−q_i), w = q + e z_0 ≥ 0 and z = 0: basic feasible solution.
I Evolution of the basis similar to the simplex method.
I Out of a pair of w and z variables, only one can be there in any basis.
I At every step, one variable is driven out of the basis and its partner called in.
I The step driving out z_0 flags termination.

Points to note
I Fundamental issues and general perspective of the linear programming problem
I The simplex method
I Quadratic programming
I The active set method
I Lemke's method via the linear complementarity problem
Necessary Exercises: 1,2,3,4,5
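For the computational side of the simplex-related ideas listed above, a minimal sketch using SciPy's linear programming routine (the toy problem data are illustrative):

import numpy as np
from scipy.optimize import linprog

# Maximize 3*x1 + 2*x2 subject to x1 + x2 <= 4, x1 + 3*x2 <= 6, x >= 0.
# linprog minimizes, so the objective is negated.
res = linprog(c=[-3, -2],
              A_ub=[[1, 1], [1, 3]],
              b_ub=[4, 6],
              bounds=[(0, None), (0, None)])
print(res.x, -res.fun)   # optimum sits at a vertex of the feasible polytope

The solver reports the optimal basic feasible solution, the same object the simplex method iterates over from vertex to vertex.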
    p(x) = a_0 + a_1 x + a_2 x^2 + ⋯ + a_n x^n.
    J = ∫_a^b f(x) dx
Divide [a, b] into n sub-intervals with
    a = x_0 < x_1 < x_2 < ⋯ < x_{n−1} < x_n = b,
where x_i − x_{i−1} = h = (b − a)/n.
    J ≈ Σ_{i=1}^n h f(x̄_i) = h[f(x̄_1) + f(x̄_2) + ⋯ + f(x̄_n)]
Taking x̄_i ∈ [x_{i−1}, x_i] as x_{i−1} and x_i, we get summations J_1 and J_2.
As n → ∞ (i.e. h → 0), if J_1 and J_2 approach the same limit, then function f(x) is integrable over interval [a, b].
A rectangular rule or a one-point rule.
Question: Which point to take as x̄_i?

Mid-point rule
Selecting x̄_i as x_i* = (x_{i−1} + x_i)/2,
    ∫_{x_{i−1}}^{x_i} f(x)dx ≈ h f(x_i*) and ∫_a^b f(x)dx ≈ h Σ_{i=1}^n f(x_i*).
Error analysis: From Taylor's series of f(x) about x_i*,
    ∫_{x_{i−1}}^{x_i} f(x)dx = ∫_{x_{i−1}}^{x_i} [f(x_i*) + f'(x_i*)(x − x_i*) + f''(x_i*)(x − x_i*)^2/2 + ⋯] dx
        = h f(x_i*) + (h^3/24) f''(x_i*) + (h^5/1920) f''''(x_i*) + ⋯,
third order accurate!
Over the entire domain [a, b],
    ∫_a^b f(x)dx ≈ h Σ_{i=1}^n f(x_i*) + (h^3/24) Σ_{i=1}^n f''(x_i*) = h Σ_{i=1}^n f(x_i*) + (h^2/24)(b − a) f''(ξ),
for ξ ∈ [a, b] (from mean value theorem): second order accurate.
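A minimal numerical check of the mid-point rule and its second order behaviour (the test integrand is illustrative):

import numpy as np

def midpoint(f, a, b, n):
    """Composite mid-point rule with n equal sub-intervals."""
    h = (b - a) / n
    x_mid = a + h * (np.arange(n) + 0.5)   # mid-points x_i*
    return h * np.sum(f(x_mid))

# Test on a known integral: int_0^1 exp(x) dx = e - 1.
exact = np.e - 1.0
for n in [10, 20, 40]:
    err = abs(midpoint(np.exp, 0.0, 1.0, n) - exact)
    print(n, err)    # error falls by about 4 when n doubles: O(h^2)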
Choose quadrature points x_1, x_2, ⋯, x_n so that φ(x) is orthogonal to all polynomials of degree less than n.
Legendre polynomial → Gauss-Legendre quadrature:
1. Choose P_n(x), Legendre polynomial of degree n, as φ(x).
2. Take its roots x_1, x_2, ⋯, x_n as the quadrature points.
3. Fit Lagrange polynomial of f(x), using these n points:
    p(x) = L_1(x)f(x_1) + L_2(x)f(x_2) + ⋯ + L_n(x)f(x_n)
4.  ∫_{−1}^1 f(x)dx ≈ ∫_{−1}^1 p(x)dx = Σ_{j=1}^n f(x_j) ∫_{−1}^1 L_j(x)dx
Weight values: w_j = ∫_{−1}^1 L_j(x)dx, for j = 1, 2, ⋯, n.

Weight functions in Gaussian quadrature
What is so great about exact integration of polynomials?
Demand something else: generalization.
Exact integration of polynomials times function W(x):
Given weight function W(x) and number (n) of quadrature points, work out the locations (x_j's) of the n points and the corresponding weights (w_j's), so that the integral
    ∫_a^b W(x)f(x)dx = Σ_{j=1}^n w_j f(x_j)
is exact for an arbitrary polynomial f(x) of degree up to (2n − 1).
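A minimal sketch of Gauss-Legendre quadrature in practice (NumPy supplies the nodes and weights; the test integrands are illustrative):

import numpy as np

# 5-point Gauss-Legendre rule: exact for polynomials up to degree 9.
x, w = np.polynomial.legendre.leggauss(5)   # roots of P_5 and the weights

f = lambda t: t**8                     # degree-8 polynomial test
print(np.sum(w * f(x)), 2.0 / 9.0)     # matches int_{-1}^{1} t^8 dt = 2/9

# A general interval [a, b] is handled by t = (a+b)/2 + ((b-a)/2)*x.
a, b = 0.0, 2.0
t = 0.5 * (a + b) + 0.5 * (b - a) * x
print(0.5 * (b - a) * np.sum(w * np.exp(t)), np.exp(2.0) - 1.0)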
A family of orthogonal polynomials with increasing degree: quadrature points are the roots of the n-th member of the family.
For different kinds of functions and different domains,
I Gauss-Chebyshev quadrature
I Gauss-Laguerre quadrature
I Gauss-Hermite quadrature
Several singular functions and infinite domains can be handled.
A very special case: for W(x) = 1, Gauss-Legendre quadrature!

    S = ∫_a^b ∫_{g_1(x)}^{g_2(x)} f(x, y) dy dx
    F(x) = ∫_{g_1(x)}^{g_2(x)} f(x, y) dy and S = ∫_a^b F(x)dx,
with complete flexibility of individual quadrature methods.
Double integral on rectangular domain
Two-dimensional version of Simpson's one-third rule:
    ∫_{−1}^1 ∫_{−1}^1 f(x, y)dx dy ≈ w_0 f(0, 0) + w_1[f(1, 0) + f(−1, 0) + f(0, 1) + f(0, −1)] + w_2[f(1, 1) + f(1, −1) + f(−1, 1) + f(−1, −1)]

Monte Carlo estimate over a domain of volume V with N sample points:
    I ≈ (V/N) Σ_{i=1}^N F(x_i)
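A minimal sketch of the (V/N) Σ F(x_i) estimate just stated (the integrand and domain are illustrative):

import numpy as np

rng = np.random.default_rng(0)

# Estimate I = integral over the unit square of exp(-(x^2 + y^2)).
N = 100_000
pts = rng.random((N, 2))                # uniform samples in [0,1]^2, V = 1
F = np.exp(-np.sum(pts**2, axis=1))
estimate = F.mean()                     # (V/N) * sum F(x_i) with V = 1
std_err = F.std() / np.sqrt(N)          # O(1/sqrt(N)) statistical error
print(estimate, "+/-", std_err)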
Numerical Solution of Ordinary Differential Equations
Outline
Single-Step Methods
Practical Implementation of Single-Step Methods
Systems of ODEs
Multi-Step Methods*

    dy/dx = f(x, y), y(x_0) = y_0
To determine: y(x) for x ∈ [a, b] with x_0 = a.
Numerical solution: Start from the point (x_0, y_0).
I y_1 = y(x_1) = y(x_0 + h) = ?
I Found (x_1, y_1). Repeat up to x = b.
Repetition of such steps constructs y(x).
First order truncated Taylor's series:
Figure: Euler steps from (x_0, y_0) through x_1, x_2, x_3 (schematic).
A typical IVP with an ODE system:
    dy/dx = f(x, y), y(x_0) = y_0
An n-th order ODE: convert into a system of first order ODEs.
Defining state vector z(x) = [y(x) y'(x) ⋯ y^(n−1)(x)]^T, work out dz/dx to form the state space equation.
Initial condition: z(x_0) = [y(x_0) y'(x_0) ⋯ y^(n−1)(x_0)]^T
A system of higher order ODEs with the highest order derivatives of orders n_1, n_2, n_3, ⋯, n_k:
I Cast into the state space form with the state vector of dimension n = n_1 + n_2 + n_3 + ⋯ + n_k

The resulting form of the ODEs: normal system of ODEs.
Example:
    d^2x/dt^2 − y^3 √(dy/dt) + 2x (dx/dt)(d^2y/dt^2) + 4 = 0,
    e^{xy} d^3y/dt^3 − y (d^2y/dt^2)^{3/2} + 2x + 1 = e^t
State vector: z(t) = [x dx/dt y dy/dt d^2y/dt^2]^T
With three trivial derivatives z_1'(t) = z_2, z_3'(t) = z_4 and z_4'(t) = z_5, and the other two obtained from the given ODEs, we get the state space equations as dz/dt = f(t, z).
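A minimal sketch of this reduction for a single higher order equation, y'' + 0.5y' + y = 0 (illustrative coefficients), integrated with SciPy:

import numpy as np
from scipy.integrate import solve_ivp

# y'' + 0.5*y' + y = 0  ->  z = [y, y'],  dz/dt = [z2, -0.5*z2 - z1]
def rhs(t, z):
    return [z[1], -0.5 * z[1] - z[0]]

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0],   # y(0) = 1, y'(0) = 0
                dense_output=True)
print(sol.y[0, -1])    # y(10): a decaying oscillation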
For Euler's method on y' = Jy, the error propagates as
    ε_{n+1} = (I + hJ) ε_n.
For an eigenvalue λ of J,
    |1 + hλ| < 1 ⇒ h < −2Re(λ)/|λ|^2.
Note: Same result for single ODE w' = λw, with complex λ.
For second order Runge-Kutta method,
    ε_{n+1} = (1 + hλ + h^2λ^2/2) ε_n.
Region of stability in the plane of z = hλ: |1 + z + z^2/2| < 1.
Figure: Stability regions of explicit methods (Euler and RK2) in the complex hλ-plane.
Question: What do these stability regions mean with reference to the system eigenvalues?
Question: How does the step size adaptation of RK4 operate on a system with eigenvalues on the left half of complex plane?
Step size adaptation tackles instability by its symptom!
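A minimal numerical illustration of these amplification factors (the eigenvalue and step sizes are illustrative):

import numpy as np

lam = -2.0 + 0.0j          # eigenvalue in the left half-plane
for h in [0.5, 0.9, 1.1]:  # Euler stable only for h < -2*Re(lam)/|lam|^2 = 1
    z = h * lam
    g_euler = abs(1 + z)               # Euler amplification factor
    g_rk2 = abs(1 + z + z**2 / 2)      # RK2 amplification factor
    print(h, g_euler < 1, g_rk2 < 1)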
Figure: Stability region in the complex hλ-plane (axes Re(hλ), Im(hλ)), with the stable and unstable zones marked.
Example: c = 3, k = 2: x = e^{−t} − e^{−2t}.
Figure: Computed solutions of the stiff problem, (c) with RK4 and (d) with implicit Euler.
Merits of shooting method
I Very few parameters to start
I In many cases, it is found quite efficient.

... nN (scalar) equations.
3. Assemble additional n equations from boundary conditions.
4. Starting from a guess solution over the grid, solve this system. (Sparse Jacobian is an advantage.)
Iterative schemes for solution of systems of linear equations.
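A minimal sketch of the shooting idea for the BVP y'' = −y, y(0) = 0, y(π/2) = 1 (an illustrative problem; SciPy assumed):

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Shoot on the single unknown parameter s = y'(0).
def end_value(s):
    sol = solve_ivp(lambda t, z: [z[1], -z[0]], (0.0, np.pi / 2), [0.0, s],
                    rtol=1e-10, atol=1e-12)
    return sol.y[0, -1] - 1.0        # mismatch at the far boundary

s_star = brentq(end_value, 0.0, 2.0) # root of the mismatch function
print(s_star)                        # exact solution y = sin(t) gives s = 1

The "very few parameters" merit is visible: only the starting slope is adjusted.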
Figure: The rectangle |x − x_0| ≤ h, |y − y_0| ≤ Mh around (x_0, y_0), used in the existence argument (schematic).
But, premises here are sufficient, not necessary! Result inconclusive.
The IVP has solutions: y(x) = 1 + cx for all values of c.
The solution is not unique.
Continuity and boundedness of P_1(x), P_2(x), ⋯, P_n(x) and R(x) guarantees well-posedness.
Multiple solutions or non-existence of solution is no surprise.
First Order Ordinary Differential Equations
Outline
Formation of Differential Equations and Their Solutions
Separation of Variables
ODEs with Rational Slope Functions
Some Special ODEs
Exact Differential Equations and Reduction to the Exact Form
First Order Linear (Leibnitz) ODE and Associated Forms
Orthogonal Trajectories
Modelling and Simulation

ODEs with Rational Slope Functions
    y' = f_1(x, y)/f_2(x, y)
If f_1 and f_2 are homogeneous functions of n-th degree, then substitution y = ux separates variables x and u:
    dy/dx = φ_1(y/x)/φ_2(y/x) ⇒ u + x du/dx = φ_1(u)/φ_2(u) ⇒ dx/x = φ_2(u) du / [φ_1(u) − u φ_2(u)]
For y' = (a_1x + b_1y + c_1)/(a_2x + b_2y + c_2), coordinate shift
    x = X + h, y = Y + k ⇒ y' = dy/dx = dY/dX
produces
    dY/dX = [a_1X + b_1Y + (a_1h + b_1k + c_1)] / [a_2X + b_2Y + (a_2h + b_2k + c_2)].
Choose h and k such that
    a_1h + b_1k + c_1 = 0 = a_2h + b_2k + c_2.
If the system is inconsistent, then substitute u = a_2x + b_2y.

Some Special ODEs
Clairaut's equation
    y = xy' + f(y')
Substitute p = y' and differentiate:
    p = p + x dp/dx + f'(p) dp/dx ⇒ [x + f'(p)] dp/dx = 0
dp/dx = 0 means y' = p = m (constant):
I family of straight lines y = mx + f(m) as general solution
Singular solution:
    x = −f'(p) and y = f(p) − p f'(p)
Singular solution is the envelope of the family of straight lines that constitute the general solution.
    y'' = dp/dx = (dp/dy)(dy/dx) = p dp/dy ⇒ f(y, p, p dp/dy) = 0.
Solve for p(y). Resulting equation solved through a quadrature as
    dy/dx = p(y) ⇒ x = x_0 + ∫ dy/p(y).

Solution: φ(x, y) = c
Working rule:
    φ_1(x, y) = ∫ M(x, y)dx + g_1(y) and φ_2(x, y) = ∫ N(x, y)dy + g_2(x)
Determine g_1(y) and g_2(x) from φ_1(x, y) = φ_2(x, y) = φ(x, y).
If ∂M/∂y ≠ ∂N/∂x, but ∂(FM)/∂y = ∂(FN)/∂x?
F: Integrating factor
    dy/dx = −1/f_1(x, y)
Solving this ODE, another family of curves ψ(x, y, k) = 0: orthogonal trajectories.
Necessary Exercises: 1,3,5,7
Second Order Linear Homogeneous ODEs
Outline
Introduction
Homogeneous Equations with Constant Coefficients
Euler-Cauchy Equation
Theory of the Homogeneous Equations
Basis for Solutions

Second order ODE: f(x, y, y', y'') = 0
Special case of a linear (non-homogeneous) ODE:
    y'' + P(x)y' + Q(x)y = R(x)
Non-homogeneous linear ODE with constant coefficients:
    y'' + ay' + by = R(x)
For R(x) = 0, linear homogeneous differential equation:
    y'' + P(x)y' + Q(x)y = 0 and y'' + ay' + by = 0
    y'' + ay' + by = 0
Assume y = e^{λx} ⇒ y' = λe^{λx} and y'' = λ^2 e^{λx}.
Substitution: (λ^2 + aλ + b)e^{λx} = 0
Auxiliary equation:
    λ^2 + aλ + b = 0
Solve for λ_1 and λ_2: solutions e^{λ_1 x} and e^{λ_2 x}.
Three cases
I Real and distinct (a^2 > 4b): λ_1 ≠ λ_2
    y(x) = c_1 y_1(x) + c_2 y_2(x) = c_1 e^{λ_1 x} + c_2 e^{λ_2 x}
I Real and equal (a^2 = 4b): λ_1 = λ_2 = λ = −a/2
    only solution in hand: y_1 = e^{λx}
    Method to develop another solution?
    Verify that y_2 = xe^{λx} is another solution.
    y(x) = c_1 y_1(x) + c_2 y_2(x) = (c_1 + c_2 x)e^{λx}
I Complex conjugate (a^2 < 4b): λ_{1,2} = −a/2 ± iω
    y(x) = c_1 e^{(−a/2 + iω)x} + c_2 e^{(−a/2 − iω)x}
        = e^{−ax/2}[c_1(cos ωx + i sin ωx) + c_2(cos ωx − i sin ωx)]
        = e^{−ax/2}[A cos ωx + B sin ωx],
    with A = c_1 + c_2, B = i(c_1 − c_2).
I A third form: y(x) = Ce^{−ax/2} cos(ωx − α)
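A minimal sketch that classifies the three cases numerically (the coefficients are illustrative):

import numpy as np

def classify(a, b):
    """Roots of the auxiliary equation s^2 + a*s + b = 0 and the case."""
    roots = np.roots([1.0, a, b])
    disc = a * a - 4.0 * b
    if disc > 0:
        case = "real and distinct: c1*exp(l1*x) + c2*exp(l2*x)"
    elif disc == 0:
        case = "real and equal: (c1 + c2*x)*exp(l*x)"
    else:
        case = "complex conjugate: exp(-a*x/2)*(A*cos(w*x) + B*sin(w*x))"
    return roots, case

print(classify(3.0, 2.0))   # lambda = -1, -2
print(classify(2.0, 1.0))   # lambda = -1 (double root)
print(classify(0.0, 4.0))   # lambda = +/- 2i, so w = 2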
    x^2 y'' + axy' + by = 0
Substituting y = x^k, auxiliary (or indicial) equation:
    k^2 + (a − 1)k + b = 0
1. Roots real and distinct [(a − 1)^2 > 4b]: k_1 ≠ k_2
    y(x) = c_1 x^{k_1} + c_2 x^{k_2}.
2. Roots real and equal [(a − 1)^2 = 4b]: k_1 = k_2 = k = (1 − a)/2.

    y'' + P(x)y' + Q(x)y = 0
Well-posedness of its IVP:
The initial value problem of the ODE, with arbitrary initial conditions y(x_0) = Y_0, y'(x_0) = Y_1, has a unique solution, as long as P(x) and Q(x) are continuous in the interval under question.
... is the unique solution of the IVP.
    u'' y_1 + 2u' y_1' + u y_1'' + P(u' y_1 + u y_1') + Q u y_1 = 0
Second Order Linear Non-Homogeneous ODEs
Outline
Linear ODEs and Their Solutions
Method of Undetermined Coefficients
Method of Variation of Parameters
The Complete Analogy
Closure

Procedure to solve y'' + P(x)y' + Q(x)y = R(x)
As y_1 and y_2 satisfy the associated HE,
    u_1' y_1' + u_2' y_2' = R(x)
    [ y_1   y_2  ] [u_1']   [ 0 ]
    [ y_1'  y_2' ] [u_2'] = [ R ]
Since Wronskian is non-zero, this system has unique solution
    u_1' = −y_2 R/W and u_2' = y_1 R/W.
Direct quadrature:
    u_1(x) = −∫ y_2(x)R(x)/W[y_1(x), y_2(x)] dx and u_2(x) = ∫ y_1(x)R(x)/W[y_1(x), y_2(x)] dx

Points to note
I Function space perspective of linear ODEs
I Method of undetermined coefficients
I Method of variation of parameters
Necessary Exercises: 1,3,5,6
Higher Order Linear ODEs
Outline
Theory of Linear ODEs
Homogeneous Equations with Constant Coefficients
Non-Homogeneous Equations
Euler-Cauchy Equation of Higher Order

    y^(n) + P_1(x)y^(n−1) + P_2(x)y^(n−2) + ⋯ + P_{n−1}(x)y' + P_n(x)y = R(x)
General solution: y(x) = y_h(x) + y_p(x), where
I y_p(x): a particular solution
I y_h(x): general solution of corresponding HE
    y^(n) + P_1(x)y^(n−1) + P_2(x)y^(n−2) + ⋯ + P_{n−1}(x)y' + P_n(x)y = 0
For the HE, suppose we have n solutions y_1(x), y_2(x), ⋯, y_n(x).
Assemble the state vectors in matrix
    Y(x) = [ y_1        y_2        ⋯  y_n
             y_1'       y_2'       ⋯  y_n'
             y_1''      y_2''      ⋯  y_n''
             ⋮          ⋮              ⋮
             y_1^(n−1)  y_2^(n−1)  ⋯  y_n^(n−1) ].
Wronskian:
    W(y_1, y_2, ⋯, y_n) = det[Y(x)]
Laplace Transforms
Outline
Introduction
Basic Properties and Results
Application to Differential Equations
Handling Discontinuities
Convolution
Advanced Issues

Classical perspective
I Entire differential equation is known in advance.
I Go for a complete solution first.
I Afterwards, use the initial (or other) conditions.
A practical situation
I You have a plant,
I intrinsic dynamic model as well as the starting conditions.
I You may drive the plant with different kinds of inputs on different occasions.

Another question: What if R(x) is not continuous?
I When power is switched on or off, what happens?
I If there is a sudden voltage fluctuation, what happens to the equipment connected to the power line?
Or, does anything happen in the immediate future?
Something certainly happens. The IVP has a solution!
Laplace transforms provide a tool to find the solution, in spite of the discontinuity of R(x).
Integral transform:
    F(s) = ∫_a^b K(s, t) f(t) dt
Basic Properties and Results
With kernel function K(s, t) = e^{−st}, and limits a = 0, b = ∞,
    F(s) = L{f(t)} = ∫_0^∞ e^{−st} f(t) dt.
Linearity:
    L{αf(t) + βg(t)} = αL{f(t)} + βL{g(t)}
When this integral exists, f(t) has its Laplace transform.
Sufficient condition:
I f(t) is piecewise continuous, and
I it is of exponential order, i.e. |f(t)| < Me^{ct} for some (finite) M and c.
Inverse Laplace transform:
    f(t) = L^{−1}{F(s)}

Laplace transforms of some elementary functions:
    L(1) = ∫_0^∞ e^{−st} dt = [−e^{−st}/s]_0^∞ = 1/s,
    L(t) = ∫_0^∞ e^{−st} t dt = [−t e^{−st}/s]_0^∞ + (1/s) ∫_0^∞ e^{−st} dt = 1/s^2,
    L(t^n) = n!/s^{n+1} (for positive integer n),
    L(t^a) = Γ(a + 1)/s^{a+1} (for a ∈ R^+),
and L(e^{at}) = 1/(s − a).
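These table entries can be checked symbolically; a minimal sketch with SymPy (assumed available):

import sympy as sp

t, s = sp.symbols('t s', positive=True)
a = sp.Symbol('a', positive=True)

for f in [1, t, t**3, sp.exp(a*t)]:
    F = sp.laplace_transform(f, t, s, noconds=True)
    print(f, '->', F)
# 1 -> 1/s, t -> 1/s**2, t**3 -> 6/s**4, exp(a*t) -> 1/(s - a)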
    f(t − a)u(t − a) = { 0 for t < a; f(t − a) for t > a }
has its Laplace transform as
    L{f(t − a)u(t − a)} = ∫_a^∞ e^{−st} f(t − a)dt = ∫_0^∞ e^{−s(a+τ)} f(τ)dτ = e^{−as} L{f(t)}.
Second shifting property or the time shifting rule.

Figure: Step and impulse functions: (a) unit step function u(t − a), (b) composition (1/k)[u(t − a) − u(t − a − k)], (c) function f_k, (d) Dirac's δ-function.
Note that its integral
    I_k = ∫_0^∞ f_k(t − a)dt = ∫_a^{a+k} (1/k) dt = 1
does not depend on k.
    L{δ(t − a)} = lim_{k→0} (1/k)[L{u(t − a)} − L{u(t − a − k)}]
                = lim_{k→0} (e^{−as} − e^{−(a+k)s})/(ks) = e^{−as}
Convolution
Through substitution t' = t − τ,
    H(s) = ∫_0^∞ f(τ) [∫_0^∞ e^{−s(t'+τ)} g(t') dt'] dτ
         = ∫_0^∞ f(τ) e^{−sτ} [∫_0^∞ e^{−st'} g(t') dt'] dτ
    ⇒ H(s) = F(s)G(s)
Convolution theorem:
Laplace transform of the convolution integral of two functions is given by the product of the Laplace transforms of the two functions.
Utilities:
I To invert Q(s)R(s), one can convolute y(t) = q(t) ∗ r(t).
I In solving some integral equations.

Points to note
I A paradigm shift in solution of IVPs
I Handling discontinuous input functions
I Extension to ODE systems
I The idea of integral transforms
Necessary Exercises: 1,2,4
ODE Systems
Outline
Fundamental Ideas
Linear Homogeneous Systems with Constant Coefficients
Linear Non-Homogeneous Systems
Nonlinear Systems

    y' = f(t, y)
Solution: a vector function y = h(t).
Autonomous system: y' = f(y)
I Points in y-space where f(y) = 0: equilibrium points or critical points
System of linear ODEs:
    y' = A(t)y + g(t)
Necessary Exercises: 1
Figure: Neighbourhood of critical points (including a saddle point: unstable).
Phase plane analysis
I Determine all the critical points.
I Linearize the ODE system around each of them as
    y' = J(y_0)(y − y_0).
I With z = y − y_0, analyze each neighbourhood from z' = Jz.
I Assemble outcomes of local phase plane analyses.
Features of a dynamic system are typically captured by its critical points and their neighbourhoods.
Limit cycles
I isolated closed trajectories (only in nonlinear systems)
Systems with arbitrary dimension of state space?

Important terms
Stability: If y_0 is a critical point of the dynamic system y' = f(y) and for every ε > 0 there exists δ > 0 such that
    ‖y(t_0) − y_0‖ < δ ⇒ ‖y(t) − y_0‖ < ε for all t > t_0,
then y_0 is a stable critical point. If, further, y(t) → y_0 as t → ∞, then y_0 is said to be asymptotically stable.
Positive definite function: A function V(y), with V(0) = 0, is called positive definite if
    V(y) > 0 for all y ≠ 0.
Lyapunov function: A positive definite function V(y), having continuous partial derivatives ∂V/∂y_i, with a negative semi-definite rate of change
    V' = [∇V(y)]^T f(y).
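A minimal sketch of the local linearization step (an illustrative system; the eigenvalues of the Jacobian classify each critical point):

import numpy as np

# Damped pendulum-like system: y1' = y2, y2' = -sin(y1) - 0.5*y2.
# Critical points at (k*pi, 0). Jacobian evaluated there:
def jacobian(y1):
    return np.array([[0.0, 1.0],
                     [-np.cos(y1), -0.5]])

for y1 in [0.0, np.pi]:
    eig = np.linalg.eigvals(jacobian(y1))
    print(y1, eig)
# At (0, 0): complex pair with negative real part -> stable spiral.
# At (pi, 0): real eigenvalues of opposite signs -> saddle point.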
I restricted in scope
Examples: y'' + y = 0 and 4x^2 y'' = y.
    y'' + P(x)y' + Q(x)y = 0
If P(x) and Q(x) are analytic at a point x = x_0, i.e. if they possess convergent series expansions in powers of (x − x_0) with some radius of convergence R, then the solution is analytic at x_0, and a power series solution
    y(x) = a_0 + a_1(x − x_0) + a_2(x − x_0)^2 + a_3(x − x_0)^3 + ⋯
is convergent at least for |x − x_0| < R.
For x_0 = 0 (without loss of generality), suppose
    P(x) = Σ_{n=0}^∞ p_n x^n = p_0 + p_1 x + p_2 x^2 + p_3 x^3 + ⋯,
    Q(x) = Σ_{n=0}^∞ q_n x^n = q_0 + q_1 x + q_2 x^2 + q_3 x^3 + ⋯.
Then
    y'(x) = Σ_{n=0}^∞ (n + 1)a_{n+1} x^n and y''(x) = Σ_{n=0}^∞ (n + 2)(n + 1)a_{n+2} x^n
leads to
    P(x)y' = [Σ_{n=0}^∞ p_n x^n][Σ_{n=0}^∞ (n + 1)a_{n+1} x^n] = Σ_{n=0}^∞ [Σ_{k=0}^n p_{n−k}(k + 1)a_{k+1}] x^n,
    Q(x)y = [Σ_{n=0}^∞ q_n x^n][Σ_{n=0}^∞ a_n x^n] = Σ_{n=0}^∞ [Σ_{k=0}^n q_{n−k} a_k] x^n,
    Σ_{n=0}^∞ {(n + 2)(n + 1)a_{n+2} + Σ_{k=0}^n [p_{n−k}(k + 1)a_{k+1} + q_{n−k} a_k]} x^n = 0.
Recursion formula:
    a_{n+2} = −[1/((n + 2)(n + 1))] Σ_{k=0}^n [p_{n−k}(k + 1)a_{k+1} + q_{n−k} a_k]
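A minimal sketch of this recursion for y'' + y = 0 (so p_n = 0, q_0 = 1, q_n = 0 for n ≥ 1); with a_0 = 1, a_1 = 0 it reproduces the series of cos x:

from math import factorial

# For y'' + y = 0 the recursion collapses to a_{n+2} = -a_n / ((n+2)*(n+1)).
a = [1.0, 0.0]                       # a0 = y(0), a1 = y'(0)
for n in range(10):
    a.append(-a[n] / ((n + 2) * (n + 1)))

# Compare with cos x = sum over k of (-1)^k x^(2k) / (2k)!
print(a[:6])
print([(-1)**k / factorial(2*k) for k in range(3)])   # 1, -1/2, 1/24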
... in which b(x) and c(x) are analytic at the origin.
Note: The need is to develop two solutions.
Choosing a_k = (2k − 1)(2k − 3)⋯3·1 / k!,
    P_k(x) = [(2k − 1)(2k − 3)⋯3·1 / k!] [x^k − (k(k − 1)/(2(2k − 1))) x^{k−2} + (k(k − 1)(k − 2)(k − 3)/(2·4(2k − 1)(2k − 3))) x^{k−4} − ⋯].
    P_0(x) = 1,
    P_1(x) = x,
    P_2(x) = (3x^2 − 1)/2,
    P_3(x) = (5x^3 − 3x)/2,
    P_4(x) = (35x^4 − 30x^2 + 3)/8 etc.
Figure: Legendre polynomials P_0(x), P_1(x), P_2(x), P_3(x) over [−1, 1].
All roots of a Legendre polynomial are real and they lie in [−1, 1].
Orthogonality?
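A minimal numerical check of the orthogonality just asked about (NumPy's Legendre module assumed):

import numpy as np
from numpy.polynomial import legendre as L

# Gauss-Legendre nodes/weights integrate these products exactly.
x, w = L.leggauss(10)

def inner(m, n):
    """int_{-1}^{1} P_m(x) P_n(x) dx via Gauss-Legendre quadrature."""
    Pm = L.legval(x, [0]*m + [1])
    Pn = L.legval(x, [0]*n + [1])
    return np.sum(w * Pm * Pn)

print(inner(2, 3))            # ~0: orthogonal
print(inner(3, 3), 2/7.0)     # norm squared = 2/(2n+1) = 2/7 for n = 3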
    a_n = −a_{n−2}/(n(n + 2r)) for n ≥ 2.
Odd coefficients are zero and
    a_2 = −a_0/(2(2k + 2)), a_4 = a_0/(2·4(2k + 2)(2k + 4)), etc.
When k is not an integer, J_{−k}(x) completes the basis.
For integer k, J_{−k}(x) = (−1)^k J_k(x): linearly dependent!
Reduction of order can be used to find another solution:
Bessel function of the second kind or Neumann function.
    d/dx [F(x)y' + G(x)y] = F(x)y'' + [F'(x) + G(x)]y' + G'(x)y,
    F'(x) + G(x) = F(x)P(x) and G'(x) = F(x)Q(x).
Elimination of G(x):
    F''(x) − P(x)F'(x) + [Q(x) − P'(x)]F(x) = 0
This is the adjoint of the original ODE.
Comparing with φ'' + P_2 φ' + Q_2 φ = 0, where P_2 = −P_1 = P and
    Q_2 = Q_1 − P_1' = (Q − P') − (−P') = Q:
The adjoint of the adjoint of a second order linear homogeneous equation is the original equation itself.
I When is an ODE its own adjoint?
I y'' + P(x)y' + Q(x)y = 0 is self-adjoint only in the trivial case of P(x) = 0.
I What about F(x)y'' + F(x)P(x)y' + F(x)Q(x)y = 0?
Second order self-adjoint ODE
Question: What is the adjoint of Fy'' + FPy' + FQy = 0?
Rephrased question: What is the ODE that φ(x) has to satisfy if
    φ[Fy'' + FPy' + FQy] = d/dx [φFy' + ψ(x)y]?
Comparing terms,
    d/dx (Fφ) + ψ(x) = φFP and ψ'(x) = φFQ.
Eliminating ψ(x), we have d^2/dx^2 (Fφ) + φFQ = d/dx (φFP), i.e.
    Fφ'' + 2F'φ' + F''φ + FQφ = FPφ' + (FP)'φ
    Fφ'' + (2F' − FP)φ' + [F'' − (FP)' + FQ]φ = 0
This is the same as the original ODE, when F'(x) = F(x)P(x).

Casting a given ODE into the self-adjoint form:
Equation y'' + P(x)y' + Q(x)y = 0 is converted to the self-adjoint form through the multiplication of F(x) = e^{∫P(x)dx}.
General form of self-adjoint equations:
    d/dx [F(x)y'] + R(x)y = 0
Working rules:
I To determine whether a given ODE is in the self-adjoint form, check whether the coefficient of y' is the derivative of the coefficient of y''.
I To convert an ODE into the self-adjoint form, first obtain the equation in normal form by dividing with the coefficient of y''. If the coefficient of y' now is P(x), then next multiply the resulting equation with e^{∫P dx}.
Regular S-L problem:
    a_1 y(a) + a_2 y'(a) = 0 and b_1 y(b) + b_2 y'(b) = 0,
vectors [a_1 a_2]^T and [b_1 b_2]^T being non-zero.
Periodic S-L problem: With r(a) = r(b),
    y(a) = y(b) and y'(a) = y'(b).
Singular S-L problem: If r(a) = 0, no boundary condition is needed at x = a. If r(b) = 0, no boundary condition is needed at x = b. (We just look for bounded solutions over [a, b].)

... i.e. they are orthogonal with respect to the weight function p(x).
From the hypothesis,
    (r y_m')' + (q + λ_m p)y_m = 0 ⇒ (q + λ_m p)y_m y_n = −(r y_m')' y_n,
    (r y_n')' + (q + λ_n p)y_n = 0 ⇒ (q + λ_n p)y_m y_n = −(r y_n')' y_m.
Subtracting,
    (λ_m − λ_n) p y_m y_n = (r y_n')' y_m + (r y_n') y_m' − (r y_m')' y_n − (r y_m') y_n' = [r(y_n' y_m − y_m' y_n)]'.
    E = ‖f‖^2 − Σ_{n=0}^N c_n^2 ≥ 0.
Bessel's inequality:
    Σ_{n=0}^N c_n^2 ≤ ‖f‖^2 = ∫_a^b p(x)f^2(x)dx
Partial sum
    s_k(x) = Σ_{m=0}^k a_m φ_m(x)
Question: Does the sequence of {s_k} converge?
Answer: The bound in Bessel's inequality ensures convergence.
Does it converge to f? Answer: Depends on the basis used.
Convergence in the mean or mean-square convergence:
An orthonormal set of functions {φ_k(x)} on an interval a ≤ x ≤ b is said to be complete in a class of functions, or to form a basis for it, if the corresponding generalized Fourier series for a function converges in the mean to the function, for every function belonging to that class.
Parseval's identity: Σ_{n=0}^∞ c_n^2 = ‖f‖^2
Eigenfunction expansion: generalized Fourier series in terms of eigenfunctions of a Sturm-Liouville problem
I convergent for continuous functions with piecewise continuous derivatives, i.e. they form a basis for this class.
Multiplying the Fourier series with f(x),
    f^2(x) = a_0 f(x) + Σ_{n=1}^∞ [a_n f(x) cos(nπx/L) + b_n f(x) sin(nπx/L)]
Parseval's identity:
    a_0^2 + (1/2) Σ_{n=1}^∞ (a_n^2 + b_n^2) = (1/2L) ∫_{−L}^L f^2(x)dx
The Fourier series representation is complete.
I A periodic function f(x) is composed of its mean value and several sinusoidal components, or harmonics.
I Fourier coefficients are corresponding amplitudes.
I Parseval's identity is simply a statement on energy balance!
Bessel's inequality:
    a_0^2 + (1/2) Σ_{n=1}^N (a_n^2 + b_n^2) ≤ (1/2L) ‖f(x)‖^2

Original spirit of Fourier series:
I representation of periodic functions over (−∞, ∞).
Question: What about a function f(x) defined only on [−L, L]?
Answer: Extend the function as
    F(x) = f(x) for −L ≤ x ≤ L, and F(x + 2L) = F(x).
Fourier series of F(x) acts as the Fourier series representation of f(x) in its own domain.
In Euler formulae, notice that b_m = 0 for an even function.
The Fourier series of an even function is a Fourier cosine series
    f(x) = a_0 + Σ_{n=1}^∞ a_n cos(nπx/L),
where a_0 = (1/L) ∫_0^L f(x)dx and a_n = (2/L) ∫_0^L f(x) cos(nπx/L) dx.
Similarly, for an odd function, Fourier sine series.
Over [0, L], sometimes we need a series of sine terms only, or cosine terms only!
Half-range expansions
I For Fourier cosine series of a function f(x) over [0, L], even periodic extension:
    f_c(x) = { f(x) for 0 ≤ x ≤ L; f(−x) for −L ≤ x < 0 }, and f_c(x + 2L) = f_c(x)
I For Fourier sine series of a function f(x) over [0, L], odd periodic extension:
    f_s(x) = { f(x) for 0 ≤ x ≤ L; −f(−x) for −L ≤ x < 0 }, and f_s(x + 2L) = f_s(x)
Figure: Periodic extensions for cosine and sine series: (a) function over (0, L), (b) even periodic extension, (c) odd periodic extension.
To develop the Fourier series of a function, which is available as a set of tabulated values or a black-box library routine, integrals in the Euler formulae are evaluated numerically.
Important: Fourier series representation is richer and more powerful compared to interpolatory or least square approximation in many contexts.
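A minimal sketch of evaluating the Euler formulae numerically for such tabulated data (the sampled function is illustrative; trapezoidal quadrature assumed adequate):

import numpy as np
from scipy.integrate import trapezoid

L = 1.0
x = np.linspace(-L, L, 2001)
f = x**2                       # stand-in for tabulated or black-box values

a0 = trapezoid(f, x) / (2*L)
a = [trapezoid(f * np.cos(n*np.pi*x/L), x) / L for n in (1, 2, 3)]
b = [trapezoid(f * np.sin(n*np.pi*x/L), x) / L for n in (1, 2, 3)]
print(a0, a, b)   # a0 = 1/3, a_n = 4*(-1)^n/(n*pi)^2, b_n = 0 (even function)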
Question: How to apply the idea of Fourier series to a non-periodic function over an infinite domain?
Answer: Magnify a single period to an infinite length.
Fourier series of function f_L(x) of period 2L:
    f_L(x) = a_0 + Σ_{n=1}^∞ (a_n cos p_n x + b_n sin p_n x),
where p_n = nπ/L is the frequency of the n-th harmonic.
Inserting the expressions for the Fourier coefficients,
    f_L(x) = (1/2L) ∫_{−L}^L f_L(x)dx + (1/π) Σ_{n=1}^∞ [cos p_n x ∫_{−L}^L f_L(v) cos p_n v dv + sin p_n x ∫_{−L}^L f_L(v) sin p_n v dv] Δp,
where Δp = p_{n+1} − p_n = π/L.
In the limit (if it exists), as L → ∞, Δp → 0,
    f(x) = (1/π) ∫_0^∞ [cos px ∫_{−∞}^∞ f(v) cos pv dv + sin px ∫_{−∞}^∞ f(v) sin pv dv] dp.
Fourier integral of f(x):
    f(x) = ∫_0^∞ [A(p) cos px + B(p) sin px] dp,
where amplitude functions
    A(p) = (1/π) ∫_{−∞}^∞ f(v) cos pv dv and B(p) = (1/π) ∫_{−∞}^∞ f(v) sin pv dv
are defined for a continuous frequency variable p.
In phase angle form,
    f(x) = (1/π) ∫_0^∞ ∫_{−∞}^∞ f(v) cos p(x − v) dv dp.
Conjugate of the Fourier transform (with * denoting complex conjugation):
    f̂*(w) = (1/√(2π)) ∫_{−∞}^∞ f*(t) e^{iwt} dt
Inner product of f̂(w) and ĝ(w):
    ∫_{−∞}^∞ f̂(w) ĝ*(w) dw = ∫_{−∞}^∞ [(1/√(2π)) ∫_{−∞}^∞ f(t) e^{−iwt} dt] ĝ*(w) dw
        = ∫_{−∞}^∞ f(t) [(1/√(2π)) ∫_{−∞}^∞ ĝ(w) e^{iwt} dw]* dt = ∫_{−∞}^∞ f(t) g*(t) dt.
Parseval's identity: For g(t) = f(t) in the above,
    ∫_{−∞}^∞ ‖f̂(w)‖^2 dw = ∫_{−∞}^∞ ‖f(t)‖^2 dt,
equating the total energy content of the frequency spectrum of a wave or a signal to the total energy flow over time.

Consider a signal f(t) from actual measurement or sampling.
We want to analyze its amplitude spectrum (versus frequency).
For the FT, how to evaluate the integral over (−∞, ∞)?
Windowing: Sample the signal f(t) over a finite interval.
A window function:
    g(t) = { 1 for a ≤ t ≤ b; 0 otherwise }
Actual processing takes place on the windowed function f(t)g(t).
Next question: Do we need to evaluate the amplitude for all w ∈ (−∞, ∞)?
Most useful signals are particularly rich only in their own characteristic frequency bands.
Decide on an expected frequency band, say [−w_c, w_c].
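A minimal sketch of sampling over a finite window and estimating the amplitude spectrum (the two-tone test signal is illustrative; NumPy's FFT assumed):

import numpy as np

fs = 100.0                                  # sampling rate (Hz)
t = np.arange(0, 2.0, 1/fs)                 # finite window: 2 seconds
f = np.sin(2*np.pi*5*t) + 0.5*np.sin(2*np.pi*12*t)

spectrum = np.abs(np.fft.rfft(f)) / len(t) * 2   # one-sided amplitudes
freq = np.fft.rfftfreq(len(t), 1/fs)

for k in np.argsort(spectrum)[-2:]:
    print(freq[k], spectrum[k])             # peaks near 12 Hz (0.5) and 5 Hz (1.0)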
Minimax Approximation*
Outline
Approximation with Chebyshev polynomials
Minimax Polynomial Approximation

Chebyshev polynomials:
Polynomial solutions of the singular Sturm-Liouville problem
    (1 − x^2)y'' − xy' + n^2 y = 0 or [√(1 − x^2) y']' + (n^2/√(1 − x^2)) y = 0
over −1 ≤ x ≤ 1, with T_n(1) = 1 for all n.
Closed-form expressions:
    T_n(x) = cos(n cos^{−1} x), or,
    T_n(x) = cosh(n cosh^{−1} x) for |x| > 1.
Immediate observations
I Coefficients in a Chebyshev polynomial are integers. In particular, the leading coefficient of T_n(x) is 2^{n−1}.
I For even n, T_n(x) is an even function, while for odd n it is an odd function.
I Zeros of a Chebyshev polynomial T_n(x) are real and lie inside the interval (−1, 1). Further, zeros of T_n(x) are interlaced by those of T_{n+1}(x).
I Extrema of T_n(x) are of magnitude equal to unity, alternate in sign and occur at x = cos(kπ/n) for k = 0, 1, 2, 3, ⋯, n.
I Orthogonality and norms:
    ∫_{−1}^1 T_m(x)T_n(x)/√(1 − x^2) dx = { 0 if m ≠ n; π/2 if m = n ≠ 0; π if m = n = 0 }.
Figure: Extrema and zeros of T_3(x). Figure: Contrast: P_8(x) and T_8(x).
Being cosines and polynomials at the same time, Chebyshev polynomials possess a wide variety of interesting properties!
Most striking property:
equal-ripple oscillations, leading to minimax property.
Minimax property
Theorem: Among all polynomials p_n(x) of degree n > 0 with the leading coefficient equal to unity, 2^{1−n}T_n(x) deviates least from zero in [−1, 1]. That is,
    max_{−1≤x≤1} |p_n(x)| ≥ max_{−1≤x≤1} |2^{1−n}T_n(x)| = 2^{1−n}.
If there exists a monic polynomial p_n(x) of degree n such that
    max_{−1≤x≤1} |p_n(x)| < 2^{1−n}, ...

Chebyshev series
    f(x) = a_0 T_0(x) + a_1 T_1(x) + a_2 T_2(x) + a_3 T_3(x) + ⋯
with coefficients
    a_0 = (1/π) ∫_{−1}^1 f(x)T_0(x)/√(1 − x^2) dx and a_n = (2/π) ∫_{−1}^1 f(x)T_n(x)/√(1 − x^2) dx for n = 1, 2, 3, ⋯
A truncated series Σ_{k=0}^n a_k T_k(x): ...
For approximating f(t) over [a, b], scale the variable as
    t = (a + b)/2 + ((b − a)/2) x, with x ∈ [−1, 1].
Remark: The economized series Σ_{k=0}^n a_k T_k(x) gives minimax deviation of the leading error term a_{n+1}T_{n+1}(x).
Assuming a_{n+1}T_{n+1}(x) to be the error, at the zeros of T_{n+1}(x), the error will be officially zero, i.e.
    Σ_{k=0}^n a_k T_k(x_j) = f(t(x_j)),
where x_0, x_1, x_2, ⋯, x_n are the roots of T_{n+1}(x).
Recall: Values of an n-th degree polynomial at n + 1 points uniquely fix the entire polynomial.
Interpolation of these n + 1 values leads to the same polynomial!
Chebyshev-Lagrange approximation

Situations in which minimax approximation is desirable:
I Develop the approximation once and keep it for use in future.
Requirement: Uniform quality control over the entire domain.
Minimax approximation:
deviation limited by the constant amplitude of ripple.
Chebyshev's minimax theorem
Theorem: Of all polynomials of degree up to n, p(x) is the minimax polynomial approximation of f(x), i.e. it minimizes
    max_{a≤x≤b} |f(x) − p(x)|,
if and only if there are n + 2 points
    a ≤ x_1 < x_2 < x_3 < ⋯ < x_{n+2} ≤ b,
where the difference f(x) − p(x) takes its extreme values of the same magnitude and alternating signs.
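A minimal sketch of the Chebyshev-Lagrange idea: interpolate f at the roots of T_{n+1} and observe the nearly equal-ripple error (NumPy's Chebyshev module assumed; f = exp is illustrative):

import numpy as np
from numpy.polynomial import chebyshev as C

f = np.exp
n = 5
nodes = np.cos((2*np.arange(n + 1) + 1) * np.pi / (2*(n + 1)))  # roots of T_{n+1}

# Degree-n Chebyshev interpolant through the n+1 Chebyshev nodes.
p = C.Chebyshev.fit(nodes, f(nodes), n, domain=[-1, 1])

x = np.linspace(-1, 1, 2001)
err = f(x) - p(x)
print(err.max(), err.min())   # extreme deviations of nearly equal magnitude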
Utilize any gap to reduce the deviation at the other extrema with values at the bound.
Figure: Deviation f(x) − p(x) over [a, b], oscillating between bounds ±d/2.

Points to note
I Unique features of Chebyshev polynomials
I The equal-ripple and minimax properties
I Chebyshev series and Chebyshev-Lagrange approximation
I Fundamental ideas of general minimax approximation
Partial Differential Equations
Outline
Introduction
Hyperbolic Equations
Parabolic Equations
Elliptic Equations
Two-Dimensional Wave Equation

Quasi-linear second order PDEs:
    a ∂^2u/∂x^2 + 2b ∂^2u/∂x∂y + c ∂^2u/∂y^2 = F(x, y, u, u_x, u_y)

Initial and boundary conditions
Time and space variables are qualitatively different.
I Conditions in time: typically initial conditions. For second order PDEs, u and u_t over the entire space domain: Cauchy conditions.
I Time is a single variable and is decoupled from the space variables.
I Conditions in space: typically boundary conditions. For u(t, x, y), boundary conditions over the entire curve in the x-y plane that encloses the domain. For second order PDEs,
    I Dirichlet condition: value of the function
    I Neumann condition: derivative normal to the boundary
    I Mixed (Robin) condition
Dirichlet, Neumann and Cauchy problems

Method of separation of variables
For u(x, y), propose a solution in the form
    u(x, y) = X(x)Y(y)
and substitute
    u_x = X'Y, u_y = XY', u_xx = X''Y, u_xy = X'Y', u_yy = XY''
to cast the equation into the form
    φ(x, X, X', X'') = ψ(y, Y, Y', Y'').
If the manoeuvre succeeds then, x and y being independent variables, it implies
    φ(x, X, X', X'') = ψ(y, Y, Y', Y'') = k.
Nature of the separation constant k is decided based on the context; resulting ODEs are solved in consistency with the boundary conditions and assembled to construct u(x, y).
    ∂^2u/∂t^2 = c^2 ∂^2u/∂x^2
one-dimensional wave equation
Figure: Transverse vibration of a stretched string.
Small deflection and slope: cos α ≈ 1, sin α ≈ α ≈ tan α.
Horizontal (longitudinal) forces on PQ balance.
From Newton's second law, for vertical (transverse) deflection u(x, t):
    T sin(α + Δα) − T sin α = ρ Δx ∂^2u/∂t^2
Boundary conditions (in this case): u(0, t) = u(L, t) = 0
Initial configuration and initial velocity:
    u(x, 0) = f(x) and u_t(x, 0) = g(x)
Cauchy problem: Determine u(x, t) for 0 ≤ x ≤ L, t ≥ 0.
For
    a ∂^2u/∂x^2 + 2b ∂^2u/∂x∂y + c ∂^2u/∂y^2 = F(x, y, u, u_x, u_y),
roots of am^2 + 2bm + c are
    m_{1,2} = (−b ± √(b^2 − ac))/a,
real and distinct in the hyperbolic case. Coordinate transformation
    ξ = y + m_1 x, η = y + m_2 x
leads to U_ξη = φ(ξ, η, U, U_ξ, U_η).
For the BVP
    u_tt = c^2 u_xx, u(0, t) = u(L, t) = 0, u(x, 0) = f(x), u_t(x, 0) = g(x),
canonical coordinate transformation:
    ξ = x − ct, η = x + ct, with x = (ξ + η)/2, t = (η − ξ)/(2c).

    u_x = U_ξ ξ_x + U_η η_x = U_ξ + U_η ⇒ u_xx = U_ξξ + 2U_ξη + U_ηη
    u_t = U_ξ ξ_t + U_η η_t = −cU_ξ + cU_η ⇒ u_tt = c^2 U_ξξ − 2c^2 U_ξη + c^2 U_ηη
into the PDE u_tt = c^2 u_xx gives
    c^2(U_ξξ − 2U_ξη + U_ηη) = c^2(U_ξξ + 2U_ξη + U_ηη).
Canonical form: U_ξη = 0
Integration:
    U_ξ = ∫ U_ξη dη + φ(ξ) = φ(ξ)
    U(ξ, η) = ∫ φ(ξ)dξ + f_2(η) = f_1(ξ) + f_2(η)
D'Alembert's solution: u(x, t) = f_1(x − ct) + f_2(x + ct)
    ∇^2 u = f(x, y)
Poisson's equation
Separation of variables impossible!
Consider function u(x, y) as
    u(x, y) = u_h(x, y) + u_p(x, y)
Sequence of steps
I one particular solution u_p(x, y) that may or may not satisfy some or all of the boundary conditions
I solution of the corresponding homogeneous equation, namely u_xx + u_yy = 0, for u_h(x, y)
I such that u = u_h + u_p satisfies all the boundary conditions

    ∂^2u/∂t^2 = c^2 (∂^2u/∂x^2 + ∂^2u/∂y^2)
A Cauchy problem of the membrane:
    u_tt = c^2(u_xx + u_yy); u(x, y, 0) = f(x, y), u_t(x, y, 0) = g(x, y);
    u(0, y, t) = u(a, y, t) = u(x, 0, t) = u(x, b, t) = 0.
Separate the time variable from the space variables:
    u(x, y, t) = F(x, y)T(t) ⇒ (F_xx + F_yy)/F = T''/(c^2 T) = −λ^2
Helmholtz equation:
    F_xx + F_yy + λ^2 F = 0
    X''/X = −(Y'' + λ^2 Y)/Y = −μ^2
    ⇒ X'' + μ^2 X = 0 and Y'' + ν^2 Y = 0,
such that λ = √(μ^2 + ν^2).
With BCs X(0) = X(a) = 0 and Y(0) = Y(b) = 0,
    X_m(x) = sin(mπx/a) and Y_n(y) = sin(nπy/b).
Corresponding values of λ are
    λ_mn = π √((m/a)^2 + (n/b)^2),
with solutions of T'' + c^2 λ^2 T = 0 as
    T_mn(t) = A_mn cos(cλ_mn t) + B_mn sin(cλ_mn t).
    u(x, y, t) = Σ_{m=1}^∞ Σ_{n=1}^∞ [A_mn cos(cλ_mn t) + B_mn sin(cλ_mn t)] sin(mπx/a) sin(nπy/b),
coefficients being determined from the double Fourier series
    f(x, y) = Σ_{m=1}^∞ Σ_{n=1}^∞ A_mn sin(mπx/a) sin(nπy/b)
and
    g(x, y) = Σ_{m=1}^∞ Σ_{n=1}^∞ cλ_mn B_mn sin(mπx/a) sin(nπy/b).

BVPs modelled in polar coordinates
For domains of circular symmetry, important in many practical systems, the BVP is conveniently modelled in polar coordinates, the separation of variables quite often producing
I Bessel's equation, in cylindrical coordinates, and
I Legendre's equation, in spherical coordinates.
Crucial difference from real functions: z can approach z_0 in all possible manners in the complex plane.
Definition of the limit is more restrictive.
Continuity: lim_{z→z_0} f(z) = f(z_0)
Continuity in a domain D: continuity at every point in D.
Points to be settled later:
I Derivative of an analytic function is also analytic.
I An analytic function possesses derivatives of all orders.
A great qualitative difference between functions of a real variable and those of a complex variable!
Function: mapping of elements in domain to their images in range.
Depiction of a complex variable requires a plane with two axes.
Mapping of a complex function w = f(z) is shown in two planes.
Example: mapping of a rectangle under transformation w = e^z.
Figure: Mapping corresponding to function w = e^z: (a) the z-plane, (b) the w-plane.

Conformal mapping: a mapping that preserves the angle between any two directions in magnitude and sense.
Verify: w = e^z defines a conformal mapping.
Through relative orientations of curves at the points of intersection, local shape of a figure is preserved.
Take curve z(t), z(0) = z_0 and image w(t) = f[z(t)], w_0 = f(z_0).
For analytic f(z), w'(0) = f'(z_0) z'(0), implying
    |w'(0)| = |f'(z_0)| |z'(0)| and arg w'(0) = arg f'(z_0) + arg z'(0).
For several curves through z_0, image curves pass through w_0 and all of them turn by the same angle arg f'(z_0).
Cautions
I f'(z) varies from point to point. Different scaling and turning effects take place at different points. Global shape changes.
I For f'(z) = 0, argument is undefined and conformality is lost.
An analytic function defines a conformal mapping except at its critical points where its derivative vanishes.
Except at critical points, an analytic function is invertible.
We can establish an inverse of any conformal mapping.
Examples
I Linear function w = az + b (for a ≠ 0)
I Linear fractional transformation
    w = (az + b)/(cz + d), ad − bc ≠ 0
I Other elementary functions like z^n, e^z etc.
Riemann mapping theorem: Let D be a simply connected domain in the z-plane bounded by a closed curve C. Then there exists a conformal mapping that gives a one-to-one correspondence between D and the unit disc |w| < 1 as well as between C and the unit circle |w| = 1, bounding the unit disc.
Application to boundary value problems
I First, establish a conformal mapping between the given domain and a domain of simple geometry.
I Next, solve the BVP in this simple domain.
I Finally, using the inverse of the conformal mapping, construct the solution for the given domain.
Special significance of conformal mappings:
A harmonic function φ(u, v) in the w-plane is also a harmonic function, in the form φ(x, y) in the z-plane, as long as the two planes are related through a conformal mapping.
Example: Dirichlet problem with Poisson's integral formula
    f(re^{iθ}) = (1/2π) ∫_0^{2π} (R^2 − r^2) f(Re^{iφ}) / [R^2 − 2Rr cos(θ − φ) + r^2] dφ
If f(z) is analytic in a simply connected domain D, then ∮_C f(z)dz = 0 for every simple closed curve C in D.
Importance of Goursat's contribution:
I continuity of f'(z) appears as consequence!
Consequence: Definition of the function
    F(z) = ∫_{z_0}^z f(ζ)dζ
What does the formulation suggest?
Indefinite integral
Question: Is F(z) analytic? Is F'(z) = f(z)?
    (F(z + Δz) − F(z))/Δz − f(z) = (1/Δz) ∫_z^{z+Δz} [f(ζ) − f(z)]dζ
f is continuous ⇒ for every ε, there exists δ such that |ζ − z| < δ ⇒ |f(ζ) − f(z)| < ε.
Choosing |Δz| < δ,
    |(F(z + Δz) − F(z))/Δz − f(z)| < (ε/|Δz|) ∫_z^{z+Δz} |dζ| = ε.
If f(z) is analytic in a simply connected domain D, then there exists an analytic function F(z) in D such that
    F'(z) = f(z) and ∫_{z_1}^{z_2} f(z)dz = F(z_2) − F(z_1).

Principle of deformation of paths
    ∫_{C_1} f(z)dz = ∫_{C_2} f(z)dz = ∫_{C_3} f(z)dz.
Not so for path C*.
Figure: Path deformation.
The line integral remains unaltered through a continuous deformation of the path of integration with fixed end-points, as long as the sweep of the deformation includes no point where the integrand is non-analytic.
Cauchy's theorem in multiply connected domain
Figure: Contour for multiply connected domain (outer boundary C, inner boundaries C_1, C_2, C_3 joined by cuts L_1, L_2, L_3).
    ∮_C f(z)dz − ∮_{C_1} f(z)dz − ∮_{C_2} f(z)dz − ∮_{C_3} f(z)dz = 0.
If f(z) is analytic in a region bounded by the contour C as the outer boundary and non-overlapping contours C_1, C_2, C_3, ⋯, C_n as inner boundaries, then
    ∮_C f(z)dz = Σ_{i=1}^n ∮_{C_i} f(z)dz.

f(z): analytic function in a simply connected domain D.
For z_0 ∈ D and simple closed curve C in D,
    ∮_C f(z)/(z − z_0) dz = 2πi f(z_0).
Consider C as a circle with centre at z_0 and radius δ, with no loss of generality (why?).
    ∮_C f(z)/(z − z_0) dz = f(z_0) ∮_C dz/(z − z_0) + ∮_C [f(z) − f(z_0)]/(z − z_0) dz
From continuity of f(z), for any ε, there exists δ such that
    |z − z_0| ≤ δ ⇒ |f(z) − f(z_0)| < ε and |[f(z) − f(z_0)]/(z − z_0)| < ε/δ.
From the M-L inequality, the magnitude of the second integral is then less than (ε/δ)(2πδ) = 2πε; since ε is arbitrary, the second integral vanishes.
    f(z) = (1/2πi) ∮_{C_1} f(w)dw/(w − z) − (1/2πi) ∮_{C_2} f(w)dw/(w − z).
Figure: The annulus, with contours C_1 and C_2 around z_0.
Organization of the series:
    1/(w − z) = 1/[(w − z_0)(1 − (z − z_0)/(w − z_0))]
    −1/(w − z) = 1/[(z − z_0)(1 − (w − z_0)/(z − z_0))]
Using the expression for the sum of a geometric series,
    1 + q + q^2 + ⋯ + q^{n−1} = (1 − q^n)/(1 − q) ⇒ 1/(1 − q) = 1 + q + q^2 + ⋯ + q^{n−1} + q^n/(1 − q).
We use q = (z − z_0)/(w − z_0) for the integral over C_1 and q = (w − z_0)/(z − z_0) over C_2.
Then
    1/(w − z) = 1/(w − z_0) + (z − z_0)/(w − z_0)^2 + ⋯ + (z − z_0)^{n−1}/(w − z_0)^n + [(z − z_0)/(w − z_0)]^n · 1/(w − z),
so that
    (1/2πi) ∮_{C_1} f(w)dw/(w − z) = a_0 + a_1(z − z_0) + ⋯ + a_{n−1}(z − z_0)^{n−1} + T_n,
with coefficients as required and
    T_n = (1/2πi) ∮_{C_1} [(z − z_0)/(w − z_0)]^n f(w)/(w − z) dw.
Similarly, with q = (w − z_0)/(z − z_0),
    −(1/2πi) ∮_{C_2} f(w)dw/(w − z) = a_{−1}(z − z_0)^{−1} + ⋯ + a_{−n}(z − z_0)^{−n} + T̃_n,
with appropriate coefficients and the remainder term
    T̃_n = (1/2πi) ∮_{C_2} [(w − z_0)/(z − z_0)]^n f(w)/(z − w) dw.
Remark: For actually developing Taylor's or Laurent's series of a function, algebraic manipulations of known facts are employed quite often, rather than evaluating so many contour integrals!
Implication: If f(z) has a zero in every neighbourhood around z_0, then it cannot be analytic at z_0, unless it is the zero function [i.e. f(z) = 0 everywhere].
Residue: Res_{z_0} f(z) = a_{−1} = (1/2πi) ∮_C f(z)dz
If f(z) has a pole (of order m) at z_0, then
    (z − z_0)^m f(z) = Σ_{n=−m}^∞ a_n (z − z_0)^{m+n}
is analytic at z_0, and
    d^{m−1}/dz^{m−1} [(z − z_0)^m f(z)] = Σ_{n=−1}^∞ [(m + n)!/(n + 1)!] a_n (z − z_0)^{n+1}
    Res_{z_0} f(z) = a_{−1} = [1/(m − 1)!] lim_{z→z_0} d^{m−1}/dz^{m−1} [(z − z_0)^m f(z)].
Residue theorem: If f(z) is analytic inside and on simple closed curve C, with singularities at z_1, z_2, z_3, ⋯, z_k inside C, then
    ∮_C f(z)dz = 2πi Σ_{i=1}^k Res_{z_i} f(z).

I Identify the required integral as a contour integral of a complex function, or a part thereof.
I If the domain of integration is infinite, then extend the contour infinitely, without enclosing new singularities.
Example:
    I = ∫_0^{2π} φ(cos θ, sin θ)dθ
With z = e^{iθ} and dz = iz dθ,
    I = ∮_C φ((z + 1/z)/2, (z − 1/z)/(2i)) dz/(iz) = ∮_C f(z)dz,
where C is the unit circle centred at the origin.
Denoting poles falling inside the unit circle C as p_j,
    I = 2πi Σ_j Res_{p_j} f(z).
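A minimal symbolic check of this recipe on I = ∫_0^{2π} dθ/(2 + cos θ) (SymPy assumed; the integrand is illustrative):

import sympy as sp

z = sp.symbols('z')
# With z = e^{i*theta}: cos(theta) = (z + 1/z)/2 and d(theta) = dz/(i*z), so
# f(z) = 1/(i*z*(2 + (z + 1/z)/2)) = 2/(i*(z**2 + 4*z + 1)).
f = 2 / (sp.I * (z**2 + 4*z + 1))

poles = sp.roots(z**2 + 4*z + 1, z)          # -2 + sqrt(3), -2 - sqrt(3)
inside = [p for p in poles if abs(p) < 1]    # only -2 + sqrt(3) lies in |z| < 1
I_val = 2 * sp.pi * sp.I * sum(sp.residue(f, z, p) for p in inside)
print(sp.simplify(I_val))                    # 2*sqrt(3)*pi/3 = 2*pi/sqrt(3)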
Consider contour C enclosing semi-circular region |z| ≤ R, y ≥ 0, large enough to enclose all singularities above the x-axis.
    ∮_C f(z)dz = ∫_{−R}^R f(x)dx + ∫_S f(z)dz
For finite M, |f(z)| < M/R^2 on S, so
    |∫_S f(z)dz| < (M/R^2) πR = πM/R.
Figure: The semi-circular contour (diameter along the x-axis, arc S of radius R, enclosing poles p_j).
    I = ∫_{−∞}^∞ f(x)dx = 2πi Σ_j Res_{p_j} f(z) as R → ∞.

    I = A(s) + iB(s) = ∫_{−∞}^∞ f(x)e^{isx} dx.
Similar to the previous case: as |e^{isz}| = |e^{isx}| |e^{−sy}| = |e^{−sy}| ≤ 1 for y ≥ 0, we have
    |∫_S f(z)e^{isz} dz| < (M/R^2) πR = πM/R,
which yields, as R → ∞,
    I = 2πi Σ_j Res_{p_j} [f(z)e^{isz}].
Variational Calculus*
Outline
Introduction
Euler's Equation
Direct Methods

Question: What are the variables of the problem?
Answer: The entire curve or function q(t).
Variational problem:
Optimization of a function of functions, i.e. a functional.
First order necessary condition: Functional is stationary with respect to arbitrary small variations in {q_j}. [Equivalent to vanishing of the gradient]
This gives equations for the stationary points.
Here, these equations are differential equations!
Examples of variational problems
Geodesic path: Minimize l = ∫_a^b ‖r'(t)‖ dt
Minimal surface of revolution: Minimize
    S = ∫ 2πy ds = 2π ∫_a^b y √(1 + y'^2) dx
The brachistochrone problem: To find the curve along which the descent is fastest.
    Minimize T = ∫ ds/v = ∫_a^b √((1 + y'^2)/(2gy)) dx
Fermat's principle: Light takes the fastest path.
    Minimize T = ∫_{u_1}^{u_2} √(x'^2 + y'^2 + z'^2)/c(x, y, z) du
Isoperimetric problem: Largest area in the plane enclosed by a closed curve of given perimeter. By extension, extremize a functional under one or more equality constraints.
Hamilton's principle of least action: Evolution of a dynamic system through the minimization of the action
    s = ∫_{t_1}^{t_2} L dt = ∫_{t_1}^{t_2} (K − P) dt

Euler's Equation
Find out a function y(x), that will make the functional
    I[y(x)] = ∫_{x_1}^{x_2} f[x, y(x), y'(x)] dx
stationary, with boundary conditions y(x_1) = y_1 and y(x_2) = y_2.
Consider variation δy(x) with δy(x_1) = δy(x_2) = 0 and consistent variation δy'(x).
    δI = ∫_{x_1}^{x_2} [(∂f/∂y) δy + (∂f/∂y') δy'] dx
Integration of the second term by parts:
    ∫_{x_1}^{x_2} (∂f/∂y') δy' dx = ∫_{x_1}^{x_2} (∂f/∂y') (d/dx)(δy) dx = [(∂f/∂y') δy]_{x_1}^{x_2} − ∫_{x_1}^{x_2} (d/dx)(∂f/∂y') δy dx
With δy(x_1) = δy(x_2) = 0, the first term vanishes identically, and
    δI = ∫_{x_1}^{x_2} [∂f/∂y − (d/dx)(∂f/∂y')] δy dx.
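A minimal symbolic sketch of the resulting Euler equation, using SymPy's euler_equations (the integrand f = y'^2/2 − y^2/2 is illustrative):

import sympy as sp
from sympy.calculus.euler import euler_equations

x = sp.Symbol('x')
y = sp.Function('y')

# f(x, y, y') = y'^2/2 - y^2/2: Euler's equation should give y'' + y = 0.
f = y(x).diff(x)**2 / 2 - y(x)**2 / 2
print(euler_equations(f, [y(x)], [x]))
# [Eq(-y(x) - Derivative(y(x), (x, 2)), 0)]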
Epilogue
Source for further information: http://home.iitk.ac.in/~dasgupta/MathBook
Destination for feedback: dasgupta@iitk.ac.in

Some general courses in immediate continuation
I Advanced Mathematical Methods
I Scientific Computing
I Advanced Numerical Analysis
I Optimization
I Advanced Differential Equations
I Partial Differential Equations
I Finite Element Methods

Some specialized courses in immediate continuation
I Linear Algebra and Matrix Theory
I Approximation Theory
I Variational Calculus and Optimal Control
I Advanced Mathematical Physics
I Geometric Modelling
I Computational Geometry
I Computer Graphics
I Signal Processing
I Image Processing